Meet Leanne Hurley: Cloud Expert of the Month – April 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Leanne Hurley.

After starting out at the front counter of a two-way radio shop in 1993, Leanne worked her way from face-to-face customer service, to billing, to training and finally into sales. She has been in sales since 1996 and has (mostly!) loved every minute of it. Leanne started selling IaaS (whether co-lo, managed hosting or cloud) during the dot-com boom and has expanded her expertise since joining SAP. Now, she enjoys leading a team of sales professionals as she works with companies to improve business outcomes and accelerate digital transformation utilizing SAP’s Intelligent Enterprise.

When did you join Cloud Girls and why?

I was one of the first members of Cloud Girls in 2011. I joined because having a strong network and community of women in technology is important.

What do you value about being a Cloud Girl?  

I value the relationships and women in the group.

What advice would you give to your younger self at the start of your career?

Stop doubting yourself. Continue to ask questions and don’t be intimidated by people that try to squash your tenacity and curiosity.

What’s your favorite inspirational quote?

“You can have everything in life you want if you will just help other people get what they want.”  – Zig Ziglar

What one piece of advice would you share with young women to encourage them to take a seat at the table?

Never stop learning and always ask questions. In technology, women (and men too, for that matter) avoid asking questions because they think it reveals some sort of inadequacy. That is absolutely false. Use your curiosity and thirst for knowledge as a tool; it will serve you well all your life.

You’re a new addition to the crayon box. What color would you be and why?

I would be Sassy-molassy because I’m a bit sassy.

What was the best book you read this year and why?

I loved American Dirt because it humanized the US migrant plight and reminded me how blessed and lucky we all are to have been born in the US.

What’s the most useless talent you have? Why? […] Read more »

 

3 signs that it’s time to reevaluate your monitoring platform

As we move forward from the uncertainty of 2020, remote and hybrid styles of work are likely to remain beyond the pandemic. Amid the rise of modified workflows, we’ve also seen an increase in phishing scams, ransomware attacks, and simple user errors that can bring down the IT infrastructure we rely on – sometimes with devastating long-term repercussions for the business. Preventing this requires a reliable monitoring system that constantly scans your environment – whether you’re operating from a data center, a public cloud, or some combination of the two – and alerts you when something is amiss. Often these monitoring tools run so smoothly in the background of operations that we forget they’re even there – which can be a big problem.

When was the last time you assessed your monitoring platform? You may have already noticed signs that your tools are not keeping up with the rapidly changing digital workforce – gathering nonessential data while failing to forewarn you about legitimate issues in your network operations. Post-2020, these systems have to handle workforces that stay connected digitally regardless of where employees are working. Your monitoring tools should be hyper-focused on alerting you to issues from outside your network and any weaknesses within it. Too often, we turn out to be monitoring too much and still missing the essential problems until it’s too late.

  1. Outages

One of the most damaging and costly setbacks a business can experience is network downtime – when your network suddenly and without warning ceases to work. Applications are no longer functioning, files are inaccessible, and your business cannot perform its daily functions. Responding to network downtime isn’t a simple matter of rebooting your computer, either. Gartner estimates that for every minute of network downtime, the company in question loses an average of $5,600. On the higher end of this spectrum, a business could lose $540,000 per hour. Those figures are based on lost productivity alone. Getting your system up and running again, catching up on lost time, and, one would think, reevaluating and implementing a new monitoring system all incur additional costs.

In the case of one luxury hotel chain, an updated monitoring system accurately detected why they were experiencing outages – a change in network configuration. By utilizing a newly updated monitoring configuration, the chain quickly reverted the network change and restored service for their customers, saving hours of troubleshooting and costly downtime.

Systems should be proactive, not reactive. The time to reassess your monitoring infrastructure isn’t after it fails to warn you that something has gone wrong. Your network monitoring system should be automatically measuring performance and sharing status updates so you can fix a problem before it becomes an outage. If your system is working at its proper capacity, it will routinely prevent unexpected outages by using performance thresholds to evaluate functionality in real time, alerting you when targeted metrics reach a level that requires attention. With a robust monitoring system in place, your team has complete network visibility and can respond to changes and prevent outages before they happen.
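To make the idea concrete, here is a minimal Python sketch of threshold-based alerting; the metric names and threshold values are illustrative assumptions, not settings from any particular monitoring product.

```python
# Minimal sketch of threshold-based monitoring (hypothetical metrics and thresholds).
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str       # e.g. "cpu_utilization_pct"
    warn: float       # value that should trigger an early warning
    critical: float   # value that signals an imminent outage

THRESHOLDS = [
    Threshold("cpu_utilization_pct", warn=75, critical=90),
    Threshold("packet_loss_pct", warn=1, critical=5),
    Threshold("disk_used_pct", warn=80, critical=95),
]

def evaluate(sample: dict) -> list[str]:
    """Compare a metrics sample against thresholds and return alert messages."""
    alerts = []
    for t in THRESHOLDS:
        value = sample.get(t.metric)
        if value is None:
            continue
        if value >= t.critical:
            alerts.append(f"CRITICAL: {t.metric}={value} (>= {t.critical})")
        elif value >= t.warn:
            alerts.append(f"WARNING: {t.metric}={value} (>= {t.warn})")
    return alerts

# Example: a sample that should raise alerts before users notice an outage.
print(evaluate({"cpu_utilization_pct": 82, "packet_loss_pct": 0.2, "disk_used_pct": 96}))
```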

  2. Alert Fatigue

Alert fatigue is something we can all relate to after a year of working from home: email notifications, instant messages, texts, phone calls, and calendar reminders for your next video meeting. After so many of these day after day, we become desensitized to them; the more alerts we receive, the less urgent any of them seem. From a cybersecurity standpoint, some notifications may flag anomalies linked to a potential cyberattack, but more often they will be junk email. If a genuinely urgent message does come through, it often slips through the cracks because it seems no different from any other notification we receive.

So how can your IT infrastructure help prevent this? Intelligent monitoring systems, in general, aim to make the lives of the people using them easier. Your monitoring system should reduce the number of redundant alerts so it can recognize and prioritize actual issues. A tiered-alert priority system displays notifications on your dashboard with a visual or auditory cue signifying how important each one is. Can this wait until the afternoon, or does it need to be addressed immediately? Detecting a cyberattack early, for example, can make a huge difference in mitigating damage.
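As a rough illustration of how a tiered, deduplicated alert queue might behave, here is a small Python sketch; the severity tiers and five-minute suppression window are assumed values, not features of any specific tool.

```python
# Minimal sketch of a tiered alert queue that suppresses duplicates
# and surfaces the most urgent items first (illustrative only).
import heapq
import time

SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}  # lower = more urgent

class AlertQueue:
    def __init__(self, dedup_window_s: float = 300):
        self._heap = []
        self._last_seen = {}            # (source, message) -> last time it was queued
        self._dedup_window_s = dedup_window_s

    def push(self, source: str, message: str, severity: str) -> None:
        now = time.time()
        key = (source, message)
        # Drop repeats of the same alert seen within the dedup window.
        if now - self._last_seen.get(key, 0) < self._dedup_window_s:
            return
        self._last_seen[key] = now
        heapq.heappush(self._heap, (SEVERITY[severity], now, source, message))

    def pop_most_urgent(self):
        return heapq.heappop(self._heap) if self._heap else None

q = AlertQueue()
q.push("ids", "possible credential stuffing", "critical")
q.push("backup", "nightly job 3 minutes late", "low")
q.push("ids", "possible credential stuffing", "critical")  # suppressed duplicate
print(q.pop_most_urgent())  # the critical IDS alert comes out first
```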

  3. Excess Tools

One of the root causes of any monitoring flaw can be the monitoring tools themselves – over-monitoring. If you have multiple tools tracking your network, you’re likely getting notifications and warnings from each, contributing to alert fatigue and opening yourself up to a potential failure that results in a network outage and business interruption. Having multiple tools performing the same function is a waste of resources, as they render each other redundant. The key is to consolidate the necessary functions into one monitoring system, regularly assessed for vulnerabilities and customized for your particular business needs.

Your business stakeholders will indeed want to track an abundance of metrics – server functionality, security, business metrics, and so on – and it may be that not all of these can be monitored by the same tool. You should first decide which things are essential for your team to be actively monitoring and assessing. Security should be a top priority, but are there other data points that could instead be pulled into a quarterly or annual report? Your IT monitoring should be focused on tracking and alerting you to essential information and irregularities. You can avoid overextending the team and receiving alerts that will only be ignored by first doing your own assessment of what you need from your system.

Assessing Your Approach for Future Growth

We can’t operate at our full potential without the control and visibility that monitoring tools give us. […] Read more »

 

Protecting Remote Workers Against the Perils of Public Wi-Fi

In a physical office, front-desk security keeps strangers out of work spaces. In your own home, you control who walks through your door. But what happens when your “office” is a table at the local coffee shop, where you’re sipping a latte among total strangers?

Widespread remote work is likely here to stay, even after the pandemic is over. But the resumption of travel and the reopening of public spaces raise new concerns about how to keep remote work secure.

In particular, many employees used to working in the relative safety of an office or private home may be unaware of the risks associated with public Wi-Fi. Just like you can’t be sure who’s sitting next to your employee in a coffee shop or other public space, you can’t be sure whether the public Wi-Fi network they’re connecting to is safe. And the second your employee accidentally connects to a malicious hotspot, they could expose all the sensitive data that’s transmitted in their communications or stored on their device.

Taking scenarios like this into account when planning your cybersecurity protections will help keep your company’s data safe, no matter where employees choose to open their laptops.

The risks of Wi-Fi search

An employee leaving Wi-Fi enabled when they leave their house may seem harmless, but it really leaves them incredibly vulnerable. Wi-Fi enabled devices can reveal the network names (SSIDs) they normally connect to when they are on the move. An attacker can then use this information to emulate a known “trusted” network that is not encrypted and pretend to be that network.  Many devices will automatically connect to these “trusted” open networks without verifying that the network is legitimate.

Often, attackers don’t even need to emulate known networks to entice users to connect. According to a recent poll, two-thirds of people who use public Wi-Fi set their devices to connect automatically to nearby networks, without vetting which ones they’re joining.

If your employee automatically connects to a malicious network — or is tricked into doing so — a cybercriminal can unleash a number of damaging attacks. The network connection can enable the attacker to intercept and modify any unencrypted content that is sent to the employee’s device. That means they can insert malicious payloads into innocuous web pages or other content, enabling them to exploit any software vulnerabilities that may be present on the device.

Once such malicious content is running on a device, many technical attacks are possible against other, more important parts of the device software and operating system. Some of these provide administrative or root level access, which gives the attacker near total control of the device. Once an attacker has this level of access, all data, access, and functionality on the device is potentially compromised. The attacker can remove or alter the data, or encrypt it with ransomware and demand payment in exchange for the key.

The attacker could even use the data to emulate and impersonate the employee who owns and/or uses the device. This sort of fraud can have devastating consequences for companies. Last year, a Florida teenager was able to take over multiple high-profile Twitter accounts by impersonating a member of the Twitter IT team.

A multi-layered approach to remote work security

These worst-case scenarios won’t occur every time an employee connects to an unknown network while working remotely outside the home — but it only takes one malicious network connection to create a major security incident. To protect against these problems, make sure you have more than one line of cybersecurity defenses protecting your remote workers against this particular attack vector.

Require VPN use. The best practice for users who need access to non-corporate Wi-Fi is to require that all web traffic on corporate devices go through a trusted VPN. This greatly limits the attack surface of a device, and reduces the probability of a device compromise if it connects to a malicious access point.

Educate employees about risk. Connecting freely to public Wi-Fi is normalized in everyday life, and most people have no idea how risky it is. Simply informing your employees about the risks can have a major impact on behavior. No one wants to be the one responsible for a data breach or hack. […] Read more »

 

 

What types of cybersecurity skills can you learn in a cyber range?

What is a cyber range?

A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.

A cyber range is a valuable tool because it provides experience with using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description about using a particular tool or handling a certain scenario, a cyber range allows students to do it themselves.

What skills can you learn in a cyber range?

A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.

SIEM, IDS/IPS and firewall management

Deploying certain cybersecurity solutions — such as SIEM, IDS/IPS and a firewall — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.

A cyber range can walk students through the steps of properly configuring the most common solutions, including deployment locations, configuration settings, and the rules and policies used to identify and block potentially malicious content.
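As a loose illustration of the kind of rule-and-policy logic such an exercise walks through, here is a small Python sketch of an ordered, default-deny rule set; the addresses, ports, and rule structure are invented for the example.

```python
# Illustrative sketch of firewall-style rule evaluation:
# ordered rules, first match wins, default deny.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source network in CIDR notation
    dst_port: Optional[int]  # None matches any destination port

RULES = [
    Rule("deny",  "203.0.113.0/24", None),  # block a known-bad range entirely
    Rule("allow", "10.0.0.0/8",     443),   # internal clients to HTTPS
    Rule("allow", "10.0.0.0/8",     22),    # internal admins to SSH
]

def decide(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.src) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"  # default deny when nothing matches

print(decide("10.1.2.3", 443))     # allow
print(decide("203.0.113.7", 443))  # deny (blocked range)
print(decide("192.0.2.10", 80))    # deny (no matching rule)
```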

Incident response

After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.

A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.
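A hands-on exercise of this kind often starts with sweeping logs for known indicators of compromise. The Python sketch below shows the general idea; the log format and indicator values are placeholders (the file hash is the widely published MD5 of the EICAR test file).

```python
# Minimal incident-response style exercise: scan log lines for known IoCs.
import re

IOC_IPS = {"198.51.100.23", "203.0.113.99"}          # example-only addresses
IOC_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}    # EICAR test file MD5

LOG_LINES = [
    "2021-04-02 10:14:01 conn src=10.0.0.5 dst=198.51.100.23 port=443",
    "2021-04-02 10:15:44 file hash=44d88612fea8a8f36de82e1278abb02f path=C:/tmp/a.exe",
    "2021-04-02 10:16:02 conn src=10.0.0.6 dst=8.8.8.8 port=53",
]

def find_iocs(line: str) -> list[str]:
    hits = []
    for ip in re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", line):
        if ip in IOC_IPS:
            hits.append(f"known-bad IP {ip}")
    for h in re.findall(r"\b[a-f0-9]{32}\b", line):
        if h in IOC_HASHES:
            hits.append(f"known-bad file hash {h}")
    return hits

for line in LOG_LINES:
    for hit in find_iocs(line):
        print(f"IoC match: {hit} -> {line}")
```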

Operating system management: Linux and Windows

Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.

A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.
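For a flavor of what such a configuration check might look like, here is a minimal Python sketch that verifies one common Linux hardening setting; the file path and accepted values are typical defaults rather than universal rules.

```python
# Illustrative OS-hardening check: verify that root login over SSH is disabled.
from pathlib import Path

def root_login_disabled(config_path: str = "/etc/ssh/sshd_config") -> bool:
    try:
        text = Path(config_path).read_text()
    except FileNotFoundError:
        return False  # cannot verify; treat as failing the check
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("PermitRootLogin"):
            parts = line.split()
            return len(parts) > 1 and parts[1].lower() in ("no", "prohibit-password")
    return False  # directive missing; the daemon then falls back to its default

if __name__ == "__main__":
    status = "PASS" if root_login_disabled() else "REVIEW"
    print(f"PermitRootLogin check: {status}")
```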

Endpoint controls and protection

As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.

Penetration testing

This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.

A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
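One of the earliest steps in such an exercise is simple service discovery against a lab target. Below is a minimal Python sketch of a TCP connect scan; the target address is a placeholder, and this should only ever be pointed at systems you are authorized to test, such as a cyber range VM.

```python
# Minimal TCP connect scan of a lab target to see which services are listening.
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = "127.0.0.1"  # placeholder; point at an authorized lab VM
    print(scan(target, range(20, 1025)))
```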

Network management

Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.

A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.
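As a small example of the kind of diagnostic scripting such an exercise might involve, here is a Python sketch that measures TCP connect latency to a few critical services; the hostnames, ports, and latency budget are made-up values.

```python
# Illustrative network-diagnosis exercise: measure TCP connect latency
# to critical services and flag the slow or unreachable ones.
import socket
import time

SERVICES = [("intranet.example.local", 443), ("db.example.local", 5432)]  # placeholders
LATENCY_BUDGET_S = 0.2  # assumed 200 ms budget

def check(host: str, port: int) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=2):
            elapsed = time.monotonic() - start
        status = "OK" if elapsed <= LATENCY_BUDGET_S else "SLOW"
        print(f"{host}:{port} {status} ({elapsed * 1000:.0f} ms)")
    except OSError as exc:
        print(f"{host}:{port} UNREACHABLE ({exc})")

for host, port in SERVICES:
    check(host, port)
```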

Malware analysis

Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization’s incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.

Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
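The string-extraction step, for instance, can be sketched in a few lines of Python, similar in spirit to the Unix strings utility; the sample filename below is a placeholder.

```python
# Minimal "searching for strings" step: pull printable ASCII runs out of a binary.
import re
import sys

def extract_strings(path: str, min_len: int = 6):
    data = open(path, "rb").read()
    # Runs of printable ASCII characters at least `min_len` bytes long.
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    sample = sys.argv[1] if len(sys.argv) > 1 else "suspicious_sample.bin"  # placeholder
    for s in extract_strings(sample):
        # URLs, registry keys, and IP addresses found here often become IoCs.
        print(s)
```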

Threat hunting

Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization’s environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.

A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.

Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
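As a simple illustration, the Python sketch below aggregates failed SSH logins by source address and flags anything above an assumed threshold, a common first pass at separating background noise from genuine brute-force activity; the log path and cutoff are examples only.

```python
# Illustrative threat-hunting pass over an auth log: count failed logins per source.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 20  # assumed cut-off; tune to the environment's baseline

def suspicious_sources(log_lines):
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return [(ip, n) for ip, n in counts.most_common() if n >= THRESHOLD]

if __name__ == "__main__":
    with open("auth.log") as f:            # placeholder log path
        for ip, n in suspicious_sources(f):
            print(f"{ip}: {n} failed logins - investigate")
```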

Computer forensics

Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics. […] Read more »

 

In an Uncertain World, You Can Count on These Four Trends in 2021

As leaders look to the year ahead, planning and predictions have taken on a whole new meaning in a post-pandemic world. With so many unknowns in 2021, how can anyone claim to know what’s coming with confidence?

However, if 2020 taught us nothing else, it is that major, unforeseen disruption does result in one certainty: enterprises must be more responsive. The pandemic shone a bright and unflattering spotlight on where companies need to update their IT infrastructure – both to contend with current challenges and to become a modern company. Despite cloud adoption, resilience and business continuity gradually fell to the bottom of enterprises’ infrastructure to-do lists in recent years. Now, companies are playing catch-up to take hold of a future that has arrived on their doorstep a few years early.

While 2021 feels largely uncharted, there are a few trends that are bound to define this year’s plans and investments. Companies that capitalize on these trends will position themselves not only to be competitive and stronger than ever, but to respond when the next disruption hits.

Reinforced mainframe recruiting and retention

We have witnessed the mainframe skills gap widen in recent years. As more mainframe pros retire, taking their knowledge with them, too little focus has been put on enabling emerging professionals to step in. Enterprises must recalibrate: the mainframe is not going away, and the workforce needs to shift. Mainframe spend continues to grow at above 3% yearly, and upwards of 50% of organizations in several industries (financial services, insurance, public sector and more) state it’s running more of their business-critical applications than ever before.

To fill these vacant positions with new talent, enterprises need to adjust their approach to the mainframe. Emerging IT pros and college graduates don’t want to work on outdated interfaces. They prefer interfaces that are intuitive and allow for clicking and dragging. They want to work for a company with a highly collaborative and communicative culture, as well as modern processes and tools such as DevOps and related tool chains. DevOps enables a culture of collaboration, reduces siloes, automates for efficiency and, yes, can be applied to the mainframe. Enterprises that modernize the mainframe and related operations will close the skills gap in 2021 and usher in a new wave of innovation.

A call for COBOL programmers

COBOL (Common Business-Oriented Language) is not a dead programming language – far from it. A 2018 report for the Social Security Administration found that the administration maintained more than 60 million lines of COBOL. In fact, many government agencies rely on COBOL programs, but – no surprise – there are too few programmers to tackle them. The lack of COBOL programmers has been adding stress to the mainframe, especially for government agencies, which will only escalate over the next six to eight months.

COBOL development isn’t inherently the issue – most IT pros who can code Python can learn COBOL. The bigger problem is understanding how the programs work and what they’re doing. That requires a person with application understanding and ample tribal knowledge. The second issue is good quality assurance: many enterprises lack the ability to test applications and ensure nothing is at risk of breaking. COBOL coders need an easy way to digest 10,000 lines of code, break it down and understand it. They need the right tools to avoid working unreasonable hours, battling rampant inefficiency and risking project failure. With these tools, enterprises can tackle COBOL programs in a way that is manageable for people both new to and experienced with COBOL.

A shift towards value stream management

As enterprises work to better align the IT and business sides of the organization, including embracing agile and DevOps, value stream management is stepping into the spotlight. Until recently, most organizations were focused on workload automation and scheduling, where they could automate certain parts of a system and schedule related jobs. However, workload automation solutions no longer fully meet most enterprises’ needs as IT moves toward DevOps and teams want to move faster and require more visibility as they orchestrate automation across multiple systems, technologies and platforms.

With value stream management, enterprises take a step beyond automating a particular “job” to instead orchestrating the automation of multiple jobs and tasks, across multiple applications and systems, to streamline a process or value stream. Many value streams are currently “in the dark,” managed manually by resources focused on ensuring a job runs to completion. With orchestration, enterprises can design and visualize their value streams, create workflows tying them together and collect metrics to figure out where improvements can be made. This visibility will allow enterprises to deliver value faster and innovate more quickly – key advantages in becoming a more responsive enterprise. Orchestration also helps eliminate silos and create greater transparency, enabling IT pros to find issues and remove bottlenecks faster. Enterprises can even orchestrate a DevOps toolchain, so they can kick off the creation of code and orchestrate its delivery to meet demand.

The hyperautomation takeover

What is the difference between automation and hyperautomation? According to Gartner, hyperautomation applies advanced technologies like robotic process automation (RPA), artificial intelligence (AI), machine learning (ML) and more to enable the automation of virtually any repetitive task. In the age of efficiency and productivity, it is not hard to see why this trend is taking off.

The pandemic highlighted several areas within the enterprise that would benefit from hyperautomation. For instance, many companies put new workflows in place for COVID-19 tracking. HR departments need to monitor which employees are physically safe, which are not and what their IT needs are, especially with a remote workforce. While many companies were once reluctant to dabble in workflow automation with their content services solutions, never mind value stream management, COVID-19 has forced companies – and HR specifically – to reevaluate where they need automation capabilities, especially with extra processes added by the pandemic.

Automation will also be increasingly pertinent as companies apply DevOps across the organization, especially when integrating the mainframe into the DevOps toolchain. Using hyperautomation, enterprises can integrate tools that allow for continuous delivery and bring processes into a modern culture. […] Read more »

 

3 key reasons why SOCs should implement policies over security standards

In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.

How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.

Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and how they can be dramatically improved with new approaches and tooling.

How Security Standards Look Today

Security standards affect all aspects of a business, from directly shaping development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.

There are many challenges that come with this approach, all rooted in three main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the fact that security standards promote reactive rather than proactive issue handling.

We can greatly improve the security landscape by directly addressing these key issues:

1. Information Security and Compliance is Siloed

In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.

Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.

If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.

2. Security Standards are Too Generic

Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.

With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.

Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.

3. Current Practices are Reactive, Rather Than Proactive

The existing gap between dev and security teams, along with the generic nature of security standards, prevents organizations from being truly proactive when it comes to security measures.

Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.

The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.

Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.

DevSecOps – A New Way Forward

The emergence of DevSecOps is showing that generic top-to-bottom security standards may soon be less important than they are now.

First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.

In outlining the rise and success of DevSecOps, a recent article gave three defining criteria of a true DevSecOps environment:

  1. Developers are in charge of security testing.
  2. Security experts act as consultants to developers when additional knowledge is required.
  3. Fixing security issues is managed by the development team.

Ongoing security-related issues are owned by the development team. […] Read more »

 

 

“All sectors can benefit from a simulated targeted attack”

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part.

On the surface, a Red Team exercise appears like a scene straight out of a Hollywood movie. Spies masquerading as employees walking straight into the office so instinctively that no one bats an eye. Plugging things into your devices that are not supposed to be there. Tapping cameras, telephones, microphones, rolling out emails, or even walking around with a banana so you may assume the new guy/girl didn’t have time to grab a proper lunch. By the time you figure out they weren’t supposed to be where they were, it’s already too late. And the only sigh of relief is the fact that they were on your side — and they were working for you.

So, before your company humors itself with a Red Team assessment, it might be worth talking to an expert about it. And for that, we have Tom Van de Wiele, Principal Security Consultant at F-Secure. With nearly 20 years of experience in information security, Tom specializes in red team operations and targeted penetration testing for the financial, gaming, and service industries. When not breaking into banks, Tom acts as an adviser on topics such as critical infrastructure and IoT as well as incident response and cybercrime. With a team that has a 100% success rate in overcoming the combination of targeted organizations’ physical and cybersecurity defenses to end up in places they should never be, Tom is possibly one of the best red team experts in the world. In an exclusive interview with Augustin Kurian of CISO MAG, Tom discusses key questions a company should ask before it engages in a Red Team assessment.

It is often said that Red Teaming is much better than regular penetration testing. What are your thoughts about that?

Red teaming, penetration testing, source code review, vulnerability scanning, and other facets of testing all play a key part in trying to establish the level of control and maturity of an organization. They all have different purposes, strengths, and limitations. A penetration test is usually limited and focused only on a certain aspect of the business, e.g. a certain network, application, building, or IT asset; a red team test is based on the attacker’s choice and discretion about what to target and when, keeping in mind the actual objectives and goals the client wants to have simulated that are relevant to them. That means anything with the company logo on it could be in scope for the test — keeping in mind ethics, local and international laws, and good taste.

In general, Red Team Testing is only for organizations that have already established a certain maturity and resilience when it comes to opportunistic and targeted attacks. This resilience can be expressed in many ways, hence we want to make sure that we are performing it at the right time and place for our clients, to ensure they get value out of it. The goals are three-fold: to increase the detection capabilities of the organization tailored towards relevant attack scenarios, to ensure that certain attack scenarios become impossible, and to improve response and containment times so that a future attack can be dealt with swiftly and with limited impact. Ultimately, all efforts should be focused on an “assume breach” mentality while increasing the cost of attack for a would-be attacker.

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part. For instance, just because you haven’t been hit by ransomware in the past, doesn’t mean you won’t get impacted by one in the future. “Forcing” organizations to look at their own structure and how they handle their daily operations and business continuity as part of threat modeling, sometimes brings surprising results in positive or negative form. But at the end of the day, everyone is better off knowing what the risks might be of certain aspects of the business, so that an organization can take better business decisions, for better or for worse, while they structure a plan on how to handle whatever it is that is causing concern to stakeholders.

When should a company realize that it is an apt time to hold a Red Team assessment? What kinds of industries should invest in Red Teaming? If so, how frequent should the Red Teaming assessment be? Should it be a yearly process, half-yearly, quarterly, or a continuous one? How often do you do one for your clients?

All sectors can benefit from a simulated targeted attack to test the sum of their security controls, as all business sectors have something to protect or care about, be it customer data, credibility, funds, intellectual property, disruption scenarios, industrial espionage, etc. What kind of testing and how frequently depends on the maturity of the organization, its size, and how much they regard information security as a key part of their organization, rather than a costly afterthought, which unfortunately is still the case for a lot of organizations.

Major financial institutions will usually schedule a red team engagement every 1 – 1.5 years or so.

In between those, a number of other initiatives are held on a periodic basis in order to keep track of the current attack surface and the current threat landscape, as well as to understand where the business is going versus what technology, processes, and training are required to ensure risk can be kept at an acceptable level. As part of an organization’s own due diligence, it needs to ensure that networks and applications receive different levels of scrutiny using a combination of preventive and reactive efforts, e.g. architecture reviews, threat modeling, vulnerability scanning, source code review, and attack path mapping, just to name a few. […] Read more »

This article first appeared in CISO MAG (www.cisomag.com).

Cybersecurity Predictions For 2021

Here we are again for the annual predictions of the trends and events that will impact the cybersecurity landscape in 2021. Let’s try to predict the threats and bad actors that will shape the landscape over the next 12 months. I’ve put together a list of the seven top cybersecurity trends that you should be aware of.

#1 Ransomware attacks on the rise

In the past months we have observed an unprecedented surge of ransomware attacks hitting major businesses and organizations across the world. The number of attacks will continue to increase in 2021, and threat actors will use prominent botnets like Trickbot to deliver their ransomware. Security experts will also observe a dramatic increase in human-operated attacks, in which threat actors exploit known vulnerabilities in targeted systems in order to gain access to the target networks. Once they have gained access to these networks, operators will manually deploy the ransomware. School districts and municipalities will be prime targets for cybercriminal organizations because they have limited resources and poor cyber hygiene.

In the first quarter of 2021, a growing number of organizations will continue to allow their employees to remotely access their resources in response to the ongoing COVID-19 pandemic, thus enlarging their attack surface.

Most human-operated attacks will be targeted; ransomware operators will carefully choose their victims in order to maximize the return on their efforts.

The ransomware-as-a-service model will allow networks of affiliates to arrange their own campaigns hitting end users and SMEs worldwide.

#2 The return of cyber attacks on cryptocurrency industry

The number of cyber-attacks against organizations and businesses in the cryptocurrency industry will surge again in the first months of 2021 due to a new increase in the value of currencies such as Bitcoin.

Cryptocurrency exchanges and platforms will be targeted by both cybercrime organizations and nation-state actors attempting to monetize their efforts.

If the values of the major cryptocurrencies increase, we will observe new malware specifically designed to steal cryptocurrencies from victims’ wallets, along with new phishing campaigns targeting users of cryptocurrency platforms.

#3 Crimeware-as-a-service even more efficient

In the Crimeware-as-a-Service (CaaS) model cybercriminals offer their advanced tools and services for sale or rent to other less skilled criminals. The CaaS is having a significant effect on the threat landscape because it lowers the bar for inexperienced threat actors to launch sophisticated cyber attacks.

The CaaS model will continue to enable both technically inexperienced criminals and APT groups to rapidly arrange sophisticated attacks. The most profitable services that will be offered using this model in 2021 are ransomware and malware attacks.

CaaS allows advanced threat actors to rapidly arrange hit-and-run operations and makes their attribution difficult. In 2021, major botnet operations, such as Emotet and Trickbot, will continue to infect devices worldwide.

In the coming months we will see the growth of remote access markets that allow attackers to exchange access credentials to compromised networks and services. These services expose organizations to a broad range of cyber threats, including malware, ransomware and e-skimming.

#4 Cyberbullying: too many people suffer in silence

Words can cause more damage than weapons; we cannot underestimate this threat, and technology can exacerbate the danger. Cyberbullying refers to the practice of using technology to harass, or bully, someone else.

The term cyberbullying is an umbrella for different kinds of online abuse, some of which are rapidly increasing, such as doxing, cyberstalking, and revenge porn.

Authorities and media are approaching the problem with increasing interest, but evidently it is not enough.

This criminal practice represents one of the greatest dangers of the internet; it can have a devastating impact on teenagers.

In the upcoming months, the problem of cyberbullying will impact the online gaming community more than ever, reaching worrisome levels.

#5 State-sponsored hacking, all against all

In 2021, cyber attacks carried out by state-sponsored hackers will cause significant damage to targeted organizations.

The number of targeted attacks against government organizations and critical infrastructure will increase, pushing states to promote a global dialogue about the risks connected to these campaigns.

The healthcare and pharmaceutical sectors, as well as the academic and financial industries, will be under attack.

Nation-state actors aim to gather intelligence on strategic intellectual property.

Most of the campaigns that will be uncovered by security firms will be carried out by APT groups linked to Russia, China, Iran, and North Korea. This is just the tip of the iceberg, because the level of sophistication of these campaigns will allow them to avoid detection for long periods, with dramatic consequences.

Nation-state actors will be also involved in long-running disinformation campaigns aimed at destabilizing the politics of other states.

#6 IoT industry under attack

The rapid evolution of the internet-of-things (IoT) industry and the implementation of 5G networks will push businesses to become ever more reliant on IoT technology.

The bad news is that a large number of smart devices fail to implement security by design, and most are poorly configured, exposing organizations and individuals to the risk of being hacked.

Threat actors will develop new malware to target IoT devices that can be abused in multi-purpose malicious campaigns. Ransomware operators will also focus their efforts on the development of specific malware variants to target these systems.

IoT ransomware is designed to take over connected systems and force them to work incorrectly (e.g. changing the levels of chemical elements in production processes or manipulating the level of medicine delivered by an insulin pump), forcing victims into paying the ransom in order to restore ordinary operations.

#7 Data breaches will continue to flood the cybercrime underground market

Thousands of data breaches will be disclosed in 2021 by organizations worldwide. […] Read more »

 

Beyond standard risk feeds: Adopting a more holistic API solution

In July 2020, the gaming company Nintendo was compromised in a data breach that commentators described as unprecedented.

The breach, dubbed “the gigaleak,” exposed internal emails and identifying information, as well as a deluge of proprietary source code and other internal documents.  But the compromise wasn’t discovered by observing network traffic or even dark web analysis — it was first identified through a post on 4chan.

Less-regulated online spaces like imageboards, messaging apps, decentralized platforms, and other obscure sites are increasingly relevant for detecting these types of corporate security compromises. Serious threats can be easily missed if security teams aren’t looking beyond standard digital risk sources like technical and dark web data feeds.

Overlooked risks can cost companies millions in financial and reputational damage — but existing commercial threat intelligence solutions often lack data coverage, especially from these alternative web spaces.

How does this impact corporate security operations, and how can data coverage gaps be addressed?

An evolving corporate risk landscape

Security risk detection is no longer limited to highly anonymized online spaces like the dark web or technical feeds like network traffic data.

While these sources remain crucial, corporate security teams also need to assess obscure social sites, forums, and imageboards, messaging apps, decentralized platforms, and paste sites. These spaces are frequently used to circulate leaked data, as with the Nintendo breach, and discuss or advertise hacking tactics like malware and phishing.

Example of leaked data on RaidForums, a popular hacking website on the deep web — posted/discovered by Echosec Systems

Beyond malware and breach detection, these sources can indicate internal threats, fraud, theft, disinformation, brand impersonation, potentially damaging viral content, and other threats implicating a company or industry.

The rise of hacktivism and extremism on less-regulated networks also poses an increased risk to companies and executives. For example, disinformation or violence targeting high-profile personnel may be discussed and planned on these sites.

Why are these alternative sources becoming more relevant for threat detection?

To start, surface and deep web networks are more accessible for threat actors even though the dark web may offer more anonymity. They also have further reach than the dark web — a relatively small and isolated webspace — if the goal is to spread disinformation and leaked data.

Obfuscation tactics in text-based content are also becoming more sophisticated. For example, special characters (e.g. !4$@), intentional typos, code language, or acronyms can be used to hide targeted threats and company names. Adversaries are often less concerned with detection on surface and deep websites using these techniques.
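As a rough illustration of how detection tooling can counter this, the Python sketch below normalizes common character substitutions before matching text against a watch list; the substitution map and the brand names are hypothetical examples.

```python
# Minimal sketch of de-obfuscating text before matching it against watch terms.
import re

SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s", "!": "i"})
WATCH_TERMS = {"acme corp", "acme"}  # hypothetical brand names

def normalize(text: str) -> str:
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z ]+", "", text)   # drop remaining special characters

def matches(text: str) -> set[str]:
    cleaned = normalize(text)
    return {term for term in WATCH_TERMS if term in cleaned}

# Both watch terms match once the substitutions are reversed.
print(matches("big leak coming for 4cm3 c0rp next week"))
```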

Decentralization is also becoming a popular hosting method for threat actors concerned with censorship on mainstream networks and takedowns on the dark web. Decentralization means that content or social media platforms are hosted on multiple global or user-operated servers so that networks are theoretically impossible to dismantle.

 

CEO-targeted death threat on the decentralized social network Mastodon — discovered by Echosec Systems

 

While the dark web was once considered a mecca for detecting security threats, these factors are extending relevant intelligence sources to a wider range of alternative sites.

New barriers to threat detection

Emerging online spaces offer valuable security data, but the changing threat landscape is posing new challenges for corporate security. Many alternative threat intelligence sources are obscure enough that analysts may not know they exist or may not think to look there for threats. Some surface and deep websites, like forums and imageboards, emerge and turn over quickly, making it hard to keep track of what’s currently relevant.

Additionally, many commercial, off-the-shelf APIs provide access to technical security feeds and common sources like the dark web and mainstream social media — but do not offer this alternative data. This creates a functional gap for security teams who realize the value of obscure online sources but may be forced to navigate them manually.

APIs enable security teams to funnel data from online sources directly into their security tooling and interfaces rather than collecting data through manual searches on-site.
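As a rough sketch of what that integration can look like, the Python example below pulls mentions from a hypothetical provider endpoint into local tooling; the URL, parameters, and response fields are invented for illustration and do not describe any particular vendor’s API.

```python
# Hypothetical sketch of pulling threat mentions from a data provider's API
# into in-house tooling. Endpoint, parameters, and fields are placeholders.
import requests

API_URL = "https://api.example-threat-data.com/v1/search"   # placeholder endpoint
API_KEY = "REPLACE_ME"

def fetch_mentions(query: str, sources=("forums", "paste_sites", "messaging")):
    resp = requests.get(
        API_URL,
        params={"q": query, "sources": ",".join(sources), "limit": 50},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for post in fetch_mentions('"Acme Corp" AND (leak OR breach)'):
    # Forward anything that looks relevant into the SIEM / case-management queue.
    print(post.get("source"), post.get("timestamp"), post.get("excerpt"))
```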

 

Leaked image of a security operations centre on social media — discovered by Echosec Systems

 

For most corporate security teams and operations centers, manual data gathering — which often requires creating dummy accounts — is unsustainable, requiring a significant amount of time and resources.

Efficient threat intelligence access is essential in an industry where security teams are often understaffed and overwhelmed by alerts. According to a recent survey by Forrester Consulting, the average security operations team sees 11,000 daily alerts but only has the resources to address 72% of them.

Putting aside the issue of niche data access, industry research suggests that commercial threat intelligence vendors vary widely in their data coverage — overlapping 4% at most even when tracking the same specific threat groups. This raises concerns about how many critical alerts are missed by security teams and operations centers — and how holistic their data coverage actually is, even when using more than one vendor.

Holistic APIs: The future of addressing corporate risk

How do security professionals and operations centers comprehensively access relevant data and accelerate analysis and triage? To address these issues, security teams must rethink their API coverage.

This means adopting commercial threat intelligence solutions that are transparent about their data coverage. Vendors must be able to offer a wider variety of standard and alternative threat sources than is commonly available through off-the-shelf APIs. To achieve this, vendors often must source data in unique ways — such as developing proprietary web crawlers to sit in less-regulated chat applications and forums.

When standard threat intelligence sources are combined with fringe online data in an API, analysts can do their jobs faster than they could by merging conventional feeds with manual navigation. Analysts also get more contextual value within their tooling than they would by viewing different sources separately. It also means that previously overlooked risks on obscure sites are included in a more holistic security strategy.

An API also retains content that has been deleted on the original site since being crawled, allowing for more thorough investigations than those possible with manual searches. This is important on more obscure networks like 4chan where content turns over quickly.

 


 

When collected and catalogued appropriately, a wider variety of online data can be used to train effective machine learning models. These can support faster and more accurate threat detection for overwhelmed security teams. In fact, some emerging APIs have machine learning functionality already built-in so analysts can narrow in on relevant data faster.
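As a toy illustration of that idea, the Python sketch below (using scikit-learn) trains a tiny text classifier to separate threat-relevant posts from benign chatter; the training samples and labels are made up, and a real model would need far more curated data.

```python
# Toy sketch of a text classifier that collected web data could feed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "selling vpn creds for a major retailer, dm me",
    "full customer db dump, sample inside",
    "anyone watch the game last night?",
    "new phone wallpaper thread",
]
labels = [1, 1, 0, 0]  # 1 = threat-relevant, 0 = benign (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = ["fresh combo list from the breach, cheap"]
print(model.predict(new_post), model.predict_proba(new_post))
```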

As alert volumes grow and threat actors migrate to a greater variety of online spaces, security professionals are likely to become more concerned with their data coverage — and how to integrate alternative data sources effectively into workflows. […] Read more »

 

 

The Inevitable Rise of Intelligence in the Edge Ecosystem

A new frontier is taking shape where smart, autonomous devices running data on 5G networks process information that can lead to near real-time insights enterprises need.

The implementation and adoption of 5G wireless, the cloud, and smarter devices is setting the stage for advanced capabilities to emerge at the edge, according to experts and stakeholders. Communications providers such as Verizon continue to flesh out the newest generation of wireless, which promises to offer more robust data capacity and mobile solutions. In brief, the edge has the potential to be a place where greater data processing and analytics happens with near real-time speed, even in seemingly small devices. On the hardware and services side, IBM, Nokia Enterprise, DXC Technology, and Intel all see potential for these converging resources to evolve the edge in 2021 in exponential ways — if all the right pieces fall into place.

The edge is poised to support highly responsive compute, far from core data centers, but Bob Gill, research vice president with Gartner, says the landscape needs to become more cohesive.  “As long as all we have are vertical, monolithic, bespoke stacks, edge isn’t going to scale,” he says, referring to the differing resources created to work at the edge that might not mesh well with other solutions.

Gill defines the edge as the place where the physical and digital worlds interact, which can include sensors and industrial machine controllers. He says it is a form of distributed computing with assets placed in locations that can optimize latency and bandwidth. Retailers, internet of things, and the industrial world have already been working at the edge for more than a decade, Gill says. The current activity at the edge may introduce the world to even more possibilities. “What’s changed is the huge plethora of services from the cloud along with the rising intelligence and number of devices at the edge,” he says. “The edge completes the cloud.”

The focus of the evolution at the edge is to push intelligence to locations where bandwidth, data latency, and autonomy might otherwise be concerns when connecting to the cloud or core computing. With more autonomy, Gill says devices at the edge will be able to operate even if their connections are down.

This might include robots in manufacturing or automated resources in warehousing and logistics, as well as transportation, oil, and gas. Organizations will need some normalization of platforms and solutions at the edge, he says, in order to see the full benefit of such resources. “They’re looking for standardized toolsets and a way that everything isn’t a bespoke one-off,” Gill says.  This could include using open source frameworks deployed to create solutions that can be tweaked.

Gill expects there to be a move toward a standardized approach in the next five years. He says enterprise leadership should ask questions about ways the edge can help the organization achieve goals while also eliminating risk. “The c-suite should be saying, ‘What is the business benefit I’m getting out of this? Is it something that’s replicable?’”

Edge mimics public cloud

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities regarding compute, the network, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on demand compute.

The population of devices and scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.

Part of that acceleration came from medical facilities, logistics, and distribution, where the need is great for such implementations. Loucks sees investment and opportunities for companies to move quickly at the edge with such resources as professional services robots that work alongside people. Such robots need the fast, low-latency connections made possible through 5G and have edge AI chips to assist with computer vision, letting them “see” their environment, he says.

Loucks says there are an estimated 650 million edge AI chips in the wild this year with that number expected to scale up fast. “We are predicting [there will be] around 1.6 billion edge AI chips by 2024 as the chips get smaller with lower power consumption,” he says.

The COVID accelerator

World events have played a part in advancing the resources and capabilities at the edge, says Paul Silverglate, vice chairman and Deloitte’s US technology sector leader. “COVID has been an accelerator and a challenge as it relates to computing at the edge,” he says. Remote working, digital transformation, and cloud migration have all been pushed faster than expected in response to the repercussions of the pandemic. “We’ve gone 10s of years into the future,” Silverglate says.

That future may already be happening as Verizon sees the components of the edge coming together, says director of IoT and real-time enterprise Thierry Sender. “From a Verizon standpoint, we now have partners for enabling edge deeply integrated into our 5G network and wireless overall,” he says, “which means 4G devices get the benefit of the capabilities.” For example, Sender says for private infrastructure, Verizon has a relationship with Microsoft to deliver on compute resources that support mission critical applications large enterprises would have in warehouses or manufacturing. That ties together different bespoke solutions that enterprises use together to solve their needs.

The edge elements coming together in 2020 are building blocks for exponential change, Sender says. “2021 is the year of transformation,” he says. “That’s where a lot of the solutions will begin to truly manifest themselves.” Sender also says 2022 will be a year of disruption as industries adapt to real-time operational and customer insights that affect their businesses. “Every industry is being impacted with this edge integration to network,” Sender says.

This transformative move is well under way, says Evaristus Mainsah, general manager of the IBM Cloud private ecosystem. “What we’re seeing is lots of data moving out to edge locations.” That is thanks to more devices carrying enough compute to conduct analytics, he says, reducing the need to move data to a data center or to the cloud to process. By 2023, expect 50% of new on-prem infrastructure will be in edge locations, he says, compared with 10% now. Enterprise data processing outside of central data centers will also grow from 10% now to 75% in 2025, Mainsah says. “Think of it as a movement of data from traditional data center or cloud locations out into edges.”

There is a generation shift taking place, says Karl Bream, head of strategy for Nokia’s enterprise business, which will take some time and see more agility, automation, and efficiency. “The network is becoming higher capacity, much more reliable, much lower latency, and can perform better in situations where you’re controlling high value assets,” he says. Bream calls this an inflection point, though networks alone cannot achieve the next evolution. Data privacy and security remain concerns, he says, as many enterprises must decide if they can allow data to reside offsite.

Tradeoffs and choices

There are tradeoffs and choices to be made, but possibilities are growing fast at the edge. “We’re seeing web companies putting edge type scenarios into place to put storage closer and closer to the device,” Bream says. […] Read more »