Is Your Organization Reaping the Rewards or Simply Ticking a Box?

 

In today’s interconnected world, third parties play an important role in your organization’s success, but they can also be its weakest link when it comes to risk management.

According to Gartner, 60% of organizations now work with more than 1,000 third parties. Despite the added complexity, these relationships are critical to business success, delivering affordable, responsive and scalable solutions that help organizations grow and adapt to the needs of their customers. But as reliance on third parties grows, so too does the exposure to additional risk.

If we are going to reap the rewards of third-party relationships, then we must also identify, manage and mitigate the risks. A rigorous third-party risk management (TPRM) program is key to achieving just that, which means effective third-party oversight is more important than ever. So how can you ensure that your TPRM processes are ready to face the challenges of our ever-evolving commercial landscape, and what practical steps can you take to improve them?

Third-party risk is more than a checkbox exercise

Often, organizations start thinking about TPRM as a result of compliance drivers. They are facing a wide range of regulatory requirements around data privacy, information security, and cloud hosting. Whatever the motivation, all too often we see this kind of activity treated as little more than checking a box.

Reducing this kind of risk management to a compliance exercise doesn’t ensure that you address the root causes and underlying risks. In fact, by viewing TPRM as merely a set of minimum requirements, it’s easy to overlook potential risks that could become real issues for your organization. This is particularly true when vendors are viewed in isolation: activities aren’t standardized and aligned across the entire organization, creating additional, unforeseen risk across your vendor base.

Instead, your organization should take a holistic approach. Integrating TPRM with your wider Governance, Risk and Compliance (GRC) program can have huge benefits. By embedding your assessment program in your wider compliance landscape, you won’t just be conducting a one-time vendor audit; you’ll be proactively assessing third-party risks and continuously improving operations, efficiencies and processes to enhance the security of every aspect of your supplier network. You will be able to pass information throughout the business, ensuring that risks are identified and treated on an ongoing basis.

Determining the scope of risk assessments is vital

Many organizations simply don’t have the resources to assess all of their third-party providers at a granular level. So your very first step should be to take an inventory of all of your third parties, considering who your vendors are and what business functions they support. Armed with this information, you can prioritize your analysis. Three key considerations provide a structure for your assessment.

Cost is a sensible starting point for most organizations, and often the easiest way to structure your assessment: look at the contractual value of each vendor and tier them accordingly. Another approach is to categorize vendors by the type of risk they expose your organization to; consider factors such as geography, technology, and financial risk, then rank those risks by how likely they are to occur. The most sophisticated approach is criticality, which ranks each vendor by assessing which of your critical assets, systems and processes they impact, and what the repercussions of a failure would be for your organization.
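The tiering logic above can be sketched in code. This is a minimal illustration; the spend bands, the weighting and the tier cutoffs are all made-up assumptions, and a real program would calibrate them to its own risk appetite.

```python
# Illustrative vendor tiering combining cost, risk likelihood and criticality.
# All bands, weights and cutoffs are assumptions for demonstration only.

def tier_vendor(contract_value, likelihood, criticality):
    """Combine the three lenses into a single assessment tier.

    contract_value: annual contract value in dollars
    likelihood:     0.0-1.0 chance the identified risk materializes
    criticality:    1 (peripheral) to 5 (supports critical assets/processes)
    """
    # Normalize spend into rough bands so it doesn't dominate the score.
    cost_band = 1 if contract_value < 50_000 else 2 if contract_value < 500_000 else 3
    score = cost_band * likelihood * criticality
    if score >= 10:
        return "Tier 1 - full assessment"
    if score >= 4:
        return "Tier 2 - standard questionnaire"
    return "Tier 3 - light-touch review"
```

A high-spend, high-criticality vendor lands in Tier 1 and gets the deepest review, while a small, low-risk supplier only warrants a light-touch check.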

Sometimes, there are other factors that might impact whether or not a third-party is included within the scope of your assessment. You may find, for example, that your vendor will not allow you to assess them. That’s often the case if you’re working with big companies like Google, Amazon or Microsoft, who may well be critical to your business success, but who are unlikely to give you bespoke information for your audit.

Alternatively, external factors might dictate the scope of your assessment. Whether it’s a global pandemic like COVID-19 or a major geopolitical event such as the Russia-Ukraine war, organizations will often conduct tactical assessments in order to analyze the impact of their expanded risk profiles.

It’s about quality not quantity

When it comes to crafting your TPRM Question Sets, less is most definitely more. You may be tempted to put together hundreds of questions covering every topic under the sun. But is this going to give you the information you need? And, more importantly, is your busy vendor even going to answer all of your questions?

Another important consideration is to decide just how specific to make your Question Sets. Make them too generic and you may not be able to capture the data you need. But make them too specific to your business and your vendor is going to find it incredibly difficult to provide answers in the detail you are looking for.

At the end of the day, it’s a balancing act – one that means you should keep your Question Sets as targeted as possible. So, rather than sending 200 questions, send 20, but make sure they are well thought through to ensure that they gather the information you need for your risk program. This is where it might be helpful to leverage existing Question Sets such as SCF, SIG and Cyber Risk Institute. Whether they have been provided by consultants or they’re part of an industry standard, this approach will help to ensure you get the data you need.

Practical steps to improve third-party risk management

There are four key actions to consider when it comes to improving a TPRM program.

Understand what’s going well, and what’s not

Conduct a self-assessment of your organization’s TPRM capability and ask key questions such as: what are our strengths? Where are our weaknesses? It’s a good idea to ensure part or all of the assessment is carried out by an external party as they will deliver impartial feedback and highlight potential areas for improvement.

Understand your target state

If you have a vendor-first strategy that leads to a large amount of outsourcing, it’s crucial that you understand your target state for third-party due diligence. Have a roadmap that sets out realistic aims and objectives, and how you intend to achieve them. Trying to do too much too soon can cause issues, slow progress and prove counterproductive.

Build partnerships with vendors

Establishing a close relationship with critical vendors is central to the success of any TPRM program. Without partnerships, it becomes increasingly difficult to work toward common goals. Technology can help with monitoring and assessments, but having the ability to pick up the phone and openly discuss and address issues to mitigate any risks…[…] Read more »

 

5 questions CISOs should ask when evaluating cyber resiliency

When cybersecurity experts talk about cyber resiliency, they’re talking about the ability to effectively respond to and recover from a cybersecurity incident. Many organizations don’t like to think about that — and it’s easy to see why. Many have invested heavily in tools designed to protect their networks from intrusion and attack, and planning for cyber resiliency means accepting the possibility that those tools might fail.

But the truth is, they might. In fact, they probably will. Even with the best tools on the market, it isn’t possible to stop 100% of attacks, which means it’s important to plan for the worst. In doing so, you can improve your cyber resiliency, which can significantly mitigate the damage caused by those inevitable attacks that manage to slip through the cracks.

Putting a plan in place that details how to handle a cyber-triggered business disaster is essential, but it isn’t always easy to get started. Here are the top five questions CISOs should be asking when it comes time to evaluate — and improve — cyber resiliency.

1. Do you have strong retainers in place? 

It’s difficult — not to mention dangerous — to go it alone. There’s no shame in seeking out help from experts. In fact, it’s usually the smart thing to do. Most organizations (hopefully!) don’t have significant experience when it comes to dealing with cyber incidents, but there are many third parties that can provide invaluable guidance and assistance.

Do you have a good incident response retainer in place? What about a good cyber crisis communications retainer? These are not things you want to be scrambling for in the midst of a disaster, but having them in place in advance can help you respond quickly and effectively. A technical incident response firm can support and validate containment and eradication of the threat, while a crisis communications firm can help you coordinate both internal and external messaging. Communication can often make or break an incident response, so don’t overlook that element.

2. Do you have well-defined cyber incident response plans and resiliency playbooks?

A cyber incident response plan deals primarily with the security team’s categorization, notification, and escalation of the technical incident, but a strong cyber resilience playbook details the various resources and workstreams that need to be activated for a broad, enterprise-level response effort. Key stakeholders and decision-makers across internal and external counsel, public relations, disaster recovery, crisis communications, business continuity, security, and executive leadership should be involved in this playbook — who is going to lead which workstream and what will the decision-making process look like?

There may be decision-making thresholds you can pre-define. Ransomware payment is a good example, particularly with the continuous rise in ransomware attacks. Can you align in advance on when your organization would consider simply paying the ransom? Maybe that threshold is a certain amount of time without key business functions, or maybe it’s a dollar amount. Aligning on those decisions in advance can save significant time.

This is also a good opportunity to align on the decision-making resources that might be needed, such as out-of-band communications or deciding whether certain incidents need to be handled in person. Do you need corporate apartments that can be sourced through procurement? Are there other external relationships that need to be established? Having these discussions early can ease coordination across the whole enterprise.

3. Are you testing your playbooks and third-party firms?

You can’t just put policies and procedures in place. You need to test them. And that doesn’t just apply to internal parties — you can bring your retained firms in to conduct tabletop exercises as well.

For an incident response retainer, you might have them lead the security operations center (SOC) in a technical tabletop exercise. This tests the coordination between the SOC and the incident response firm to see how well they know the relevant procedures in the incident response plan and whether they can communicate effectively. For a crisis communications firm, try having them lead a management-level tabletop exercise, since the firm would be spearheading external communications and ensuring that everyone is aligned on the messaging. It can be helpful to work through that messaging in an executive tabletop.

Of course, these tabletop exercises can also be combined. The incident response firm and the crisis communications firm can be tested with a mock incident that escalates from the SOC all the way up to an enterprise-level concern. This can help gauge their response capabilities as an incident becomes more serious, as well as their ability to effectively communicate that response.

4. Do you have a strong grasp of your most critical business processes?

Maybe more to the point, do you understand the critical path for those business processes? That means the third-party applications, the underlying infrastructure, the data center locations, and other key factors that go towards producing those processes. Do you have backup processing methods? Do you have a manual process method that you can use in a pinch? Do you have offline contact information for your third-party vendors so you can quickly and easily get ahold of them in the event that your data is locked up?

These are all critical questions that organizations need to be able to answer in the event of an emergency. Understanding that critical path can help you know who to call and which business process needs to be activated during an incident. The last thing you want is to discover that the contact information for all of your vendors is stored on a server currently encrypted by ransomware attackers.

5. Do you have a disaster recovery plan in place?

Do you know exactly what you need to recover, and in what sequence, across both data and infrastructure? Do you know the recovery point objective? Do you know the recovery time objective for that infrastructure and data? Depending on the process, that time objective might be 30 minutes, or it might be a week. Knowing the answer is essential not just for setting expectations, but for planning your recovery effectively.

Business continuity and disaster recovery programs don’t just need to be in place, they need to be evaluated with failover tests. For example, if your system has regional redundancies, you might conduct a test in which one region fails and the system immediately falls over to another region. The security and disaster recovery teams can then practice recovering the data for the region that “failed.” This serves the dual purpose of both making sure the failover is working and ensuring recovery systems are operating as planned.

Hope for the Best, Plan for the Worst

No one wants to believe they will be the victim of a cyber-triggered business disaster, but it’s always better to have a plan and not need it than to need a plan and not have it. But cyber resiliency is not something that can be “achieved” and forgotten. It needs to be maintained as the organization changes and scales over time. By keeping these concerns top of mind and conducting regular testing and tabletop exercises, you can help ensure that your resiliency remains strong even as the organization evolves..[…] Read more »….

 

API Security 101: The Ultimate Guide

APIs, application programming interfaces, are driving forces in modern application development because they enable applications and services to communicate with each other. APIs provide a variety of functions that enable developers to more easily build applications that can share and extract data.

Companies are rapidly adopting APIs to improve platform integration, connectivity, and efficiency and to enable digital innovation projects. Research shows that the average number of APIs per company increased by 221% in 2021.

Unfortunately, over the last few years, API attacks have increased massively, and security concerns continue to impede innovations.

What’s worse, according to Gartner, API attacks will keep growing. They’ve already emerged as the most common type of attack in 2022. Therefore, it’s important to adopt security measures that will keep your APIs safe.

What is an API attack?

An API attack is malicious usage or manipulation of an API. In API attacks, cybercriminals look for business logic gaps they can exploit to access personal data, take over accounts, or perform other fraudulent activities.

What Is API security and why is it important?

API security is a set of strategies and procedures aimed at protecting an organization against API vulnerabilities and attacks.

APIs process and transfer sensitive data and other critical organizational assets. They are now a primary target for attackers, hence the recent increase in the number of API attacks.

That’s why an effective API security strategy is a critical part of the application development lifecycle. It is the only way organizations running APIs can ensure those data conduits are secure and trustworthy.

A secure API improves the integrity of data by ensuring the content is not tampered with and is available only to users, applications, and servers that have proper authentication and authorization to access it. API security techniques also help mitigate API vulnerabilities that attackers can exploit.

When is the API vulnerable?

Your API is vulnerable if:

  • The API host’s purpose is unclear, and you can’t tell which version is running, what data is collected and processed, or who should have access (for example, the general public, internal employees, and partners)
  • There is no documentation, or the documentation that exists is outdated.
  • Older API versions are still in use, and they haven’t been patched.
  • Integrated services inventory is either missing or outdated.
  • The API contains a business logic flaw that lets bad actors access accounts or data they shouldn’t be able to reach.

What are some common API attacks?

API attacks differ markedly from other cyberattacks and are harder to spot, which is why you need to understand the most common API attacks, how they work and how to prevent them.

BOLA attack

BOLA (broken object level authorization), the most common form of API attack, happens when a bad actor changes parameters across a sequence of API calls to request data that person is not authorized to have. For example, nefarious users might authenticate with one UserID and then enumerate UserIDs in subsequent API calls to pull back account information they’re not entitled to access.

Preventive measures:

Look for API tracking that can retain information over time about what different users in the system are doing. BOLA attacks can be very “low and slow,” drawn out over days or weeks, so you need API tracking that can store large amounts of data and apply AI to detect attack patterns in near real time.
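As a rough illustration of that kind of tracking, the sketch below flags a user whose calls touch an unusually large number of other users’ objects over time. The fixed threshold, the in-memory store and the simplified ownership check are all assumptions; a production system would persist history and apply statistical baselining rather than a hard cutoff.

```python
# Simplified "low and slow" BOLA detection: flag an authenticated user whose
# calls accumulate accesses to many objects they don't own. Threshold and
# in-memory storage are illustrative assumptions.
from collections import defaultdict

ENUMERATION_THRESHOLD = 50  # distinct foreign object IDs before alerting

accessed_ids = defaultdict(set)  # user_id -> set of foreign object IDs seen

def record_api_call(user_id, requested_object_id):
    """Record one API call; return True once the access pattern looks
    like enumeration. Here "foreign" is naively modeled as an object ID
    that differs from the caller's own ID; a real check would consult
    the data model's ownership records."""
    if requested_object_id != user_id:
        accessed_ids[user_id].add(requested_object_id)
    return len(accessed_ids[user_id]) >= ENUMERATION_THRESHOLD
```

Because the set persists across calls, a slow enumeration spread over days still trips the same threshold as a fast one.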

Improper assets management attack

This type of attack happens when there are undocumented APIs running (“shadow APIs”) or older APIs that were developed, used, and then forgotten without being removed or replaced with newer, more secure versions (“zombie APIs”). Undocumented APIs present a risk because they run outside the processes and tooling meant to manage APIs, such as API gateways. You can’t protect what you don’t know about, so your inventory needs to be complete, even when developers have left something undocumented. Older APIs are often unpatched and rely on outdated libraries; they are also frequently undocumented and can remain undetected for a long time.

Preventive measures:

Set up a proper inventory management system that includes all the API endpoints, their versions, uses, and the environment and networks they are reachable on.

Always check that each API needs to be in production in the first place, that it isn’t an outdated version, that no sensitive data is exposed, and that data flows as expected throughout the application.
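One possible shape for such an inventory, with illustrative fields rather than any standard schema:

```python
# Minimal sketch of an API endpoint inventory; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ApiEndpoint:
    path: str
    version: str
    owner: str
    environment: str                    # e.g. "production", "staging"
    networks: list = field(default_factory=list)  # where it is reachable
    documented: bool = False
    deprecated: bool = False

def stale_endpoints(inventory):
    """Endpoints that should be retired or re-reviewed: anything
    deprecated (potential zombie) or undocumented (potential shadow)."""
    return [e for e in inventory if e.deprecated or not e.documented]
```

Running `stale_endpoints` over the full inventory gives a review queue of likely shadow and zombie APIs.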

Insufficient logging & monitoring

API logs contain personal information that attackers can exploit. Logging and monitoring functions provide security teams with raw data to establish the usual user behavior patterns. When an attack happens, the threat can be easily detected by identifying unusual patterns.

Insufficient monitoring and logging results in untraceable user behavior patterns, thereby allowing threat actors to compromise the system and stay undetected for a long time.

Preventive measures:

Always have a consistent logging and monitoring plan so you have enough data to use as a baseline for normal behavior. That way you can quickly detect attacks and respond to incidents in real time. Also, ensure that any data that goes into the logs is monitored and sanitized.
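A toy version of that baseline idea, assuming request counts per time interval as the only feature and an arbitrary z-score threshold; real deployments would use far richer behavioral features.

```python
# Hedged sketch: build a baseline from logged request counts, then flag
# intervals that deviate sharply. The z-score threshold is an assumption.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` standard
    deviations above the historical request-count baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold
```

A sudden burst of 500 requests against a baseline hovering around 100 is flagged, while ordinary fluctuation is not.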

What are API security best practices?

Here’s a list of API best practices to help you improve your API security strategy:

  • Train employees and security teams on the nature of API attacks. Lack of knowledge and expertise is the biggest obstacle in API security. Your security team needs to understand how cybercriminals propagate API attacks and different call/response pairs so they can better harden APIs. Use the OWASP API Top 10 list as a starting point for your education efforts.
  • Adopt an effective API security strategy throughout the lifecycle of the APIs.
  • Turn on logging and monitoring and use the data to detect patterns of malicious activities and stop them in real-time.
  • Reduce the risk of sensitive data being exposed. Ensure that APIs return only as much data as is required to complete their task. In addition, implement data filtering, data access limits, and monitoring.
  • Document and manage your APIs so you’re aware of all the existing APIs in your organization and how they are built and integrated to secure and manage them effectively.
  • Have a retirement plan for old APIs and remove or patch those that are no longer in use.
  • Invest in software specifically designed for detecting API call manipulations. Traditional solutions cannot detect the subtle probing associated with API reconnaissance and attack traffic….[…] Read more »… 
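The data-filtering point in the list above can be illustrated with a simple allow-list projection; the field names here are hypothetical:

```python
# Minimal sketch of response field filtering: return only an explicit
# allow-list of fields so internal attributes never leak, even if the
# underlying record grows. Field names are illustrative.
PUBLIC_USER_FIELDS = {"id", "display_name", "avatar_url"}

def serialize_user(record, allowed=PUBLIC_USER_FIELDS):
    """Project a full database record down to its public representation."""
    return {k: v for k, v in record.items() if k in allowed}
```

Because the filter is an allow-list rather than a deny-list, a newly added sensitive column is excluded by default instead of exposed by accident.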

 

Solving the identity crisis in cybersecurity

The evolving threat landscape is making identity protection within the enterprise a top priority. According to the 2022 CrowdStrike Global Threat Report, nearly 80% of cyberattacks leverage identity-based attacks to compromise legitimate credentials and use techniques like lateral movement to quickly evade detection. The reality is that identity-based attacks are difficult to detect, especially as the attack surface continues to increase for many organizations. 

Every business needs to authenticate every identity and authorize each request to maintain a strong security posture. It sounds simple, but the truth is this is still a pain point for many organizations. However, it doesn’t need to be.

 

Why identity protection must be an urgent priority for business leaders

We have seen adversaries become more adept at obtaining and abusing stolen credentials to gain a foothold in an organization. Identity has become the new perimeter, as attackers are increasingly targeting credentials to infiltrate an organization. Unfortunately, organizations continue to be compromised by identity-based attacks and lack the awareness necessary to prevent it until it’s too late.

Businesses are coming around to the fact that any user — whether it be an IT administrator, employee, remote worker, third-party vendor or customer — can be compromised and provide an attack path for adversaries. This means that organizations must authenticate every identity and authorize each request to maintain security and prevent a wide range of cyber threats, including ransomware and supply chain attacks. Otherwise, the damage is costly. According to a 2021 report, the most common initial attack vector — compromised credentials — was responsible for 20% of breaches at an average cost of $4.37 million.

 

How zero trust helps contain adversaries 

Identity protection cannot occur in a vacuum — it’s just one aspect of an effective security strategy and works best alongside a zero trust framework. To realize the benefits of identity protection paired with zero trust, we must first acknowledge that zero trust has become a very broad and overused term. With vendors of all shapes and sizes claiming to have zero trust solutions, there is a lot of confusion about what it is and what it isn’t. 

Zero trust requires all users, whether in or outside the organization’s network, to be authenticated, authorized and continuously validated before being granted or maintaining access to applications and data. Simply put, there is no such thing as a trusted source in a zero trust model. Just because a user is authenticated to access a certain level or area of a network does not necessarily automatically grant them access to every level and area. Each movement is monitored, and each access point and access request is analyzed. Always. This is why organizations with the strongest security defenses utilize an identity protection solution in conjunction with a zero trust framework. In fact, a 2021 survey found that 97% of identity and security professionals agree that identity is a foundational component of a zero trust security model.

 

It’s time to take identity protection seriously — here’s how 

As organizations have adopted cloud-based technologies over the past two years to enable people to work from anywhere, they have created an identity crisis that needs to be solved. This is evidenced by a 2021 report, which found that a staggering 61% of breaches in the first half of 2021 involved credential data. 

A comprehensive identity protection solution should deliver a host of benefits and enhanced capabilities to the organization. This includes the ability to:

  • Stop modern attacks like ransomware or supply chain attacks
  • Pass red team/audit testing
  • Improve the visibility of credentials in a hybrid environment (including identities, privileged users and service accounts)
  • Enhance lateral movement detection and defense
  • Extend multi-factor authentication (MFA) to legacy and unmanaged systems
  • Strengthen the security of privileged users 
  • Protect identities from account takeover
  • Detect attack tools..[…] Read more »….

 

 

Building a risk management program

In today’s world, it’s important for every organization to have some form of vulnerability assessment and risk management program. While this can seem daunting, by focusing on some key concepts it’s possible for an organization of any size to develop a strong security posture with a firm grasp of its risk profile. We’ll discuss in this article how to build the technical foundation for a comprehensive security program and, crucially, the tools and processes necessary to develop that foundation into a mature vulnerability assessment and risk management program. 

 

Build the Foundation

It’s impossible to implement effective security, let alone manage risk, without a clear understanding of the environment. That means, essentially, taking an inventory of hosts, applications, resources, and users.

With the current computing environment, that combination is apt to include assets that reside in the cloud as well as those hosted in an organization’s own data center. Organizations have little control over the devices of remote employees who access data on a bring-your-own-device (BYOD) basis, adding another layer of risk. There are also the software-as-a-service (SaaS) applications the organization uses. It’s essential to know what data is kept where. With SaaS in particular, teams must have a clear contractual understanding of who is responsible for the security of the data, so they can allocate resources accordingly. 

 

Manage the puzzle

Once the environment is scoped, managing it relies on three main components: visibility, control, and timely maintenance. 

Whether it is software vulnerabilities, vulnerable configurations, obsolete packages, or a range of other issues, a vulnerability scanner will show the security operations team what’s at risk and let them prioritize their response. That said, scanners, external or internal, are not the only option. At the high end, a penetration testing team can probe the environment to a level that vulnerability scanners can’t match. At the low end, establishing a process to monitor public vulnerability feeds and verify whether newly disclosed issues affect the environment can provide a baseline. It may not give as deep a picture as scanning or penetration testing, but the modest cost in SecOps time is often well worth it.
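That feed-monitoring baseline might look like the sketch below, which matches an installed-package inventory against a simplified, illustrative advisory format rather than any real feed’s schema.

```python
# Sketch of matching an installed-package inventory against advisories
# pulled from a public vulnerability feed. The advisory dict format is a
# simplified assumption, not any real feed's schema.

def affected_packages(inventory, advisories):
    """Return (package, advisory_id) pairs where an installed version
    appears in an advisory's list of vulnerable versions.

    inventory:  {package_name: installed_version}
    advisories: [{"id": ..., "package": ..., "vulnerable_versions": [...]}]
    """
    hits = []
    for advisory in advisories:
        pkg = advisory["package"]
        if inventory.get(pkg) in advisory["vulnerable_versions"]:
            hits.append((pkg, advisory["id"]))
    return hits
```

Each new batch of advisories is reduced to the handful that actually touch the environment, which is what the SecOps team triages.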

Protecting users is a major point that doesn’t always get the attention it deserves. Ultimately, that starts with user education and establishing a culture that reinforces a secure environment. Users are often the threat surface that presents the greatest risk, but with proper education and attitude they can become an effective layer of a defense-in-depth strategy.

Another important step in protecting users is adding multi-factor authentication (MFA). In particular, methods that require a physical or virtual token tend to be more secure than those that rely on text messaging or email. While MFA adds a minor annoyance to a user’s login, it can drastically reduce the threat posed by compromised accounts and lower the organization’s overall risk profile.
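Token-based MFA of the kind mentioned above commonly uses TOTP (RFC 6238), where the user’s device and the server each derive a short-lived code from a shared secret and the current time. The following is a minimal standard-library sketch for illustration, not a production implementation.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Derive the current time-based one-time code from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole intervals since the Unix epoch acts as the counter.
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Both sides compute the same code independently, so nothing secret crosses the wire at login time, which is what makes token-based MFA harder to phish than SMS or email codes.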

User endpoints are another area of concern. While the default endpoint protection included in the main desktop operating systems (Windows and macOS) is quite effective, it is also the defense that every malware writer in the world tests against. That makes investment in an additional layer of endpoint protection worthwhile. 

The last major piece is a patch management program. This requires base processes that manage not only the patching itself but also the assets being patched. Fortunately, multiple tools are available to enhance and automate the process, and a regular patch cycle can have vulnerabilities fixed before exploits are even developed for them.

Ideally, the patch management process includes a change management system that’s able to smoothly accommodate emergency situations where a security hotfix must go in outside the normal window.

Pulling it all together

With the foundation laid, the final step involves communication. Simply assessing risk is not useful if there is no reliable way to organize people to act on it.

Bridging the information security teams, who are responsible for recognizing, analyzing, and mitigating threats to the organization, and the information technology teams, who are responsible for maintaining the organization’s infrastructure, is vital. Whether an organization achieves this with a process or a tool is up to them. But in either case, communication is vital, along with an ability to react across teams. This applies to non-technical teams as well — if folks are receiving phishing emails, security operations should know. 

These mechanisms need to be in place from the executive offices down to the sales or production floor, as reducing risk really is everyone’s responsibility. Moreover, the asset and patch management system needs a mechanism to prioritize patches based on business risk. Unless the IT team has the resources to deploy every single patch that comes their way, they will have to prioritize, and that prioritization needs to be based on the threat to business rather than arbitrary severity scores.
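That risk-based prioritization can be as simple as weighting raw severity by asset criticality, as in this sketch; the scoring scheme itself is an illustrative assumption.

```python
# Sketch of risk-based patch prioritization: weight severity by the business
# criticality of the affected asset instead of sorting on CVSS alone.

def prioritize(patches):
    """Sort patches by severity x asset criticality, highest first.

    Each patch: {"id": str, "cvss": float (0-10),
                 "asset_criticality": int (1 low .. 5 business-critical)}
    """
    return sorted(patches,
                  key=lambda p: p["cvss"] * p["asset_criticality"],
                  reverse=True)
```

Under this weighting, a moderate vulnerability on a business-critical system can legitimately outrank a critical-scored one on a throwaway host.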

An Investment

There is no “one size fits all” solution for risk assessment and management. For example, for a restaurant that doesn’t accept reservations or orders online, a relatively insecure website doesn’t present much business risk. While it may be technically vulnerable, they are not at risk of losing valuable data...[…] Read more »….

 

Top 9 effective vulnerability management tips and tricks

The world is currently in a frenetic flux. With rising geopolitical tensions, an ever-present rise in cybercrime and continuous technological evolution, it can be difficult for security teams to maintain a straight bearing on what’s key to keeping their organization secure.

With the advent of the “Log4shell,” aka Log4J vulnerability, sound vulnerability management practices have jumped to the top of the list of skills needed to maintain an ideal state of cybersecurity. The impacts due to Log4j are expected to be fully realized throughout 2022.

As of 2021, missing security updates are a top-three security concern for organizations of all sizes — approximately one in five network-level vulnerabilities are associated with unpatched software.

Not only are attacks on the rise, but their financial impacts are as well. According to Cybersecurity Ventures, costs related to cybercrime are expected to balloon 15% year over year into 2025, totaling $11 trillion.

Vulnerability management best practices

Whether you’re performing vulnerability management for the first time or revisiting your current practices to find new perspectives or process efficiencies, there are several useful strategies for reducing vulnerabilities.

Here are the top nine (We decided to just stop there!) tips and tricks for effective vulnerability management at your organization.

1. Vulnerability remediation is a long game

Extreme patience is required when it comes to vulnerability remediation. Your initial review of vulnerability counts, categories, and recommended remediations may instill a false sense of confidence: You may expect a large reduction after only a few meetings and executing a few patch activities. This is far from how reality will unfold.

Consider these factors as you begin initial vulnerability management efforts:

  • Take small steps: Incremental progress in reducing total vulnerabilities by severity should be the initial goal, not an unrealistic expectation of total elimination. The technology estate should ideally accumulate new vulnerabilities at a slightly lower pace versus what is remediated as the months and quarters roll on.
  • Patience is a virtue: Adopting a patient mindset is unequivocally necessary to avoid mental defeat, burnout and complacency. Remediation progress will be slow but must sustain a methodical approach.
  • Learn from challenges: As roadblocks are encountered, these serve as opportunities to approach alternate remediation strategies. Plan on what can be solved today or in the current week.

Avoid focusing on all the major problems preventing remediation and think with a growth mindset to overcome these challenges.

2. Cross-team collaboration is required

Achieving a large vulnerability reduction requires effective collaboration across technology teams. The high vulnerability counts across the IT estate likely exist due to several cultural and operational factors within the organization that pre-date remediation efforts, including:

  • Insufficient staff to maintain effective vulnerability management processes
  • Legacy systems that cannot be patched because they run on very expensive hardware — or provide a specific function that is cost-prohibitive to replace
  • Ineffective patching solutions that do not or cannot apply necessary updates completely (e.g., the solution can patch web browsers but not Java or Adobe)
  • Misguided beliefs that specialized classes of equipment cannot be patched or rebooted; as a result, they are not revisited for extended periods

Part of your remediation efforts should focus on addressing systemic issues that have historically prevented effective vulnerability remediation while gaining support within or across the business to begin addressing existing vulnerabilities.

Determine how the various teams in your organization can serve as a force multiplier. For example, can the IT support desk or other technical teams assist directly in applying patches or decommissioning legacy devices? Can your vendors assist in applying patches or fine-tuning configurations of difficult-to-patch equipment?

These groups can assist in overall reduction while further plans are developed to address additional vulnerabilities.

3. Start by focusing on low-hanging fruit

Focus your initial efforts on the low-hanging fruit when building a plan to address vulnerabilities. Missing browser updates and updates to third-party software such as Java or Adobe products are likely to yield the largest initial reductions.

If software like Google Chrome or Firefox is missing the previous two years of security updates, it likely signifies the software is not being used. Some confirmation may be required, but the likely response is to remove the software rather than apply patches.

To prevent a recurrence, there will likely be a need to revisit workstation and server imaging processes to determine if legacy, unapproved or unnecessary software is being installed as new devices are provisioned.
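As a sketch of this triage, a script could compare each installed package’s last security-update date against a staleness threshold; the inventory layout here is an illustrative assumption:

```python
# Sketch: flag software whose last applied security update is more than
# two years old as removal candidates rather than patch candidates.
# Inventory field names are illustrative assumptions.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=2 * 365)

def removal_candidates(inventory, today=None):
    """Return (host, software) pairs likely unused: 2+ years behind on updates."""
    today = today or date.today()
    return [
        (item["host"], item["software"])
        for item in inventory
        if today - item["last_security_update"] > STALE_AFTER
    ]

inventory = [
    {"host": "hr-ws-12",  "software": "Firefox", "last_security_update": date(2019, 5, 1)},
    {"host": "fin-ws-03", "software": "Chrome",  "last_security_update": date(2021, 11, 2)},
]

print(removal_candidates(inventory, today=date(2022, 1, 1)))
# [('hr-ws-12', 'Firefox')] -> confirm with the user, then remove rather than patch
```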

4. Leverage your end-users when needed

Don’t forget to leverage your end-users as a possible remediation vector. A single email you spend 30 minutes carefully crafting, with instructions on how users can self-update difficult-to-patch third-party applications, can save you many hours of time and effort compared to working through technical teams, where the end result may be fewer remediated vulnerabilities.

However, end-user involvement should be an infrequent and short-term approach as the underlying problems outlined in cross-team collaboration (tip #2) are addressed.

This also provides an indirect approach to increasing security awareness via end-user engagement. Users are more likely to prioritize security when they are directly involved in the process.

5. Be prepared to get your hands dirty

Many of the vulnerabilities that exist will require a manual fix, including but not limited to:

  • Unquoted service paths in program directories
  • Weak or no passwords on periphery devices like printers
  • Updating SNMP community strings
  • Windows registry hardening values that are not set

During project downtime, or when the security function is between remediation planning cycles, focus on providing direct assistance where possible. A direct intervention provides an opportunity to learn more about the business and the people operating the technology in the environment. It also provides direct value when an automated process fails to remediate or cannot remediate identified vulnerabilities.

This may also be required when already stressed IT teams cannot assist in remediation activity.
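The first manual fix listed above, unquoted service paths, lends itself to a quick scripted check. This is a simplified illustration, not a full parser for every Windows ImagePath format:

```python
# Sketch: flag the classic "unquoted service path" weakness. A Windows
# service whose binary path contains spaces but is not wrapped in quotes
# can be hijacked by planting e.g. C:\Program.exe ahead of it.

def has_unquoted_path_issue(image_path: str) -> bool:
    path = image_path.strip()
    if path.startswith('"'):
        return False  # properly quoted paths are safe
    # Trim any service arguments that follow the executable name.
    end = path.lower().find(".exe")
    exe_path = path[: end + 4] if end != -1 else path
    return " " in exe_path  # a space in an unquoted path is the red flag

print(has_unquoted_path_issue(r"C:\Program Files\App\svc.exe"))      # True: needs quoting
print(has_unquoted_path_issue('"C:\\Program Files\\App\\svc.exe"'))  # False
print(has_unquoted_path_issue(r"C:\Windows\svc.exe"))                # False: no spaces
```

A real sweep would read each service’s ImagePath from the registry or a scan export and run every entry through a check like this.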

6. Targeted patch applications can be effective for specific products

Some vulnerabilities may require the application of a specific update to address large numbers of findings that automatic updates continuously fail to address. This is often seen with Microsoft security updates that failed to apply completely or correctly in scattered months across several years and devices.

Search for and test the application of cumulative security updates. One targeted patch update may remediate dozens of vulnerabilities.

Once tested, use automated patch application tools like SCCM or remote management and monitoring (RMM) tools to stage and deploy the specific cumulative update.

7. Limit scan scope and schedules 

Vulnerability management seeks to identify and remediate vulnerabilities, not cause production downtime. Vulnerability scanning tools can unintentionally disrupt information systems and networks via the probing traffic generated towards organization devices or equipment.

If an organization is onboarding a new scanning tool or spinning up a new vulnerability management practice, it is best to start by scanning a small network subset that represents the asset types deployed across the network.

Over time, scanning can be rolled out to larger portions of the network as successful scanning activity on a smaller scale is consistently demonstrated.
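One way to build that small, representative subset is to pick a single host per asset type; the hostnames and asset labels here are assumptions for illustration:

```python
# Sketch: choose one representative host per asset type as the pilot
# scan scope before rolling scanning out network-wide.

def pilot_scan_targets(assets):
    """Return one host per asset type (first seen) as the pilot scan scope."""
    seen = {}
    for host, asset_type in assets:
        seen.setdefault(asset_type, host)  # keep the first host of each type
    return sorted(seen.values())

assets = [
    ("ws-001", "workstation"),
    ("ws-002", "workstation"),
    ("srv-db-01", "database"),
    ("prn-04", "printer"),
]

print(pilot_scan_targets(assets))  # ['prn-04', 'srv-db-01', 'ws-001']
```

If scanning these three hosts causes no disruption, the same scan profile can be extended to the rest of each asset class with more confidence.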

8. Leverage analytics to focus remediation activity 

The native reporting provided by vulnerability scanning tools typically lacks the analytics needed to focus value-added vulnerability reduction. Consider implementing tools like Power BI, which can help the organization focus on the following:

  • New vulnerabilities by type or category
  • Net new vulnerabilities
  • Risk severity ratings for groups of or individual vulnerabilities
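Even before wiring up a BI tool like Power BI, the metrics above can be derived directly from two scan exports; the record layout below is an illustrative assumption:

```python
# Sketch: compute net-new and remediated findings plus severity counts
# from two scan exports, modeled as sets of (host, CVE) pairs.
from collections import Counter

last_scan = {("web-01", "CVE-2021-44228"), ("web-01", "CVE-2021-34527")}
this_scan = {("web-01", "CVE-2021-44228"), ("db-01", "CVE-2021-26855")}

severity = {"CVE-2021-44228": "critical", "CVE-2021-26855": "critical",
            "CVE-2021-34527": "high"}

net_new = this_scan - last_scan      # newly observed host/vuln pairs
remediated = last_scan - this_scan   # pairs no longer reported
by_severity = Counter(severity[cve] for _, cve in this_scan)

print(sorted(net_new))     # [('db-01', 'CVE-2021-26855')]
print(sorted(remediated))  # [('web-01', 'CVE-2021-34527')]
print(dict(by_severity))   # {'critical': 2}
```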

9. Avoid overlooking compliance pitfalls or licensing issues

Ensure you fully understand any licensing requirements in relation to enterprise usage of third-party software and make plans to stay compliant.

As software evolves, its creators may look to harness new revenue streams, which have real-world impacts on vulnerability management efforts. A classic example is Java, which is highly prevalent in organizations across the globe. As of 2019, Oracle has required a paid license subscription to receive Java security updates.

Should a third party decide to perform an onsite audit of the license usage, the company may find itself tackling a lawsuit on top of managing third-party software security updates…

 

Key Steps for Public Sector Agencies To Defend Against Ransomware Attacks

Over the past two years, the pandemic has fundamentally altered the business world and the modern work environment, leaving organizations scrambling to maintain productivity and keep operational efficiency intact while securing the flow of data across different networks (home and office). While this scenario has undoubtedly created new problems for businesses in terms of keeping sensitive data and IP safe, the “WFH shift” has opened up even greater risks and threat vectors for the US public sector.

Federal, state, local governments, education, healthcare, finance, and nonprofit organizations are all facing privacy and cybersecurity challenges the likes of which they’ve never seen before. Since March 2020, there’s been an astounding increase in the number of cyberattacks, high-profile ransomware incidents, and government security shortfalls. There are many more that go undetected or unreported. This is in part due to employees now accessing their computers and organization resources/applications from everywhere but the office, which is opening up new security threats for CISOs and IT teams.

Cyberthreats are expected to grow exponentially this year, particularly as the world faces geopolitical upheaval and international cyberwarfare. Whether it’s a smaller municipality or a local school system, no target is too small these days, and everyone is under attack due to bad actors now having more access to sophisticated automation tools.

The US public sector must be prepared to meet these new challenges and focus on shoring up vulnerable and critical technology infrastructures while implementing new cybersecurity and backup solutions that secure sensitive data.

Previous cyber protection challenges

As data volumes grow and methods of access change, safeguarding US public sector data, applications, and systems involves addressing complex and often competing considerations. Government agencies have focused on securing a perimeter around their networks; however, with a mobile workforce combined with the increase in devices, endpoints, and sophisticated threats, data is still extremely vulnerable. Hence the massive shift towards a Zero Trust model.

Today, there is an over-reliance on legacy and poorly integrated IT systems, leaving troves of hypersensitive constituent data vulnerable; government agencies have become increasingly appealing targets for cybercriminals. Many agencies still rely on outdated five-decade-old technology infrastructure and deal with a multitude of systems that need to interact with each other, which makes it even more challenging to lock down these systems. Critical infrastructure industries have more budget restraints than ever; they need flexible and affordable solutions to maintain business continuity and protect against system loss.

Protecting your organization’s data assets

The private sector, which owns and operates most US critical infrastructure, will continue being instrumental in helping government organizations (of all sizes) modernize their cyber defenses. The US continues to make strides in creating specific efforts that encourage cyber resilience and counter these emerging threats.

Agencies and US data centers must focus on solutions that comply with data protection frameworks like HIPAA, CJIS, and NIST 800-171 first, and then develop several key pillars for data protection built around the Zero Trust concept. This includes safety (ensuring organizational data, applications, and systems are always available), accessibility (allowing employees to access critical data anytime and anywhere), and privacy and authenticity (controlling who has access to your organization’s digital assets).

New cloud-based data backup, protection, and cybersecurity solutions that are compliant with the appropriate frameworks and certified will enable agencies to maximize operational uptime, reduce the threat of ransomware, and ensure the highest levels of data security possible across all public sector computing environments.

Conclusion

First and foremost, the public sector and US data centers must prioritize using compliant and certified services to ensure that specific criteria are met…

 

Is a Merger Between Information Security and Data Governance Imminent?

As with any merger, it is always difficult to predict an outcome until the final deal papers are signed, and press releases hit the wires.  However, there are clear indications that a tie-up between these two is essential, and we will all be the better for it.  Data Governance has historically focused on the use of data in a business, legal and compliance context rather than how it should be protected, while the opposite is true for Information Security.

The idea of interweaving Data Governance and Information Security is not entirely new.  Gartner discussed this in their Data Security Governance Model, EDRM integrated multiple stakeholders including Information Security, Privacy, Legal and Risk into an overarching Unified Data Governance model, and an integrated approach to Governance, Risk, and Compliance has long been an aspiration in the eGRC market.  Organizations that have more mature programs are likely to have some level of integration between these functions already, but many continue to struggle with the idea and often treat them as separate, siloed programs.

As programs go, Information Security is ahead of Data Governance for its level of attention in the Boardroom; brought about primarily by newsworthy events that demonstrated what security and privacy practitioners had been warning about for a long time.  These critical risks to the public and private sectors inspired significant, sweeping frameworks and industry standards (PCI, NIST, ISO, ISACA, SOC 2) and regulatory legislation (HIPAA, GDPR, NYDFS), and gave Chief Information Security Officers (CISOs) a platform for change.

By contrast, data governance has been more fragmented in its definition, organization, development, and funding.  Many organizations accept the value of data governance, particularly as a proactive means to minimize risk, while enabling expansive use of information required in today’s business environment.  However, enterprises still struggle to balance information risk and value, and to establish the right enablers and controls.

Drivers

Risks and affirmative obligations associated with information are the primary drivers for the intersection of data governance and information security.  The reason that information security is so critical is that the loss (through exfiltration, or loss of access due to ransomware) of certain types of data carries legal and compliance consequences, along with impacting normal business operations.  And a lack of effective legal and compliance controls often leads to increased information security and privacy risk.

Additional common drivers include:

  • Volume, velocity, mobility, and sensitivity of information
  • Volume and complexity of legal, compliance, and privacy requirements
  • Hybrid technology and business environments
  • Multinational governance models and operations
  • Headline and business interruption risks

Finally, an underlying driver is the need to leverage investments in technology, practices, and personnel across an organization.  The interrelationships of so many information requirements simply demands a more coordinated approach.

Merging the models

We chose Information Risk Management to define a construct that encompasses the overarching disciplines and requirements.  First, we did so because it places the focus on information.  For example, the same piece of information that requires protection may also have retention and discovery requirements.  Second, risk management recognizes the need to balance the value and use of information from a business perspective, while also providing appropriate governance or protection.  Risk management also serves as an important means to evaluate priorities in investment, resources, and audit functions.

Information Risk Management
Figure 1: Information Risk Management

The primary objective is to integrate processes, people, and solutions into a framework that addresses common requirements, and does so “in depth” for both.  Security people, practices and technologies have long been deployed at many levels (in depth) to protect the organization.  The same has not often been the case for governance (legal, compliance, and privacy) obligations.  New practices and technologies are enablers for intersecting programs, and support alignment amongst key constituencies, including Information Security, IT, Legal, Privacy, Risk and Compliance.  Done right, this provides leverage in an organization’s human and technology investments, improves risk posture, and increases the rate and reach of new practices and solutions.

Meshing the disciplines and elements of each program is not meant as a new organizational construct; rather, it should start with a firm understanding of information requirements from key stakeholders, and from there establish synergies.  The list below, not meant to be exhaustive, provides examples of shared enabling practices and technologies:

Shared Enablers and Requirements
Figure 2: Shared Enablers and Requirements

Conclusion

Integrating data governance, information security and privacy frameworks allows an enterprise to gain leverage from areas of common investment and provides a more comprehensive enterprise risk management strategy.  By improving proactive information management, organizations increase preventative control effectiveness and decrease reliance on detection and response activities.  It also develops cross-functional capabilities across Privacy, Legal, Compliance, IT, and Information Security…

 

 

Top 15 cybersecurity predictions for 2022

Over the past several years, cybersecurity risk management has become top of mind for boards. And rightly so. Given the onslaught of ransomware attacks and data breaches that organizations experienced in recent years, board members have increasingly realized how vulnerable they are.

This year, in particular, the public was directly impacted by ransomware attacks, from gasoline shortages, to meat supply, and even worse, hospitals and patients that rely on life-saving systems. The attacks reflected the continued expansion of cyber-physical systems — all of which present new challenges for organizations and opportunities for threat actors to exploit.

There should be a shared sense of urgency about staying on top of the battle against cyberattacks. Security columnist and Vice President and Ambassador-At-Large in Cylance’s Office of Security & Trust, John McClurg, in his latest Cyber Tactics column, explained it best: “It’s up to everyone in the cybersecurity community to ensure smart, strong defenses are in place in the coming year to protect against those threats.”

As you build your strategic planning, priorities and roadmap for the year ahead, security and risk experts offer the following cybersecurity predictions for 2022.

Prediction #1: Increased Scrutiny on Software Supply Chain Security, by John Hellickson, Cyber Executive Advisor, Coalfire

“As part of the executive order to improve the nation’s cybersecurity previously mentioned, one area of focus is the need to enhance software supply chain security. There are many aspects included that most would consider industry best practice of a robust DevSecOps program, but one area that will see increased scrutiny is providing the purchaser, the government in this example, a software bill of materials. This would be a complete list of all software components leveraged within the software solution, along with where it comes from. The expectation is that everything that is used within or can affect your software, such as open source, is understood, versions tracked, scrutinized for security issues and risks, assessed for vulnerabilities, and monitored, just as you do with any in-house developed code. This will impact organizations that both consume and those that deliver software services. Considering this can be very manual and time-consuming, we could expect that Third-Party Risk Management teams will likely play a key role in developing programs to track and assess software supply chain security, especially considering they are usually the front line team who also receives inbound security questionnaires from their business partners.”

 

Prediction #2: Security at the Edge Will Become Central, by Wendy Frank, Cyber 5G Leader, Deloitte

 

“As the Internet of Things (IoT) devices proliferate, it’s key to build security into the design of new connected devices themselves, as well as the artificial intelligence (AI) and machine learning (ML) running on them (e.g., tinyML). Taking a cyber-aware approach will also be crucial as some organizations begin using 5G bandwidth, which will drive up both the number of IoT devices in the world and attack surface sizes for IoT device users and producers, as well as the myriad networks to which they connect and supply chains through which they move.”

 

Prediction #3: Boards of Directors will Drive the Need to Elevate the Chief Information Security Officer (CISO) Role, by Hellickson

 

“In 2021, there was much more media awareness and senior executive awareness about the impacts of large cyberattacks and ransomware that brought many organizations to their knees. These high-profile attacks have elevated the cybersecurity conversations in the Board room across many different industries. This has reinforced the need for CISOs to be constantly on top of current threats while maintaining an agile but robust security strategy that also enables the business to achieve revenue and growth targets. With recent surveys, we are seeing a shift in CISO reporting structures moving up the chain, out from underneath the CIO or the infrastructure team, which has been commonplace for many years, now directly to the CEO. The ability to speak fluent threat & risk management applicable to the business is table stakes for any executive with cybersecurity & board reporting responsibilities. This elevated role will require a cybersecurity program strategy that extends beyond the standard industry frameworks and IT speak, and instead demonstrate how the cybersecurity program is threat aware while being aligned to each executive team’s business objectives that demonstrates positive business and cybersecurity outcomes. More CISOs will look for executive coaches and trusted business partners to help them overcome any weaknesses in this area.”

 

Prediction #4: Increase of Nation-State Attacks and Threats, by John Bambenek, Principal Threat Researcher at Netenrich

 

“Recent years have seen cyberattacks large and small conducted by state and non-state actors alike. State actors organize and fund these operations to achieve geopolitical objectives and seek to avoid attribution wherever possible. Non-state actors, however, often seek notoriety in addition to the typical monetary rewards. Both actors are part of a larger, more nebulous ecosystem of brokers that provides information, access, and financial channels for those willing to pay. Rising geopolitical tensions, increased access to cryptocurrencies and dark money, and general instability due to the pandemic will contribute to a continued rise in cyber threats in 2022 for nearly every industry. Top-down efforts, such as sanctions by the U.S. Treasury Department, may lead to arrests but will ultimately push these groups further underground and out of reach.”

 

And, Adversaries Outside of Russia Will Cause Problems

 

Recognizing that Russia is a safe harbor for ransomware attackers, Dmitri Alperovitch, Chairman of the Silverado Policy Accelerator, said: “Adversaries in other countries, particularly North Korea, are watching this very closely. We are going to see an explosion of ransomware coming from DPRK and possibly Iran over the next 12 months.”

 

Ed Skoudis, President, SANS Technology Institute: “What’s concerning about this potential reality is that these other countries will have less practice at it, making it more likely that they will accidentally make mistakes. A little less experience, a little less finesse. I do think we are probably going to see — maybe accidentally or maybe on purpose — a significant ransomware attack that might bring down a federal government agency and its ability to execute its mission.”

 

Prediction #5: The Adoption of 5G Will Drive The Use Of Edge Computing Even Further, by Theresa Lanowitz, Head of Evangelism at AT&T Cybersecurity

 

“While in previous years, information security was the focus and CISOs were the norm, we’re moving to a new cybersecurity world. In this era, the role of the CISO expands to a CSO (Chief Security Officer) with the advent of 5G networks and edge computing.

The edge is in many locations — a smart city, a farm, a car, a home, an operating room, a wearable, or a medical device implanted in the body. We are seeing a new generation of computing with new networks, new architectures, new use cases, new applications/applets, and of course, new security requirements and risks.

While 5G adoption accelerated in 2021, in 2022, we will see 5G go from new technology to a business enabler. While 5G will impact new ecosystems, devices, applications, and use cases ranging from automatic mobile device charging to streaming, it will also benefit from the adoption of edge computing due to the convenience it brings. We’re moving away from the traditional information security approach to securing edge computing. With this shift to the edge, we will see more data from more devices, which will lead to the need for stronger data security.”

 

Prediction #6: Continued Rise in Ransomware, by Lanowitz

 

“The year 2021 was the year the adversary refined their business model. With the shift to hybrid work, we have witnessed an increase in security vulnerabilities leading to unique attacks on networks and applications. In 2022, ransomware will continue to be a significant threat. Ransomware attacks are more understood and more real as a result of the attacks executed in 2021. Ransomware gangs have refined their business models through the use of Ransomware as a Service and are more aggressive in negotiations by doubling down with distributed denial-of-service (DDoS) attacks. The further convergence of IT and Operational Technology (OT) may cause more security issues and lead to a rise in ransomware attacks if proper cybersecurity hygiene isn’t followed.

While many employees are bringing their cyber skills and learnings from the workplace into their home environment, in 2022, we will see more cyber hygiene education. This awareness and education will help instill good habits and generate further awareness of what people should and shouldn’t click on, download, or explore.”

 

Prediction #6: How the Cyber Workforce Will Continue to be Revolutionized Among Ongoing Shortage of Employees, by Jon Check, Senior Director Of Cyber Protection Solutions at Raytheon Intelligence & Space

 

“Moving into 2022, the cybersecurity industry will continue to be impacted by an extreme shortage of employees. With that said, there will be unique advantages when facing the current so-called ‘Great Resignation’ that is affecting the entire workforce as a whole. As the industry continues to advocate for hiring individuals outside of the cyber industry, there is a growing number of individuals looking to leave their current jobs for new challenges and opportunities to expand their skills and potentially have the choice to work from anywhere. While these individuals will still need to be trained, there is extreme value in considering those who may not have the most perfect resume for the cyber jobs we’re hiring for, but may have a unique point of view on solving the next cyber challenge. This expansion will, of course, increase the importance of a positive work culture as such candidates will have a lot of choices of the direction they take within the cyber workforce — a workforce that is already competing against the same pool of talent. With that said, we will never be able to hire all the cyber people we need, so in 2022, there will be a heavier reliance on automation to help fulfill those positions that continue to remain vacant.”

 

Prediction #7: Expect Heightened Security around the 2022 Election Cycle, by Jadee Hanson CIO and CISO of Code42

 

“With multiple contentious and high-profile midterm elections coming up in 2022, cybersecurity will be a top priority for local and state governments. While security protections were in place to protect the 2020 election, publicized conversations surrounding the uncertainty of its security will facilitate heightened awareness around every aspect of voting next year.”

 

Prediction #8: A Shift to Zero Trust, by Brent Johnson, CISO at Bluefin

 

“As the office workspace model continues to shift to a more hybrid and full-time remote architecture, the traditional network design and implicit trust granted to users or devices based on network or system location are becoming a thing of the past. While the security industry had already begun its shift to the more secure zero-trust model (where anything and everything must be verified before connecting to systems and resources), the increased use of mobile devices, bring your own device (BYOD), and cloud service providers has accelerated this move. Enterprises can no longer rely on a specific device or location to grant access.

Encryption technology is obviously used as part of verifying identity within the zero-trust model, and another important aspect is to devalue sensitive information across an enterprise through tokenization or encryption. When sensitive data is devalued, it becomes essentially meaningless across all networks and devices. This is very helpful in limiting security practitioners’ area of concern and allows for designing specific micro-segmented areas where only verified and authorized users/resources may access the detokenized, or decrypted, values. As opposed to trying to track implicit trust relationships across networks, micro-segmented areas are much easier to lock down and enforce granular identity verification controls in line with the zero-trust model.”

 

 

Prediction #9: Securing Data with Third-Party Vendors in Mind Will Be Critical, by Bindu Sundareason, Director at AT&T Cybersecurity

 

“Attacks via third parties are increasing every year as reliance on third-party vendors continues to grow. Organizations must prioritize the assessment of top-tier vendors, evaluating their network access, security procedures, and interactions with the business. Unfortunately, many operational obstacles will make this assessment difficult, including a lack of resources, increased organizational costs, and insufficient processes. The lack of up-to-date risk visibility on current third-party ecosystems will lead to loss of productivity, monetary damages, and damage to brand reputation.”

 

Prediction #10: Increased Privacy Laws and Regulation, by Kevin Dunne, President of Pathlock

 

“In 2022, we will continue to see jurisdictions pass further privacy laws to catch up with the states like California, Colorado and Virginia, who have recently passed bills of their own. As companies look to navigate the sea of privacy regulations, there will be an increasing need to be able to provide a real-time, comprehensive view of what data is being processed and stored, who can access it, and most importantly, who has accessed it and when. As the number of distinct regulations continues to grow, the pressure on organizations to put in place automated, proactive data governance will increase.”

 

Prediction #11: Cryptocurrency to Get Regulated, by Joseph Carson, Chief Security Scientist and Advisory CISO at ThycoticCentrify

“Cryptocurrencies are surely here to stay and will continue to disrupt the financial industry, but they must evolve to become a stable method for transactions and accelerate adoption. Some countries have taken the stance that mining’s energy consumption creates a negative impact and are therefore facing decisions to either ban or regulate cryptocurrency mining. Meanwhile, several countries see cryptocurrencies as a way to differentiate their economies, become more competitive in the tech industry and attract investment. In 2022, more countries will look at how they can embrace cryptocurrencies while also creating more stabilization, and increased regulation is only a matter of time. Stabilization will accelerate adoption, but the big question is how the value of cryptocurrencies will be measured. How many decimals will be the limit?”

 

Prediction #12: Application Security in Focus, by Michael Isbitski, Technical Evangelist at Salt Security

“According to the Salt Labs State of API Security Report, Q3 2021, there was a 348% increase in application programming interface (API) attacks in the first half of 2021 alone, and that number is only set to go up.

With so much at stake, 2022 will witness a major push from security and non-security teams alike toward integrating security services and automation, in the form of machine assistance, to mitigate issues arising from the growing threat landscape. The industry is beginning to understand that by treating API security as a strategic discipline rather than a subcomponent of other security domains, organizations can more effectively align their technology, people, and security processes to harden their APIs against attacks. Organizations need to assess their current level of API maturity and integrate their development, security, and operations processes accordingly; complete, comprehensive API security requires a strategic approach in which all three work in synergy.

To mitigate potential threats and system vulnerabilities, further industry-wide recognition of a comprehensive approach to API security is key. Next year, we anticipate that more organizations will see the need for and adopt solutions that offer a full life cycle approach to identifying and protecting APIs and the data they expose. This will require a significant change in mindset, moving away from the outdated practices of proxy-based web application firewalls (WAFs) or API gateways for runtime protection, as well as scanning code with tools that do not provide satisfactory coverage and leave business logic unaddressed. As we’ve already begun to witness, security teams will now focus on accounting for unique business logic in application source code as well as misconfigurations or misimplementations within their infrastructure that could lead to API vulnerabilities.

Implementing intelligent capabilities for behavior analysis and anomaly detection is another way organizations can improve their API security posture in 2022. Anomaly detection is essential for satisfying increasingly strict API security requirements and for defending against well-known, emerging and unknown threats. Solutions that effectively utilize AI and ML can give organizations visibility into, and monitoring of, all the data and systems that APIs and API consumers touch. Such capabilities also help mitigate manual mistakes that inadvertently create security gaps and could impact business uptime.”
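As a minimal illustration of the behavioral baselining this prediction describes, the sketch below flags unusual API call volumes with a simple z-score test. The threshold and traffic numbers are invented for the example and are not drawn from any vendor's product; real API security tooling baselines many more signals (endpoints, payload shapes, consumer identity).

```python
# Minimal z-score anomaly detector over per-minute API request counts.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.5):
    """Return indices whose count deviates more than `threshold` sigmas from the mean.

    Note: with a small sample (n points), a single outlier's z-score is
    capped near (n-1)/sqrt(n), so the threshold is kept below 3 here.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Steady traffic around 100 req/min, with one burst typical of scraping or abuse.
traffic = [101, 98, 103, 99, 97, 102, 100, 940, 101, 99]
print(find_anomalies(traffic))  # [7]
```

A production system would replace the global mean with rolling, per-consumer baselines, but the core idea is the same: learn what normal looks like, then alert on deviation rather than on known signatures alone.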

 

Prediction #13: Disinformation on Social Media, by Jonathan Reiber, Senior Director of Cybersecurity Strategy and Policy at AttackIQ

“Over the last two years, pressure has risen in Congress and the executive branch to regulate Section 230, and it intensified following the disclosures made by Frances Haugen, a former Facebook data scientist who came forward with evidence of widespread deception in Facebook’s management of hate speech and misinformation on its platform. Concurrent with those disclosures, in mid-November the Aspen Institute’s Commission on Information Disorder published the findings of a major report, painting a picture of the United States as a country in a crisis of trust and truth and highlighting the outsize role of social media companies in shaping public discourse. Building on Haugen’s testimony, the Aspen Institute report, and findings from the House of Representatives Select Committee investigating the January 6, 2021 attack on the U.S. Capitol, we should anticipate increasing regulatory pressure from Congress. Social media companies will likely continue to spend large sums of money on lobbying efforts to shape the legislative agenda to their advantage.”

 

Prediction #14: Ransomware To Impact Cyber Insurance, by Jason Rebholz, CISO at Corvus Insurance

“Ransomware was the defining force in cyber risk in 2021 and will likely continue to be in 2022. While ransomware has gained traction over the years, it jumped to the forefront of the news this year with high-profile attacks that impacted the day-to-day lives of millions of people. The increased visibility brought a positive shift in the security posture of businesses looking to avoid being the next news headline.

We’re starting to see the proactive efforts of shoring up IT resilience and security defenses pay off, and my hope is that this positive trend will continue. Comparing Q3 2020 to Q3 2021, the share of ransom demands that were actually paid is steadily declining, shrinking from 44% to 12%, thanks to improved backup processes and greater preparedness. Decreasing the need to pay a ransom to restore data is the first step in disrupting the cash machine that is ransomware.

Although we cannot say for certain, in 2022 we can likely expect to see threat actors pivot their ransomware strategies. Attackers are nimble, and although they’ve had a ‘playbook’ over the past couple of years, widespread crackdowns on their current strategies mean we expect things to shift. We have already seen the opening moves: in a shift from a single group managing the full attack life cycle, specialized groups have formed to gain access into companies and then sell that access to ransomware operators. As threat actors specialize in gaining access to environments, it opens the opportunity for other extortion-based attacks, such as data theft or account lockouts, that don’t require data encryption. The potential for these shifts will call for heavier investment in tracking emerging tactics and trends to reduce that volatility.”

 

How should your company think about investing in security?

Like many things in life, with the security of your company’s application, you get what you pay for. You can spend too little, too much or just the right amount.

To find the right balance, consider Goldilocks: she goes for a walk in the woods and comes upon a house. In the house, she finds three bowls of porridge. The first is too hot, the second is too cold, but the third is just right.

Goldilocks is the master of figuring out “just right.” To determine the appropriate security budget for your company, you need to be, too.

How much security effort is too much?

First, let’s explore the idea of overinvesting in security. How much is too much?

At a certain point with security, you start to see diminishing returns: issues still appear, but more rarely. Security is never really “done,” so it’s tricky to know when to move on. There’s always more to do, more to find, more to fix. Knowing when to wrap up depends on your threat model, your risk appetite and your unique circumstances.

However, your company probably isn’t in this category. Almost nobody is. You certainly can get there, but you’re likely not there now. The takeaway is this: even though you’re probably not in this category yet, it’s important to know that security is not an endless investment of resources. There is a point at which you can accept the remaining risk and move forward.

The problem with too little effort

On the other hand, companies often spend too little effort on security. Almost everyone falls into this category.

Security is often viewed as a “tax” on the business. Companies want to minimize any kind of tax, and so they try to cut security spending inappropriately. However, most people don’t realize that when you cut costs, what you actually cut is effort: how much time you invest, how manual the work is, how much attack surface you cover and how thoroughly you develop custom exploits. That’s dangerous, because your attackers are already investing more effort than you are. Cutting your own effort just cedes them more of an advantage.

As a leader, you’re under tremendous pressure to make the best use of the limited money and person-power you have, and those resources need to cover a wide range of priorities. It’s sometimes hard to justify the investment in security, and even when you can, you aren’t always sure where the best place to invest it might be.

Here’s the harsh reality, though: the less you invest, the less it returns. When you cut costs too far, you cut off the outcomes that would help you improve. Achieving your security mission is going to cost you time, effort and money. There is no way around that. When those investments get cut to the bone, what’s really reduced is your ability to succeed.

The level of effort that’s “just right”

The trick to successful application security lies in finding your sweet spot, that magical balance where you uncover useful issues without investing too much or too little. There are many variables that influence this, including:

  • The value of your assets
  • The skills of your adversaries
  • The scope of your attack surfaces
  • The amount of risk you’re willing to accept

As a ballpark estimate, doing application security testing right will probably cost $30,000 to $150,000 or more per year, per application. Some applications cost far more than that.
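One rough way to sanity-check where a budget in that range falls is an annualized loss expectancy (ALE) comparison. All of the figures below (breach cost, breach probability, and the risk-reduction effect of testing) are illustrative assumptions, not numbers from the article:

```python
# Rough ALE sketch for sizing an application security budget.
# Every figure here is an illustrative assumption, not a benchmark.

def annual_loss_expectancy(breach_cost: float, annual_probability: float) -> float:
    """Expected yearly loss: cost of one incident times its yearly likelihood."""
    return breach_cost * annual_probability

# Assume a breach would cost $4M and has a 10% chance per year untreated.
baseline_ale = annual_loss_expectancy(breach_cost=4_000_000, annual_probability=0.10)

# Assume a $100k/year testing program cuts the breach likelihood in half.
testing_cost = 100_000
reduced_ale = annual_loss_expectancy(breach_cost=4_000_000, annual_probability=0.05)

net_benefit = (baseline_ale - reduced_ale) - testing_cost
print(f"Baseline ALE: ${baseline_ale:,.0f}")  # $400,000
print(f"Reduced ALE:  ${reduced_ale:,.0f}")   # $200,000
print(f"Net benefit:  ${net_benefit:,.0f}")   # $100,000
```

Under these assumed numbers, the program pays for itself twice over; plugging in your own asset values, adversary skill and risk appetite is exactly the "just right" exercise the article describes.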

That number might shock you; as discussed, most companies are in the category of spending too little. Security isn’t cheap because it’s not easy, it requires a unique skill set, and it takes effort.

However, doing security right is worth the price.

The incremental cost of doing security right is a microscopic speck compared to the gigantic cost of a security incident. Most importantly, since most companies struggle to do security right, those who do gain an enormous advantage over their competitors. You want to be one of those companies. To get there, you need to invest appropriately.

There are no security shortcuts

Ultimately, you can’t achieve security excellence by going cheap. You can’t find the unknowns for cheap. You can’t discover custom exploits for cheap. You get what you pay for, and there’s no way around that. However, you also don’t need to spend endlessly either; even though there’s always more to fix, there is a point at which you can accept the remaining risk and move on.

The best approach is to channel your inner Goldilocks and find the budget that’s “just right” for your company. Figure out how rigorous and comprehensive an assessment your application requires, and don’t fall short of those standards.