Building a risk management program

In today’s world, it’s important for every organization to have some form of vulnerability assessment and risk management program. While this can seem daunting, by focusing on some key concepts it’s possible for an organization of any size to develop a strong security posture with a firm grasp of its risk profile. We’ll discuss in this article how to build the technical foundation for a comprehensive security program and, crucially, the tools and processes necessary to develop that foundation into a mature vulnerability assessment and risk management program. 

 

Build the Foundation

It’s impossible to implement effective security, let alone manage risk, without a clear understanding of the environment. That means, essentially, taking an inventory of hosts, applications, resources, and users.
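
To make that concrete, even a very small structured inventory can be a starting point. The sketch below is a minimal illustration in Python; the field names and sample assets are assumptions for the example, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        """One entry in a simple asset inventory (illustrative field names)."""
        hostname: str
        owner: str                    # team or person accountable for the asset
        location: str                 # e.g. "on-prem", "aws", "byod", "saas-vendor"
        applications: list[str] = field(default_factory=list)
        data_classification: str = "unknown"  # e.g. "public", "internal", "regulated"

    # A toy inventory: in practice this would be populated from discovery tools,
    # cloud provider APIs, and SaaS contracts rather than by hand.
    inventory = [
        Asset("web-01", owner="ecommerce", location="aws",
              applications=["nginx", "payments-api"], data_classification="regulated"),
        Asset("hr-laptop-042", owner="hr", location="byod",
              applications=["browser", "saas-hr-portal"], data_classification="internal"),
    ]

    # Even a list this simple can answer the first risk questions.
    regulated_assets = [a for a in inventory if a.data_classification == "regulated"]
    print(f"{len(regulated_assets)} asset(s) hold regulated data")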

With the current computing environment, that combination is apt to include assets that reside in the cloud as well as those hosted in an organization’s own data center. Organizations have little control over the devices of remote employees who access data on a bring-your-own-device (BYOD) basis, which adds another layer of risk. There are also the software-as-a-service (SaaS) applications that the organization uses. It’s essential to know what data is kept where. With SaaS in particular, teams must have a clear understanding of who is contractually responsible for the security of the data, so they can allocate resources accordingly. 

 

Manage the puzzle

Once the environment is scoped, managing it relies on three main components: visibility, control, and timely maintenance. 

Whether it is software vulnerabilities, vulnerable configurations, obsolete packages, or a range of other issues, a vulnerability scanner will show the security operations team what’s at risk and let them prioritize their response. That said, scanners, external or internal, are not the only option. At the high end, a penetration testing team can probe the environment to a level that vulnerability scanners can’t match. At the low end, establishing a process to monitor public vulnerability feeds and verify whether newly disclosed issues affect the environment can provide a baseline. It may not give as deep a picture as scanning or penetration testing, but its modest cost in SecOps time often makes it well worth doing.
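
As a rough illustration of that baseline, the sketch below pulls CISA's public Known Exploited Vulnerabilities (KEV) catalog and filters it against a list of products the organization runs. The feed URL and JSON field names reflect the publicly documented KEV feed but should be verified before use; the product list is invented for the example.

    import json
    import urllib.request

    # Public CISA Known Exploited Vulnerabilities (KEV) feed. URL and field names
    # are taken from CISA's public documentation; verify before relying on them.
    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    # Products the organization actually runs, taken from the asset inventory
    # (placeholder values for the example).
    PRODUCTS_IN_USE = {"windows", "chrome", "log4j", "exchange server"}

    def fetch_kev(url: str = KEV_URL) -> list[dict]:
        """Download the KEV catalog and return its list of vulnerability entries."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp).get("vulnerabilities", [])

    def relevant_entries(entries: list[dict]) -> list[dict]:
        """Keep only entries whose affected product matches something we run."""
        return [e for e in entries
                if any(p in e.get("product", "").lower() for p in PRODUCTS_IN_USE)]

    if __name__ == "__main__":
        for e in relevant_entries(fetch_kev()):
            print(e.get("cveID"), "-", e.get("vulnerabilityName"), "- due", e.get("dueDate"))

Even something this small, run on a schedule, gives the team a repeatable trigger to check newly exploited issues against the environment.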

Protecting the users is a major point and doesn’t always get the attention it deserves. Ultimately, that starts with user education and establishing a culture that reinforces secure behavior. Users are often the threat surface that presents the greatest risk, but with proper education and attitude they can become an effective layer of a defense-in-depth strategy.

Another important step in protecting users is adding multi-factor authentication (MFA). In particular, MFA methods that require a physical or virtual token tend to be more secure than those that rely on text messages or email. While MFA does add a minor annoyance to a user’s login, it can drastically reduce the threat posed by compromised accounts and reduce the organization’s overall risk profile.
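
For readers curious what a token-based factor looks like in practice, here is a minimal sketch using the widely used pyotp library to generate and verify time-based one-time passwords (TOTP). It illustrates the mechanism only; an actual MFA rollout involves enrollment flows, secret storage, and fallback handling.

    import pyotp  # third-party library: pip install pyotp

    # Enrollment: generate a per-user secret and share it with the user's
    # authenticator app (normally via a provisioning QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                     issuer_name="ExampleCorp"))

    # Login: the user submits the six-digit code from their device, and the
    # server verifies it against the shared secret for the current time window.
    submitted_code = totp.now()  # stands in for the code the user would type
    print("Code accepted:", totp.verify(submitted_code))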

User endpoints are another area of concern. While the default endpoint protection included in the main desktop operating systems (Windows and macOS) is quite effective, it is also the defense that every malware writer in the world tests against. That makes investment in an additional layer of endpoint protection worthwhile. 

The last major piece here is a patch management program. This requires baseline processes that manage not only patching but also the assets themselves. Fortunately, there are multiple tools available that can enhance and automate the process, and a regular patch cycle can get vulnerabilities fixed before exploits for them are even developed.

Ideally, the patch management process includes a change management system that’s able to smoothly accommodate emergency situations where a security hotfix must go in outside the normal window.

Pulling it all together

With the foundation laid, the final step involves communication. Simply assessing risk is not useful if there is no reliable way to organize people to act on it.

Bridging the information security teams, who are responsible for recognizing, analyzing, and mitigating threats to the organization, and the information technology teams, who are responsible for maintaining the organization’s infrastructure, is vital. Whether an organization achieves this with a process or a tool is up to them. But in either case, communication is vital, along with an ability to react across teams. This applies to non-technical teams as well — if folks are receiving phishing emails, security operations should know. 

These mechanisms need to be in place from the executive offices down to the sales or production floor, as reducing risk really is everyone’s responsibility. Moreover, the asset and patch management system needs a mechanism to prioritize patches based on business risk. Unless the IT team has the resources to deploy every single patch that comes their way, they will have to prioritize, and that prioritization needs to be based on the threat to business rather than arbitrary severity scores.
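
One way to picture "prioritize by threat to the business rather than arbitrary severity scores" is to weight each finding by the criticality of the asset it sits on and by whether the issue is known to be exploited. The scoring below is an invented illustration, not a standard formula; the field names and weights are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        cvss: float             # scanner-reported severity, 0-10
        asset_criticality: int  # business-assigned: 1 (low) .. 5 (revenue-critical)
        known_exploited: bool   # e.g. listed in an exploited-in-the-wild catalog

    def business_priority(f: Finding) -> float:
        """Blend technical severity with business context (illustrative weighting)."""
        score = f.cvss * f.asset_criticality
        if f.known_exploited:
            score *= 2  # actively exploited issues jump the queue
        return score

    findings = [
        Finding("CVE-2021-44228", cvss=10.0, asset_criticality=5, known_exploited=True),
        Finding("CVE-0000-0000", cvss=9.8, asset_criticality=1, known_exploited=False),  # placeholder ID
    ]

    for f in sorted(findings, key=business_priority, reverse=True):
        print(f"{f.cve_id}: priority {business_priority(f):.1f}")

With a weighting like this, a high-CVSS finding on a low-value lab box ranks below a slightly lower-scored one on a revenue-critical system, which is closer to how the IT team actually needs to spend its limited patching time.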

An Investment

There is no “one size fits all” solution for risk assessment and management. For example, for a restaurant that doesn’t accept reservations or orders online, a relatively insecure website doesn’t present much business risk. While the site may be technically vulnerable, the restaurant is not at risk of losing valuable data… Read more »

 

Top 9 effective vulnerability management tips and tricks

The world is currently in a frenetic flux. With rising geopolitical tensions, an ever-present rise in cybercrime and continuous technological evolution, it can be difficult for security teams to keep a clear bearing on what is key to keeping their organization secure.

With the advent of the “Log4Shell” vulnerability in the Log4j library, sound vulnerability management practices have jumped to the top of the list of skills needed to maintain an ideal state of cybersecurity. The impact of Log4j is expected to be fully realized throughout 2022.

As of 2021, missing security updates are a top-three security concern for organizations of all sizes — approximately one in five network-level vulnerabilities are associated with unpatched software.

Not only are attacks on the rise, but their financial impacts are as well. According to Cybersecurity Ventures, costs related to cybercrime are expected to balloon 15% year over year into 2025, totaling $11 trillion.

Vulnerability management best practices

Whether you’re performing vulnerability management for the first time or revisiting your current vulnerability management practices to find new perspectives or process efficiencies, there are a number of useful strategies for reducing vulnerabilities.

Here are the top nine (We decided to just stop there!) tips and tricks for effective vulnerability management at your organization.

1. Vulnerability remediation is a long game

Extreme patience is required when it comes to vulnerability remediation. Your initial review of vulnerability counts, categories, and recommended remediations may instill a false sense of confidence: you may expect a large reduction after only a few meetings and a few rounds of patching. Reality will unfold very differently.

Consider these factors as you begin initial vulnerability management efforts:

  • Take small steps: Incremental progress in reducing total vulnerabilities by severity should be the initial goal, not an unrealistic expectation of total elimination. Ideally, the technology estate accumulates new vulnerabilities at a slightly lower pace than they are remediated as the months and quarters roll on.
  • Patience is a virtue: Adopting a patient mindset is unequivocally necessary to avoid mental defeat, burnout and complacency. Remediation progress will be slow, but a methodical approach must be sustained.
  • Learn from challenges: Roadblocks, as they are encountered, are opportunities to try alternate remediation strategies. Plan around what can be solved today or in the current week.

Avoid focusing on all the major problems preventing remediation and think with a growth mindset to overcome these challenges.

2. Cross-team collaboration is required

Achieving a large vulnerability reduction requires effective collaboration across technology teams. The high vulnerability counts across the IT estate likely exist due to several cultural and operational factors within the organization that pre-date remediation efforts, including:

  • Insufficient staff to maintain effective vulnerability management processes
  • Legacy systems that cannot be patched because they run on very expensive hardware — or provide a specific function that is cost-prohibitive to replace
  • Ineffective patching solutions that do not or cannot apply necessary updates completely (e.g., the solution can patch web browsers but not Java or Adobe)
  • Misguided beliefs that specialized classes of equipment cannot be patched or rebooted and, as a result, are not revisited for extended periods

Part of your remediation efforts should focus on addressing systemic issues that have historically prevented effective vulnerability remediation while gaining support within or across the business to begin addressing existing vulnerabilities.

Determine how the various teams in your organization can serve as a force multiplier. For example, can the IT support desk or other technical teams assist directly in applying patches or decommissioning legacy devices? Can your vendors assist in applying patches or fine-tuning the configuration of difficult-to-patch equipment?

These groups can assist in overall reduction while further plans are developed to address additional vulnerabilities.

3. Start by focusing on low-hanging fruit

Focus your initial efforts on the low-hanging fruit when building a plan to address vulnerabilities. Missing browser updates and updates to third-party software like Java or Adobe products are likely to make up the largest initial reduction efforts.

If software like Google Chrome or Firefox is missing the previous two years of security updates, it likely signifies the software is not being used. Some confirmation may be required, but the right response is probably to remove the software, not to apply patches.

To prevent a recurrence, there will likely be a need to revisit workstation and server imaging processes to determine if legacy, unapproved or unnecessary software is being installed as new devices are provisioned.
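
A rough way to flag that "two years behind" case is to compare each installed product's last security-update date against a staleness threshold, using data from the scanner or a software-inventory export. The column layout and sample rows below are assumptions for the sketch.

    from datetime import date, timedelta

    STALE_AFTER = timedelta(days=730)  # roughly two years without security updates

    # Rows as they might appear in a software-inventory or scanner export:
    # (hostname, product, date of the last applied security update) - assumed format.
    installed = [
        ("wkstn-17", "Google Chrome", date(2020, 1, 15)),
        ("wkstn-17", "Mozilla Firefox", date(2023, 11, 2)),
    ]

    today = date(2024, 1, 1)  # fixed for a reproducible example; use date.today() in practice
    for host, product, last_update in installed:
        if today - last_update > STALE_AFTER:
            print(f"{host}: {product} last updated {last_update}; "
                  "probably unused - a candidate for removal rather than patching")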

4. Leverage your end-users when needed

Don’t forget to leverage your end-users as a possible remediation vector. A single email you spend 30 minutes carefully crafting, with instructions on how users can self-update difficult-to-patch third-party applications, can save you many hours of time and effort — compared to working through technical teams, where the end result may be a smaller reduction in vulnerabilities.

However, end-user involvement should be an infrequent and short-term approach as the underlying problems outlined in cross-team collaboration (tip #2) are addressed.

This also provides an indirect approach to increasing security awareness via end-user engagement. Users are more likely to prioritize security when they are directly involved in the process.

5. Be prepared to get your hands dirty

Many of the vulnerabilities that exist will require a manual fix, including but not limited to:

  • Unquoted service paths in program directories (a detection sketch follows this list)
  • Weak or no passwords on peripheral devices like printers
  • Updating SNMP community strings
  • Windows registry values that are not set to recommended settings
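
As one example of this kind of hands-on work, the first item can be found programmatically before it is fixed. The sketch below (Windows-only, using Python's standard winreg module) lists services whose ImagePath contains spaces but is not quoted; the heuristic is deliberately crude and will flag some benign entries, so results still need manual review.

    import winreg  # Python standard library; available on Windows only

    SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

    def unquoted_service_paths() -> list[tuple[str, str]]:
        """Return (service name, ImagePath) pairs with spaces but no surrounding quotes."""
        findings = []
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services:
            subkey_count = winreg.QueryInfoKey(services)[0]
            for i in range(subkey_count):
                name = winreg.EnumKey(services, i)
                try:
                    with winreg.OpenKey(services, name) as svc:
                        path, _ = winreg.QueryValueEx(svc, "ImagePath")
                except OSError:
                    continue  # service entry without an ImagePath value
                path = str(path)
                # An ImagePath containing spaces that is not wrapped in quotes is the
                # classic unquoted-service-path weakness (binary planting risk).
                if " " in path and not path.lstrip().startswith('"'):
                    findings.append((name, path))
        return findings

    if __name__ == "__main__":
        for name, path in unquoted_service_paths():
            print(f"{name}: {path}")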

During project downtime — or when the security function is between remediation planning cycles — focus on providing direct assistance where possible. A direct intervention provides an opportunity to learn more about the business and the people operating the technology in the environment. It also provides direct value when an automated process fails to remediate, or cannot remediate, identified vulnerabilities.

This may also be required when already stressed IT teams cannot assist in remediation activity.

6. Targeted patch applications can be effective for specific products

Sometimes a single targeted update will address large numbers of vulnerabilities that automatic updates continuously fail to resolve. This is often seen with Microsoft security updates that did not apply completely or correctly in scattered months across several years and devices.

Search for and test the application of cumulative security updates. One targeted patch update may remediate dozens of vulnerabilities.

Once tested, use automated patch application tools like SCCM or remote monitoring and management (RMM) tools to stage and deploy the specific cumulative update.

7. Limit scan scope and schedules 

Vulnerability management seeks to identify and remediate vulnerabilities, not cause production downtime. Vulnerability scanning tools can unintentionally disrupt information systems and networks via the probing traffic generated towards organization devices or equipment.

If an organization is onboarding a new scanning tool or spinning up a new vulnerability management practice, it is best to start by scanning a small network subset that represents the asset types deployed across the network.

Over time, scanning can be rolled out to larger portions of the network as successful scanning activity on a smaller scale is consistently demonstrated.
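
In practice this can be as simple as maintaining explicit scan "waves" and only widening the scope once the previous wave has run without disruption. The subnets and wave names below are placeholders for the sketch, not a recommended layout.

    import ipaddress

    # Phased scan rollout (illustrative): start small and representative, then widen.
    SCAN_WAVES = [
        {"name": "pilot",     "subnets": ["10.10.1.0/26"]},                 # a representative slice
        {"name": "expansion", "subnets": ["10.10.1.0/24", "10.10.2.0/24"]},
        {"name": "full",      "subnets": ["10.10.0.0/16"]},
    ]

    def targets_for(wave_name: str) -> list[str]:
        """Expand the chosen wave's subnets into individual scan targets."""
        wave = next(w for w in SCAN_WAVES if w["name"] == wave_name)
        hosts: list[str] = []
        for cidr in wave["subnets"]:
            hosts.extend(str(ip) for ip in ipaddress.ip_network(cidr).hosts())
        return hosts

    # Feed the pilot targets to whatever scanner is in use; move to the next wave
    # only after the pilot completes without causing production issues.
    print(f"pilot wave covers {len(targets_for('pilot'))} addresses")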

8. Leverage analytics to focus remediation activity 

The native reporting provided by vulnerability scanning tools often falls short of supporting value-add vulnerability reduction. Consider implementing tools like Power BI, which can help the organization focus on the following (a small sketch of this kind of analysis follows the list):

  • New vulnerabilities by type or category
  • Net new vulnerabilities
  • Risk severity ratings for groups of or individual vulnerabilities
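
Before (or alongside) a BI tool, the same cuts can be prototyped directly from two scan exports. The sketch below uses pandas with assumed column names and placeholder CVE IDs to compute net-new and remediated findings between scans, plus open findings by severity.

    import pandas as pd  # third-party library: pip install pandas

    # Two scan exports, e.g. last month and this month (column names are assumptions).
    previous = pd.DataFrame({
        "host":     ["web-01", "web-01", "db-02"],
        "cve":      ["CVE-2021-44228", "CVE-2020-1472", "CVE-2019-0708"],
        "severity": ["Critical", "Critical", "Critical"],
    })
    current = pd.DataFrame({
        "host":     ["web-01", "db-02", "db-02"],
        "cve":      ["CVE-2021-44228", "CVE-2019-0708", "CVE-0000-0000"],  # last ID is a placeholder
        "severity": ["Critical", "Critical", "High"],
    })

    key = ["host", "cve"]

    # Findings present now but not last time (net new), and vice versa (remediated).
    net_new = current.merge(previous[key], on=key, how="left", indicator=True)
    net_new = net_new[net_new["_merge"] == "left_only"].drop(columns="_merge")

    remediated = previous.merge(current[key], on=key, how="left", indicator=True)
    remediated = remediated[remediated["_merge"] == "left_only"].drop(columns="_merge")

    print("Net new findings:\n", net_new)
    print("Remediated since last scan:\n", remediated)
    print("Open findings by severity:\n", current["severity"].value_counts())
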
9. Avoid overlooking compliance pitfalls or licensing issues

Ensure you fully understand any licensing requirements in relation to enterprise usage of third-party software and make plans to stay compliant.

As software evolves, its creators may look to harness new revenue streams, which have real-world impacts on vulnerability management efforts. A classic example is Java, which is highly prevalent in organizations across the globe. As of 2019, a paid license subscription is required to receive security updates for Java.

Should a third party decide to perform an onsite audit of the license usage, the company may find itself tackling a lawsuit on top of managing third-party software security updates… Read more »

 

Key Steps for Public Sector Agencies To Defend Against Ransomware Attacks

Over the past two years, the pandemic has fundamentally altered the business world and the modern work environment, leaving organizations scrambling to maintain productivity and keep operational efficiency intact while securing the flow of data across different networks (home and office). While this scenario has undoubtedly created new problems for businesses in terms of keeping sensitive data and IP safe, the “WFH shift” has opened up even greater risks and threat vectors for the US public sector.

Federal, state, local governments, education, healthcare, finance, and nonprofit organizations are all facing privacy and cybersecurity challenges the likes of which they’ve never seen before. Since March 2020, there’s been an astounding increase in the number of cyberattacks, high-profile ransomware incidents, and government security shortfalls. There are many more that go undetected or unreported. This is in part due to employees now accessing their computers and organization resources/applications from everywhere but the office, which is opening up new security threats for CISOs and IT teams.

Cyberthreats are expected to grow exponentially this year, particularly as the world faces geopolitical upheaval and international cyberwarfare. Whether it’s a smaller municipality or a local school system, no target is too small these days, and everyone is under attack due to bad actors now having more access to sophisticated automation tools.

The US public sector must be prepared to meet these new challenges and focus on shoring up vulnerable and critical technology infrastructures while implementing new cybersecurity and backup solutions that secure sensitive data.

Previous cyber protection challenges

As data volumes grow and methods of access change, safeguarding US public sector data, applications, and systems involves addressing complex and often competing considerations. Government agencies have focused on securing a perimeter around their networks; however, with a mobile workforce combined with the increase in devices, endpoints, and sophisticated threats, data is still extremely vulnerable. Hence the massive shift towards a Zero Trust model.

Today, there is an over-reliance on legacy and poorly integrated IT systems, leaving troves of hypersensitive constituent data vulnerable; government agencies have become increasingly appealing targets for cybercriminals. Many agencies still rely on outdated five-decade-old technology infrastructure and deal with a multitude of systems that need to interact with each other, which makes it even more challenging to lock down these systems. Critical infrastructure industries have more budget restraints than ever; they need flexible and affordable solutions to maintain business continuity and protect against system loss.

Protecting your organization’s data assets

The private sector, which owns and operates most US critical infrastructure, will continue being instrumental in helping government organizations (of all sizes) modernize their cyber defenses. The US continues to make strides in creating specific efforts that encourage cyber resilience and counter these emerging threats.

Agencies and US data centers must first focus on solutions that attest to data protection frameworks like HIPAA, CJIS, and NIST 800-171, and then develop several key pillars for data protection built around the Zero Trust concept. These include safety (ensuring organizational data, applications, and systems are always available), accessibility (allowing employees to access critical data anytime and anywhere), and privacy and authenticity (controlling who has access to your organization’s digital assets).

New cloud-based data backup, protection and cybersecurity solutions that are compliant with the appropriate frameworks and certified will enable agencies to maximize operational uptime, reduce the threat of ransomware, and ensure the highest levels of data security possible across all public sector computing environments.

Conclusion

First and foremost, the public sector and US data centers must prioritize using compliant and certified services to ensure that specific criteria are met… Read more »

 

Is a Merger Between Information Security and Data Governance Imminent?

As with any merger, it is always difficult to predict an outcome until the final deal papers are signed, and press releases hit the wires.  However, there are clear indications that a tie up between these two is essential, and we will all be the better for it.  Data Governance has historically focused on the use of data in a business, legal and compliance context rather than how it should be protected, while the opposite is true for Information Security.

The idea of interweaving Data Governance and Information Security is not entirely new.  Gartner discussed this in their Data Security Governance Model; EDRM integrated multiple stakeholders, including Information Security, Privacy, Legal and Risk, into an overarching Unified Data Governance model; and an integrated approach to Governance, Risk, and Compliance has long been an aspiration in the eGRC market.  Organizations that have more mature programs are likely to have some level of integration between these functions already, but many continue to struggle with the idea and often treat them as separate, siloed programs.

As programs go, Information Security is ahead of Data Governance for its level of attention in the Boardroom, brought about primarily by newsworthy events that demonstrated what security and privacy practitioners had been warning about for a long time.  These critical risks to the public and private sectors inspired significant, sweeping frameworks and industry standards (PCI, NIST, ISO, ISACA, SOC 2) and regulatory legislation (HIPAA, GDPR, NYDS), and gave Chief Information Security Officers (CISOs) a platform for change.

By contrast, data governance has been more fragmented in its definition, organization, development, and funding.  Many organizations accept the value of data governance, particularly as a proactive means to minimize risk while enabling the expansive use of information required in today’s business environment.  However, enterprises still struggle to balance information risk and value and to establish the right enablers and controls.

Drivers

Risks and affirmative obligations associated with information are the primary drivers for the intersection of data governance and information security.  The reason information security is so critical is that the loss of certain types of data (through exfiltration, or loss of access due to ransomware) carries legal and compliance consequences, along with impacting normal business operations.  And a lack of effective legal and compliance controls often leads to increased information security and privacy risk.

Additional common drivers include:

  • Volume, velocity, mobility, and sensitivity of information
  • Volume and complexity of legal, compliance, and privacy requirements
  • Hybrid technology and business environments
  • Multinational governance models and operations
  • Headline and business interruption risks

Finally, an underlying driver is the need to leverage investments in technology, practices, and personnel across an organization.  The interrelationships of so many information requirements simply demand a more coordinated approach.

Merging the models

We chose Information Risk Management to define a construct that encompasses the overarching disciplines and requirements.  First, we did so because it places the focus on information.  For example, the same piece of information that requires protection may also have retention and discovery requirements.  Second, risk management recognizes the need to balance the value and use of information from a business perspective, while also providing appropriate governance or protection.  Risk management also serves as an important means to evaluate priorities in investment, resources, and audit functions.

Figure 1: Information Risk Management

The primary objective is to integrate processes, people, and solutions into a framework that addresses common requirements, and does so “in depth” for both security and governance.  Security people, practices and technologies have long been deployed at many levels (in depth) to protect the organization.  The same has not often been the case for governance (legal, compliance, and privacy) obligations.  New practices and technologies are enablers for intersecting programs, and support alignment amongst key constituencies, including Information Security, IT, Legal, Privacy, Risk and Compliance.  Done right, this provides leverage in an organization’s human and technology investments, improves risk posture, and increases the rate and reach of new practices and solutions.

Meshing the disciplines and elements of each program is not meant as a new organizational construct; rather, it should start with a firm understanding of information requirements from key stakeholders and, from there, establish synergies.  The list below, not meant to be exhaustive, provides examples of shared enabling practices and technologies:

Figure 2: Shared Enablers and Requirements

Conclusion

Integrating data governance, information security and privacy frameworks allows an enterprise to gain leverage from areas of common investment and provides a more comprehensive enterprise risk management strategy.  By improving proactive information management, organizations increase preventative control effectiveness and decrease reliance on detection and response activities.  It also develops cross-functional capabilities across Privacy, Legal, Compliance, IT, and Information Security… Read more »

 

 

Top 15 cybersecurity predictions for 2022

Over the past several years, cybersecurity risk management has become top of mind for boards. And rightly so. Given the onslaught of ransomware attacks and data breaches that organizations experienced in recent years, board members have increasingly realized how vulnerable they are.

This year, in particular, the public was directly impacted by ransomware attacks, from gasoline shortages, to meat supply, and even worse, hospitals and patients that rely on life-saving systems. The attacks reflected the continued expansion of cyber-physical systems — all of which present new challenges for organizations and opportunities for threat actors to exploit.

There should be a shared sense of urgency about staying on top of the battle against cyberattacks. Security columnist and Vice President and Ambassador-At-Large in Cylance’s Office of Security & Trust, John McClurg, in his latest Cyber Tactics column, explained it best: “It’s up to everyone in the cybersecurity community to ensure smart, strong defenses are in place in the coming year to protect against those threats.”

As you build your strategic planning, priorities and roadmap for the year ahead, security and risk experts offer the following cybersecurity predictions for 2022.

Prediction #1: Increased Scrutiny on Software Supply Chain Security, by John Hellickson, Cyber Executive Advisor, Coalfire

“As part of the executive order to improve the nation’s cybersecurity previously mentioned, one area of focus is the need to enhance software supply chain security. There are many aspects included that most would consider industry best practice of a robust DevSecOps program, but one area that will see increased scrutiny is providing the purchaser, the government in this example, a software bill of materials. This would be a complete list of all software components leveraged within the software solution, along with where it comes from. The expectation is that everything that is used within or can affect your software, such as open source, is understood, versions tracked, scrutinized for security issues and risks, assessed for vulnerabilities, and monitored, just as you do with any in-house developed code. This will impact organizations that both consume and those that deliver software services. Considering this can be very manual and time-consuming, we could expect that Third-Party Risk Management teams will likely play a key role in developing programs to track and assess software supply chain security, especially considering they are usually the front line team who also receives inbound security questionnaires from their business partners.”

 

Prediction #2: Security at the Edge Will Become Central, by Wendy Frank, Cyber 5G Leader, Deloitte

 

“As the Internet of Things (IoT) devices proliferate, it’s key to build security into the design of new connected devices themselves, as well as the artificial intelligence (AI) and machine learning (ML) running on them (e.g., tinyML). Taking a cyber-aware approach will also be crucial as some organizations begin using 5G bandwidth, which will drive up both the number of IoT devices in the world and attack surface sizes for IoT device users and producers, as well as the myriad networks to which they connect and supply chains through which they move.”

 

Prediction #3: Boards of Directors will Drive the Need to Elevate the Chief Information Security Officer (CISO) Role, by Hellickson

 

“In 2021, there was much more media awareness and senior executive awareness about the impacts of large cyberattacks and ransomware that brought many organizations to their knees. These high-profile attacks have elevated the cybersecurity conversations in the Board room across many different industries. This has reinforced the need for CISOs to be constantly on top of current threats while maintaining an agile but robust security strategy that also enables the business to achieve revenue and growth targets. With recent surveys, we are seeing a shift in CISO reporting structures moving up the chain, out from underneath the CIO or the infrastructure team, which has been commonplace for many years, now directly to the CEO. The ability to speak fluent threat & risk management applicable to the business is table stakes for any executive with cybersecurity & board reporting responsibilities. This elevated role will require a cybersecurity program strategy that extends beyond the standard industry frameworks and IT speak, and instead demonstrate how the cybersecurity program is threat aware while being aligned to each executive team’s business objectives that demonstrates positive business and cybersecurity outcomes. More CISOs will look for executive coaches and trusted business partners to help them overcome any weaknesses in this area.”

 

Prediction #4: Increase of Nation-State Attacks and Threats, by John Bambenek, Principal Threat Researcher at Netenrich

 

“Recent years have seen cyberattacks large and small conducted by state and non-state actors alike. State actors organize and fund these operations to achieve geopolitical objectives and seek to avoid attribution wherever possible. Non-state actors, however, often seek notoriety in addition to the typical monetary rewards. Both actors are part of a larger, more nebulous ecosystem of brokers that provides information, access, and financial channels for those willing to pay. Rising geopolitical tensions, increased access to cryptocurrencies and dark money, and general instability due to the pandemic will contribute to a continued rise in cyber threats in 2022 for nearly every industry. Top-down efforts, such as sanctions by the U.S. Treasury Department, may lead to arrests but will ultimately push these groups further underground and out of reach.”

 

And, Adversaries Outside of Russia Will Cause Problems

 

Recognizing that Russia is a safe harbor for ransomware attackers, Dmitri Alperovitch, Chairman, Silverado Policy Accelerator: “Adversaries in other countries, particularly North Korea, are watching this very closely. We are going to see an explosion of ransomware coming from DPRK and possibly Iran over the next 12 months.”

 

Ed Skoudis, President, SANS Technology Institute: “What’s concerning about this potential reality is that these other countries will have less practice at it, making it more likely that they will accidentally make mistakes. A little less experience, a little less finesse. I do think we are probably going to see — maybe accidentally or maybe on purpose — a significant ransomware attack that might bring down a federal government agency and its ability to execute its mission.”

 

Prediction #5: The Adoption of 5G Will Drive The Use Of Edge Computing Even Further, by Theresa Lanowitz, Head of Evangelism at AT&T Cybersecurity

 

“While in previous years, information security was the focus and CISOs were the norm, we’re moving to a new cybersecurity world. In this era, the role of the CISO expands to a CSO (Chief Security Officer) with the advent of 5G networks and edge computing.

The edge is in many locations — a smart city, a farm, a car, a home, an operating room, a wearable, or a medical device implanted in the body. We are seeing a new generation of computing with new networks, new architectures, new use cases, new applications/applets, and of course, new security requirements and risks.

While 5G adoption accelerated in 2021, in 2022, we will see 5G go from new technology to a business enabler. While 5G will have an impact on new ecosystems, devices, applications, and use cases ranging from automatic mobile device charging to streaming, it will also benefit from the adoption of edge computing due to the convenience it brings. We’re moving away from the traditional information security approach to securing edge computing. With this shift to the edge, we will see more data from more devices, which will lead to the need for stronger data security.”

 

Prediction #6: Continued Rise in Ransomware, by Lanowitz

 

“The year 2021 was the year the adversary refined their business model. With the shift to hybrid work, we have witnessed an increase in security vulnerabilities leading to unique attacks on networks and applications. In 2022, ransomware will continue to be a significant threat. Ransomware attacks are more understood and more real as a result of the attacks executed in 2021. Ransomware gangs have refined their business models through the use of Ransomware as a Service and are more aggressive in negotiations by doubling down with distributed denial-of-service (DDoS) attacks. The further convergence of IT and Operational Technology (OT) may cause more security issues and lead to a rise in ransomware attacks if proper cybersecurity hygiene isn’t followed.

While many employees are bringing their cyber skills and learnings from the workplace into their home environment, in 2022, we will see more cyber hygiene education. This awareness and education will help instill good habits and generate further awareness of what people should and shouldn’t click on, download, or explore.”

 

Prediction #7: How the Cyber Workforce Will Continue to be Revolutionized Amid an Ongoing Shortage of Employees, by Jon Check, Senior Director of Cyber Protection Solutions at Raytheon Intelligence & Space

 

“Moving into 2022, the cybersecurity industry will continue to be impacted by an extreme shortage of employees. With that said, there will be unique advantages when facing the current so-called ‘Great Resignation’ that is affecting the entire workforce as a whole. As the industry continues to advocate for hiring individuals outside of the cyber industry, there is a growing number of individuals looking to leave their current jobs for new challenges and opportunities to expand their skills and potentially have the choice to work from anywhere. While these individuals will still need to be trained, there is extreme value in considering those who may not have the most perfect resume for the cyber jobs we’re hiring for, but may have a unique point of view on solving the next cyber challenge. This expansion will, of course, increase the importance of a positive work culture as such candidates will have a lot of choices of the direction they take within the cyber workforce — a workforce that is already competing against the same pool of talent. With that said, we will never be able to hire all the cyber people we need, so in 2022, there will be a heavier reliance on automation to help fulfill those positions that continue to remain vacant.”

 

Prediction #8: Expect Heightened Security around the 2022 Election Cycle, by Jadee Hanson, CIO and CISO of Code42

 

“With multiple contentious and high-profile midterm elections coming up in 2022, cybersecurity will be a top priority for local and state governments. While security protections were in place to protect the 2020 election, publicized conversations surrounding the uncertainty of its security will facilitate heightened awareness around every aspect of voting next year.”

 

Prediction #9: A Shift to Zero Trust, by Brent Johnson, CISO at Bluefin

 

“As the office workspace model continues to shift to a more hybrid and full-time remote architecture, the traditional network design and implicit trust granted to users or devices based on network or system location are becoming a thing of the past. While the security industry had already begun its shift to the more secure zero-trust model (where anything and everything must be verified before connecting to systems and resources), the increased use of mobile devices, bring your own device (BYOD), and cloud service providers has accelerated this move. Enterprises can no longer rely on a specific device or location to grant access.

Encryption technology is obviously used as part of verifying identity within the zero-trust model, and another important aspect is to devalue sensitive information across an enterprise through tokenization or encryption. When sensitive data is devalued, it becomes essentially meaningless across all networks and devices. This is very helpful in limiting security practitioners’ area of concern and allows for designing specific micro-segmented areas where only verified and authorized users/resources may access the detokenized, or decrypted, values. As opposed to trying to track implicit trust relationships across networks, micro-segmented areas are much easier to lock down and enforce granular identity verification controls in line with the zero-trust model.”

 

 

Prediction #10: Securing Data with Third-Party Vendors in Mind Will Be Critical, by Bindu Sundareason, Director at AT&T Cybersecurity

 

“Attacks via third parties are increasing every year as reliance on third-party vendors continues to grow. Organizations must prioritize the assessment of top-tier vendors, evaluating their network access, security procedures, and interactions with the business. Unfortunately, many operational obstacles will make this assessment difficult, including a lack of resources, increased organizational costs, and insufficient processes. The lack of up-to-date risk visibility on current third-party ecosystems will lead to loss of productivity, monetary damages, and damage to brand reputation.”

 

Prediction #11: Increased Privacy Laws and Regulation, by Kevin Dunne, President of Pathlock

 

“In 2022, we will continue to see jurisdictions pass further privacy laws to catch up with the states like California, Colorado and Virginia, who have recently passed bills of their own. As companies look to navigate the sea of privacy regulations, there will be an increasing need to be able to provide a real-time, comprehensive view of what data is being processed and stored, who can access it, and most importantly, who has accessed it and when. As the number of distinct regulations continues to grow, the pressure on organizations to put in place automated, proactive data governance will increase.”

 

Prediction #12: Cryptocurrency to Get Regulated, by Joseph Carson, Chief Security Scientist and Advisory CISO at ThycoticCentrify

 

“Cryptocurrencies are surely here to stay and will continue to disrupt the financial industry, but they must evolve to become a stable method for transactions and accelerate adoption. Some countries have taken a stance that energy consumption is creating a negative impact and therefore facing decisions to either ban or regulate cryptocurrency mining. Meanwhile, several countries have seen cryptocurrencies as a way to differentiate their economies to become more competitive in the tech industry and persuade investment. In 2022, more countries will look at how they can embrace cryptocurrencies while also creating more stabilization, and increased regulation is only a matter of time. Stabilization will accelerate adoption, but the big question is how the value of cryptocurrencies will be measured.  How many decimals will be the limit?”

 

Prediction #13: Application Security in Focus, by Michael Isbitski, Technical Evangelist at Salt Security

 

“According to the Salt Labs State of application programming interface (API) Security Report, Q3 2021, there was a 348% increase in API attacks in the first half of 2021 alone and that number is only set to go up.

With so much at stake, 2022 will witness a major push from nonsecurity and security teams towards the integration of security services and automation in the form of machine assistance to mitigate issues that arise from the rising threat landscape. The industry is beginning to understand that by taking a strategic approach to API security as opposed to a subcomponent of other security domains, organizations can more effectively align their technology, people, and security processes to harden their APIs against attacks. Organizations need to identify and determine their current level of API maturity and integrate processes for development, security, and operations in accordance; complete, comprehensive API security requires a strategic approach where all work in synergy.

To mitigate potential threats and system vulnerabilities, further industry-wide recognition of a comprehensive approach to API security is key. Next year, we anticipate that more organizations will see the need for and adopt solutions that offer a full life cycle approach to identifying and protecting APIs and the data they expose. This will require a significant change in mindset, moving away from the outdated practices of proxy-based web application firewalls (WAFs) or API gateways for runtime protection, as well as scanning code with tools that do not provide satisfactory coverage and leave business logic unaddressed. As we’ve already begun to witness, security teams will now focus on accounting for unique business logic in application source code as well as misconfigurations or misimplementations within their infrastructure that could lead to API vulnerabilities.

Implementing intelligent capabilities for behavior analysis and anomaly detection is also another way organizations can improve their API security posture in 2022. Anomaly detection is essential for satisfying increasingly strong API security requirements and defending against well-known, emerging and unknown threats. Implementing solutions that effectively utilize AI and ML can help organizations ensure visibility and monitoring capabilities into all the data and systems that APIs and API consumers touch. Such capabilities also help mitigate any manual mistakes that inadvertently create security gaps and could impact business uptime.”

 

Prediction #14: Disinformation on Social Media, by Jonathan Reiber, Senior Director of Cybersecurity Strategy and Policy at AttackIQ

 

“Over the last two years, pressure rose in Congress and the executive branch to regulate Section 230 and increased following the disclosures made by Frances Haugen, a former Facebook data scientist, who came forward with evidence of widespread deception related to Facebook’s management of hate speech and misinformation on its platform. Concurrent to those disclosures, in mid-November, the Aspen Institute’s Commission on Information Disorder published the findings of a major report, painting a picture of the United States as a country in a crisis of trust and truth, and highlighting the outsize role of social media companies in shaping public discourse. Building on Haugen’s testimony, the Aspen Institute report, and findings from the House of Representatives Select Committee investigating the January 6, 2021 attack on the U.S. Capitol, we should anticipate increasing regulatory pressure from Congress. Social media companies will likely continue to spend large sums of money on lobbying efforts to shape the legislative agenda to their advantage.”

 

Prediction #15: Ransomware To Impact Cyber Insurance, by Jason Rebholz, CISO at Corvus Insurance

 

“Ransomware is the defining force in cyber risk in 2021 and will likely continue to be in 2022. While ransomware has gained traction over the years, it jumped to the forefront of the news this year with high-profile attacks that impacted the day-to-day lives of millions of people. The increased visibility brought a positive shift in the security posture of businesses looking to avoid being the next news headline. We’re starting to see the proactive efforts of shoring up IT resilience and security defenses pay off, and my hope is that this positive trend will continue. When comparing Q3 2020 to Q3 2021, the ratio of ransoms demanded to ransoms paid is steadily declining, as payments shrank from 44% to 12%, respectively, due to improved backup processes and greater preparedness. Decreasing the need to pay a ransom to restore data is the first step in disrupting the cash machine that is ransomware. Although we cannot say for certain, in 2022, we can likely expect to see threat actors pivot their ransomware strategies. Attackers are nimble — and although they’ve had a ‘playbook’ over the past couple years, thanks to widespread crackdowns on their current strategies, we expect things to shift. We have already seen the opening moves from threat actors. In a shift from a single group managing the full attack life cycle, specialized groups have formed to gain access into companies who then sell that access to ransomware operators. As threat actors specialize in access into environments, it opens the opportunity for other extortion-based attacks such as data theft or account lockouts, all of which don’t require data encryption. The potential for these shifts will call for a great need in heavier investments in emerging tactics and trends to remove that volatility.”… Read more »

 

How should your company think about investing in security?

Like many things in life, with the security of your company’s application, you get what you pay for. You can spend too little, too much or just right.

To find the right balance, consider Goldilocks: she goes for a walk in the woods and comes upon a house. In the house, she finds three bowls of porridge. The first is too hot, the second is too cold, but the third is just right.

Goldilocks is the master of figuring out “just right.” To determine the appropriate security budget for your company, you need to be, too.

How much security effort is too much?

First, let’s explore the idea of overinvesting in security. How much is too much?

At a certain point with security, you start to see diminishing returns: issues still appear but more rarely. Security is never really “done,” so it’s tricky knowing when to move on. There’s always more to do, more to find, more to fix. Knowing when to wrap up depends on your threat model, risk appetite and your unique circumstances.

However, your company probably isn’t in this category. Almost nobody is. You certainly can get there, but you’re likely not there now. The takeaway is this: even though you’re probably not in this category yet, it’s important to know that security is not an endless investment of resources. There is a point at which you can accept the remaining risk and move forward.

The problem with too little effort

On the other hand, companies often spend too little effort on security. Almost everyone falls into this category.

Security is often viewed as a “tax” on the business. Companies want to minimize any kind of tax, and so they try to cut security spending inappropriately. However, most people don’t realize that when you cut costs, what you actually cut is effort: how much time you invest, how manual it is, how much attack surface you cover and how thoroughly you develop custom exploits. That’s a dangerous elixir because your attackers already invest more effort than you can. Cutting effort just cedes more advantage.

As a leader, you’re under tremendous pressure to make the best use of the limited money and person-power you have, and those resources need to cover a wide range of priorities. It’s sometimes hard to justify the investment in security, and even when you can, you aren’t always sure where the best place to invest it might be.

Here’s the harsh reality, though: the less you invest, the less it returns. When you cut costs too far, you prevent outcomes that help you get better. Achieving your security mission is going to cost you time, effort and money. There is no way around that. When those investments get cut to the bone, what’s really reduced is your ability to succeed.

The level of effort that’s “just right”

The trick to successful application security lies in finding your sweet spot, that magical balance where you uncover useful issues without investing too much or too little. There are many variables that influence this, including:

  • The value of your assets
  • The skills of your adversaries
  • The scope of your attack surfaces
  • The amount of risk you’re willing to accept

As a ballpark estimate, to do application security testing right is probably going to cost $30,000 to $150,000 or more per year, per application. Some cost far more than that.

That number might shock you; as discussed, most companies are in the category of spending too little. Security isn’t cheap because it’s not easy, it requires a unique skill set, and it takes effort.

However, doing security right is worth the price.

The incremental cost of doing security right is a tiny, microscopic speck compared to the gigantic cost of a security incident. Most importantly, since most companies struggle to do security right, those who do obtain an enormous advantage over their competitors. You want to be one of those companies. To get there, you need to invest appropriately.

There are no security shortcuts

Ultimately, you can’t achieve security excellence by going cheap. You can’t find the unknowns for cheap. You can’t discover custom exploits for cheap. You get what you pay for, and there’s no way around that. However, you also don’t need to spend endlessly either; even though there’s always more to fix, there is a point at which you can accept the remaining risk and move on.

The best approach is to channel your inner Goldilocks and find the budget that’s “just right” for your company. Figure out how rigorous and comprehensive an assessment your application requires, and don’t fall short of those standards… Read more »

 

CIO Agenda: Cloud, Cybersecurity, and AI Investments Ahead

Enterprises that employed “business composability” were more likely to succeed during the volatility caused by the pandemic, according to Gartner. That volatility is here to stay, so now is the time to get ready for it.
Nearly two years after a massive disruption hit enterprises, a few lessons are evident. Some organizations quickly adapted to the circumstances, recognized the opportunities available, and acted to capitalize on them. Other organizations were caught unprepared for the unexpected and struggled to keep going. Some of them shut down.

What separated the successful organizations from the organizations that subsisted or didn’t make it at all? One factor might be what Gartner is calling “business composability,” or “the mindset, technologies, and a set of operating capabilities that enable organizations to innovate and adapt quickly to changing business needs.” This composability was a major theme at the Gartner IT Symposium/Xpo Americas, and Gartner is promoting the concept of business composability as the way for businesses to thrive through disruption in 2022 and beyond.

“Business composability is an antidote to volatility,” says Monika Sinha, research VP at Gartner. “Sixty-three percent of CIOs at organizations with high composability reported superior business performance, compared with peers or competitors in the past year. They are better able to pursue new value streams through technology, too.”

Sinha compares the concept of composability to the way toy Legos work. She told InformationWeek in an interview that composability is about creating flexible and adaptive organizations with departments that can be re-arranged to create new value streams. She says organizations should target the following three domains of business composability:

1. Composable thinking

“This is the ability to be dynamic in your thinking as an organization,” Sinha says. This kind of thinking recognizes that business conditions often change, and it empowers the teams closest to the action to respond to the new conditions. “Traditional business thinking views change as a risk, while composable thinking is the means to master the risk of accelerating change and to create new business value.”

2. Composable business architecture

This is the ability of organizations to create dynamic ways of working, Sinha says. For instance, during the pandemic, some retailers were able to pivot quickly to providing curbside pickup, and some healthcare providers pivoted to providing telehealth appointments.

“Organizations looked at different types of models in terms of delivery,” she says. “In these types of organizations, it is really about creating ‘agile’ at scale, and agile types of working in the organization.”

Sinha notes that digital business initiatives fail when business leaders commission projects from IT and then shirk accountability for the implementation of results, treating it as another IT project. “High-composability enterprises embrace distributed accountability for digital outcomes, reflecting a shift that most CIOs have been trying to make for several years, as well as create multidisciplinary teams that blend business and IT units to drive business results,” Sinha says.

3. Composable technology

This is the IT architecture or technology stack, says Sinha. Technology is a catalyst for business transformation and thinking, and developing a flexible and modular technology architecture enables bringing together the parts needed to support transformation.

Distributed cloud and artificial intelligence are the two main technologies that a majority of high-composability enterprises have already deployed or plan to deploy in 2022, according to Gartner’s CIO Agenda survey. Gartner notes that these technologies are a catalyst for business composability because they enable modular technology capabilities.

Tech investments for 2022

Another major technology at the top of the list of planned investments for 2022 is cyber and information security, with 66% of respondents saying they expect to increase associated investments in the next year.

“Many organizations were dabbling with composability before the pandemic,” Sinha says. “What we saw was that those that were composable came out ahead after the pandemic. The pandemic highlighted the importance and the value of composability.”… Read more »

 

Bridging the gender gap in cybersecurity

In a panel at the ISC2 Security Congress 2021, Sharon Smith, CISSP, Lori Ross O’Neil, CISSP, Aanchal Gupta and Meg West, M.S., CISSP, discussed the challenges and opportunities of being a woman in cybersecurity. From the factors that lead to women being underrepresented in cybersecurity to removing those barriers, the cybersecurity leaders discussed their ideas on how to bridge the gender gap in the field.

Contributing factors to the underrepresentation of women

Gupta believes that a cybersecurity awareness gap contributes to the underrepresentation of women in the field. With her background in software engineering, Gupta declined her first offer to pivot to cybersecurity, believing that she didn’t possess the correct qualifications. Once she entered the field, she realized the vastness of the cybersecurity space and how people with varied skillsets thrive in the industry. Helping women understand that they don’t need a cybersecurity or computer science degree to enter the field can attract more qualified women to the industry. Smith added that hiring managers should also be aware that qualified candidates exist outside of those majors.

Women looking to transition into cybersecurity mid-career can frame the change as adding cyber to their existing profession. O’Neil’s passion is bringing cybersecurity to other disciplines — someone with an accounting or chemistry background can benefit from cybersecurity coursework in order to do their jobs safely and securely. Certifications are a great way to enter the industry, and seeking out online communities and information can ease the transition by immersing oneself in the cybersecurity sphere.

How do we remove barriers from cybersecurity?

Although the number of women in cybersecurity has increased over the past years, there is still a ways to go to achieve equal gender representation in the field. “We should get ahead of this problem by engaging with women and other underrepresented groups early on,” said Gupta. Reaching young people with capture-the-flag style exercises, coding programs and cybersecurity information provides industry exposure at an early age and allows them to imagine what a career in cybersecurity might look like.

Breaking down self-imposed barriers, changing a broken hiring system that relies on AI searching for keywords to select candidates, and more men stepping up as allies in the field are all ideas suggested by Smith to bridge the gender gap in cybersecurity. Looking for opportunities to educate women and other underrepresented groups on cybersecurity roles can increase the number of people from those groups in the field.

All women on the panel shared experiences when they were affected by sexism in the industry. West began as an associate in cybersecurity at a Fortune 100 company as the youngest and only female employee on the team. On one of her first days on the job, one of her coworkers told her that the only reason she got the job was to fill a diversity quota. West took this comment as a challenge — within about three years, she was promoted from cybersecurity associate to Global Incident Response Manager at the age of 24. She created the role and advocated for her promotion with statistics of her accomplishments. “Just because an opportunity does not exist, that doesn’t mean I can’t create it myself,” said West.

 

How gamification boosts security awareness training effectiveness

Ransomware and its partner in crime, phishing, are very much in the spotlight of late. According to the APWG Phishing Activity Trends Report, the volume of phishing doubled in 2020 and continues to rise.

In response, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has launched a campaign to address ransomware. It takes a two-pronged approach: improving security readiness and raising awareness. Its campaign encourages organizations to implement best practices, tools and resources to mitigate the risk of falling victim to ransomware.

“Anyone can be the victim of ransomware, and so everyone should take steps to protect their systems,” said Director of CISA Brandon Wales.

But protection doesn’t just mean buying new security systems and implementing better processes and policies. The CISA campaign emphasizes training as an integral part of anti-ransomware and anti-phishing success.

Employee security awareness training

“The greatest technology in the world won’t do you much good if your users are not well educated on security and don’t know what to do and not do,” said Greg Schulz, an analyst at Server StorageIO Group. “Too many security issues are attributed to humans falling prey to social engineering.”

But the effectiveness of security training varies significantly depending on the approach. According to a report by Computer Economics, Security Training Adoption and Best Practices 2021, the security training given to staff in some cases only goes as far as insisting all users sign off on reading organizational security policies and procedures. How much of it are they likely to retain?

“All it takes is one weak link among the workforce, and state-of-the-art security technology is breached,” said Frank Scavo, President of Computer Economics. “The goal of security training should be to erect a human firewall of informed and ever-vigilant users: an army of personnel with high awareness of social engineering methods provides an extra safeguard against attack.”

Lunch-and-learn anti-phishing awareness briefings are a little better than only making employees read policy. They may yield a small reduction in phishing success, but not nearly enough. Similarly, traditional classroom and textbook learning has only a modest effect on the number of phishing victims.

What it takes is a multi-faceted response to phishing and ransomware via interactive learning. The introduction of gamification, in particular, to the field of security awareness training has been shown to boost results.

Gamification of cybersecurity training

“People can easily tune out when subjected to static security awareness training,” said Schulz. “But make it an interactive learning experience or game, and people are more likely to engage while being entertained and educated on the do’s and don’ts of modern IT security threat risks.”

What exactly is gamification? Gabe Zichermann, author of “The Gamification Revolution,” defined it as “taking what’s fun about games and applying it to situations that maybe aren’t so fun.”

Gamification is essentially about finding ways to engage people emotionally, motivating them to behave in a particular way or to work toward a specific goal. In training, it’s used to make learning a lot more fun.

Effective gamification techniques applied to security training use quizzes, interactive videos, cartoons and short films with characters and plots that entertain while getting across the important facts about phishing and other scams — and how to avoid them.
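
As a toy sketch rather than any vendor’s product, a gamified phishing quiz can be as small as a scoring function that rewards streaks of correct answers and feeds a leaderboard. The questions, point values, and player names below are all assumptions made for illustration:

```python
# Minimal sketch of a gamified phishing quiz: points, streak bonuses, and a leaderboard.
QUESTIONS = [
    ("An email from 'IT Support' asks you to confirm your password via a link. Phishing? (y/n)", "y"),
    ("Your manager sends a calendar invite from their usual address. Phishing? (y/n)", "n"),
]

def run_quiz(answers):
    """Score a list of answers; a running streak multiplies the points for each correct reply."""
    score, streak = 0, 0
    for (_, correct), given in zip(QUESTIONS, answers):
        if given.lower() == correct:
            streak += 1
            score += 10 * streak  # streak bonus rewards sustained attention
        else:
            streak = 0
    return score

# A simple leaderboard turns individual scores into friendly competition.
leaderboard = {"avery": run_quiz(["y", "n"]), "sam": run_quiz(["y", "y"])}
print(sorted(leaderboard.items(), key=lambda kv: kv[1], reverse=True))
```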

In “The Forrester Wave: Security Awareness and Training Solutions, Q1 2020,” Jinan Budge, an analyst at Forrester Research, said, “Successful vendors deliver the ABCs of security: awareness, behavior and culture. Look for providers that truly understand how training contributes to your overall security culture and don’t just check the training requirement box.”

Later in the same report, she added: “Choose vendors that create positive content with inclusive, clear and compelling images and that engage users with alternative content types like gamification, microlearning and virtual reality (VR). Some vendors offer true gamification that involves teams, competition and advanced graphic design, engaging discerning audiences on a deeper level than multiple-choice tests or phishing simulations.”

 

Why Detection-As-Code Is the Future of Threat Detection

As security moves to the cloud, manual threat detection processes are unable to keep pace. This article will discuss how detection engineering can advance security operations just as DevOps improved the app development world. We’ll explore detection-as-code (DaC) and enumerate several compelling benefits of this trending approach to threat detection.

What is detection-as-code?

Detection-as-code is a systematic, flexible, and comprehensive approach to threat detection powered by software, in the same way that infrastructure-as-code (IaC) and configuration-as-code use machine-readable definition files and descriptive models to compose infrastructure at scale.

It is a structured approach to analyzing security log data used to identify attacker behaviors. Using software engineering best practices to write expressive detections and automate responses, security teams can build scalable processes to identify sophisticated threats across rapidly expanding environments.
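
To make that concrete, here is a minimal sketch of a detection expressed as code. It is illustrative only: the event field names (event_type, status, source_ip) and the threshold are assumptions, not taken from any particular product or log schema.

```python
from collections import defaultdict

# Hypothetical detection: flag any source IP with an unusually high number of
# failed logins in a batch of log events (a crude brute-force signal).
FAILED_LOGIN_THRESHOLD = 10

def detect_brute_force(events):
    """Return the set of source IPs whose failed-login count meets the threshold."""
    failures = defaultdict(int)
    for event in events:
        if event.get("event_type") == "login" and event.get("status") == "failure":
            failures[event.get("source_ip")] += 1
    return {ip for ip, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD}

if __name__ == "__main__":
    sample = [{"event_type": "login", "status": "failure", "source_ip": "203.0.113.7"}] * 12
    print(detect_brute_force(sample))  # prints {'203.0.113.7'}
```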

Done right, detection engineering — the set of practices and systems to deliver modern and effective threat detection — can advance security operations just as DevOps improved the app development world.

Similar to a CI/CD workflow for code changes, a detection engineering workflow might include the following steps (a minimal test sketch follows the list):

  • Observe a suspicious or malicious behavior
  • Model it in code
  • Write various test cases
  • Commit to version control
  • Deploy to staging, then production
  • Tune and update
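
Steps two through five can be made concrete with a small test module that runs automatically on every commit. The import path and helper below are hypothetical; they assume the brute-force detection sketched earlier lives in a module named brute_force:

```python
import unittest

# In a real repository this would import the detection from wherever it lives,
# e.g. `from detections.brute_force import detect_brute_force`; the path is hypothetical.
from brute_force import detect_brute_force

def failed_login(ip):
    """Helper to build a minimal failed-login event for the tests."""
    return {"event_type": "login", "status": "failure", "source_ip": ip}

class TestBruteForceDetection(unittest.TestCase):
    def test_triggers_on_repeated_failures(self):
        events = [failed_login("203.0.113.7") for _ in range(12)]
        self.assertIn("203.0.113.7", detect_brute_force(events))

    def test_ignores_a_handful_of_failures(self):
        events = [failed_login("198.51.100.5") for _ in range(3)]
        self.assertEqual(detect_brute_force(events), set())

if __name__ == "__main__":
    unittest.main()
```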

You can see that the detection engineering CI/CD workflow is not so much about treating detections as code but about improving detection engineering to be an authentic engineering practice; one that is built on modern software development principles.

The concept of detection-as-code grew out of security’s need for automated, systematic, repeatable, predictable, and shareable approaches. It is essential because threat detection was not previously fully developed as a systematic discipline with effective automation and predictably good results.

Threat detection programs that are precisely adjusted for particular environments and systems have the most potent effect. By using detections as well-written code that can be tested, checked into source control, and code-reviewed by peers, security teams can produce higher-quality alerts that reduce burnout and quickly flag questionable activity.

What are the benefits of detection-as-code?

The benefits of detection-as-code include the ability to:

  1. Build custom, flexible detections using a programming language
  2. Adopt a Test-Driven Development (TDD) approach
  3. Incorporate with version control systems
  4. Automate workflows
  5. Reuse code

Writing detections in a universally recognized, flexible, and expressive language like Python offers several advantages. Instead of using domain-specific languages with too many limitations, you can write more custom and complex detections to fit the precise needs of your enterprise. Detections written this way are also often more readable and easier to understand, which becomes crucial as complexity increases.

An additional benefit of using an expressive language is the ability to use a rich set of built-in or third-party libraries, developed by or familiar to security practitioners, for communicating with APIs, which improves the effectiveness of the detection.
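
As a hedged illustration of that expressiveness, the sketch below combines the standard library’s re and datetime modules in a single detection. The field names and the 06:00 to 20:00 UTC “business hours” window are assumptions made for the example; a real detection might also enrich the event via a threat-intelligence API using a third-party HTTP library:

```python
import re
from datetime import datetime, timezone

# Hypothetical list of client strings that suggest scripted rather than human activity.
SCRIPTED_AGENT = re.compile(r"(curl|sqlmap|nikto)", re.IGNORECASE)

def detect_offhours_scripted_login(event: dict) -> bool:
    """Flag successful logins outside assumed business hours (UTC) from a scripted client."""
    if event.get("event_type") != "login" or event.get("status") != "success":
        return False
    ts = datetime.fromisoformat(event["timestamp"]).astimezone(timezone.utc)
    off_hours = ts.hour < 6 or ts.hour >= 20
    scripted = bool(SCRIPTED_AGENT.search(event.get("user_agent", "")))
    return off_hours and scripted

if __name__ == "__main__":
    print(detect_offhours_scripted_login({
        "event_type": "login", "status": "success",
        "timestamp": "2024-05-02T03:14:00+00:00",
        "user_agent": "curl/8.5.0",
    }))  # prints True
```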

Quality assurance for detection code can illuminate detection blind spots, test for false positives, and promote detection efficacy. A TDD approach enables security teams to anticipate an attacker’s approach, document what they learn, and create a library of insights into the attacker’s strategy.

Over and above code correctness, a TDD approach improves the quality of detection code and enables more modular, extensible, and flexible detections. Engineers can easily modify their code without fear of breaking alerts or weakening security.
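
A test-first habit also makes false positives explicit. Continuing the hypothetical off-hours detection above (imported from an assumed module name), these tests pin down benign cases that must never page the on-call analyst:

```python
import unittest

# Hypothetical import path; in practice the detection lives in the repository under test.
from offhours_login import detect_offhours_scripted_login

class TestOffHoursLoginFalsePositives(unittest.TestCase):
    def test_daytime_scripted_login_does_not_alert(self):
        # Automation running during business hours is assumed to be expected activity.
        event = {"event_type": "login", "status": "success",
                 "timestamp": "2024-05-02T14:00:00+00:00", "user_agent": "curl/8.5.0"}
        self.assertFalse(detect_offhours_scripted_login(event))

    def test_offhours_browser_login_does_not_alert(self):
        # A human logging in late from a normal browser should not trigger the rule.
        event = {"event_type": "login", "status": "success",
                 "timestamp": "2024-05-02T03:00:00+00:00", "user_agent": "Mozilla/5.0"}
        self.assertFalse(detect_offhours_scripted_login(event))

if __name__ == "__main__":
    unittest.main()
```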

When writing or modifying detections, version control allows practitioners to revert to previous states swiftly. It also confirms that security teams are using the most updated detection. Additionally, version control can provide needed meaning for specific detections that trigger an alert or help identify changes in detections.

Over time, detections must change as new or additional data enters the system. Change control is an essential process to help teams adjust detections as needed. An effective change control process will also ensure that all changes are documented and reviewed.
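
One lightweight convention, offered here as an assumption rather than an established standard, is to keep change-control metadata next to the detection it describes, so reviewers can see at a glance what a rule is for and how it has evolved:

```python
# Hypothetical metadata block kept alongside the detection it documents.
DETECTION_METADATA = {
    "id": "AUTH-001",
    "title": "Brute-force login attempts",
    "version": "1.2.0",
    "owner": "detection-engineering",
    "references": ["https://attack.mitre.org/techniques/T1110/"],
    "changelog": [
        "1.2.0: raised threshold after tuning against production noise",
        "1.1.0: added unit tests for benign traffic",
        "1.0.0: initial detection",
    ],
}
```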

Security teams that have been waiting to shift security left will benefit from a CI/CD pipeline. Starting security operations earlier in the delivery process helps to achieve these two goals:

  • Eliminate silos between teams that work together on a shared platform and code-review each other’s work.
  • Provide automated testing and delivery systems for your security detections (see the sketch after this list). Security teams remain agile by focusing on building precision detections.
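
A pipeline gate can be as simple as a script the CI system runs before merging. The sketch below assumes a hypothetical repository layout (detections/*.py with tests under detections/tests/) and fails the build if a detection does not import or has no matching test file:

```python
import importlib.util
import pathlib
import sys

# Hypothetical layout: detection modules in detections/, their tests in detections/tests/.
DETECTIONS_DIR = pathlib.Path("detections")

def main() -> int:
    missing_tests = []
    for path in sorted(DETECTIONS_DIR.glob("*.py")):
        # Require a test file named after the detection module.
        test_file = DETECTIONS_DIR / "tests" / f"test_{path.name}"
        if not test_file.exists():
            missing_tests.append(path.name)
        # Importing the module catches syntax errors before anything is deployed.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
    if missing_tests:
        print(f"Detections without tests: {missing_tests}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```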

Finally, DaC promotes code reusability across broad sets of detections. As security detection engineers write detections over time, they start to identify patterns as they emerge. Engineers can reuse existing code to meet similar needs across different detections without starting completely over.
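
For example, a shared helper such as the hypothetical is_corporate_ip below can be written once and imported by several detections rather than copied into each one:

```python
import ipaddress

# Hypothetical shared helper reused by multiple detections instead of being duplicated.
CORPORATE_NETS = [ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("192.168.0.0/16")]

def is_corporate_ip(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CORPORATE_NETS)

# Two different detections built on the same helper.
def detect_admin_login_from_outside(event: dict) -> bool:
    return event.get("role") == "admin" and not is_corporate_ip(event["source_ip"])

def detect_bulk_download_from_outside(event: dict) -> bool:
    return event.get("bytes_out", 0) > 5_000_000_000 and not is_corporate_ip(event["source_ip"])
```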

Reusability is an essential part of detection engineering that allows teams to share functions across different detections or change and adjust detections for particular use cases.