Why Detection-As-Code Is the Future of Threat Detection

As security moves to the cloud, manual threat detection processes are unable to keep pace. This article discusses how detection engineering can advance security operations just as DevOps improved the app development world. We’ll explore detection-as-code (DaC) and enumerate several compelling benefits of this trending approach to threat detection.

What is detection-as-code?

Detection-as-code is a systematic, flexible, and comprehensive approach to threat detection powered by software, in the same way that infrastructure-as-code (IaC) and configuration-as-code use machine-readable definition files and descriptive models to compose infrastructure at scale.

It is a structured approach to analyzing security log data used to identify attacker behaviors. Using software engineering best practices to write expressive detections and automate responses, security teams can build scalable processes to identify sophisticated threats across rapidly expanding environments.

Done right, detection engineering — the set of practices and systems to deliver modern and effective threat detection — can advance security operations just as DevOps improved the app development world.

Similar to a CI/CD workflow, a detection engineering workflow might include the following steps:

  • Observe a suspicious or malicious behavior
  • Model it in code
  • Write various test cases
  • Commit to version control
  • Deploy to staging, then production
  • Tune and update

You can see that the detection engineering CI/CD workflow is not so much about treating detections as code but about elevating detection engineering into an authentic engineering practice, one built on modern software development principles.

The concept of detection-as-code grew out of security’s need for automated, systematic, repeatable, predictable, and shareable approaches. It is essential because threat detection was not previously fully developed as a systematic discipline with effective automation and predictably good results.

Threat detection programs that are precisely adjusted for particular environments and systems have the most potent effect. By using detections as well-written code that can be tested, checked into source control, and code-reviewed by peers, security teams can produce higher-quality alerts that reduce burnout and quickly flag questionable activity.

What are the benefits of detection-as-code?

The benefits of detection-as-code include the ability to:

  1. Build custom, flexible detections using a programming language
  2. Adopt a Test-Driven Development (TDD) approach
  3. Incorporate with version control systems
  4. Automate workflows
  5. Reuse code

Writing detections in a universally recognized, flexible, and expressive language like Python offers several advantages. Instead of using domain-specific languages with too many limitations, you can write custom, complex detections that fit the precise needs of your enterprise. Detections written this way are also often more readable and easier to understand, which becomes crucial as complexity increases.
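To make this concrete, here is a minimal sketch of what such a Python detection might look like. The rule shape, event field names, and threshold are invented for illustration, not taken from any specific detection platform:

```python
# Hypothetical detection-as-code rule: flag repeated failed console
# logins. Event field names and the threshold are illustrative.

FAILED_LOGIN_THRESHOLD = 5  # alert once this many failures accumulate

def rule(event):
    """Return True when this event should raise an alert."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"
        and event.get("failedAttempts", 0) >= FAILED_LOGIN_THRESHOLD
    )

def title(event):
    """Human-readable alert title, built from the event context."""
    user = event.get("userIdentity", {}).get("userName", "unknown user")
    return f"Repeated failed console logins for {user}"
```

Because the rule is ordinary code, it can be reviewed, versioned, and unit-tested like any other software artifact.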

An additional benefit of using an expressive language is the ability to draw on a rich set of built-in or third-party libraries, developed by or familiar to security practitioners, for communicating with APIs, which improves the effectiveness of the detection.

Quality assurance for detection code can illuminate detection blind spots, test for false positives, and promote detection efficacy. A TDD approach enables security teams to anticipate an attacker’s approach, document what they learn, and create a library of insights into the attacker’s strategy.

Over and above code correctness, a TDD approach improves the quality of detection code and enables more modular, extensible, and flexible detections. Engineers can easily modify their code without fear of breaking alerts or weakening security.
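To sketch the TDD idea with Python’s built-in unittest: the detection function below, its allow-list, and the event fields are all invented for illustration, but the pattern of pinning down both the malicious and the benign cases is the point:

```python
import unittest

def is_suspicious_login(event):
    """Toy detection under test: flag logins from outside an allow-list
    of expected countries. Field names and the list are assumptions."""
    allowed_countries = {"US", "CA"}
    return (event.get("action") == "login"
            and event.get("country") not in allowed_countries)

class TestSuspiciousLogin(unittest.TestCase):
    def test_flags_login_from_unexpected_country(self):
        self.assertTrue(is_suspicious_login({"action": "login", "country": "KP"}))

    def test_ignores_login_from_allowed_country(self):
        self.assertFalse(is_suspicious_login({"action": "login", "country": "US"}))

    def test_ignores_non_login_events(self):
        self.assertFalse(is_suspicious_login({"action": "logout", "country": "KP"}))
```

Each behavior the rule must (and must not) flag is written down before the detection ships, so a later edit that silently weakens the rule fails the suite instead of failing in production.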

When writing or modifying detections, version control allows practitioners to revert to previous states swiftly. It also confirms that security teams are using the most up-to-date detections. Additionally, version control can provide needed context for specific detections that trigger an alert and help identify changes in detections.

Over time, detections must change as new or additional data enters the system. Change control is an essential process to help teams adjust detections as needed. An effective change control process will also ensure that all changes are documented and reviewed.

Security teams that have been waiting to shift security left will benefit from a CI/CD pipeline. Starting security operations earlier in the delivery process helps to achieve these two goals:

  • Eliminate silos between teams that work together on a shared platform and code-review each other’s work.
  • Provide automated testing and delivery systems for your security detections. Security teams remain agile by focusing on building precision detections.

Finally, DaC promotes code reusability across broad sets of detections. As security detection engineers write detections over time, they start to identify patterns as they emerge. Engineers can reuse existing code to meet similar needs across different detections without starting completely over.

Reusability is an essential part of detection engineering that allows teams to share functions across different detections or change and adjust detections for particular use-cases…
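One way that sharing can look in practice is a common helper consumed by several detections. In this sketch, the helper, the network ranges, and the event fields are all invented for illustration:

```python
import ipaddress

# Hypothetical shared helper reused across detections.
# The internal ranges below are illustrative placeholders.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip_str):
    """True if the address falls inside one of the internal ranges."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in INTERNAL_NETS)

def ssh_from_external(event):
    """Detection 1: SSH sessions originating outside the corporate network."""
    return event.get("service") == "ssh" and not is_internal(event["src_ip"])

def admin_panel_from_external(event):
    """Detection 2: admin-panel access from outside, reusing the same helper."""
    return event.get("path") == "/admin" and not is_internal(event["src_ip"])
```

When the definition of "internal" changes, updating the single helper updates every detection that depends on it.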

 

How to Become an IT Industry Thought Leader

Many IT executives — and prospective executives — would like to share their knowledge and insights as industry thought leaders. Here’s a look at what you need to know to get started.

An IT industry thought leader or influencer is someone who uses their expertise and perspective to offer specialized guidance, inspire innovation, and motivate followers to business success. A thought leader’s followers can include colleagues, business partners, on-site and virtual conference audiences, and website and book readers, as well as social media followers.

Getting started as an IT thought leader requires industry experience, a winning personality, lessons to teach, and an eager audience. The most successful thought leaders address the two major needs of IT and business executives: making money and saving money, said Juan Orlandini, chief architect at Insight Enterprises, an IT systems and services provider.

Leaders, Not Followers

IT thought leaders don’t follow the crowd. “You have to be able to set a path that’s your own — a path that’s informed by the prevailing wisdom, not driven by it,” Orlandini said. “IT thought leaders, regardless of what level they’re at, also need to remain current and relevant with the changing landscape,” he added.

Beyond deep technical and/or business knowledge, becoming an IT thought leader requires a significant amount of self-reflection. Points to consider include motivations, such as career advancement, enterprise recognition, increasing product or services sales, building close ties with business partners, and perhaps even the desire to help improve and advance the IT community. “Understanding those motivations will help you plan your approach,” said Jeff Ton, a strategic IT advisor to IT solutions provider InterVision.

Gaining widespread recognition requires the ability to communicate ideas through multiple channels. “Critically assess your ability to write, to speak in front of audiences, to be the subject of an interview,” Ton advised. Getting started doesn’t require perfection, but if there are any glaring weaknesses, it’s important to address them. “For example, if the thought of public speaking makes your palms sweaty, find low-risk opportunities to speak to groups,” he suggested. If you need to hone your writing skills, seek out guest blog opportunities. “If conversation is more your thing, identify tech-related podcasts and propose being a guest on the program,” Ton recommended.

Ask your enterprise’s marketing department to help you get your thought leadership career off on the right foot. “Marketing can help you find opportunities to amplify your voice,” Ton said. “They can also help you with editing, fine-tuning your message, graphics, social media, and much more.”

Personal and Career Benefits

Most thought leaders launch their quest with the goal of enhancing their careers. “Within your company, other executives will gain an understanding of the way you think about your role, the business, and the industry,” Ton said. “They will begin to see you as more than just the ‘IT person’,” he noted.

Becoming a thought leader creates an instant credibility that can be used to build strong connections to C-suite executives, said Ari Lightman, a professor of digital media and marketing at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. It also creates a sense of pride in the IT department that there’s a leader who can help other in-house strategic thinkers work through challenging issues, he explained.

As they raise their industry profile, thought leaders are frequently targeted by enterprises searching for a new CIO. “They may think of reaching out to you prospectively because they know your name and know your reputation,” said Rich Temple, vice president and CIO at the Deborah Heart and Lung Center. “It becomes that much easier for potential employers to see your body of work and get to know you,” he noted. “That can be a real differentiator for you in a competitive job search.”

 

Why You Need a Data Fabric, Not Just IT Architecture

Data fabrics offer an opportunity to track, monitor and utilize data, while IT architectures track, monitor and maintain IT assets. Both are needed for a long-term digitalization strategy.

As companies move into hybrid computing, they’re redefining their IT architectures. IT architecture describes a company’s entire IT asset base, whether on-premises or in-cloud. This architecture is stratified into three basic levels: hardware such as mainframes, servers, etc.; middleware, which encompasses operating systems, transaction processing engines, and other system software utilities; and the user-facing applications and services that this underlying infrastructure supports.

IT architecture has been a recent IT focus because as organizations move to the cloud, IT assets also move, and there is a need to track and monitor these shifts.

However, with the growth of digitalization and analytics, there is also a need to track, monitor, and maximize the use of data that can come from a myriad of sources. An IT architecture can’t provide data management, but a data fabric can. Unfortunately, most organizations lack well-defined data fabrics, and many are still trying to understand why they need a data fabric at all.

What Is a Data Fabric?

Gartner defines a data fabric as “a design concept that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable and inferenced metadata assets to support the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.”

Let’s break it down.

Every organization wants to use data analytics for business advantage. To use analytics well, you need data agility that enables you to easily connect and combine data from any source your company uses, whether that source is an enterprise legacy database or data culled from social media or the Internet of Things (IoT). You can’t achieve data integration and connectivity without data integration tools, and you must also find a way to connect and relate disparate data in meaningful ways if your analytics are going to work.

This is where the data fabric enters. The data fabric contains all the connections and relationships between an organization’s data, no matter what type of data it is or where it comes from. The goal of the fabric is to function as an overall tapestry that interweaves all data so that data in its entirety is searchable. This has the potential not only to optimize data value, but to create a data environment that can answer virtually any analytics query. The data fabric does what an IT architecture can’t: it tells you what your data does and how data elements relate to one another. Without a data fabric, a company’s ability to leverage data and analytics is limited.

Building a Data Fabric

When you build a data fabric, it’s best to start small and in a place where your staff already has familiarity.

That “place” for most companies will be with the tools that they are already using to extract, transform and load (ETL) data from one source to another, along with any other data integration software such as standard and custom APIs. All of these are examples of data integration you have already achieved.

Now, you want to add more data to your core. You can do this by continuing to use the ETL and other data integration methods you already have in place as you build out your data fabric. In the process, take care to also add metadata about your data, including the data’s origin point, how it was created, what business and operational processes use it, and what form it takes (e.g., a single field in a fixed record, or an entire image file). By maintaining the data’s history, as well as all its transformations, you are in a better position to check data for reliability and to ensure that it is secure.
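As a minimal sketch of carrying that metadata alongside the data, an ETL step can wrap each record with its lineage. The record shape, source name, and transformation here are invented for illustration:

```python
from datetime import datetime, timezone

def extract(raw_rows, source_name):
    """Wrap each raw row with lineage metadata: where it came from and when."""
    return [
        {
            "data": row,
            "metadata": {
                "origin": source_name,
                "extracted_at": datetime.now(timezone.utc).isoformat(),
                "transformations": [],
            },
        }
        for row in raw_rows
    ]

def transform(records, name, fn):
    """Apply a transformation and append its name to each record's history."""
    out = []
    for rec in records:
        out.append({
            "data": fn(rec["data"]),
            "metadata": {
                **rec["metadata"],
                "transformations": rec["metadata"]["transformations"] + [name],
            },
        })
    return out

# Usage: normalize customer names pulled from a (hypothetical) legacy CRM.
records = extract([{"name": " Ada Lovelace "}], source_name="legacy_crm")
records = transform(records, "strip_whitespace",
                    lambda d: {**d, "name": d["name"].strip()})
```

Because every record carries its origin and transformation history, reliability and security checks can be answered from the record itself rather than reconstructed after the fact.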

As your data fabric grows, you will probably add data tools that are missing from your workbench. These might be tools that help with tracking data, sharing metadata, applying governance to data, etc. A recommendation in this area is to look for all-inclusive data management software that contains not only all the tools you’ll need to build a data fabric, but also important automation such as built-in machine learning.

The machine learning observes how data in your data fabric works together, and which combinations of data are used most often in different business and operational contexts. When you query the data, the ML assists in pulling together the data that is most likely to answer your queries.

 

9 best practices for network security

Network security is the practice of protecting the network and data to maintain the integrity, confidentiality and accessibility of the computer systems in the network. It covers a multitude of technologies, devices and processes, and makes use of both software- and hardware-based technologies.

Each organization, no matter what industry it belongs to or what its infrastructure size is, requires comprehensive network security solutions to protect it from the various cyberthreats in the wild today.

Network security layers

When we talk about network security, we need to consider layers of protection:

Physical network security

Physical network security controls prevent unauthorized persons from gaining physical access to the office and to network devices such as firewalls and routers. Physical locks, ID verification and biometric authentication are a few of the measures used to address such issues.

Technical network security

Technical security controls deal with the devices in the network and with data, both stored and in transit. Technical security also needs to protect data and systems from unauthorized personnel and from malicious activity by employees.

Administrative network security

Administrative security controls deal with security policies and compliance processes governing user behavior. They also cover user authentication, users’ privilege levels and the implementation of changes to the existing infrastructure.

Network security best practices

Now that we have a basic understanding and overview of network security, let’s focus on some of the network security best practices you should be following.

1. Perform a network audit

The first step to securing a network is to perform a thorough audit to identify weaknesses in the network posture and design. Performing a network audit identifies and assesses:

  • Presence of security vulnerabilities
  • Unused or unnecessary applications
  • Open ports
  • Anti-virus/anti-malware and malicious traffic detection software
  • Backups

In addition, third-party vendor assessments should be conducted to identify additional security gaps.
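One small, scriptable piece of such an audit, checking which of a host’s ports accept TCP connections, can be sketched with Python’s standard library. The host and port list below are placeholders, and such a check should only ever be run against systems you are authorized to audit:

```python
import socket

def find_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example (placeholder host): check a few common service ports.
# find_open_ports("127.0.0.1", [22, 80, 443, 3389])
```

In a real audit, the resulting list would be compared against an inventory of ports that are expected to be open, with everything else flagged for review.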

2. Deploy network and security devices

Every organization should have a firewall and a web application firewall (WAF) for protecting their website from various web-based attacks and to ensure safe storage of their data. To maintain the optimum security of the organization and monitor traffic, various additional systems should be used, such as intrusion detection and prevention (IDS/IPS) systems, security information and event management (SIEM) systems and data loss prevention (DLP) software.

3. Disable file sharing features

Though file sharing sounds like a convenient method for exchanging files, it’s advisable to enable file sharing only on a few independent and private servers. File sharing should be disabled on all employee devices.

4. Update antivirus and anti-malware software

Businesses purchase desktop computers and laptops with the latest versions of antivirus and anti-malware software but fail to keep them updated with new rules and signatures. Keeping antivirus and anti-malware software up to date ensures that devices are running with the latest bug fixes and security updates.

5. Secure your routers

A security breach or a security event can take place simply by hitting the reset button on the network router. Thus it is paramount to consider moving routers to a more secure location such as a locked room or closet. Also, video surveillance equipment and CCTV can be installed in the server or network room. In addition, the router should be configured to change default passwords and network names, which attackers can easily find online.

6. Use a private IP address

To prevent unauthorized users or devices from accessing critical devices and servers in the network, assign private IP addresses to them. This practice enables the IT administrator to easily spot unauthorized attempts by users or devices connecting to the network and to investigate any suspicious activity.

7. Establish a network security maintenance system

A proper network security maintenance system should be established, involving processes such as:

  1. Performing regular backups
  2. Updating software
  3. Scheduling changes to network names and passwords

Once a network security maintenance system is established, document it and circulate it to your team.
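As a small illustration of the first of those processes, a routine backup step can be automated in a few lines of Python. The directory names here are placeholder assumptions:

```python
import tarfile
from datetime import date
from pathlib import Path

def backup_directory(src_dir, dest_dir):
    """Create a dated .tar.gz archive of src_dir inside dest_dir."""
    src = Path(src_dir)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"{src.name}-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return archive

# Example (placeholder paths): back up device configs nightly via a scheduler.
# backup_directory("/etc/network-configs", "/var/backups")
```

A job scheduler such as cron would invoke a script like this on the cadence your maintenance policy defines, with the resulting archives copied off-host.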

 

What Will Be the Next New Normal in Cloud Software Security?

Accelerated moves to the cloud made sense at the height of the pandemic — organizations may face different concerns in the future.

Organizations that accelerated their adoption of cloud native apps, SaaS, and other cloud-driven resources to cope with the pandemic may have to weigh other security matters as potential “new normal” operations take shape. Though many enterprises continue to make the most of remote operations, hybrid workplaces might be on the horizon for some. Experts from cybersecurity company Snyk and SaaS management platform BetterCloud see new scenarios in security emerging for cloud resources in a post-pandemic world.

The swift move to remote operations and work-from-home situations naturally led to fresh concerns about endpoint and network security, says Guy Podjarny, CEO and co-founder of Snyk. His company recently issued a report on the State of Cloud Native Application Security, exploring how cloud-native adoption affects defenses against threats. As more operations were pushed remote and to the cloud, security had to discern between authorized personnel who needed access from outside the office versus actual threats from bad actors.

Decentralization was already underway at many enterprises before COVID-19, though that trend may have been further catalyzed by the response to the pandemic. “Organizations are becoming more agile and the thinking that you can know everything that’s going on hasn’t been true for a long while,” Podjarny says. “The pandemic has forced us to look in the mirror and see that we don’t have line of sight into everything that’s going on.”

This led to distribution of security controls, he says, to allow for more autonomous usage by independent teams who are governed in an asynchronous manner. “That means investing more in security training and education,” Podjarny says.

A need for a security-based version of digital transformation surfaced, he says, with more automated tools that work at scale, offering insight into distributed activities. Podjarny says he expects most security needs that emerged amid the pandemic to remain after businesses can reopen more fully. “The return to the office will be partial,” he says, expecting some team members not to be onsite, whether for personal, work-life reasons or because organizations want to take advantage of a smaller office footprint.

That could lead to some issues, however, with the governance of decentralized activities and related security controls. “People don’t feel they have the tools to understand what’s going on,” he says. The net changes that organizations continue to make in response to the pandemic, and what may come after, have been largely positive, Podjarny says. “It moves us towards security models that scale better and are adapted to the SaaS, remote-working reality.”

The rush to cloud-based applications such as SaaS and platform-as-a-service at the onset of the pandemic brought on some recognition of the necessity to offer ways to maintain operations under quarantine guidelines. “Employees were just trying to get the job done,” says Jim Brennan, chief product officer with BetterCloud. Spinning up such technologies, he says, enabled staff to meet those goals. That compares with the past where such “shadow IT” actions might have been regarded as a threat to the business. “We heard from a lot of CIOs where it really changed their thinking,” Brennan says, which led to efforts to facilitate the availability of such resources to support employees.

Meeting those needs at scale, however, created a new challenge. “How do I successfully onboard a new application for 100 employees? One thousand employees? How do I do that for 50 new applications? One hundred new applications?” Brennan says many CIOs and chief security officers have sought greater visibility into the cloud applications that have been spun up within their organizations and how they are being used. BetterCloud produced a brief recently on the State of SaaS, which looks at SaaS file security exposure.

Automation is being put to work, Brennan says, to improve visibility into those applications. This is part of the emerging landscape that even sees some organizations decide that the concept of shadow IT — the use of technology without direct approval — is a misnomer. “A CIO told me they don’t believe in ‘shadow IT,’” he says. In effect, the CIO regarded all IT, authorized or not, as a means to get work done.

 

When security and resiliency converge: A CSO’s perspective on how security organizations can thrive

You’ve just been hired to lead the security program of a prominent multinational organization. You’re provided a seasoned team and budget, but you can’t help looking around and asking yourself, “How will I possibly protect every asset of this company, every day, against every threat, globally?” After all, this is the expectation of most organizations, their customers and shareholders, as well as regulators and lawmakers. In my experience, one of the top challenges security leaders face is trying to optimize a modest security budget to protect a highly complex and ever-expanding organizational attack surface. In fact, Accenture found that 69% of security professionals say staying ahead of attackers is a constant battle and the cost is unsustainable. For most, this challenge is extremely discouraging. However, success is not necessarily promised to those with resources – it’s more about how resourceful you can be.

As organizations worldwide digitally transform at a breakneck pace, the stakes are increasing for cybersecurity programs. Cyberattacks no longer just take down websites and internal email. They can disrupt the availability of every revenue-generating digital business process, threatening the very existence of many organizations. With this heightened risk, organizations must shift from a prevention-first mindset to one that balances aggressive prevention measures with a keen focus on enabling efficient consequence management. This shouldn’t be read as a response-only strategy, but it does mean:

  1. Designing business processes to minimize single points of failure and reduce sensitivity to technology and data latency, recognizing that technology and data risk is extremely high in today’s environment.
  2. Focusing asset protection programs disproportionately on the assets that underpin the most critical business processes or present the greatest risk.
  3. Architecting technology to anticipate and recover from persistent, sophisticated attacks, as the “zero trust” approach suggests.
  4. Establishing an organizational culture that acknowledges, anticipates, accepts and thrives in a pervasive threat environment.

Most cybersecurity leaders today only focus on, or are limited to focusing on, the third of these four items. Many are aggressively pursuing zero trust related modernization programs to increase the technology resilience of their organization’s systems and networks. However, the other three strategic imperatives are not achieved due to a lack of organizational knowledge, access, influence or governance.

The same can be said for physical security leaders, who likely do their best to focus on the second item, but may not understand the interdependency between an organization’s physical and digital assets. Unless a building is labeled as a data center, they may be unlikely to protect those physical assets that are most critical to their organization’s digital operations.

All security programs, both digital and physical, struggle to achieve the fourth item, limited by their lack of business access and influence. So, how do security organizations move from being security-centric to business-centric? The journey starts by taking a converged approach.

Why converge?

Implementing a converged security organization is perhaps one of the most resourceful and beneficial business decisions an organization can make when seeking to enhance security risk management. In this era of heightened consequences and sophisticated security threats, the need for integration between siloed security and risk management teams is imperative. The need for collaboration between those two teams and the business is equally imperative.

In my role as the Chief Security Officer of Dell Technologies, I oversee a converged organization with responsibility for physical security, cybersecurity, product security, privacy and enterprise resiliency programs, including business continuity, disaster recovery and crisis management. As discussed in a recent article, organizations that treat different aspects of security – such as physical and cybersecurity – as separate endeavors, can unintentionally undermine one area and in turn, weaken both areas. With a converged organization, the goal is to bring those once-separate entities together in a more impactful manner. I’ve seen convergence lead to greater effectiveness in corporate risk management practices. But, the benefits don’t stop there. It also increases financial and operational efficiency, improves stakeholder communications and strengthens customer trust.

Over the course of this series, I will walk you through how security, privacy and resiliency teams with seemingly different capabilities and goals can work together to advance one another’s priorities, all while marching towards one common goal – greater organizational outcomes. First up, let’s discuss the benefits gained from converging enterprise resiliency and security programs.

The road less traveled – benefits of converging resiliency and security

While I’ve observed an increase in organizations merging cybersecurity and physical security programs, I’ve seen fewer organizations bring resiliency into the mix, despite it being potentially more important. In fact, an ASIS study found that only 19% of organizations converged cybersecurity, physical security and business continuity into a single department.

In my experience, converging resiliency programs with all security programs enables organizations to consistently prepare for and respond to any security incident or crisis – natural disaster, global pandemic or cyberattack – with a high degree of resiliency. More importantly, converging these programs empowers security organizations to achieve the strategic imperatives mentioned earlier.

Now, let’s look at some of the more specific benefits:

  1. Business continuity programs help prioritize security resources

As discussed earlier, one of the main challenges for security leaders is trying to find resourceful ways to adequately secure the breadth of a company’s assets, often with a less-than-ideal budget that limits implementing leading security practices across every asset. By converging business continuity, a core component of a resiliency program, with cybersecurity and physical security programs, security leaders can identify the most critical business processes and the digital and physical assets that underpin them. This in turn provides clear priorities for security focus and investment.

Non-converged security organizations generally prioritize their focus through the lens of regulatory and litigation risk, rather than having a deep understanding of business operational risk and its ties to revenue generation. For a physical security leader, this may look like prioritizing physical security resources in countries that have stronger regulatory oversight and more stringent fine structures, or those that contain the most employees. For a cybersecurity leader, it may mean focusing on databases that contain the most records of personal information, a costly data element to lose. While these approaches are not wrong, they are incomplete. In fact, the most critical business assets don’t often look like those most commonly prioritized by security. It requires a business lens to find the assets that the business depends upon to thrive, rather than focusing on the assets that might lead to a lawsuit if left unprotected. It means thinking about business risk more holistically.

Business continuity planners have perfected the art of applying a business lens to explore complex, interdependent business processes, some of which even sit with third parties. When organizations don’t plan for continuity well, it isn’t until an incident strikes that they discover most of their company’s revenue depended on an overlooked single point of failure.

However, business continuity alone is typically only looking for issues of availability. By converging resiliency and security programs, business impact assessments and security reviews can merge, resulting in more holistic assessments that consider both business and security risk across the full spectrum of availability, confidentiality and integrity issues. As a further sweetener, business stakeholders can have a single conversation with the converged risk program, reducing distractions that pull them from their primary business focus.

By integrating these two programs, converged security organizations can ensure their priorities are closely aligned with the business’ priorities. Whether it be digital assets, buildings or people, an organization’s most critical assets are clearly identified and traced to critical business processes through robust business continuity planning, then secured. Tying these programs together enables security leaders to protect what matters most, the most, which is the most important benefit of converging security and resiliency programs.

  2. Security makes business continuity programs smarter

For the modern security professional, the only thing better than spotting a difficult-to-find critical business asset in need of protection is for a business to improve its processes and reduce the number of assets needing protection in the first place. By embedding security context into the continuity planning process, business continuity programs become smarter. With this knowledge, converged organizations can more effectively propose process engineering opportunities that optimize security budgets and reduce organizational risk. This is particularly true where the resiliency team has deeper access and insights into the supported organization than the security team.

Typically, business continuity planners are introduced to business processes and underlying assets only after they are in place, which means planners discover existing resiliency risks. Contrast that with modern security programs embedded in business and digital transformation projects from the beginning. By merging security and business continuity programs, the value proposition shifts from “smart discovery” of business process reengineering opportunities to one of resilient and secure business process engineering from the initial design point, helping organizations get it right the first time.

This type of value can extend from the most tactical processes to more strategic business initiatives, such as launching a new design center overseas. Converged security organizations can share a holistic, converged risk picture to inform business decision making. A typical converged risk assessment for such a project may consider historical storm patterns, geopolitical instability, national economic espionage, domestic terrorism, labor risk and so on. This holistic view results in better risk decisions and better business outcomes.

  3. Security and crisis management go together like peanut butter and jelly

Crisis management is another core capability of resiliency programs. The benefit of converging crisis management and security programs is twofold. First, security is often the cause of the crisis. Historically, organizational crises would be a broad mix of mismanagement, natural, political, brand, labor and other issues. In the last year alone, the world has seen a dramatic rise in cyberattacks.

Second, this is the area where the culture of the two organizations is most closely aligned, allowing for low-friction integration and improvement. Crisis management professionals are accustomed to preparing for and managing through low-likelihood, high-impact events and facilitating critical decisions quickly, with imperfect information. If you ask a security leader what the motion of their organization looks like, you will likely get an identical answer. Leaders can unify and augment these skillsets and capabilities by bringing crisis management and security programs together. And, this is becoming more important in a world where consequence management – how capably a company responds when things go wrong – can be the difference between a glancing blow and a knockout.

  4. Disaster recovery programs thrive when paired with security

Disaster recovery teams focus on identifying critical data and technology, and ensuring it is architected and tested to handle common continuity disruptions. In a mature resiliency program, this means close relationships between continuity planners and application owners. Often, however, resiliency programs struggle to gain deep access and influence within technology organizations, or the disaster recovery technology-centric arm of the program is challenged to integrate with the more business-centric continuity planning arm. A converged resiliency and security program eases these challenges.

Disaster recovery programs often sit within the technology organizations themselves, and in those cases, technology integration is not a challenge. However, these programs can sometimes struggle to maintain close access to the business organizations they support. In these cases, converging resiliency and physical security programs enables teams to leverage the strong business relationships and closer business access that physical security programs often have. By integrating these programs, physical security teams can create the inroads needed so disaster recovery programs can deliver the most value in a business-connected manner.

Conversely, disaster recovery programs that sit within business or resiliency teams can often struggle to gain traction with an organization’s technical owners. In these cases, converging disaster recovery with a cybersecurity program can be a game changer. Cybersecurity core programs focus on application, database and system security, and have an existing engagement model with those the disaster recovery teams need to influence. By integrating with cybersecurity programs, disaster recovery teams can leverage existing processes and organizational relationships to accelerate their impact. The integration of these programs also provides a more efficient unified engagement model for the technology asset owners, creating overall efficiency for the organization.

Finally, the cause of disaster recovery events is increasingly cybersecurity related. Disaster recovery teams must adjust their architectures and programs to account for ransomware, destructive malware attacks and other evolving threats. The expertise needed to do this well rests with cybersecurity organizations who, once converged, are well positioned to help with this journey.

  5. Security brings digital expertise to resiliency programs

Consider this: When a hurricane strikes, the location and severity of the storm’s eye depends on the time of day, the topography and numerous meteorological factors. It doesn’t target you specifically. Organizations are informed of the hurricane’s arrival days in advance. And, the organization is not the only victim of the hurricane, so external support is mobilized and resources are provided. Given all these factors, organizations infrequently experience the most severe possible outcomes. Now, consider a typical cyber crisis: When a ransomware attack strikes, it is without warning, usually targeting and impacting the most critical business assets and is designed to hit at the most inopportune time. Moreover, the victim is often blamed, which means outside help is scarce. Of course, organizations should continue planning for hurricanes, earthquakes, pandemics and other natural disasters, but the evolution of digital crises makes the resiliency threat landscape more complex. The results of these troubling trends: Cybercrime will have cost the world $6 trillion by the end of this year, up from $3 trillion in 2015. Natural disasters globally cost $84 billion in 2015.

Business continuity professionals have thrived for decades by helping their organizations predict and prepare for natural disasters and physical security incidents. To date, the best practice to prepare resilient data centers is to evaluate redundant electrical grid availability, historical weather patterns, earthquake trends and, most importantly, to confirm that the backup data center doesn’t reside within a certain physical distance of the primary data center. Cyber threats have added new challenges to this equation, as even two ideally positioned, geographically distanced, modern data centers often rely on the same underlying cyber systems and networks. It’s not uncommon to find ransomware attacks, which travel at the speed of light and aren’t bound by physical distance, devastating organizations when both primary and backup data centers are encrypted for ransom or, worse, deleted by destructive malware. This is only one example that highlights the new resiliency risks created by the world’s recent dramatic increase in digital dependency and cyber threats. By converging cybersecurity and resiliency programs, organizations are better positioned to contend with this challenging new reality.

 

When Certificate Management Becomes Daunting, Automate It

Tracking and managing digital certificates has become a challenge that overwhelms many IT managers and security professionals, making the task a clear candidate for automation. The sheer number of servers, users, devices, and software applications in today’s enterprise that require authentication is daunting—particularly for those still using massive Excel sheets.

Despite the lost productivity and increased risks of manually managing private and public certificates, automation of certificate management remains less common than one would assume.

Yes, there are various third-party tools and IETF protocols that can help modernize this onerous task, but many organizations have holes in the process, making efficient certificate management elusive. Some organizations have only a few dozen certificates to manage, while others, especially those organizations with facilities and offices spread across continents and oceans, may have tens of thousands. In addition, for companies with millions of edge devices and sensors, tracking and managing the security certificates required to establish and ensure secure communications is a herculean task.

By adopting a complete certificate automation solution, enterprises can reduce the risk of breaches or outages caused by expired certificates or by certificates unknowingly deployed in their environment. In addition, automated certificate lifecycle management enables businesses to respond quickly and with agility as the security and business landscape evolves and new types of cybersecurity threats emerge.

There are four pillars of certificate automation designed to take enterprises from tactical to strategic certificate management: Discovery, Deployment, Lifecycle Management, and Renewal. These pillars feed into a single pane of glass platform for complete visibility.

 

Four Pillars of Certificate Automation
 

#1 Discovery

Finding and cataloging all of your company’s certificates is essential for securing a modern enterprise. There are likely rogue certificates floating around that were not directly issued by your IT or network security teams. Often, certificates from smaller activities and projects in other lines of business are not appropriately decommissioned, and these seemingly non-critical certificates become ticking time bombs which, if forgotten or ignored, can create massive security vulnerabilities.

By using an automated discovery process that regularly scans an entire environment, network or security admins can identify every certificate across the organization, ensuring that no rogue certificates go undiscovered until they have opened the door to a cyberattack or created other security issues.
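As a sketch of what such a scan records, the snippet below fetches a server's leaf certificate and computes its remaining validity. The host name, port, and dates are illustrative assumptions; the `notAfter` date format matches what Python's `ssl.getpeercert()` returns.

```python
# Hedged sketch of automated certificate discovery. A real scanner would
# sweep address ranges and ports and write results to an inventory store.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now):
    """Parse a notAfter string in the format ssl.getpeercert() returns,
    e.g. 'Jun  1 12:00:00 2021 GMT', and return whole days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - now).days

def fetch_cert_not_after(host, port=443, timeout=5.0):
    """Fetch the notAfter field of a live endpoint's leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

if __name__ == "__main__":
    # Offline example with a fixed "now" so the output is deterministic:
    now = datetime(2021, 5, 1, tzinfo=timezone.utc)
    print(days_until_expiry("Jun  1 12:00:00 2021 GMT", now))  # 31
```

A scheduled job could run `fetch_cert_not_after` across an asset list and flag anything expired or unrecognized.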

#2 Deployment

Manually provisioning or registering a certificate at the right time for the right purpose is an incredibly time-intensive task. Merely deploying an SSL certificate on just one server could take up to two hours! And that is just the beginning.

Now add the other required tasks such as documenting each certificate’s location and purpose, configuring certificates according to myriad endpoint devices and varying operating systems, and then confirming that each performs correctly. This can require a lot of additional time and effort.

Today’s enterprises need to be quick-moving and agile to keep up with constant flux and rapid change. Beyond time saved, automated deployment means reduced human error and increased reliability and consistency. Fortunately, IETF standards, like the Automated Certificate Management Environment (ACME) protocol, are gaining traction and cover most use cases for end-to-end certificate management.
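As a hedged illustration of ACME in practice, the commands below use certbot, one widely used ACME client; the domain and webroot path are placeholders, not a recommendation for any particular setup.

```shell
# Issuing a certificate via ACME with certbot (illustrative values).
certbot certonly --webroot -w /var/www/example -d www.example.com

# Unattended, scheduled renewal; certbot renews only certificates that
# are close to expiry, so this is safe to run from cron or a systemd timer.
certbot renew --quiet
```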

#3 Lifecycle management

Certificates encode the requirements and policies that enterprises use to define trust within their organization, extending the security benefits of using only highly trusted key architectures.

To ensure a certificate is always in its best possible state, organizations need to be able to revoke and replace certificates on demand, quickly and efficiently.  Spending more than two hours per certificate is unreasonable. It needs to happen seamlessly and at scale.

Automated lifecycle management makes revoking and replacing certificates a touch-free process. And administrators no longer need to wait until the expiration date to make critical certificate upgrades. Instead, they can simply order and provision new valid certificates and easily revoke old or noncompliant certificates. The platform manages these changes without downtime.

#4 Renewal

All certificates have an expiration date. It is a fundamental trust element that certificates are time-bound and will need to be replaced. Effective September 2020, browsers further shortened certificate validity to a 398-day period, sending organizations still manually managing hundreds or thousands of certificates with spreadsheets into panic mode. When certificates expire without having been replaced, that is when we start to see headlines about costly outages or breaches.

Timely certificate renewal is a cornerstone of cybersecurity, and reliance on manual management increases the risk of human errors.

Some organizations claim to be automated because part of their process, such as receiving email notifications about the impending certificate expiration, is automated. Unfortunately, many of these emergency notice emails end up in a flooded inbox or spam folder or are sent to someone on vacation or who is no longer with the organization. More importantly, an email is just an alert. It does not actually DO anything. It does not actually renew and install the new certificate.

While it is essential for organizations to know that renewals are coming, the most significant value of automating renewals is that the entire process is scheduled to run with minimal action from individual contributors.
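A minimal sketch of that scheduled renewal pass, assuming a hypothetical inventory mapping each certificate to its days of remaining validity; the 30-day window is an illustrative policy, not a standard.

```python
# Sketch of a scheduled renewal pass over a certificate inventory.
RENEWAL_WINDOW_DAYS = 30  # renew well before the 398-day validity limit

def certs_to_renew(inventory, window=RENEWAL_WINDOW_DAYS):
    """Return the certificates whose remaining validity is inside the
    renewal window (including already-expired ones), sorted by name."""
    return sorted(name for name, days_left in inventory.items()
                  if days_left <= window)

if __name__ == "__main__":
    inventory = {"api.example.com": 12, "www.example.com": 200,
                 "vpn.example.com": -3}
    print(certs_to_renew(inventory))  # ['api.example.com', 'vpn.example.com']
```

Unlike an email alert, a pass like this can feed directly into provisioning, so the renewal actually happens rather than merely being announced.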

Single pane of glass visibility for monitoring and management

Enterprises are best served by using one certificate management platform—a so-called “single pane of glass”—to discover, deploy, manage lifecycles, and renew all digital certificates.

Visibility is the critical capability enterprises require to enable and enhance the four pillars. Having this one-stop insight speeds up and simplifies certificate management, making it easier to track and monitor the certificate types, vendors, public and private certificates, cryptographic choices, and upcoming certificate expirations. Such visibility is also the basis for sound corporate governance of trust policies and compliance audits.

 

 

Meet Leanne Hurley: Cloud Expert of the Month – April 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Leanne Hurley.

After starting out at the front counter of a two-way radio shop in 1993, Leanne worked her way from face-to-face customer service, to billing, to training and finally into sales. She has been in sales since 1996 and has (mostly!) loved every minute of it. Leanne started selling IaaS (whether co-lo, Managed Hosting or Cloud) during the dot-com boom and has expanded her expertise during her time at SAP. Now, she enjoys leading a team of sales professionals as she works with companies to improve business outcomes and accelerate digital transformation utilizing SAP’s Intelligent Enterprise.

When did you join Cloud Girls and why?

I was one of the first members of Cloud Girls in 2011. I joined because having a strong network and community of women in technology is important.

What do you value about being a Cloud Girl?  

I value the relationships and women in the group.

What advice would you give to your younger self at the start of your career?

Stop doubting yourself. Continue to ask questions and don’t be intimidated by people that try to squash your tenacity and curiosity.

What’s your favorite inspirational quote?

“You can have everything in life you want if you will just help other people get what they want.”  – Zig Ziglar

What one piece of advice would you share with young women to encourage them to take a seat at the table?

Never stop learning and always ask questions. In technology, women (and men too, for that matter) avoid asking questions because they think it reveals some sort of inadequacy. That is absolutely false. Use your curiosity and thirst for knowledge as a tool; it will serve you well all your life.

You’re a new addition to the crayon box. What color would you be and why?

I would be Sassy-molassy because I’m a bit sassy.

What was the best book you read this year and why?

I loved American Dirt because it humanized the US migrant plight and reminded me how blessed and lucky we all are to have been born in the US.

What’s the most useless talent you have? Why?

 

3 signs that it’s time to reevaluate your monitoring platform

As we move forward from the uncertainty of 2020, remote and hybrid styles of work are likely to remain beyond the pandemic. Amid the rise of modified workflows, we’ve also seen an increase in phishing scams, ransomware attacks, and simple user errors that result in the IT infrastructures we rely on crashing – sometimes with devastating long-term repercussions for the business. What’s needed to prevent this is a reliable monitoring system that is constantly scanning your environment – whether you’re operating from a data center, a public cloud, or some combination – to alert you when something is amiss. Often these monitoring tools run so smoothly in the background of operations that we forget they’re even there – which can be a big problem.

When is the last time you assessed your monitoring platform? You may have already noticed signs indicating that your tools are not keeping up with the rapidly changing digital workforce – gathering nonessential data while failing to forewarn you about legitimate issues in your network operations. Post-2020, these systems have to handle workforces that stay connected digitally regardless of where employees are working. Your monitoring tools should be hyper-focused on alerting you to issues from outside your network and any weakness from within it. Often, we turn out to be monitoring too much and still missing the essential problems until it’s too late.

  1. Outages

One of the most damaging and costly setbacks a business can experience is network downtime: your network suddenly and without warning ceases to work. Applications are no longer functioning, files are inaccessible, and your business cannot perform its daily functions. Responding to network downtime isn’t a simple matter of rebooting your computer, either. Gartner estimates that for every minute of network downtime, the company in question loses an average of $5,600. On the higher end of this spectrum, a business could lose $540,000 per hour. Those figures are based on lost productivity alone. Getting your system up and running again, catching up on lost time, and, ideally, reevaluating and implementing a new monitoring system all incur additional costs.
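For context, the per-minute and per-hour figures quoted above relate as follows (a quick check using only the numbers cited):

```python
# Relating the per-minute and per-hour downtime cost figures quoted above.
avg_per_minute = 5_600       # Gartner's average cost, USD per minute
high_per_hour = 540_000      # quoted high-end cost, USD per hour

print(avg_per_minute * 60)   # 336000 USD per hour on average
print(high_per_hour / 60)    # 9000.0 USD per minute at the high end
```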

In the case of one luxury hotel chain, an updated monitoring system accurately detected why they were experiencing outages – a change in network configuration. With that insight, the chain quickly reverted the network change and restored service for their customers, saving hours of troubleshooting and costly downtime.

Systems should be proactive, not reactive. The time to reassess your monitoring infrastructure isn’t after it fails to warn you that something has gone wrong. Your network monitoring system should be automatically measuring performance and sharing status updates so you can fix a problem before it happens. If your system is working at its proper capacity, it will routinely prevent unexpected outages by using performance thresholds to evaluate functionality in real time, alerting you when targeted metrics have reached a level that requires attention. With a robust monitoring system in place, your team has complete network visibility and can respond to changes and prevent outages before they happen.
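The threshold-based evaluation described above can be sketched as follows; the metric names and threshold values are hypothetical examples, not recommendations.

```python
# Sketch of threshold-based proactive monitoring.
THRESHOLDS = {"cpu_percent": 90.0, "disk_percent": 85.0, "latency_ms": 250.0}

def evaluate(metrics, thresholds=THRESHOLDS):
    """Return one alert string for every metric at or above its threshold."""
    return [f"ALERT {name}={value} (threshold {thresholds[name]})"
            for name, value in sorted(metrics.items())
            if name in thresholds and value >= thresholds[name]]

if __name__ == "__main__":
    sample = {"cpu_percent": 97.0, "disk_percent": 40.0, "latency_ms": 310.0}
    print(evaluate(sample))
```

A real platform would evaluate a loop like this continuously against live telemetry and route the resulting alerts to a dashboard or on-call rotation.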

  2. Alert Fatigue

Alert fatigue is something we can all relate to following a year of working from home: email notifications, instant messages, texts, phone calls, and calendar reminders for your next video meeting. After so many of these day after day, we become desensitized to them; the more alerts we receive, the less urgent any of them seem. From a cybersecurity standpoint, some notifications may flag anomalies linked to a potential cyberattack, but more often they will be junk email. If a genuinely urgent message does come through, it often slips through the cracks because it seems no different from any other notification we receive.

So how can your IT infrastructure help prevent this? Intelligent monitoring systems, in general, aim to make the lives of the people using them easier. Your monitoring system should reduce the number of redundant alerts to recognize and prioritize actual issues. A tiered-alert priority system displays notifications on your dashboard with a visual or auditory cue signifying how important each one is. Can this wait until the afternoon, or does it need to be addressed immediately? Detecting a cyberattack early, for example, can make a huge difference in mitigating damage.
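A tiered-alert queue of the kind described might look like the following sketch; the severity labels and their ordering are illustrative assumptions.

```python
# Sketch of a tiered-alert queue: drop duplicate notifications, then
# order what remains by severity so urgent items surface first.
SEVERITY_ORDER = {"critical": 0, "warning": 1, "info": 2}

def prioritize(alerts):
    """alerts: list of (severity, message) tuples. Deduplicate while
    keeping first occurrences, then sort most urgent first."""
    unique = list(dict.fromkeys(alerts))  # preserves insertion order
    return sorted(unique, key=lambda alert: SEVERITY_ORDER.get(alert[0], 99))

if __name__ == "__main__":
    queue = [("info", "nightly backup completed"),
             ("critical", "auth anomaly: possible attack"),
             ("info", "nightly backup completed")]
    print(prioritize(queue))
```

Deduplication addresses the fatigue problem directly: the repeated routine notice collapses to one entry, while the potential attack rises to the top of the dashboard.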

  3. Excess Tools

One of the root causes of any monitoring flaw can be excessive monitoring tools themselves – over-monitoring. If you have multiple tools tracking your network, you’re likely getting notifications and warnings from each, contributing to alert fatigue and opening yourself up to a potential failure that results in a network outage and business interruption. Having multiple tools performing the same function is a waste of resources, as they render each other redundant. The key is to consolidate the necessary functions in one monitoring system, regularly assessed for vulnerabilities and customized for your particular business needs.

Your business stakeholders will indeed want to track an abundance of metrics – server functionality, security, business metrics, and so on – and it may be that not all of these things can be monitored by the same tool. You should first decide which things are essential for your team to be actively monitoring and assessing. Security should be a top priority, but are there other data points that can be pulled in a quarterly or annual report instead? Your IT monitoring should be focused on tracking and alerting you to essential information and irregularities. You can avoid overextending the team and receiving alerts that will only be ignored by first doing your own assessment of what you need from your system.

Assessing Your Approach for Future Growth

We can’t operate at our full potential without the control and visibility that monitoring tools give us.

 

Protecting Remote Workers Against the Perils of Public WI-FI

In a physical office, front-desk security keeps strangers out of work spaces. In your own home, you control who walks through your door. But what happens when your “office” is a table at the local coffee shop, where you’re sipping a latte among total strangers?

Widespread remote work is likely here to stay, even after the pandemic is over. But the resumption of travel and the reopening of public spaces raises new concerns about how to keep remote work secure.

In particular, many employees used to working in the relative safety of an office or private home may be unaware of the risks associated with public Wi-Fi. Just like you can’t be sure who’s sitting next to your employee in a coffee shop or other public space, you can’t be sure whether the public Wi-Fi network they’re connecting to is safe. And the second your employee accidentally connects to a malicious hotspot, they could expose all the sensitive data that’s transmitted in their communications or stored on their device.

Taking scenarios like this into account when planning your cybersecurity protections will help keep your company’s data safe, no matter where employees choose to open their laptops.

The risks of Wi-Fi search

An employee leaving Wi-Fi enabled when they leave their house may seem harmless, but it leaves them incredibly vulnerable. Wi-Fi enabled devices can reveal the network names (SSIDs) they normally connect to when they are on the move. An attacker can then use this information to emulate a known “trusted” network that is not encrypted and pretend to be that network. Many devices will automatically connect to these “trusted” open networks without verifying that the network is legitimate.

Often, attackers don’t even need to emulate known networks to entice users to connect. According to a recent poll, two-thirds of people who use public Wi-Fi set their devices to connect automatically to nearby networks, without vetting which ones they’re joining.

If your employee automatically connects to a malicious network — or is tricked into doing so — a cybercriminal can unleash a number of damaging attacks. The network connection can enable the attacker to intercept and modify any unencrypted content that is sent to the employee’s device. That means they can insert malicious payloads into innocuous web pages or other content, enabling them to exploit any software vulnerabilities that may be present on the device.

Once such malicious content is running on a device, many technical attacks are possible against other, more important parts of the device software and operating system. Some of these provide administrative or root level access, which gives the attacker near total control of the device. Once an attacker has this level of access, all data, access, and functionality on the device is potentially compromised. The attacker can remove or alter the data, or encrypt it with ransomware and demand payment in exchange for the key.

The attacker could even use the data to emulate and impersonate the employee who owns and/or uses the device. This sort of fraud can have devastating consequences for companies. Last year, a Florida teenager was able to take over multiple high-profile Twitter accounts by impersonating a member of the Twitter IT team.

A multi-layered approach to remote work security

These worst-case scenarios won’t occur every time an employee connects to an unknown network while working remotely outside the home — but it only takes one malicious network connection to create a major security incident. To protect against these problems, make sure you have more than one line of cybersecurity defenses protecting your remote workers against this particular attack vector.

Require VPN use. The best practice for users who need access to non-corporate Wi-Fi is to require that all web traffic on corporate devices go through a trusted VPN. This greatly limits the attack surface of a device, and reduces the probability of a device compromise if it connects to a malicious access point.

Educate employees about risk. Connecting freely to public Wi-Fi is normalized in everyday life, and most people have no idea how risky it is. Simply informing your employees about the risks can have a major impact on behavior. No one wants to be the one responsible for a data breach or hack.