Why You Need a Data Fabric, Not Just IT Architecture

Data fabrics offer an opportunity to track, monitor and utilize data, while IT architectures track, monitor and maintain IT assets. Both are needed for a long-term digitalization strategy.

As companies move into hybrid computing, they’re redefining their IT architectures. IT architecture describes a company’s entire IT asset base, whether on-premises or in the cloud. This architecture is stratified into three basic levels: hardware, such as mainframes and servers; middleware, which encompasses operating systems, transaction processing engines, and other system software utilities; and the user-facing applications and services that this underlying infrastructure supports.

IT architecture has been a recent focus because, as organizations move to the cloud, their IT assets move as well, and these shifts need to be tracked and monitored.

However, with the growth of digitalization and analytics, there is also a need to track, monitor, and maximize the use of data that can come from a myriad of sources. An IT architecture can’t provide data management, but a data fabric can. Unfortunately, most organizations lack well-defined data fabrics, and many are still trying to understand why they need a data fabric at all.

What Is a Data Fabric?

Gartner defines a data fabric as “a design concept that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable and inferenced metadata assets to support the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.”

Let’s break it down.

Every organization wants to use data analytics for business advantage. To use analytics well, you need data agility that enables you to easily connect and combine data from any source your company uses – whether the source is an enterprise legacy database or data culled from social media or the Internet of Things (IoT). You can’t achieve data integration and connectivity without using data integration tools, and you also must find a way to connect and relate disparate data to each other in meaningful ways if your analytics are going to work.

This is where the data fabric enters. The data fabric contains all the connections and relationships between an organization’s data, no matter what type of data it is or where it comes from. The goal of the fabric is to function as an overall tapestry of data that interweaves all data so that data in its entirety is searchable. This has the potential not only to optimize data value, but to create a data environment that can answer virtually any analytics query. The data fabric does what an IT architecture can’t: it tells you what your data means and how data elements relate to one another. Without a data fabric, a company’s ability to leverage data and analytics is limited.

Building a Data Fabric

When you build a data fabric, it’s best to start small and in a place where your staff already has familiarity.

That “place” for most companies will be with the tools that they are already using to extract, transform and load (ETL) data from one source to another, along with any other data integration software such as standard and custom APIs. All of these are examples of data integration you have already achieved.

Now, you want to add more data to your core. You can do this by continuing to use the ETL and other data integration methods you already have in place as you build out your data fabric. In the process, take care to also add the metadata about your data, including the data’s origin point, how it was created, what business and operational processes use it, and what form it takes (e.g., a single field in a fixed record, or an entire image file). By maintaining the data’s history, as well as all its transformations, you are in a better position to check data for reliability and to ensure that it is secure.
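To make the metadata idea concrete, here is a minimal sketch (in Python) of the kind of record a fabric might keep for each dataset it ingests. The DatasetMetadata class and its field names are illustrative assumptions, not a standard schema; a commercial data catalog tracks far more, but the principle is the same: capture origin, form, consumers and transformation history alongside the data itself.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DatasetMetadata:
    """Illustrative metadata kept alongside each dataset in a data fabric."""
    name: str                      # logical name of the dataset
    origin: str                    # origin point, e.g. "legacy ERP database"
    created_by: str                # process or team that created the data
    created_at: datetime           # when the data was first produced
    form: str                      # e.g. "fixed-record field", "image file"
    consuming_processes: list = field(default_factory=list)  # business/operational users
    transformations: list = field(default_factory=list)      # ETL steps applied so far

    def record_transformation(self, step: str) -> None:
        """Append a transformation to the dataset's history for lineage tracking."""
        self.transformations.append({"step": step, "at": datetime.utcnow().isoformat()})

# Example: register a table pulled in through an existing ETL job.
meta = DatasetMetadata(
    name="customer_orders",
    origin="legacy ERP database",
    created_by="nightly ETL job",
    created_at=datetime(2021, 1, 15),
    form="fixed-record table",
    consuming_processes=["order analytics", "churn model"],
)
meta.record_transformation("normalized currency fields")
```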

As your data fabric grows, you will probably add data tools that are missing from your workbench. These might be tools that help with tracking data, sharing metadata, applying governance to data, and so on. A recommendation in this area is to look for all-inclusive data management software that contains not only the tools you’ll need to build a data fabric, but also important automation such as built-in machine learning.

The machine learning observes how data in your data fabric is working together, and which combinations of data are used most often in different business and operational contexts. When you query the data, the ML assists in pulling the data together that is most likely to answer your queries…[…] Read more »…..

 

9 best practices for network security

Network security is the practice of protecting the network and data to maintain the integrity, confidentiality and accessibility of the computer systems in the network. It covers a multitude of technologies, devices and processes, and makes use of both software- and hardware-based technologies.

Every organization, regardless of industry or infrastructure size, requires comprehensive network security solutions to protect it from the wide range of cyberthreats in the wild today.

Network security layers

When we talk about network security, we need to consider layers of protection:

Physical network security

Physical network security controls deal with preventing unauthorized persons from gaining physical access to the office and to network devices such as firewalls and routers. Physical locks, ID verification and biometric authentication are a few of the measures used to address such issues.

Technical network security

Technical security controls deal with the devices on the network and with data, both at rest and in transit. Technical security also needs to protect data and systems from unauthorized personnel and from malicious activity by employees.

Administrative network security

Administrative security controls deal with security policies and compliance processes governing user behavior. They also cover user authentication, privilege levels and how changes to the existing infrastructure are implemented.

Network security best practices

Now that we have a basic overview of network security, let’s focus on some of the network security best practices you should be following.

1. Perform a network audit

The first step in securing a network is to perform a thorough audit to identify weaknesses in the network’s posture and design. A network audit identifies and assesses:

  • Presence of security vulnerabilities
  • Unused or unnecessary applications
  • Open ports
  • Anti-virus/anti-malware and malicious traffic detection software
  • Backups

In addition, third-party vendor assessments should be conducted to identify additional security gaps.
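To make one slice of such an audit concrete, the sketch below probes a small set of hosts for unexpectedly open TCP ports and compares the results to an approved list. The host inventory and expected ports are hypothetical; in practice a dedicated scanner such as nmap or a vulnerability-management platform does this at scale, but the underlying check is the same.

```python
import socket

# Hypothetical inventory: hosts to audit and the ports each is expected to expose.
EXPECTED_PORTS = {
    "10.0.0.10": {22, 443},   # web server: SSH and HTTPS only
    "10.0.0.20": {3306},      # database server: MySQL only
}
PORTS_TO_PROBE = [21, 22, 23, 80, 443, 3306, 3389, 8080]

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.add(port)
    return found

for host, expected in EXPECTED_PORTS.items():
    unexpected = open_ports(host, PORTS_TO_PROBE) - expected
    if unexpected:
        print(f"[AUDIT] {host}: unexpected open ports {sorted(unexpected)}")
    else:
        print(f"[AUDIT] {host}: only expected ports are open")
```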

2. Deploy network and security devices

Every organization should have a firewall and a web application firewall (WAF) for protecting their website from various web-based attacks and to ensure safe storage of their data. To maintain the optimum security of the organization and monitor traffic, various additional systems should be used, such as intrusion detection and prevention (IDS/IPS) systems, security information and event management (SIEM) systems and data loss prevention (DLP) software.

3. Disable file sharing features

Though file sharing sounds like a convenient method for exchanging files, it’s advisable to enable file sharing only on a few independent and private servers. File sharing should be disabled on all employee devices.

4. Update antivirus and anti-malware software

Businesses purchase desktop computers and laptops with the latest version of antivirus and anti-malware software but fail to keep them updated with new rules and signatures. Keeping antivirus and anti-malware current ensures that each device runs with the latest bug fixes and security updates.

5. Secure your routers

A security breach or security event can occur simply by someone pressing the reset button on a network router. It is therefore paramount to move routers to a more secure location, such as a locked room or closet. Video surveillance equipment or CCTV can also be installed in the server or network room. In addition, the router’s default passwords and network names should be changed, since attackers can easily find the defaults online.

6. Use a private IP address

To prevent unauthorized users or devices from accessing the critical devices and servers in the network, assign those systems private IP addresses. This practice makes it easier for the IT administrator to spot unauthorized connection attempts by users or devices and to investigate any suspicious activity.
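As a quick sanity check during configuration reviews, Python's standard ipaddress module can confirm whether an assigned address actually falls in a private (RFC 1918) range. The device names and addresses below are examples only.

```python
import ipaddress

# Example addresses only; substitute the devices and servers from your own network.
devices = {
    "core-database": "10.20.30.40",
    "hr-file-server": "192.168.1.15",
    "misassigned-host": "8.8.8.8",   # publicly routable, so it should be flagged
}

for name, addr in devices.items():
    ip = ipaddress.ip_address(addr)
    status = "private (OK)" if ip.is_private else "PUBLIC - review this assignment"
    print(f"{name:20s} {addr:15s} {status}")
```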

7. Establish a network security maintenance system

A proper network security maintenance system should be established, involving processes such as the ones below; a simple automation sketch follows the list.

  1. Performing regular backups
  2. Updating software regularly
  3. Scheduling periodic changes of network names and passwords
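A minimal sketch of wiring these recurring tasks together is shown below. The scripts it calls are hypothetical placeholders for your own backup, patching and credential-rotation tooling, and most teams would schedule the equivalent through cron or a configuration-management tool rather than a hand-rolled loop.

```python
import subprocess
from datetime import datetime, timedelta

# Hypothetical maintenance tasks; replace the commands with your own tooling.
TASKS = [
    {"name": "nightly backup",          "cmd": ["/usr/local/bin/run-backup.sh"],   "every": timedelta(days=1)},
    {"name": "apply software updates",  "cmd": ["/usr/local/bin/patch-hosts.sh"],  "every": timedelta(days=7)},
    {"name": "rotate network password", "cmd": ["/usr/local/bin/rotate-creds.sh"], "every": timedelta(days=90)},
]

last_run = {}  # in a real system this state would be persisted, not held in memory

def run_due_tasks(now):
    """Run every task whose interval has elapsed since its last recorded run."""
    for task in TASKS:
        previous = last_run.get(task["name"])
        if previous is None or now - previous >= task["every"]:
            print(f"running maintenance task: {task['name']}")
            try:
                subprocess.run(task["cmd"], check=True)
            except (OSError, subprocess.CalledProcessError) as exc:
                print(f"task failed, raise an alert: {exc}")
            else:
                last_run[task["name"]] = now

run_due_tasks(datetime.utcnow())
```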

Once a network security maintenance system is established, document it and circulate it to your team…[…] Read more »….

 

5 minutes with Vishal Jain – Navigating cybersecurity in a hybrid work environment

Are you ready for hybrid work? Though the hybrid office will create great opportunities for employees and employers alike, it will create some cybersecurity challenges for security and IT operations. Here, Vishal Jain, Co-Founder and CTO at Valtix, a Santa Clara, Calif.-based provider of cloud native network security services, speaks to Security magazine about the many ways to develop a sustainable cybersecurity program for the new hybrid workforce.

Security: What is your background and current role? 

Jain: I am the co-founder and CTO of Valtix. My background is primarily building products and technology at the intersection of networking, security and cloud; I built Content Delivery Networks (CDNs) during the early days of Akamai and just finished doing Software Defined Networking (SDN) at a startup that built ACI for Cisco.

 

Security: There’s a consensus that for many of us, the reality will be a hybrid workplace. What does the hybrid workforce mean for cybersecurity teams?

Jain: The pandemic has accelerated trends that had already begun before 2019. We’ve just hit an inflection point on the rate of change – taking on much more change in a much shorter period of time. The pandemic is an inflection point for cloud tech adoption. I think about this in three intersections of work, apps, infrastructure, and security:

  1. Work and Apps: A major portion of the workforce will continue to work remotely, communicating using collaboration tools like Zoom, WebEx, etc. Post-pandemic, video meetings will be the new norm, compared to the old model where in-person meetings were the norm. The defaults have changed. Similarly, the expectation now is that any app is accessible anywhere from any device.
  2. Apps and Infrastructure: Default is cloud. This also means that expectation on various infrastructure is now towards speed, agility, being infinite and elastic and being delivered as a service.
  3. Infrastructure and Security: This is very important for cybersecurity teams: how do they take a discipline like security from a static environment (the traditional enterprise) and apply it to a dynamic environment like the cloud?

Security: What solutions will be necessary for enterprise security to implement as we move towards this new work environment?

Jain: In this new work environment where any app is accessible anywhere from any device, enterprise security needs to focus on security of users accessing those apps and security of those apps themselves. User-side security and securing access to the cloud is a well-understood problem now, plenty of innovation and investments have been made here. For security of apps, we need to look back at intersections 2 and 3, mentioned previously.

Enterprises need to understand security disciplines, but implementation of these is very different in this new work environment. Security solutions need to evolve to address security and ops challenges. On the security side, the definition of visibility has to expand. On the operational side of security, solutions need to be cloud-native, elastic, and infinitely scalable so that enterprises can focus on applications, not the infrastructure.

Security: What are some of the challenges that will need to be overcome as part of a hybrid workplace?

Jain: Engineering teams typically have experience working across distributed teams, so engineering and the product side of things are not super challenging as part of a hybrid workplace. On the other hand, selling becomes very different; getting both customers and the sales team used to this different world is a challenge enterprises need to focus on. Habits and culture are always the hardest part to change. This is true in security too. There is a tendency to bring in old solutions to secure this new world. Security practitioners may try to bring in the same tech and products they have been using for 10 years, but deep down they know it’s a bad fit…[…] Read more »….

 

What Will Be the Next New Normal in Cloud Software Security?

Accelerated moves to the cloud made sense at the height of the pandemic — organizations may face different concerns in the future.

Organizations that accelerated their adoption of cloud native apps, SaaS, and other cloud-driven resources to cope with the pandemic may have to weigh other security matters as potential “new normal” operations take shape. Though many enterprises continue to make the most of remote operations, hybrid workplaces might be on the horizon for some. Experts from cybersecurity company Snyk and SaaS management platform BetterCloud see new scenarios in security emerging for cloud resources in a post-pandemic world.

The swift move to remote operations and work-from-home situations naturally led to fresh concerns about endpoint and network security, says Guy Podjarny, CEO and co-founder of Snyk. His company recently issued a report on the State of Cloud Native Application Security, exploring how cloud-native adoption affects defenses against threats. As more operations were pushed remote and to the cloud, security had to discern between authorized personnel who needed access from outside the office versus actual threats from bad actors.

Decentralization was already underway at many enterprises before COVID-19, though that trend may have been further catalyzed by the response to the pandemic. “Organizations are becoming more agile and the thinking that you can know everything that’s going on hasn’t been true for a long while,” Podjarny says. “The pandemic has forced us to look in the mirror and see that we don’t have line of sight into everything that’s going on.”

This led to distribution of security controls, he says, to allow for more autonomous usage by independent teams who are governed in an asynchronous manner. “That means investing more in security training and education,” Podjarny says.

A need for a security-based version of digital transformation surfaced, he says, with more automated tools that work at scale, offering insight into distributed activities. Podjarny says he expects most security needs that emerged amid the pandemic will remain after businesses can reopen more fully. “The return to the office will be partial,” he says, expecting some members of teams not to be onsite. This may be for personal or work-life reasons, or because organizations want to take advantage of a smaller office footprint, Podjarny says.

That could lead to some issues, however, with the governance of decentralized activities and related security controls. “People don’t feel they have the tools to understand what’s going on,” he says. The net changes that organizations continue to make in response to the pandemic, and what may come after, have been largely positive, Podjarny says. “It moves us towards security models that scale better and are adapted to the SaaS, remote-working reality.”

The rush to cloud-based applications such as SaaS and platform-as-a-service at the onset of the pandemic brought on some recognition of the necessity to offer ways to maintain operations under quarantine guidelines. “Employees were just trying to get the job done,” says Jim Brennan, chief product officer with BetterCloud. Spinning up such technologies, he says, enabled staff to meet those goals. That compares with the past where such “shadow IT” actions might have been regarded as a threat to the business. “We heard from a lot of CIOs where it really changed their thinking,” Brennan says, which led to efforts to facilitate the availability of such resources to support employees.

Meeting those needs at scale, however, created a new challenge. “How do I successfully onboard a new application for 100 employees? One thousand employees? How do I do that for 50 new applications? One hundred new applications?” Brennan says many CIOs and chief security officers have sought greater visibility into the cloud applications that have been spun up within their organizations and how they are being used. BetterCloud produced a brief recently on the State of SaaS, which looks at SaaS file security exposure.

Automation is being put to work, Brennan says, to improve visibility into those applications. This is part of the emerging landscape that even sees some organizations decide that the concept of shadow IT — the use of technology without direct approval — is a misnomer. “A CIO told me they don’t believe in ‘shadow IT,’” he says. In effect, the CIO regarded all IT, authorized or not, as a means to get work done…[…] Read more »…..

 

When security and resiliency converge: A CSO’s perspective on how security organizations can thrive

You’ve just been hired to lead the security program of a prominent multinational organization. You’re provided a seasoned team and budget, but you can’t help looking around and asking yourself, “How will I possibly protect every asset of this company, every day, against every threat, globally?” After all, this is the expectation of most organizations, their customers and shareholders, as well as regulators and lawmakers. In my experience, one of the top challenges security leaders face is trying to optimize a modest security budget to protect a highly complex and ever-expanding organizational attack surface. In fact, Accenture found that 69% of security professionals say staying ahead of attackers is a constant battle and the cost is unsustainable. For most, this challenge is extremely discouraging. However, success is not necessarily promised to those with resources – it’s more about how resourceful you can be.

As organizations worldwide digitally transform at a breakneck pace, the stakes are increasing for cybersecurity programs. Cyberattacks no longer just take down websites and internal email. They can disrupt the availability of every revenue-generating digital business process, threatening the very existence of many organizations. With this heightened risk, organizations must shift from a prevention-first mindset to one that balances aggressive prevention measures with a keen focus on enabling efficient consequence management. This shouldn’t be read as a response-only strategy, but it does mean:

  1. Designing business processes to minimize single points of failure and reduce sensitivity to technology and data latency, recognizing that technology and data risk is extremely high in today’s environment.
  2. Focusing asset protection programs disproportionately on the assets that underpin the most critical business processes or present the greatest risk.
  3. Architecting technology to anticipate and recover from persistent, sophisticated attacks, as the “zero trust” approach suggests.
  4. Establishing an organizational culture that acknowledges, anticipates, accepts and thrives in a pervasive threat environment.

Most cybersecurity leaders today only focus on, or are limited to focusing on, the third of these four items. Many are aggressively pursuing zero trust related modernization programs to increase the technology resilience of their organization’s systems and networks. However, the other three strategic imperatives are not achieved due to a lack of organizational knowledge, access, influence or governance.

The same can be said for physical security leaders, who likely do their best to focus on the second item, but may not understand the interdependency between an organization’s physical and digital assets. Unless a building is labeled as a data center, they may be unlikely to protect those physical assets that are most critical to their organization’s digital operations.

All security programs, both digital and physical, struggle to achieve the fourth item, limited by their lack of business access and influence. So, how do security organizations move from being security-centric to business-centric? The journey starts by taking a converged approach.

Why converge?

Implementing a converged security organization is perhaps one of the most resourceful and beneficial business decisions an organization can make when seeking to enhance security risk management. In this era of heightened consequences and sophisticated security threats, the need for integration between siloed security and risk management teams is imperative. The need for collaboration between those two teams and the business is equally imperative.

In my role as the Chief Security Officer of Dell Technologies, I oversee a converged organization with responsibility for physical security, cybersecurity, product security, privacy and enterprise resiliency programs, including business continuity, disaster recovery and crisis management. As discussed in a recent article, organizations that treat different aspects of security – such as physical and cybersecurity – as separate endeavors, can unintentionally undermine one area and in turn, weaken both areas. With a converged organization, the goal is to bring those once-separate entities together in a more impactful manner. I’ve seen convergence lead to greater effectiveness in corporate risk management practices. But, the benefits don’t stop there. It also increases financial and operational efficiency, improves stakeholder communications and strengthens customer trust.

Over the course of this series, I will walk you through how security, privacy and resiliency teams with seemingly different capabilities and goals can work together to advance one another’s priorities, all while marching towards one common goal – greater organizational outcomes. First up, let’s discuss the benefits gained from converging enterprise resiliency and security programs.

The road less traveled – benefits of converging resiliency and security

While I’ve observed an increase in organizations merging cybersecurity and physical security programs, I’ve seen fewer organizations bring resiliency into the mix, despite it being potentially more important. In fact, an ASIS study found that only 19% of organizations converged cybersecurity, physical security and business continuity into a single department.

In my experience, converging resiliency programs with all security programs enables organizations to consistently prepare for and respond to any security incident or crisis – natural disaster, global pandemic or cyberattack – with a high degree of resiliency. More importantly, converging these programs empowers security organizations to achieve the strategic imperatives mentioned earlier.

Now, let’s look at some of the more specific benefits:

  1. Business continuity programs help prioritize security resources

As discussed earlier, one of the main challenges for security leaders is trying to find resourceful ways to adequately secure the breadth of a company’s assets, often with a less-than-ideal budget that limits implementing leading security practices across every asset. By converging business continuity, a core component of a resiliency program, with cybersecurity and physical security programs, security leaders can identify the most critical business processes and the digital and physical assets that underpin them. This in turn provides clear priorities for security focus and investment.

Non-converged security organizations generally prioritize their focus through the lens of regulatory and litigation risk, rather than having a deep understanding of business operational risk and its ties to revenue generation. For a physical security leader, this may look like prioritizing physical security resources in countries that have stronger regulatory oversight and more stringent fine structures, or those that contain the most employees. For a cybersecurity leader, it may mean focusing on databases that contain the most records of personal information, a costly data element to lose. While these approaches are not wrong, they are incomplete. In fact, the most critical business assets don’t often look like those most commonly prioritized by security. It requires a business lens to find the assets that the business depends upon to thrive, rather than focusing on the assets that might lead to a lawsuit if left unprotected. It means thinking about business risk more holistically.

Business continuity planners have perfected the art of applying a business lens to explore complex, interdependent business processes, some of which even sit with third parties. When organizations don’t plan for continuity well, it isn’t until an incident strikes that they discover most of the company’s revenue depended on an overlooked single point of failure.

However, business continuity alone is typically only looking for issues of availability. By converging resiliency and security programs, business impact assessments and security reviews can merge, resulting in more holistic assessments that consider both business and security risk across the full spectrum of availability, confidentiality and integrity issues. As a further sweetener, business stakeholders can have a single conversation with the converged risk program, reducing distractions that pull them from their primary business focus.

By integrating these two programs, converged security organizations can ensure their priorities are closely aligned with the business’ priorities. Whether it be digital assets, buildings or people, an organization’s most critical assets are clearly identified and traced to critical business processes through robust business continuity planning, then secured. Tying these programs together enables security leaders to protect what matters most, the most, which is the most important benefit of converging security and resiliency programs.

  2. Security makes business continuity programs smarter

For the modern security professional, the only thing better than spotting a difficult-to-find critical business asset in need of protection is for a business to improve its processes and reduce the number of assets needing protection in the first place. By embedding security context into the continuity planning process, business continuity programs become smarter. With this knowledge, converged organizations can more effectively propose process engineering opportunities that optimize security budgets and reduce organizational risk. This is particularly true where the resiliency team has deeper access and insights to the supported organization than the security team.

Typically, business continuity planners are introduced to business processes and underlying assets only after they are in place, which means planners discover existing resiliency risks. Contrast that with modern security programs embedded in business and digital transformation projects from the beginning. By merging security and business continuity programs, the value proposition shifts from “smart discovery” of business process reengineering opportunities to one of resilient and secure business process engineering from the initial design point, helping organizations get it right the first time.

This type of value can extend from the most tactical processes to more strategic business initiatives, such as launching a new design center overseas. Converged security organizations can share a holistic, converged risk picture to inform business decision making. A typical converged risk assessment for such a project may consider historical storm patterns, geopolitical instability, national economic espionage, domestic terrorism, labor risk and so on. This holistic view results in better risk decisions and better business outcomes.

  3. Security and crisis management go together like peanut butter and jelly

Crisis management is another core capability of resiliency programs. The benefit of converging crisis management and security programs is twofold. First, security is often the cause of the crisis. Historically, organizational crises would be a broad mix of mismanagement, natural, political, brand, labor and other issues. In the last year alone, the world has seen a dramatic rise in cyberattacks.

Second, this is the area where the culture of the two organizations is most closely aligned, allowing for low-friction integration and improvement. Crisis management professionals are accustomed to preparing for and managing through low-likelihood, high-impact events and facilitating critical decisions quickly, with imperfect information. If you ask a security leader what the motion of their organization looks like, you will likely get an identical answer. Leaders can unify and augment these skillsets and capabilities by bringing crisis management and security programs together. And, this is becoming more important in a world where consequence management – how capably a company responds when things go wrong – can be the difference between a glancing blow and a knockout.

  4. Disaster recovery programs thrive when paired with security

Disaster recovery teams focus on identifying critical data and technology, and ensuring it is architected and tested to handle common continuity disruptions. In a mature resiliency program, this means close relationships between continuity planners and application owners. Often, however, resiliency programs struggle to gain deep access and influence within technology organizations, or the disaster recovery technology-centric arm of the program is challenged to integrate with the more business-centric continuity planning arm. A converged resiliency and security program eases these challenges.

Disaster recovery programs often sit within the technology organizations themselves, and in those cases, technology integration is not a challenge. However, these programs can sometimes struggle to maintain close access to the business organizations they support. In these cases, converging resiliency and physical security programs enables teams to leverage the strong business relationships and closer business access that physical security programs often have. By integrating these programs, physical security teams can create the inroads needed so disaster recovery programs can deliver the most value in a business-connected manner.

Conversely, for disaster recovery programs that sit within business or resiliency teams, they can often struggle to gain traction with an organization’s technical owners. In these cases, converging disaster recovery with a cybersecurity program can be a game changer. Cybersecurity core programs focus on application, database and system security, and have an existing engagement model with those the disaster recovery teams need to influence. By integrating with cybersecurity programs, disaster recovery teams can leverage existing processes and organizational relationships to accelerate their impact. The integration of these programs also provides a more efficient unified engagement model for the technology asset owners, creating overall efficiency for the organization.

Finally, the cause of disaster recovery events is increasingly cybersecurity related. Disaster recovery teams must adjust their architectures and programs to account for ransomware, destructive malware attacks and other evolving threats. The expertise needed to do this well rests with cybersecurity organizations who, once converged, are well positioned to help with this journey.

  5. Security brings digital expertise to resiliency programs

Consider this: When a hurricane strikes, the location and severity of the storm’s eye depends on the time of day, the topography and numerous meteorological factors. It doesn’t target you specifically. Organizations are informed of the hurricane’s arrival days in advance. And, the organization is not the only victim of the hurricane, so external support is mobilized and resources are provided. Given all these factors, organizations infrequently experience the most severe possible outcomes. Now, consider a typical cyber crisis: When a ransomware attack strikes, it is without warning, usually targeting and impacting the most critical business assets and is designed to hit at the most inopportune time. Moreover, the victim is often blamed, which means outside help is scarce. Of course, organizations should continue planning for hurricanes, earthquakes, pandemics and other natural disasters, but the evolution of digital crises makes the resiliency threat landscape more complex. The results of these troubling trends: Cybercrime will have cost the world $6 trillion by the end of this year, up from $3 trillion in 2015. Natural disasters globally cost $84 billion in 2015.

Business continuity professionals have thrived for decades by helping their organizations predict and prepare for natural disasters and physical security incidents. To date, the best practice to prepare resilient data centers is to evaluate redundant electrical grid availability, historical weather patterns, earthquake trends and, most importantly, to confirm that the backup data center doesn’t reside within a certain physical distance of the primary data center. Cyber threats have added new challenges to this equation, as even two ideally positioned, geographically distanced, modern data centers often rely on the same underlying cyber systems and networks. It’s not uncommon to find ransomware attacks, which travel at the speed of light and aren’t bound by physical distance, devastating organizations when both primary and backup data centers are encrypted for ransom or, worse, deleted by destructive malware. This is only one example that highlights the new resiliency risks faced by the world’s recent dramatic increase in digital dependency and cyber threats. By converging cybersecurity and resiliency programs, organizations are better positioned to contend with this challenging new reality…[…] Read more »….

 

When Certificate Management Becomes Daunting, Automate It

Tracking and managing digital certificates has become a challenge that overwhelms many IT managers and security professionals, making the task a clear candidate for automation. The sheer number of servers, users, devices, and software applications in today’s enterprise which require authentication is daunting—particularly for those still using massive Excel sheets.

Despite the lost productivity and increased risks of manually managing private and public certificates, automation of certificate management remains less common than one would assume.

Yes, there are various third-party tools and IETF protocols that can help modernize this onerous task, but many organizations have holes in the process, making efficient certificate management elusive. Some organizations have fewer than a few dozen certificates to manage, while others, especially those organizations with facilities and offices spread across continents and oceans, may have tens of thousands. In addition, for companies with millions of edge devices and sensors, tracking and managing the security certificates required to establish and ensure secure communications is a herculean task.

By adopting a complete certificate automation solution, enterprises can reduce the risk of breaches or outages caused by certificates that have expired or that were unknowingly deployed in their environment. In addition, automated certificate lifecycle management enables businesses to respond quickly and with agility as the security and business landscape evolves and new types of cybersecurity threats emerge.

There are four pillars of certificate automation designed to take enterprises from tactical to strategic certificate management: Discovery, Deployment, Lifecycle Management, and Renewal. These pillars feed into a single pane of glass platform for complete visibility.

 

Four Pillars of Certificate Automation
 

#1 Discovery

Finding and cataloging all of your company’s certificates is essential for securing a modern enterprise. There are likely rogue certificates floating around that were not directly issued by your IT or network security teams. Often, certificates from smaller activities and projects in other lines of business are not properly retired; these seemingly non-critical certificates become ticking time bombs that, if forgotten or ignored, can create massive security vulnerabilities.

By using an automated discovery process that regularly scans the entire environment, network or security admins can identify every certificate across the organization, ensuring that no rogue certificates go undiscovered until they have opened the door to a cyberattack or created other security issues.
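As an illustration of what a discovery pass does under the hood, the sketch below connects to a list of endpoints, reads the TLS certificate each one presents and reports its expiry date. The endpoint list is a placeholder; a real discovery job sweeps whole address ranges and also handles self-signed or internally issued certificates, which the standard-library approach here surfaces as connection errors rather than parsed results.

```python
import socket
import ssl
from datetime import datetime

# Hypothetical endpoints; a real discovery job would sweep whole address ranges.
ENDPOINTS = [("example.com", 443), ("internal-app.example.net", 8443)]

def certificate_expiry(host, port, timeout=3.0):
    """Fetch the TLS certificate presented by host:port and return its expiry date."""
    context = ssl.create_default_context()  # verifies against the system trust store
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

for host, port in ENDPOINTS:
    try:
        expires = certificate_expiry(host, port)
        print(f"{host}:{port} expires {expires:%Y-%m-%d}")
    except OSError as exc:  # unreachable hosts and untrusted certs both land here
        print(f"{host}:{port} could not be inventoried ({exc})")
```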

#2 Deployment

Manually provisioning or registering a certificate at the right time for the right purpose is an incredibly time-intensive task. Merely deploying an SSL certificate on just one server could take up to two hours! And that is just the beginning.

Now add the other required tasks such as documenting each certificate’s location and purpose, configuring certificates according to myriad endpoint devices and varying operating systems, and then confirming that each performs correctly. This can require a lot of additional time and effort.

Today’s enterprises need to be quick-moving and agile to keep up with constant flux and rapid change. Beyond time saved, automated deployment means reduced human-error and increased reliability and consistency. Fortunately, IETF standards, like the Automated Certificate Management Environment (ACME) protocol, are gaining traction and cover most use cases for end-to-end certificate management.
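As a hedged sketch of what ACME-based deployment can look like, the snippet below drives the certbot client from Python to request a certificate for a single domain. The domain, the contact address and the use of the standalone challenge are assumptions for illustration; many teams use an ACME client library or the agent supplied by their certificate-management platform instead.

```python
import subprocess

DOMAIN = "app.example.com"      # illustrative domain
CONTACT = "secops@example.com"  # illustrative contact address

def request_certificate(domain, contact):
    """Ask the ACME CA for a certificate via certbot; returns True on success."""
    result = subprocess.run(
        [
            "certbot", "certonly", "--standalone",
            "-d", domain,
            "-m", contact,
            "--agree-tos", "--non-interactive",
        ],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"deployment failed for {domain}:\n{result.stderr}")
        return False
    print(f"certificate issued for {domain}")
    return True  # next: install it on the endpoint and record it in the inventory

if __name__ == "__main__":
    request_certificate(DOMAIN, CONTACT)
```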

#3 Lifecycle management

Certificates embody the requirements and policies that enterprises use to define trust within their organization, extending the security that comes from using only highly trusted key architectures.

To ensure a certificate is always in its best possible state, organizations need to be able to revoke and replace certificates on demand, quickly and efficiently.  Spending more than two hours per certificate is unreasonable. It needs to happen seamlessly and at scale.

Automated lifecycle management makes revoking and replacing certificates a touch-free process. And administrators no longer need to wait until the expiration date to make critical certificate upgrades. Instead, they can simply order and provision new valid certificates and easily revoke old or noncompliant certificates. The platform manages these changes without downtime.

#4 Renewal

All certificates have an expiration date. It is a fundamental trust element that certificates are time-bound and will need to be replaced. Effective September 2020, browsers further shortened certificate validity to a 398-day period, sending organizations still manually managing hundreds or thousands of certificates with spreadsheets into panic mode. When certificates expire without having been replaced, that is when we start to see headlines about costly outages or breaches.

Timely certificate renewal is a cornerstone of cybersecurity, and reliance on manual management increases the risk of human errors.

Some organizations claim to be automated because part of their process, such as receiving email notifications about the impending certificate expiration, is automated. Unfortunately, many of these emergency notice emails end up in a flooded inbox or spam folder or are sent to someone on vacation or who is no longer with the organization. More importantly, an email is just an alert. It does not actually DO anything. It does not actually renew and install the new certificate.

While it is essential for organizations to know that renewals are coming, the most significant value of automating renewals is that the entire process is scheduled to run with minimal action from individual contributors.
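The renewal decision itself is simple, which is exactly why it belongs in code rather than a spreadsheet. In the minimal sketch below, the inventory dictionary stands in for the output of the discovery pillar, and renew_certificate() is a placeholder for whatever actually performs the renewal (an ACME client, a CA API call or your platform's agent).

```python
from datetime import datetime, timedelta

RENEWAL_WINDOW = timedelta(days=30)  # start renewal 30 days before expiry

# Hypothetical inventory produced by the discovery step: certificate name -> expiry.
inventory = {
    "app.example.com": datetime(2021, 9, 1),
    "api.example.com": datetime(2022, 3, 15),
}

def renew_certificate(name):
    """Placeholder for the actual renewal call (ACME client, vendor API, etc.)."""
    print(f"renewing {name} ...")

now = datetime.utcnow()
for name, expires in inventory.items():
    if expires - now <= RENEWAL_WINDOW:
        renew_certificate(name)  # fully automated: no email alert, no spreadsheet
    else:
        print(f"{name} is fine until {expires:%Y-%m-%d}")
```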

Single pane of glass visibility for monitoring and management

Enterprises are best served by using one certificate management platform—a so-called “single pane of glass”—to discover, deploy, manage lifecycles, and renew all digital certificates.

Visibility is the critical capability enterprises require to enable and enhance the four pillars. Having this one-stop insight speeds up and simplifies certificate management, making it easier to track and monitor the certificate types, vendors, public and private certificates, cryptographic choices, and upcoming certificate expirations. Such visibility is also the basis for sound corporate governance of trust policies and compliance audits…[…] Read more »

 

 

Meet Leanne Hurley: Cloud Expert of the Month – April 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Leanne Hurley.

After starting out at the front counter of a two-way radio shop in 1993, Leanne worked her way from face-to-face customer service, to billing, to training and finally into sales. She has been in sales since 1996 and has (mostly!) loved every minute of it. Leanne started selling IaaS (whether co-lo, managed hosting or cloud) during the dot-com boom and has expanded her expertise since joining SAP. Now, she enjoys leading a team of sales professionals as she works with companies to improve business outcomes and accelerate digital transformation utilizing SAP’s Intelligent Enterprise.

When did you join Cloud Girls and why?

I was one of the first members of Cloud Girls in 2011. I joined because having a strong network and community of women in technology is important.

What do you value about being a Cloud Girl?  

I value the relationships and women in the group.

What advice would you give to your younger self at the start of your career?

Stop doubting yourself. Continue to ask questions and don’t be intimidated by people that try to squash your tenacity and curiosity.

What’s your favorite inspirational quote?

“You can have everything in life you want if you will just help other people get what they want.”  – Zig Ziglar

What one piece of advice would you share with young women to encourage them to take a seat at the table?

Never stop learning and always ask questions. In technology, women (and men too, for that matter) avoid asking questions because they think it reveals some sort of inadequacy. That is absolutely false. Use your curiosity and thirst for knowledge as a tool; it will serve you well all your life.

You’re a new addition to the crayon box. What color would you be and why?

I would be Sassy-molassy because I’m a bit sassy.

What was the best book you read this year and why?

I loved American Dirt because it humanized the US migrant plight and reminded me how blessed and lucky we all are to have been born in the US.

What’s the most useless talent you have? Why?.[…] Read more »…..

 

3 signs that it’s time to reevaluate your monitoring platform

As we move forward from the uncertainty of 2020, remote and hybrid styles of work are likely to remain beyond the pandemic. Amid the rise of modified workflows, we’ve also seen an increase in phishing scams, ransomware attacks, and simple user errors that result in the IT infrastructures we rely on crashing – sometimes with devastating long-term repercussions for the business. What’s needed to prevent this is a reliable monitoring platform that is constantly scanning your environment – whether you’re operating from a data center, a public cloud, or some combination – to alert you when something is amiss. Often these monitoring tools run so smoothly in the background of operations that we forget they’re even there – which can be a big problem.

When is the last time you assessed your monitoring platform? You may have already noticed signs indicating that your tools are not keeping up with the rapidly changing digital workforce – gathering nonessential data while failing to forewarn you about legitimate threats to your network operations. Post-2020, these systems have to handle workforces that stay connected digitally regardless of where employees are working. Your monitoring tools should be hyper-focused on alerting you to issues from outside your network and any weaknesses within it. Often, we turn out to be monitoring too much and still missing the essential problems until it’s too late.

  1. Outages

One of the most damaging and costly setbacks a business can experience is network downtime: your network suddenly and without warning ceases to work. Applications are no longer functioning, files are inaccessible, and your business cannot perform its daily functions. Responding to network downtime isn’t a simple matter of rebooting your computer, either. Gartner estimates that for every minute of network downtime, the company in question loses an average of $5,600. On the higher end of this spectrum, a business could lose $540,000 per hour. Those figures are based on lost productivity alone. Getting your system up and running again, catching up on lost time, and, one would think, reevaluating and implementing a new monitoring system all incur additional costs.

In the case of one luxury hotel chain, an updated monitoring system accurately detected why they were experiencing outages – a change in network configuration. By utilizing a newly updated monitoring configuration, the chain quickly reverted the network change and restored service for their customers, saving hours of troubleshooting and costly downtime.

Systems should be proactive, not reactive. The time to reassess your monitoring infrastructure isn’t after it fails to warn you that something goes wrong. Your network monitoring system should be automatically measuring performance and sharing status updates, so you can fix a problem before it happens. If your system is working at its proper capacity, it will be routinely preventing unexpected outages by using performance thresholds to evaluate functionality in real time, and alert you when targeted metrics have reached a threshold that requires attention. With a robust monitoring system in place, your team should have complete network visibility and can respond to changes and prevent outages before they happen.
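A stripped-down illustration of threshold-based checking appears below. The metric names, limits and the get_metric() stub are assumptions standing in for whatever your monitoring platform actually collects; the point is simply that breaches are evaluated continuously and surfaced before they turn into outages.

```python
# Thresholds are illustrative; tune them to your own baseline performance.
THRESHOLDS = {
    "cpu_percent":     90.0,   # alert when sustained CPU crosses 90%
    "packet_loss_pct":  2.0,   # alert when packet loss exceeds 2%
    "latency_ms":     250.0,   # alert when round-trip latency exceeds 250 ms
}

def get_metric(name):
    """Stand-in for a real metrics source (SNMP poller, agent, cloud API...)."""
    sample = {"cpu_percent": 96.5, "packet_loss_pct": 0.3, "latency_ms": 120.0}
    return sample[name]

def check_thresholds():
    """Return the metrics currently in breach so they can be alerted on."""
    breaches = []
    for metric, limit in THRESHOLDS.items():
        value = get_metric(metric)
        if value > limit:
            breaches.append((metric, value, limit))
    return breaches

for metric, value, limit in check_thresholds():
    print(f"ALERT: {metric} = {value} exceeds threshold {limit}")
```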

  2. Alert Fatigue

Alert fatigue is something we can all relate to following a year of working from home: email notifications, instant messages, texts, phone calls, and calendar reminders for your next video meeting. After so many of these day after day, we become desensitized to them; the more alerts we receive, the less urgent any of them seem. From a cybersecurity standpoint, some of the notifications may be for anomalies linked to a potential cyberattack, but more often will be a junk email. If a genuinely urgent message does come through, it often slips through the cracks because it seems no different from any other notification we receive.

So how can your IT infrastructure help prevent this? Intelligent monitoring systems, in general, aim to make the lives of the people using them easier. Your monitoring system should reduce the number of redundant alerts and recognize and prioritize actual issues. A tiered alert priority system displays notifications on your dashboard with a visual or auditory cue signifying how important each one is. Can this wait until the afternoon, or does it need to be addressed immediately? Detecting a cyberattack early, for example, can make a huge difference in mitigating damage.
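Conceptually, tiered alerting is just a classification and routing step in front of the notification channel. The sketch below uses made-up rules and alert fields to show the shape of it; a production system would classify on much richer context.

```python
from enum import Enum

class Severity(Enum):
    INFO = 1      # goes into a daily digest
    WARNING = 2   # look at it today
    CRITICAL = 3  # act now

def classify(alert):
    """Map a raw alert to a severity tier so only urgent items interrupt people."""
    if alert.get("category") == "security" or alert.get("outage"):
        return Severity.CRITICAL
    if alert.get("threshold_breached"):
        return Severity.WARNING
    return Severity.INFO

def route(alert):
    severity = classify(alert)
    if severity is Severity.CRITICAL:
        print(f"page the on-call engineer: {alert['message']}")
    elif severity is Severity.WARNING:
        print(f"dashboard + ticket: {alert['message']}")
    else:
        print(f"daily digest: {alert['message']}")

route({"message": "possible credential stuffing on VPN gateway", "category": "security"})
route({"message": "disk 80% full on backup server", "threshold_breached": True})
route({"message": "new device joined guest Wi-Fi"})
```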

  3. Excess Tools

One of the root causes of any monitoring flaw can be the monitoring tools themselves – over-monitoring. If you have multiple tools tracking your network, you’re likely getting notifications and warnings from each, contributing to alert fatigue and opening yourself up to a potential failure that results in a network outage and business interruption. Having multiple tools performing the same function is a waste of resources, as they render each other redundant. The key is to consolidate the necessary functions in one monitoring system, regularly assessed for vulnerabilities and customized for your particular business needs.

Your business members will indeed want to track an abundance of metrics – server functionality, security, business metrics, and so on – and it may be that not all of these things can be monitored by the same tool. You should first decide which things are essential for your team to be actively monitoring and assessing. Security should be a top priority, but are there other data points that can be pulled in a quarterly or annual report instead? Your IT monitoring should be focused on tracking and alerting you to essential information and irregularities. You can avoid overextending the team and receiving alerts that will only be ignored by first doing your own assessment of what you need from your system.

Assessing Your Approach for Future Growth

We can’t operate at our full potential without the control and visibility that monitoring tools give us…[…] Read more »….

 

Protecting Remote Workers Against the Perils of Public WI-FI

In a physical office, front-desk security keeps strangers out of work spaces. In your own home, you control who walks through your door. But what happens when your “office” is a table at the local coffee shop, where you’re sipping a latte among total strangers?

Widespread remote work is likely here to stay, even after the pandemic is over. But the resumption of travel and the reopening of public spaces raises new concerns about how to keep remote work secure.

In particular, many employees used to working in the relative safety of an office or private home may be unaware of the risks associated with public Wi-Fi. Just like you can’t be sure who’s sitting next to your employee in a coffee shop or other public space, you can’t be sure whether the public Wi-Fi network they’re connecting to is safe. And the second your employee accidentally connects to a malicious hotspot, they could expose all the sensitive data that’s transmitted in their communications or stored on their device.

Taking scenarios like this into account when planning your cybersecurity protections will help keep your company’s data safe, no matter where employees choose to open their laptops.

The risks of Wi-Fi search

An employee leaving Wi-Fi enabled when they leave their house may seem harmless, but it really leaves them incredibly vulnerable. Wi-Fi enabled devices can reveal the network names (SSIDs) they normally connect to when they are on the move. An attacker can then use this information to emulate a known “trusted” network that is not encrypted and pretend to be that network.  Many devices will automatically connect to these “trusted” open networks without verifying that the network is legitimate.

Often, attackers don’t even need to emulate known networks to entice users to connect. According to a recent poll, two-thirds of people who use public Wi-Fi set their devices to connect automatically to nearby networks, without vetting which ones they’re joining.

If your employee automatically connects to a malicious network — or is tricked into doing so — a cybercriminal can unleash a number of damaging attacks. The network connection can enable the attacker to intercept and modify any unencrypted content that is sent to the employee’s device. That means they can insert malicious payloads into innocuous web pages or other content, enabling them to exploit any software vulnerabilities that may be present on the device.

Once such malicious content is running on a device, many technical attacks are possible against other, more important parts of the device software and operating system. Some of these provide administrative or root level access, which gives the attacker near total control of the device. Once an attacker has this level of access, all data, access, and functionality on the device is potentially compromised. The attacker can remove or alter the data, or encrypt it with ransomware and demand payment in exchange for the key.

The attacker could even use the data to emulate and impersonate the employee who owns and or uses the device. This sort of fraud can have devastating consequences for companies. Last year, a Florida teenager was able to take over multiple high-profile Twitter accounts by impersonating a member of the Twitter IT team.

A multi-layered approach to remote work security

These worst-case scenarios won’t occur every time an employee connects to an unknown network while working remotely outside the home — but it only takes one malicious network connection to create a major security incident. To protect against these problems, make sure you have more than one line of cybersecurity defenses protecting your remote workers against this particular attack vector.

Require VPN use. The best practice for users who need access to non-corporate Wi-Fi is to require that all web traffic on corporate devices go through a trusted VPN. This greatly limits the attack surface of a device, and reduces the probability of a device compromise if it connects to a malicious access point.

Educate employees about risk. Connecting freely to public Wi-Fi is normalized in everyday life, and most people have no idea how risky it is. Simply informing your employees about the risks can have a major impact on behavior. No one wants to be the one responsible for a data breach or hack…[…] Read more »

 

 

How We’ll Conduct Algorithmic Audits in the New Economy

Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes.

Algorithms are the heartbeat of applications, but they may not be perceived as entirely benign by their intended beneficiaries.

Most educated people know that an algorithm is simply any stepwise computational procedure. Most computer programs are algorithms of one sort or another. Embedded in operational applications, algorithms make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings — encroaching on customer privacy, refusing them a home loan, or perhaps targeting them with a barrage of objectionable solicitation — stakeholders’ understandable reaction may be to swat back in anger, and possibly with legal action.

Regulatory mandates are starting to require algorithm auditing

Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes, especially those powered by artificial intelligence (AI), deep learning (DL), and machine learning (ML).

Many of these concerns revolve around the possibility that algorithmic processes can unwittingly inflict racial biases, privacy encroachments, and job-killing automations on society at large, or on vulnerable segments thereof. Surprisingly, some leading tech industry execs even regard algorithmic processes as a potential existential threat to humanity. Other observers see ample potential for algorithmic outcomes to grow increasingly absurd and counterproductive.

Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity — coupled with their daunting size, complexity and obscurity — presents the human race with a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions?

Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation may force your hand in this regard. GDPR prohibits any “automated individual decision-making” that “significantly affects” EU citizens.

Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data — including behavior, location, movements, health, interests, preferences, economic status, and so on — into automated decisions. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision. That, in turn, requires that an audit log be kept for review and that auditing tools support rollup of algorithmic decision factors.
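
To make that audit-log requirement concrete, here is a minimal Python sketch of the kind of decision logging it implies; the decide_loan function, the field names, and the file path are illustrative assumptions, not a reference to any standard GDPR tooling. Appending one self-contained JSON record per decision keeps the factors behind each outcome reviewable without changing the decision logic itself.

```python
# Illustrative sketch only: logs each automated decision as a JSON line so the
# factors behind it can be reviewed later. Field names and the decide_loan()
# function are assumptions for this example.
import json
import time
import functools

AUDIT_LOG_PATH = "decision_audit.log"  # assumed location

def audited_decision(model_version):
    """Decorator that appends an audit record for every decision a function makes."""
    def wrap(func):
        @functools.wraps(func)
        def inner(**features):
            decision = func(**features)
            record = {
                "timestamp": time.time(),
                "decision_function": func.__name__,
                "model_version": model_version,
                "input_factors": features,
                "decision": decision,
            }
            with open(AUDIT_LOG_PATH, "a") as log:
                log.write(json.dumps(record) + "\n")
            return decision
        return inner
    return wrap

@audited_decision(model_version="loan-scorer-1.4")  # hypothetical model
def decide_loan(income, debt_ratio, region):
    """Toy stand-in for an automated lending decision."""
    return "approve" if income > 40000 and debt_ratio < 0.35 else "refer_to_human"

if __name__ == "__main__":
    print(decide_loan(income=52000, debt_ratio=0.28, region="EU"))
```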

Considering how influential GDPR has been on other privacy-focused regulatory initiatives around the world, it wouldn’t be surprising to see laws and regulations impose these sorts of auditing requirements on businesses operating in most industrialized nations before long.

For example, US federal lawmakers introduced the Algorithmic Accountability Act in 2019 to require companies to survey and fix algorithms that result in discriminatory or unfair treatment.

Anticipating this trend by a decade, the US Federal Reserve’s SR 11-7 guidance on model risk management, issued in 2011, mandates that banking organizations conduct audits of ML and other statistical models in order to be alert to the possibility of financial loss due to algorithmic decisions. It also spells out the key aspects of an effective model risk management framework, including robust model development, implementation, and use; effective model validation; and sound governance, policies, and controls.

Even if your organization is not responding to any specific legal or regulatory requirement to root out bias, discrimination, and other fairness issues in its algorithms, auditing may be prudent from a public relations standpoint. If nothing else, it would signal enterprise commitment to ethical guidance that encompasses application development and machine learning DevOps practices.

But algorithms can be fearsomely complex entities to audit

CIOs need to get ahead of this trend by establishing internal practices focused on algorithm auditing, accounting, and transparency. Organizations in every industry should be prepared to respond to growing demands that they audit the complete set of business rules and AI/DL/ML models that their developers have encoded into any processes that impact customers, employees, and other stakeholders.

Of course, that can be a tall order to fill. For example, GDPR’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity — coupled with their daunting size, complexity, and obscurity — presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms — be they machine learning models, convolutional neural networks, or something else entirely — are authored by an ever-changing cast of programmers over many years.

Most organizations — even the likes of Amazon, Google, and Facebook — might find it difficult to keep track of all the variables encoded into their algorithmic business processes. What could prove even trickier is the requirement that they roll up these audits into plain-English narratives that explain to a customer, regulator, or jury why a particular algorithmic process took a specific action under real-world circumstances. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding.
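
To see how hard that rollup is even in miniature, the sketch below (assuming a record shaped like the hypothetical audit-log entry above) turns one logged decision into a plain-English summary of its recorded factors.

```python
# Illustrative sketch: turn one logged decision record into a plain-English
# summary. The record structure is a hypothetical audit-log entry, not a
# standard format.
def narrate_decision(record):
    factors = ", ".join(f"{name} = {value}" for name, value in record["input_factors"].items())
    return (
        f"On this request, model version {record['model_version']} "
        f"returned the decision '{record['decision']}' "
        f"after weighing the following recorded factors: {factors}."
    )

example_record = {
    "model_version": "loan-scorer-1.4",
    "decision": "refer_to_human",
    "input_factors": {"income": 31000, "debt_ratio": 0.41, "region": "EU"},
}

print(narrate_decision(example_record))
```

Even this toy narrative only lists the recorded factors; explaining why they produced that outcome still demands interpretability work and a human storyteller.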

Throwing more algorithm experts at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. Explaining what goes on inside an algorithm is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. One can’t easily trace their precise path to a final answer.
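
One common, if partial, workaround is to probe a trained model from the outside rather than trace its internal path. The sketch below uses scikit-learn’s permutation importance on synthetic data purely as an illustration: it ranks which inputs the model leans on most, which is useful evidence for an audit but far short of a complete explanation of any individual decision.

```python
# Illustrative sketch: estimate which features a black-box model relies on by
# shuffling each feature and measuring the drop in accuracy (permutation
# importance). Uses synthetic data; not a full explanation of any decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```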

Algorithmic auditing is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware layers.

Most of the people you’re trying to explain this stuff to may not know a machine-learning algorithm from a hole in the ground. More often than we’d like to believe, there will be no single human expert — or even (irony alert) algorithmic tool — that can frame a specific decision-automation narrative in simple, but not simplistic, English. Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance — such as a legal proceeding, contractual dispute, or showstopping technical glitch — will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory.

Establishing a standard approach to algorithmic auditing

CIOs should recognize that they don’t need to go it alone on algorithm accounting. Enterprises should be able to call on independent third-party algorithm auditors. Auditors may be called on to review algorithms prior to deployment as part of the DevOps process, or post-deployment in response to unexpected legal, regulatory, and other challenges.

Some specialized consultancies offer algorithm auditing services to private and public sector clients. These include:

BNH.ai: This firm describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.” It provides enterprise-wide assessments of AI liabilities and model governance practices; AI incident detection and response; model- and project-specific risk certifications; and regulatory and compliance guidance. It also trains clients’ technical, legal, and risk personnel to perform algorithm audits.

O’Neil Risk Consulting and Algorithmic Auditing: ORCAA describes itself as a “consultancy that helps companies and organizations manage and audit algorithmic risks.” It works with clients to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. It helps clients institute “early warning systems” that flag when a problematic algorithm (ethical, legal, reputational, or otherwise) is in development or in production, and escalate the matter to the relevant parties for remediation. It serves as an expert witness to assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. It helps organizations develop strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools. It works with regulators to translate fairness laws and rules into specific standards for algorithm builders. And it trains client personnel on algorithm auditing.

Currently, there are few hard-and-fast standards in algorithm auditing. What gets included in an audit and how the auditing process is conducted are more or less defined by every enterprise that undertakes it, or by the specific consultancy being engaged to conduct it. Looking ahead to possible future standards in algorithm auditing, Google Research and OpenAI teamed with a wide range of universities and research institutes last year to publish a research study that recommends third-party auditing of AI systems. The paper also recommends that enterprises:

  • Develop audit trail requirements for “safety-critical applications” of AI systems;
  • Conduct regular audits and risk assessments associated with the AI-based algorithmic systems that they develop and manage (a minimal bias-check sketch follows this list);
  • Institute bias and safety bounties to strengthen incentives and processes for auditing and remediating issues with AI systems;
  • Share audit logs and other information about AI system incidents with peers through collaborative processes;
  • Share best practices and tools for algorithm auditing and risk assessment; and
  • Conduct research into the interpretability and transparency of AI systems to support more efficient and effective auditing and risk assessment.
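
As a sense of what the bias portion of such a recurring audit might compute, the following sketch calculates a demographic parity gap, that is, the difference in positive-outcome rates between two groups. The sample data and the 0.10 review threshold are assumptions for illustration; real audits weigh multiple fairness metrics along with legal and contextual judgment.

```python
# Illustrative bias check: compare positive-outcome rates across two groups
# (demographic parity gap). The data and the 0.10 threshold are assumptions;
# real audits use several fairness metrics plus legal/contextual judgment.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

sample = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 62 + [("group_b", False)] * 38
)

rates, gap = demographic_parity_gap(sample)
print(f"approval rates: {rates}, gap: {gap:.2f}")
if gap > 0.10:  # assumed review threshold
    print("Gap exceeds threshold; flag this model for human review.")
```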

Other recent AI industry initiatives relevant to standardization of algorithm auditing include:

  • Google published an internal audit framework that is designed to help enterprise engineering teams audit AI systems for privacy, bias, and other ethical issues before deploying them.
  • AI researchers from Google, Mozilla, and the University of Washington published a paper that outlines improved processes for auditing and data management to ensure that ethical principles are built into DevOps workflows that deploy AI/DL/ML algorithms into applications.
  • The Partnership on AI published a database to document instances in which AI systems fail to live up to acceptable anti-bias, ethical, and other practices.

Recommendations

CIOs should explore how best to institute algorithmic auditing in their organizations’ DevOps practices. […]