How To Define Risks for Your Information Assets

To define risks, learn where they come from, and understand their effect on information assets and on the operation of your company, you will need to carry out a risk assessment. In this article we will talk about IT assets and risks. I’m not going to outline the organizational or preparatory side of things, such as appointing a risk manager or setting up the assessment process. If you need to learn about the different aspects of defining a process, take a look at ISO/IEC 27005:2018.

Basic method

There are a few different approaches to defining risk, but let’s explore the basics. The first thing you will need to do is define the scope of your information assets. Information assets are all assets that could affect the confidentiality, integrity, and availability of information within your company.

There aren’t any strict criteria on how to assess this scope. The result should be a list of systems, applications, code, etc. which you need to define risks for.

Defining your assets

Assets can be singular or grouped together to unify identical risks for a set of assets.

The simplest way is to make a logical list of systems and applications, grouping them by type. For example:

  • HR systems, like BambooHR, Zoho, Workable, etc.
  • Security systems, like IPS, SIEM, Nexpose, etc.
  • Communication systems, like Slack, Facebook Workplace, Google Meet, etc.
  • Access control systems, like PACS, CCTV, etc.
  • Business support systems, like Google Workspace, MS AD, LDAP, etc.

It’s worth taking into account that IT assets aren’t just the standard systems and applications with recognizable names, but also:

  • In-house systems
  • Your code
  • Employee workstations
  • Your network and its components
  • Software licenses
  • etc.

When grouping assets, you need to take into account how critical they are. For example, a service for ordering coffee to the office isn’t as critical as a customer support system. You set how critical each system is as you see fit, bearing in mind that the same risk can have different effects on different assets.
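As a rough illustration, the grouping above could be captured in a simple inventory structure. The asset names, group labels, and criticality levels here are hypothetical placeholders, not recommendations:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    group: str          # e.g. "HR systems", "Security systems"
    criticality: str    # "low", "medium", or "high"

# A hypothetical asset inventory
inventory = [
    Asset("BambooHR", "HR systems", "medium"),
    Asset("SIEM", "Security systems", "high"),
    Asset("Office coffee-ordering service", "Business support", "low"),
    Asset("Customer support system", "Business support", "high"),
]

# Group assets so that identical risks can be assessed once per group
by_group = defaultdict(list)
for asset in inventory:
    by_group[asset.group].append(asset.name)

for group, names in sorted(by_group.items()):
    print(f"{group}: {', '.join(names)}")
```

Keeping criticality on each asset (rather than on the group) lets you split a group later if one member turns out to be far more critical than the rest.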

Zone of responsibility

This article isn’t meant to go into detail about how to define zones of responsibility, but it’s worth mentioning briefly.

You need to define who is responsible for what: which employees or departments are responsible for which systems from a business perspective (i.e. responsible for the data and system processes) and which are responsible for the technical aspects (i.e. asset support and management). You also need to define who your users are and who assesses the risks. You can express the result using the RACI matrix:

  • (R) Responsible
  • (A) Accountable
  • (C) Consulted
  • (I) Informed

This is necessary in order to define who will:

  • Identify assets
  • Support assets
  • Assess critical nature of assets
  • Assess damage (consequences)
  • Process risks
  • Administer processes for information risk management
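One way to record the outcome is a small RACI structure mapping each activity to roles. The role names and assignments below are purely illustrative assumptions:

```python
# Illustrative RACI matrix for the risk-management activities above.
# Roles and assignments are hypothetical, not prescriptive.
raci = {
    "Identify assets":            {"IT Ops": "R", "CISO": "A", "Asset owners": "C", "Staff": "I"},
    "Support assets":             {"IT Ops": "R", "Asset owners": "A", "CISO": "I"},
    "Assess criticality":         {"Asset owners": "R", "CISO": "A", "IT Ops": "C"},
    "Assess damage":              {"Asset owners": "R", "Risk manager": "A", "CISO": "C"},
    "Process risks":              {"Risk manager": "R", "CISO": "A", "IT Ops": "C"},
    "Administer risk management": {"CISO": "R", "Risk manager": "A"},
}

def roles_with(code: str, activity: str) -> list[str]:
    """Return the roles holding a given RACI code for an activity."""
    return [role for role, c in raci[activity].items() if c == code]

print(roles_with("R", "Identify assets"))
```

A quick sanity check on such a matrix: every activity should have exactly one "A" (accountable) role.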

Damage assessment

The next step is to work with people in your company to define the damage that could result from the different risks materializing.

Take a look at the table below to see an example of how this is done.

Damage table
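Since the table itself isn’t reproduced here, the sketch below shows one plausible shape for it, inferred from the damage types used in the examples later in this article. The categories and the three-level scale are assumptions:

```python
# Hypothetical damage (consequence) table: categories and scale are
# assumptions inferred from this article's later examples.
DAMAGE_LEVELS = ["Low", "Moderate", "High"]

damage_table = {
    "Financial loss":                        "Direct monetary impact of the incident",
    "Reputational damage":                   "Loss of customer or partner trust",
    "Idleness or inefficiency in operation": "Downtime or degraded processes",
    "Contravention of laws and regulations": "Fines, sanctions, or legal action",
}

def assess(category: str, level: str) -> tuple[str, str]:
    """Record an assessed damage level for a known category."""
    assert category in damage_table and level in DAMAGE_LEVELS
    return (category, level)

print(assess("Reputational damage", "Moderate"))
```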

Identification of risks

You can identify risks by combining the threats and vulnerabilities associated with each asset. Risks can be categorized by the type of impact they could have on a system or dataset:

  • Confidentiality
  • Integrity
  • Availability

Threats and vulnerabilities can be split into two types, which will help you define the impact level a risk will have on an asset and the overall applicability of the risk to a particular asset:

  • Internal (within your security or network perimeter)
  • External (outside of your company’s perimeter)

Example

Consider the risk that sensitive data could be stolen in transit across your network due to an incorrect system configuration. For data transferred within the company (internal threat), the effects of this risk materializing are much smaller than if you were transferring the data externally (e.g. to a cloud provider).
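The combination step can be sketched as pairing each threat with each vulnerability per asset. The asset, threats, and vulnerabilities below are made-up examples:

```python
# Minimal sketch: derive candidate risks by pairing each asset's threats
# (tagged internal/external) with its vulnerabilities. All entries are
# illustrative; a real list comes from your own assessment.
from itertools import product

asset = "Customer database"
threats = [
    ("data interception in transit", "external"),
    ("unauthorized access by staff", "internal"),
]
vulnerabilities = [
    "incorrect TLS configuration",
    "overly broad access rights",
]

# Each (threat, vulnerability) pair is a candidate risk to evaluate
# against confidentiality, integrity, and availability.
risks = [
    f"{asset}: {threat} ({origin}) via {vuln}"
    for (threat, origin), vuln in product(threats, vulnerabilities)
]
for r in risks:
    print(r)
```

In practice you would then discard pairs that don’t apply (not every threat can exploit every vulnerability), which is why this is a starting list, not a final one.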

The most difficult part of all is defining and forming the list of risks. You can use the risks that are listed in standards such as ISO, PCI DSS, NIST, COBIT, etc. and adapt them to your own processes.

The domains you consider should include but not be limited to:

  • Access and role management
  • Change and development management
  • System backup and recovery
  • Monitoring
  • Password security
  • Vulnerability management
  • Privileged account management
  • Third party management
  • Physical security

What else affects risks?

The possibility and frequency that a risk might be realized also affects your assessment. Let’s take a look at an example.

Example 1

Unauthorized access to internal systems leads to the system admin password being exposed. However, the system can only be accessed from the company’s local network (where connection is only possible with a user certificate and a designated device) or via a VPN that requires two-factor authentication.

In this case:

  • The chance that this risk will be realized is low
  • The possible frequency of this risk being realized is low

As we can see, the actual exposure from this risk is practically zero, and you can either leave it out of scope or mark it as a risk that you are willing to accept.

Now, let’s take a look at this risk in different circumstances. If we say this risk applies to an external system in the cloud with local authorization over plain HTTP, then:

  • The chance that this risk will be realized is high because the admin password is transmitted across an open channel and there is no additional security applied to the admin account
  • The possible frequency of this risk being realized is high because the system is accessible from anywhere with an internet connection

As you can see, the circumstances are something you need to consider when defining and grouping risks for assets according to type and critical nature.
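A minimal sketch of how likelihood, expected frequency, and impact might be combined into a qualitative score follows. The three-level scale and the accept/mitigate threshold are assumptions for illustration, not part of any standard:

```python
# Hypothetical qualitative risk scoring: the 3-level scale and the
# accept/mitigate threshold are assumptions, not a standard method.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, frequency: str, impact: str) -> int:
    """Combine likelihood, expected frequency, and impact into a score."""
    return LEVELS[likelihood] * LEVELS[frequency] * LEVELS[impact]

# Example 1: internal system behind a certificate-gated LAN and 2FA VPN
internal = risk_score("low", "low", "high")      # 1 * 1 * 3 = 3
# The same risk for a cloud system with local auth over plain HTTP
exposed  = risk_score("high", "high", "high")    # 3 * 3 * 3 = 27

for name, score in [("internal", internal), ("exposed", exposed)]:
    action = "accept" if score <= 4 else "mitigate"
    print(f"{name}: score={score} -> {action}")
```

The point is not the arithmetic itself but that the same risk, against the same asset, lands in different treatment buckets depending on the circumstances.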

Let’s take a look at some more examples in the context of a marketplace and consider their impact.

Example 2

Neither the company’s site nor its mobile application has undergone a comprehensive security review during design and implementation. In addition, there is no process of continuous security assessment (vulnerability detection) for the site or mobile application.

This may result in prolonged existence of exploitable vulnerabilities which may lead to the systems being compromised by an outside intruder and a leak of confidential data.

This would impact on:

  • Data confidentiality (a misconfigured authentication form that grants access to client data)
  • Data and application integrity (vulnerabilities like SQL injection)
  • Application availability (e.g. DDoS vulnerabilities)

If we consider the risks and outcome using the damage table above, we have several types of harm:

  • Reputational damage — Moderate
  • Idleness or inefficiency in service operation — Low
  • Contravention of laws and regulations — Moderate

You should define the value of the damage and the impact in a way appropriate for your business.

Example 3

The company’s disaster recovery plan is outdated and has not been tested for years. Given the moderate potential of an intruder breaching the systems, a combination of events may result in the inability to restore operations at the recovery site within an acceptable time frame.

The impact will be on data integrity or availability, and the harm will be:

  • Financial loss – High, because it’s very damaging for a marketplace to lose its customer data, and the company loses money whenever customers can’t order goods.

What comes out the other end

When you have completed the process of defining and assessing risks, you should have a document/matrix/table which shows, for each asset or group of assets: […]
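One plausible shape for that resulting table, expressed as CSV, is sketched below. The column names and the sample row are assumptions based on the steps described above:

```python
# Hypothetical risk-register layout; columns and the sample entry are
# assumptions inferred from the assessment steps in this article.
import csv
import io

columns = ["asset_group", "risk", "impact_type", "likelihood",
           "frequency", "damage", "owner", "treatment"]

rows = [
    ["Communication systems", "Credential phishing via chat",
     "confidentiality", "medium", "high", "Moderate", "CISO", "mitigate"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
writer.writerows(rows)
print(buf.getvalue())
```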

 

 

A CIO’s Introduction to the Metaverse

The “metaverse” is coming. Are you ready? Microsoft, Nvidia, and Facebook have all announced significant applications to give enterprises a door into the metaverse. Many startups are also building this kind of technology.

But just what is the metaverse anyway? Is it something that CIOs need to have on their radar? What are the use cases for businesses? And what are the caveats that organizations need to watch for to reduce risk?

What Is a Metaverse?

A metaverse is essentially a 3D mixed-reality “place” that combines the real, physical world with the digital world. It is persistent, meaning it continues to exist even if you close the app or log out. It is also collaborative, meaning that people in that world see the same thing and can work together. Some experts say that the metaverse will be a new 3D layer of the internet. Gartner’s definition goes one step further, says Tuong Nguyen, a senior research analyst at Gartner: a true metaverse must be interoperable with other metaverses (and thus many of today’s iterations don’t yet fit the Gartner definition).

Here’s how Nvidia CEO Jensen Huang put it during his keynote address at the Nvidia GTC 2021 online event this month: “The internet is essentially a digital overlay on the world. The overlay is largely 2D information — text, voice, images, video — but that’s about to change. We now have the technology to create new 3D virtual worlds or model our physical world.”

Today’s video conferencing, driven into the mainstream by the pandemic, is an example of two-dimensional collaboration. People can participate via their laptop cameras and microphones from home, or they can be in the office in a teleconference room. They can share their screens or use apps that allow for a collaborative whiteboard.

A metaverse layers immersive 3D on top of that. Participants can create avatars (digital representations of themselves) and use those to enter a virtual 3D room. In that room they can collaborate on a virtual whiteboard on the virtual wall or walk around a virtual 3D model of a car they are designing, for instance.

That’s essentially the use case that Microsoft CEO Satya Nadella described when he announced Mesh for Microsoft Teams at the tech giant’s Ignite conference this month. Microsoft will add this capability to its Teams collaboration tool starting in 2022.

This feature combines the capabilities of Microsoft’s mixed-reality platform Mesh (announced in March 2021 as a platform for building metaverses) with the productivity tools of Microsoft Teams, according to Microsoft.

Facebook, which rebranded itself as Meta earlier this year, introduced Horizon Workrooms in August, which are VR meeting spaces for remote collaboration.

Metaverse Use Cases for the Enterprise

Collaboration is one of three primary use cases for a metaverse in the enterprise right now, according to Forrester VP J.P. Gownder.

Another primary use case is one championed by chip giant Nvidia — simulations and digital twins. Huang announced Nvidia Omniverse Enterprise during his keynote address at the company’s GTC 2021 online AI conference this month and offered several use cases that focused on simulations and digital twins in industrial settings such as warehouses, plants, and factories.

If you are an organization in an industry with expensive assets — for instance oil and gas, manufacturing, or logistics — it makes sense to have this use case on your radar, according to Gartner’s Nguyen. “That’s where augmented reality is benefiting enterprise right now,” he says.

As an example, during his keynote address, Nvidia’s Huang showed a video of a virtual warehouse created with Nvidia Omniverse Enterprise enabling an organization to visualize the impact of optimized routing in an automated order picking scenario. That’s an example of a particular use case, but Omniverse itself is Nvidia’s platform to enable organizations to create their own simulations or virtual worlds.

“We built Omniverse for builders of these virtual worlds,” Huang said at GTC. “Some worlds will be for gatherings and games. But a great many will be built by scientists, creators, and companies. Virtual worlds will crop up like websites today.”

The third use case for enterprises falls in the business-to-consumer marketing realm as demonstrated by online gaming platform company Roblox, according to Gownder. On this gaming platform that’s popular with the pre-teen crowd, users can purchase digital clothing to outfit their avatars, and brands are taking notice. For instance, apparel brands including Vans and Gucci have created customized, branded worlds on Roblox.

Should CIOs Put Metaverse on Their Tech Roadmaps?

Yes, but no need to jump in with both feet yet, the experts say.

“CIOs should be thinking about these examples,” says Nguyen. “But you don’t need to have a metaverse presence.” Yet. “It would behoove you to get that frame of reference because of the inevitability. Not being a part of this in some way, you will likely be missing out substantially, just like any organization that doesn’t have a website today.”

Indeed, it may pay off if you decide to wait for version 2. Microsoft’s Mesh for Teams lets users create an avatar and use that instead of turning on their webcams. These personal avatars come complete with facial expressions to convey reactions.

“This is unlikely to get the same level of engagement for others utilizing video in a meeting,” says Tim Banting, Omdia’s practice leader for the Digital Workplace. “Consequently, Omdia believes this feature to be somewhat of a gimmick.”

However, some other use cases may appeal to organizations, he adds. Yet there are other caveats for enterprises to consider when it comes to practical implementation.

“A specific headset, rather than a PC or mobile device, would be required to maximize the user experience,” Banting says. “With many organizations failing to offer remote staff business-quality headsets and external webcams, it’s unlikely that enterprises could justify the expense of VR equipment for regular employee meetings.”

Do You Need to Skill Up Your IT Workforce?

Many of the benefits of metaverse technology are already available through your existing technology vendors, like Mesh for Microsoft Teams. What’s more, Banting points out that in the consumer VR world, “it’s very much a plug and play environment with easy setup.”

However, “Where things could get interesting is when businesses want to create their own ‘branded’ metaverse. I expect this will be an advanced services opportunity for a new category of partners working in conjunction with marketing.”

Gownder said that an understanding of 3D is a rare skill today, so finding people who can develop on Unity or Unreal Engine may be valuable. But it’s not something that everyone will need to jump on right away. […]

 

Bridging the gender gap in cybersecurity

In a panel at the (ISC)² Security Congress 2021, Sharon Smith, CISSP, Lori Ross O’Neil, CISSP, Aanchal Gupta and Meg West, M.S., CISSP, discussed the challenges and opportunities of being a woman in cybersecurity. From the factors that lead to women being underrepresented in cybersecurity to removing those barriers, the cybersecurity leaders shared their ideas on how to bridge the gender gap in the field.

Contributing factors to the underrepresentation of women

Gupta believes that a cybersecurity awareness gap contributes to the underrepresentation of women in the field. With her background in software engineering, Gupta declined her first offer to pivot to cybersecurity, believing that she didn’t possess the correct qualifications. Once she entered the field, she realized the vastness of the cybersecurity space and how people with varied skillsets thrive in the industry. Helping women understand that they don’t need a cybersecurity or computer science degree to enter the field can attract more qualified women to the industry. Smith added that hiring managers should also be aware that qualified candidates exist outside of those majors.

Women looking to transition into cybersecurity mid-career can frame the change as adding cyber to their existing profession. O’Neil’s passion is bringing cybersecurity to other disciplines — someone with an accounting or chemistry background can benefit from cybersecurity coursework in order to do their job safely and securely. Certifications are a great way to enter the industry, and seeking out online communities and information can help newcomers immerse themselves in the cybersecurity sphere.

How do we remove barriers in cybersecurity?

Although the number of women in cybersecurity has increased over the past years, there is still a way to go to achieve equal gender representation in the field. “We should get ahead of this problem by engaging with women and other underrepresented groups early on,” said Gupta. Reaching young people with capture-the-flag style exercises, coding programs and cybersecurity information provides industry exposure at an early age and allows them to imagine what a career in cybersecurity might look like.

Breaking down self-imposed barriers, changing a broken hiring system that relies on AI searching for keywords to select candidates and more men stepping up as allies in the field are all ideas suggested by Smith to bridge the gender gap in cybersecurity. Looking for opportunities to educate women and other underrepresented groups on cybersecurity roles can increase the amount of those groups in the field.

All the women on the panel shared experiences of being affected by sexism in the industry. West began as an associate in cybersecurity at a Fortune 100 company, the youngest and the only female employee on the team. On one of her first days on the job, a coworker told her that the only reason she got the job was to fill a diversity quota. West took this comment as a challenge: within about three years, she was promoted from cybersecurity associate to Global Incident Response Manager at the age of 24. She created the role and advocated for her promotion with statistics of her accomplishments. “Just because an opportunity does not exist, that doesn’t mean I can’t create it myself,” said West. […]

 

How gamification boosts security awareness training effectiveness

Ransomware and its partner in crime, phishing, are very much in the spotlight of late. According to APWG’s Phishing Activity Trends Report, the quantity of phishing doubled in 2020 and continues to rise.

In response, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has launched a campaign to address ransomware. It takes a two-pronged approach: improving security readiness and raising awareness. Its campaign encourages organizations to implement best practices, tools and resources to mitigate the risk of falling victim to ransomware.

“Anyone can be the victim of ransomware, and so everyone should take steps to protect their systems,” said Director of CISA Brandon Wales.

But protection doesn’t just mean buying new security systems and implementing better processes and policies. The CISA campaign emphasizes training as an integral part of anti-ransomware and anti-phishing success.

Employee security awareness training

“The greatest technology in the world won’t do you much good if your users are not well educated on security and don’t know what to do and not do,” said Greg Schulz, an analyst at Server StorageIO Group. “Too many security issues are attributed to humans falling prey to social engineering.”

But the effectiveness of security training varies significantly depending on the approach. According to a report by Computer Economics, Security Training Adoption and Best Practices 2021, the security training given to staff in some cases only goes as far as insisting all users sign off on reading organizational security policies and procedures. How much of it are they likely to retain?

“All it takes is one weak link among the workforce, and state-of-the-art security technology is breached,” said Frank Scavo, President of Computer Economics. “The goal of security training should be to erect a human firewall of informed and ever-vigilant users: an army of personnel with high awareness of social engineering methods provides an extra safeguard against attack.”

Lunch-and-learn anti-phishing awareness briefings are a little better than only making employees read policy. Lunch-and-learns may earn a small improvement in the reduction of phishing success, but not nearly enough. Similarly, traditional classroom and textbook learning has only a modest effect on the number of phishing victims.

What it takes is a multi-faceted response to phishing and ransomware via interactive learning. The introduction of gamification, in particular, to the field of security awareness training has been shown to boost results.

Gamification of cybersecurity training

“People can easily tune out when subjected to static security awareness training,” said Schulz. “But make it an interactive learning experience or game, and people are more likely to engage while being entertained and educated on the do’s and don’ts of modern IT security threat risks.”

What exactly is gamification? Gabe Zichermann, author of “The Gamification Revolution,” defined it as “taking what’s fun about games and applying it to situations that maybe aren’t so fun.”

Gamification is essentially about finding ways to engage people emotionally to motivate them to behave in a particular way or decide to forward a specific goal. In training, it’s used to make learning a lot more fun.

Effective gamification techniques applied to security training use quizzes, interactive videos, cartoons and short films with characters and plots that entertain while getting across the important facts about phishing and other scams — and how to avoid them.

In “The Forrester Wave: Security Awareness and Training Solutions, Q1 2020,” Jinan Budge, an analyst at Forrester Research, said, “Successful vendors deliver the ABCs of security: awareness, behavior and culture. Look for providers that truly understand how training contributes to your overall security culture and don’t just check the training requirement box.”

Later in the same report, she added: “Choose vendors that create positive content with inclusive, clear and compelling images and that engage users with alternative content types like gamification, microlearning and virtual reality (VR). Some vendors offer true gamification that involves teams, competition and advanced graphic design, engaging discerning audiences on a deeper level than multiple-choice tests or phishing simulations.” […]

 

Key Steps To Secure Every Identity in the Age of Digital Transformation

Identity management is in crisis. The need for a secure, seamless authentication method for all digital identities has reached a critical point with the widespread remote working practices and accelerated digital transformation ushered in by the pandemic. Microsoft’s CEO went as far as to say that the pandemic was the foundation for ‘two years’ worth of digital transformation in two months’ as organizations of all shapes and sizes prepared for the new normal.

The past year’s accelerated digital transformation has highlighted the need to leave behind the total reliance on passwords, reflecting the fact that they aren’t as effective as the already available alternatives. Compromised passwords are far and away the most common cause of data breaches, with 80% of breaches involving the misuse or abuse of credentials. Gartner predicts that 60% of large enterprises and 90% of midsize businesses will be using passwordless authentication by 2024. Not only will this reduce the security issues around passwords, but it will also free up the time IT teams spend dealing with issues of authentication.

Businesses are turning to Zero Trust security with multi-factor authentication as a step towards passwordless, which is a key factor in an identity-first cybersecurity strategy. MFA is now a must for authenticating end users, but it is not enough in itself. A comprehensive approach to identity management will not just ensure users stay safe and secure but will also secure your machines, devices and interactions. IT leaders need to incorporate certificate-based services in order to defend all the identities on their network. This article will take us through three facets of a holistic approach to identity management that will help IT practitioners go beyond user identity management to ensure all the machines, devices, and interactions on their network are trusted identities.

Enabling trusted identities with automated PKI

It isn’t enough to authenticate just your users; businesses need to authenticate all their identities – whether systems, machines, or digital processes like signing contracts or wiring money – and ensure trusted and secure interactions among them.

This is where we see the benefits of a solution such as Public Key Infrastructure (PKI). With PKI, you can issue certificates for a variety of different machines, so you no longer need to worry about unverified devices or applications on your network. In the era of flexible, hybrid, remote and ‘work from anywhere’ models, this is an invaluable part of PKI’s appeal: you can rest easy that a user is not bringing an unsafe device into the corporate network, even if this is a personal device which they are using for work.

When deployed and managed correctly by a knowledgeable security partner on your behalf, PKI is an effective authentication method both in terms of costs and security. By selecting PKI security partners, overworked IT teams can automate their PKI management and alleviate the pressure of implementing this technology manually without having the expertise in-house.

Securing digital interactions

Phishing emails and other forms of email-based compromise are increasingly worrying for IT teams and are the frequent causes of a data breach. Millions of phishing attacks are attempted every day, with the FBI reporting that in 2020 there were 11 times more phishing complaints recorded than in 2016. While many of these are random and ineffective, those who do manage to successfully target a specific individual at a company are often hugely lucrative for cybercriminals. A business email compromise (BEC) attack costs a business an average of $3.9 million.

These high stakes are another reason why PKI should be deployed not just for access requirements, but across digital interactions throughout your business. By issuing personalized, secure certificates for emails, documents, and other business communications, you can sign and encrypt essential online interactions. Certificates can also be issued for authentication tools such as hardware tokens or smart cards. This will help to streamline digital transformation not just for users but for their online interactions, making it that much harder for hackers to operate with impunity on your network.

A user empowerment mentality

It’s also of paramount importance that the authentication solutions you are using are assessed on their long-term impact. Choosing an automated, user-centric solution is the best way to simplify the whole process, while ensuring that your IT teams and wider employees are able to do their jobs with as little interference as possible. Placing the user front and center of the process is the best way to ensure this is done. A user-centric policy also accounts for the diversity in the types of users that identity management must cater for – differing login locations, devices, and combinations of remote and on-premises work can all be included in a user-centric identity management policy. […]

 

 

 

Cloud Native Driving Change in Enterprise and Analytics

A pair of keynote talks at the DeveloperWeek Global conference held online this week hashed out the growing trends among enterprises going cloud native and how cloud native can affect the future of business analytics. Dan McKinney, senior engineer and developer relations lead with Cloudsmith, focused on cloud native supporting the continuous software pipelines in enterprises. Roman Stanek, CEO of GoodData, spoke on the influence cloud native can have on the analytics space. Their keynotes highlighted how software development in the cloud is creating new dynamics within organizations.

In his keynote, Stanek spoke about how cloud native could transform analytics and business intelligence. He described how developers might take ownership of business intelligence, looking at how data is exposed, workflows, and platforms. “Most people are just overloaded with PDF files and Excel files and it’s up to them to visualize and interpret the data,” Stanek said.

There is a democratization underway of data embedded into workflows and Slack, he said, but being able to expose data from applications or natively integrated in applications is the province of developers. Tools exist, Stanek said, for developers to make such data analytics more accessible and understandable by users. “We want to help people make decisions,” he said. “We also want to get them data at the right time, with the right context and volume.”

Stanek said he sees more developers owning business applications, insights, and intelligence up to the point where end users can make decisions. “This industry is heading away from an isolated industry where business people are copying data into visualization tools and data preparation tools and analytics tools,” he said. “We are moving into a world where we will be providing all of this functionality as a headless functionality.” The rise of headless compute services, which do not have local keyboards, monitors, or other means of input and are controlled over a network, may lead to different composition tools that allow business users to build their own applications with low-code/no-code resources, Stanek said.

Enterprise understanding of what constitutes cloud is evolving as well. Though cloud native and cloud hosted sound similar, McKinney said they can be different resources. “The cloud goes way beyond just storing and hosting,” he said. “It is at the heart of a whole new range of technical possibilities.” Many enterprises are moving from on-prem and cloud-hosted solutions to completely cloud-native solutions for continuous software, McKinney said, as cloud providers expand their offerings. “It is opening up new ways to build and deploy applications.”

The first wave of applications migrated to the cloud were cloud hosted, he said. “At a very high level, a cloud-hosted application has been lifted and shifted onto cloud-based server instances.” That gave them access to basic features from cloud providers and offered some advantages to on-prem applications, McKinney said. Still, the underlying architecture of the applications remained largely the same. “Legacy applications migrated to the cloud were never built to take advantage of the paradigm shift that cloud providers present,” he said. Such applications cannot take advantage of shared services or pools of resources and are not suitable for scaling. “It doesn’t have the elasticity,” McKinney said.

The march toward the cloud has since accelerated: the next wave of applications to take advantage of the cloud were constructed natively, he said. Applications born and deployed with the technology of cloud providers in mind typically make use of continuous integration, orchestrators, container engines, and microservices, McKinney said. “Cloud-native applications are increasingly architected as smaller and smaller pieces and they share and reuse services wherever possible.”

Enterprises favor cloud-native solutions now for such reasons as the total cost of ownership, performance and security of the solution, and accommodating distributed teams, McKinney said. There is a desire, he said, to shift from capital expense on infrastructure to operational expense on running costs. These days the costs of cloud-native applications can be calculated fairly easily, McKinney said. Cloud-native resources offer fully managed service models, which can maintain the application itself. “You don’t have to think about what version of the application you have deployed,” he said. “It’s all part of the subscription.”

The ability to scale up with the cloud to meet increased demand was one of the first drivers of migration, McKinney said, but cloud-native applications can go beyond simple scaling. “Cloud-native applications can scale down to the level of individual functions,” he said. “It’s more responsive, efficient, and able to better suit increasing demands — particularly spike loads.”

 

Securing cloud endpoints: What you should know

What is endpoint security in the cloud?

Endpoint security solutions, such as endpoint protection platforms (EPP) and endpoint detection and response (EDR), were once considered a separate discipline from cloud security. These technologies have since merged to create solutions for endpoint protection in the cloud.

Traditional endpoint security was only sufficient when employees all worked on-premises, accessing workloads through company computers. However, changes to the market, including greater competition, the need for 24/7 accessibility, and rising IT costs, have led more organizations to embrace cloud computing to enable a more open and accessible IT environment. The cloud is accessible from any device, which is good for work flexibility but can complicate security.

Challenges for cloud security include:

  • Cloud systems introduce new types of endpoints, including SaaS applications, cloud storage buckets, managed databases and compute instances (such as EC2 instances or Azure VMs). Each of these is, for all intents and purposes, an endpoint that attackers can gain access to and compromise.
  • The number and types of endpoints accessing the cloud are constantly growing, with devices ranging from laptops to smartphones and tablets. As the Internet of Things (IoT) grows, so does the list of devices and the associated vulnerabilities.
  • External bring-your-own-device (BYOD) endpoints do not provide sufficient visibility into their state or contents. You cannot know what potential security threat may be hidden in a connected device.
  • It is difficult to manage and monitor endpoint behavior and access. Even if your security policy stipulates a list of approved devices and installed apps, you need the right tools to monitor and enforce endpoint security. To ensure you are protected, you need to find a way to extend security to include monitoring remote endpoint access and behavior.
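As a minimal illustration of the inventory problem these points describe, the sketch below groups endpoint records by type so that unrecognized devices stand out. The type names and asset records are hypothetical, not tied to any provider's API:

```python
# Sketch: a minimal cloud endpoint inventory that groups assets by type,
# so unrecognized or unmanaged endpoints stand out for review.
# Categories and asset names are illustrative assumptions.

ENDPOINT_TYPES = {"saas_app", "storage_bucket", "managed_db", "compute_instance"}

def classify_endpoints(endpoints):
    """Group endpoint records by type; unknown types go to 'unclassified'."""
    inventory = {}
    for ep in endpoints:
        kind = ep.get("type") if ep.get("type") in ENDPOINT_TYPES else "unclassified"
        inventory.setdefault(kind, []).append(ep["name"])
    return inventory

assets = [
    {"name": "crm-app", "type": "saas_app"},
    {"name": "logs-bucket", "type": "storage_bucket"},
    {"name": "legacy-box", "type": "bare_metal"},  # not a recognized cloud type
]
inventory = classify_endpoints(assets)
```

Anything that lands in the `unclassified` bucket is a candidate for the visibility gap the list above warns about.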

Cloud endpoint security challenges

Let’s take a closer look at security challenges affecting endpoints in public and private clouds.

Public cloud endpoint security

Public cloud resources are more vulnerable to attackers because they are outside the control of IT departments and typically have access to public networks. All public cloud providers use a shared responsibility model, in which the cloud provider secures cloud infrastructure, while cloud users must secure their workloads and data and are responsible for secure configuration.

Many organizations use multiple computing models, including public Infrastructure-as-a-Service (IaaS) such as Amazon EC2, Platform-as-a-Service (PaaS) such as AWS Lambda and Software-as-a-Service (SaaS) such as Salesforce and Microsoft Office 365.

It can be challenging to identify endpoints, understand access controls and establish secure configurations, as these can work differently for each cloud provider. Without specialized tools, you cannot centrally view and control all your public cloud endpoints; you have to find them one by one across multiple cloud environments.

Another dimension of cloud security, which is unique to the public cloud, is that attacks can not only compromise sensitive resources but also increase cloud costs as attackers leverage cloud infrastructure to create their own, malicious resources.

Private cloud endpoint security

The private cloud may seem more secure because it is fully controlled by the organization and runs in a local data center. However, private clouds are also vulnerable to attack.

Security issues that can impact private clouds include:

  • Insider attacks — a malicious employee or attacker who holds or compromises a legitimate account within the private cloud, can use it to wage an attack. Endpoints are usually connected to other resources and networks, which can lead to lateral movements by malicious insiders.
  • Phishing — social engineering is a common way to compromise endpoints. For example, in a spearphishing attack, hackers research a victim’s behavior in your organization, send a carefully crafted email that appears trustworthy, and trick the victim into clicking a link that grants attackers access or distributes malicious code.
  • Non-compliance — organizations must ensure that endpoint controls are properly configured and sensitive data is adequately protected. If the necessary control measures are not implemented and there are audits or actual violations, the organization may lose certification or incur fines.
  • Data exfiltration — intellectual property, sensitive or business-critical data or security controls can be leaked to external sources. This is often the result of endpoint vulnerabilities. Data can be stolen by malicious software deployed on endpoints by attackers, transmitted via tunneling over traditional communication protocols (e.g. DNS) or using other methods, such as cloud storage, FTP or Tor.
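A simple heuristic for the DNS tunneling technique mentioned above can be sketched as follows; the length and depth thresholds are illustrative assumptions, not tuned production values:

```python
# Sketch: flag DNS queries that look like tunneling -- unusually long
# labels (encoded payload chunks) or many subdomain levels.
# Thresholds are illustrative assumptions.

def looks_like_dns_tunneling(qname, max_label_len=40, max_labels=6):
    labels = qname.rstrip(".").split(".")
    if len(labels) > max_labels:
        return True
    return any(len(label) > max_label_len for label in labels)

normal = looks_like_dns_tunneling("www.example.com")
suspicious = looks_like_dns_tunneling(
    "aGVsbG8tdGhpcy1pcy1hLXZlcnktbG9uZy1lbmNvZGVkLXBheWxvYWQtY2h1bmsx.evil.example.com"
)
```

Real tunneling detection would also consider query volume and entropy, but even this crude check separates ordinary lookups from base64-stuffed hostnames.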

4 Cloud endpoint security best practices

The following best practices can help you enhance endpoint security in the cloud.

Centralize your security strategy

To identify threats across multiple cloud platforms and apply a security strategy that meets the needs of each, security teams can centralize security controls and gain visibility into data across all of their cloud environments.

Information about security measures and tools should be shared between the teams responsible for each platform. Having a common protocol for secure implementation of services ensures consistency, and facilitates secure integration of multi-cloud architectures.

Secure user endpoints

Most users access cloud services through a web browser. Therefore, it is important to implement a high degree of client-side security, ensuring user browsers are up-to-date and free of vulnerabilities.

You should also consider implementing an endpoint security solution to protect your end-user devices. This is critical given the explosive growth of mobile devices and remote work, as users increasingly access cloud services through non-company-owned devices.

Combine endpoint security with additional security solutions, including firewall, mobile device security and intrusion detection/prevention (IDS/IPS) systems.

Network segmentation

EDR tools typically respond to events by isolating endpoints. This type of response quickly deters threat actors. But by creating a segmented network from the outset, you can provide additional protection and prevent attacks before they begin. You can use network segmentation to restrict access to specific services and datastores. This reduces the risk of data loss and limits the extent of damage from a successful attack.

Using Ethernet Switched Path (ESP) technology, the network structure can be hidden to further protect the network. This makes it more difficult for attackers to move laterally from one network segment to another.
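One way to picture the segmentation policy described above is as an explicit, default-deny allowlist of permitted flows between segments and services. The segment and service names below are hypothetical:

```python
# Sketch: a segmentation policy as an explicit allowlist of
# (source_segment, destination_service) pairs. Anything not listed
# is denied by default. Names are illustrative assumptions.

ALLOWED_FLOWS = {
    ("web-tier", "app-api"),
    ("app-tier", "customer-db"),
}

def flow_permitted(source_segment, destination_service):
    """Default-deny: only explicitly allowlisted flows may pass."""
    return (source_segment, destination_service) in ALLOWED_FLOWS

ok = flow_permitted("web-tier", "app-api")
blocked = flow_permitted("web-tier", "customer-db")  # web tier may not reach the DB directly
```

The design choice worth noting is the default-deny posture: a compromised web-tier endpoint cannot reach the datastore because that flow was never granted, which is exactly how segmentation limits blast radius.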

Preventing cloud phishing by securing credentials

Many security breaches are caused by leaked credentials. Users may intentionally share their credentials with others, store their credentials on public devices or use weak passwords that are easy to crack.

Credential phishing is also a major risk. Many users are easily tricked into using fake portals through malicious scripts and email scams. These users may provide their credentials without realizing that something is suspicious. Once a malicious attacker obtains these credentials, they can gain access to applications, application data and corporate systems.

To protect against these situations, you can implement endpoint protection that detects anomalous use of credentials. For example, if someone logs in from an unexpected geographic location or from multiple IPs at once, you’ll receive an alert.
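A minimal sketch of the “multiple IPs at once” signal might look like this, assuming login events arrive as (timestamp, username, source IP) tuples; the five-minute window and single-IP threshold are illustrative:

```python
# Sketch: flag a credential used from several source IPs inside a short
# window -- one simple signal of anomalous credential use.
# Event shape and thresholds are illustrative assumptions.

def concurrent_ip_alert(events, window_seconds=300, max_ips=1):
    """events: iterable of (timestamp_seconds, username, source_ip).
    Returns usernames seen from more than max_ips distinct IPs in any window."""
    alerts = set()
    for ts, user, _ip in events:
        ips = {ip for t, u, ip in events if u == user and ts <= t < ts + window_seconds}
        if len(ips) > max_ips:
            alerts.add(user)
    return alerts

logins = [
    (0, "alice", "203.0.113.10"),
    (60, "alice", "198.51.100.7"),  # second IP within five minutes
    (30, "bob", "192.0.2.20"),
]
alerts = concurrent_ip_alert(logins)
```

A production system would add geolocation lookups and stream the events rather than rescanning them, but the core rule is the same: one identity, many origins, short window.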

You should also implement a secure password and login policy. If possible, set a session timeout policy and force users to change their passwords periodically. Implement multi-factor authentication (MFA) whenever possible. If you can’t change the authentication scheme because you are using third-party services, implement an internal policy that defines password complexity and lifecycle.
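An internal password-complexity policy of the kind described can be sketched as a simple check; the specific rules (minimum length, four character classes) are illustrative assumptions, not a standard:

```python
# Sketch: an internal password-policy check enforcing length and
# character-class requirements. The exact rules are illustrative.

import re

def password_meets_policy(password, min_length=12):
    """Require min_length characters plus lower, upper, digit, and symbol."""
    if len(password) < min_length:
        return False
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

weak = password_meets_policy("password123")
strong = password_meets_policy("Corr3ct-Horse-Battery!")
```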


Machine identities: What they are and how to use automation to secure them

Security teams who aim to control secure access to networked applications and sensitive data often focus on the authentication of user credentials. Yet, the explosive growth of connected devices and machines in today’s enterprises exposes critical security vulnerabilities within machine-to-machine communications, where no human is involved. 

That’s where machine identity comes in. Machine identity is the digital credential or “fingerprint” used to establish trust, authenticate other machines, and encrypt communication.

Much more than a digital ID number or a simple identifier such as a serial number or part number, machine identity is a collection of authenticated credentials that certify that a machine is authorized access to online resources or a network. 

Machine identities are a subset of a broader digital identity foundation that also includes all human and application identities in an enterprise environment. The concept goes beyond easily recognizable use cases like authenticating a laptop that is accessing the network remotely through Wi-Fi. Machine identity is required for the millions or billions of daily communications between systems where no human is involved, like routing messages across the globe through various network appliances or application servers generating or using data stored across multiple data centers. 

Why Machine Identity Management Needs to Be Automated 

As the number of processes and devices requiring machine-to-machine communication grows, the number of machine identities to track also grows. According to the Cisco Annual Internet Report, by 2023, there will be 29.3 billion networked devices globally, up from 18.4 billion in 2018. That is more than 10 billion new devices in just five years!

Improper identity management not only makes enterprises more vulnerable to cybercriminals, malware and fraud, it also exposes organizations to risks related to employee productivity, customer experience issues, compliance shortfalls and more. While there is no stronger, more versatile authentication and encryption solution than PKI-based digital identity, the challenge for busy IT teams is that manually deploying and managing certificates is time-consuming and can result in unnecessary risk if a mistake is made. 

Whether an enterprise deploys certificates to enable device authentication for a single control network or manages millions of certificates across all its networked device identities, the end-to-end process of certificate issuance, configuration and deployment can overwhelm the workforce. 

The bottom line? Manual machine identity management is neither sustainable nor scalable.

In addition, manually managing certificates puts enterprises at significant risk of neglected certificates expiring unexpectedly. This can result in certificate-related outages, critical business system failures, and security breaches.

In recent years, expired certificates have resulted in many high-profile website and service outages. These mistakes have cost billions of dollars in lost revenue, contract penalties, lawsuits and the incalculable cost of lost customer goodwill and tarnished brand reputations. 
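To make the expiry risk concrete, here is a minimal sketch that flags certificates inside a renewal window. The hostnames, dates, and 30-day window are hypothetical, and a real system would read expiry dates from the certificates themselves rather than a hand-maintained table:

```python
# Sketch: flag certificates approaching expiry from a simple inventory.
# Hostnames, dates, and the renewal window are illustrative assumptions.

from datetime import date

def expiring_soon(cert_inventory, today, renewal_window_days=30):
    """cert_inventory: {hostname: expiry_date}. Returns hosts needing renewal."""
    return sorted(
        host
        for host, expiry in cert_inventory.items()
        if (expiry - today).days <= renewal_window_days
    )

inventory = {
    "api.example.com": date(2024, 7, 1),
    "www.example.com": date(2025, 1, 1),
}
needs_renewal = expiring_soon(inventory, today=date(2024, 6, 15))
```

Automated lifecycle management amounts to running a check like this continuously and then renewing without human intervention, rather than waiting for an outage to reveal the expired certificate.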

How to Automate Machine Identity Management

With such high stakes, IT professionals are rethinking their certificate lifecycle management strategies. Organizations need an automated solution that ensures all their digital certificates are correctly configured, installed and managed without human intervention. Yes, automation helps reduce risk, but it also aids IT departments in controlling operational costs and streamlining time-to-market for products and services.

In response to market forces and hacking attacks, PKI has become even more versatile. Consistent high uptime, interoperability and governance are still crucial benefits. But modern PKI solutions can also improve administration and certificate lifecycle management through:

●    Crypto-agility: Updating cryptographic strength and revoking and replacing at-risk certificates with quantum-safe certificates rapidly in response to new or changing threats.

●    Visibility: Viewing certificate status with a single pane of glass across all use cases.

●    Coordination: Using automation to manage a broad portfolio of tasks.

●    Scalability: Managing certificates numbering in the hundreds, thousands, or even millions.

●    Automation: Completing individual tasks while minimizing manual processes.

 

Why Detection-As-Code Is the Future of Threat Detection

As security moves to the cloud, manual threat detection processes are unable to keep pace. This article will discuss how detection engineering can advance security operations just as DevOps improved the app development world. We’ll explore detection-as-code (DaC) and enumerate several compelling benefits of this trending approach to threat detection.

What is detection-as-code?

Detection-as-code is a systematic, flexible, and comprehensive approach to threat detection powered by software, in the same way that infrastructure-as-code (IaC) and configuration-as-code are about machine-readable definition files and descriptive models for composing infrastructure at scale.

It is a structured approach to analyzing security log data used to identify attacker behaviors. Using software engineering best practices to write expressive detections and automate responses, security teams can build scalable processes to identify sophisticated threats across rapidly expanding environments.

Done right, detection engineering — the set of practices and systems to deliver modern and effective threat detection — can advance security operations just as DevOps improved the app development world.

Similar to a CI/CD workflow for code changes, a detection engineering workflow might include the following steps:

  • Observe a suspicious or malicious behavior
  • Model it in code
  • Write various test cases
  • Commit to version control
  • Deploy to staging, then production
  • Tune and update
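The steps above can be sketched in miniature: a suspicious behavior (here, a console login without MFA) modeled as a small, testable detection function. The log field names are assumptions about the schema, not any vendor’s actual format:

```python
# Sketch: a suspicious behavior modeled in code as a small, pure function
# that can be unit-tested, reviewed, and version-controlled.
# Event field names are illustrative assumptions about the log schema.

def detect_console_login_without_mfa(event):
    """Fire on a successful console login where MFA was not used."""
    return (
        event.get("event_name") == "ConsoleLogin"
        and event.get("outcome") == "success"
        and not event.get("mfa_used", False)
    )

# Test cases that would live in version control next to the detection:
hit = detect_console_login_without_mfa(
    {"event_name": "ConsoleLogin", "outcome": "success", "mfa_used": False}
)
miss = detect_console_login_without_mfa(
    {"event_name": "ConsoleLogin", "outcome": "success", "mfa_used": True}
)
```

Because the rule is just a function, the remaining workflow steps — commit, review, staged deploy, tune — fall out of ordinary software engineering practice.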

You can see that the detection engineering CI/CD workflow is not so much about treating detections as code but about improving detection engineering to be an authentic engineering practice; one that is built on modern software development principles.

The concept of detection-as-code grew out of security’s need for automated, systematic, repeatable, predictable, and shareable approaches. It is essential because threat detection was not previously fully developed as a systematic discipline with effective automation and predictably good results.

Threat detection programs that are precisely adjusted for particular environments and systems have the most potent effect. By treating detections as well-written code that can be tested, checked into source control, and code-reviewed by peers, security teams can produce higher-quality alerts that reduce burnout and quickly flag questionable activity.

What are the benefits of detection-as-code?

The benefits of detection-as-code include the ability to:

  1. Build custom, flexible detections using a programming language
  2. Adopt a Test-Driven Development (TDD) approach
  3. Incorporate with version control systems
  4. Automate workflows
  5. Reuse code

Writing detections in a universally recognized, flexible, and expressive language like Python offers several advantages. Instead of using domain-specific languages with too many limitations, you can write more custom and complex detections to fit the precise needs of your enterprise. Detections written in such a language are also often more readable and easier to understand, which becomes crucial as complexity increases.

An additional benefit of using an expressive language is the ability to use a rich set of built-in or third-party libraries, developed by or familiar to security practitioners, for communicating with APIs, which improves the effectiveness of the detection.
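For instance, a detection can lean on Python’s standard ipaddress library, the kind of built-in support that domain-specific rule languages often lack. The network ranges and port below are illustrative assumptions:

```python
# Sketch: a detection using Python's standard ipaddress library to flag
# RDP connections originating outside internal ranges.
# The internal networks and port are illustrative assumptions.

import ipaddress

INTERNAL_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def rdp_from_external(src_ip, dst_port):
    """Flag RDP (port 3389) connections from outside internal ranges."""
    addr = ipaddress.ip_address(src_ip)
    internal = any(addr in net for net in INTERNAL_NETS)
    return dst_port == 3389 and not internal

external_hit = rdp_from_external("203.0.113.5", 3389)
internal_ok = rdp_from_external("10.1.2.3", 3389)
```

Expressing the same membership logic in a constrained query language typically means hand-maintaining CIDR math that the standard library already gets right.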

Quality assurance for detection code can illuminate detection blind spots, test for false positives, and promote detection efficacy. A TDD approach enables security teams to anticipate an attacker’s approach, document what they learn, and create a library of insights into the attacker’s strategy.

Over and above code correctness, a TDD approach improves the quality of detection code and enables more modular, extensible, and flexible detections. Engineers can easily modify their code without fear of breaking alerts or weakening security.

When writing or modifying detections, version control allows practitioners to revert to previous states swiftly. It also confirms that security teams are using the most updated detection. Additionally, version control can provide needed context for specific detections that trigger an alert or help identify changes in detections.

Over time, detections must change as new or additional data enters the system. Change control is an essential process to help teams adjust detections as needed. An effective change control process will also ensure that all changes are documented and reviewed.

Security teams that have been waiting to shift security left will benefit from a CI/CD pipeline. Starting security operations earlier in the delivery process helps to achieve these two goals:

  • Eliminate silos between teams that work together on a shared platform and code-review each other’s work.
  • Provide automated testing and delivery systems for your security detections. Security teams remain agile by focusing on building precision detections.

Finally, DaC promotes code reusability across broad sets of detections. As security detection engineers write detections over time, they start to identify patterns as they emerge. Engineers can reuse existing code to meet similar needs across different detections without starting completely over.

Reusability is an essential part of detection engineering that allows teams to share functions across different detections or change and adjust detections for particular use-cases.
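A tiny sketch of such reuse: one shared helper feeding two different detections. The account-naming convention and event fields are hypothetical:

```python
# Sketch: a shared helper reused by two detections, illustrating code
# reuse across a detection library. The "admin-" naming convention and
# event field names are illustrative assumptions.

def is_privileged(user):
    """Shared helper: a naive privileged-account convention."""
    return user.startswith("admin-") or user == "root"

def detect_privileged_failed_login(event):
    return event["action"] == "login_failed" and is_privileged(event["user"])

def detect_privileged_key_created(event):
    return event["action"] == "access_key_created" and is_privileged(event["user"])

a = detect_privileged_failed_login({"action": "login_failed", "user": "admin-ops"})
b = detect_privileged_key_created({"action": "access_key_created", "user": "jdoe"})
```

When the definition of “privileged” changes, it changes in one place and every detection that imports the helper picks it up — the same payoff code reuse delivers in any software project.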

 

How to Become an IT Industry Thought Leader

Many IT executives — and prospective executives — would like to share their knowledge and insights as industry thought leaders. Here’s a look at what you need to know to get started.

An IT industry thought leader or influencer is someone who uses their expertise and perspective to offer specialized guidance, inspire innovation, and motivate followers to business success. A thought leader’s followers can include colleagues, business partners, on-site and virtual conference audiences, and website and book readers, as well as social media followers.

Getting started as an IT thought leader requires industry experience, a winning personality, lessons to teach, and an eager audience. The most successful thought leaders address the two major needs of IT and business executives: making money and saving money, said Juan Orlandini, chief architect at Insight Enterprises, an IT systems and services provider.

Leaders, Not Followers

IT thought leaders don’t follow the crowd. “You have to be able to set a path that’s your own — a path that’s informed by the prevailing wisdom, not driven by it,” Orlandini said. “IT thought leaders, regardless of what level they’re at, also need to remain current and relevant with the changing landscape,” he added.

Beyond deep technical and/or business knowledge, becoming an IT thought leader requires a significant amount of self-reflection. Points to consider include motivations, such as career advancement, enterprise recognition, increasing product or services sales, building close ties with business partners, and perhaps even the desire to help improve and advance the IT community. “Understanding those motivations will help you plan your approach,” said Jeff Ton, a strategic IT advisor to IT solutions provider InterVision.

Gaining widespread recognition requires the ability to communicate ideas through multiple channels. “Critically assess your ability to write, to speak in front of audiences, to be the subject of an interview,” Ton advised. Getting started doesn’t require perfection, but if there are any glaring weaknesses, it’s important to address them. “For example, if the thought of public speaking makes your palms sweaty, find low-risk opportunities to speak to groups,” he suggested. If you need to hone your writing skills, seek out guest blog opportunities. “If conversation is more your thing, identify tech-related podcasts and propose being a guest on the program,” Ton recommended.

Ask your enterprise’s marketing department to help you get your thought leadership career off on the right foot. “Marketing can help you find opportunities to amplify your voice,” Ton said. “They can also help you with editing, fine-tuning your message, graphics, social media, and much more.”

Personal and Career Benefits

Most thought leaders launch their quest with the goal of enhancing their careers. “Within your company, other executives will gain an understanding of the way you think about your role, the business, and the industry,” Ton said. “They will begin to see you as more than just the ‘IT person’,” he noted.

Becoming a thought leader creates an instant credibility that can be used to build strong connections to C-suite executives, said Ari Lightman, a professor of digital media and marketing at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. It also creates a sense of pride in the IT department that there’s a leader who can help other in-house strategic thinkers work through challenging issues, he explained.

As they raise their industry profile, thought leaders are frequently targeted by enterprises searching for a new CIO. “They may think of reaching out to you prospectively because they know your name and know your reputation,” said Rich Temple, vice president and CIO at the Deborah Heart and Lung Center. “It becomes that much easier for potential employers to see your body of work and get to know you,” he noted. “That can be a real differentiator for you in a competitive job search.”