Is Your Organization Reaping the Rewards or Simply Ticking a Box?

 

In today’s interconnected world, third parties play an important role in your organization’s success, but they can also be its weakest link when it comes to risk management.

According to Gartner, 60% of organizations are now working with more than 1,000 third parties. Despite the added complexities, these relationships are critical to business success – delivering affordable, responsive and scalable solutions that can help organizations grow and adapt according to the needs of their customers. But as reliance on third parties grows, so too does the exposure to additional risk.

If we are going to reap the rewards of third-party relationships, then we must also identify, manage and mitigate the risks. A rigorous third-party risk management (TPRM) program is key to achieving just that, which means effective third-party oversight is more important than ever. So how can you ensure that your TPRM processes are ready to face the challenges of our ever-evolving commercial landscape, and what practical steps can be taken to improve them?

Third-party risk is more than a checkbox exercise

Often, organizations start thinking about TPRM as a result of compliance drivers. They are facing a wide range of regulatory requirements around data privacy, information security, and cloud hosting. Whatever the motivation, all too often we see this kind of activity treated as little more than checking a box.

Reducing this kind of risk management to an exercise in compliance doesn’t ensure that you address the root causes and underlying risks. In fact, by viewing TPRM as merely a set of minimum requirements, it’s easy to overlook potential risks that could become issues for your organization. This is particularly true when vendors are viewed in isolation, which can mean that activities aren’t standardized and aligned across the entire organization, creating additional unforeseen risks across your vendor base.

Instead, your organization should take a holistic approach. Integrating TPRM with your wider Governance, Risk and Compliance (GRC) program can have huge benefits. By embedding your assessment program as part of your wider compliance landscape you won’t just be conducting a one-time vendor audit, you’ll be proactively assessing third-party risks and continuously improving operations, efficiencies and processes to enhance the security of every aspect of your supplier network. You will be able to pass information throughout the business, ensuring that risks are identified and treated on an ongoing basis.

Determining the scope of risk assessments is vital

Many organizations simply don’t have the resources to conduct assessments of all of their third-party providers at a granular level. So, your very first step should be to take an inventory of all of your third parties, considering who your vendors are and what business functions they support. Then, armed with this information, you can prioritize your analysis. There are three key considerations to provide a structure for your assessment.

Cost is a sensible starting point for most organizations, and often the easiest way to structure your assessment: by looking at the contractual value of each vendor, you can tier them accordingly. Another way of categorizing your vendors is by the type of risk they expose your organization to. Consider factors such as geography, technology, and financial risk, then rank those risks by how likely they are to occur. The most sophisticated approach is criticality, which ranks each vendor by assessing which of your critical assets, systems and processes they impact, and what the repercussions of those risks would be to your organization.
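As a rough illustration, the cost-based tiering described above can be sketched in a few lines of Python. The thresholds, tier names and vendor records here are hypothetical assumptions, not prescribed values:

```python
# Illustrative sketch: tiering vendors by annual contract value.
# Thresholds and vendor data are made-up examples, not recommendations.

def tier_by_cost(annual_contract_value):
    """Assign a review tier based on contract value (illustrative thresholds)."""
    if annual_contract_value >= 1_000_000:
        return "Tier 1 - full assessment"
    if annual_contract_value >= 100_000:
        return "Tier 2 - standard questionnaire"
    return "Tier 3 - lightweight review"

vendors = [
    {"name": "Cloud host", "annual_contract_value": 2_500_000},
    {"name": "Payroll provider", "annual_contract_value": 250_000},
    {"name": "Office supplies", "annual_contract_value": 12_000},
]

for v in vendors:
    print(v["name"], "->", tier_by_cost(v["annual_contract_value"]))
```

The same structure extends naturally to the risk-type and criticality approaches: swap the contract-value field for a risk score or a count of critical systems touched.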

Sometimes, there are other factors that might impact whether or not a third party is included within the scope of your assessment. You may find, for example, that your vendor will not allow you to assess them. That’s often the case if you’re working with big companies like Google, Amazon or Microsoft, who may well be critical to your business success, but who are unlikely to give you bespoke information for your audit.

Alternatively, external factors might dictate the scope of your assessment. Whether it’s a global pandemic like COVID-19 or a major geopolitical event such as the Russia-Ukraine war, organizations will often conduct tactical assessments in order to analyze the impact of their expanded risk profiles.

It’s about quality not quantity

When it comes to crafting your TPRM Question Sets, less is most definitely more. You may be tempted to put together hundreds of questions covering every topic under the sun. But is this going to give you the information you need? And, more importantly, is your busy vendor even going to answer all of your questions?

Another important consideration is to decide just how specific to make your Question Sets. Make them too generic and you may not be able to capture the data you need. But make them too specific to your business and your vendor is going to find it incredibly difficult to provide answers in the detail you are looking for.

At the end of the day, it’s a balancing act – one that means you should keep your Question Sets as targeted as possible. So, rather than sending 200 questions, send 20, but make sure they are well thought through to ensure that they gather the information you need for your risk program. This is where it might be helpful to leverage existing Question Sets such as SCF, SIG and Cyber Risk Institute. Whether they have been provided by consultants or they’re part of an industry standard, this approach will help to ensure you get the data you need.

Practical steps to improve third-party risk management

There are four key actions to consider when it comes to improving a TPRM program.

Understand what’s going well, and what’s not

Conduct a self-assessment of your organization’s TPRM capability and ask key questions such as: what are our strengths? Where are our weaknesses? It’s a good idea to ensure part or all of the assessment is carried out by an external party as they will deliver impartial feedback and highlight potential areas for improvement.

Understand your target state

If you have a vendor-first strategy that leads to a large amount of outsourcing, it’s crucial that you understand your target state for third-party due diligence. Have a roadmap that sets out realistic aims and objectives, and how you intend to achieve them. Trying to do too much too soon with your program can cause issues, slow its progression and prove counterproductive.

Build partnerships with vendors

Establishing a close relationship with critical vendors is central to the success of any TPRM program. Without partnerships, it becomes increasingly difficult to work toward common goals. Technology can help with monitoring and assessments, but having the ability to pick up the phone and openly discuss and address issues to mitigate any risks…[…] Read more »

 

How does encryption work? Examples and video walkthrough

Cryptography — the practice of taking a message or data and camouflaging its appearance in order to share it securely

What is cryptography used for?

It’s the stuff of spy stories, secret decoder rings and everyday business — taking data, converting it to another form for security, sharing it, and then converting it back so it’s usable. Infosec Skills author Mike Meyers provides an easy-to-understand walkthrough of cryptography.

Watch the complete walkthrough below.

Cyber Work listeners get free cybersecurity training resources. Click below to see free courses and other free materials.

Cryptography types and examples

Cryptography is the science of taking data and making it hidden in some way so that other people can’t see it and then bringing the data back. It’s not confined to espionage but is really a part of everyday life in digital filesharing, business transactions, texts and phone calls.

What is cryptography?

Simply put, cryptography is taking some kind of information and providing confidentiality to it (encrypting) so it can be shared with intended partners, and then returning it to its original form (decrypting) so that the intended audience can use that information. Cryptography is the process of making this happen.

What are obfuscation, diffusion and confusion?

(00:40) Obfuscation is taking something that looks like it makes sense and hiding it so that it does not make sense to the casual outside observer.

(00:56) One of the things we can do to obfuscate a message or image is diffusion, where we take an image and make it fuzzier, so the details are lost or blurred. Diffusion only allows us to make it less visible, less obvious.

(01:26) We can also use confusion, where we take that image, stir it up and make a mess out of it like a Picasso painting so that it would be difficult for somebody to simply observe the image and understand what it represents.

How a Caesar cipher works

(02:10) Cryptography has been around for a long, long time. In fact, probably one of the oldest types of cryptography that has ever been around is something called the Caesar cipher. If you’ve ever had or seen a “secret decoder ring” when you were young, you know how a Caesar cipher works.

Encrypting using a Caesar cipher

(02:40) I’ve made my own decoder ring right here. It’s basically a wheel with all the letters of the alphabet, A through Z, on the inside, and all of the letters of the alphabet, A through Z, on the outside. To start, you line them up: A to A, B to B, C to C.

(02:59) To make a secret code, you can rotate the inside wheel to change the letters from our original, plain text on the outside wheel. We call this substitution. We’re taking one value and substituting it for another. (03:20) Rotating the wheel two times is called ROT two; turning it three times would be ROT three. (03:37) So we can take, like the word ACE, A-C-E, and I can change ACE to CEG. Get the idea? So that’s the cornerstone of the Caesar cipher.

(04:00) As an example, our piece of plain text that we want to encrypt is, “We attack at dawn.” The first thing we’re going to do is get rid of all the spaces, so now it just says “weattackatdawn.” We’ll rotate our wheel five times — it’s ROT five. And now the encrypted “weattackatdawn” is “bjfyyfhpfyifbs.” (04:44) So we now have generated a classic Caesar cipher.
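To make the idea concrete, here’s a short Python sketch of the ROT-N substitution walked through above (the helper name is ours, not from the video):

```python
# A minimal Caesar (ROT-N) cipher, following the ROT-5 example above.

def caesar(text, rot):
    """Shift each letter forward by `rot` positions, wrapping past 'z'.
    Non-letters (like spaces) are dropped, as in the video."""
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('a') + rot) % 26 + ord('a')))
    return "".join(out)

print(caesar("we attack at dawn", 5))  # -> bjfyyfhpfyifbs
print(caesar("ace", 2))                # -> ceg
```

Decryption is just rotating the other way: shifting a ROT-5 cipher text by a further 21 positions (26 minus 5) recovers the original message.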

(04:49) Now there’s a problem with Caesar ciphers. Even though it is a substitution cipher, it’s too easy to predict what the text is because we’re used to looking at words.

How a Vigenere cipher works

(05:32) To make things more challenging, we can use a Vigenere cipher, which is really just a Caesar cipher with a little bit of extra confusion involved. For illustrative purposes, the Vigenere cipher is a table that shows all the possible Caesar ciphers there are. At the top, on Line 0 is the alphabet — from A to Z. On the far left-hand side, it says zero through 25. So these are all the possible ROT values you can have, from ROT zero, which means A equals A, B equals B, all the way down to ROT 25.

Encrypting using a Vigenere cipher and key

(6:17) Let’s start with a piece of plain text. Let’s use “we attack at dawn” one more time. This time, we’re going to apply a key. The key is simply a word that’s going to help us do this encryption. In this particular case, I’m going to use the word face, F-A-C-E.

(06:34) I’m going to put F-A-C-E above the first four letters of “we attack at dawn,” and then I’m going to just keep repeating that. And what we’ve done is we have applied a key to our plain text.

(06:58) Now we’re going to use the key to change the Caesar cipher ROT value for every single letter. So the first letter of the plain text is the W in “we” up at the top, and the key value is F, so let’s go down on the Y-axis until we get to an F. Right next to that F, you’ll see the number five. So this is ROT five.

(07:31) So all I need to do is find the intersection of these, and we get the letter B.

(07:39) The second letter in our plain text is the letter E from “we,” and in this particular case, the key value is A, which is kind of interesting, because that’s ROT zero, but that still works. So we start up at the top, find the letter E, then we find the A, and in this case, because it’s ROT zero, E is going to stay as E.

(08:00) Now, this time, it’s the A in attack. So we go up to the top. There’s the letter A, and the key value is C, as in Charlie. So we go down to the C that’s ROT two, and we then see that the letter A is now going to be C.

(08:19) Now, do the first T in attack. We come over to the Ts, and now the key value is E, as in FACE. So we go down here, that’s ROT four, we do the intersection, and now we’ve got an X. So the first four letters of our encrypted code are B, E, C, X.

Understanding algorithms and keys

(08:52) The beauty of the Vigenere is that it actually gives us all the pieces we need to create a classic piece of cryptography. We have an algorithm. The algorithm is the different types of Caesar ciphers and the rotations. And second, we have a key that allows us to make any type of changes we want within ROT zero to ROT 25 to be able to encrypt our values.

Any algorithm out there will use a key in today’s world. So when we’re talking about cryptography today, we’re always going to be talking about algorithms and keys.

(09:31) The problem with the Vigenere is that it’s surprisingly crackable. It works great for letters of the alphabet, but it’s terrible for encrypting pictures or SQL databases or your credit card information.

(09:53) In the computer world, everything is binary. Everything is ones and zeros. We need to come up with algorithms that encrypt and decrypt long strings of just ones and zeros.

(10:11) While long strings of ones and zeros may look like nothing to a human being, computers recognize them. They could be a Microsoft Word document, a voice-over-IP conversation, or a database stored on a hard drive.

How to encrypt binary data

(10:37) We need to come up with algorithms which, unlike Caesars or Vigeneres, will work with binary data.

(10:45) There are a lot of different ways to do this. We can do this using a very interesting type of binary calculation called “exclusive OR.”

(11:08) For our first encryption, I’m going to encrypt my name, and we have to convert this to the binary equivalents of the text values that a computer would use. Anybody who’s ever looked at ASCII code or Unicode should be aware that we can convert these into binary.

Exclusive OR (XOR) encryption example

(11:38) So here’s M-I-K-E converted into binary. Now notice that each character takes eight binary digits. So we got 32 bits of data that we need to encrypt. So that’s our clear text. Now, in order to do this, we’re going to need two things.

(11:58) First, we need an algorithm and then we’re going to need a key.

(12:09) Now our algorithm is extremely simple, using what we call an exclusive OR (XOR) and what we call a truth table. For this illustration, we’ll use a five-bit key. In the real world, keys can be thousands of bytes long.

(12:41) So, to make this work, let’s start placing the key. I’m going to put the key over the first five bits, here at the letter M for Mike, and now we can look at this table, and we can start doing the conversion. So let’s convert those first two values, then the next, then the next, then the next.

(12:58) Now, we’ve converted a whole key’s worth, but in order to keep going, all we have to do is schlep that key right back up there and extend the key all the way out and just keep repeating it to the end. It doesn’t quite line up, so we add whatever amount of key is needed to fill up the rest of this line.

(13:28)  Using the Exclusive OR algorithm, we then create our cipher text.

(13:44) Notice that we have an algorithm that is extremely simplistic. We have a key, which is very, very simple and short, but we now have an absolutely perfect example of binary encryption.

(13:58) To decrypt this, we’d simply reverse the process. We would take the cipher text, place the key up to it, and then basically run the algorithm backward. And then we would have the decrypted data.
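The whole scheme fits in a few lines of Python. This is only a sketch of the process described above; the five-bit key value below is an arbitrary illustration, not the one used in the video:

```python
# XOR encryption as described: repeat the key across the plain-text bits,
# XOR to encrypt, then XOR again with the same key to decrypt.

def xor_bits(data_bits, key_bits):
    """XOR a bit string against a repeating key (both '0'/'1' strings)."""
    return "".join(
        str(int(b) ^ int(key_bits[i % len(key_bits)]))
        for i, b in enumerate(data_bits)
    )

plain = "".join(f"{ord(c):08b}" for c in "MIKE")  # 4 characters -> 32 bits
key = "10110"                                     # hypothetical five-bit key

cipher = xor_bits(plain, key)
assert xor_bits(cipher, key) == plain             # XOR-ing twice restores it
```

The decryption step really is the same operation run again, because XOR-ing a bit with the same key bit twice returns the original bit.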

What is Kerckhoffs’s principle?

Having the algorithm and a key makes cryptography successful. But which is more important, the algorithm or the key?
(14:30) In the 19th century, Dutch-born cryptographer Auguste Kerckhoffs said a system should be secure, even if everything about the system, except the key, is public knowledge. This is really important. Today’s super-encryption tools that we use to protect you on the internet are all open standards. Everybody knows how the algorithms work….[…] Read more »….

 

API Security 101: The Ultimate Guide

APIs, application programming interfaces, are driving forces in modern application development because they enable applications and services to communicate with each other. APIs provide a variety of functions that enable developers to more easily build applications that can share and extract data.

Companies are rapidly adopting APIs to improve platform integration, connectivity, and efficiency and to enable digital innovation projects. Research shows that the average number of APIs per company increased by 221% in 2021.

Unfortunately, over the last few years, API attacks have increased massively, and security concerns continue to impede innovations.

What’s worse, according to Gartner, API attacks will keep growing. They’ve already emerged as the most common type of attack in 2022. Therefore, it’s important to adopt security measures that will keep your APIs safe.

What is an API attack?

An API attack is malicious usage or manipulation of an API. In API attacks, cybercriminals look for business logic gaps they can exploit to access personal data, take over accounts, or perform other fraudulent activities.

What Is API security and why is it important?

API security is a set of strategies and procedures aimed at protecting an organization against API vulnerabilities and attacks.

APIs process and transfer sensitive data and other organizations’ critical assets. And they are now a primary target for attackers, hence the recent increase in the number of API attacks.

That’s why an effective API security strategy is a critical part of the application development lifecycle. It is the only way organizations running APIs can ensure those data conduits are secure and trustworthy.

A secure API improves the integrity of data by ensuring the content is not tampered with and is available only to users, applications, and servers that have proper authentication and authorization to access it. API security techniques also help mitigate API vulnerabilities that attackers can exploit.

When is the API vulnerable?

Your API is vulnerable if:

  • The API host’s purpose is unclear, and you can’t tell which version is running, what data is collected and processed, or who should have access (for example, the general public, internal employees, and partners)
  • There is no documentation, or the documentation that exists is outdated.
  • Older API versions are still in use, and they haven’t been patched.
  • Integrated services inventory is either missing or outdated.
  • The API contains a business logic flaw that lets bad actors access accounts or data they shouldn’t be able to reach.

What are some common API attacks?

API attacks differ markedly from other cyberattacks and are harder to spot. That’s why you need to understand the most common API attacks, how they work and how to prevent them.

BOLA attack

This most common form of attack, broken object level authorization (BOLA), happens when a bad actor changes parameters across a sequence of API calls to request data that person is not authorized to have. For example, nefarious users might authenticate using one UserID and then enumerate UserIDs in subsequent API calls to pull back account information they’re not entitled to access.

Preventive measures:

Look for API tracking that can retain information over time about what different users in the system are doing. BOLA attacks can be very “low and slow,” drawn out over days or weeks, so you need API tracking that can store large amounts of data and apply AI to detect attack patterns in near real time.
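The underlying fix is for every endpoint to verify ownership of the requested object rather than trusting the ID in the request. A minimal sketch of that check, with an entirely hypothetical data store and handler:

```python
# Hypothetical sketch of an object-level authorization check: the server
# confirms the authenticated user owns the requested record instead of
# trusting whatever ID the caller supplies.

ACCOUNTS = {  # illustrative data store: account_id -> owning user
    "acct-100": "alice",
    "acct-101": "bob",
}

def get_account(authenticated_user, account_id):
    owner = ACCOUNTS.get(account_id)
    if owner is None or owner != authenticated_user:
        # Same response for "missing" and "not yours" blunts enumeration.
        return {"status": 403, "error": "not authorized"}
    return {"status": 200, "account": account_id}

print(get_account("alice", "acct-100"))  # alice can read her own account
print(get_account("alice", "acct-101"))  # refused, even though she's logged in
```

Tracking tools catch the attack pattern after the fact; checks like this stop each individual unauthorized call.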

Improper assets management attack

This type of attack happens if there are undocumented APIs running (“shadow APIs”) or older APIs that were developed, used, and then forgotten without being removed or replaced with newer, more secure versions (“zombie APIs”). Undocumented APIs present a risk because they’re running outside the processes and tooling meant to manage APIs, such as API gateways. You can’t protect what you don’t know about, so you need your inventory to be complete, even when developers have left something undocumented. Older APIs are unpatched and often use older libraries. They are also undocumented and can remain undetected for a long time.

Preventive measures:

Set up a proper inventory management system that includes all the API endpoints, their versions, uses, and the environment and networks they are reachable on.

Always check to ensure that the API needs to be in production in the first place, that it’s not an outdated version, that no sensitive data is exposed, and that data flows as expected throughout the application.

Insufficient logging & monitoring

API logs contain personal information that attackers can exploit. Logging and monitoring functions provide security teams with raw data to establish the usual user behavior patterns. When an attack happens, the threat can be easily detected by identifying unusual patterns.

Insufficient monitoring and logging results in untraceable user behavior patterns, thereby allowing threat actors to compromise the system and stay undetected for a long time.

Preventive measures:

Always have a consistent logging and monitoring plan so you have enough data to use as a baseline for normal behavior. That way you can quickly detect attacks and respond to incidents in real time. Also, ensure that any data that goes into the logs is monitored and sanitized.
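Sanitizing log data can be as simple as redacting sensitive fields before an event is written out, so the log itself never leaks personal data. A hypothetical sketch (the field names are illustrative assumptions):

```python
# Illustrative sketch: redact sensitive fields before an event reaches the logs.

SENSITIVE_FIELDS = {"password", "ssn", "card_number"}  # assumed field names

def sanitize(event):
    """Return a copy of the event with sensitive values masked."""
    return {
        k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
        for k, v in event.items()
    }

event = {"user": "alice", "action": "login", "password": "hunter2"}
print(sanitize(event))  # the password value is replaced with [REDACTED]
```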

What are API security best practices?

Here’s a list of API best practices to help you improve your API security strategy:

  • Train employees and security teams on the nature of API attacks. Lack of knowledge and expertise is the biggest obstacle in API security. Your security team needs to understand how cybercriminals propagate API attacks and different call/response pairs so they can better harden APIs. Use the OWASP API Top 10 list as a starting point for your education efforts.
  • Adopt an effective API security strategy throughout the lifecycle of the APIs.
  • Turn on logging and monitoring and use the data to detect patterns of malicious activities and stop them in real-time.
  • Reduce the risk of sensitive data being exposed. Ensure that APIs return only as much data as is required to complete their task. In addition, implement data filtering, data access limits, and monitoring.
  • Document and manage your APIs so you’re aware of all the existing APIs in your organization and how they are built and integrated to secure and manage them effectively.
  • Have a retirement plan for old APIs and remove or patch those that are no longer in use.
  • Invest in software specifically designed for detecting API call manipulations. Traditional solutions cannot detect the subtle probing associated with API reconnaissance and attack traffic….[…] Read more »… 

 

Graph Databases: What They Are and How to Get Started

For the past few years, graph databases have been pushed by vendors and pundits as a better way to scale database access and manage data. Enterprise IT was initially slow to move to graph databases, but now momentum is picking up.

MarketsandMarkets research predicts that graph database software sales will grow from $1.9 billion in 2021 to $5.1 billion by 2026. Emergen research projects that by 2030, the global graph database software market will be at $11.25 billion.

Graph databases began as a concept in the 1960s, when limitations in hierarchical databases like IBM’s IMS were circumvented with the help of what was then known as virtual records. However, it wasn’t until the 2010s that graph databases began to be noticed by companies.

Indeed, graph databases can surpass the performance of relational SQL databases when it comes to processing large troves of data from disparate sources and systems.

Current use cases seem to confirm this.

In the financial sector, a graph database supports complex analytics by connecting many different data points that give companies insights into how, when, and where fraudulent activity begins to emerge. With the help of graph databases, companies can also see links between fraudulent activity and credit cards, addresses and transactions. Being able to detect and intercept fraud before it manifests is huge. In 2021, US consumers lost over $5.8 billion to fraud.

In aerospace, Lockheed Martin Space is using graph databases to manage its large supply chain. “Think about the lifecycle of how a product is created,” said Tobin Thomas, CDAO at Lockheed Martin Space, in a Business of Data report. “[We’re] using technologies like graphs to connect the relationships together, so we can see the lifecycle based on particular parts or components and the relationships between every element.”

In healthcare, graph databases can connect a diversity of data points to observe how patients move from providers to specialists throughout the healthcare systems, and to better understand rates of disease occurrence and causative factors.

In a word, any organization faced with analyzing a large spectrum of data points, with many of them seemingly unrelated, will benefit from using a graph database.

What Is a Graph Database?

A graph database is so named because it follows the point-to-point structure of a graph. The database stores items as a collection of nodes and edges, with the edges representing the relationships between the nodes. These nodal relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation.

Graph databases use NoSQL query approaches, which can be an adjustment for IT teams whose staff usually have SQL skills. But the power of a graph database to discover and link thousands of different data relationships for analytics and insights makes it an ideal fit for analysis of web, social media, and unstructured data. The point-to-point, non-columnar structure of a graph database makes it faster and more agile than its relational SQL database counterpart.
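The node-and-edge model is easy to picture with a toy example. This is not how a production graph database stores data internally, just a minimal Python sketch of the concept, using made-up fraud-detection data in the spirit of the use case above:

```python
# Minimal sketch of the node-and-edge model: labeled edges between nodes,
# and a one-hop lookup that follows relationships directly instead of
# joining tables.

edges = [  # (from_node, relationship, to_node) - illustrative data
    ("card-42", "USED_AT", "merchant-7"),
    ("card-42", "BELONGS_TO", "customer-1"),
    ("card-99", "USED_AT", "merchant-7"),
]

def neighbors(node, relationship):
    """Return nodes reached from `node` via edges with the given label."""
    return [dst for src, rel, dst in edges if src == node and rel == relationship]

print(neighbors("card-42", "USED_AT"))  # -> ['merchant-7']
```

Real graph databases index these relationships so that traversals like this stay fast even across billions of edges, which is where the performance advantage over join-heavy relational queries comes from.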

Getting Started With Graph Databases

Despite rosy market forecasts, only 12.7% of company respondents in a 2019 DATAVERSITY survey said they were using graph databases, and only one quarter of survey respondents said they were planning to use graph databases in the future.

One barrier to use has been understanding.

IT has well-honed skills in relational databases and understands the place of relational databases in the overall database landscape, but there is still fuzziness about how graph databases differ from relational databases and how graph databases can be used for advantage.

Given these gaps in knowledge and experience, what can IT do now to ensure that it doesn’t miss out on what could become a powerful analytics platform? Here are some ideas:

1. Find a use case

In criminal forensics, where it is important to connect many different data points (some of them seeming unrelated) to create a picture of a suspect, graph databases are useful. The same goes for a medical application that seeks to understand the origin of an illness, and why it affects some people but not others.

In both cases, large amounts of data need to be analyzed. This is where graph databases shine, and where relational databases have their limits.

2. Start with one project

The first business use case should be tightly defined and projectized. This gives your staff a chance to learn and to experiment with graph database technology. It also enables staff to define a methodology for working with graph databases. The DBA can begin thinking about how graph databases should fit in overall data architecture.

3. Look for a strategic vendor or consultant partner

Graph database expertise is available..[…] Read more »…..

 

Finding the right MSSP for securing your business and training employees

Over the past year, small businesses have had to navigate the pandemic’s many challenges — from changes in business models and supply shortages to hiring and retaining employees. On top of these pandemic-driven challenges, SMBs also faced a growing business risk: cybersecurity incidents.

Cybercriminals often target SMBs due to the limited security resources and training that leave these businesses vulnerable. A Verizon study found 61% of all SMBs reported at least one cyberattack during 2020, with 93% of small business attacks focused on monetary gain. Unfortunately, many SMBs are forced to close after an incident due to the high costs incurred during a cyberattack.

Cybersecurity is no longer just “nice to have” for SMBs, but many business owners don’t know where to start. And while measures like a VPN or antivirus system can help, they aren’t enough by themselves. Managed security service providers (MSSPs) are a valuable resource for SMBs, allowing them to bring in the expertise needed to secure infrastructure that they might not be able to afford in this highly competitive labor market.

When looking for an MSSP, hundreds of options often leave businesses overwhelmed. To learn more about the value MSSPs should and can bring to the table, I spoke with Frank Rauch and Shay Solomon at Check Point Software Technologies.

Koziol: What should small and medium business owners look for when selecting a cybersecurity MSSP? What are the must-haves and the nice-to-haves?

Rauch: We are living in a time where businesses, SMBs especially, cannot afford to leave their security to chance. SMBs are a prime target for cybercriminals, as SMBs inherently struggle with the expertise, resources and IT budget needed to protect against today’s sophisticated cyberattacks. We are now experiencing the fifth generation of cyberattacks: large-scale, multi-vector, mega attacks targeting businesses, individuals and countries. SMBs should be looking for a true leader in cybersecurity. They should partner with an MSSP that can cover all customer sizes and all use cases. To make it easy, we can focus on three key areas:

  1. Security. The best MSSPs have security solutions that are validated by renowned third parties. They should prove their threat prevention capabilities and leverage a vast threat intelligence database that can help prevent threats at a moment’s notice.
  2. Capabilities. MSSPs should be offering a broad set of solutions, no matter the size—from large enterprises to small businesses, data centers, mobile, cloud, SD-WAN protection, all the way to IoT security. Having this broad range of expertise will ensure that your MSSP is ready to cover your business in all instances.
  3. Individualized. This may be one of the most critical areas. Your MSSP should be offering flexible growth-based financial models and provide service and support 24/7 with real-time prevention. Collaborative business processes and principles will ensure success and security in the long run.

Koziol: How can SMBs measure the value of bringing in an MSSP? Or, the risks of inaction?

Rauch: The biggest tell-tale sign of a match made in heaven is if you’re receiving your security needs through one single vendor. If not, those options are out there! Getting the best security through one experienced, leading vendor can reduce costs, simplify support and ensure consistency across all products. This ranges from simply protecting your sensitive data all the way to ensuring you can secure the business through a centralized security management platform. How can you protect what you can’t see?

It makes sense to keep an eye on how many cybersecurity attacks you’re preventing each month. How long is it taking you to create, change and manage your policies? Are you scaling to your liking? Can you adapt on the fly if need be? Are your connected devices secure? These are just some examples that you should be able to measure with simplicity.
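The measurements suggested above are easy to track programmatically. A minimal sketch in Python follows; the metric names, fields and thresholds are illustrative assumptions, not part of any MSSP offering:

```python
from dataclasses import dataclass

@dataclass
class MonthlySecurityMetrics:
    """Illustrative month-over-month metrics an SMB might review with its MSSP."""
    attacks_prevented: int          # attacks blocked this month
    avg_policy_change_hours: float  # time to create/change/manage a policy
    devices_managed: int            # connected devices under management
    devices_secured: int            # of those, devices meeting the security baseline

    def device_coverage(self) -> float:
        """Fraction of connected devices that are secured (0.0 to 1.0)."""
        if self.devices_managed == 0:
            return 0.0
        return self.devices_secured / self.devices_managed

def month_over_month_trend(prev: MonthlySecurityMetrics,
                           curr: MonthlySecurityMetrics) -> dict:
    """Compare two months and flag regressions worth raising with the MSSP."""
    return {
        "attacks_prevented_delta": curr.attacks_prevented - prev.attacks_prevented,
        "policy_change_faster": curr.avg_policy_change_hours < prev.avg_policy_change_hours,
        "coverage_improved": curr.device_coverage() >= prev.device_coverage(),
    }
```

Even a simple record like this turns "are you scaling to your liking?" into a concrete monthly conversation rather than a gut feeling.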

Koziol: How has the shift in remote/hybrid workforce changed how cybersecurity MSSPs support SMBs?

Rauch: The shift to widespread work-from-home has pushed attackers to target users outside the traditional corporate network. It is more important now than ever for MSSPs to be providing their SMBs with a complete portfolio (endpoint, mobile, cloud, email and office) that allows them to connect reliably, scale rapidly and stay protected, no matter the environment.

The best MSSPs should have been ready for this day. At any moment, day or night, your organization can be victimized by devastating cybercrime. You can’t predict when cyberattacks will happen, but you can use proactive practices and security services to quickly mitigate their effects or prevent them altogether. The shift to a hybrid workforce exposed the holes in the existing security infrastructure.

On the bright side, security incidents present an opportunity to comprehensively reevaluate and improve information security programs. They show threat vectors that we previously overlooked and raise awareness across the organization to enhance existing or implement new controls. So at the very least, this shift has been an eye-opener for MSSPs.

Koziol: Should MSSPs offer security awareness and training as part of their offering? Why?

Solomon: Absolutely, yes. At the end of the day, knowledge is power. Cyberattacks are evolving and training can help keep SMB employees protected and educated. According to a study from VIPRE, 47% of SMB leaders reported keeping data secure as their top concern. At the same time, many SMBs lack sufficient skills and capacity to drive improved security on their own.

The only way to fight cybercrime effectively is by sharing experiences and knowledge. Due to the cybersecurity skills shortage, Check Point Software, along with 200 global training partners, recently announced a free cybersecurity training program called Check Point Mind. It offers many training and cybersecurity awareness programs to give SMBs (or any business) the chance to extend their skills through comprehensive courses led by world-class professionals.

Koziol: How can working with an MSSP on security awareness education improve a business’s overall security posture?

Solomon: Raising awareness with employees is a crucial step that’s often overlooked. Employees need to be able to identify a phishing attempt and know how to react. In our experience, the majority of employees are attacked via email: they receive a message that looks like an official email from someone with authority, asking them to open attachments or click on malicious links.

If employees go through a training course that teaches them what to look for in an attack, this will surely reduce the chance of that employee falling victim to the phishing attempt.

Koziol: What questions should SMBs be asking their current or future MSSPs about cybersecurity?

Solomon: Building on what was mentioned earlier, it is never too late to reevaluate and improve information security programs. Asking questions and investing in a better security posture shows us threat vectors that we previously might have overlooked and raises awareness across the organization to the need to improve existing or implement new controls. SMBs must proactively approach their MSSPs to ensure they are getting the best bang for their buck—security solutions that require minimal configuration and simple onboarding. In addition, they need to ensure they are taking the proper steps when evaluating security architecture, advanced threat prevention, endpoint, mobile, cloud, email and office.

Koziol: What’s ahead for MSSPs in the cybersecurity space? What should SMB owners expect to see next?

Rauch: One of the key areas we’ll see continuously growing is the need for a next-generation cybersecurity solution that enables organizations to proactively protect themselves against cyberthreats: incident detection and response management. As attacks continue to evolve and grow in numbers, unified visibility is a must-have across multiple vectors that a cyberthreat actor could use to attack a network.

A common challenge we see is an overwhelming volume of security data generated by an array of stand-alone point security solutions. What’s needed is a single dashboard, or, in other words, unified visibility, that enables a lean security team to maximize their efficiency and effectiveness. SMBs should take the opportunity to reassess their security investments. The highest level of visibility, reached through consolidation, will deliver the best effectiveness.

 

Data: The future of quantifying risk

The world is perpetually moving onwards and upwards with cloud adoption.

This phenomenon is no longer surprising or in-and-of-itself noteworthy. In fact, according to recent research, 92% of global enterprises used public clouds in 2021. While there will always be a few inevitable holdouts, soon, nearly all organizations will embrace the cloud in some form or another.

But amidst this shift, there are the ever-growing corporate risks associated with reliance on cloud technology. December 2021’s repeated AWS outages serve as a stark reminder that, despite tremendous benefits, cloud dependence can be a double-edged sword for many enterprise organizations.

Mission-critical issues, such as the need to minimize reliance on concentrated platforms, the necessity to avoid outages, data exposure prevention and more, have now moved the issue from IT manager and developer discussions to full C-suite level priorities, with the goal of removing and reducing risk wherever possible.

Risk is inevitable

Of course, all organizations have some corporate risk — there’s just no way around it. Truth be told, the only way to prevent modern risk altogether would be to go back to the Stone Age and miss out on the huge benefits that come with advanced technology; and even then, companies might still wind up exposed to other types of business risks. In the modern cloud and Software as a Service (SaaS)-based ecosystem, however, corporate risk is clearly something that not only has to be accepted, but properly managed as well.

But this undertaking of trying to decipher and then manage risk has proven to be a challenge. The risk management community continually struggles to build generic models that adequately address these issues, especially while balancing the need to justify risks to business stakeholders. Leadership wants to understand these risks in terms of dollars and cents rather than technical jargon or qualitative input.

For sustained success, security leaders must get a clear view of the risks their companies face, understand how to measure them, invest in them properly, and, when required, defend against them on an ongoing basis.

To this end, in 2017, Gartner coined the term Integrated Risk Management (IRM), which delineates a way to look at and address risk management across the organization to make better, more informed decisions for optimized results. With parameters to address risk identification, assessment, response, communication and monitoring, IRM creates an achievable pathway for this.

In theory, that is.

During the risk identification stage in the IRM model, the responsible party identifies the risk via assessments and/or meetings with stakeholders. The risks are then collected into a spreadsheet or other static legacy solution. They are then analyzed with existing IRM tools, which feed predefined formulas based on manual input from the risk manager in an attempt to try to prioritize those that are most pressing.

But what if companies could incorporate objective data — such as intelligence that has been pulled directly from sources — into the risk assessment? What if, instead of basing risk management on interviews, assessments and gut feelings — and then relegating that information to a static spreadsheet — it could be defined according to the underlying live data and used to make impactful, data-based decisions in real time?

The future of IRM lies in quantifying risk with live — and most importantly — objective data.

Data: The key to truly understanding risk

Instead of relying on inherently unreliable elements like spreadsheets, workflow GRC tools and one-on-one conversations, the use of normalized and structured data collected from all applications a company uses can provide a full, comprehensive picture of the risks the company actually faces. In place of feelings and potentially subjective assessments, data can express the true story behind the scenes and give companies a far more accurate observability tool with which to understand the corporate risks they must address, and then act on them in time. From there, companies can create a true risk matrix to prioritize what needs to be addressed first, and so on.
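As a sketch of such a risk matrix, risk items scored from continuously collected signals can be ranked worst-first. The 1-5 likelihood/impact scale and the sample risk names below are illustrative assumptions, not a standard:

```python
# Sketch: score risk items from continuously collected signals and rank them
# into a worst-first ordering, instead of relying on one-off survey answers.

def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact score; 1-5 each, so 25 is worst."""
    return likelihood * impact

def prioritize(risks: list) -> list:
    """Sort risk dicts so the highest-scoring (most urgent) come first."""
    return sorted(risks,
                  key=lambda r: risk_score(r["likelihood"], r["impact"]),
                  reverse=True)

# Example entries; in the data-driven model these values would be refreshed
# automatically from live telemetry rather than typed into a spreadsheet.
risks = [
    {"name": "Stale vendor access credentials", "likelihood": 4, "impact": 5},
    {"name": "Unencrypted backups", "likelihood": 2, "impact": 3},
    {"name": "Shadow SaaS usage", "likelihood": 5, "impact": 2},
]
```

The scoring formula is deliberately trivial; the point of the data-driven approach is that the inputs stay live, so the ranking stays honest.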

Risk professionals will say they already rely on real data gathered from the field during their last survey. In truth, that isn’t the same as data continuously and independently pulled directly from sources. Shifting to a true data-based IRM approach gives companies the ability to objectively view their risks and fully understand their risk posture.

 

Chase CIO Gill Haus Discusses Recruitment, Agile, and Automation

The world of banking and finance faces rapid, innovation-driven change, increasing the pressure to adapt to each new cycle in financial technology. As customers want more resources and guidance with their finances, institutions such as JPMorgan Chase must nimbly respond in a way that belies their large size.

Gill Haus, CIO of consumer and community banking (Chase) at JPMorgan Chase, spoke with InformationWeek about his institution’s approach to finding the right tech talent to meet demands for innovation, the growing importance of automation, and the personal directives he follows.

When looking at technology recruitment, what skillsets is Chase seeking, both to meet current needs and also for what may come next?

At the root of what we do, we are in the business of building complex features and services for our customers. We have about 58 million digitally active customers; they depend heavily on the services we provide. Technology is behind all those products and services we offer. We are looking for the quintessential engineers that have the background in Java, machine learning engineers, those that have mobile experience as well. We also have technologies that are in “heritage” — systems that we’ve had for many years and we’re looking for engineers that understand how to use those technologies. Not just to support them but to modernize them. The key of our practice is to make sure also that we have those engineers and talent in general that is adaptable … because the market is constantly changing.

Why this is important is not just so we can have talent come in and help us build great solutions; it is also a great opportunity for talent to grow themselves. We provide our employees opportunities to use those new technologies, whether it’s public cloud, private cloud, or machine learning, and to grow the breadth of their experiences, whether they’re working on mobile technologies, backend systems, or some other solution that touches millions and millions of customers. Whether someone is an entry-level software engineer or further along, we have programs like our software engineer program, where we bring in talent from universities and boot camps to do training. We offer things across the organization where our talent can contribute and learn with teams to build solutions, learn how to use other technologies, and become more adaptable.

Gill Haus, JPMorgan Chase

Are there particular technologies or methodologies that have come into play of late that Chase has wanted to adopt or look at?

We’ve made a large move to be an agile organization to organize around our products versus organizing around our businesses. The reason for that is we need to be able to build solutions quickly and those local teams — the product, technology, data, and design leaders — they’re more able to see what’s happening in the market, make decisions quickly, decide what to build or what service to provide, and make sure we’re applying that for our customer versus being organized in a way that makes it more difficult to operate.

The move to an agile work style is really key for us to compete.

The other [part] is the skills themselves. At our scale, machine learning absolutely. We have tons of data about our customers, on how customers are using our products. Customers ask us to provide them insights or guidance. If you go into our mobile app, we have something called Snapshot that tells you how you’re spending money compared to other people like you, ways you can save. Machine learning is the essence and power behind making that happen.

Mobile engineering is also incredibly important for us because more and more of our customers are moving to be digitally active in the mobile space. We want to be where our customers are.

What isn’t often talked about is a lot of our backend services, which is the main Java programming that we do, empowers all of this. From APIs to public cloud because when you deposit money, you’re using those rails. When you are executing machine learning models, you’re still using a lot of those rails.

While we are focused on a lot of the new, we’re also focused on modernizing the core that we have because that is so fundamental to the services we provide.

In terms of scouting tech talent, is there an emphasis on finding brand-new graduates of schools that offer the latest skills, or on retraining existing staff to make use of their institutional knowledge as well?

All the above. The purpose-driven culture we have is really a big factor for us. Money is at the center of people’s lives. If you can create a positive experience for customers in using their money, whether they are able to save more, to pay for something they didn’t expect, or prevent fraud for them, it provides an incredible positive benefit to that individual. That’s important. Many of the people joining, or already at the firm, want to have that positive impact.

One of our software engineering programs is called Tech Connect, which is how we get in software engineers who might not have come in through the traditional software engineering degrees. It’s a way for them to go through training here and find a role within the organization. We also have the software engineering program where we look at entry-level candidates coming in from colleges with computer science and other engineering degrees. For employees that we have here, we have programs like Power Up, which is at 20 JPMorgan Chase technology centers where over 17,000 employees meet on an annual basis. There they learn all different types of technologies, from machine learning, to data, to cloud. That allows us not only to train the people who are here, but it also makes the firm more compelling to join.

 

 

Top 15 cybersecurity predictions for 2022

Over the past several years, cybersecurity risk management has become top of mind for boards. And rightly so. Given the onslaught of ransomware attacks and data breaches that organizations experienced in recent years, board members have increasingly realized how vulnerable they are.

This year, in particular, the public was directly impacted by ransomware attacks, from gasoline shortages, to meat supply, and even worse, hospitals and patients that rely on life-saving systems. The attacks reflected the continued expansion of cyber-physical systems — all of which present new challenges for organizations and opportunities for threat actors to exploit.

There should be a shared sense of urgency about staying on top of the battle against cyberattacks. Security columnist and Vice President and Ambassador-At-Large in Cylance’s Office of Security & Trust, John McClurg, in his latest Cyber Tactics column, explained it best: “It’s up to everyone in the cybersecurity community to ensure smart, strong defenses are in place in the coming year to protect against those threats.”

As you build your strategic planning, priorities and roadmap for the year ahead, security and risk experts offer the following cybersecurity predictions for 2022.

Prediction #1: Increased Scrutiny on Software Supply Chain Security, by John Hellickson, Cyber Executive Advisor, Coalfire

“As part of the executive order to improve the nation’s cybersecurity previously mentioned, one area of focus is the need to enhance software supply chain security. There are many aspects included that most would consider industry best practice of a robust DevSecOps program, but one area that will see increased scrutiny is providing the purchaser, the government in this example, a software bill of materials. This would be a complete list of all software components leveraged within the software solution, along with where it comes from. The expectation is that everything that is used within or can affect your software, such as open source, is understood, versions tracked, scrutinized for security issues and risks, assessed for vulnerabilities, and monitored, just as you do with any in-house developed code. This will impact organizations that both consume and those that deliver software services. Considering this can be very manual and time-consuming, we could expect that Third-Party Risk Management teams will likely play a key role in developing programs to track and assess software supply chain security, especially considering they are usually the front line team who also receives inbound security questionnaires from their business partners.”
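The software bill of materials described above lends itself to mechanical checking. Here is a minimal sketch; the SBOM shape loosely follows a CycloneDX-style JSON component list, and the advisory entries below are invented for illustration:

```python
# Sketch: flag SBOM components that appear in known-vulnerable advisory data.
# SBOM shape loosely follows CycloneDX JSON ("components": [{"name", "version"}]).

def flag_vulnerable_components(sbom: dict, known_vulnerable: dict) -> list:
    """Return (name, version) pairs from the SBOM that match advisory data."""
    flagged = []
    for comp in sbom.get("components", []):
        if comp["version"] in known_vulnerable.get(comp["name"], set()):
            flagged.append((comp["name"], comp["version"]))
    return flagged

# Illustrative inputs: a two-component SBOM and a hand-written advisory map.
sbom = {"components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.28.0"},
]}
advisories = {"log4j-core": {"2.14.1", "2.15.0"}}
```

Real programs would pull advisories from a vulnerability feed rather than a hand-written dict, but the core operation, joining the component inventory against known-bad versions, is exactly this simple, which is why an SBOM is so valuable to the purchaser.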

 

Prediction #2: Security at the Edge Will Become Central, by Wendy Frank, Cyber 5G Leader, Deloitte

 

“As Internet of Things (IoT) devices proliferate, it’s key to build security into the design of new connected devices themselves, as well as the artificial intelligence (AI) and machine learning (ML) running on them (e.g., tinyML). Taking a cyber-aware approach will also be crucial as some organizations begin using 5G bandwidth, which will drive up both the number of IoT devices in the world and attack surface sizes for IoT device users and producers, as well as the myriad networks to which they connect and supply chains through which they move.”

 

Prediction #3: Boards of Directors will Drive the Need to Elevate the Chief Information Security Officer (CISO) Role, by Hellickson

 

“In 2021, there was much more media awareness and senior executive awareness about the impacts of large cyberattacks and ransomware that brought many organizations to their knees. These high-profile attacks have elevated the cybersecurity conversations in the boardroom across many different industries. This has reinforced the need for CISOs to be constantly on top of current threats while maintaining an agile but robust security strategy that also enables the business to achieve revenue and growth targets. Recent surveys show a shift in CISO reporting structures moving up the chain: out from underneath the CIO or the infrastructure team, which has been commonplace for many years, to reporting directly to the CEO. The ability to speak fluent threat & risk management applicable to the business is table stakes for any executive with cybersecurity & board reporting responsibilities. This elevated role will require a cybersecurity program strategy that extends beyond the standard industry frameworks and IT speak, and instead demonstrates how the cybersecurity program is threat-aware while being aligned to the executive team’s business objectives, delivering positive business and cybersecurity outcomes. More CISOs will look for executive coaches and trusted business partners to help them overcome any weaknesses in this area.”

 

Prediction #4: Increase of Nation-State Attacks and Threats, by John Bambenek, Principal Threat Researcher at Netenrich

 

“Recent years have seen cyberattacks large and small conducted by state and non-state actors alike. State actors organize and fund these operations to achieve geopolitical objectives and seek to avoid attribution wherever possible. Non-state actors, however, often seek notoriety in addition to the typical monetary rewards. Both actors are part of a larger, more nebulous ecosystem of brokers that provides information, access, and financial channels for those willing to pay. Rising geopolitical tensions, increased access to cryptocurrencies and dark money, and general instability due to the pandemic will contribute to a continued rise in cyber threats in 2022 for nearly every industry. Top-down efforts, such as sanctions by the U.S. Treasury Department, may lead to arrests but will ultimately push these groups further underground and out of reach.”

 

And, Adversaries Outside of Russia Will Cause Problems

 

Recognizing that Russia is a safe harbor for ransomware attackers, Dmitri Alperovitch, Chairman, Silverado Policy Accelerator: “Adversaries in other countries, particularly North Korea, are watching this very closely. We are going to see an explosion of ransomware coming from DPRK and possibly Iran over the next 12 months.”

 

Ed Skoudis, President, SANS Technology Institute: “What’s concerning about this potential reality is that these other countries will have less practice at it, making it more likely that they will accidentally make mistakes. A little less experience, a little less finesse. I do think we are probably going to see — maybe accidentally or maybe on purpose — a significant ransomware attack that might bring down a federal government agency and its ability to execute its mission.”

 

Prediction #5: The Adoption of 5G Will Drive The Use Of Edge Computing Even Further, by Theresa Lanowitz, Head of Evangelism at AT&T Cybersecurity

 

“While in previous years, information security was the focus and CISOs were the norm, we’re moving to a new cybersecurity world. In this era, the role of the CISO expands to a CSO (Chief Security Officer) with the advent of 5G networks and edge computing.

The edge is in many locations — a smart city, a farm, a car, a home, an operating room, a wearable, or a medical device implanted in the body. We are seeing a new generation of computing with new networks, new architectures, new use cases, new applications/applets, and of course, new security requirements and risks.

While 5G adoption accelerated in 2021, in 2022, we will see 5G go from new technology to a business enabler. Beyond its impact on new ecosystems, devices, applications, and use cases ranging from automatic mobile device charging to streaming, 5G will also drive the adoption of edge computing due to the convenience it brings. We’re moving away from the traditional information security approach to securing edge computing. With this shift to the edge, we will see more data from more devices, which will lead to the need for stronger data security.”

 

Prediction #6: Continued Rise in Ransomware, by Lanowitz

 

“The year 2021 was the year the adversary refined their business model. With the shift to hybrid work, we have witnessed an increase in security vulnerabilities leading to unique attacks on networks and applications. In 2022, ransomware will continue to be a significant threat. Ransomware attacks are more understood and more real as a result of the attacks executed in 2021. Ransomware gangs have refined their business models through the use of Ransomware as a Service and are more aggressive in negotiations by doubling down with distributed denial-of-service (DDoS) attacks. The further convergence of IT and Operational Technology (OT) may cause more security issues and lead to a rise in ransomware attacks if proper cybersecurity hygiene isn’t followed.

While many employees are bringing their cyber skills and learnings from the workplace into their home environment, in 2022, we will see more cyber hygiene education. This awareness and education will help instill good habits and generate further awareness of what people should and shouldn’t click on, download, or explore.”

 

Prediction #7: How the Cyber Workforce Will Continue to be Revolutionized Amid an Ongoing Shortage of Employees, by Jon Check, Senior Director Of Cyber Protection Solutions at Raytheon Intelligence & Space

 

“Moving into 2022, the cybersecurity industry will continue to be impacted by an extreme shortage of employees. With that said, there will be unique advantages when facing the current so-called ‘Great Resignation’ that is affecting the entire workforce as a whole. As the industry continues to advocate for hiring individuals outside of the cyber industry, there is a growing number of individuals looking to leave their current jobs for new challenges and opportunities to expand their skills and potentially have the choice to work from anywhere. While these individuals will still need to be trained, there is extreme value in considering those who may not have the most perfect resume for the cyber jobs we’re hiring for, but may have a unique point of view on solving the next cyber challenge. This expansion will, of course, increase the importance of a positive work culture as such candidates will have a lot of choices of the direction they take within the cyber workforce — a workforce that is already competing against the same pool of talent. With that said, we will never be able to hire all the cyber people we need, so in 2022, there will be a heavier reliance on automation to help fulfill those positions that continue to remain vacant.”

 

Prediction #8: Expect Heightened Security around the 2022 Election Cycle, by Jadee Hanson, CIO and CISO of Code42

 

“With multiple contentious and high-profile midterm elections coming up in 2022, cybersecurity will be a top priority for local and state governments. While security protections were in place to protect the 2020 election, publicized conversations surrounding the uncertainty of its security will facilitate heightened awareness around every aspect of voting next year.”

 

Prediction #9: A Shift to Zero Trust, by Brent Johnson, CISO at Bluefin

 

“As the office workspace model continues to shift to a more hybrid and full-time remote architecture, the traditional network design and implicit trust granted to users or devices based on network or system location are becoming a thing of the past. While the security industry had already begun its shift to the more secure zero-trust model (where anything and everything must be verified before connecting to systems and resources), the increased use of mobile devices, bring your own device (BYOD), and cloud service providers has accelerated this move. Enterprises can no longer rely on a specific device or location to grant access.

Encryption technology is obviously used as part of verifying identity within the zero-trust model, and another important aspect is to devalue sensitive information across an enterprise through tokenization or encryption. When sensitive data is devalued, it becomes essentially meaningless across all networks and devices. This is very helpful in limiting security practitioners’ area of concern and allows for designing specific micro-segmented areas where only verified and authorized users/resources may access the detokenized, or decrypted, values. As opposed to trying to track implicit trust relationships across networks, micro-segmented areas are much easier to lock down and enforce granular identity verification controls in line with the zero-trust model.”
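As a toy illustration of the tokenization idea described above (the class, token format, and authorization flag are invented for this sketch, not taken from any vendor product):

```python
import secrets

class TokenVault:
    """Toy sketch of tokenization: sensitive values are swapped for random,
    meaningless tokens, and detokenization is permitted only for callers in
    the authorized (micro-segmented) zone. A real deployment would use a
    hardened vault service, not an in-memory dict."""

    def __init__(self) -> None:
        self._vault: dict = {}  # token -> original sensitive value

    def tokenize(self, value: str) -> str:
        """Replace a sensitive value with a random token; the token carries
        no information about the original, so it is safe to pass around."""
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str, caller_authorized: bool) -> str:
        """Recover the original value, but only inside the secure zone."""
        if not caller_authorized:
            raise PermissionError("detokenization allowed only in the secure zone")
        return self._vault[token]
```

Because the token is random rather than derived from the value, a stolen token is worthless everywhere except the one micro-segmented area that can reach the vault, which is precisely the devaluation effect described above.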

 

 

Prediction #10: Securing Data with Third-Party Vendors in Mind Will Be Critical, by Bindu Sundareason, Director at AT&T Cybersecurity

 

“Attacks via third parties are increasing every year as reliance on third-party vendors continues to grow. Organizations must prioritize the assessment of top-tier vendors, evaluating their network access, security procedures, and interactions with the business. Unfortunately, many operational obstacles will make this assessment difficult, including a lack of resources, increased organizational costs, and insufficient processes. The lack of up-to-date risk visibility on current third-party ecosystems will lead to loss of productivity, monetary damages, and damage to brand reputation.”

 

Prediction #11: Increased Privacy Laws and Regulation, by Kevin Dunne, President of Pathlock

 

“In 2022, we will continue to see jurisdictions pass further privacy laws to catch up with the states like California, Colorado and Virginia, who have recently passed bills of their own. As companies look to navigate the sea of privacy regulations, there will be an increasing need to be able to provide a real-time, comprehensive view of what data is being processed and stored, who can access it, and most importantly, who has accessed it and when. As the number of distinct regulations continues to grow, the pressure on organizations to put in place automated, proactive data governance will increase.”
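The “who has accessed it and when” requirement above reduces, in practice, to queryable access logs. A minimal sketch follows; the log-entry field names are illustrative assumptions:

```python
# Sketch: answer "who accessed this data record, and when" from access logs,
# the kind of question a privacy regulator or data-subject request may pose.

def accesses_to_record(access_log: list, record_id: str) -> list:
    """Return (user, timestamp) pairs for one record, newest first."""
    hits = [(e["user"], e["time"])
            for e in access_log if e["record_id"] == record_id]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Illustrative log; ISO-8601 timestamps sort correctly as strings.
access_log = [
    {"user": "alice", "time": "2022-01-02T10:00:00", "record_id": "cust-42"},
    {"user": "bob",   "time": "2022-01-03T09:00:00", "record_id": "cust-42"},
    {"user": "carol", "time": "2022-01-01T08:00:00", "record_id": "cust-99"},
]
```

The automated, proactive data governance the prediction describes is largely about making sure a query like this can be answered in real time across every system that touches the data, rather than reconstructed after the fact.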

 

Prediction #12: Cryptocurrency to Get Regulated, by Joseph Carson, Chief Security Scientist and Advisory CISO at ThycoticCentrify

 

“Cryptocurrencies are surely here to stay and will continue to disrupt the financial industry, but they must evolve to become a stable method for transactions and accelerate adoption. Some countries have taken the stance that energy consumption is creating a negative impact and are therefore facing decisions to either ban or regulate cryptocurrency mining. Meanwhile, several countries see cryptocurrencies as a way to differentiate their economies, become more competitive in the tech industry and attract investment. In 2022, more countries will look at how they can embrace cryptocurrencies while also creating more stabilization, and increased regulation is only a matter of time. Stabilization will accelerate adoption, but the big question is how the value of cryptocurrencies will be measured. How many decimals will be the limit?”

 

Prediction #13: Application Security in Focus, by Michael Isbitski, Technical Evangelist at Salt Security

 

“According to the Salt Labs State of application programming interface (API) Security Report, Q3 2021, there was a 348% increase in API attacks in the first half of 2021 alone and that number is only set to go up.

With so much at stake, 2022 will witness a major push from nonsecurity and security teams alike towards the integration of security services and automation, in the form of machine assistance, to mitigate issues that arise from the rising threat landscape. The industry is beginning to understand that by taking a strategic approach to API security, as opposed to treating it as a subcomponent of other security domains, organizations can more effectively align their technology, people, and security processes to harden their APIs against attacks. Organizations need to identify their current level of API maturity and integrate processes for development, security, and operations accordingly; complete, comprehensive API security requires a strategic approach in which all three work in synergy.

To mitigate potential threats and system vulnerabilities, further industry-wide recognition of a comprehensive approach to API security is key. Next year, we anticipate that more organizations will see the need for and adopt solutions that offer a full life cycle approach to identifying and protecting APIs and the data they expose. This will require a significant change in mindset, moving away from the outdated practices of proxy-based web application firewalls (WAFs) or API gateways for runtime protection, as well as scanning code with tools that do not provide satisfactory coverage and leave business logic unaddressed. As we’ve already begun to witness, security teams will now focus on accounting for unique business logic in application source code as well as misconfigurations or misimplementations within their infrastructure that could lead to API vulnerabilities.

Implementing intelligent capabilities for behavior analysis and anomaly detection is also another way organizations can improve their API security posture in 2022. Anomaly detection is essential for satisfying increasingly strong API security requirements and defending against well-known, emerging and unknown threats. Implementing solutions that effectively utilize AI and ML can help organizations ensure visibility and monitoring capabilities into all the data and systems that APIs and API consumers touch. Such capabilities also help mitigate any manual mistakes that inadvertently create security gaps and could impact business uptime.”

 

Prediction #13: Disinformation on Social Media, by Jonathan Reiber, Senior Director of Cybersecurity Strategy and Policy at AttackIQ

 

“Over the last two years, pressure rose in Congress and the executive branch to regulate Section 230, and it increased following the disclosures made by Frances Haugen, a former Facebook data scientist, who came forward with evidence of widespread deception related to Facebook’s management of hate speech and misinformation on its platform. Concurrent with those disclosures, in mid-November, the Aspen Institute’s Commission on Information Disorder published the findings of a major report, painting a picture of the United States as a country in a crisis of trust and truth, and highlighting the outsize role of social media companies in shaping public discourse. Building on Haugen’s testimony, the Aspen Institute report, and findings from the House of Representatives Select Committee investigating the January 6, 2021 attack on the U.S. Capitol, we should anticipate increasing regulatory pressure from Congress. Social media companies will likely continue to spend large sums of money on lobbying efforts to shape the legislative agenda to their advantage.”

 

Prediction #14: Ransomware To Impact Cyber Insurance, by Jason Rebholz, CISO at Corvus Insurance

 

“Ransomware was the defining force in cyber risk in 2021 and will likely continue to be in 2022. While ransomware has gained traction over the years, it jumped to the forefront of the news this year with high-profile attacks that impacted the day-to-day lives of millions of people. The increased visibility brought a positive shift in the security posture of businesses looking to avoid being the next news headline. We’re starting to see the proactive efforts of shoring up IT resilience and security defenses pay off, and my hope is that this positive trend will continue. Comparing Q3 2020 to Q3 2021, the share of ransom demands that were actually paid declined steadily, from 44% to 12%, due to improved backup processes and greater preparedness. Decreasing the need to pay a ransom to restore data is the first step in disrupting the cash machine that is ransomware. Although we cannot say for certain, in 2022 we can likely expect to see threat actors pivot their ransomware strategies. Attackers are nimble — and although they’ve had a ‘playbook’ over the past couple of years, thanks to widespread crackdowns on their current strategies, we expect things to shift. We have already seen the opening moves from threat actors. In a shift from a single group managing the full attack life cycle, specialized groups have formed to gain access into companies and then sell that access to ransomware operators. As threat actors specialize in gaining access to environments, it opens the opportunity for other extortion-based attacks such as data theft or account lockouts, none of which require data encryption. These potential shifts will call for heavier investment in tracking emerging tactics and trends to remove that volatility.”

 

How To Define Risks for Your Information Assets

To define risks, learn where they come from, and understand their effect on information assets and the operation of your company, you will need to carry out a risk assessment. In this article we will talk about IT assets and risks. I’m not going to outline the organizational or preparatory side of things, such as appointing a risk manager or setting up the assessment process. If you need to learn about those aspects of defining a process, take a look at ISO/IEC 27005:2018.

Basic method

There are a few different approaches to defining risk, but let’s explore the basics. The first thing you will need to do is define the scope of your information assets. Information assets are all assets that could affect the confidentiality, integrity, and availability of information within your company.

There aren’t any strict criteria on how to assess this scope. The result should be a list of systems, applications, code, etc. which you need to define risks for.

Defining your assets

Assets can be singular or grouped together to unify identical risks for a set of assets.

The simplest way is to make a logical list of systems and applications, grouping them by type. For example:

  • HR systems, like BambooHR, Zoho, Workable, etc.
  • Security systems, like IPS, SIEM, Nexpose, etc.
  • Communication systems, like Slack, Facebook Workplace, Google Meet, etc.
  • Access control systems, like PACS, CCTV, etc.
  • Business support systems, like Google Workspace, MS AD, LDAP, etc.
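As a minimal sketch, an inventory like the one above can be kept as a simple mapping from group to systems. The groups and names below just mirror the examples in this list; adjust them to your own estate:

```python
# Asset inventory grouped by system type. Groups and system names are
# illustrative examples, not a recommended catalogue.
asset_inventory = {
    "HR systems": ["BambooHR", "Zoho", "Workable"],
    "Security systems": ["IPS", "SIEM", "Nexpose"],
    "Communication systems": ["Slack", "Facebook Workplace", "Google Meet"],
    "Access control systems": ["PACS", "CCTV"],
    "Business support systems": ["Google Workspace", "MS AD", "LDAP"],
}

def find_group(asset_name):
    """Return the group an asset belongs to, or None if it isn't inventoried."""
    for group, assets in asset_inventory.items():
        if asset_name in assets:
            return group
    return None
```

Keeping the inventory in one structure makes it easy to spot systems that were never assigned a group, and therefore never assessed.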

It’s worth taking into account that IT assets aren’t just the standard systems and applications with recognizable names, but also:

  • In-house systems
  • Your code
  • Employee workstations
  • Your network and its components
  • Software licenses
  • etc.

When grouping assets, you need to take into account the critical nature of the assets. For example, a service for ordering coffee in the office isn’t as critical as a customer support system. Obviously, you set how critical the system is as you see fit, taking into account that each risk can have different effects on different assets.

Zone of responsibility

This article isn’t intended to go into detail about how to define zones of responsibility, but it’s worth a brief mention.

You need to define who is responsible for what: which employees or departments are responsible for which systems from a business perspective (i.e. responsible for the data and system processes) and which are responsible for the technical aspects (i.e. asset support and management). You also need to define who your users are and who assesses the risks. You can express the result using the RACI matrix:

  • (R) Responsible
  • (A) Accountable
  • (C) Consulted
  • (I) Informed

This is necessary in order to define who will:

  • Identify assets
  • Support assets
  • Assess critical nature of assets
  • Assess damage (consequences)
  • Process risks
  • Administer processes for information risk management
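One way to capture these assignments is a small RACI table. The roles and allocations below are purely hypothetical examples:

```python
# Hypothetical RACI matrix: each risk-management activity maps roles to
# an R/A/C/I code. The roles and assignments are illustrative only.
raci = {
    "Identify assets":          {"IT team": "R", "CISO": "A", "Asset owner": "C"},
    "Support assets":           {"IT team": "R", "Asset owner": "A"},
    "Assess asset criticality": {"Asset owner": "R", "CISO": "A", "IT team": "C"},
    "Assess damage":            {"Risk manager": "R", "CISO": "A", "Asset owner": "C"},
    "Process risks":            {"Risk manager": "R", "CISO": "A", "IT team": "I"},
}

def roles_with_code(activity, code):
    """List the roles holding a given RACI code for an activity."""
    return [role for role, c in raci.get(activity, {}).items() if c == code]
```

A table like this also makes gaps visible: an activity with no “A” role has no single point of accountability.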

Damage assessment

The next step is to work with people in your company to define the damage that could result from the different risks coming to fruition.

Take a look at the table below to see an example of how this is done.

Damage table

Identification of risks

You can identify risks by combining the threats and vulnerabilities associated with each asset. Risks can be categorized by the type of impact they could have on a system or dataset:

  • Confidentiality
  • Integrity
  • Availability

Threats and vulnerabilities can be split into two types and this will help you define the impact level the risk will have on the asset and the overall applicability of the risk to a particular asset:

  • Internal (within your security or network perimeter)
  • External (outside of your company’s perimeter)

Example

Consider the risk that sensitive data could be stolen in transit across your network due to incorrect system configuration. For data being transferred within the company (internal threat), the effects of this risk coming to fruition are much smaller than if you were transferring the data externally (e.g. to a cloud provider).
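The pairing of threats and vulnerabilities described above can be sketched in code. Everything here (the `Risk` record, the sample threat and vulnerability) is a hypothetical illustration of the approach, not a real catalogue:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    threat: str
    vulnerability: str
    impact_on: str  # "confidentiality", "integrity" or "availability"
    origin: str     # "internal" or "external"

def identify_risks(asset, threats, vulnerabilities):
    """Combine each threat with each vulnerability relevant to an asset."""
    return [
        Risk(asset, t["name"], v["name"], v["impact_on"], t["origin"])
        for t in threats
        for v in vulnerabilities
    ]

threats = [{"name": "data interception in transit", "origin": "internal"}]
vulns = [{"name": "incorrect system configuration", "impact_on": "confidentiality"}]
risks = identify_risks("customer data store", threats, vulns)
```

In practice you would prune the combinations that don’t make sense for a given asset, but enumerating them first helps ensure nothing is silently skipped.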

The most difficult part of all is defining and forming the list of risks. You can use the risks that are listed in standards such as ISO, PCI DSS, NIST, COBIT, etc. and adapt them to your own processes.

The domains you consider should include but not be limited to:

  • Access and role management
  • Change and development management
  • System backup and recovery
  • Monitoring
  • Password security
  • Vulnerability management
  • Privileged account management
  • Third party management
  • Physical security
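A simple coverage check can flag assets with no identified risk in one of these domains. The domain list mirrors the one above; the register contents passed in are hypothetical:

```python
# Required risk domains (from the list above, lower-cased for matching).
REQUIRED_DOMAINS = [
    "access and role management",
    "change and development management",
    "system backup and recovery",
    "monitoring",
    "password security",
    "vulnerability management",
    "privileged account management",
    "third party management",
    "physical security",
]

def missing_domains(asset_risks, asset):
    """Return the required domains with no identified risk for an asset.

    asset_risks maps asset name -> set of domains already covered.
    """
    covered = asset_risks.get(asset, set())
    return [d for d in REQUIRED_DOMAINS if d not in covered]
```

Running this across the whole inventory gives a quick view of where risk identification is still incomplete.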

What else affects risks?

The possibility and frequency that a risk might be realized also affects your assessment. Let’s take a look at an example.

Example 1

Consider unauthorized access to internal systems that leads to the system admin password being exposed. However, the system can only be accessed from the company’s local network (where connection is only possible with a user certificate and a set device) or via a VPN that requires two-factor authentication.

In this case:

  • The chance that this risk will be realized is low
  • The possible frequency of this risk being realized is low

As we can see, the actual impact of this risk on an asset is practically zero and you can either not even consider it, or mark it as a risk that you are willing to accept.

Now, let’s take a look at this risk in different circumstances. If we say this risk applies to an external system in the cloud, with local authorization over plain HTTP, then:

  • The chance that this risk will be realized is high because the admin password is transmitted across an open channel and there is no additional security applied to the admin account
  • The possible frequency of this risk being realized is high because the system is accessible from anywhere with an internet connection

As you can see, the circumstances are something you need to consider when defining and grouping risks for assets according to type and critical nature.
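The two scenarios above can be compared with a crude qualitative score. The three-level scale and the multiplication rule are assumptions for illustration; use whatever scoring model fits your organization:

```python
# Crude qualitative risk scoring: likelihood x frequency x impact.
# The scale and the multiplication rule are illustrative assumptions.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, frequency, impact):
    return LEVELS[likelihood] * LEVELS[frequency] * LEVELS[impact]

# Admin password exposure, reachable only via a cert-gated LAN or MFA VPN:
internal_score = risk_score("low", "low", "high")    # 1 * 1 * 3 = 3
# The same risk on a cloud system with local auth over plain HTTP:
exposed_score = risk_score("high", "high", "high")   # 3 * 3 * 3 = 27
```

The absolute numbers matter less than the ranking they produce: the same underlying risk lands at opposite ends of the scale purely because of its circumstances.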

Let’s take a look at some more examples in the context of a marketplace and consider their impact.

Example 2

Neither the company’s site nor mobile application have undergone a comprehensive security review during design and implementation. In addition, there is no process of continuous security assessment (vulnerabilities detection) of the site or mobile application.

This may result in prolonged existence of exploitable vulnerabilities which may lead to the systems being compromised by an outside intruder and a leak of confidential data.

This would impact:

  • Data confidentiality (e.g. a misconfiguration in the authentication form that grants access to client data)
  • Data and application integrity (e.g. vulnerabilities like SQL injection)
  • Application availability (e.g. DDoS vulnerabilities)

If we consider the risks and outcome using the damage table above, we have several types of harm:

  • Reputational damage — Moderate
  • Idleness or inefficiency in service operation — Low
  • Contravention of laws and regulations — Moderate

You should define the value of the damage and the impact in a way appropriate for your business.

Example 3

The company’s disaster recovery plan is outdated and has not been tested for years. Given the moderate potential of an intruder breaching the systems, a combination of events may result in the inability to restore operations at the recovery site within an acceptable time frame.

The impact would fall on data integrity and availability, and the harm would be:

  • Financial loss – High, because it’s very harmful for a marketplace to lose all of its customer data; the company will lose money if customers can’t order goods.

What comes out the other end

When you have completed the process of setting and assessing risks, you should have a document, matrix, or table which shows, for each asset or group of assets: […]
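The resulting register might look like the sketch below. The fields and the sample row are assumptions based on the attributes discussed in this article:

```python
# One hypothetical row of the final risk register.
risk_register = [
    {
        "asset": "company website",
        "risk": "exploitable vulnerabilities due to missing security review",
        "impact_on": ["confidentiality", "integrity", "availability"],
        "likelihood": "high",
        "damage": {"reputational": "moderate", "regulatory": "moderate"},
        "treatment": "introduce continuous vulnerability assessment",
    },
]
```

However you store it, the point is that every asset or asset group ends up with its risks, their assessed impact, and a treatment decision in one reviewable place.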

 

 

A CIO’s Introduction to the Metaverse

The “metaverse” is coming. Are you ready? Microsoft, Nvidia, and Facebook have all announced significant applications to give enterprises a door into the metaverse. Many startups are also building this kind of technology.

But just what is the metaverse anyway? Is it something that CIOs need to have on their radar? What are the use cases for businesses? And what are the caveats that organizations need to watch for to reduce risk?

What Is a Metaverse?

The metaverse is essentially a 3D mixed-reality “place” that combines the real, physical world with the digital world. It is persistent, meaning it continues to exist even if you close the app or log out. It is also collaborative, meaning that people in that world see the same thing and can work together. Some experts say that the metaverse will be a new 3D layer of the internet. Gartner’s definition goes one step further, says Tuong Nguyen, senior research analyst at Gartner, specifying that a true metaverse must be interoperable with other metaverses (and thus, many of today’s iterations don’t fit the Gartner definition yet).

Here’s how Nvidia CEO Jensen Huang put it during his keynote address at the Nvidia GTC 2021 online event this month: “The internet is essentially a digital overlay on the world. The overlay is largely 2D information — text, voice, images, video — but that’s about to change. We now have the technology to create new 3D virtual worlds or model our physical world.”

Today’s video conferencing, driven into the mainstream by the pandemic, is an example of two-dimensional collaboration. People can participate via their laptop cameras and microphones from home, or they can be in the office in a teleconference room. They can share their screens or use apps that allow for a collaborative whiteboard.

A metaverse layers immersive 3D on top of that. Participants can create avatars (digital representations of themselves) and use those to enter a virtual 3D room. In that room they can collaborate on a virtual whiteboard on the virtual wall or walk around a virtual 3D model of a car they are designing, for instance.

That’s essentially the use case that Microsoft CEO Satya Nadella described when he announced Mesh for Microsoft Teams at the tech giant’s Ignite conference this month. Microsoft will add this capability to its Teams collaboration tool starting in 2022.

This feature combines the capabilities of Microsoft’s mixed-reality platform Mesh (announced in March 2021 as a platform for building metaverses) with the productivity tools of Microsoft Teams, according to Microsoft.

Facebook, which rebranded itself as Meta earlier this year, introduced Horizon Workrooms in August, which are VR meeting spaces for remote collaboration.

Metaverse Use Cases for the Enterprise

Collaboration is one of three primary use cases for a metaverse in the enterprise right now, according to Forrester VP J.P. Gownder.

Another primary use case is one championed by chip giant Nvidia — simulations and digital twins. Huang announced Nvidia Omniverse Enterprise during his keynote address at the company’s GTC 2021 online AI conference this month and offered several use cases that focused on simulations and digital twins in industrial settings such as warehouses, plants, and factories.

If you are an organization in an industry with expensive assets — for instance oil and gas, manufacturing, or logistics — it makes sense to have this use case on your radar, according to Gartner’s Nguyen. “That’s where augmented reality is benefiting enterprise right now,” he says.

As an example, during his keynote address, Nvidia’s Huang showed a video of a virtual warehouse created with Nvidia Omniverse Enterprise enabling an organization to visualize the impact of optimized routing in an automated order picking scenario. That’s an example of a particular use case, but Omniverse itself is Nvidia’s platform to enable organizations to create their own simulations or virtual worlds.

“We built Omniverse for builders of these virtual worlds,” Huang said at GTC. “Some worlds will be for gatherings and games. But a great many will be built by scientists, creators, and companies. Virtual worlds will crop up like websites today.”

The third use case for enterprises falls in the business-to-consumer marketing realm as demonstrated by online gaming platform company Roblox, according to Gownder. On this gaming platform that’s popular with the pre-teen crowd, users can purchase digital clothing to outfit their avatars, and brands are taking notice. For instance, apparel brands including Vans and Gucci have created customized, branded worlds on Roblox.

Should CIOs Put Metaverse on Their Tech Roadmaps?

Yes, but no need to jump in with both feet yet, the experts say.

“CIOs should be thinking about these examples,” says Nguyen. “But you don’t need to have a metaverse presence.” Yet. “It would behoove you to get that frame of reference because of the inevitability. Not being a part of this in some way, you will likely be missing out substantially, just like any organization that doesn’t have a website today.”

Indeed, it may pay off if you decide to wait for version 2. Microsoft’s Mesh for Teams lets users create an avatar and use that instead of turning on their webcams. These personal avatars come complete with facial expressions to convey reactions.

“This is unlikely to get the same level of engagement as others utilizing video in a meeting,” says Tim Banting, Omdia’s practice leader for the Digital Workplace. “Consequently, Omdia believes this feature to be somewhat of a gimmick.”

However, some other use cases may appeal to organizations, he adds. Yet there are other caveats for enterprises to consider when it comes to practical implementation.

“A specific headset, rather than a PC or mobile device, would be required to maximize the user experience,” Banting says. “With many organizations failing to offer remote staff business-quality headsets and external webcams, it’s unlikely that enterprises could justify the expense of VR equipment for regular employee meetings.”

Do You Need to Skill Up Your IT Workforce?

Many of the benefits of metaverse technology will be available through your existing technology vendors already, like Mesh for Microsoft Teams. What’s more, Banting points out that in the consumer VR world, “it’s very much a plug and play environment with easy setup.”

However, “Where things could get interesting is when businesses want to create their own ‘branded’ metaverse. I expect this will be an advanced services opportunity for a new category of partners working in conjunction with marketing.”

Gownder said that an understanding of 3D is a rare skill today, so finding people who can develop on Unity or Unreal Engine may be valuable. But it’s not something that everyone will need to jump on right away.