Is Your Organization Reaping the Rewards or Simply Ticking a Box?


In today’s interconnected world, third parties play an important role in your organization’s success, but they can also be its weakest link when it comes to risk management.

According to Gartner, 60% of organizations are now working with more than 1,000 third parties. Despite the added complexities, these relationships are critical to business success – delivering affordable, responsive and scalable solutions that can help organizations grow and adapt to the needs of their customers. But as reliance on third parties grows, so too does the exposure to additional risk.

If we are going to reap the rewards of third-party relationships, then we must also identify, manage and mitigate the risks. A rigorous third-party risk management (TPRM) program is key to achieving just that, which means effective third-party oversight is more important than ever. So how can you ensure that your TPRM processes are ready to face the challenges of our ever-evolving commercial landscape, and what practical steps can you take to improve them?

Third-party risk is more than a checkbox exercise

Often, organizations start thinking about TPRM as a result of compliance drivers. They are facing a wide range of regulatory requirements around data privacy, information security, and cloud hosting. Whatever the motivation, all too often we see this kind of activity treated as little more than checking a box.

Reducing this kind of risk management to a compliance exercise doesn’t ensure that you address the root causes and underlying risks. In fact, by viewing TPRM as a set of minimum requirements, it’s easy to overlook potential risks that could become real issues for your organization. This is particularly true when vendors are viewed in isolation: activities aren’t standardized and aligned across the entire organization, creating additional unforeseen risks across your vendor portfolio.

Instead, your organization should take a holistic approach. Integrating TPRM with your wider Governance, Risk and Compliance (GRC) program can have huge benefits. By embedding your assessment program in your wider compliance landscape, you won’t just be conducting a one-time vendor audit; you’ll be proactively assessing third-party risks and continuously improving operations, efficiencies and processes to enhance the security of every aspect of your supplier network. You will be able to pass information throughout the business, ensuring that risks are identified and treated on an ongoing basis.

Determining the scope of risk assessments is vital

Many organizations simply don’t have the resources to conduct assessments of all of their third-party providers at a granular level. So, your very first step should be to take an inventory of all of your third parties, considering who your vendors are and what business functions they support. Then, armed with this information, you can prioritize your analysis. There are three key considerations that can provide a structure for your assessment.

Cost is a sensible starting point for most organizations, and often the easiest way to structure your assessment. By looking at the contractual value of each vendor, you can tier them accordingly. Another way of categorizing your vendors is by the type of risk they expose your organization to: consider factors such as geography, technology, and financial risk, then rank those risks by how likely they are to occur. The most sophisticated approach is criticality, which ranks each vendor by assessing which of your critical assets, systems and processes they impact, and what the repercussions of those risks would be to your organization.
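To make the cost-based approach concrete, here is a minimal sketch in Python; the vendor names, contract values and tier thresholds are all invented for illustration:

```python
# Hypothetical vendor inventory with annual contract value in USD
vendors = [
    {"name": "CloudHost Co", "contract_value": 750_000},
    {"name": "Payroll Inc", "contract_value": 120_000},
    {"name": "Print Shop", "contract_value": 8_000},
]

def tier_by_cost(vendor: dict) -> str:
    """Assign an assessment tier from contract value (example thresholds)."""
    value = vendor["contract_value"]
    if value >= 500_000:
        return "Tier 1 - full assessment"
    if value >= 50_000:
        return "Tier 2 - standard question set"
    return "Tier 3 - lightweight review"

# Highest-spend vendors get assessed first
for v in sorted(vendors, key=lambda v: -v["contract_value"]):
    print(v["name"], "->", tier_by_cost(v))
```

The same structure extends naturally to risk- or criticality-based tiering: swap the contract value for a risk score or a count of critical systems touched.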

Sometimes, there are other factors that might impact whether or not a third party is included within the scope of your assessment. You may find, for example, that your vendor will not allow you to assess them. That’s often the case if you’re working with big companies like Google, Amazon or Microsoft, who may well be critical to your business success but are unlikely to give you bespoke information for your audit.

Alternatively, external factors might dictate the scope of your assessment. Whether it’s a global pandemic like COVID-19 or a major geopolitical event such as the Russia-Ukraine war, organizations will often conduct tactical assessments in order to analyze the impact of their expanded risk profiles.

It’s about quality not quantity

When it comes to crafting your TPRM Question Sets, less is most definitely more. You may be tempted to put together hundreds of questions covering every topic under the sun. But is this going to give you the information you need? And, more importantly, is your busy vendor even going to answer all of your questions?

Another important consideration is to decide just how specific to make your Question Sets. Make them too generic and you may not be able to capture the data you need. But make them too specific to your business and your vendor is going to find it incredibly difficult to provide answers in the detail you are looking for.

At the end of the day, it’s a balancing act – one that means you should keep your Question Sets as targeted as possible. So, rather than sending 200 questions, send 20, but make sure they are well thought through to ensure that they gather the information you need for your risk program. This is where it might be helpful to leverage existing Question Sets such as SCF, SIG and Cyber Risk Institute. Whether they have been provided by consultants or they’re part of an industry standard, this approach will help to ensure you get the data you need.

Practical steps to improve third-party risk management

There are four key actions to consider when it comes to improving a TPRM program.

Understand what’s going well, and what’s not

Conduct a self-assessment of your organization’s TPRM capability and ask key questions such as: what are our strengths? Where are our weaknesses? It’s a good idea to ensure part or all of the assessment is carried out by an external party as they will deliver impartial feedback and highlight potential areas for improvement.

Understand your target state

If you have a vendor-first strategy that leads to a large amount of outsourcing, it’s crucial that you understand your target state for third-party due diligence. Have a roadmap that sets out realistic aims and objectives, and how you intend to achieve them. Trying to do too much too soon with your program can cause issues, slow down progress and prove counterproductive.

Build partnerships with vendors

Establishing a close relationship with critical vendors is central to the success of any TPRM program. Without partnerships, it becomes increasingly difficult to work toward common goals. Technology can help with monitoring and assessments, but having the ability to pick up the phone and openly discuss and address issues to mitigate any risks…


5 questions CISOs should ask when evaluating cyber resiliency

When cybersecurity experts talk about cyber resiliency, they’re talking about the ability to effectively respond to and recover from a cybersecurity incident. Many organizations don’t like to think about that — and it’s easy to see why. Many have invested heavily in tools designed to protect their networks from intrusion and attack, and planning for cyber resiliency means accepting the possibility that those tools might fail.

But the truth is, they might. In fact, they probably will. Even with the best tools on the market, it isn’t possible to stop 100% of attacks, which means it’s important to plan for the worst. In doing so, you can improve your cyber resiliency, which can significantly mitigate the damage caused by those inevitable attacks that manage to slip through the cracks.

Putting a plan in place that details how to handle a cyber-triggered business disaster is essential, but it isn’t always easy to get started. Here are the top five questions CISOs should be asking when it comes time to evaluate — and improve — cyber resiliency.

1. Do you have strong retainers in place? 

It’s difficult — not to mention dangerous — to go it alone. There’s no shame in seeking out help from experts. In fact, it’s usually the smart thing to do. Most organizations (hopefully!) don’t have significant experience when it comes to dealing with cyber incidents, but there are many third parties that can provide invaluable guidance and assistance.

Do you have a good incident response retainer in place? What about a good cyber crisis communications retainer? These are not things you want to be scrambling for in the midst of a disaster, but having them in place in advance can help you respond quickly and effectively. A technical incident response firm can support and validate containment and eradication of the threat, while a crisis communications firm can help you coordinate both internal and external messaging. Communication can often make or break an incident response, so don’t overlook that element.

2. Do you have well-defined cyber incident response plans and resiliency playbooks?

A cyber incident response plan deals primarily with the security team’s categorization, notification, and escalation of the technical incident, but a strong cyber resilience playbook details the various resources and workstreams that need to be activated for a broad, enterprise-level response effort. Key stakeholders and decision-makers across internal and external counsel, public relations, disaster recovery, crisis communications, business continuity, security, and executive leadership should be involved in this playbook — who is going to lead which workstream and what will the decision-making process look like?

There may be decision-making thresholds you can pre-define. Ransomware payment is a good example, particularly with the continuous rise in ransomware attacks. Can you align in advance on when your organization would consider simply paying the ransom? Maybe that threshold is a certain amount of time without key business functions, or maybe it’s a dollar amount. Aligning on those decisions in advance can save significant time.

This is also a good opportunity to align on the decision-making resources that might be needed, such as out-of-band communications or deciding whether certain incidents need to be handled in person. Do you need corporate apartments that can be sourced through procurement? Are there other external relationships that need to be established? Having these discussions early can ease coordination across the whole enterprise.

3. Are you testing your playbooks and third-party firms?

You can’t just put policies and procedures in place. You need to test them. And that doesn’t just apply to internal parties — you can bring your retained firms in to conduct tabletop exercises as well.

For an incident response retainer, you might have them lead the security operations center (SOC) in a technical tabletop exercise. This tests the coordination between the SOC and the incident response firm to see how well they know the relevant procedures in the incident response plan and whether they can communicate effectively. For a crisis communications firm, try having them lead a management-level tabletop exercise, since the firm would be spearheading external communications and ensuring that everyone is aligned on the messaging. It can be helpful to work through that messaging in an executive tabletop.

Of course, these tabletop exercises can also be combined. The incident response firm and the crisis communications firm can be tested with a mock incident that escalates from the SOC all the way up to an enterprise-level concern. This can help gauge their response capabilities as an incident becomes more serious, as well as their ability to effectively communicate that response.

4. Do you have a strong grasp of your most critical business processes?

Maybe more to the point, do you understand the critical path for those business processes? That means the third-party applications, the underlying infrastructure, the data center locations, and other key factors that support those processes. Do you have backup processing methods? Do you have a manual process you can use in a pinch? Do you have offline contact information for your third-party vendors so you can quickly and easily reach them in the event that your data is locked up?

These are all critical questions that organizations need to be able to answer in the event of an emergency. Understanding that critical path can help you know who to call and which business process needs to be activated during an incident. The last thing you want is to discover that the contact information for all of your vendors is stored on a server currently encrypted by ransomware attackers.

5. Do you have a disaster recovery plan in place?

Do you know clearly what — and in what sequence — you need to recover across data and infrastructure? Do you know the exact point of recovery? Do you know the recovery time objective for recovering that infrastructure and data? Depending on the process, that time objective might be 30 minutes, or it might be a week. Knowing that answer is essential not just for setting expectations, but for planning your recovery effectively.

Business continuity and disaster recovery programs don’t just need to be in place, they need to be evaluated with failover tests. For example, if your system has regional redundancies, you might conduct a test in which one region fails and the system immediately fails over to another region. The security and disaster recovery teams can then practice recovering the data for the region that “failed.” This serves the dual purpose of both making sure the failover is working and ensuring recovery systems are operating as planned.

Hope for the Best, Plan for the Worst

No one wants to believe they will be the victim of a cyber-triggered business disaster, but it’s always better to have a plan and not need it than to need a plan and not have it. Cyber resiliency, however, is not something that can be “achieved” and forgotten. It needs to be maintained as the organization changes and scales over time. By keeping these concerns top of mind and conducting regular testing and tabletop exercises, you can help ensure that your resiliency remains strong even as the organization evolves.


How does encryption work? Examples and video walkthrough

Cryptography — the practice of taking a message or data and camouflaging its appearance in order to share it securely

What is cryptography used for?

It’s the stuff of spy stories, secret decoder rings and everyday business — taking data, converting it to another form for security, sharing it, and then converting it back so it’s usable. Infosec Skills author Mike Meyers provides an easy-to-understand walkthrough of cryptography.


Cryptography types and examples

Cryptography is the science of taking data and making it hidden in some way so that other people can’t see it and then bringing the data back. It’s not confined to espionage but is really a part of everyday life in digital filesharing, business transactions, texts and phone calls.

What is cryptography?

Simply put, cryptography is taking some kind of information and converting it to a protected form (encrypting) so it can be shared with intended partners, then returning it to its original form (decrypting) so that the intended audience can use that information.

What are obfuscation, diffusion and confusion?

(00:40) Obfuscation is taking something that looks like it makes sense and hiding it so that it does not make sense to the casual outside observer.

(00:56) One of the things we can do to obfuscate a message or image is diffusion, where we take an image and make it fuzzier, so the details are lost or blurred. Diffusion only allows us to make it less visible, less obvious.

(01:26) We can also use confusion, where we take that image, stir it up and make a mess out of it like a Picasso painting so that it would be difficult for somebody to simply observe the image and understand what it represents.

How a Caesar cipher works

(02:10) Cryptography has been around for a long, long time. In fact, probably one of the oldest types of cryptography that has ever been around is something called the Caesar cipher. If you’ve ever had or seen a “secret decoder ring” when you were young, you know how a Caesar cipher works.

Encrypting using a Caesar cipher

(02:40) I’ve made my own decoder ring right here. It’s basically a wheel with all the letters of the alphabet, A through Z, on the inside, and all of the letters of the alphabet, A through Z, on the outside, and to start, you line them up from A to A, B to B, C to C.

(02:59) To make a secret code, you can rotate the inside wheel to change the letters from our original, plain text on the outside wheel. We call this substitution. We’re taking one value and substituting it for another. (03:20) Rotating the wheel two times is called ROT two; turning it three times would be ROT three. (03:37) So we can take, like the word ACE, A-C-E, and I can change ACE to CEG. Get the idea? So that’s the cornerstone of the Caesar cipher.

(04:00) As an example, our piece of plain text that we want to encrypt is, “We attack at dawn.” The first thing we’re going to do is get rid of all the spaces, so now it just says “weattackatdawn.” We’ll rotate our wheel five times — it’s ROT five. And now the encrypted “weattackatdawn” is “bjfyyfhpfyifbs.” (04:44) So we now have generated a classic Caesar cipher.
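The wheel-and-rotation steps above map directly to a few lines of code. A minimal Python sketch, using the same ROT five and plain text as the walkthrough (the function name is our own):

```python
def caesar(plaintext: str, rot: int) -> str:
    """Substitute each letter with the one `rot` places along, wrapping past 'z'."""
    out = []
    for ch in plaintext.lower():
        if ch.isalpha():  # spaces are dropped first, as in the example
            out.append(chr((ord(ch) - ord("a") + rot) % 26 + ord("a")))
    return "".join(out)

print(caesar("we attack at dawn", 5))   # -> bjfyyfhpfyifbs
print(caesar("bjfyyfhpfyifbs", -5))     # -> weattackatdawn (decrypting is just ROT minus five)
```
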

(04:49) Now there’s a problem with Caesar ciphers. Even though it is a substitution cipher, it’s too easy to predict what the text is because we’re used to looking at words.

How a Vigenere cipher works

(05:32) To make things more challenging, we can use a Vigenere cipher, which is really just a Caesar cipher with a little bit of extra confusion involved. For illustrative purposes, the Vigenere cipher is a table that shows all the possible Caesar ciphers there are. At the top, on Line 0 is the alphabet — from A to Z. On the far left-hand side, it says zero through 25. So these are all the possible ROT values you can have, from ROT zero, which means A equals A, B equals B, all the way down to ROT 25.

Encrypting using a Vigenere cipher and key

(6:17) Let’s start with a piece of plain text. Let’s use “we attack at dawn” one more time. This time, we’re going to apply a key. The key is simply a word that’s going to help us do this encryption. In this particular case, I’m going to use the word face, F-A-C-E.

(06:34) I’m going to put F-A-C-E above the first four letters of “we attack at dawn,” and then I’m going to just keep repeating that. And what we’ve done is we have applied a key to our plain text.

(06:58) Now we’re going to use the key to change the Caesar cipher ROT value for every single letter. So the first letter of the plain text is the W in “we” up at the top, and the key value is F, so let’s go down on the Y-axis until we get to an F. Right next to that F, you’ll see the number five. So this is ROT five.

(07:31) So all I need to do is find the intersection of these, and we get the letter B.

(07:39) The second letter in our plain text is the letter E from “we,” and in this particular case, the key value is A, which is kind of interesting, because that’s ROT zero, but that still works. So we start up at the top, find the letter E, then we find the A, and in this case, because it’s ROT zero, E is going to stay as E.

(08:00) Now, this time, it’s the A in attack. So we go up to the top. There’s the letter A, and the key value is C, as in Charlie. So we go down to the C that’s ROT two, and we then see that the letter A is now going to be C.

(08:19) Now, do the first T in attack. We come over to the Ts, and now the key value is E, as in FACE. So we go down here, that’s ROT four, we do the intersection, and now we’ve got an X. So the first four letters of our encrypted code are B, E, C, X.
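The table lookups above amount to giving each letter its own Caesar shift, read from the repeating key. A minimal Python sketch of the same scheme, using the walkthrough’s plain text and the key “face”:

```python
def vigenere(plaintext: str, key: str) -> str:
    """Shift each letter by a per-letter ROT value taken from the repeating key."""
    a = ord("a")
    letters = [ch for ch in plaintext.lower() if ch.isalpha()]
    out = []
    for i, ch in enumerate(letters):
        rot = ord(key[i % len(key)].lower()) - a  # F -> ROT 5, A -> ROT 0, C -> 2, E -> 4
        out.append(chr((ord(ch) - a + rot) % 26 + a))
    return "".join(out)

print(vigenere("we attack at dawn", "face"))  # starts with b, e, c, x as above
```
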

Understanding algorithms and keys

(08:52) The beauty of the Vigenere is that it actually gives us all the pieces we need to create a classic piece of cryptography. We have an algorithm. The algorithm is the different types of Caesar ciphers and the rotations. And second, we have a key that allows us to make any type of changes we want within ROT zero to ROT 25 to be able to encrypt our values.

Any algorithm out there will use a key in today’s world. So when we’re talking about cryptography today, we’re always going to be talking about algorithms and keys.

(09:31) The problem with the Vigenere is that it’s surprisingly crackable. It works great for letters of the alphabet, but it’s terrible for encrypting pictures or SQL databases or your credit card information.

(09:53) In the computer world, everything is binary. Everything is ones and zeros. We need to come up with algorithms that encrypt and decrypt long strings of just ones and zeros.

(10:11) While long strings of ones and zeros may look like nothing to a human being, computers recognize them. They could be a Microsoft Word document, a voice-over-IP conversation, or a database stored on a hard drive.

How to encrypt binary data

(10:37) We need to come up with algorithms which, unlike Caesars or Vigeneres, will work with binary data.

(10:45) There are a lot of different ways to do this. We can do this using a very interesting type of binary calculation called “exclusive OR.”

(11:08) For our first encryption, I’m going to encrypt my name, and we have to convert this to the binary equivalents of the text values that a computer would use. Anybody who’s ever looked at ASCII code or Unicode should be aware that we can convert these into binary.

Exclusive OR (XOR) encryption example

(11:38) So here’s M-I-K-E converted into binary. Now notice that each character takes eight binary digits. So we got 32 bits of data that we need to encrypt. So that’s our clear text. Now, in order to do this, we’re going to need two things.

(11:58) First, we need an algorithm and then we’re going to need a key.

(12:09) Now our algorithm is extremely simple: an exclusive OR, defined by what we call a truth table. For this illustration, we’ll use a five-bit key. In the real world, keys can be thousands of bytes long.

(12:41) So, to make this work, let’s start placing the key. I’m going to put the key over the first five bits, here at the letter M for Mike, and now we can look at this table, and we can start doing the conversion. So let’s convert those first two values, then the next, then the next, then the next.

(12:58) Now, we’ve converted a whole key’s worth, but in order to keep going, all we have to do is schlep that key right back up there and extend the key all the way out and just keep repeating it to the end. It doesn’t quite line up, so we add whatever amount of key is needed to fill up the rest of this line.

(13:28)  Using the Exclusive OR algorithm, we then create our cipher text.

(13:44) Notice that we have an algorithm that is extremely simplistic. We have a key, which is very, very simple and short, but we now have an absolutely perfect example of binary encryption.

(13:58) To decrypt this, we’d simply reverse the process. We would take the cipher text, place the key up to it, and then basically run the algorithm backward. And then we would have the decrypted data.
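In Python, the whole exclusive-OR scheme fits in a few lines. This sketch repeats a two-byte key rather than the five-bit key in the video, but the principle is identical, and the same function both encrypts and decrypts, because x ^ k ^ k == x:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR every byte of the data against the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"MIKE"                # 4 ASCII characters = 32 bits
key = b"\x5a\x3c"                  # illustrative short key; real keys are far longer
ciphertext = xor_cipher(plaintext, key)

assert ciphertext != plaintext                    # the data is obscured...
assert xor_cipher(ciphertext, key) == plaintext   # ...and the same key recovers it
```
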

What is Kerckhoffs’s principle?

Having the algorithm and a key makes cryptography successful. But which is more important, the algorithm or the key?
(14:30) In the 19th century, Dutch-born cryptographer Auguste Kerckhoffs said a system should be secure, even if everything about the system, except the key, is public knowledge. This is really important. Today’s super-encryption tools that we use to protect you on the internet are all open standards. Everybody knows how the algorithms work.


API Security 101: The Ultimate Guide

APIs (application programming interfaces) are driving forces in modern application development because they enable applications and services to communicate with each other. APIs provide a variety of functions that enable developers to more easily build applications that can share and extract data.

Companies are rapidly adopting APIs to improve platform integration, connectivity, and efficiency and to enable digital innovation projects. Research shows that the average number of APIs per company increased by 221% in 2021.

Unfortunately, over the last few years, API attacks have increased massively, and security concerns continue to impede innovations.

What’s worse, according to Gartner, API attacks will keep growing. They’ve already emerged as the most common type of attack in 2022. Therefore, it’s important to adopt security measures that will keep your APIs safe.

What is an API attack?

An API attack is malicious usage or manipulation of an API. In API attacks, cybercriminals look for business logic gaps they can exploit to access personal data, take over accounts, or perform other fraudulent activities.

What Is API security and why is it important?

API security is a set of strategies and procedures aimed at protecting an organization against API vulnerabilities and attacks.

APIs process and transfer sensitive data and other organizations’ critical assets. And they are now a primary target for attackers, hence the recent increase in the number of API attacks.

That’s why an effective API security strategy is a critical part of the application development lifecycle. It is the only way organizations running APIs can ensure those data conduits are secure and trustworthy.

A secure API improves the integrity of data by ensuring the content is not tampered with and is available only to users, applications, and servers that have proper authentication and authorization to access it. API security techniques also help mitigate API vulnerabilities that attackers can exploit.

When is the API vulnerable?

Your API is vulnerable if:

  • The API host’s purpose is unclear, and you can’t tell which version is running, what data is collected and processed, or who should have access (for example, the general public, internal employees, and partners)
  • There is no documentation, or the documentation that exists is outdated.
  • Older API versions are still in use, and they haven’t been patched.
  • Integrated services inventory is either missing or outdated.
  • The API contains a business logic flaw that lets bad actors access accounts or data they shouldn’t be able to reach.

What are some common API attacks?

API attacks differ significantly from other cyberattacks and are harder to spot, which is why you need to understand the most common API attacks, how they work and how to prevent them.

BOLA (broken object level authorization) attack

This most common form of attack happens when a bad actor changes parameters across a sequence of API calls to request data that person is not authorized to have. For example, nefarious users might authenticate with one UserID and then enumerate UserIDs in subsequent API calls to pull back account information they’re not entitled to access.

Preventive measures:

Look for API tracking that can retain information over time about what different users in the system are doing. BOLA attacks can be very “low and slow,” drawn out over days or weeks, so you need API tracking that can store large amounts of data and apply AI to detect attack patterns in near real time.
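Prevention also starts in the code itself: every object lookup should verify ownership, not just authentication. A framework-free sketch of that check (the data store and function names are hypothetical):

```python
# Hypothetical in-memory store; in practice this would be a database lookup
accounts = {
    "acct-1": {"owner": "user-a", "balance": 100},
    "acct-2": {"owner": "user-b", "balance": 250},
}

def get_account(requesting_user: str, account_id: str) -> dict:
    """Return an account only if the authenticated user actually owns it."""
    account = accounts.get(account_id)
    if account is None or account["owner"] != requesting_user:
        # Same error for "missing" and "not yours", so enumeration
        # doesn't reveal which account IDs exist
        raise PermissionError("not found")
    return account

get_account("user-a", "acct-1")    # allowed: user-a owns acct-1
# get_account("user-a", "acct-2")  # raises PermissionError: enumerated ID, not the owner
```

The key design choice is that authorization is enforced per object at the point of access, so simply incrementing the account ID in the request buys the attacker nothing.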

Improper assets management attack

This type of attack happens if there are undocumented APIs running (“shadow APIs”) or older APIs that were developed, used, and then forgotten without being removed or replaced with newer, more secure versions (“zombie APIs”). Undocumented APIs present a risk because they run outside the processes and tooling meant to manage APIs, such as API gateways. You can’t protect what you don’t know about, so your inventory needs to be complete, even when developers have left something undocumented. Older APIs are unpatched and often use older libraries; they are also undocumented and can remain undetected for a long time.

Preventive measures:

Set up a proper inventory management system that includes all the API endpoints, their versions, uses, and the environment and networks they are reachable on.

Always check that the API needs to be in production in the first place, that it’s not an outdated version, that no sensitive data is exposed, and that data flows as expected throughout the application.

Insufficient logging & monitoring

API logs contain personal information that attackers can exploit. Logging and monitoring functions provide security teams with raw data to establish the usual user behavior patterns. When an attack happens, the threat can be easily detected by identifying unusual patterns.

Insufficient monitoring and logging results in untraceable user behavior patterns, thereby allowing threat actors to compromise the system and stay undetected for a long time.

Preventive measures:

Always have a consistent logging and monitoring plan so you have enough data to use as a baseline for normal behavior. That way you can quickly detect attacks and respond to incidents in real time. Also, ensure that any data that goes into the logs is monitored and sanitized.
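A minimal sketch of that sanitization step, with an invented list of sensitive field names; a real deployment would tie this into the logging pipeline itself:

```python
# Field names treated as sensitive (illustrative list)
SENSITIVE_KEYS = {"password", "token", "ssn", "api_key"}

def sanitize(event: dict) -> dict:
    """Replace sensitive values so raw secrets never reach the log files."""
    return {
        key: ("[REDACTED]" if key in SENSITIVE_KEYS else value)
        for key, value in event.items()
    }

event = {"user": "alice", "action": "login", "password": "hunter2"}
print(sanitize(event))  # the password value is replaced with [REDACTED]
```
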

What are API security best practices?

Here’s a list of API best practices to help you improve your API security strategy:

  • Train employees and security teams on the nature of API attacks. Lack of knowledge and expertise is the biggest obstacle in API security. Your security team needs to understand how cybercriminals propagate API attacks and different call/response pairs so they can better harden APIs. Use the OWASP API Top 10 list as a starting point for your education efforts.
  • Adopt an effective API security strategy throughout the lifecycle of the APIs.
  • Turn on logging and monitoring and use the data to detect patterns of malicious activities and stop them in real-time.
  • Reduce the risk of sensitive data being exposed. Ensure that APIs return only as much data as is required to complete their task. In addition, implement data filtering, data access limits, and monitoring.
  • Document and manage your APIs so you’re aware of all the existing APIs in your organization and how they are built and integrated to secure and manage them effectively.
  • Have a retirement plan for old APIs and remove or patch those that are no longer in use.
  • Invest in software specifically designed for detecting API call manipulations. Traditional solutions cannot detect the subtle probing associated with API reconnaissance and attack traffic.


IT Budgets in the Face of a Recession: How to Plan

The threat of an economic recession could impact the digitalization plans of organizations both large and small, requiring chief information officers and chief financial officers to closely coordinate investment plans.

This planning will require determining the IT funding initiatives that are “mission critical” and other projects that might be nice to have but don’t require immediate investment.

Despite the cloudy economic outlook, the relentless push to digitalization will likely continue apace. Businesses may therefore have to adjust priorities, rather than reduce spending.

“There is always uncertainty even in the best of times, so the key to IT budget planning is ranking your priorities and executing them — especially if you can beat your vendors down a bit,” says Rich Quattrocchi, vice president of digital transformation at Mutare, an enterprise communications and security provider. “If you liked the project at $500,000, you must love it at $450,000.”

He says IT is akin to investing in the stock market, so if you can invest at a discount during a downturn, pulling the trigger will pay big dividends downstream.

“Keep in mind the average length of a recession is only 13 months, and recent recessions have been shorter,” Quattrocchi adds. “The best tip is to ensure all your employees involved in IT, and other projects, think like owners and spend money as if it were their own.”

Weathering Recessions

After more than 30 years in business, Quattrocchi notes that Mutare has weathered several recessions. “We invest heavily in IT security, automation, and digital transformation, especially during downturns,” he says. “Digital transformation delivers more productivity from existing resources, diminishing the impact of headwinds by enabling our people to do more with new tools.”

He adds that the company has found that during downturns, vendors are far more eager to discount products and services, which makes it an excellent time to invest in IT infrastructure to better serve customers, employees, and constituents.

However, a great deal of critical thinking must go into determining what is mission critical vs. “nice to have”.

From Quattrocchi’s perspective, the “mission” should come top down, and then leadership needs to get out of the way and let the business unit get the job done.

“The best leaders don’t tell their people how to do the job, but what to do, and then ensure they have the tools to get it done,” he says.

He adds the KPIs need to be mutually agreed upon and be achievable and objectively measurable. “Having a regular feedback loop is essential,” Quattrocchi says. “If the KPIs are going in the wrong direction, then a course correction is required. This is where common sense, consensus and critical thinking intersect.”

Establish and Refresh Detailed IT Budget

Coinme’s CFO Chris Roling says he thinks it’s useful for an organization to establish a detailed IT budget at the beginning of each year and then review the actual/forecasted cash spent every month with the key budget holders and the senior finance team.

“We then ‘refresh’ the budgeted spending for the remainder of the year,” he explains. “Effective communication between finance and IT is critical as both parties can understand and agree on what IT investments are required/necessary versus nice to haves.”

Roling says the company evaluates every major project on its own merits and employs a “decision matrix” template whereby the project team documents the proposed project spend and highlights the strategic rationale, the impact on internal/external customers, the financial return and cash flow timing, project resource planning and implementation risk.

“We also conduct a full legal review of all proposed contract provisions, and service level agreements should the project involve third parties,” he says. “The leadership team then can ask additional questions and ‘challenge’ the spend, timing or approach.”

Finally, the group takes the final consensus decision and closely monitors the project.

“We do not anticipate cutting any budgeted IT expenses but rather may defer some significant IT project spending during the balance of the year,” he says, noting all cost center budgets, including IT, are under monthly review.

Roling explains that at his company, the entire leadership team is involved in planning and reviewing annual departmental budgets and significant IT spending.

“The IT team and finance are the primary stakeholders in agreeing and documenting the detailed monthly cost budgets and forecasts, and the communication is interactive and as frequent as required,” he says.

He points to the benefits of scheduling fixed monthly “budget reviews” in advance, along with a transparent process for reviewing actual and forecasted monthly IT spending.

“We also try to establish ‘contract owners’ when we enter into new contracts with third parties,” he explains. “These individuals are responsible for owning each IT contract and being aware of actual invoicing and monthly spending, user metrics and related pricing, escalation clauses, renewal and exit timing and terms. This allows us to manage our spends and contractual relationships proactively.”

When planning an IT budget, Quattrocchi says the stakeholders are the same irrespective of economic uncertainty; they depend on the project, mission-critical objectives, and business unit responsibilities.

Collaboration and Alignment

He says best practices should be the same concerning collaboration and alignment in both good and bad times. “Uncertainty shouldn’t change collaboration, as it can result in unintended consequences,” he says.

He points to the recent Robinhood data breach that originated from a vishing attack.

“Protecting their voice network should have been a mission critical project, yet they didn’t invest in a single technical control to filter voice traffic from bad actors,” he says.


Solving the identity crisis in cybersecurity

The evolving threat landscape is making identity protection within the enterprise a top priority. According to the 2022 CrowdStrike Global Threat Report, nearly 80% of cyberattacks leverage identity-based attacks to compromise legitimate credentials and use techniques like lateral movement to quickly evade detection. The reality is that identity-based attacks are difficult to detect, especially as the attack surface continues to increase for many organizations. 

Every business needs to authenticate every identity and authorize each request to maintain a strong security posture. It sounds simple, but the truth is this is still a pain point for many organizations. However, it doesn’t need to be.


Why identity protection must be an urgent priority for business leaders

We have seen adversaries become more adept at obtaining and abusing stolen credentials to gain a foothold in an organization. Identity has become the new perimeter, as attackers are increasingly targeting credentials to infiltrate an organization. Unfortunately, organizations continue to be compromised by identity-based attacks and lack the awareness necessary to prevent them until it’s too late.

Businesses are coming around to the fact that any user — whether it be an IT administrator, employee, remote worker, third-party vendor or customer — can be compromised and provide an attack path for adversaries. This means that organizations must authenticate every identity and authorize each request to maintain security and prevent a wide range of cyber threats, including ransomware and supply chain attacks. Otherwise, the damage is costly. According to a 2021 report, the most common initial attack vector — compromised credentials — was responsible for 20% of breaches at an average cost of $4.37 million.


How zero trust helps contain adversaries 

Identity protection cannot occur in a vacuum — it’s just one aspect of an effective security strategy and works best alongside a zero trust framework. To realize the benefits of identity protection paired with zero trust, we must first acknowledge that zero trust has become a very broad and overused term. With vendors of all shapes and sizes claiming to have zero trust solutions, there is a lot of confusion about what it is and what it isn’t. 

Zero trust requires all users, whether in or outside the organization’s network, to be authenticated, authorized and continuously validated before being granted or maintaining access to applications and data. Simply put, there is no such thing as a trusted source in a zero trust model. Just because a user is authenticated to access a certain level or area of a network does not necessarily automatically grant them access to every level and area. Each movement is monitored, and each access point and access request is analyzed. Always. This is why organizations with the strongest security defenses utilize an identity protection solution in conjunction with a zero trust framework. In fact, a 2021 survey found that 97% of identity and security professionals agree that identity is a foundational component of a zero trust security model.


It’s time to take identity protection seriously — here’s how 

As organizations have adopted cloud-based technologies over the past two years to enable people to work from anywhere, they have created an identity crisis that needs to be solved. This is evidenced in a 2021 report, which found that a staggering 61% of breaches in the first half of 2021 involved credential data.

A comprehensive identity protection solution should deliver a host of benefits and enhanced capabilities to the organization. This includes the ability to:

  • Stop modern attacks like ransomware or supply chain attacks
  • Pass red team/audit testing
  • Improve the visibility of credentials in a hybrid environment (including identities, privileged users and service accounts)
  • Enhance lateral movement detection and defense
  • Extend multi-factor authentication (MFA) to legacy and unmanaged systems
  • Strengthen the security of privileged users 
  • Protect identities from account takeover
  • Detect attack tools



Planning for post-quantum cryptography: Impact, challenges and next steps

Symmetric vs. asymmetric cryptography

Encryption algorithms can be classified into one of two categories based on their use of encryption keys. Symmetric encryption algorithms use the same secret key for both encryption and decryption. Asymmetric or public-key encryption algorithms use a pair of related keys. Public keys are used for encryption and digital signature validation, while private keys are used for decryption and digital signature generation.

Different types of encryption algorithms have different benefits and downsides. For example, symmetric encryption algorithms are often more efficient, making them well-suited to bulk data encryption. However, they require the secret key to be shared between the sender and recipient over a secure channel before message encryption/decryption can be performed.

Asymmetric cryptography is less efficient but does not have this requirement. Encryption is performed using public keys, which, as their name suggests, are designed to be public. As a result, asymmetric algorithms are often used to create a secret channel over which a shared symmetric key is established for bulk data encryption.
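The hybrid pattern described above can be made concrete with a deliberately tiny, insecure toy. The sketch below is purely illustrative (real systems use key sizes of thousands of bits and a cipher like AES, not XOR): an RSA-style asymmetric exchange protects a small shared key, which then drives a fast symmetric cipher for the bulk data.

```python
# Toy illustration (NOT secure) of hybrid encryption: asymmetric RSA math
# shares a symmetric key; the symmetric cipher then encrypts the bulk data.

# Tiny RSA keypair: n = p*q, e public exponent, d private exponent.
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse; Python 3.8+ syntax

secret_key = 42                          # the symmetric key to share
encrypted_key = pow(secret_key, e, n)    # sender: encrypt with public key
recovered_key = pow(encrypted_key, d, n) # recipient: decrypt with private key
assert recovered_key == secret_key

def xor_cipher(data: bytes, key: int) -> bytes:
    """Stand-in for a real symmetric cipher such as AES."""
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"bulk message data", recovered_key)
assert xor_cipher(ciphertext, recovered_key) == b"bulk message data"
```

The design point survives the toy scale: the slow public-key operation runs once per session to move a small key, while the fast symmetric cipher handles arbitrarily large payloads.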

Asymmetric cryptography and “hard” problems

Asymmetric encryption algorithms are built on a mathematically “hard” problem: a function where performing an operation is far easier than undoing it. For example, a commonly used “hard” problem in asymmetric cryptography is the factoring problem. Multiplying two large prime numbers together is relatively “easy,” with polynomial complexity. In contrast, factoring the result of this multiplication is “hard”: no known classical algorithm does it in polynomial time.

This difference in complexity makes it possible to develop cryptographic algorithms that are both usable and secure. Public-key encryption algorithms are designed so that legitimate users only perform “easy” operations, while an attacker must perform “hard” ones. The asymmetry between these operations makes it possible to choose key lengths for which the “easy” operations are practical, while the “hard” operations are infeasible on modern computers.
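The easy/hard asymmetry above is simple to demonstrate at a small scale. In this sketch (illustrative primes, far below real key sizes) the multiplication is one machine operation, while the naive inverse must search up to the square root of n; at real key lengths of hundreds of digits that search, and every known classical improvement on it, becomes infeasible.

```python
import math

# The "easy" direction: multiplying two primes is immediate.
p, q = 10007, 10009
n = p * q

# The "hard" direction, naively: trial division must search up to sqrt(n).
# At real key sizes this search space is astronomically large.
def factor(n: int) -> tuple:
    """Return the smallest nontrivial factor pair of n."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("n is prime")

assert factor(n) == (p, q)
```

Doubling the length of the primes roughly squares the work for this naive search, which is the intuition behind choosing key lengths that keep the attacker's side infeasible.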

Impacts of quantum computing on asymmetric cryptography

The security of public-key cryptography depends on the “hardness” of these underlying problems. If the “hard” problem (factoring, logarithms, etc.) can be solved with polynomial complexity, then the security of the algorithm is broken. Even if the complexity of breaking cryptography is hundreds, thousands, etc., times more difficult than using it, an attacker with sufficient resources and incentives (nation-states, etc.) could perform the attack.

Quantum computing poses a threat to asymmetric cryptography due to the existence of Shor’s algorithm. On a sufficiently large quantum computer, Shor’s algorithm has the ability to solve the factoring problem in polynomial time, breaking the security of asymmetric cryptography.


5 Critical Considerations in Building a Zero Trust Architecture

Zero Trust is everywhere. It’s covered in industry trade publications and events, it’s a topic of conversation at board meetings, and it’s on the minds of CISOs, CIOs and even the President.

What is Zero Trust, and why is it important?

Zero Trust isn’t a cybersecurity solution in and of itself. However, implementing a Zero Trust architecture will help mitigate and ultimately lower the number of successful cybersecurity attacks your organization might otherwise endure, greatly reducing operational and financial risk.

What is Zero Trust?

A Zero Trust security model, simply put, is the idea that anything inside or outside an organization’s networks should never implicitly be trusted. It dictates that users, their devices, the network’s components, and in fact any and every packet that holds a stated identity, should continuously be monitored and verified before anyone or anything is allowed to access the organization’s environment – especially its most critical assets.

This concept is the exact opposite of the old “trust everything if it’s in my zone” model that many IT models operated under in years past. Today, Zero Trust takes a “trust nothing unless it can be verified in multiple ways” approach to security.

How do you build a Zero Trust architecture?

If you’re considering implementing a Zero Trust model in your organization and want to better understand how to get started, John Kindervag, the creator of Zero Trust, outlines these five practical steps.

Step 1: Define your protect surfaces.

Most organizations understand the concept of the attack surface, which includes every potential point of entry a malicious actor might try to access in an attempt to compromise an organization.

Protect surfaces are different. They encompass the data, physical equipment, networks, applications and other crucial assets your organization wants to deliberately protect, given how important they are to the business.

Why take the protect surface approach instead of looking at the entire attack surface? Kindervag puts it simply: “Protect surface becomes a problem that’s solvable, versus a problem, like the attack surface, that’s actually unsolvable. How could you ever solve a problem as big as the internet itself?”

It’s essential to first identify the assets within your environment that require protection. Where does the most sensitive data reside? What operational technology is most critical to your plant and production processes? Make a list of the assets you absolutely must protect from a security and access management standpoint, and prioritize them.

Step 2: Map the transaction flows

Once you’ve identified your protect surfaces, you can start to map their transaction flows.

This includes examining all the ways in which various users have access to those assets and how each protect surface interacts with all other systems in your environment. For example, a user might be able to access terminal services only if multi-factor authentication (MFA) is implemented and verified, the user is logging on at an expected time and from the expected place and doing an expected task.

With your protect surfaces identified, prioritized and transaction flows mapped, you’re now ready to begin architecting a Zero Trust environment. Start with the highest priority protect surface and when completed, move to the next. Each protect surface with a Zero Trust architecture implemented is a high-quality step toward stronger cyber resiliency and lowered risk.

Step 3: Architect a Zero Trust environment

Keep in mind: no single product delivers a complete Zero Trust architecture. Zero Trust environments take advantage of multiple cybersecurity tools, ranging from access controls like MFA and identity and access management (IAM), to technology that protects sensitive data through processes like encryption or tokenization.

Beyond a toolbox of security technologies, every Zero Trust architecture essentially starts with creating smart, detailed segmentation and firewall policies. It takes those policies and then creates multiple variations based on attributes like the individual requesting access, the device they’re using, the type of network connection, the time of day they’re making the request and more – step by step, building a secure perimeter around each protect surface.
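The attribute-based policies described above can be sketched as a default-deny decision function. This is a hypothetical simplification (the attribute names and the "corp-vpn" network label are invented for illustration), not a real policy engine: every request is evaluated against user, device, network, and time-of-day attributes, and nothing passes unless every check does.

```python
# Hypothetical Zero Trust access check: default deny, grant only when
# every attribute of the request passes verification.

from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool     # has the requester completed MFA?
    device_managed: bool   # is the device known and managed?
    network: str           # e.g. "corp-vpn" or "public" (assumed labels)
    request_time: time

def allow(req: AccessRequest) -> bool:
    """Deny unless every attribute check passes."""
    return (
        req.mfa_verified
        and req.device_managed
        and req.network == "corp-vpn"
        and time(7, 0) <= req.request_time <= time(19, 0)  # expected hours
    )

req = AccessRequest("ada", True, True, "corp-vpn", time(9, 30))
print(allow(req))  # granted only because all four checks pass
```

The structure, a conjunction of independent checks with deny as the default, is the point: removing any single verification silently widens the perimeter, which is why real policy engines make each attribute explicit.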

Step 4: Create a Zero Trust policy

This step focuses on creating the policies that govern activities and expectations related to things like access controls and firewall rules.

Think beyond posting those new policies to your organization’s intranet, too. Consider educational programs that you may need to implement throughout the organization to promote strong security practices among your employees, vendors and consultants. Frequent cyber-awareness training has moved into the mainstream, becoming a necessity that will help reduce risk.

Step 5: Monitor and maintain the network

The final step in Kindervag’s process focuses on verifying that your Zero Trust environment and the policies governing it are working the way you intended, identifying gaps or areas for improvement and course-correcting as necessary.


Building a risk management program

In today’s world, it’s important for every organization to have some form of vulnerability assessment and risk management program. While this can seem daunting, by focusing on some key concepts it’s possible for an organization of any size to develop a strong security posture with a firm grasp of its risk profile. In this article, we’ll discuss how to build the technical foundation for a comprehensive security program and, crucially, the tools and processes necessary to develop that foundation into a mature vulnerability assessment and risk management program.


Build the Foundation

It’s impossible to implement effective security, let alone manage risk, without a clear understanding of the environment. That means, essentially, taking an inventory of hosts, applications, resources, and users.

With the current computing environment, that inventory is apt to include assets that reside in the cloud as well as those hosted in an organization’s own data center. Organizations have little control over the devices of remote employees who access data on a bring-your-own-device (BYOD) basis, adding another layer of risk. There are also the software-as-a-service (SaaS) applications the organization uses. It’s essential to know what data is kept where. With SaaS in particular, teams must have a clear understanding of who is responsible for the security of the data in contractual terms, so as to allocate resources accordingly.


Manage the puzzle

Once the environment is scoped, managing it relies on three main components: visibility, control, and timely maintenance. 

Whether it is software vulnerabilities, vulnerable configurations, obsolete packages, or a range of other issues, a vulnerability scanner will show the security operations team what’s at risk and let them prioritize their response. That said, scanners, external or internal, are not the only option. At the high end, a penetration testing team can probe the environment to a level that vulnerability scanners can’t match. At the low end, establishing a process to monitor public vulnerability feeds and verify whether newly disclosed issues affect the environment can provide a baseline. It may not give as deep a picture as scanning or penetration testing, but the modest cost in SecOps time is often well worth it.
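That low-end baseline of monitoring public vulnerability feeds can be sketched as a simple cross-check of advisories against the package inventory. The advisory format below is deliberately simplified and hypothetical; real feeds, such as the NVD's, carry much richer version-range and severity data.

```python
# Minimal sketch: flag public advisories that match the package inventory.
# Advisory structure is simplified/hypothetical for illustration.

installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "redis": "6.2.1"}

advisories = [
    {"id": "CVE-2023-0001", "package": "openssl", "affected": ["1.1.1k"]},
    {"id": "CVE-2023-0002", "package": "postgres", "affected": ["12.1"]},
]

def affected_advisories(installed: dict, advisories: list) -> list:
    """Return IDs of advisories whose package and version match inventory."""
    return [
        adv["id"]
        for adv in advisories
        if installed.get(adv["package"]) in adv["affected"]
    ]

print(affected_advisories(installed, advisories))
# only the openssl advisory matches; postgres is not installed
```

Even this crude exact-version match turns an undifferentiated stream of advisories into a short list the SecOps team can actually verify, which is the baseline the process is meant to provide.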

Protecting users is a major point that doesn’t always get the attention it deserves. Ultimately, that starts with user education and establishing a culture that reinforces a secure environment. Users are often the threat surface that presents the greatest risk, but with proper education and attitude they can become an effective layer of a defense-in-depth strategy.

Another important step to protecting users is adding multi-factor authentication (MFA). In particular, MFA methods that require a physical or virtual token tend to be more secure than those that rely on text messaging or email. While MFA does add a minor annoyance to a user’s login, it can drastically reduce the threat posed by compromised accounts and reduce the organization’s overall risk profile.

User endpoints are another area of concern. While the default endpoint protection included in the main desktop operating systems (Windows and macOS) is quite effective, it is also the defense every malware writer in the world tests against. That makes investment in an additional layer of endpoint protection worthwhile.

The last major piece here is a patch management program. This requires base processes that manage not only the patch process but also the assets themselves. Fortunately, multiple tools are available that can enhance and automate the process, and a regular patch cycle can have vulnerabilities fixed before exploits are even developed for them.

Ideally, the patch management process includes a change management system that’s able to smoothly accommodate emergency situations where a security hotfix must go in outside the normal window.

Pulling it all together

With the foundation laid, the final step involves communication. Simply assessing risk is not useful if there is no reliable way to organize people to act on it.

Bridging the information security teams, who are responsible for recognizing, analyzing, and mitigating threats to the organization, and the information technology teams, who are responsible for maintaining the organization’s infrastructure, is vital. Whether an organization achieves this with a process or a tool is up to them. But in either case, communication is vital, along with an ability to react across teams. This applies to non-technical teams as well — if folks are receiving phishing emails, security operations should know. 

These mechanisms need to be in place from the executive offices down to the sales or production floor, as reducing risk really is everyone’s responsibility. Moreover, the asset and patch management system needs a mechanism to prioritize patches based on business risk. Unless the IT team has the resources to deploy every single patch that comes their way, they will have to prioritize, and that prioritization needs to be based on the threat to business rather than arbitrary severity scores.
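Risk-based patch prioritization, as described above, can be sketched by weighting severity with the business impact of the affected asset. The asset names and weights below are invented for illustration; real programs would derive the weights from a business impact analysis.

```python
# Hypothetical sketch: rank pending patches by business risk, not by raw
# severity score alone. Asset names and impact weights are assumed.

patches = [
    {"id": "P-101", "asset": "payment-gateway", "severity": 6.5},
    {"id": "P-102", "asset": "dev-wiki",        "severity": 9.8},
]

# Business impact weights from a (hypothetical) business impact analysis.
business_impact = {"payment-gateway": 10, "dev-wiki": 2}

def priority(patch: dict) -> float:
    """Business risk = severity weighted by the asset's business impact."""
    return patch["severity"] * business_impact.get(patch["asset"], 1)

for patch in sorted(patches, key=priority, reverse=True):
    print(patch["id"], priority(patch))
# the payment-gateway patch outranks the dev-wiki one despite its
# lower raw severity score
```

The inversion in the example is the point: a critical-severity finding on a low-value wiki can rank below a medium-severity finding on the payment path, which is exactly what a threat-to-business ordering should produce.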

An Investment

There is no “one size fits all” solution for risk assessment and management. For example, for a restaurant that doesn’t accept reservations or orders online, a relatively insecure website doesn’t present much business risk. While it may be technically vulnerable, they are not at risk of losing valuable data.


Building Data Literacy: What CDOs Need to Know

Data literacy is the ability to read, work with, analyze, and communicate with data.

As businesses have become increasingly digital, all business functions are generating valuable data that can guide their decisions and optimize their performance.

Employees now have data available to augment their experience and intuition with analytical insights. Leading organizations are using this data to answer their every question — including questions they didn’t know they had.

The chief data officer’s (CDO) role in data literacy and ensuring that data literacy efforts are successful is to be the chief evangelist and educator to the organization.

Standardizing basic data training across the organization and creating a center of excellence for self-service in all departments can help ensure everyone can benefit from data literacy.

“As the leader of data and analytics, CDOs can no longer afford to work exclusively with data scientists in siloed environments,” explains Paul Barth, Qlik’s global head of data literacy. “They must now work to promote a culture of data literacy in which every employee is able to use data to the benefit of their role and of their employer.”

Cultural Mindset on Data

This culture starts with a change in mindset: It’s imperative that every employee, from new hires fresh out of college all the way to the C-suite, can understand the value of data.

At the top, CDOs can make the strongest case for improving data literacy by highlighting the benefits of becoming a data-driven organization.

For example, McKinsey found that, among high-performing businesses, data and analytics initiatives contributed at least 20% to earnings before interest and taxes (EBIT), and according to Gartner, enterprises will fail to identify potential business opportunities without data-literate employees across the organization.

Abe Gong, CEO and co-founder of Superconductive, adds that for an organization to be data literate, there needs to be a critical mass of data-literate people on the team.

“A CDO’s role is to build a nervous system with the right process and technical infrastructure to support a shared understanding of data and its impact across the organization,” he says. “They promote data literacy at the individual level as well as building that organizational nervous system of policies, processes, and tools.”

Data Literacy: Start with Specific Examples

From his perspective, the way to build data literacy is not by doing some giant end-to-end system or a massive overhaul, but rather by coming up with specific discrete examples that really work.

“I think you start small with doable challenges and a small number of stakeholders on short timelines,” he says. “You get those to work, then iterate and add complexity.”

From his perspective, data-literate organizations simply think better together and can draw conclusions and respond to new information in a way that they couldn’t if they didn’t understand how data works.

“As businesses prepare for the future of work and the advancements that automation will bring, they need employees who are capable of leading with data, not guesswork,” Barth notes. “When the C-suite understands this, they will be eager to make data literacy a top priority.”

He says CDOs need to take the lead and properly educate staff about why they should appreciate, pay attention to and work with data.

“Data literacy training can greatly help in this regard and can be used to highlight the various tools and technologies employees need to ensure they can make the most of their data,” he adds.

As CDOs work to break down the data barriers and limitations that are present in so many firms, they can empower more employees with the necessary skills to advance their organization’s data strategy.

“And as employees become more data literate, they will be better positioned to help their employers accelerate future growth,” Barth says.

Formalizing Data Initiative and Strategies

Data literacy should start with a formal conversation between people charged with leading data initiatives and strategies within the organization.

The CDO or another data leader should craft a thoughtful communication plan that explains why the team needs to become data literate and why a data literacy program is being put into place.

“While surveys suggest few are confident in their data literacy skills, I would advise against relying on preconceptions or assumptions about team members’ comfort in working with data,” Barth says. “There are a variety of free assessment tools in the market, such as The Data Literacy Project, to jumpstart this process.”

However, training is only the beginning of what businesses need to build a data literate culture: Every decision should be supported with data and analysis, and leaders should be prepared to model data-driven decision-making in meetings and communications.

“The only playbook that I have seen work for an incoming CDO is to do a fast assessment of where the opportunities are and then look for ways to create immediate value,” Gong adds. “If you can create some quick but meaningful wins, you can earn the trust you need to do deeper work.”

For opportunities, CDOs should be looking for places the organization can make better use of its data on a short timeline — it’s usually weeks, not months.

“Once you’ve built a library of those wins and trust in your leadership, you can have a conversation about infrastructure — both technical and cultural,” he says. “Data literacy is part of the cultural infrastructure you need.”