How does encryption work? Examples and video walkthrough

Cryptography — the practice of taking a message or data and camouflaging its appearance in order to share it securely

What is cryptography used for?

It’s the stuff of spy stories, secret decoder rings and everyday business — taking data, converting it to another form for security, sharing it, and then converting it back so it’s usable. Infosec Skills author Mike Meyers provides an easy-to-understand walkthrough of cryptography.

Cryptography types and examples

Cryptography is the science of hiding data in some way so that other people can’t read it and then bringing the data back. It’s not confined to espionage but is really a part of everyday life in digital filesharing, business transactions, texts and phone calls.

What is cryptography?

Simply put, cryptography is taking some kind of information, converting it to a confidential form (encrypting) so it can be shared with intended partners, and then returning it to its original form (decrypting) so that the intended audience can use that information.

What are obfuscation, diffusion and confusion?

(00:40) Obfuscation is taking something that looks like it makes sense and hiding it so that it does not make sense to the casual outside observer.

(00:56) One of the things we can do to obfuscate a message or image is diffusion, where we take an image and make it fuzzier, so the details are lost or blurred. Diffusion only allows us to make it less visible, less obvious.

(01:26) We can also use confusion, where we take that image, stir it up and make a mess out of it like a Picasso painting so that it would be difficult for somebody to simply observe the image and understand what it represents.

How a Caesar cipher works

(02:10) Cryptography has been around for a long, long time. In fact, probably one of the oldest types of cryptography that has ever been around is something called the Caesar cipher. If you’ve ever had or seen a “secret decoder ring” when you were young, you know how a Caesar cipher works.

Encrypting using a Caesar cipher

(02:40) I’ve made my own decoder ring right here. It’s basically a wheel with all the letters of the alphabet, A through Z, on the inside, and all of the letters of the alphabet, A through Z, on the outside. To start, you line them up: A to A, B to B, C to C.

(02:59) To make a secret code, you can rotate the inside wheel to change the letters from our original, plain text on the outside wheel. We call this substitution. We’re taking one value and substituting it for another. (03:20) Rotating the wheel two times is called ROT two; turning it three times would be ROT three. (03:37) So we can take, like the word ACE, A-C-E, and I can change ACE to CEG. Get the idea? So that’s the cornerstone of the Caesar cipher.

(04:00) As an example, our piece of plain text that we want to encrypt is, “We attack at dawn.” The first thing we’re going to do is get rid of all the spaces, so now it just says “weattackatdawn.” We’ll rotate our wheel five times — it’s ROT five. And now the encrypted “weattackatdawn” is “bjfyyfhpfyifbs.” (04:44) So we now have generated a classic Caesar cipher.
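
To make the wheel concrete, here is a minimal Python sketch of that ROT-5 substitution (lowercase only, spaces already stripped; the helper function is illustrative, not from the video):

```python
# Caesar cipher: substitute each letter with the one `rot` positions later,
# wrapping around at z. ROT 5 turns "weattackatdawn" into "bjfyyfhpfyifbs".
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def caesar(text: str, rot: int) -> str:
    return "".join(ALPHABET[(ALPHABET.index(c) + rot) % 26] for c in text)

cipher = caesar("weattackatdawn", 5)
print(cipher)              # bjfyyfhpfyifbs
print(caesar(cipher, -5))  # weattackatdawn -- rotate back to decrypt
```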

(04:49) Now there’s a problem with Caesar ciphers. Even though it is a substitution cipher, it’s too easy to predict what the text is because we’re used to looking at words.

How a Vigenere cipher works

(05:32) To make things more challenging, we can use a Vigenere cipher, which is really just a Caesar cipher with a little bit of extra confusion involved. For illustrative purposes, the Vigenere cipher can be drawn as a table that shows all the possible Caesar ciphers. At the top, on line 0, is the alphabet from A to Z. Down the far left-hand side are the numbers zero through 25: all the possible ROT values you can have, from ROT zero, which means A equals A, B equals B, all the way down to ROT 25.

Encrypting using a Vigenere cipher and key

(06:17) Let’s start with a piece of plain text. Let’s use “we attack at dawn” one more time. This time, we’re going to apply a key. The key is simply a word that’s going to help us do this encryption. In this particular case, I’m going to use the word face, F-A-C-E.

(06:34) I’m going to put F-A-C-E above the first four letters of “we attack at dawn,” and then I’m going to just keep repeating that. And what we’ve done is we have applied a key to our plain text.

(06:58) Now we’re going to use the key to change the Caesar cipher ROT value for every single letter. So the first letter of the plain text is the W in “we” up at the top, and the key value is F, so let’s go down on the Y-axis until we get to an F. Next to that F, you’ll see the number five. So this is ROT five.

(07:31) So all I need to do is find the intersection of these, and we get the letter B.

(07:39) The second letter in our plain text is the letter E from “we,” and in this particular case, the key value is A, which is kind of interesting, because that’s ROT zero, but that still works. So we start up at the top, find the letter E, then we find the A, and in this case, because it’s ROT zero, E is going to stay as E.

(08:00) Now, this time, it’s the A in attack. So we go up to the top. There’s the letter A, and the key value is C, as in Charlie. So we go down to the C that’s ROT two, and we then see that the letter A is now going to be C.

(08:19) Now, do the first T in attack. We come over to the Ts, and now the key value is E, as in FACE. So we go down here, that’s ROT four, we do the intersection, and now we’ve got an X. So the first four letters of our encrypted code are B, E, C, X.
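
Here is the same table lookup as a minimal Python sketch, where each key letter sets the ROT value for its position (again an illustrative helper, not from the video):

```python
# Vigenere cipher: the key letter picks the ROT value for each position
# (F->5, A->0, C->2, E->4), then a per-letter Caesar shift is applied.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out = []
    for i, c in enumerate(text):
        rot = ALPHABET.index(key[i % len(key)])
        if decrypt:
            rot = -rot
        out.append(ALPHABET[(ALPHABET.index(c) + rot) % 26])
    return "".join(out)

cipher = vigenere("weattackatdawn", "face")
print(cipher[:4])                              # becx -- matches the walkthrough
print(vigenere(cipher, "face", decrypt=True))  # weattackatdawn
```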

Understanding algorithms and keys

(08:52) The beauty of the Vigenere is that it actually gives us all the pieces we need to create a classic piece of cryptography. We have an algorithm. The algorithm is the different types of Caesar ciphers and the rotations. And second, we have a key that allows us to make any type of changes we want within ROT zero to ROT 25 to be able to encrypt our values.

In today’s world, every algorithm uses a key. So when we’re talking about cryptography today, we’re always going to be talking about algorithms and keys.

(09:31) The problem with the Vigenere is that it’s surprisingly crackable. It works great for letters of the alphabet, but it’s terrible for encrypting pictures or SQL databases or your credit card information.

(09:53) In the computer world, everything is binary. Everything is ones and zeros. We need to come up with algorithms that encrypt and decrypt long strings of just ones and zeros.

(10:11) While long strings of ones and zeros may look like nothing to a human being, computers recognize them. They could be a Microsoft Word document, a voice-over-IP conversation, or a database stored on a hard drive.

How to encrypt binary data

(10:37) We need to come up with algorithms which, unlike Caesars or Vigeneres, will work with binary data.

(10:45) There are a lot of different ways to do this. We can do this using a very interesting type of binary calculation called “exclusive OR.”

(11:08) For our first encryption, I’m going to encrypt my name, and we have to convert this to the binary equivalents of the text values that a computer would use. Anybody who’s ever looked at ASCII code or Unicode should be aware that we can convert these into binary.

Exclusive OR (XOR) encryption example

(11:38) So here’s M-I-K-E converted into binary. Now notice that each character takes eight binary digits. So we’ve got 32 bits of data that we need to encrypt. So that’s our clear text. Now, in order to do this, we’re going to need two things.

(11:58) First, we need an algorithm and then we’re going to need a key.

(12:09) Now our algorithm is extremely simple, using what we call an exclusive OR, which is defined by a truth table. For this illustration, we’ll choose a five-bit key. In the real world, keys can be thousands of bits long.

(12:41) So, to make this work, let’s start placing the key. I’m going to put the key over the first five bits, here at the letter M for Mike, and now we can look at this table, and we can start doing the conversion. So let’s convert those first two values, then the next, then the next, then the next.

(12:58) Now, we’ve converted a whole key’s worth, but in order to keep going, all we have to do is schlep that key right back up there and extend the key all the way out and just keep repeating it to the end. It doesn’t quite line up, so we add whatever amount of key is needed to fill up the rest of this line.

(13:28) Using the Exclusive OR algorithm, we then create our cipher text.

(13:44) Notice that we have an algorithm that is extremely simplistic. We have a key, which is very, very simple and short, but we now have an absolutely perfect example of binary encryption.

(13:58) To decrypt this, we’d simply reverse the process. We would take the cipher text, place the key up to it, and then basically run the algorithm backward. And then we would have the decrypted data.
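
As a sketch of the process just described (the transcript doesn’t restate the key bits, so the five-bit key below is an assumption for illustration):

```python
# XOR encryption: the truth table is 0^0=0, 0^1=1, 1^0=1, 1^1=0, so XORing
# with the same key twice returns the original bits -- encrypt and decrypt
# are the same operation.
KEY = "10110"  # hypothetical 5-bit key, chosen only for this illustration

def xor_bits(bits: str, key: str) -> str:
    # Repeat the key across the message and truncate the leftover.
    stretched = (key * (len(bits) // len(key) + 1))[:len(bits)]
    return "".join(str(int(b) ^ int(k)) for b, k in zip(bits, stretched))

plain = "".join(f"{ord(c):08b}" for c in "MIKE")  # 32 bits of clear text
cipher = xor_bits(plain, KEY)
print(cipher)
assert xor_bits(cipher, KEY) == plain             # XOR again to decrypt
```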

What is Kerckhoffs’s principle?

Having the algorithm and a key makes cryptography successful. But which is more important, the algorithm or the key?
(14:30) In the 19th century, Dutch-born cryptographer Auguste Kerckhoffs said a system should be secure, even if everything about the system, except the key, is public knowledge. This is really important. Today’s super-encryption tools that we use to protect you on the internet are all open standards. Everybody knows how the algorithms work. […]

 

API Security 101: The Ultimate Guide

APIs, application programming interfaces, are driving forces in modern application development because they enable applications and services to communicate with each other. APIs provide a variety of functions that enable developers to more easily build applications that can share and extract data.

Companies are rapidly adopting APIs to improve platform integration, connectivity, and efficiency and to enable digital innovation projects. Research shows that the average number of APIs per company increased by 221% in 2021.

Unfortunately, over the last few years, API attacks have increased massively, and security concerns continue to impede innovations.

What’s worse, according to Gartner, API attacks will keep growing. They’ve already emerged as the most common type of attack in 2022. Therefore, it’s important to adopt security measures that will keep your APIs safe.

What is an API attack?

An API attack is malicious usage or manipulation of an API. In API attacks, cybercriminals look for business logic gaps they can exploit to access personal data, take over accounts, or perform other fraudulent activities.

What is API security and why is it important?

API security is a set of strategies and procedures aimed at protecting an organization against API vulnerabilities and attacks.

APIs process and transfer sensitive data and other critical organizational assets. They are now a primary target for attackers, hence the recent increase in the number of API attacks.

That’s why an effective API security strategy is a critical part of the application development lifecycle. It is the only way organizations running APIs can ensure those data conduits are secure and trustworthy.

A secure API improves the integrity of data by ensuring the content is not tampered with and is available only to users, applications, and servers that have proper authentication and authorization to access it. API security techniques also help mitigate API vulnerabilities that attackers can exploit.

When is the API vulnerable?

Your API is vulnerable if:

  • The API host’s purpose is unclear, and you can’t tell which version is running, what data is collected and processed, or who should have access (for example, the general public, internal employees, and partners).
  • There is no documentation, or the documentation that exists is outdated.
  • Older API versions are still in use, and they haven’t been patched.
  • Integrated services inventory is either missing or outdated.
  • The API contains a business logic flaw that lets bad actors access accounts or data they shouldn’t be able to reach.

What are some common API attacks?

API attacks are extremely different from other cyberattacks and are harder to spot, which is why you need to understand the most common API attacks, how they work and how to prevent them.

BOLA attack

BOLA, or broken object level authorization, is the most common form of API attack. It happens when a bad actor changes parameters across a sequence of API calls to request data that person is not authorized to have. For example, nefarious users might authenticate using one UserID and then enumerate UserIDs in subsequent API calls to pull back account information they’re not entitled to access.
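
To make the flaw concrete, here is a minimal Flask sketch (with hypothetical routes, data and a stand-in for authentication middleware) contrasting an endpoint that only authenticates with one that also authorizes at the object level:

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Hypothetical in-memory store, for illustration only.
ACCOUNTS = {"101": {"owner": "alice", "balance": 500},
            "102": {"owner": "bob", "balance": 900}}

@app.before_request
def fake_auth():
    # Stand-in for real authentication middleware: assume the caller
    # authenticated as "alice".
    g.current_user = "alice"

# Vulnerable: any authenticated caller can enumerate account IDs (BOLA).
@app.route("/api/v1/accounts/<account_id>")
def get_account(account_id):
    account = ACCOUNTS.get(account_id) or abort(404)
    return jsonify(account)  # no check that the caller owns this object

# Fixed: object-level authorization on every request.
@app.route("/api/v2/accounts/<account_id>")
def get_account_secure(account_id):
    account = ACCOUNTS.get(account_id) or abort(404)
    if account["owner"] != g.current_user:
        abort(403)  # authenticated, but not authorized for this object
    return jsonify(account)
```

Under the vulnerable route, alice can fetch /api/v1/accounts/102 and read bob’s data; the fixed route returns 403 for the same request.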

Preventive measures:

Look for API tracking that can retain information over time about what different users in the system are doing. BOLA attacks can be very “low and slow,” drawn out over days or weeks, so you need API tracking that can store large amounts of data and apply AI to detect attack patterns in near real time.

Improper assets management attack

This type of attack happens if there are undocumented APIs running (“shadow APIs”) or older APIs that were developed, used, and then forgotten without being removed or replaced with newer, more secure versions (“zombie APIs”). Undocumented APIs present a risk because they’re running outside the processes and tooling meant to manage APIs, such as API gateways. You can’t protect what you don’t know about, so you need your inventory to be complete, even when developers have left something undocumented. Older APIs are unpatched and often use older libraries. They are also undocumented and can remain undetected for a long time.

Preventive measures:

Set up a proper inventory management system that includes all the API endpoints, their versions, uses, and the environment and networks they are reachable on.

Always check to ensure that the API needs to be in production in the first place, that it isn’t an outdated version, that no sensitive data is exposed and that data flows as expected throughout the application.

Insufficient logging & monitoring

API logs contain personal information that attackers can exploit. Logging and monitoring functions provide security teams with raw data to establish the usual user behavior patterns. When an attack happens, the threat can be easily detected by identifying unusual patterns.

Insufficient monitoring and logging results in untraceable user behavior patterns, thereby allowing threat actors to compromise the system and stay undetected for a long time.

Preventive measures:

Always have a consistent logging and monitoring plan so you have enough data to use as a baseline for normal behavior. That way you can quickly detect attacks and respond to incidents in real time. Also, ensure that any data that goes into the logs is monitored and sanitized.
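
As a small illustration of that last point, a sketch that redacts credential-bearing parameters before they reach the logs (the parameter names and pattern are illustrative assumptions):

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

# Redact anything that looks like a credential-bearing parameter.
SENSITIVE = re.compile(r"(?i)(authorization|password|token)=([^&\s]+)")

def sanitize(message: str) -> str:
    return SENSITIVE.sub(r"\1=[REDACTED]", message)

log.info(sanitize("GET /login?user=alice&password=hunter2"))
# INFO:api:GET /login?user=alice&password=[REDACTED]
```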

What are API security best practices?

Here’s a list of API best practices to help you improve your API security strategy:

  • Train employees and security teams on the nature of API attacks. Lack of knowledge and expertise is the biggest obstacle in API security. Your security team needs to understand how cybercriminals propagate API attacks and different call/response pairs so they can better harden APIs. Use the OWASP API Top 10 list as a starting point for your education efforts.
  • Adopt an effective API security strategy throughout the lifecycle of the APIs.
  • Turn on logging and monitoring and use the data to detect patterns of malicious activities and stop them in real-time.
  • Reduce the risk of sensitive data being exposed. Ensure that APIs return only as much data as is required to complete their task. In addition, implement data filtering, data access limits, and monitoring.
  • Document and manage your APIs so you’re aware of all the existing APIs in your organization and how they are built and integrated to secure and manage them effectively.
  • Have a retirement plan for old APIs and remove or patch those that are no longer in use.
  • Invest in software specifically designed for detecting API call manipulations. Traditional solutions cannot detect the subtle probing associated with API reconnaissance and attack traffic. […]

 

IT Budgets in the Face of a Recession: How to Plan

The threat of an economic recession could impact the digitalization plans of organizations both large and small, requiring chief information officers and chief financial officers to closely coordinate investment plans.

This planning will require determining the IT funding initiatives that are “mission critical” and other projects that might be nice to have but don’t require immediate investment.

Despite the cloudy economic outlook, the relentless push to digitalization will likely continue apace. Businesses may therefore have to adjust priorities, rather than reduce spending.

“There is always uncertainty even in the best of times, so the key to IT budget planning is ranking your priorities and executing them — especially if you can beat your vendors down a bit,” says Rich Quattrocchi, vice president of digital transformation at Mutare, an enterprise communications and security provider. “If you liked the project at $500,000, you must love it at $450,000.”

He says IT is akin to investing in the stock market, so if you can invest at a discount during a downturn, pulling the trigger will pay big dividends downstream.

“Keep in mind the average length of a recession is only 13 months, and recent recessions have been shorter,” Quattrocchi adds. “The best tip is to ensure all your employees involved in IT, and other projects, think like owners and spend money as if it were their own.”

Weathering Recessions

After more than 30 years in business, Quattrocchi notes that Mutare has weathered several recessions. “We invest heavily in IT security, automation, and digital transformation, especially during downturns,” he says. “Digital transformation delivers more productivity from existing resources, diminishing the impact of headwinds by enabling our people to do more with new tools.”

He adds that the company has found that during downturns, vendors are far more eager to discount products and services, which makes it an excellent time to invest in IT infrastructure to better serve customers, employees, and constituents.

However, a great deal of critical thinking must go into determining what is mission critical vs. “nice to have”.

From Quattrocchi’s perspective, the “mission” should come top down, and then leadership needs to get out of the way and let the business unit get the job done.

“The best leaders don’t tell their people how to do the job, but what to do, and then ensure they have the tools to get it done,” he says.

He adds the KPIs need to be mutually agreed upon and be achievable and objectively measurable. “Having a regular feedback loop is essential,” Quattrocchi says. “If the KPIs are going in the wrong direction, then a course correction is required. This is where common sense, consensus and critical thinking intersect.”

Establish and Refresh Detailed IT Budget

Coinme’s CFO Chris Roling says he thinks it’s useful for an organization to establish a detailed IT budget at the beginning of each year and then review the actual/forecasted cash spent every month with the key budget holders and the senior finance team.

“We then ‘refresh’ the budgeted spending for the remainder of the year,” he explains. “Effective communication between finance and IT is critical as both parties can understand and agree on what IT investments are required/necessary versus nice to haves.”

Roling says the company evaluates every major project on its own merits and employs a “decision matrix” template whereby the project team documents the proposed project spend and highlights the strategic rationale, the impact on internal/external customers, the financial return and cash flow timing, project resource planning and implementation risk.

“We also conduct a full legal review of all proposed contract provisions, and service level agreements should the project involve third parties,” he says. “The leadership team then can ask additional questions and ‘challenge’ the spend, timing or approach.”

Finally, the group takes the final consensus decision and closely monitors the project.

“We do not anticipate cutting any budgeted IT expenses but rather may defer some significant IT project spending during the balance of the year,” he says, noting all cost center budgets, including IT, are under monthly review.

Roling explains that at his company, the entire leadership team is involved in planning and reviewing annual departmental budgets and significant IT spending.

“The IT team and finance are the primary stakeholders in agreeing and documenting the detailed monthly cost budgets and forecasts, and the communication is interactive and as frequent as required,” he says.

He points to the benefits of scheduling fixed monthly “budget reviews” in advance, along with a transparent process for reviewing actual and forecasted monthly IT spending.

“We also try to establish ‘contract owners’ when we enter into new contracts with third parties,” he explains. “These individuals are responsible for owning each IT contract and being aware of actual invoicing and monthly spending, user metrics and related pricing, escalation clauses, renewal and exit timing and terms. This allows us to manage our spends and contractual relationships proactively.”

When planning an IT budget, Quattrocchi says the stakeholders are the same irrespective of economic uncertainty; they depend on the project, mission-critical objectives, and business unit responsibilities.

Collaboration and Alignment

He says best practices should be the same concerning collaboration and alignment in both good and bad times. “Uncertainty shouldn’t change collaboration, as it can result in unintended consequences,” he says.

He points to the recent Robinhood data breach that originated from a vishing attack.

“Protecting their voice network should have been a mission critical project, yet they didn’t invest in a single technical control to filter voice traffic from bad actors,” he says. […]

 

Solving the identity crisis in cybersecurity

The evolving threat landscape is making identity protection within the enterprise a top priority. According to the 2022 CrowdStrike Global Threat Report, nearly 80% of cyberattacks leverage identity-based attacks to compromise legitimate credentials and use techniques like lateral movement to quickly evade detection. The reality is that identity-based attacks are difficult to detect, especially as the attack surface continues to increase for many organizations. 

Every business needs to authenticate every identity and authorize each request to maintain a strong security posture. It sounds simple, but the truth is this is still a pain point for many organizations. However, it doesn’t need to be.

 

Why identity protection must be an urgent priority for business leaders

We have seen adversaries become more adept at obtaining and abusing stolen credentials to gain a foothold in an organization. Identity has become the new perimeter, as attackers are increasingly targeting credentials to infiltrate an organization. Unfortunately, organizations continue to be compromised by identity-based attacks and lack the awareness necessary to prevent it until it’s too late.

Businesses are coming around to the fact that any user — whether it be an IT administrator, employee, remote worker, third-party vendor or customer — can be compromised and provide an attack path for adversaries. This means that organizations must authenticate every identity and authorize each request to maintain security and prevent a wide range of cyber threats, including ransomware and supply chain attacks. Otherwise, the damage is costly. According to a 2021 report, the most common initial attack vector — compromised credentials — was responsible for 20% of breaches at an average cost of $4.37 million.

 

How zero trust helps contain adversaries 

Identity protection cannot occur in a vacuum — it’s just one aspect of an effective security strategy and works best alongside a zero trust framework. To realize the benefits of identity protection paired with zero trust, we must first acknowledge that zero trust has become a very broad and overused term. With vendors of all shapes and sizes claiming to have zero trust solutions, there is a lot of confusion about what it is and what it isn’t. 

Zero trust requires all users, whether in or outside the organization’s network, to be authenticated, authorized and continuously validated before being granted or maintaining access to applications and data. Simply put, there is no such thing as a trusted source in a zero trust model. Just because a user is authenticated to access a certain level or area of a network does not necessarily automatically grant them access to every level and area. Each movement is monitored, and each access point and access request is analyzed. Always. This is why organizations with the strongest security defenses utilize an identity protection solution in conjunction with a zero trust framework. In fact, a 2021 survey found that 97% of identity and security professionals agree that identity is a foundational component of a zero trust security model.

 

It’s time to take identity protection seriously — here’s how 

As organizations have adopted cloud-based technologies over the past two years to enable people to work from anywhere, they’ve created an identity crisis that needs to be solved. This is evidenced in a 2021 report, which found a staggering 61% of breaches in the first half of 2021 involved credential data. 

A comprehensive identity protection solution should deliver a host of benefits and enhanced capabilities to the organization. This includes the ability to:

  • Stop modern attacks like ransomware or supply chain attacks
  • Pass red team/audit testing
  • Improve the visibility of credentials in a hybrid environment (including identities, privileged users and service accounts)
  • Enhance lateral movement detection and defense
  • Extend multi-factor authentication (MFA) to legacy and unmanaged systems
  • Strengthen the security of privileged users 
  • Protect identities from account takeover
  • Detect attack tools […]

 

 

Planning for post-quantum cryptography: Impact, challenges and next steps

Symmetric vs. asymmetric cryptography

Encryption algorithms can be classified into one of two categories based on their use of encryption keys. Symmetric encryption algorithms use the same secret key for both encryption and decryption. Asymmetric or public-key encryption algorithms use a pair of related keys. Public keys are used for encryption and digital signature validation, while private keys are used for decryption and digital signature generation.

Different types of encryption algorithms have different benefits and downsides. For example, symmetric encryption algorithms are often more efficient, making them well-suited to bulk data encryption. However, they require the secret key to be shared between the sender and recipient over a secure channel before message encryption/decryption can be performed.

Asymmetric cryptography is less efficient but does not have this requirement. Encryption is performed using public keys, which, as their name suggests, are designed to be public. As a result, asymmetric algorithms are often used to create a secret channel over which a shared symmetric key is established for bulk data encryption.
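
A minimal sketch of that hybrid pattern, using the pyca/cryptography library (an assumption, since the article names no library; key sizes and algorithm choices are illustrative):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Recipient's asymmetric key pair; the public half can be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: wrap a fresh symmetric key with the recipient's public key.
sym_key = Fernet.generate_key()
wrapped_key = public_key.encrypt(sym_key, oaep)
ciphertext = Fernet(sym_key).encrypt(b"bulk data goes here")

# Recipient: unwrap the symmetric key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'bulk data goes here'
```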

Asymmetric cryptography and “hard” problems

Asymmetric encryption algorithms are built using a mathematically “hard” problem. This is a mathematical function where performing an operation is far easier than undoing it. For example, a commonly used “hard” problem in asymmetric cryptography is the factoring problem. Multiplying two large prime numbers together is relatively “easy” with polynomial complexity. In contrast, factoring the result of this multiplication is “hard” with exponential complexity.

This difference in complexity makes it possible to develop cryptographic algorithms that are both usable and secure. Public key encryption algorithms are designed so that legitimate users only perform “easy” operations, while an attacker must perform “hard” ones. The asymmetric complexity between these operations makes it possible to choose key lengths for which performing the “easy” operation is possible, while the “hard” operations are infeasible on modern computers.
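
As a toy illustration of this asymmetry, here is a sketch using two small, known primes (real keys use primes hundreds of digits long, for which the same brute-force search becomes infeasible):

```python
import time

# The "easy" direction: one multiplication of two known primes.
p, q = 999_983, 1_000_003
n = p * q

def trial_factor(n: int):
    """The 'hard' direction: recover p and q by brute-force trial division."""
    if n % 2 == 0:
        return 2, n // 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return n, 1

start = time.perf_counter()
print(trial_factor(n))                         # (999983, 1000003)
print(f"{time.perf_counter() - start:.3f} s")  # work grows rapidly with the size of n
```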

Impacts of quantum computing on asymmetric cryptography

The security of public-key cryptography depends on the “hardness” of these underlying problems. If the “hard” problem (factoring, logarithms, etc.) can be solved with polynomial complexity, then the security of the algorithm is broken. Even if the complexity of breaking cryptography is hundreds, thousands, etc., times more difficult than using it, an attacker with sufficient resources and incentives (nation-states, etc.) could perform the attack.

Quantum computing poses a threat to asymmetric cryptography due to the existence of Shor’s algorithm. On a sufficiently large quantum computer, Shor’s algorithm has the ability to solve the factoring problem in polynomial time, breaking the security of asymmetric cryptography. […]

 

5 Critical Considerations in Building a Zero Trust Architecture

Zero Trust is everywhere. It’s covered in industry trade publications and events, it’s a topic of conversation at board meetings, and it’s on the minds of CISOs, CIOs and even the President.

What is Zero Trust, and why is it important?

Zero Trust isn’t a cybersecurity solution in and of itself. However, implementing a Zero Trust architecture will help mitigate and ultimately lower the number of successful cybersecurity attacks your organization might otherwise endure, greatly reducing operational and financial risk.

What is Zero Trust?

A Zero Trust security model, simply put, is the idea that anything inside or outside an organization’s networks should never implicitly be trusted. It dictates that users, their devices, the network’s components, and in fact any and every packet that holds a stated identity should continuously be monitored and verified before anyone or anything is allowed to access the organization’s environment – especially its most critical assets.

This concept is the exact opposite of the old “trust everything if it’s in my zone” model that many IT models operated under in years past. Today, Zero Trust takes a “trust nothing unless it can be verified in multiple ways” approach to security.

How do you build a Zero Trust architecture?

If you’re considering implementing a Zero Trust model in your organization and want to better understand how to get started, John Kindervag, the creator of Zero Trust, outlines these five practical steps.

Step 1: Define your protect surfaces.

Most organizations understand the concept of the attack surface, which includes every potential point of entry a malicious actor might try to access in an attempt to compromise an organization.

Protect surfaces are different. They encompass the data, physical equipment, networks, applications and other crucial assets your organization wants to deliberately protect, given how important they are to the business.

Why take the protect surface approach instead of looking at the entire attack surface? Kindervag puts it simply: “Protect surface becomes a problem that’s solvable, versus a problem, like the attack surface, that’s actually unsolvable. How could you ever solve a problem as big as the internet itself?”

It’s essential to first identify the assets within your environment that require protection. Where does the most sensitive data reside? What operational technology is most critical to your plant and production processes? Make a list of the assets that you absolutely must protect from a security and access management standpoint and prioritize them.

Step 2: Map the transaction flows

Once you’ve identified your protect surfaces, you can start to map their transaction flows.

This includes examining all the ways in which various users have access to those assets and how each protect surface interacts with all other systems in your environment. For example, a user might be able to access terminal services only if multi-factor authentication (MFA) is implemented and verified, the user is logging on at an expected time and from the expected place and doing an expected task.

With your protect surfaces identified, prioritized and transaction flows mapped, you’re now ready to begin architecting a Zero Trust environment. Start with the highest priority protect surface and when completed, move to the next. Each protect surface with a Zero Trust architecture implemented is a high-quality step toward stronger cyber resiliency and lowered risk.

Step 3: Architect a Zero Trust environment

Keep in mind: no single product delivers a complete Zero Trust architecture. Zero Trust environments take advantage of multiple cybersecurity tools, ranging from access controls like MFA and identity and access management (IAM), to technology that protects sensitive data through processes like encryption or tokenization.

Beyond a toolbox of security technologies, every Zero Trust architecture essentially starts with creating smart, detailed segmentation and firewall policies. It takes those policies and then creates multiple variations based on attributes like the individual requesting access, the device they’re using, the type of network connection, the time of day they’re making the request and more – step by step, building a secure perimeter around each protect surface.
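
As a conceptual sketch of such attribute-based policy checks (the protect surface, attributes and business-hours rule are hypothetical examples, not drawn from any specific product):

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_managed: bool
    network: str        # e.g., "corp", "vpn", "public"
    request_time: time

# Hypothetical policy guarding one protect surface (an HR database):
# every attribute must check out before access is granted.
def allow_hr_db(req: AccessRequest) -> bool:
    return (req.mfa_passed
            and req.device_managed
            and req.network in {"corp", "vpn"}
            and time(7, 0) <= req.request_time <= time(19, 0))

print(allow_hr_db(AccessRequest("alice", True, True, "vpn", time(9, 30))))     # True
print(allow_hr_db(AccessRequest("alice", True, False, "public", time(2, 0))))  # False
```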

Step 4: Create a Zero Trust policy

This step focuses on creating the policies that govern activities and expectations related to things like access controls and firewall rules.

Think beyond posting those new policies to your organization’s intranet, too. Consider educational programs that you may need to implement throughout the organization to promote strong security practices among your employees, vendors and consultants. Frequent cyber-awareness training has moved into the mainstream, becoming a necessity that will help reduce risk.

Step 5: Monitor and maintain the network

The final step in Kindervag’s process focuses on verifying that your Zero Trust environment and the policies governing it are working the way you intended, identifying gaps or areas for improvement and course-correcting as necessary. […]

 

Building a risk management program

In today’s world, it’s important for every organization to have some form of vulnerability assessment and risk management program. While this can seem daunting, by focusing on some key concepts it’s possible for an organization of any size to develop a strong security posture with a firm grasp of its risk profile. We’ll discuss in this article how to build the technical foundation for a comprehensive security program and, crucially, the tools and processes necessary to develop that foundation into a mature vulnerability assessment and risk management program. 

 

Build the Foundation

It’s impossible to implement effective security, let alone manage risk, without a clear understanding of the environment. That means, essentially, taking an inventory of hosts, applications, resources, and users.

With the current computing environment, that combination is apt to include assets that reside in the cloud as well as those hosted in an organization’s own data center. Organizations have little control over the devices of remote employees, who access data on a bring-your-own-device (BYOD) basis, adding another layer of risk. There is also the aspect of the software-as-a-service (SaaS) applications that the organization uses. It’s essential to know what data is kept where. With SaaS, in particular, teams must have a clear understanding of who is responsible for the security of the data in contractual terms, so as to allocate resources accordingly. 

 

Manage the puzzle

Once the environment is scoped, managing it relies on three main components: visibility, control, and timely maintenance. 

Whether it is software vulnerabilities, vulnerable configurations, obsolete packages, or a range of other issues, a vulnerability scanner will show the security operations team what’s at risk and let them prioritize their reaction. That said, scanners, external or internal, are not the only option. At the high end, a penetration testing team can probe the environment to a level that vulnerability scanners can’t match. At the low end, establishing a process to monitor public vulnerability feeds and verifying whether newly exposed issues affect the environment can provide a baseline. It may not give as deep a picture as scanning or penetration testing, but the cost in SecOps time is often well worth it.

Protecting the users is a major point and doesn’t always get the attention it deserves. Ultimately, that starts with user education and establishing a culture that enhances a secure environment. Users are often the threat surface that presents the greatest risk, but with proper education and attitude they can become an effective layer of a defense-in-depth strategy.

Another important step to protecting users is adding multi-factor authentication (MFA). In particular, those that require a physical or virtual token tend to be more secure than those that rely on text messaging or email. While MFA does add a minor annoyance to a user’s login, it can drastically reduce the threat posed by compromised accounts and reduce the organization’s overall risk profile.
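
As an aside on the “virtual token” variety: it is typically a time-based one-time password (TOTP). Here is a compact sketch of RFC 6238 using only the Python standard library, with a placeholder secret:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HMAC the current 30-second counter with a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Placeholder secret -- the same value an authenticator app would be enrolled with.
print(totp("JBSWY3DPEHPK3PXP"))
```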

User endpoints are another area of concern. While the default endpoint protection included in the main desktop operating systems (Windows and macOS) is quite effective, it is also the defense every malware writer in the world tests against. That makes investment in an additional layer of endpoint protection worthwhile. 

The last major piece here is a patch management program. This requires base processes that not only manage the patch process, but also the assets themselves. Fortunately, there are multiple tools available that can enhance and automate the process, and a regular patch cycle can have vulnerabilities fixed before they are even developed into exploits.

Ideally, the patch management process includes a change management system that’s able to smoothly accommodate emergency situations where a security hotfix must go in outside the normal window.

Pulling it all together

With the foundation laid, the final step involves communication. Simply assessing risk is not useful if there is no reliable way to organize people to act on it.

Bridging the information security teams, who are responsible for recognizing, analyzing, and mitigating threats to the organization, and the information technology teams, who are responsible for maintaining the organization’s infrastructure, is vital. Whether an organization achieves this with a process or a tool is up to them. But in either case, communication is vital, along with an ability to react across teams. This applies to non-technical teams as well — if folks are receiving phishing emails, security operations should know. 

These mechanisms need to be in place from the executive offices down to the sales or production floor, as reducing risk really is everyone’s responsibility. Moreover, the asset and patch management system needs a mechanism to prioritize patches based on business risk. Unless the IT team has the resources to deploy every single patch that comes their way, they will have to prioritize, and that prioritization needs to be based on the threat to business rather than arbitrary severity scores.

 An Investment 

There is no “one size fits all” solution for risk assessment and management. For example, for a restaurant that doesn’t accept reservations or orders online, a relatively insecure website doesn’t present much business risk. While it may be technically vulnerable, they are not at risk of losing valuable data. […]

 

Building Data Literacy: What CDOs Need to Know

Data literacy is the ability to read, work with, analyze, and communicate with data.

As businesses have become increasingly digital, all business functions are generating valuable data that can guide their decisions and optimize their performance.

Employees now have data available to augment their experience and intuition with analytical insights. Leading organizations are using this data to answer their every question — including questions they didn’t know they had.

The chief data officer’s (CDO) role in data literacy is to be the chief evangelist and educator to the organization, ensuring that data literacy efforts are successful.

Standardizing basic data training across the organization and creating a center of excellence for self-service in all departments can help ensure everyone can benefit from data literacy.

“As the leader of data and analytics, CDOs can no longer afford to work exclusively with data scientists in siloed environments,” explains Paul Barth, Qlik’s global head of data literacy. “They must now work to promote a culture of data literacy in which every employee is able to use data to the benefit of their role and of their employer.”

Cultural Mindset on Data

This culture starts with a change in mindset: It’s imperative that every employee, from new hires fresh out of college all the way to the C-suite, can understand the value of data.

At the top, CDOs can make the strongest case for improving data literacy by highlighting the benefits of becoming a data-driven organization.

For example, McKinsey found that, among high-performing businesses, data and analytics initiatives contributed at least 20% to earnings before interest and taxes (EBIT), and according to Gartner, enterprises will fail to identify potential business opportunities without data-literate employees across the organization.

Abe Gong, CEO and co-founder of Superconductive, adds that for an organization to be data literate, there needs to be a critical mass of data-literate people on the team.

“A CDO’s role is to build a nervous system with the right process and technical infrastructure to support a shared understanding of data and its impact across the organization,” he says. “They promote data literacy at the individual level as well as building that organizational nervous system of policies, processes, and tools.”

Data Literacy: Start with Specific Examples

From his perspective, the way to build data literacy is not by building some giant end-to-end system or doing a massive overhaul, but rather by coming up with specific discrete examples that really work.

“I think you start small with doable challenges and a small number of stakeholders on short timelines,” he says. “You get those to work, then iterate and add complexity.”

From his perspective, data-literate organizations simply think better together and can draw conclusions and respond to new information in a way that they couldn’t if they didn’t understand how data works.

“As businesses prepare for the future of work and the advancements that automation will bring, they need employees who are capable of leading with data, not guesswork,” Barth notes. “When the C-suite understands this, they will be eager to make data literacy a top priority.”

He says CDOs need to take the lead and properly educate staff about why they should appreciate, pay attention to and work with data.

“Data literacy training can greatly help in this regard and can be used to highlight the various tools and technologies employees need to ensure they can make the most of their data,” he adds.

As CDOs work to break down the data barriers and limitations that are present in so many firms, they can empower more employees with the necessary skills to advance their organization’s data strategy.

“And as employees become more data literate, they will be better positioned to help their employers accelerate future growth,” Barth says.

Formalizing Data Initiative and Strategies

Data literacy should start with a formal conversation between people charged with leading data initiatives and strategies within the organization.

The CDO or another data leader should craft a thoughtful communication plan that explains why the team needs to become data literate and why a data literacy program is being put into place.

“While surveys suggest few are confident in their data literacy skills, I would advise against relying on preconceptions or assumptions about team members’ comfort in working with data,” Barth says. “There are a variety of free assessment tools in the market, such as The Data Literacy Project, to jumpstart this process.”

However, training is only the beginning of what businesses need to build a data literate culture: Every decision should be supported with data and analysis, and leaders should be prepared to model data-driven decision-making in meetings and communications.

“The only playbook that I have seen work for an incoming CDO is to do a fast assessment of where the opportunities are and then look for ways to create immediate value,” Gong adds. “If you can create some quick but meaningful wins, you can earn the trust you need to do deeper work.”

For opportunities, CDOs should be looking for places the organization can make better use of its data on a short timeline — usually weeks, not months.

“Once you’ve built a library of those wins and trust in your leadership, you can have a conversation about infrastructure — both technical and cultural,” he says. “Data literacy is part of the cultural infrastructure you need.” […]

 

Top 9 effective vulnerability management tips and tricks

The world is currently in a frenetic flux. With rising geopolitical tensions, an ever-present rise in cybercrime and continuous technological evolution, it can be difficult for security teams to maintain a straight bearing on what’s key to keeping their organization secure.

With the advent of the “Log4Shell” (aka Log4j) vulnerability, sound vulnerability management practices have jumped to the top of the list of skills needed to maintain an ideal state of cybersecurity. The impacts of Log4j are expected to be fully realized throughout 2022.

As of 2021, missing security updates are a top-three security concern for organizations of all sizes — approximately one in five network-level vulnerabilities are associated with unpatched software.

Not only are attacks on the rise, but their financial impacts are as well. According to Cybersecurity Ventures, costs related to cybercrime are expected to balloon 15% year over year into 2025, totaling $11 trillion.

Vulnerability management best practices

Whether you’re performing vulnerability management for the first time or revisiting your current vulnerability management practices to find new perspectives or process efficiencies, there are some useful recommended strategies for vulnerability reduction.

Here are the top nine (We decided to just stop there!) tips and tricks for effective vulnerability management at your organization.

1. Vulnerability remediation is a long game

Extreme patience is required when it comes to vulnerability remediation. Your initial review of vulnerability counts, categories, and recommended remediations may instill a false sense of confidence: You may expect a large reduction after only a few meetings and executing a few patch activities. This is far from how reality will unfold.

Consider these factors as you begin initial vulnerability management efforts:

  • Take small steps: Incremental progress in reducing total vulnerabilities by severity should be the initial goal, not an unrealistic expectation of total elimination. The technology estate should ideally accumulate new vulnerabilities at a slightly lower pace versus what is remediated as the months and quarters roll on.
  • Patience is a virtue: Adopting a patient mindset is unequivocally necessary to avoid mental defeat, burnout and complacency. Remediation progress will be slow but must sustain a methodical approach.
  • Learn from challenges: As roadblocks are encountered, these serve as opportunities to approach alternate remediation strategies. Plan on what can be solved today or in the current week.

Avoid focusing on all the major problems preventing remediation and think with a growth mindset to overcome these challenges.

2. Cross-team collaboration is required

Achieving a large vulnerability reduction requires effective collaboration across technology teams. The high vulnerability counts across the IT estate likely exist due to several cultural and operational factors within the organization that pre-exist remediation efforts, including:

  • Insufficient staff to maintain effective vulnerability management processes
  • Legacy systems that cannot be patched because they run on very expensive hardware — or provide a specific function that is cost-prohibitive to replace
  • Ineffective patching solutions that do not or cannot apply necessary updates completely (e.g., the solution can patch web browsers but not Java or Adobe)
  • Misguided beliefs that specialized classes of equipment cannot be patched or rebooted; as a result, they are not revisited for extended periods

Part of your remediation efforts should focus on addressing systemic issues that have historically prevented effective vulnerability remediation while gaining support within or across the business to begin addressing existing vulnerabilities.

Determine how the various teams in your organization can serve as a force multiplier. For example, can the IT support desk or other technical teams assist directly in applying patches or decommissioning legacy devices? Can your vendors assist in applying patches or fine-tuning configurations of difficult-to-patch equipment?

These groups can assist in overall reduction while further plans are developed to address additional vulnerabilities.

3. Start by focusing on low-hanging fruit

Focus your initial efforts on the low-hanging fruit when building a plan to address vulnerabilities. Missing browser updates and missing updates to third-party software like Java or Adobe are likely to comprise the largest initial reduction efforts.

If software like Google Chrome or Firefox is missing the previous two years of security updates, it likely signifies the software is not being used. Some confirmation may be required, but the likely response is to remove the software, not apply patches.

To prevent a recurrence, there will likely be a need to revisit workstation and server imaging processes to determine if legacy, unapproved or unnecessary software is being installed as new devices are provisioned.

4. Leverage your end-users when needed

Don’t forget to leverage your end-users as a possible remediation vector. A single email you spend 30 minutes carefully crafting to include instructions on how they can self-update difficult-to-patch third-party applications can save you many hours of time and effort — compared to working with technical teams, where the end result may be the remediation of fewer vulnerabilities.

However, end-user involvement should be an infrequent and short-term approach as the underlying problems outlined in cross-team collaboration (tip #2) are addressed.

This also provides an indirect approach to increasing security awareness via end-user engagement. Users are more likely to prioritize security when they are directly involved in the process.

5. Be prepared to get your hands dirty

Many of the vulnerabilities that exist will require a manual fix, including but not limited to:

  • Unquoted service paths in program directories (see the sketch after this list)
  • Weak or no passwords on periphery devices like printers
  • Updating SNMP community strings
  • Windows registry settings that are not properly set
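
As an example of the first item above, here is a Windows-only Python sketch that flags services whose ImagePath contains spaces but is not quoted (the space-before-.exe heuristic is an illustrative assumption):

```python
import winreg

SERVICES = r"SYSTEM\CurrentControlSet\Services"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]
    for i in range(subkey_count):
        name = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, name) as svc:
                path, _ = winreg.QueryValueEx(svc, "ImagePath")
        except OSError:
            continue  # service has no ImagePath value
        path = path.strip()
        # Heuristic: unquoted path with a space in the executable portion.
        if path and not path.startswith('"') and " " in path.lower().split(".exe")[0]:
            print(f"{name}: {path}")
```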

While there is project downtime — or when the security function is between remediation planning cycles — focus on providing direct assistance where possible. A direct intervention provides an opportunity to learn more about the business and the people operating the technology in the environment. It also provides direct value when an automated process fails to remediate or cannot remediate identified vulnerabilities.

This may also be required when already stressed IT teams cannot assist in remediation activity.

6. Targeted patch applications can be effective for specific products

Some vulnerabilities may require the application of a specific update to address large numbers of vulnerabilities that automatic updates continuously fail to address. This is often seen in Microsoft security updates that did not apply completely or accurately for random months across several years and devices.

Search for and test the application of cumulative security updates. One targeted patch update may remediate dozens of vulnerabilities.

Once tested, use automated patch application tools like SCCM or remote management and monitoring (RMM) tools to stage and deploy the specific cumulative update.

7. Limit scan scope and schedules 

Vulnerability management seeks to identify and remediate vulnerabilities, not cause production downtime. Vulnerability scanning tools can unintentionally disrupt information systems and networks via the probing traffic generated towards organization devices or equipment.

If an organization is onboarding a new scanning tool or spinning up a new vulnerability management practice, it is best to start by scanning a small network subset that represents the asset types deployed across the network.

Over time, scanning can be rolled out to larger portions of the network as successful scanning activity on a smaller scale is consistently demonstrated.

8. Leverage analytics to focus remediation activity 

Native reporting functions provided by vulnerability scanning tools typically lack the capabilities needed to drive value-add vulnerability reduction. Consider implementing programs like Power BI, which can help the organization focus on the following:

  • New vulnerabilities by type or category
  • Net new vulnerabilities
  • Risk severity ratings for groups of or individual vulnerabilities

9. Avoid overlooking compliance pitfalls or licensing issues

Ensure you fully understand any licensing requirements in relation to enterprise usage of third-party software and make plans to stay compliant.

As software evolves, its creators may look to harness new revenue streams, which has real-world impacts on vulnerability management efforts. A classic example is Java, which is highly prevalent in organizations across the globe. As of 2019, a paid license subscription is required to receive Java security updates.

Should a third party decide to perform an onsite audit of the license usage, the company may find itself tackling a lawsuit on top of managing third-party software security updates. […]

 

Key Steps for Public Sector Agencies To Defend Against Ransomware Attacks

Over the past two years, the pandemic has fundamentally altered the business world and the modern work environment, leaving organizations scrambling to maintain productivity and keep operational efficiency intact while securing the flow of data across different networks (home and office). While this scenario has undoubtedly created new problems for businesses in terms of keeping sensitive data and IP safe, the “WFH shift” has opened up even greater risks and threat vectors for the US public sector.

Federal, state, local governments, education, healthcare, finance, and nonprofit organizations are all facing privacy and cybersecurity challenges the likes of which they’ve never seen before. Since March 2020, there’s been an astounding increase in the number of cyberattacks, high-profile ransomware incidents, and government security shortfalls. There are many more that go undetected or unreported. This is in part due to employees now accessing their computers and organization resources/applications from everywhere but the office, which is opening up new security threats for CISOs and IT teams.

Cyberthreats are expected to grow exponentially this year, particularly as the world faces geopolitical upheaval and international cyberwarfare. Whether it’s a smaller municipality or a local school system, no target is too small these days, and everyone is under attack due to bad actors now having more access to sophisticated automation tools.

The US public sector must be prepared to meet these new challenges and focus on shoring up vulnerable and critical technology infrastructures while implementing new cybersecurity and backup solutions that secure sensitive data.

Previous cyber protection challenges

As data volumes grow and methods of access change, safeguarding US public sector data, applications, and systems involves addressing complex and often competing considerations. Government agencies have focused on securing a perimeter around their networks; however, with a mobile workforce and a growing number of devices, endpoints, and sophisticated threats, data is still extremely vulnerable. Hence the massive shift towards a Zero Trust model.

Today, there is an over-reliance on legacy and poorly integrated IT systems, leaving troves of hypersensitive constituent data vulnerable; government agencies have become increasingly appealing targets for cybercriminals. Many agencies still rely on outdated five-decade-old technology infrastructure and deal with a multitude of systems that need to interact with each other, which makes it even more challenging to lock down these systems. Critical infrastructure industries have more budget restraints than ever; they need flexible and affordable solutions to maintain business continuity and protect against system loss.

Protecting your organization’s data assets

The private sector, which owns and operates most US critical infrastructure, will continue being instrumental in helping government organizations (of all sizes) modernize their cyber defenses. The US continues to make strides in creating specific efforts that encourage cyber resilience and counter these emerging threats.

Agencies and US data centers must first focus on solutions that attest to data protection frameworks like HIPAA, CJIS and NIST 800-171, and then develop several key pillars for data protection built around the Zero Trust concept. This includes safety (ensuring organizational data, applications, and systems are always available), accessibility (allowing employees to access critical data anytime and anywhere), and privacy and authenticity (controlling who has access to your organization’s digital assets).

New cloud-based data backup, protection and cybersecurity solutions that are compliant with the appropriate frameworks and certified will enable agencies to maximize operational uptime, reduce the threat of ransomware, and ensure the highest levels of data security possible across all public sector computing environments.

Conclusion

First and foremost, the public sector and US data centers must prioritize using compliant and certified services to ensure that specific criteria are met. […]