How to find weak passwords in your organization’s Active Directory

Introduction

Confidentiality is a fundamental information security principle. According to ISO 27001, it is defined as ensuring that information is not made available or disclosed to unauthorized individuals, entities or processes. There are several security controls designed specifically to enforce confidentiality requirements, but one of the oldest and best known is the use of passwords.

In fact, aside from being used since ancient times by the military, passwords were adopted quite early in the world of electronic information. The first recorded use dates to the early 1960s, in the Compatible Time-Sharing System (CTSS), an operating system created at MIT. Today, the use of passwords is commonplace in most people’s daily lives, either to protect personal devices such as computers and smartphones or to prevent unwanted access to corporate systems.

With such an ancient security control, it would be natural to expect it to have evolved to the point where passwords are a completely effective and secure practice. The hard truth is that even today, stealing passwords is one of the main techniques cybercriminals use to gain illegitimate access. Recent statistics, such as Verizon’s 2020 Data Breach Investigations Report, leave no room for doubt: 37% of hacking-related breaches involved credentials that were stolen or used to gain unauthorized access.

For instance, in a quite recent case, Nippon Telegraph & Telephone (NTT) — a Fortune 500 company — disclosed a security breach in its internal network, where cybercriminals stole data on at least 621 customers. According to NTT, the attackers breached several layers of its IT infrastructure and reached an internal Active Directory (AD) to steal data, including legitimate accounts and passwords. This led to unauthorized access to a construction information management server.

Figure 1: Diagram of the NTT breach (source: NTT)

As with other directory services, Microsoft Active Directory remains a prime target for cybercriminals, since it is used by many businesses to centralize accounts and passwords for both users and administrators. Well, there’s no point in making cybercrime any easier, so today we are going to discuss how to find weak passwords in Microsoft Active Directory.

Active Directory: Password policy versus weak passwords

First, there is a point that needs to be clear: Active Directory does indeed allow the implementation of a GPO (Group Policy Object) defining rules for password complexity, including items such as minimum number of characters, mandatory use of special characters, uppercase and lowercase letters, maximum password age and even preventing a user from reusing previous passwords. Even so, it is still important to know how to find weak passwords, since the GPO may (for example) not have been applied to all Organizational Units (OUs).
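Before auditing individual accounts, it is worth confirming what policy is actually in effect. A minimal sketch using the standard RSAT ActiveDirectory PowerShell module (assuming it is installed and you have read access to the domain; the cmdlet names are standard, but your environment may differ):

```powershell
# Inspect the effective password rules before hunting for weak passwords.
Import-Module ActiveDirectory

# Default domain policy: minimum length, complexity flag, history, max age
Get-ADDefaultDomainPasswordPolicy

# Fine-grained password policies (if any) that override the default
# for specific users or groups
Get-ADFineGrainedPasswordPolicy -Filter *
```

Comparing this output against the OUs in your directory quickly reveals gaps where the policy was never applied.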

But this is not the only problem. Even with the implementation of a good password policy, the rules apply only to items such as size, complexity and history, which is not a guarantee of strong passwords. For example, users tend to use passwords that are easy to memorize, such as Password2020! — which, although it technically meets the rules described above, cannot be considered safe and can be easily guessed by a cybercriminal.

Finding weak passwords in Active Directory can be simpler than you think. The first step is to know what you are looking for when auditing password quality. For this example, we will look for weak, duplicate, default or even empty passwords using the DSInternals PowerShell Module, which can be downloaded for free here.

DSInternals is an extremely interesting tool for Microsoft Administrators and has specific functionality for password auditing in Active Directory. It has the ability to discover accounts that share the same passwords or that have passwords available in public databases (such as the famous HaveIBeenPwned) or in a custom dictionary that you can create yourself to include terms more closely related to your organization.

Once installed, the password audit module in DSInternals Active Directory is quite simple to use. Just follow the syntax below:

Test-PasswordQuality [-Account] <DSAccount> [-SkipDuplicatePasswordTest] [-IncludeDisabledAccounts] [-WeakPasswords <String[]>] [-WeakPasswordsFile <String>] [-WeakPasswordHashesFile <String>] [-WeakPasswordHashesSortedFile <String>] [<CommonParameters>]

The Test-PasswordQuality cmdlet receives the output of the Get-ADDBAccount and Get-ADReplAccount cmdlets, so that both offline (ntds.dit) and online (DCSync) password analyses can be done. A good option for obtaining a list of leaked passwords is to use the ones provided by HaveIBeenPwned, which are fully supported in DSInternals. In this case, be sure to download the list marked “NTLM (sorted by hash)”.
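As an illustrative sketch (the server name, naming context and file path below are hypothetical placeholders for your own environment), an online audit against the HaveIBeenPwned NTLM list might look like this:

```powershell
# Hypothetical online (DCSync) audit: replicate account data from a
# domain controller and test it against the HaveIBeenPwned
# "NTLM (sorted by hash)" list.
# Requires the DSInternals module and domain replication privileges.
Import-Module DSInternals

Get-ADReplAccount -All -Server 'DC01' -NamingContext 'DC=example,DC=com' |
    Test-PasswordQuality `
        -WeakPasswordHashesSortedFile 'C:\audit\pwned-passwords-ntlm-ordered-by-hash.txt' `
        -IncludeDisabledAccounts
```

The offline variant is analogous: instead of querying a live domain controller, feed Test-PasswordQuality the output of Get-ADDBAccount run against a copy of ntds.dit (with the corresponding registry SYSTEM hive for the boot key).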

 

Prevent burnout with some lessons learned from golf

Even in the wake of Covid-19 and its effect on the world, business doesn’t stop. For many of us, having an extended “holiday” at home has only added more stress to our lives, and getting back to business means catching up on what we missed.

At this point in time, whether you’re in a leadership position or an employee, it’s even more important to be aware of and do what we can to prevent burnout.

While we may not all have time to get in a round at the golf course while bringing business back up to speed, here are some lessons golf can teach us about preventing burnout.

#1 – Play With the Right People

When you’re on the golf course, the people you’re with have a lot to do with whether it’s a fun or a stressful experience! Nobody wants to play a round with the guy who complains all the time, or who criticizes your every shot.

The same is true in the workplace. While you can’t always choose who you work with or who you have to spend time with on the job, your colleagues can make a big difference to job satisfaction, which can, in turn, be a large factor in burnout.

Working with people who don’t share your vision, work well in a team, or contribute positively to company culture can cause stress on top of normal work pressure.

More stress just leads one more step down the path to burnout. On the other hand, working with supportive, passionate, and action-oriented people can spur you on when you’re feeling a little low.

Lesson: Surround yourself with positive people wherever possible.

#2 – Use Technology to Your Advantage

Golf is booming with new technologies that can do everything from analyzing your swing to giving you in-depth details about the course you’re about to play. Using these correctly can supercharge your game!

Similarly, there are technologies available in business that can make life easier. Struggling with time management? There’s an app for that. Need to streamline your business processes? Software is available. Not sure what the problem is? Data analytics can help you find out.

Choosing the right piece of software or app is important, though. You can’t tee off with a putter! Analyze where your business could do with some help and figure out exactly what you need before committing.

Lesson: Choosing the right technology can streamline your business and reduce stress.

#3 – Change Things Up

There’s a saying along these lines: if you do things the same way you’ve always done them, expect to get the same results you’ve always gotten!

Playing the same round of golf at the same club at the same time every week won’t do much for your game. Complacency is easy to come by.

But switch it up and visit a different club, or play with a different partner, and you may notice that you feel a little more excited and into it.

If you’re feeling like you’re headed toward burnout, the worst thing you can do is… Keep going!

See where you can mix things up a little. Work from home, or a nearby coffee shop. Sit at a different desk near other people in the office. Try a new way of doing your work.

Lesson: Make a change – your environment, people, or method.

#4 – Appreciate Your Environment

Have you ever seen a golf course that wasn’t beautiful? Rolling green hills, tall trees, and often, a spectacular view make golf courses some of the most peaceful and stunning environments around.

When did you last spend some time marveling at the view or the scenery when you played a round? In the same vein, when did you last look around your workplace, consider what you really like about it and give some gratitude?

It might sound ridiculous, but focusing on the positives around you can change your mindset and remind you of the good things in life (and work!).

Lesson: Make a note of what you’re grateful for in your working environment.

 

 

How Object Storage Is Taking Storage Virtualization to the Next Level

We live in an increasingly virtual world. Because of that, many organizations not only virtualize their servers, they also explore the benefits of virtualized storage.

Gaining popularity 10-15 years ago, storage virtualization is the process of sharing storage resources by bringing physical storage from different devices together in a centralized pool of available storage capacity. The strategy is designed to help organizations improve agility and performance while reducing hardware and resource costs. However, this effort, at least to date, has not been as seamless or effective as server virtualization.

That is starting to change with the rise of object storage – an increasingly popular approach that manages data storage by arranging it into discrete and unique units, called objects. These objects are managed within a single pool of storage instead of a legacy LUN/volume block store structure. The objects are also bundled with associated metadata to form a centralized storage pool.

Object storage truly takes storage virtualization to the next level. I like to call it storage virtualization 2.0 because it makes it easier to deploy increased storage capacity through inline deduplication, compression, and encryption. It also enables enterprises to effortlessly reallocate storage where needed while eliminating the layers of management complexity inherent in storage virtualization. As a result, administrators do not need to worry about allocating a given capacity to a given server with object storage. Why? Because all servers have equal access to the object storage pool.

One key benefit is that organizations no longer need a crystal ball to predict their utilization requirements. Instead, they can add the exact amount of storage they need, anytime and in any granularity, to meet their storage requirements. And they can continue to grow their storage pool with zero disruption and no application downtime.

Greater security

Perhaps the most significant benefit of storage virtualization 2.0 is that it can do a much better job of protecting and securing your data than legacy iterations of storage virtualization.

Yes, with legacy storage solutions, you can take snapshots of your data. But the problem is that these snapshots are not immutable. And that fact should have you concerned. Why? Because when data changes or is overwritten, there is no way to recapture the original.

So, once you do any kind of update, you have no way to return to the original data. Quite simply, you are losing the old data snapshots in favor of the new. While there are some exceptions, this is the case with the majority of legacy storage solutions.

With object storage, however, your data snapshots are indeed immutable. Because of that, organizations can now capture and back up their data in near real-time—and do it cost-effectively. An immutable storage snapshot protects your information continuously by taking snapshots every 90 seconds so that even in the case of data loss or a cyber breach, you will always have a backup. All your data will be protected.

Taming the data deluge

Storage virtualization 2.0 is also more effective than the original storage virtualization when it comes to taming the data tsunami. Specifically, it can help manage the massive volumes of data—such as digital content, connected services, and cloud-based apps—that companies must now deal with. Most of this new content and data is unstructured, and organizations are discovering that their traditional storage solutions are not up to managing it all.

It’s a real problem. Unstructured data eats up a vast amount of a typical organization’s storage capacity. IDC estimates that 80% of data will be unstructured in five years. For the most part, this data takes up primary, tier-one storage on virtual machines, which can be a very costly proposition.

It doesn’t have to be this way. Organizations can offload much of this unstructured data via storage virtualization 2.0, with immutable snapshots and centralized pooling capabilities.

The net effect is that by moving the unstructured data to object storage, organizations won’t have it stored on VMs and won’t need to back it up in the traditional sense. With object storage taking immutable snapshots and replicating them to another offsite cluster, this approach can eliminate 80% of an organization’s backup requirements and backup window.

This dramatically lowers costs: instead of having 80% of storage in primary, tier-one environments, everything is now stored and protected on object storage.

All of this also dramatically reduces the recovery time of unstructured data from days or weeks to less than a minute, regardless of whether it’s terabytes or petabytes of data. And because the network no longer moves the data around from point to point, it’s much less congested. What’s more, the probability of having failed data backups goes away, because there are no more backups in the traditional sense.

The need for a new approach

As storage needs increase, organizations need more than just virtualization.

 

“To be successful, CISOs must have intentionality and focus”

Most of today’s CISOs got into the role accidentally. Yet tomorrow’s CISO will have chosen this role by intent. It will be a chosen vocation. Therefore, CISOs will need to focus on the role and start cultivating the skills required to become a security leader. This was a key message from a presentation on The Future CISO by Jeff Pollard, Principal Analyst, Forrester Research.  Speaking at the Forrester Security & Risk Global 2020 Live Virtual Experience on September 22, Pollard urged CISOs to check if they are “Company Fit” and to prepare for what’s next. He also outlined the six different types of CISOs: transformational, post-breach, tactical/operational, compliance guru, steady-state, and customer-facing evangelist. Pollard showed how CISOs can build a roadmap for transitioning from one type to another and explore strategies for obtaining future CISO and related roles.

By Brian Pereira, Principal Editor, CISO MAG

“CISOs do an insanely challenging job under challenging circumstances. They have to worry about their company, adversaries who attack, insider threats, and also employee and customer experience. This is not easy. That’s why intent matters,” said Pollard.

He advised CISOs to plan for the role and make a meaningful contribution at the C-Level. Skills enhancement, both for the CISO and the security teams is also crucial.

Pollard alluded to the example of Pixar Animation Studios, which achieved immense success and bagged many awards because it has intent and focus.

“Pixar is a company that matches this intent. They know exactly what they want to do. They have a specific methodology for stories, how they think about content. Technology drives the stories that they tell. They are an incredibly innovative company. There is a secret history of Pixar that ties in with the CISO role,” said Pollard.

Pixar earned 16 Academy awards, 11 Grammys, and 10 Golden Globes.

“They earned all these awards because they operate with intent and focus. When you operate without intent and focus, and when you don’t plan for this role, and when you don’t actively cultivate all of the skills that you need, then this happens,” said Pollard.

By “this” he meant that CISOs lose focus and find their role challenging, which could even lead to burnout.

He urged security leaders to start writing their own stories and to think about their stories with intent, discipline, and rigor.

Why CISOs lose focus

The CISO was never a “No” department. In saying “Yes” to everyone and trying to do everything for everyone, CISOs lost their focus.

CISOs juggle many tasks like product security concerns, compliance concerns, regulatory issues, legal issues, breaches and attackers, and incident response. And then, there are new priorities that come up.

“0% of CISOs are great at everything. And that’s what most security leaders have had to do. You can’t do all of that and be effective. It’s not possible. But that’s what happened to the role — priority after priority and trade-off after trade-off. None of it results in the success that we want,” said Pollard.

He added, “CISOs haven’t operated with constraints, which lead to focus. And focus leads to innovation. We are just doing too much and not succeeding. We are too tactical. We say yes to a lot. The CISO is not the department of No.”

How many are C-level?

While most security leaders aspire for a seat at the table in the board room, very few make the cut.

A 2020 study by Forrester Research shows that just 13% of all security leaders hold actual C-level titles such as CISO.

The Forrester study considered those with an SVP or an EVP title and compared that to those with a VP, Director, or another title — across Fortune 500 companies. The other data point from this study is that the average tenure of the CISO is 4.2 years and not two or three years.

“Even those who got a seat at the table are not treated like a true C-level executive. They do not have the same access or authority that those others have. And most of the 13% are on their third or fourth CISO role. After the second one, they don’t take that lying down anymore. They demand to be an actual C-level,” said Pollard.

What CISOs need to do

CISOs need to plan for a four-year stay, and they can take some inspiration from Pixar by writing their own stories.

“The reason why this is so important is because you are looking at a four-year stay. It’s going to be hard for CISOs because they are going to do all their tasks for four years with all these limitations. They can make mistakes if they do not operate with intentionality and if they don’t fight for what they deserve. The good news is that CISOs can get this right and write their own story. It’s just about thinking about it in terms of intent and our own story,” advised Pollard.

Going back to the Pixar example, he urged CISOs to simplify and focus. Like Pixar, they should combine characters (or tasks) and hop over detours.

“You will feel like you are losing valuable stuff, but it is actually freeing you. Fire yourself. Find a way to replace yourself. Get rid of activities that you don’t need to do. And don’t be afraid to empower the direct reports that work for you,” he said.

Reproduced with permission from Forrester Research 

The 6 types of CISOs

Forrester Research began thinking about the future of the CISO two years ago and came up with a concept that there were 6 types of CISOs. The roles could overlap, and one could have the attributes of other types as well.

Pollard said CISOs should consider these 6 types when thinking about their intent and focus. These types give them the opportunity to think about their roles and future careers — and even life after being a CISO.


1. The Transformational CISO

This is a more strategic type of CISO who thinks about customers and business outcomes. They focus on the turnaround and transformation of the security program. They take it from one that may be too insular and too internally focused to one that focuses on the outside of the organization. They do this to make the security program more relevant to the rest of the business.

2. The Post-breach CISO

This CISO comes in after the organization has been breached. There is intense media and board speculation. Add to that litigation, regulatory investigations, and potential fines. There is a lot of chaos, and they must remediate the situation and lead through the turbulence.

3. Tactical / Operational expert

This is the action-oriented CISO who gets things done. They are adept at sorting out technical issues and building out cybersecurity programs for the company.

4. Compliance Guru

They have a thorough knowledge of compliance requirements and they operate in a heavily regulated industry. They help the company figure out how to navigate international issues and laws, as well as oversight from the FTC, PCI, HIPAA, and other regulatory bodies. For them, security is always a risk management conversation.

5. The Steady-State CISO

The minimalist who doesn’t rock the boat or change the status quo overnight. They maintain a balance between minimal change and keeping up. Maybe things are just fine at the company right now and security is working for them.

6. The Customer-Facing Evangelist

This type is common at the tech and product companies. They evangelize the company’s products and services with a commitment to cybersecurity. And they speak about how security and privacy help customers.

CISO Company Fit

Forrester defines “CISO Company Fit” as the degree to which the CISO type at the company matches the type the company needs to maximize the success of both parties.

“If the company fit is not suitable, then security leaders have to deal with burnout and angst. And part of that burnout comes from the fact that they may not have CISO Company Fit,” said Pollard.

This article first appeared in CISO MAG.

CISO MAG: www.cisomag.com

Is Your Pandemic-Fueled Cloud Migration Sustainable?

COVID-19 shoved enterprises into the cloud. While remote work is sustainable, emergency cloud strategies are not.

Enterprises were already moving deeper into the cloud before the pandemic hit. Multi-year plans were replaced by emergency implementations to facilitate remote work and digital customer interactions. Businesses and their IT departments have been proud of their heroic efforts, but emergency implementations are not sustainable over the long-term.

“Regardless of what we did right or wrong, there was a rationalization behind it,” said George Burns, senior consultant for cloud operations at digital transformation agency SPR. “Now we need to take a step back and look at projects through a different prism.”

Governance

Data governance is non-optional for companies whether they’re regulated or not, especially with data regulations such as General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Burns said some of his clients are having trouble finding data now that they’ve shoveled it into the cloud.

“We need to rearchitect some of these solutions we’ve put in place, but then we need to come up with implementation plans that are even less disruptive than we had during good times,” said Burns. “How do we bolt that on to what we already have to let our newly distributed workforce continue to function and continue to generate revenue? Do we have governance wrapped around that to make sure that we can monitor what we need to monitor and be compliant where we need to be compliant?”

Six months into the pandemic, organizations should realize that they’re accumulating unnecessary risks if they don’t address the longer-term governance issues.

Though governance tends to be viewed as an internal-facing function, its role doesn’t end there. In fact, a recent Forrester report discusses the link between sound governance and better customer service. In a customer service context, the report’s authors said governance should include a cross-functional governance board, technology governance, process governance and data governance.

That’s sound advice generally.

Security

An obvious victim of rapid cloud adoption is security. There was no time to fully assess the new architecture from a security standpoint because business continuity and employee safety were more important. However, the potential security vulnerabilities left unchecked keep the door open for internal and external bad actors.

“It really comes back to the fundamentals,” said Burns. “Do we have the right security wrapped around [our architecture] so that we’re not exposing any of our data or access points?”

Meanwhile, the pandemic has fueled spikes in cyber fraud, and many of those campaigns have targeted work-from-home employees. In August, Interpol revealed it had observed a 350% increase in phishing sites and a 600% increase in malicious emails. Home Wi-Fi routers also have been targeted, and several family members in an employee’s home may be sharing computers regardless of who owns them.

Enterprises need to ensure they’re educating employees about the work-from-home security risks to their organizations, especially since many of those individuals are attempting to balance their personal and professional lives. Hackers and social engineers know distracted individuals are easy targets.

Time to reassess

When the pandemic hit, there was no time for long-term thinking. Now, there’s a window of opportunity that shouldn’t be squandered. Whether a second COVID-19 wave occurs or not, businesses have an opportunity to assess where they’re at and compare that with where they need to be to ensure longer-term resilience.

“People are starting to understand that we’re not going to go back to work like normal tomorrow,” said Burns. “Really, it comes back to the fundamentals. Do we have the right technology in place? Are we moving in the right direction? We need KPIs that show us that.”

Digital transformation has taken on new meaning in 2020 because it isn’t just about responding to digital disruption anymore. It’s about doing whatever it takes to survive and thrive no matter what happens. Essentially, last year’s playbook is old news.

“The rules of the game have completely changed. We’re not solving for the same X anymore,” said Burns. “We’re solving new problems that we haven’t taken the time to identify. We need to put out fewer fires and make more strategic decisions.”

Otherwise, enormous amounts of technical debt will continue to accumulate.

 

Security theatrics or strategy? Optimizing security budget efficiency and effectiveness

Introduction

I am a staunch advocate of the consideration of human behavior in cybersecurity threat mitigation. The discipline of behavioral ecology is a good place to start. This subset of evolutionary biology observes how individuals and groups react to given environmental conditions — including the interplay between people and an environment.

The digital world is also a type of environment that we have all ended up playing in as computing and digital transactions become ever-present in our lives. By understanding this “digital theater,” we can determine a best-fit strategy to produce an effective cybersecurity play that optimizes security budgets.

Why having an effective strategy is important

I’ll offer up an example from nature to show the importance of an effective strategy. You may read this and wonder what it has to do with cybersecurity, but bear with me.

Starlings feed their chicks with leatherjackets and other insect larvae. During nesting season, the starlings work hard finding food and relaying it back and forth to the nest of chicks. If you’ve ever observed any bird during this season, you might have noticed by the end of it, they have lost feathers and look pretty beat up. But the sacrifice is important: effective feeding of chicks will produce fledglings that then go on to reproduce. Reproduction is seen as a success in evolutionary terms.

However, starlings are capable of carrying more than one leatherjacket in their beak. The more they can carry, the fewer trips they need to make. Fewer trips mean the parent starling is less likely to fall foul of bad health or predators. However, there is a tradeoff. To find the leatherjackets, the starling has to forage. Too many leatherjackets in the beak and it becomes harder to forage. The optimum number of leatherjackets is a trade-off between the number of trips and foraging efficiency.

Any strategy that plays out in the real world is a balance: a trade-off between what seems to be optimal and what is strategically efficient. The starling could try to cram lots of larvae into its beak and this might seem to be a show of capability and a great strategy, but in the end, it would just be a piece of theater.

In evolutionary biology, this balance is known as an Evolutionary Stable Strategy, or ESS. In nature, this would be a strategy that confers “fitness” so an organism can reproduce at an optimal rate. The concept behind an ESS also applies in cybersecurity, where fitness is also about finding a best-fit strategy for a given environment.

Security, like feeding chicks, is about knowing how to use the right tools for the job in an optimal manner and not just for show. This creates a fine balance that can help optimize a security budget.

Security and trade-offs: A complex equation

Enough of the biology lesson! Back to cybersecurity. The security industry, like most industries, has a culture. This culture has informants, people in your company who influence decisions and people outside such as vendors who sell security products. The result can be an overwhelming cascade of information. This can lead to decisions that are based on less-than-optimal input.

Back in 2008, security man extraordinaire Bruce Schneier wrote a treatise entitled “The Psychology of Security”. In this, Bruce talks about how security is a tradeoff. He goes on to explain how these trade-offs, which often come down to finding a balance between cost and outcome, are actually much more nuanced. Bruce says that asking “Is this effective against the threat?” is the wrong question to ask. Instead, you should ask “Is it a good trade-off?”

Security teams can be put under enormous pressure to “do the right thing.” An example is the recent ransomware attack on Garmin. If you are being effectively held hostage by malicious software that prevents your business from running, you have to do something and quickly. Garmin is reported to have paid the ransom of $10 million.

But was this a shrewd move? Was the trade-off between business disruption and hope of a decryption key a balanced one? When making that decision, there are multiple considerations. Can the company offset the cost of the ransomware? Will the decryption key end the attack or have the hackers installed other malware into the company’s IT system?

Security systems, like biological ones, are reliant on making good trade-off decisions to move the needle of security towards your company’s safety.

Back to basics to optimize security trade-offs

Security can be a costly business. Solutions, services and platforms all need to be costed and maintenance and upgrades factored in. And the choice is astounding. In terms of just startups in the cybersecurity sector, there were around 21,729 at last count. The amount of spending on cloud security tools alone is expected to be around $12.6 billion by 2023.

Getting the balance right is important. An organization must cut through the trees to see the wood. In doing so, the balance of financial burden against cyber-threat mitigation can be made.

Going back to basics is the starting point. There is little point in putting on a security show with the latest in machine learning-based tech if you misconfigure a crucial element so the data becomes worthless. At this point in history, machines are nothing without their human operators. We have to get back to basics, build a strong strategy and culture of security before layering on the technology.

The basics, human factors and a great security ESS

Weaving this together we can ensure optimization of a security budget through an awareness of strategic security considerations, e.g.:

The basics

The fundamentals of security are covered by several frameworks and general knowledge of Operations Security (OPSEC). Frameworks such as Center for Internet Security (CIS) and NIST-CSF set out basics for a robust cybersecurity approach. These include knowing what assets (both digital and physical) you have and how to control access.

The human factors

Cybercriminals place a focus on using humans to perpetrate a cyberattack. This is inherent in the popular tactics of social engineering, phishing and other human-activated cybercrimes. Employees, non-employees (e.g., contractors), supply chain members and so on all need to be evaluated for risk. These risks can be mitigated using several techniques:

  • Security awareness training for all: Teaching the fundamentals of security is an essential tool in a cybersecurity landscape that focuses on human touchpoints. But security awareness needs to be performed effectively. Some training sessions feel more like those old-school lessons that ended up with snoozing students. Modern security awareness is engaging, interactive and often gamified.
  • The issue of misconfiguration: It isn’t just employees clicking on a malicious link in a phishing email that is cause for concern. Loss of data due to misconfiguration of IT components cost companies around $5 trillion in 2018 – 2019. Security awareness training needs to extend to system administrators and others who take care of databases, web servers and so on.
  • Patch management: Like misconfiguration, ensuring that IT systems are up to date can be the difference between exposed data and safe data. This process has been complicated by the increase in home working. But this fundamental piece of security hygiene is as vital as it ever was.

Never trust, always verify

The concept of zero-trust security has highlighted the importance of robust identity and access management (IAM). The idea behind this tactic is to always check the identity of any individual or device attempting to access corporate resources. Zero trust defines an architecture that puts data as a central commodity and trust as a rule to determine access rights. […] Read more »

 

How to Eliminate Disruptive Technology’s Risk

Emerging technologies can provide a competitive edge. Yes, risk comes along with those technologies, but there are ways to at least minimize the likelihood.

Disruptive technologies are a double-edged sword. On one hand, successfully implementing the right disruptive technology can lead to significant competitive advantages through innovation. On the other, emerging technologies present potentially unforeseen risks that can lead to high implementation failure rates. Let’s look at how businesses can avoid major pitfalls when selecting the right disruptive technology and how to more accurately time the deployment of high risk, high reward tech projects.

What are disruptive technologies?

The list of disruptive technologies is seemingly without limit. Machine learning, artificial intelligence (AI) and edge computing are three popular examples many companies are considering today. While any technology has the potential to be “disruptive”, some stand out from the others. Truly disruptive technologies are considered innovative and have the potential to dramatically change how a business operates, interacts with customers, or completes the sale of a product or service. These are major evolutions to a business that can create brand-new revenue streams — or help streamline current processes that save time and/or money.

Of course, the catch is that disruptive technologies are known to be both expensive and difficult to implement. If this weren’t the case, everyone would do it. Therefore, it’s important to remember that disruptive technologies are not for the faint of heart. Yet, if properly planned, a successful implementation can be a true game changer.

Find the right technical expertise

One key to a successful disruptive technology implementation is to anticipate potential problems that are likely to emerge during the implementation process. It cannot be stressed enough how important it is to have the right skillset from an IT architecture perspective when planning to integrate disruptive technologies into your infrastructure. This is often where failures happen, because the right people aren’t involved at this stage of the game. Two incorrect decisions often occur during the architecture phase. One is to lean on in-house architects to learn the new technology and then come up with an implementation plan to integrate it into an infrastructure they’re very familiar with. The other is to bring in external technical consultants who have a deeper knowledge of the disruptive technology but do not have intimate insight into the business’s existing infrastructure architecture.

As you can imagine, a healthy understanding of how technology is used to facilitate current processes combined with a technical background of the emerging technology is beneficial. Thus, implementations are usually more successful when internal and external resources work in tandem to accomplish the same goal. As many of you are probably aware, this is easier said than done. That means it may take some time to find the right external technical resources that mesh well with in-house architecture staff.

Timing the implementation is key

Correctly timing the implementation of a disruptive technology is another critical deployment aspect that often gets overlooked. When dealing with cutting-edge digital tools, there’s a finite window between implementing a technology that’s not quite ready for production and one that has matured to the point where it’s no longer disruptive from a competitive advantage perspective. Unfortunately, there’s no crystal ball that can predict the perfect moment to implement these types of technologies. Instead, a significant amount of research must be put in ahead of time to verify the technology can accomplish exactly what the business requires within the guidelines of a well-established IT roadmap. […] Read more »

 

How to prioritize security and avoid the top 10 IoT stress factors

The Internet of Things (IoT) is transforming our homes, businesses and public spaces – mostly for the better – but without proper precautions IoT devices can be an attractive target for malicious actors and cyberattacks.

Security threats involving IoT devices often stem from the fact that many IoT devices have single-purpose designs and may lack broader capabilities to defend themselves in a hostile environment. For example, devices such as doorbells, toasters and washing machines frequently do not contain as much storage, memory and processing capability as a typical laptop computer.

By some estimates, there will be more than 21 billion connected devices on the market by 2025, and the proliferation of this technology will only continue to impact our daily lives in a multitude of ways.

But as more connected products are invented and introduced for both business and consumer use, the security challenges related to these connected IoT devices continue to increase, in part due to a lack of consistent security controls. Even if the networks that the connected devices operate on are considered secure, IoT device security is still only as good as the security of the products themselves.

Because the IoT industry has predominantly lacked a globally recognized, repeatable standard for manufacturers, channel owners, regulators and other key parties to turn to, IoT device security continues to be a major challenge. It’s therefore especially important for companies to not only be aware of potential vulnerabilities, but also to take action to build more secure products – before they ever get into the hands of the end user.

Below are 10 design and development approaches/best practices that can help mitigate IoT security issues and ensure that IoT delivers on its promise to improve our lives.

10. Hiding live ports: The best practice for hiding live ports is actually not to hide them at all – and definitely not to rely on easy-to-peel-off plastic covers. Live debug ports such as USB and JTAG may give a hacker access to the firmware of the device. If live debug ports are required, they should be disabled so that only authorized systems/users can re-enable them. If concealment is nevertheless required, make the ports as difficult as possible to access and avoid plastic caps whenever possible.

9. Common/default passwords: Most people don’t change their passwords from the default, making it easy for hackers to gain access to devices. In the future, passwords may be replaced altogether, but for now, they should at least be unique, random and distinct for each consumer device. During setup, users should be prompted to change the password the device was shipped with to further bolster security.
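Where unique per-device passwords are needed at manufacture or setup time, a cryptographically secure random generator should be used rather than a predictable scheme. As a minimal sketch (the function name and alphabet are illustrative assumptions, not from the article), Python’s secrets module can produce such passwords:

```python
import secrets
import string

def generate_device_password(length: int = 16) -> str:
    """Generate a random per-device password from letters, digits and symbols.

    secrets.choice draws from the OS's cryptographically secure RNG, so the
    result is unpredictable, unlike random.choice.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call yields a distinct, unpredictable password suitable as a
# unique factory default that the user is then prompted to change.
print(generate_device_password())
```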

8. Relying solely on network security: Introducing layers of security is a great way to avoid compromised data. The security principle of defense in depth dictates that when multiple layers are in place, attacks are more effectively thwarted. While network security is helpful, a device that relies on it alone to protect its communications leaves its information open to compromise.

7. Sending without encryption: Avoid sending any information without encryption, because without it, communications between devices are simply not secure. Everything should be encrypted, with approved encryption algorithms, so that when information leaves the device and goes to the server, internet, or any other access point in a home, it is protected from unauthorized access and modification. For IoT devices communicating over wireless technologies, it is important to also encrypt application data within the network tunnel. Adding application security to the mix is highly recommended and preferred to help mitigate these issues.

6. Overriding security and certificate checks: Simply put – small, compact digital certificates are a proven way for IoT devices to trust each other and for servers to authenticate IoT devices. However, oftentimes, proper certificate validation at the IoT device is overridden, diluted or negated, nullifying the security provided by digital certificates. This can lead to undesired security consequences, such as man-in-the-middle attacks. Keep these checks as part of your security measures to ensure certificates are up to date, valid and issued by trusted authorities.

5. Public visibility: There is no need for a device to advertise unique information, such as a serial number, that allows it to be identified over unsecure connections, whether Wi-Fi, Bluetooth or beacons. The best practice is to be incognito and employ randomization techniques over the airwaves. This “less is more” approach is necessary to protect privacy and prevent tracking. When device-identifying information is needed for device discovery, registration and verification, it should be exchanged securely and only with authenticated and authorized devices. A local display may need to be made available for configuration; it is important to protect such configuration interfaces with secure unique passwords, tokens or other standardized authentication mechanisms.

4. Access of devices’ private key: The security of digital certificates is only guaranteed when the private key is sufficiently protected from disclosure and unauthorized modification. This can be difficult to accomplish on some IoT devices that lack specialized hardware to protect sensitive information. However, today, low-cost and secure elements are available and can be embedded into IoT devices to protect sensitive keys that are injected into these devices at manufacturing time. Today’s technology allows for the size of the key to be reduced and compressed, so that the devices can attest to their identity without revealing private information. Such private information should be kept in secure elements.

3. Blockchain for added security: Blockchain empowers IoT devices to defend themselves in hostile environments by making autonomous decisions with a high degree of confidence. The cryptographically signed transactions allow devices to determine the authenticity of transactions before acting on them. Using such transactions, IoT devices can also assert their ownership, i.e., to whom they belong. So, if a rogue entity attempts to own the device, the IoT device can reject the access attempt. In addition, the distributed data contained in a blockchain is cryptographically hashed and anonymized, providing “out-of-the-box” privacy for devices and the users who interact with them. […] Read more »

 

“Some Devices Allowed” – Secure Facilities Face New RF Threats

When secure facilities say “no devices allowed,” that’s not necessarily the case.

Exceptions are being granted for personal medical devices, health monitors and other operation-associated devices, especially in defense areas where human performance monitoring devices can be core to the mission.

The problem: most of these devices have radio frequency (RF) communication interfaces such as Bluetooth, Bluetooth Low Energy (BLE), Wi-Fi, Cellular, IoT or proprietary protocols that can make them vulnerable to RF attacks, which by their nature are “remote attacks” from beyond the building’s physical perimeters.

Questions are now being asked about the ability to allow some devices in some areas, some of the time, resulting in the need for stratified policy and sophisticated technology which can accurately distinguish between approved and unapproved electronic devices in secure areas.

The invisible dangers of RF devices

RF-enabled devices are prevalent in the enterprise. According to Ericsson’s Internet of Things Forecast, there are 22 billion connected devices, and 15 billion of these devices have radios. Furthermore, as the avalanche of IoT devices grows, cyber threats will become increasingly common.

Wireless devices in the enterprise today include light bulbs, headsets, building control systems, and HVAC systems. Increasingly vulnerable and risky are wearables. Wearables with data-exfiltrating capabilities include Fitbits, smartwatches and other personal devices with embedded radios and a variety of audio/video capture, pairing and transmission capabilities.

Understanding the current policy device landscape

The RF environment has become increasingly complicated over the past five years because more and more devices have RF interfaces that can’t be disabled. Secure facilities with very strict RF device policies are turning the “No Device Policy” into a more stratified approach: a “Some Device Policy.” Examples of a stratified policy include whitelisting devices with RF interfaces such as medical wearables, Fitbits and vending machines. Some companies are geofencing certain areas in facilities, such as Sensitive Compartmented Information Facilities (SCIFs) in defense facilities.

Current policies are outdated

While some government and commercial buildings have secure areas where no cell phones or other RF-emitting devices are allowed, detecting and locating radio-enabled devices is largely based on the honor system or one-time scans for devices. Bad actors do not follow the honor system and one-time scans are just that: one time and cannot monitor 24×7.

Benefits of implementing RF device security policy

In a world where security teams need to detect and locate unauthorized cellular, Bluetooth, BLE, Wi-Fi and IoT devices, there are solutions available and subsequent benefits to enforcing device security policies. […] Read more »

 

Fundamentals Of Cryptography

The mathematics of cryptography

Under the hood, cryptography is all mathematics. For many of the algorithms in development today, you need to understand some fairly advanced mathematical concepts to understand how the algorithms work.

That being said, many cryptographic algorithms in common use today are based on very simple operations. Three operations that show up throughout cryptography are the modulo operator, exclusive-or (XOR) and bitwise shifts/rotations.

The modulo operator

You’re probably familiar with the modulo operator even if you’ve never heard of it by that name. When first learning division, you probably learned about dividends, divisors, quotients and remainders.

When we say X modulo Y or X (mod Y) or X % Y, we want the remainder after dividing X by Y. This is useful in cryptography, since it ensures that a number stays within a certain range of values (between 0 and Y – 1).
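The behavior described above can be sketched in a few lines of Python (the variable names are illustrative):

```python
# The modulo operator returns the remainder of integer division.
dividend, divisor = 17, 5
quotient, remainder = divmod(dividend, divisor)
print(f"{dividend} = {divisor} * {quotient} + {remainder}")  # 17 = 5 * 3 + 2

# Whatever the input, x % y always lands in the range 0 .. y - 1,
# which is why cryptographic algorithms use it to keep values bounded.
for x in (0, 7, 23, 10**9):
    assert 0 <= x % 26 <= 25
```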

Exclusive-or

In English, when we say OR, we are usually using the inclusive or. Saying that you want A or B probably means that you’re willing to accept A, B or both A and B.

Cryptography uses the exclusive or, where A XOR B equals A or B but not both, as the following truth table shows:

A | B | A XOR B
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0

Notice that anything XOR itself is zero, and anything XOR zero is itself.

XOR is also useful in cryptography because it is equivalent to addition modulo 2: 1 + 0 = 1, and 1 + 1 = 2 ≡ 0 (mod 2), the same result as 0 + 0. XOR is one of the most commonly used mathematical operators in cryptography.
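A short Python sketch can confirm the equivalence, and also shows why XOR is so convenient for ciphers: applying the same key twice restores the original value (the sample values below are arbitrary):

```python
# XOR of single bits equals addition modulo 2.
for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == (a + b) % 2

# Applied bitwise, XOR is its own inverse: masking a value with a key
# twice recovers the original value.
plaintext = 0b10110010
key = 0b01101100
ciphertext = plaintext ^ key
assert ciphertext ^ key == plaintext
```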

Bitwise shifts

A bitwise shift is exactly what it sounds like: a string of bits is shifted so many places to the left or right. In cryptography, this shift is usually a rotation, meaning that anything that “falls off” one end of the string moves around to the other.

The bitwise shift is another operator with special meaning in base 2. In binary, shifting to the left multiplies by a power of two, while shifting to the right divides by a power of two.
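Because Python integers are unbounded, a rotation has to mask the result to a fixed width so that bits “falling off” one end reappear at the other. A minimal sketch (the rotl helper and the 8-bit width are illustrative assumptions):

```python
def rotl(value: int, shift: int, width: int = 8) -> int:
    """Rotate a width-bit integer left; bits that fall off wrap around."""
    shift %= width
    mask = (1 << width) - 1
    return ((value << shift) | (value >> (width - shift))) & mask

# A plain left shift multiplies by a power of two (until bits overflow):
assert 0b0001 << 2 == 0b0100  # 1 * 2**2 = 4

# A rotation preserves every bit: 1000_0001 rotated left by 1 -> 0000_0011
assert rotl(0b1000_0001, 1) == 0b0000_0011
```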

Common structures in cryptography

While cryptographic algorithms within a “family” can be similar, algorithms from different families are often very different. Even so, some cryptographic structures show up across multiple different “families.”

Encryption operations and key schedules

Many symmetric encryption algorithms are actually two different algorithms that are put together to achieve the goal of encrypting the plaintext. One of these algorithms implements the key schedule, while the other performs the encryption operations.

In symmetric cryptography, both the sender and the recipient have a shared secret key. However, this key is often too short to be used for the complete encryption process since many algorithms have multiple rounds. A key schedule is designed to take the shared secret as a seed and use it to create a set of round keys, which are then fed into the algorithm that actually performs the encryption.
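As an illustration only (this is not how AES or any real cipher derives its round keys), a key schedule can be sketched as a function that expands one shared secret into a distinct key per round:

```python
import hashlib

def toy_key_schedule(secret: bytes, rounds: int, key_len: int = 16) -> list:
    """Illustrative only: expand a short shared secret into one key per round
    by hashing the secret together with the round number. Real ciphers use
    carefully designed schedules, not an ad hoc hash construction like this."""
    return [
        hashlib.sha256(secret + round_number.to_bytes(4, "big")).digest()[:key_len]
        for round_number in range(rounds)
    ]

# One short secret yields ten distinct round keys for a ten-round cipher.
round_keys = toy_key_schedule(b"shared secret", rounds=10)
assert len(round_keys) == 10
assert len(set(round_keys)) == 10  # every round key is distinct
```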

The other half of the encryption algorithm is the part that converts the plaintext to a ciphertext. This is typically accomplished by using multiple iterations or “rounds” of the same set of encryption operations. Each round takes a round key from the key schedule as input, meaning that the operations performed in each round are different.

The Advanced Encryption Standard (AES) is a classic example of an encryption algorithm with separate parts implementing the encryption operations and key schedule, as shown above. The different variants of AES (AES-128, AES-192, and AES-256) all have a similar encryption process (with different numbers of rounds) but have different key schedules to convert the various key lengths to 128-bit round keys.

Feistel networks

A Feistel network is a cryptographic structure designed to allow the same algorithm to perform both encryption and decryption. The only difference between the two processes is the order in which round keys are used.

An example of a Feistel network is shown in the image above. Notice that in each round, only the left half of the input is transformed and the two halves switch sides at the end of each round. This structure is essential to making the Feistel network reversible.

Looking at the first round (of both encryption and decryption), we see that the right side of the input and the round key are used as inputs to the Feistel function, F, to produce a value that is XORed with the left side of the input. This is significant because the output of F in the last round of encryption and the first round of decryption are exactly the same. Both use the same round key and the same value of Ln+1 as input. […] Read more »
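The reversibility argument above can be demonstrated with a toy Feistel network in Python (the round function is deliberately trivial and insecure, and all names here are illustrative, not from any real cipher):

```python
def feistel_round_function(half: int, round_key: int) -> int:
    # Deliberately simple, insecure F, used only to show the structure.
    return ((half * 31) ^ round_key) & 0xFF

def feistel(left: int, right: int, round_keys: list) -> tuple:
    """Run a Feistel network over two 8-bit halves.

    Each round XORs F(right, key) into the left half, then swaps the halves.
    Decryption is the same code with the round keys in reverse order.
    """
    for key in round_keys:
        left, right = right, left ^ feistel_round_function(right, key)
    return left, right

keys = [0x3C, 0xA5, 0x0F, 0x77]
ciphertext = feistel(0x12, 0x34, keys)
# Swapping the halves and reversing the key order undoes the encryption,
# because each round's XOR with F cancels itself out.
recovered = feistel(*ciphertext[::-1], keys[::-1])
assert recovered[::-1] == (0x12, 0x34)
```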