How We’ll Conduct Algorithmic Audits in the New Economy

Algorithms are the heartbeat of applications, but they may not be perceived as entirely benign by their intended beneficiaries.

Most educated people know that an algorithm is simply any stepwise computational procedure. Most computer programs are algorithms of one sort or another. Embedded in operational applications, algorithms make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings — encroaching on a customer’s privacy, refusing them a home loan, or targeting them with a barrage of objectionable solicitations — stakeholders’ understandable reaction may be to swat back in anger, and possibly with legal action.

Regulatory mandates are starting to require algorithm auditing

Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes, especially those powered by artificial intelligence (AI), deep learning (DL), and machine learning (ML).

Many of these concerns revolve around the possibility that algorithmic processes can unwittingly inflict racial biases, privacy encroachments, and job-killing automations on society at large, or on vulnerable segments thereof. Surprisingly, some leading tech industry execs even regard algorithmic processes as a potential existential threat to humanity. Other observers see ample potential for algorithmic outcomes to grow increasingly absurd and counterproductive.

Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, effectively anonymous cavalcade of programmers over many years. That anonymity — coupled with algorithms’ daunting size, complexity, and obscurity — presents a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions?

Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation (GDPR) may force your hand in this regard. GDPR restricts “automated individual decision-making” that “significantly affects” EU citizens.

Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data — including behavior, location, movements, health, interests, preferences, economic status, and so on — into automated decisions. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision. And that requires that an audit log be kept for review and that auditing tools support rollup of algorithmic decision factors.
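To make that concrete: the kind of per-decision record such an audit log would need might look like the following sketch. The schema and field names here are hypothetical illustrations, not anything GDPR itself prescribes.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    """One reviewable record per automated decision (hypothetical schema)."""
    subject_id: str
    model_version: str
    decision: str
    # Every input factor and its value at decision time, so an auditor
    # can later roll up exactly what drove the outcome.
    factors: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to one JSON line for an append-only audit log."""
        return json.dumps(asdict(self))

record = DecisionAuditRecord(
    subject_id="applicant-123",
    model_version="loan-scoring-2.4",
    decision="declined",
    factors={"income": 28000, "postcode_risk": 0.81, "credit_history_years": 2},
)
line = record.to_log_line()
```

Because each line captures the inputs alongside the outcome and model version, a reviewer can reconstruct an individual decision without rerunning the model.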

Considering how influential GDPR has been on other privacy-focused regulatory initiatives around the world, it wouldn’t be surprising to see laws and regulations impose these sorts of auditing requirements on businesses operating in most industrialized nations before long.

For example, US federal lawmakers introduced the Algorithmic Accountability Act in 2019 to require companies to survey and fix algorithms that result in discriminatory or unfair treatment.

Anticipating this trend by a decade, the US Federal Reserve’s SR 11-7 guidance on model risk management, issued in 2011, mandates that banking organizations conduct audits of ML and other statistical models in order to be alert to the possibility of financial loss due to algorithmic decisions. It also spells out the key aspects of an effective model risk management framework, including robust model development, implementation, and use; effective model validation; and sound governance, policies, and controls.

Even if your organization is not responding to any specific legal or regulatory requirement to root out evidence of unfairness, bias, and discrimination in its algorithms, auditing may be prudent from a public relations standpoint. If nothing else, it would signal enterprise commitment to ethical guidance that encompasses application development and machine learning DevOps practices.

But algorithms can be fearsomely complex entities to audit

CIOs need to get ahead of this trend by establishing internal practices focused on algorithm auditing, accounting, and transparency. Organizations in every industry should be prepared to respond to growing demands that they audit the complete set of business rules and AI/DL/ML models that their developers have encoded into any processes that impact customers, employees, and other stakeholders.

Of course, that can be a tall order to fill. For example, GDPR’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Compounding the opacity is the fact that many algorithms — be they machine learning models, convolutional neural networks, or whatever — are authored by an ever-changing cast of programmers over many years, and their daunting size, complexity, and obscurity make accountability a thorny problem.

Most organizations — even the likes of Amazon, Google, and Facebook — might find it difficult to keep track of all the variables encoded into their algorithmic business processes. What could prove even trickier is the requirement that they roll up these audits into plain-English narratives that explain to a customer, regulator, or jury why a particular algorithmic process took a specific action under real-world circumstances. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding.
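As a rough illustration of what such a rollup could look like, here is a toy sketch that surfaces a decision’s highest-weight factors as a one-sentence narrative. The function name and weights are hypothetical, and a real explanation must be faithful to the actual model rather than a cosmetic summary.

```python
def explain_decision(decision: str, factor_weights: dict, top_n: int = 2) -> str:
    """Turn a decision's factor weights into a one-sentence plain-English summary.

    factor_weights maps factor name -> signed contribution; the sketch simply
    surfaces the largest-magnitude contributors in readable form.
    """
    top = sorted(
        factor_weights.items(), key=lambda kv: abs(kv[1]), reverse=True
    )[:top_n]
    reasons = " and ".join(f"{name} (weight {weight:+.2f})" for name, weight in top)
    return f"The application was {decision} chiefly because of {reasons}."

summary = explain_decision(
    "declined",
    {"credit_history_years": -0.62, "income": -0.31, "postcode_risk": 0.08},
)
```

Even this trivial rollup shows the tension the article describes: the sentence is readable precisely because it discards most of the detail an auditor might later need.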

Throwing more algorithm experts at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. Explaining what goes on inside an algorithm is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. One can’t easily trace their precise path to a final answer.

Algorithmic auditing is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware layers.

Most of the people you’ll be training to explain this stuff may not know a machine-learning algorithm from a hole in the ground. More often than we’d like to believe, there will be no single human expert — or even (irony alert) algorithmic tool — that can frame a specific decision-automation narrative in simple, but not simplistic, English. Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance — such as a legal proceeding, contractual dispute, or showstopping technical glitch — will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory.

Establishing a standard approach to algorithmic auditing

CIOs should recognize that they don’t need to go it alone on algorithm accounting. Enterprises should be able to call on independent third-party algorithm auditors. Auditors may be called on to review algorithms prior to deployment as part of the DevOps process, or post-deployment in response to unexpected legal, regulatory, and other challenges.

Some specialized consultancies offer algorithm auditing services to private and public sector clients. One such firm describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.” It provides enterprise-wide assessments of AI liabilities and model governance practices; AI incident detection and response; model- and project-specific risk certifications; and regulatory and compliance guidance. It also trains clients’ technical, legal, and risk personnel to perform algorithm audits.

O’Neil Risk Consulting and Algorithmic Auditing: ORCAA describes itself as a “consultancy that helps companies and organizations manage and audit algorithmic risks.” It works with clients to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. It helps clients institute “early warning systems” that flag when a problematic algorithm (ethical, legal, reputational, or otherwise) is in development or in production, and escalate the matter to the relevant parties for remediation. It serves as an expert witness to assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. It helps organizations develop strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools. It works with regulators to translate fairness laws and rules into specific standards for algorithm builders. And it trains client personnel on algorithm auditing.

Currently, there are few hard-and-fast standards in algorithm auditing. What gets included in an audit and how the auditing process is conducted are more or less defined by every enterprise that undertakes it, or by the specific consultancy being engaged to conduct it. Looking ahead to possible future standards in algorithm auditing, Google Research and OpenAI teamed with a wide range of universities and research institutes last year to publish a research study that recommends third-party auditing of AI systems. The paper also recommends that enterprises:

  • Develop audit trail requirements for “safety-critical applications” of AI systems;
  • Conduct regular audits and risk assessments associated with the AI-based algorithmic systems that they develop and manage;
  • Institute bias and safety bounties to strengthen incentives and processes for auditing and remediating issues with AI systems;
  • Share audit logs and other information about incidents with AI systems through their collaborative processes with peers;
  • Share best practices and tools for algorithm auditing and risk assessment; and
  • Conduct research into the interpretability and transparency of AI systems to support more efficient and effective auditing and risk assessment.

Other recent AI industry initiatives relevant to standardization of algorithm auditing include:

  • Google published an internal audit framework that is designed to help enterprise engineering teams audit AI systems for privacy, bias, and other ethical issues before deploying them.
  • AI researchers from Google, Mozilla, and the University of Washington published a paper that outlines improved processes for auditing and data management to ensure that ethical principles are built into DevOps workflows that deploy AI/DL/ML algorithms into applications.
  • The Partnership on AI published a database to document instances in which AI systems fail to live up to acceptable anti-bias, ethical, and other practices.


CIOs should explore how best to institute algorithmic auditing in their organizations’ DevOps practices…


Meet Andrea Blubaugh: Cloud Expert of the Month – February 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Andrea Blubaugh.

Andrea has more than 15 years of experience facilitating the design, implementation and ongoing management of data center, cloud and WAN solutions. Her reputation for architecting solutions for organizations of all sizes and verticals – from Fortune 100 to SMBs – earned her numerous awards and honors. With a specific focus on the mid to enterprise space, Andrea works closely with IT teams as a true client advocate, consistently meeting, and often exceeding expectations. As a result, she maintains strong client and provider relationships spanning the length of her career.

When did you join Cloud Girls and why?  

Wow, it’s been a long time! I believe it was 2014 or 2015 when I joined Cloud Girls. I had come to know Manon through work and was impressed by her and excited to join a group of women in the technology space.

What do you value about being a Cloud Girl?  

Getting to know and develop friendships with the fellow Cloud Girls over the years has been a real joy. It’s been a great platform for learning on both a professional and personal level.

What advice would you give to your younger self at the start of your career?  

I would reassure my younger self in her decisions and encourage her to keep taking risks. I would also tell her not to sweat the losses so much. They tend to fade pretty quickly.

What’s your favorite inspirational quote?  

“Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do, so throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover.”  –Mark Twain

What one piece of advice would you share with young women to encourage them to take a seat at the table?  

I was very fortunate early on in my career to work for a startup whose leadership saw promise in my abilities that I didn’t yet see myself. I struggled with the decision to take a leadership role as I didn’t feel “ready” or that I had the right or enough experience. I received some good advice that I had to do what ultimately felt right to me, but that turning down an opportunity based on a fear of failure wouldn’t ensure there would be another one when I felt the time was right. My advice is if you’re offered that seat, and you want that seat, take it.

What’s one item on your bucket list and why?…



What types of cybersecurity skills can you learn in a cyber range?

What is a cyber range?

A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.

A cyber range is a valuable tool because it provides experience with using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description about using a particular tool or handling a certain scenario, a cyber range allows students to do it themselves.

What skills can you learn in a cyber range?

A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.

SIEM, IDS/IPS and firewall management

Deploying certain cybersecurity solutions — such as security information and event management (SIEM), intrusion detection/prevention systems (IDS/IPS) and firewalls — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.

A cyber range can walk through the steps of properly configuring the most common solutions. These include deployment locations, configuration settings and the rules and policies used to identify and block potentially malicious content.
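As a tiny illustration of the kind of check such an exercise teaches, the sketch below flags overly permissive firewall rules. The rule format is hypothetical; real firewalls have richer policy languages, but the auditing idea is the same.

```python
# Hypothetical rule format: (action, source, destination, port)
RULES = [
    ("allow", "10.0.0.0/24", "10.0.1.5", 443),
    ("allow", "any", "any", "any"),   # overly permissive: should be flagged
    ("deny",  "any", "any", "any"),   # a default-deny catch-all is fine
]

def audit_rules(rules):
    """Flag 'allow' rules that match any source, destination, and port."""
    findings = []
    for index, (action, src, dst, port) in enumerate(rules):
        if action == "allow" and (src, dst, port) == ("any", "any", "any"):
            findings.append((index, "allow-any rule: tighten source/destination/port"))
    return findings

issues = audit_rules(RULES)
```

A range exercise would then have the student tighten the flagged rule and re-run the check, reinforcing why rule review belongs in routine firewall management.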

Incident response

After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.

A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.
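For instance, one of the first incident-response steps a range exercise might walk through is pulling indicators of compromise (IoCs) out of raw logs and matching them against a threat-intelligence list. A minimal sketch, with made-up log lines and a made-up bad-IP feed:

```python
import re

# Hypothetical threat-intel feed entries (documentation-range addresses)
KNOWN_BAD_IPS = {"203.0.113.7"}

LOG = """\
2021-02-03T10:01:12 accepted connection from 198.51.100.4
2021-02-03T10:01:15 accepted connection from 203.0.113.7
2021-02-03T10:02:30 outbound POST to 203.0.113.7/upload
"""

def extract_iocs(log_text: str):
    """Pull IPv4 addresses out of raw log lines and flag known-bad matches."""
    ips = set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", log_text))
    return ips, ips & KNOWN_BAD_IPS

seen_ips, hits = extract_iocs(LOG)
```

Real investigations layer in hashes, domains and timeline reconstruction, but this is the shape of the triage loop a responder practices in a range.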

Operating system management: Linux and Windows

Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.

A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.
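A typical exercise of this kind boils down to comparing observed settings against a hardening baseline and reporting the drift. A minimal sketch, with hypothetical setting names and values:

```python
# Hypothetical hardening baseline and observed settings for one host.
BASELINE = {
    "password_min_length": 14,
    "ssh_root_login": "no",
    "firewall_enabled": True,
}

observed = {
    "password_min_length": 8,   # weaker than the baseline requires
    "ssh_root_login": "no",
    "firewall_enabled": True,
}

def config_drift(baseline: dict, actual: dict) -> dict:
    """Report every setting whose observed value differs from the baseline."""
    return {
        key: {"expected": want, "found": actual.get(key)}
        for key, want in baseline.items()
        if actual.get(key) != want
    }

drift = config_drift(BASELINE, observed)
```

The range then demonstrates the repercussion side: what an attacker can do against the drifted setting before the student corrects it.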

Endpoint controls and protection

As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.

Penetration testing

This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.

A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
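The very first step of that walkthrough, discovering which TCP ports accept connections, can be sketched in a few lines. Only ever run this against systems you are authorized to test, such as the range’s own virtual machines:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect probe: the simplest check a penetration test starts with."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising on failure
        return sock.connect_ex((host, port)) == 0

def scan(host: str, ports) -> list:
    """Return the subset of ports accepting TCP connections on host."""
    return [p for p in ports if port_open(host, p)]
```

For example, `scan("127.0.0.1", range(8000, 8010))` on a lab machine shows which of those ports have listeners; dedicated tools add service fingerprinting and stealthier probe types on top of this idea.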

Network management

Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.

A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.
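As a small example of the data-analysis side of that work, the sketch below summarizes round-trip-time probes into a loss rate and an average latency. The sample values are made up; a range exercise would have the student gather them with real tools first.

```python
from statistics import mean

def link_health(rtt_samples_ms):
    """Summarize round-trip-time probes: loss rate and average latency.

    None entries represent probes that timed out (lost packets).
    """
    sent = len(rtt_samples_ms)
    received = [r for r in rtt_samples_ms if r is not None]
    loss_pct = 100.0 * (sent - len(received)) / sent
    return {
        "loss_pct": loss_pct,
        "avg_rtt_ms": mean(received) if received else None,
    }

health = link_health([12.1, 11.8, None, 13.0])
```

Turning raw probe data into a few comparable numbers is exactly the diagnostic habit network-management exercises aim to build.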

Malware analysis

Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization’s incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.

Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
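The “searching for strings” step mentioned above is easy to sketch: it mimics the classic `strings` triage pass, which finds runs of printable text inside a binary. The sample bytes below are made up for illustration.

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Find runs of printable ASCII at least min_len bytes long,
    the first static-triage step for an unknown binary."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [match.decode("ascii") for match in re.findall(pattern, data)]

# Fabricated sample: junk bytes around an embedded URL and an API name.
sample = b"\x00\x01MZ\x90http://203.0.113.7/payload\x00\xffGetProcAddress\x00"
found = extract_strings(sample)
```

Spotting a hard-coded URL or a suspicious API name this way tells the analyst where to focus before moving on to a debugger; a range lets students practice that judgment on real samples in a safe sandbox.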

Threat hunting

Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization’s environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.

A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.

Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
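One common hunting technique a range can teach, least-frequency-of-occurrence analysis, is simple to sketch: names seen on very few hosts deserve a closer look, because fleet-wide software is common and masquerading malware is rare. The process names below are fabricated.

```python
from collections import Counter

# Hypothetical process names gathered from endpoints across the fleet.
observed = (
    ["svchost.exe"] * 400
    + ["explorer.exe"] * 350
    + ["chrome.exe"] * 240
    + ["svch0st.exe"] * 1   # rare near-lookalike worth investigating
)

def rare_processes(names, threshold: int = 5):
    """Return names seen at most `threshold` times across the fleet."""
    counts = Counter(names)
    return [name for name, n in counts.items() if n <= threshold]

suspects = rare_processes(observed)
```

A rare name is not proof of compromise, which is exactly the false-positive judgment call that hands-on hunting practice develops.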

Computer forensics

Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics…


3 key reasons why SOCs should implement policies over security standards

In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.

How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.

Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and three ways they can be dramatically improved with new approaches and tooling.

How Security Standards Look Today

Security standards affect all aspects of a business, from directly shaping development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.

There are many challenges that come with this approach, all rooted in three main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the reactive, rather than proactive, issue handling that security standards promote.

We can greatly improve the security landscape by directly addressing these key issues:

1. Information Security and Compliance is Siloed

In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.

Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.

If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.

2. Security Standards are Too Generic

Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.

With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.

Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.

3. Current Practices are Reactive, Rather Than Proactive

The existing gap between dev and security teams, along with the generic nature of security standards, prevents organizations from being truly proactive when it comes to security measures.

Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.

The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.

Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.

DevSecOps – A New Way Forward

The emergence of DevSecOps suggests that generic top-to-bottom security standards may soon be less important than they are now.

First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.

In outlining the rise and success of DevSecOps, a recent article gave three defining criteria of a true DevSecOps environment:

  1. Developers are in charge of security testing.
  2. Security experts act as consultants to developers when additional knowledge is required.
  3. Fixing security issues is managed by the development team.

Ongoing security-related issues are owned by the development team.



How to prioritize security and avoid the top 10 IoT stress factors

The Internet of Things (IoT) is transforming our homes, businesses and public spaces – mostly for the better – but without proper precautions IoT devices can be an attractive target for malicious actors and cyberattacks.

Security threats involving IoT devices often stem from the fact that many IoT devices have single-purpose designs and may lack broader capabilities to defend themselves in a hostile environment. For example, a doorbell, a toaster or a washing machine typically contains far less storage, memory and processing capability than a laptop computer.

By some estimates, there will be more than 21 billion connected devices on the market by 2025, and the proliferation of this technology will only continue to impact our daily lives in a multitude of ways.

But as more connected products are invented and introduced for both business and consumer use, the security challenges related to these connected IoT devices continue to increase, in part due to a lack of consistent security controls. Even if the networks that the connected devices operate on are considered secure, IoT device security is still only as good as the security of the products themselves.

Because the IoT industry has predominantly lacked a globally recognized, repeatable standard for manufacturers, channel owners, regulators and other key parties to turn to, IoT device security continues to be a major challenge. It’s therefore especially important for companies to not only be aware of potential vulnerabilities, but also to take action to build more secure products – before they ever get into the hands of the end user.

Below are 10 design and development approaches/best practices that can help mitigate IoT security issues and ensure that IoT delivers on its promise to improve our lives.

10. Hiding live ports: The best practice for hiding live ports is to actually not hide them at all — and definitely not to use easy-to-peel-off plastic covers. Live debug ports such as USB and JTAG may provide a hacker access into the firmware of the device. If live debug ports are required, they should be disabled so that only authorized systems/users can re-enable them. However, if hiding them is required, it’s important to make it as difficult as possible for someone to access them — and to avoid plastic caps whenever possible.

9. Common/default passwords: Most people don’t change their passwords from the default, making it easy for hackers to gain access to devices. In the future, passwords may be replaced altogether, but for now, they should at least be unique, random and distinct for each consumer device. During setup, users should be prompted to change the password the device was shipped with to further bolster security.
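Generating a unique, cryptographically random password per unit, rather than shipping one default, is straightforward at manufacturing time; a minimal sketch:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def device_password(length: int = 16) -> str:
    """Generate a unique per-device password using a CSPRNG,
    instead of shipping every unit with the same default."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw1, pw2 = device_password(), device_password()
```

The `secrets` module (rather than `random`) matters here: provisioning credentials is a security context, so the randomness must be unpredictable to an attacker.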

8. Relying solely on network security: Introducing layers of security can be a great way to avoid compromised data. The security principle of defense in depth dictates that when multiple layers are in place, attacks are more effectively thwarted. While network security is helpful, if the device is solely reliant on this for communication, it can lead to further compromised information.

7. Sending without encryption: Avoid sending any information without encryption, because without it, communications between devices are simply not secure. Everything should be encrypted, with approved encryption algorithms, so that when information leaves the device and goes to the server, internet, or any other access point in a home, it is protected from unauthorized access and modification. For IoT devices communicating over wireless technologies, it is important to also encrypt application data within the network tunnel. Adding application security to the mix is highly recommended and preferred to help mitigate these issues.

6. Overriding security and certificate checks: Simply put – small, compact digital certificates are a proven way for IoT devices to trust each other and for servers to authenticate IoT devices. However, oftentimes, proper certificate validation at the IoT device is overridden, diluted or negated, nullifying the security provided by digital certificates. This can lead to undesired security consequences, such as man-in-the-middle attacks. Keep these checks as part of your security measures to ensure certificates are up to date, valid and issued by trusted authorities.
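In Python’s standard `ssl` module, for example, keeping validation intact simply means leaving the secure defaults alone. The sketch below shows those defaults, with the override anti-pattern confined to comments:

```python
import ssl

# The default context verifies the server certificate chain and the hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The anti-pattern this section warns about: overriding validation.
# Never ship the following outside a lab:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
```

Whatever language an IoT stack uses, the lesson is the same: disabling certificate checks to “make it work” silently opens the door to man-in-the-middle attacks.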

5. Public visibility: There is no need for a device to advertise unique information, such as (but not limited to) a serial number, that allows it to be identified over unsecure connections, whether Wi-Fi, Bluetooth or beacons. The best practice is to stay incognito and employ randomization techniques over the airwaves. The “less is more” approach is necessary to protect privacy and prevent tracking. However, when device-identifying information is needed for device discovery, registration and verification, it should be exchanged securely and only with authenticated and authorized devices. A local display may need to be made available for configuration, in which case it is important to protect display configurations with secure unique passwords, tokens or other standardized authentication mechanisms.

4. Access to devices' private keys: The security of digital certificates is guaranteed only when the private key is sufficiently protected from disclosure and unauthorized modification. This can be difficult to accomplish on IoT devices that lack specialized hardware for protecting sensitive information. Today, however, low-cost secure elements are available that can be embedded into IoT devices to protect sensitive keys injected at manufacturing time. Current technology also allows key sizes to be reduced and compressed, so that devices can attest to their identity without revealing private information. Such private information should be kept in secure elements.
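The attestation flow described above, proving possession of a key without revealing it, can be sketched as a challenge-response. This toy uses a symmetric HMAC as a stand-in; a real secure element would hold an asymmetric key (e.g. ECDSA) and expose only signing operations, never the key bytes themselves.

```python
import hashlib
import hmac
import secrets

# Stands in for a private key held inside a secure element; in real
# hardware the key never leaves the chip.
DEVICE_KEY = secrets.token_bytes(32)  # provisioned at manufacturing time

def device_sign(challenge: bytes) -> bytes:
    """Device side: prove possession of the key without revealing it."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, expected_key: bytes) -> bool:
    """Server side: recompute the response and compare in constant time."""
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)   # fresh nonce for every attestation
response = device_sign(challenge)
assert server_verify(challenge, response, DEVICE_KEY)
```

Because the challenge is a fresh nonce each time, a recorded response cannot be replayed later.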

3. Blockchain for added security: Blockchain empowers IoT devices to defend themselves in hostile environments by making autonomous decisions with a high degree of confidence. Cryptographically signed transactions allow devices to determine the authenticity of transactions before acting on them. Using such transactions, IoT devices can also assert their ownership, i.e., to whom they belong; if a rogue entity attempts to take ownership of a device, the device can reject the access attempt. In addition, the distributed data contained in the blockchain is cryptographically hashed and anonymized, providing "out-of-the-box" privacy for devices and the users who interact with them. […] Read more »
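The "verify before acting" behavior described above can be illustrated with a toy hash-chained, signed transaction model. This is a simplified stand-in, not a real blockchain: signing uses an HMAC with a single owner key rather than public-key signatures, and there is no distributed consensus.

```python
import hashlib
import hmac
import json
import secrets

OWNER_KEY = secrets.token_bytes(32)  # stands in for the owner's signing key

def make_tx(prev_hash: str, payload: dict) -> dict:
    """Create a transaction chained to its predecessor and signed."""
    body = {"prev": prev_hash, "payload": payload}
    encoded = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(OWNER_KEY, encoded, hashlib.sha256).hexdigest()
    body["hash"] = hashlib.sha256(encoded).hexdigest()
    return body

def verify_tx(tx: dict) -> bool:
    """Device side: accept a command only if the signature checks out."""
    body = {"prev": tx["prev"], "payload": tx["payload"]}
    encoded = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(OWNER_KEY, encoded, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tx["sig"])

genesis = make_tx("0" * 64, {"cmd": "register", "owner": "alice"})
update = make_tx(genesis["hash"], {"cmd": "unlock"})
assert verify_tx(update)                       # authentic: act on it

update["payload"]["cmd"] = "factory_reset"     # rogue tampering
assert not verify_tx(update)                   # reject the access attempt
```

Each transaction embeds the hash of its predecessor, so rewriting history invalidates every later record as well.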



“Some Devices Allowed” – Secure Facilities Face New RF Threats

When secure facilities say “no devices allowed,” that’s not necessarily the case.

Exceptions are being granted for personal medical devices, health monitors and other operation-associated devices, especially in defense areas where human performance monitoring devices can be core to the mission.

The problem: most of these devices have radio frequency (RF) communication interfaces such as Bluetooth, Bluetooth Low Energy (BLE), Wi-Fi, Cellular, IoT or proprietary protocols that can make them vulnerable to RF attacks, which by their nature are “remote attacks” from beyond the building’s physical perimeters.

Questions are now being asked about the ability to allow some devices in some areas, some of the time, resulting in the need for stratified policy and sophisticated technology which can accurately distinguish between approved and unapproved electronic devices in secure areas.

The invisible dangers of RF devices

RF-enabled devices are prevalent in the enterprise. According to Ericsson's Internet of Things Forecast, there are 22 billion connected devices, and 15 billion of these devices have radios. Furthermore, as the avalanche of IoT devices grows, cyber threats will become increasingly common.

Wireless devices in the enterprise today include light bulbs, headsets, building control systems, and HVAC systems. Increasingly vulnerable and risky are wearables. Wearables with data-exfiltrating capabilities include Fitbits, smartwatches and other personal devices with embedded radios and a variety of audio/video capture, pairing and transmission capabilities.

Understanding the current policy device landscape

The RF environment has become increasingly complicated over the past five years because more and more devices have RF interfaces that can't be disabled. Secure facilities with very strict RF device policies are evolving the "No Device Policy" into a more stratified approach: a "Some Device Policy." Examples of a stratified policy include whitelisting devices with RF interfaces such as medical wearables, Fitbits and vending machines. Some companies are geofencing certain areas in facilities, such as Sensitive Compartmented Information Facilities (SCIFs) in defense facilities.
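A stratified "Some Device Policy" is, at its core, a whitelist keyed by zone. The sketch below is a toy model; the zone names and device types are illustrative, not from any real facility's policy.

```python
# Per-zone whitelist: a device type is allowed only where listed.
POLICY = {
    "lobby":  {"phone", "smartwatch", "medical-wearable"},
    "office": {"smartwatch", "medical-wearable"},
    "scif":   {"medical-wearable"},  # tightest zone: approved medical devices only
}

def allowed(device_type: str, zone: str) -> bool:
    """Return True if this device type is whitelisted for this zone."""
    return device_type in POLICY.get(zone, set())

# A medical wearable core to the mission is permitted in the SCIF,
# but a consumer smartwatch with audio capture is not.
assert allowed("medical-wearable", "scif")
assert not allowed("smartwatch", "scif")
```

In practice the lookup would be driven by continuous RF monitoring that fingerprints devices, rather than by self-reported device types.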

Current policies are outdated

While some government and commercial buildings have secure areas where no cell phones or other RF-emitting devices are allowed, detecting and locating radio-enabled devices is largely based on the honor system or on one-time scans. Bad actors do not follow the honor system, and one-time scans are just that: they happen once and cannot monitor 24×7.

Benefits of implementing RF device security policy

In a world where security teams need to detect and locate unauthorized cellular, Bluetooth, BLE, Wi-Fi and IoT devices, there are solutions available and clear benefits to enforcing device security policies: […] Read more »


“You can’t quantify business risk with RAG color coded scores”

A recent study by Forrester Research shows that 97% of Indian organizations experienced at least one business-impacting cyberattack in the past 12 months. Yet only four in 10 security leaders in India have a clear picture of how at risk, or how secure, their organizations are. In a chat with CISO MAG, Adam Palmer, Chief Cybersecurity Strategist at Tenable, tells us how security leaders should quantify business risk and assess the attack surface using accurate, more insightful metrics like the cyber exposure score.

Palmer has over 20 years of cybersecurity experience, including executive positions at large cybersecurity vendors and leadership of the U.N. Global Program against cybercrime. Before joining Tenable, Palmer was Global Director, Cybersecurity Risk & Controls at Banco Santander, the largest bank in the EU and Latin America.

Palmer began his career as a U.S. military officer focused on cybercrime cases. After the military, he held a senior operational role creating the cybersecurity program for the [.]ORG top-level Internet domain.

Edited excerpts of the interview:

By Brian Pereira, Principal Editor, CISO MAG

Your research shows that only 4 in 10 security leaders know how secure or at risk they are. How does an organization quantify business risk due to these business-impacting cyber attacks? Are there any frameworks or tools to do this?

I worked on this idea for two years, and my prior job at the bank (Banco Santander) was about quantifying risk: moving from qualitative to quantitative analysis. Many security leaders use heat matrices, the red, amber, green (RAG) scores, to try to describe risk to business leaders. This is really IT talk. Every organization I worked at did this. It doesn't do anything to really quantify the risk or help people understand the reduction in risk. How can a business leader make a decision based on a color in a RAG score? There is a gap in communication between how IT people speak (technically or ambiguously) and what business leaders expect: a quantitative understanding of risk.

A cyber exposure score, which is what Tenable creates, is a powerful tool because it gives you a quantifiable number.
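Tenable's cyber exposure score is proprietary, but the shift from RAG colors to numbers that Palmer describes can be illustrated with Annualized Loss Expectancy (ALE), a standard quantitative risk formula. All figures below are illustrative assumptions, not data from the interview.

```python
def ale(asset_value: float, exposure_factor: float, annual_rate: float) -> float:
    """ALE = SLE * ARO, where SLE = asset value * exposure factor.

    exposure_factor: fraction of the asset's value lost per incident.
    annual_rate: expected number of successful incidents per year (ARO).
    """
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# Risk of a customer-database breach, before and after a control
# that halves the expected frequency of successful attacks.
before = ale(asset_value=5_000_000, exposure_factor=0.4, annual_rate=0.3)
after = ale(asset_value=5_000_000, exposure_factor=0.4, annual_rate=0.15)

print(f"Risk before control: ${before:,.0f}/yr")   # $600,000/yr
print(f"Risk after control:  ${after:,.0f}/yr")    # $300,000/yr
print(f"Risk reduction:      ${before - after:,.0f}/yr")
```

Unlike a color, a dollar figure per year lets a business leader weigh the cost of a control directly against the risk reduction it buys.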

Why haven’t security leaders been able to do accurate risk assessments for business-impacting cyberattacks?

The heart of it is really the lack of partnership between security and business leaders. There's not enough alignment of metrics and objectives with strategic business priorities. I see organizations reporting risk in very qualitative language, and this is not the language of business leaders. They have to consider industry benchmarking frameworks and report risk accurately to the business, especially in times like today.

Organizations with security and business leaders who are aligned in measuring and managing cybersecurity as a strategic business risk deliver demonstrable results. What would be your recommendations to security leaders to do this security-business alignment? How do they weave cybersecurity into the fabric of business discussions?

A few things are key: linking the security program to business performance, and making sure you have visibility across the entire attack surface. The attack surface has expanded with cloud and even operational technology, and you can't protect what you can't see. And you have to apply business context to your tactical decisions and express that in a quantifiable metric that business leaders understand.

Looking at the global threat landscape, which countries are being targeted the most? And what could be the reasons?

We saw that all the markets had a high percentage of business-impacting events over the last 12 months. 97% of businesses in India reported a cyberattack within the last 12 months. And 74% expect an increase in cyberattacks. Today, we are in a very dynamic business environment, with business and technology closely woven together. The effective business-aligned CISO just can’t focus on technical issues or one part of that threat landscape. They really have to be aligned with the business and elevate themselves as a business-aligned security expert — and be aware of the entire expanded threat landscape.

Specific to India, what does your research show, with respect to the types of businesses being increasingly targeted?

We saw medium and large businesses being attacked. These businesses make India a dynamic and exciting economy, with Digital India and all the technology being used throughout the country, in business and in government. Cybercriminals know where the money is, and they target technology and intellectual property. Given the monetary value and the damage a successful attack can cause, industries such as telecom, health care and finance are all major targets. What we found in this study is that all of these are equal-opportunity targets for cybercriminals.

Your research shows that 67% of security leaders in India say these attacks also involved an operational technology (OT) system. What kind of industries are being targeted within India? Does this also include critical infrastructures like nuclear plants and electricity grids?

This is really an issue of convergence. Automation is now common in the industrial environment, and that environment is converging with the IT environment. It is in critical infrastructure and manufacturing, but it can be in lots of different types of businesses. Think about automated access controls, with all kinds of smart connected devices, and HVAC; some of these use smart connected industrial controllers. We are finding that cybercriminals are attacking these devices and, often, security teams aren't monitoring them satisfactorily. They are using legacy approaches to vulnerability risk management and are not detecting these devices. And the criminals are attacking them. […] Read more »

This article first appeared in CISO MAG.

<Link to CISO MAG site:>

How CTOs Can Innovate Through Disruption in 2020

CTOs and other IT leaders need to invest in innovation to emerge from the current COVID-19 crisis ready for the next opportunities.

Are you ready for 2021’s opportunities? Are you ready for the new business models that will emerge once the COVID-19 coronavirus is behind us? What strategic technology moves will your organization make today to invest in the innovation to bring your enterprise out of the current crisis, stronger and better?

CTOs and other senior technology leaders should now be focusing on these key questions as we enter the second half of 2020. Sure, it was critically important to pivot instantly to enable working from home in the first half of this year. Yes, there’s still work to be done improving the systems that enable employees to work from home, especially since organizations are making many of these arrangements permanent. However, the strategic longer term moves that senior leaders make today are what will help their organizations emerge stronger on the other side of this crisis.

CTOs are now at risk of focusing solely on short-term needs when it is equally important to plan for technology and innovation initiatives that will help their organizations come out of the crisis and meet post-coronavirus challenges, according to a new report from Gartner, How CTOs Should Lead in Times of Disruption and Uncertainty.

Read all our coverage on how IT leaders are responding to the conditions caused by the pandemic.

Disruption is nothing new for technology leaders. In Gartner’s survey of IT leaders, conducted in early 2020 before the coronavirus pandemic struck, 90% said they had faced a “turn” or disruption in the last 4 years, and 100% said they face ongoing disruption and uncertainty. The current crisis may just be the biggest test of the resiliency they have developed in response to those challenges.

“We are hearing from a lot of clients about innovation budgets being slashed, but it’s really important not to throw innovation out the window,” said Gartner senior principal analyst Samantha Searle, one of the report’s authors, who spoke to InformationWeek. “Innovation techniques are well-suited to reducing uncertainty. This is critical in a crisis.”

The impact of the crisis on your technology budget is likely dependent on your industry, Searle said. For instance, technology and financial companies tend to be farther ahead of other companies when it comes to response to the crisis and consideration of investments for the future.

Other businesses, such as retail and hospitality, just now may be considering how to reopen. These organizations are still focused on fulfilling the initial needs around ensuring employees and customers are safe. In response to the short-term crisis, CTOs and other IT leaders were likely to focus on things like customer and employee safety, employee productivity, supply chain stabilization, and providing the optimal customer experience. But the innovation pipeline is also a crucial component.

Innovation doesn’t necessarily have to cost a lot of money. Budgets are tight, after all. Searle suggests incremental innovations and cost optimizations, gaining efficiencies where they are achievable.

Consider whether you’ve already made some investments in AI, chatbots, or other platforms. Those are tools that you can use to improve customer experience during the ongoing crisis or even assist with better decision making as you navigate to the future.

Remember, investments will pay off on the other side. For instance, companies that invested more in customer safety measures are the ones that will come out better in terms of brand reputation.

In a retail environment, for instance, an innovation for employee and customer safety might be replacing touch interfaces with voice interactions.

Searle said that the crisis has also altered acceptance of technologies that may not have been desirable in the past. For instance, before the pandemic people generally preferred seeing a doctor face-to-face rather than via a telemedicine appointment.

“That’s an example of where societal acceptance of the technology has changed a lot,” she said.

Another example that was not quite ready for prime time as the crisis hit is the idea of drones and autonomous vehicles making deliveries of groceries, take-out orders, and other orders. However, those are technologies that companies can continue to invest in for the longer term benefits.

Another key action CTOs and other IT leaders should take is trendspotting, Searle said. Trends can be around emerging technologies such as AI, but they can also be economic or political. The current pandemic is evidence that disruption is the new order, and that focusing on emerging technology as the only perceived catalyst of disruption has been a misstep by many organizations, according to Searle. She recommends that organizations use trendspotting efforts to assemble a big picture of the trends that will shape strategic technology decisions as the organization begins to rebuild and renew.

In terms of challenges over the next six months, CTOs remain focused on the near term. In an online poll during a recent webinar, Searle asked CTOs just that question. The largest share, 31%, said their challenge was improving customer experience. Other challenges were maintaining employee productivity (28%), infrastructure resilience (22%), supply chain stability (8%), and combatting security attacks (8%). […] Read more »


Democratizing Cybersecurity Protects Us All

Cybersecurity is a sophisticated art. It can truly consume the time and resources of IT teams as they work to safeguard valuable data from the growing risk of cyberattacks and data breaches. The technical nature of it, along with the specific expertise it requires, has created a workforce gap that many fear is nearly impossible to bridge.

By Akshay Bhargava, Chief Product Officer at Malwarebytes

In fact, the cybersecurity workforce gap has been reported to be over four million globally, causing an alarming void of security experts who are fit to protect business and consumer data. This gap is particularly painful for small and midsize businesses (SMBs) where recruiting cybersecurity expertise may be particularly costly or challenging. Unfortunately, with the average cost of a breach weighing in at a hefty $3.92 million, cybersecurity is not something any business – no matter the size – can afford to get wrong. This is especially concerning for SMBs where estimates have found that as many as 60% are forced to shut their doors after a cyberattack.

But the damage caused by a successful attack can extend beyond the SMB itself.

Not only will the SMB suffer in the event of a cyberattack, but the larger enterprises it partners with are also put at risk. Take the 2019 Quest Diagnostics data breach as an example. Nearly 12 million patients were exposed after hackers took control of a payments page for one of Quest’s billing collection vendors, AMCA, exposing account data, social security numbers and health information. The same attack also impacted 7.7 million customers of LabCorp. AMCA has since filed bankruptcy.

It’s also been reported that it was an email attack on a vendor of Target Corp. that exposed the credit card and personal data of more than 110 million consumers in 2013. The Target breach has been traced back to network credentials stolen from an email malware attack on a heating, air conditioning and refrigeration firm used by Target.

In each instance, the exposure of a smaller organization put a much larger enterprise at risk. There is hope though, that if we can democratize cybersecurity, SMBs could realize the same protections enterprises require, and we’d all be much safer as a result.

So, what can be done? How can SMBs achieve a cybersecure environment like their enterprise competitors? The key lies in automation and empowering employees.

Automation Unlocks Cybersecurity Democratization

Adopting security automation is an effective way to achieve cyber resilience without adding staff or cost burden. It’s the core of cybersecurity democratization. In fact, companies that fully deploy security automation realize an average $1.55 million in incremental savings when handling a data breach. Not only will automation relieve the pressure from continued staff and skills resource constraints, it’s also dynamically scalable, always on, and enables a more proactive security approach that makes the business exponentially more secure. When applying automation, consider each of these three critical security process areas:

1. Threat detection and prevention. Technologies including advanced analytics, artificial intelligence and machine learning give SMBs the ability to apply adaptive threat detection and prevention capabilities so that they can stay one step ahead of cybercriminals without added staff. By automating threat detection, powered by strong threat intelligence, SMBs can detect new, emerging threats while also increasing the detection and prevention of known threats that may have previously slipped past corporate defenses. Furthermore, they can reduce the noise from incident alerts and false positives from detection systems, improving overall threat detection and prevention success rates.

2. Incident response. If a successful cyberattack does break through, it can move through an environment like wildfire. Incident response time is critical to mitigating the severity of the damage, and for SMBs affected by the security skills shortage, fielding a response team that can react fast is likely a problem. By automating incident response, organizations can greatly improve their cyber resilience. Adopt solutions that will automatically isolate, remediate and recover from a cyberattack:

  •  Isolate. By automating endpoint isolation, SMBs are able to rapidly contain an infection while minimizing disruption to the user. Effective isolation includes automated containment at the network, device and process levels. Advanced solutions will also impede malware from "phoning home," which restricts further damage to the environment.
  •  Remediate. Automating remediation will quickly and effectively restore systems without requiring staff time or expertise. It also allows CISOs to remediate endpoints at scale, significantly reducing the company's mean-time-to-response.
  •  Recover. Finally, incident response should also provide automated restore capabilities to return endpoints to their pre-infected, trusted state. During recovery it is also wise to enable automated detection and removal of artifacts left behind during the incident; this is essential to preventing malware from re-infecting the network.

3. Security task orchestration. To further relieve security staff while ensuring cyber resilience, low-level tasks should be automated, including the orchestration between complex, distributed security ecosystems and services. This will ensure a more nimble and responsive environment in the event a cyberattack is successful. Cloud-based management of endpoints can help, specifically if it provides deep visibility with remediation maps[…] Read more »
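The isolate, remediate, recover flow described above can be sketched as an orchestrated pipeline. The function bodies here are placeholders that return audit records; a real deployment would call the endpoint-protection vendor's API at each step, and the endpoint name is purely illustrative.

```python
def isolate(endpoint: str) -> dict:
    """Contain the infection: cut network, device and process access."""
    return {"endpoint": endpoint, "network": "blocked", "c2": "blocked"}

def remediate(endpoint: str) -> dict:
    """Remove the malware and its persistence mechanisms."""
    return {"endpoint": endpoint, "malware_removed": True}

def recover(endpoint: str) -> dict:
    """Restore the endpoint to its pre-infection trusted state."""
    return {"endpoint": endpoint, "state": "trusted", "artifacts_removed": True}

def respond(endpoint: str) -> list:
    """Orchestrate the full response with no human in the loop."""
    return [step(endpoint) for step in (isolate, remediate, recover)]

audit_log = respond("laptop-042")
assert audit_log[0]["network"] == "blocked"   # contained first
assert audit_log[-1]["state"] == "trusted"    # restored last
```

Keeping the steps as small, composable functions is what makes the orchestration layer able to chain them, retry them, and log every action for later review.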

This article first appeared in CISO MAG.

<Link to CISO MAG site:>

Leveraging packet data to improve network agility and reduce costs

Global enterprises spend over $100 billion a year on cybersecurity, but multi-vector threats can still find a way to invade network infrastructures. IT teams need to protect numerous and varied entry points, including mobile devices, and new technologies like the Internet of Things (IoT), virtualization, Wi-Fi hotspots and cloud applications.

At the same time, service providers need secure access to data centers, equipment and campus environments with near-zero network performance latencies. They must also gain visibility into encrypted traffic so they can safeguard their resources.

However, the most vital of these assets is packet data, which offers a shortcut to a comprehensive visibility-driven security program encompassing threat detection and precise investigative capabilities. IT teams can also add controls, flexibility and scalability by delivering the right packets to tools as needed. Throughout this process, they will improve recovery times and increase the return on investment for their cybersecurity budget.

The current landscape

Network administrators are working hard to meet the continuous demands for higher bandwidth while delivering a superior user experience. To do so, they need to gather real-time insights, improve productivity, and stay within monetary constraints. That’s a tough balance to strike, especially given the increased number of vulnerabilities affecting safety, governance, and compliance.

Over 20 billion connected devices are in use worldwide, and cybercriminals are updating their strategies to fit this new environment. Attackers exploit faster internet speeds, next-generation tools and bad-actor hosting sites to create a wide range of sophisticated attacks. These can include malware, spam services, encrypted attacks to exfiltrate data, beaconing and Command-and-Control (C2) communications, Distributed Denial of Service (DDoS) attacks, and other malicious communications that target networks and collect sensitive data from right under victims' noses. With the increased targeting of edge services, organizations must adopt a holistic approach to securing their entire distributed security visibility network so that the right packet data is delivered to their security systems. That begins with a comprehensive security visibility fabric architecture.

The most crucial preventive measure is rapidly addressing application performance issues through actionable insights. Operators can mitigate DDoS attacks at the edge quickly with automated solutions that protect packet data while minimizing risk. They should move storage workloads to the cloud as an extra layer of security.

IT teams who can’t see encrypted traffic face dangerous blind spots in their security, which could lead to financial losses, data breaches, and heaps of bad press. Because of this, it’s essential to protect networks and get smart visibility into these issues.

Regulatory bodies and organizations are shifting to the use of, and even mandating, ephemeral key encryption and forward secrecy (FS) to address the need for greater user security. To keep tool capacity up and reduce latency, the monitoring infrastructure will require companies to offload Secure Sockets Layer (SSL) decryption, performing decryption once and inspecting many times to scale the security infrastructure. Having a network packet broker in place to direct specific traffic to your SSL decryption appliance enables that decryption step. It also enables security service chaining, delivering the decrypted packet data to various security systems to maintain and monitor for optimal performance.
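The "decrypt once, inspect many times" steering decision a packet broker makes can be modeled in a few lines. This is a toy model only; the port numbers and tool names are illustrative, not a real broker's configuration.

```python
# Ports whose flows are assumed to carry TLS and need decryption.
TLS_PORTS = {443, 8443, 993}

def steer(flow: dict) -> list:
    """Return the service chain for one flow."""
    if flow["dst_port"] in TLS_PORTS:
        # Decrypt once, then chain the cleartext through every tool.
        return ["ssl-decryptor", "ids", "dlp", "forensics-capture"]
    # Cleartext traffic goes straight to the inspection tools.
    return ["ids", "dlp", "forensics-capture"]

web_flow = {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 443}
dns_flow = {"src": "10.0.0.5", "dst": "198.51.100.2", "dst_port": 53}

assert steer(web_flow)[0] == "ssl-decryptor"   # offloaded decryption first
assert "ssl-decryptor" not in steer(dns_flow)  # cleartext skips the decryptor
```

Because the decryptor appears exactly once at the head of the chain, each downstream tool inspects the same decrypted copy instead of paying the decryption cost itself.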

What the industry needs 

Many organizations don’t have the proper protective measures in place to fight attackers. They need to embed that capability into workflows because it allows for the rapid detection of issues within both physical and virtual infrastructures.

Enterprises are adopting emerging technologies to handle growing traffic volumes and network speeds. The increase in web applications and multimedia content has spurred a growing demand for simplified data center management, automation and cloud services. As a result, the packet broker market is flourishing with research predicting that the segment will be worth $849 million by 2023.

At the same time, network administrators must provide smart and flexible security solutions while reducing capital expenditures. IT teams can simplify these processes using distributed architecture. To do so, they need a cost-effective, scalable solution with no blind spots, which allows them to evolve packet data storage.

Operators and security administrators who base their actions on up-to-the-minute traffic reports can make decisions in real-time. Devices, applications and public and private clouds all aid in this mission by detecting threats throughout the network.

Why visibility is essential

Security is about controlling risk, and risk is defined by loss exposure. How can a business identify and manage risk? Companies need to be crystal clear on what they think about risk and have a thorough understanding of what they consider as assets. Having control is only possible with visibility into the network that provides access to those assets. Overcoming challenges and maximizing security requires a pervasive visibility layer that reduces downtime while increasing return on investment and enabling efficient operations.

The good news is enterprises are improving visibility as they analyze more information. IT departments need to follow suit by obtaining high-quality packet data and real-time insights. Tech teams can then protect systems from cyberattacks, provide reliable service assurance and comply with regulations.

Enterprises should monitor their infrastructure continuously so they can detect threats before they strike. […] Read more »