Meet Leanne Hurley: Cloud Expert of the Month – April 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Leanne Hurley.

After starting out at the front counter of a two-way radio shop in 1993, Leanne worked her way from face-to-face customer service, to billing, to training and finally into sales. She has been in sales since 1996 and has (mostly!) loved every minute of it. Leanne started selling IaaS (whether co-lo, managed hosting or cloud) during the boom and has expanded her expertise since joining SAP. Now, she enjoys leading a team of sales professionals as she works with companies to improve business outcomes and accelerate digital transformation utilizing SAP’s Intelligent Enterprise.

When did you join Cloud Girls and why?

I was one of the first members of Cloud Girls in 2011. I joined because having a strong network and community of women in technology is important.

What do you value about being a Cloud Girl?  

I value the relationships and women in the group.

What advice would you give to your younger self at the start of your career?

Stop doubting yourself. Continue to ask questions and don’t be intimidated by people that try to squash your tenacity and curiosity.

What’s your favorite inspirational quote?

“You can have everything in life you want if you will just help other people get what they want.”  – Zig Ziglar

What one piece of advice would you share with young women to encourage them to take a seat at the table?

Never stop learning and always ask questions. In technology women (and men too for that matter) avoid asking questions because they think it reveals some sort of inadequacy. That is absolutely false. Use your curiosity and thirst for knowledge as a tool, it will serve you well all your life.

You’re a new addition to the crayon box. What color would you be and why?

I would be Sassy-molassy because I’m a bit sassy.

What was the best book you read this year and why?

I loved American Dirt because it humanized the US migrant plight and reminded me how blessed and lucky we all are to have been born in the US.

What’s the most useless talent you have? Why? […]


3 signs that it’s time to reevaluate your monitoring platform

As we move forward from the uncertainty of 2020, remote and hybrid styles of work are likely to remain beyond the pandemic. Amid the rise of modified workflows, we’ve also seen an increase in phishing scams, ransomware attacks, and simple user errors that result in the IT infrastructures we rely on crashing – sometimes with devastating long-term repercussions for the business. What’s needed to prevent this is a reliable monitoring system that is constantly scanning your system – whether you’re operating from a data center, a public cloud, or some combination – to alert you when something is amiss. Often these monitoring tools run so smoothly in the background of operations that we forget they’re even there – which can be a big problem.

When is the last time you assessed your monitoring platform? You may have already noticed signs indicating that your tools are not keeping up with the rapidly changing digital workforce – gathering nonessential data while failing to forewarn you about legitimate issues to your network operations. Post-2020, these systems have to handle workforces that are staying connected digitally regardless of where employees are working. Your monitoring tools should be hyper-focused on alerting you to issues from outside your network and any weakness from within it. Often, we turn out to be monitoring for too much and still missing the essential problems until it’s too late.

  1. Outages

One of the most damaging and costly setbacks a business can experience is network downtime: your network suddenly, and without warning, ceases to work. Applications are no longer functioning, files are inaccessible, and your business cannot perform its daily functions. Responding to network downtime isn’t a simple matter of rebooting your computer, either. Gartner estimates that for every minute of network downtime, the company in question loses an average of $5,600. On the higher end of this spectrum, a business could lose $540,000 per hour. Those figures are based on lost productivity alone; getting your system up and running again, catching up on lost time, and reevaluating and implementing a new monitoring system all incur additional costs.
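A quick sanity check on the arithmetic behind those figures, using only the numbers cited above:

```python
# Rough arithmetic on the downtime figures cited above.
avg_cost_per_minute = 5_600      # Gartner's average estimate, per minute
high_cost_per_hour = 540_000     # high end cited in the article

avg_cost_per_hour = avg_cost_per_minute * 60
print(avg_cost_per_hour)         # 336000
print(high_cost_per_hour / 60)   # 9000.0 (per minute, at the high end)
```

In other words, even the average case runs to roughly a third of a million dollars per hour of downtime.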

In the case of one luxury hotel chain, an updated monitoring system accurately detected why they were experiencing outages – a change in network configuration. By utilizing a newly updated monitoring configuration, the chain quickly reverted the network change and restored service for their customers, saving hours of troubleshooting and costly downtime.

Systems should be proactive, not reactive. The time to reassess your monitoring infrastructure isn’t after it has failed to warn you that something went wrong. Your network monitoring system should be automatically measuring performance and sharing status updates so you can fix a problem before it escalates. If your system is working at its proper capacity, it will routinely prevent unexpected outages by using performance thresholds to evaluate functionality in real time and alert you when targeted metrics have reached a level that requires attention. With a robust monitoring system in place, your team has complete network visibility and can respond to changes and prevent outages before they happen.
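The threshold logic described above can be sketched in a few lines. The metric names and limits here are illustrative assumptions, not taken from any particular monitoring product:

```python
# Minimal sketch of threshold-based alerting. The metric names and
# limits are hypothetical; real platforms let you tune these per device.
THRESHOLDS = {
    "cpu_percent": 90.0,
    "memory_percent": 85.0,
    "interface_errors_per_min": 50.0,
}

def evaluate(metrics: dict) -> list[str]:
    """Return alert messages for any metric at or past its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            alerts.append(f"{name}={value} exceeds threshold {limit}")
    return alerts

print(evaluate({"cpu_percent": 97.2, "memory_percent": 40.0}))
# ['cpu_percent=97.2 exceeds threshold 90.0']
```

In a real deployment the evaluation loop would run continuously against live telemetry rather than a single snapshot.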

  2. Alert Fatigue

Alert fatigue is something we can all relate to after a year of working from home: email notifications, instant messages, texts, phone calls, and calendar reminders for your next video meeting. After so many of these day after day, we become desensitized to them; the more alerts we receive, the less urgent any of them seem. From a cybersecurity standpoint, some notifications may flag anomalies linked to a potential cyberattack, but more often they will be junk email. If a genuinely urgent message does come through, it often slips through the cracks because it seems no different from any other notification we receive.

So how can your IT infrastructure help prevent this? Intelligent monitoring systems, in general, aim to make the lives of the people using them easier. Your monitoring system should reduce the number of redundant alerts to recognize and prioritize actual issues. A tiered-alert priority system will have notifications display on your dashboard with a visual or auditory cue signifying how important it is. Can this wait until the afternoon, or does it need to be addressed immediately? Detecting a cyberattack early, for example, can make a huge difference in mitigating damage.
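A tiered-alert system like the one described can be sketched as follows. The severity tiers and the de-duplication approach are illustrative assumptions, not a real product’s API:

```python
# Sketch of a tiered-alert queue with de-duplication; the severity
# names and ordering are invented for this example.
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "warning": 1, "info": 2}

@dataclass(frozen=True)
class Alert:
    source: str
    message: str
    severity: str

def triage(alerts: list[Alert]) -> list[Alert]:
    """Drop duplicate alerts, then order the rest most-urgent first."""
    unique = list(dict.fromkeys(alerts))  # preserves first occurrence
    return sorted(unique, key=lambda a: SEVERITY_ORDER[a.severity])

queue = [
    Alert("mail-gw", "junk mail burst", "info"),
    Alert("fw-01", "possible exfiltration", "critical"),
    Alert("mail-gw", "junk mail burst", "info"),  # duplicate, suppressed
]
print([a.severity for a in triage(queue)])  # ['critical', 'info']
```

The point of the sketch: the critical alert surfaces first and the repeated junk-mail notification collapses into one entry, which is exactly how a tiered system combats fatigue.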

  3. Excess Tools

One of the root causes of any monitoring flaw can be the monitoring tools themselves — over-monitoring. If you have multiple tools tracking your network, you’re likely getting notifications and warnings from each of them, which contributes to alert fatigue and opens you up to a potential failure, resulting in a network outage and business interruption. Having multiple tools performing the same function is a waste of resources, as they render each other redundant. The key is to consolidate the necessary functions into one monitoring system that is regularly assessed for vulnerabilities and customized for your particular business needs.

Stakeholders across your business will indeed want to track an abundance of metrics – server functionality, security, business metrics, and so on – and it may be that not all of these can be monitored by the same tool. You should first decide which things are essential for your team to be actively monitoring and assessing. Security should be a top priority, but are there other data points that can be pulled into a quarterly or annual report instead? Your IT monitoring should be focused on tracking and alerting you to essential information and irregularities. You can avoid overextending the team and receiving alerts that will only be ignored by first doing your own assessment of what you need from your system.

Assessing Your Approach for Future Growth

We can’t operate at our full potential without the control and visibility that monitoring tools give us. […]


Protecting Remote Workers Against the Perils of Public Wi-Fi

In a physical office, front-desk security keeps strangers out of work spaces. In your own home, you control who walks through your door. But what happens when your “office” is a table at the local coffee shop, where you’re sipping a latte among total strangers?

Widespread remote work is likely here to stay, even after the pandemic is over. But the resumption of travel and the reopening of public spaces raises new concerns about how to keep remote work secure.

In particular, many employees used to working in the relative safety of an office or private home may be unaware of the risks associated with public Wi-Fi. Just like you can’t be sure who’s sitting next to your employee in a coffee shop or other public space, you can’t be sure whether the public Wi-Fi network they’re connecting to is safe. And the second your employee accidentally connects to a malicious hotspot, they could expose all the sensitive data that’s transmitted in their communications or stored on their device.

Taking scenarios like this into account when planning your cybersecurity protections will help keep your company’s data safe, no matter where employees choose to open their laptops.

The risks of Wi-Fi search

Leaving Wi-Fi enabled outside the house may seem harmless, but it leaves an employee incredibly vulnerable. Wi-Fi enabled devices can reveal the network names (SSIDs) they normally connect to when they are on the move. An attacker can then use this information to emulate a known “trusted” network that is not encrypted and pretend to be that network. Many devices will automatically connect to these “trusted” open networks without verifying that the network is legitimate.

Often, attackers don’t even need to emulate known networks to entice users to connect. According to a recent poll, two-thirds of people who use public Wi-Fi set their devices to connect automatically to nearby networks, without vetting which ones they’re joining.

If your employee automatically connects to a malicious network — or is tricked into doing so — a cybercriminal can unleash a number of damaging attacks. The network connection can enable the attacker to intercept and modify any unencrypted content that is sent to the employee’s device. That means they can insert malicious payloads into innocuous web pages or other content, enabling them to exploit any software vulnerabilities that may be present on the device.

Once such malicious content is running on a device, many technical attacks are possible against other, more important parts of the device software and operating system. Some of these provide administrative or root level access, which gives the attacker near total control of the device. Once an attacker has this level of access, all data, access, and functionality on the device is potentially compromised. The attacker can remove or alter the data, or encrypt it with ransomware and demand payment in exchange for the key.

The attacker could even use the data to emulate and impersonate the employee who owns and/or uses the device. This sort of fraud can have devastating consequences for companies. Last year, a Florida teenager was able to take over multiple high-profile Twitter accounts by impersonating a member of the Twitter IT team.

A multi-layered approach to remote work security

These worst-case scenarios won’t occur every time an employee connects to an unknown network while working remotely outside the home — but it only takes one malicious network connection to create a major security incident. To protect against these problems, make sure you have more than one line of cybersecurity defenses protecting your remote workers against this particular attack vector.

Require VPN use. The best practice for users who need access to non-corporate Wi-Fi is to require that all web traffic on corporate devices go through a trusted VPN. This greatly limits the attack surface of a device, and reduces the probability of a device compromise if it connects to a malicious access point.

Educate employees about risk. Connecting freely to public Wi-Fi is normalized in everyday life, and most people have no idea how risky it is. Simply informing your employees about the risks can have a major impact on behavior. No one wants to be the one responsible for a data breach or hack. […]



How We’ll Conduct Algorithmic Audits in the New Economy

Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes.

Algorithms are the heartbeat of applications, but they may not be perceived as entirely benign by their intended beneficiaries.

Most educated people know that an algorithm is simply any stepwise computational procedure. Most computer programs are algorithms of one sort or another. Embedded in operational applications, algorithms make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings — encroaching on customer privacy, refusing them a home loan, or perhaps targeting them with a barrage of objectionable solicitation — stakeholders’ understandable reaction may be to swat back in anger, and possibly with legal action.

Regulatory mandates are starting to require algorithm auditing

Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes, especially those powered by artificial intelligence (AI), deep learning (DL), and machine learning (ML).

Many of these concerns revolve around the possibility that algorithmic processes can unwittingly inflict racial biases, privacy encroachments, and job-killing automations on society at large, or on vulnerable segments thereof. Surprisingly, some leading tech industry execs even regard algorithmic processes as a potential existential threat to humanity. Other observers see ample potential for algorithmic outcomes to grow increasingly absurd and counterproductive.

Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity — coupled with their daunting size, complexity and obscurity — presents the human race with a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions?

Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation may force your hand in this regard. GDPR prohibits any “automated individual decision-making” that “significantly affects” EU citizens.

Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data — including behavior, location, movements, health, interests, preferences, economic status, and so on — into automated decisions. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision. That in turn requires that an audit log be kept for review and that auditing tools support rollup of algorithmic decision factors.
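The kind of audit record such a review would depend on can be sketched as follows. The field names, the scoring steps, and the applicant data are all hypothetical, invented for illustration:

```python
# Illustrative sketch of an algorithmic-decision audit record of the
# kind a GDPR-style review would draw on. All field names and the
# scoring logic shown are assumptions for this example only.
import json
import datetime

def log_decision(subject_id: str, inputs: dict,
                 steps: list[str], outcome: str) -> str:
    """Serialize the factors behind one automated decision for later audit."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject_id": subject_id,
        "inputs": inputs,    # the personal-data factors considered
        "steps": steps,      # the sequence of rules/variables applied
        "outcome": outcome,
    }
    return json.dumps(record)  # in practice, append to tamper-evident storage

entry = log_decision(
    "applicant-42",
    {"income_band": "B", "postcode_risk": 0.3},
    ["score = 0.7*income + 0.3*postcode", "score >= 0.5 -> approve"],
    "approve",
)
print(json.loads(entry)["outcome"])  # approve
```

The design point is that every decision leaves behind the specific sequence of steps, variables, and data the regulation asks about, in a form auditing tools can roll up later.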

Considering how influential GDPR has been on other privacy-focused regulatory initiatives around the world, it wouldn’t be surprising to see laws and regulations mandate these sorts of auditing requirements placed on businesses operating in most industrialized nations before long.

For example, US federal lawmakers introduced the Algorithmic Accountability Act in 2019 to require companies to survey and fix algorithms that result in discriminatory or unfair treatment.

Anticipating this trend by a decade, the US Federal Reserve’s SR-11 guidance on model risk management, issued in 2011, mandates that banking organizations conduct audits of ML and other statistical models in order to be alert to the possibility of financial loss due to algorithmic decisions. It also spells out the key aspects of an effective model risk management framework, including robust model development, implementation, and use; effective model validation; and sound governance, policies, and controls.

Even if your organization is not responding to any specific legal or regulatory requirement to root out unfairness, bias, and discrimination in your algorithms, doing so may be prudent from a public relations standpoint. If nothing else, it would signal enterprise commitment to ethical guidance that encompasses application development and machine learning DevOps practices.

But algorithms can be fearsomely complex entities to audit

CIOs need to get ahead of this trend by establishing internal practices focused on algorithm auditing, accounting, and transparency. Organizations in every industry should be prepared to respond to growing demands that they audit the complete set of business rules and AI/DL/ML models that their developers have encoded into any processes that impact customers, employees, and other stakeholders.

Of course, that can be a tall order to fill. For example, GDPR’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity — coupled with their daunting size, complexity, and obscurity — presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms — be they machine learning, convolutional neural networks, or whatever — are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.

Most organizations — even the likes of Amazon, Google, and Facebook — might find it difficult to keep track of all the variables encoded into their algorithmic business processes. What could prove even trickier is the requirement that they roll up these audits into plain-English narratives that explain to a customer, regulator, or jury why a particular algorithmic process took a specific action under real-world circumstances. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding.

Throwing more algorithm experts at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. Explaining what goes on inside an algorithm is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. One can’t easily trace their precise path to a final answer.

Algorithmic auditing is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware fabrics.

Most of the people you’re training to explain this stuff to may not know a machine-learning algorithm from a hole in the ground. More often than we’d like to believe, there will be no single human expert — or even (irony alert) algorithmic tool — that can frame a specific decision-automation narrative in simple, but not simplistic, English. Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance — such as a legal proceeding, contractual dispute, or showstopping technical glitch — will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory.

Establishing a standard approach to algorithmic auditing

CIOs should recognize that they don’t need to go it alone on algorithm accounting. Enterprises should be able to call on independent third-party algorithm auditors. Auditors may be called on to review algorithms prior to deployment as part of the DevOps process, or post-deployment in response to unexpected legal, regulatory, and other challenges.

Some specialized consultancies offer algorithm auditing services to private and public sector clients. One such firm describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.” It provides enterprise-wide assessments of enterprise AI liabilities and model governance practices; AI incident detection and response; model- and project-specific risk certifications; and regulatory and compliance guidance. It also trains clients’ technical, legal and risk personnel to perform algorithm audits.

O’Neil Risk Consulting and Algorithmic Auditing: ORCAA describes itself as a “consultancy that helps companies and organizations manage and audit algorithmic risks.” It works with clients to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. It helps clients to institute “early warning systems” that flag when a problematic algorithm (ethical, legal, reputational, or otherwise) is in development or in production, and thereby escalate the matter to the relevant parties for remediation. They serve as expert witnesses to assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. They help organizations develop strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools. They work with regulators to translate fairness laws and rules into specific standards for algorithm builders. And they train client personnel on algorithm auditing.

Currently, there are few hard-and-fast standards in algorithm auditing. What gets included in an audit and how the auditing process is conducted are more or less defined by every enterprise that undertakes it, or by the specific consultancy engaged to conduct it. Looking ahead to possible future standards in algorithm auditing, Google Research and OpenAI teamed with a wide range of universities and research institutes last year to publish a research study that recommends third-party auditing of AI systems. The paper also recommends that enterprises:

  • Develop audit trail requirements for “safety-critical applications” of AI systems;
  • Conduct regular audits and risk assessments associated with the AI-based algorithmic systems that they develop and manage;
  • Institute bias and safety bounties to strengthen incentives and processes for auditing and remediating issues with AI systems;
  • Share audit logs and other information about incidents with AI systems through their collaborative processes with peers;
  • Share best practices and tools for algorithm auditing and risk assessment; and
  • Conduct research into the interpretability and transparency of AI systems to support more efficient and effective auditing and risk assessment.

Other recent AI industry initiatives relevant to standardization of algorithm auditing include:

  • Google published an internal audit framework that is designed to help enterprise engineering teams audit AI systems for privacy, bias, and other ethical issues before deploying them.
  • AI researchers from Google, Mozilla, and the University of Washington published a paper that outlines improved processes for auditing and data management to ensure that ethical principles are built into DevOps workflows that deploy AI/DL/ML algorithms into applications.
  • The Partnership on AI published a database to document instances in which AI systems fail to live up to acceptable anti-bias, ethical, and other practices.


CIOs should explore how best to institute algorithmic auditing in their organizations’ DevOps practices. […]


Meet Andrea Blubaugh: Cloud Expert of the Month – February 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Andrea Blubaugh.

Andrea has more than 15 years of experience facilitating the design, implementation and ongoing management of data center, cloud and WAN solutions. Her reputation for architecting solutions for organizations of all sizes and verticals – from Fortune 100 to SMBs – earned her numerous awards and honors. With a specific focus on the mid to enterprise space, Andrea works closely with IT teams as a true client advocate, consistently meeting, and often exceeding expectations. As a result, she maintains strong client and provider relationships spanning the length of her career.

When did you join Cloud Girls and why?  

Wow, it’s been a long time! I believe it was 2014 or 2015 when I joined Cloud Girls. I had come to know Manon through work and was impressed by her and excited to join a group of women in the technology space.

What do you value about being a Cloud Girl?  

Getting to know and develop friendships with the fellow Cloud Girls over the years has been a real joy. It’s been a great platform for learning on both a professional and personal level.

What advice would you give to your younger self at the start of your career?  

I would reassure my younger self in her decisions and encourage her to keep taking risks. I would also tell her not to sweat the losses so much. They tend to fade pretty quickly.

What’s your favorite inspirational quote?  

“Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do, so throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover.”  –Mark Twain

What one piece of advice would you share with young women to encourage them to take a seat at the table?  

I was very fortunate early on in my career to work for a startup whose leadership saw promise in my abilities that I didn’t yet see myself. I struggled with the decision to take a leadership role as I didn’t feel “ready” or that I had the right or enough experience. I received some good advice that I had to do what ultimately felt right to me, but that turning down an opportunity based on a fear of failure wouldn’t ensure there would be another one when I felt the time was right. My advice is if you’re offered that seat, and you want that seat, take it.

What’s one item on your bucket list and why? […]



What types of cybersecurity skills can you learn in a cyber range?

What is a cyber range?

A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.

A cyber range is a valuable tool because it provides experience with using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description about using a particular tool or handling a certain scenario, a cyber range allows students to do it themselves.

What skills can you learn in a cyber range?

A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.

SIEM, IDS/IPS and firewall management

Deploying certain cybersecurity solutions — such as SIEM, IDS/IPS and a firewall — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.

A cyber range can walk through the steps of properly configuring the most common solutions. These include deployment locations, configuration settings and the rules and policies used to identify and block potentially malicious content.
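As a rough illustration of the rules-and-policies idea, here is a toy rule engine of the sort a cyber range exercise might have you configure. The rule format is invented for this sketch, and the IP addresses come from documentation-reserved ranges:

```python
# Toy firewall/IDS-style rule matching; the rule schema is invented
# for illustration, not drawn from any real product.
RULES = [
    {"field": "dest_port", "equals": 23,
     "action": "block", "reason": "telnet disallowed"},
    {"field": "src_ip", "equals": "203.0.113.7",
     "action": "alert", "reason": "known-bad IP"},
]

def apply_rules(event: dict) -> str:
    """Return the action for the first matching rule, else allow."""
    for rule in RULES:
        if event.get(rule["field"]) == rule["equals"]:
            return f'{rule["action"]}: {rule["reason"]}'
    return "allow"

print(apply_rules({"src_ip": "198.51.100.4", "dest_port": 23}))
# block: telnet disallowed
print(apply_rules({"src_ip": "192.0.2.9", "dest_port": 443}))
# allow
```

Even in this toy form, the exercise value is clear: rule ordering and field choice determine what gets blocked, alerted on, or silently allowed.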

Incident response

After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.

A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.

Operating system management: Linux and Windows

Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.

A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.

Endpoint controls and protection

As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.

Penetration testing

This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.

A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
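As a minimal example of the tooling involved, the following sketch performs a simple TCP connect scan using only Python’s standard library. It should only ever be run against systems you own or are explicitly authorized to test, as in a cyber range:

```python
# Minimal TCP connect scan in the spirit of a cyber range exercise.
# Only run this against hosts you are authorized to test.
import socket

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# Example against the local machine; output depends on what is listening.
print(scan("127.0.0.1", [22, 80, 443]))
```

Real penetration-testing tools add service fingerprinting, timing controls, and stealthier scan types on top of this basic connect loop.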

Network management

Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.

A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.
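The collect-analyze-fix loop can be illustrated with a toy example: given latency samples per network hop (the hop names and numbers are invented), find the hop where latency jumps most, which is a common first clue when isolating a path problem.

```python
# Toy diagnosis sketch: samples maps hop names (in path order) to
# latency measurements in milliseconds; values are invented.
from statistics import mean

def worst_hop(samples):
    """Return the hop with the largest increase in mean latency over
    the previous hop. Assumes dict insertion order matches path order."""
    worst, worst_jump = None, 0.0
    prev = 0.0
    for name, values in samples.items():
        avg = mean(values)
        jump = avg - prev
        if jump > worst_jump:
            worst, worst_jump = name, jump
        prev = avg
    return worst
```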

Malware analysis

Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization’s incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.

Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
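The first step mentioned above, searching for strings, is simple enough to sketch directly: pull runs of printable ASCII out of a binary blob, much like the Unix `strings` utility does.

```python
# Minimal strings-extraction pass over a byte blob: report every run
# of 4 or more printable ASCII characters (space through tilde).
import re

PRINTABLE = re.compile(rb"[ -~]{4,}")

def extract_strings(data):
    """Return printable ASCII runs found in a bytes object."""
    return [m.group().decode("ascii") for m in PRINTABLE.finditer(data)]
```

In a real sample, output like embedded URLs, file paths or command names gives the analyst the initial leads that the rest of the analysis follows up on.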

Threat hunting

Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization’s environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.

A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.

Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
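One classic hunting heuristic, an unexpected parent/child process pair (for example, an office application spawning a shell), shows how that kind of interpretation can be encoded. The allow-list below is an invented illustration, not a real baseline.

```python
# Toy triage sketch: flag process-creation events whose parent is not
# on the (assumed) allow-list for that child process.
EXPECTED_PARENTS = {
    "cmd.exe": {"explorer.exe"},
    "powershell.exe": {"explorer.exe", "cmd.exe"},
}

def triage(events):
    """events: list of (parent, child) pairs; return the pairs that
    deviate from the expected-parent baseline and deserve a closer look."""
    hits = []
    for parent, child in events:
        allowed = EXPECTED_PARENTS.get(child)
        if allowed is not None and parent not in allowed:
            hits.append((parent, child))
    return hits
```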

Computer forensics

Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics… Read more »


In an Uncertain World, You Can Count on These Four Trends in 2021

As leaders look to the year ahead, planning and predictions have taken on a whole new meaning in a post-pandemic world. With so many unknowns in 2021, how can anyone claim to know what’s coming with confidence?

However, if 2020 taught us nothing else, it is that major, unseen disruption does result in one certainty: enterprises must be more responsive. The pandemic shone a bright and unflattering spotlight on where companies need to update their IT infrastructure – both to contend with current challenges and to become a modern company. Despite cloud adoption, resilience and business continuity gradually fell to the bottom of enterprises’ infrastructure to-do lists in recent years. Now, companies are playing catchup to take hold of the future that has arrived on their doorstep, a few years early.

While 2021 feels largely uncharted, there are a few trends that are bound to define this year’s plans and investments. Companies that capitalize on these trends will position themselves not only to be competitive and stronger than ever, but to respond when the next disruption hits.

Reinforced mainframe recruiting and retention

We have witnessed the mainframe skills gap widen in recent years. As more mainframe pros retire, taking their knowledge with them, too little focus has been put on enabling emerging professionals to step in. Enterprises must recalibrate: the mainframe is not going away, and the workforce needs to shift. Mainframe spend continues to grow at more than 3% yearly, and upwards of 50% of organizations in several industries (financial services, insurance, the public sector and more) say it is running more of their business-critical applications than ever before.

To fill these vacant positions with new talent, enterprises need to adjust their approach to the mainframe. Emerging IT pros and college graduates don’t want to work on outdated interfaces. They prefer interfaces that are intuitive and allow for clicking and dragging. They want to work for a company with a highly collaborative and communicative culture, as well as modern processes and tools such as DevOps and related tool chains. DevOps enables a culture of collaboration, reduces siloes, automates for efficiency and, yes, can be applied to the mainframe. Enterprises that modernize the mainframe and related operations will close the skills gap in 2021 and usher in a new wave of innovation.

A call for COBOL programmers

COBOL (Common Business-Oriented Language) is not a dead programming language – far from it. A 2018 report for the Social Security Administration found that the administration maintained more than 60 million lines of COBOL. In fact, many government agencies rely on COBOL programs, but – no surprise – there are too few programmers to tackle them. The lack of COBOL programmers has been adding stress to the mainframe, especially for government agencies, which will only escalate over the next six to eight months.

COBOL development isn’t inherently the issue — most IT pros who can code Python can learn COBOL. The bigger problem is understanding how the programs work and what they’re doing; that requires a person with application understanding and ample tribal knowledge. The second issue is quality assurance: many enterprises lack the ability to test applications and ensure nothing is at risk of breaking. COBOL coders need an easy way to digest 10,000 lines of code, break it down and understand it. They need the right tools to avoid working unreasonable hours, battling rampant inefficiency and risking project failure. With these tools, enterprises can tackle COBOL programs in a way that is manageable for people both new to COBOL and experienced with it.
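The "break it down" idea can be sketched mechanically: even a naive pass that indexes a COBOL source by its DIVISION headers gives a newcomer a map of the program. This deliberately ignores column conventions, copybooks and everything else a real tool must handle.

```python
# Naive COBOL navigator: record the line number of each DIVISION
# header so a large program can be read in chunks.
def index_divisions(source):
    """Return {DIVISION_NAME: starting line number} (1-based)."""
    index = {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        words = line.strip().rstrip(".").split()
        if len(words) == 2 and words[1].upper() == "DIVISION":
            index[words[0].upper()] = lineno
    return index
```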

A shift towards value stream management

As enterprises work to better align IT with the business, including embracing agile and DevOps, value stream management is stepping into the spotlight. Until recently, most organizations focused on workload automation and scheduling, automating certain parts of a system and scheduling related jobs. However, workload automation solutions no longer fully meet most enterprises’ needs: as IT moves toward DevOps, teams want to move faster and require more visibility as they orchestrate automations across multiple systems, technologies and platforms.

With value stream management, enterprises take a step beyond automating a particular “job” to instead orchestrating the automation of multiple jobs and tasks, across multiple applications and systems, to streamline a process or value stream. Many value streams are currently “in the dark,” managed manually by resources focused on ensuring a job runs to completion. With orchestration, enterprises can design and visualize their value streams, create workflows tying them together and collect metrics to figure out where improvements can be made. This visibility allows enterprises to deliver value faster and innovate more quickly — key advantages in becoming a more responsive enterprise. Orchestration also helps eliminate silos and creates greater transparency, enabling IT pros to find issues and remove bottlenecks faster. Enterprises can even orchestrate a DevOps toolchain, kicking off the creation of code and orchestrating its delivery to meet demands.
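The difference between automating a single job and orchestrating a value stream can be sketched with a toy dependency-driven runner. The job names below are invented, and a real orchestrator adds scheduling, retries and metrics collection on top of this core idea.

```python
# Toy value-stream orchestrator: jobs declare prerequisites, and the
# runner executes them in a valid order. Requires Python 3.9+.
from graphlib import TopologicalSorter

def orchestrate(jobs, deps):
    """jobs: {name: callable}; deps: {name: set of prerequisite names}.
    Runs every job after its prerequisites; returns the execution order."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        jobs[name]()
    return order
```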

The hyperautomation takeover

What is the difference between automation and hyperautomation? According to Gartner, hyperautomation applies advanced technologies like robotic process automation (RPA), artificial intelligence (AI), machine learning (ML) and more to enable the automation of virtually any repetitive task. In the age of efficiency and productivity, it is not hard to see why this trend is taking off.

The pandemic highlighted several areas within the enterprise that would benefit from hyperautomation. For instance, many companies put new workflows in place for COVID-19 tracking. HR departments need to monitor which employees are physically safe, which are not and what their IT needs are, especially with a remote workforce. While many companies were once reluctant to dabble in workflow automation with their content services solutions, never mind value stream management, COVID-19 has forced companies – and HR specifically – to reevaluate where they need automation capabilities, especially with extra processes added by the pandemic.

Automation will also be increasingly pertinent as companies apply DevOps across the organization, especially when integrating the mainframe into the DevOps toolchain. Using hyperautomation, enterprises can integrate tools that allow for continuous delivery and bring processes into a modern culture… Read more »


3 key reasons why SOCs should implement policies over security standards

In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.

How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.

Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and how they can be dramatically improved with new approaches and tooling.

How Security Standards Look Today

Security standards affect all aspects of a business, from shaping development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.

There are many challenges that come with this approach, all rooted in 3 main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the tendency of security standards to promote reactive rather than proactive issue handling.

We can greatly improve the security landscape by directly addressing these key issues:

1. Information Security and Compliance is Siloed

In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.

Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.

If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.

2. Security Standards are Too Generic

Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.

With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.

Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.

3. Current Practices are Reactive, Rather Than Proactive

The existing gap between dev and security teams, along with the generic nature of security standards, prevents organizations from being truly proactive when it comes to security measures.

Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.

The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.

Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.

DevSecOps – A New Way Forward

The emergence of DevSecOps suggests that generic top-to-bottom security standards may soon be less important than they are now.

First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.

In outlining the rise and success of DevSecOps, a recent article gave three defining criteria of a true DevSecOps environment:

  1. Developers are in charge of security testing.
  2. Security experts act as consultants to developers when additional knowledge is required.
  3. Security fixes are managed by the development team.

Ongoing security-related issues are owned by the development team… Read more »



Making CI/CD Work for DevOps Teams

Many DevOps teams are advancing to CI/CD, some more gracefully than others. Recognizing common pitfalls and following best practices helps.

Agile, DevOps and CI/CD have all been driven by the competitive need to deliver value faster to customers. Each advancement requires changes to processes, tools, technology and culture, although not all teams approach the shift holistically. Some focus on tools, hoping to drive process changes, when process changes and goals should drive tool selection. More fundamentally, teams need to adopt an increasingly inclusive mindset that overcomes traditional organizational barriers and tech-related silos so the DevOps team can achieve an automated end-to-end CI/CD pipeline.

Most organizations begin with Agile and advance to DevOps. The next step is usually CI, followed by CD, but the journey doesn’t end there because bottlenecks such as testing and security eventually become obvious.

At benefits experience platform provider HealthJoy, the DevOps team sat between Dev and Ops, maintaining a separation between the two. The DevOps team accepted builds from developers in the form of Docker images via Docker Hub. They also automated downstream Ops tasks in the CI/CD pipeline, such as deploying the software builds in AWS.

Sajal Dam, HealthJoy

“Although it’s a good approach for adopting CI/CD, it misses the fact that the objective of a DevOps team is to break the barriers between Dev and Ops by collaborating with the rest of software engineering across the whole value stream of the CI/CD pipeline, not just automating Ops tasks,” said Sajal Dam, VP of engineering at HealthJoy.

Following are a few of the common challenges and advice for dealing with them.


People are naturally change-resistant, but change is a constant when it comes to software development and delivery tools and processes.

“I’ve found the best path is to first work with a team that is excited about the change or new technology and who has the time and opportunity to redo their tooling,” said Eric Johnson, EVP of Engineering at DevOps platform provider GitLab. “Next, use their success [such as] lower cost, higher output, better quality, etc. as an example to convert the bulk of the remaining teams when it’s convenient for them to make a switch.”

Eric Johnson, GitLab

The most fundamental people-related issue is having a culture that enables CI/CD success.
“The success of CI/CD [at] HealthJoy depends on cultivating a culture where CI/CD is not just a collection of tools and technologies for DevOps engineers but a set of principles and practices that are fully embraced by everyone in engineering to continually improve delivery throughput and operational stability,” said HealthJoy’s Dam.

At HealthJoy, the integration of CI/CD throughout the SDLC requires the rest of engineering to closely collaborate with DevOps engineers to continually transform the build, testing, deployment and monitoring activities into a repeatable set of CI/CD process steps. For example, they’ve shifted quality controls left and automated the process using DevOps principles, practices and tools.

Component provider Infragistics changed its hiring approach. Specifically, instead of hiring experts in one area, the company now looks for people with skill sets that meld well with the team.

“All of a sudden, you’ve got HR involved and marketing involved because if we don’t include marketing in every aspect of software delivery, how are they going to know what to market?” said Jason Beres, SVP of developer tools at Infragistics. “In a DevOps team, you need a director, managers, product owners, team leads and team building where it may not have been before. We also have a budget to ensure we’re training people correctly and that people are moving ahead in their careers.”


Jason Beres, Infragistics


Effective leadership is important.

“[A]s the head of engineering, I need to play a key role in cultivating and nurturing the DevOps culture across the engineering team,” said HealthJoy’s Dam. “[O]ne of my key responsibilities is to coach and support people from all engineering divisions to continually benefit from DevOps principles and practices for an end-to-end, automated CI/CD pipeline.”


Processes should be refined as necessary, accelerated through automation and continuously monitored so they can be improved over time.

“When problems or errors arise and need to be sent back to the developer, it becomes difficult to troubleshoot because the code isn’t fresh in their mind. They have to stop working on their current project and go back to the previous code to troubleshoot,” said Gitlab’s Johnson. “In addition to wasting time and money, this is demoralizing for the developer who isn’t seeing the fruit of their labor.”

Johnson also said teams should start their transition by identifying bottlenecks and common failures in their pipelines. The easiest indicators of pipeline inefficiency are the runtimes of individual jobs and stages and the total runtime of the pipeline itself. To avoid slowdowns or frequent failures, teams should look for problematic patterns among failed jobs.
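The runtime check described here is straightforward to prototype: given per-job durations from recent pipeline runs (the job names and numbers below are invented), rank jobs by mean runtime to surface bottleneck candidates.

```python
# Rank pipeline jobs by mean runtime across recent runs to find
# bottleneck candidates. Input data is illustrative.
from statistics import mean

def slowest_jobs(runs, top=3):
    """runs: list of {job_name: seconds} dicts, one per pipeline run.
    Returns [(job, mean_seconds)] sorted slowest first."""
    totals = {}
    for run in runs:
        for job, secs in run.items():
            totals.setdefault(job, []).append(secs)
    ranked = sorted(((job, mean(v)) for job, v in totals.items()),
                    key=lambda kv: kv[1], reverse=True)
    return ranked[:top]
```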

At HealthJoy, the developers and architects have started explicitly identifying and planning for software design best practices that will continually increase the frequency, quality and security of deployments. To achieve that, engineering team members have started collaborating across the engineering divisions horizontally.

“One of the biggest barriers to changing processes outside of people and politics is the lack of tools that support modern processes,” said Stephen Magill, CEO of continuous assurance platform provider MuseDev. “To be most effective, teams need to address people, processes and technology together as part of their transformations.”


Different teams have different favorite tools, which can be a barrier to a standardized pipeline that, unlike a patchwork of tools, can provide end-to-end visibility and ensure consistent processes throughout the SDLC with automation.

“Age and diversity of existing tools slow down migration to newer and more standardized technologies. For example, large organizations often have ancient SVN servers scattered about and integration tools are often cobbled together and fragile,” said MuseDev’s Magill. “Many third-party tools pre-date the DevOps movement and so are not easily integrated into a modern Agile development workflow.”

Integration is critical to the health and capabilities of the pipeline and necessary to achieve pipeline automation.

Stephen Magill, MuseDev

“The most important thing to automate, which is often overlooked, is automating and streamlining the process of getting results to developers without interrupting their workflow,” said MuseDev’s Magill. “For example, when static code analysis is automated, it usually runs in a manner that reports results to security teams or logs results in an issue tracker. Triaging these issues becomes a labor-intensive process and results become decoupled from the code change that introduced them.”

Instead, such results should be reported directly to developers as part of code review since developers can easily fix issues at that point in the development process. Moreover, they can do so without involving other parties, although Magill underscored the need for developers, QA, and security to mutually have input into which analysis tools are integrated into the development process.

GitLab’s Johnson said the upfront investment in automation should be a default decision and that the developer experience must be good enough for developers to rely on the automation.

“I’d advise adding things like unit tests, necessary integration tests, and sufficient monitoring to your ‘definition of done’ so no feature, service or application is launched without the fundamentals needed to drive efficient CI/CD,” said Johnson. “If you’re running a monorepo and/or microservices, you’re going to need some logic to determine what integration tests you need to run at the right times. You don’t want to spin up and run every integration test you have in unaffected services just because you changed one line of code.”
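That change-to-test selection logic can be sketched simply: map changed file paths to the services they affect, treating shared libraries as affecting everything. The directory layout and service names below are assumptions for illustration, not a real repository structure.

```python
# Toy monorepo test selector: decide which services' integration
# tests to run based on changed file paths. Layout is invented.
SERVICE_DIRS = {
    "services/billing/": "billing",
    "services/auth/": "auth",
    "libs/common/": "*",  # shared library: affects every service
}

ALL_SERVICES = {"billing", "auth", "catalog"}

def services_to_test(changed_paths):
    """Return the set of services whose integration tests should run."""
    affected = set()
    for path in changed_paths:
        for prefix, service in SERVICE_DIRS.items():
            if path.startswith(prefix):
                if service == "*":
                    return set(ALL_SERVICES)
                affected.add(service)
    return affected
```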

At Infragistics, the lack of a standard communication mechanism became an issue. About five years ago, the company had a mix of Yammer, Slack and AOL Instant Messenger.

“I don’t want silos. It took a good 12 months or more to get people weaned off those tools and on to one tool, but five years later everyone is using [Microsoft] Teams,” said Infragistics’ Beres. “When everyone is standardized on a tool like that the conversation is very fluid.”

HealthJoy encourages its engineers to stay on top of the latest software principles, technologies and practices for a CI/CD pipeline, which includes experimenting with new CI/CD tools. They’re also empowered to effect grassroots transformation through POCs and to share knowledge of the CI/CD pipeline and advancements through collaborative experimentation, internal knowledge bases and tech talks.

In fact, the architects, developers and QA team members have started collaborating across the engineering divisions to continually plan and improve the build, test, deploy and monitoring activities as integral parts of product delivery. And the DevOps engineers have started collaborating in the SDLC and using tools and technologies that allow developers to deliver and support products without the barrier the company once had between developers and operations… Read more »


“All sectors can benefit from a simulated targeted attack”

Red teaming and targeted attack simulations may sit at the proverbial finish line for an organization, but a red team engagement is still a valuable end goal. It forces organizations to look at their own security from a threat-based approach rather than a risk-based one, in which the past largely defines the future.

On the surface, a Red Team exercise appears like a scene straight out of a Hollywood movie. Spies masquerading as employees walking straight into the office so instinctively that no one bats an eye. Plugging things into your devices that are not supposed to be there. Tapping cameras, telephones, microphones, rolling out emails, or even walking around with a banana so you may assume the new guy/girl didn’t have time to grab a proper lunch. By the time you figure out they weren’t supposed to be where they were, it’s already too late. And the only sigh of relief is the fact that they were on your side — and they were working for you.

So, before your company humors itself with a Red Team assessment, it might be useful to talk to an expert about it. And for that, we have Tom Van de Wiele, Principal Security Consultant at F-Secure. With nearly 20 years of experience in information security, Tom specializes in red team operations and targeted penetration testing for the financial, gaming, and service industries. When not breaking into banks, Tom acts as an adviser on topics such as critical infrastructure and IoT as well as incident response and cybercrime. With a team that has a 100% success rate in overcoming targeted organizations’ combined physical and cybersecurity defenses to end up in places they should never be, Tom is possibly one of the best red team experts in the world. In an exclusive interview with Augustin Kurian of CISO MAG, Tom discusses key questions a company should ask before it engages in a Red Team assessment.

It is often said that Red Teaming is much better than regular penetration testing. What are your thoughts on that?

Red teaming, penetration testing, source code review, vulnerability scanning and other facets of testing all play a key part in establishing an organization’s level of control and maturity. They have different purposes, strengths and limitations. A penetration test is usually limited and focused on a certain aspect of the business, e.g., a certain network, application, building or IT asset; a red team test is based on the attacker’s choice and discretion on what to target and when, keeping in mind the objectives and goals the client wants simulated and what is relevant to them. That means anything with the company logo on it could be in scope for the test, keeping in mind ethics, local and international laws, and good taste.

In general, Red Team Testing is only for organizations that have already established a certain maturity and resilience against opportunistic and targeted attacks. This resilience can be expressed in many ways, so we want to make sure we perform the test at the right time and place for our clients, to ensure they get value out of it. The goals are threefold: to increase the organization’s detection capabilities for relevant attack scenarios, to ensure that certain attack scenarios become impossible, and to improve response and containment times so that a future attack can be dealt with swiftly and with limited impact. Ultimately, all efforts should be focused on an “assume breach” mentality while increasing the cost of attack for a would-be attacker.

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part. For instance, just because you haven’t been hit by ransomware in the past, doesn’t mean you won’t get impacted by one in the future. “Forcing” organizations to look at their own structure and how they handle their daily operations and business continuity as part of threat modeling, sometimes brings surprising results in positive or negative form. But at the end of the day, everyone is better off knowing what the risks might be of certain aspects of the business, so that an organization can take better business decisions, for better or for worse, while they structure a plan on how to handle whatever it is that is causing concern to stakeholders.

When should a company realize that it is an apt time to hold a Red Team assessment? What kinds of industries should invest in Red Teaming? If so, how frequent should the Red Teaming assessment be? Should it be a yearly process, half-yearly, quarterly, or a continuous one? How often do you do one for your clients?

All sectors can benefit from a simulated targeted attack to test the sum of their security controls, as all business sectors have something to protect or care about, be it customer data, credibility, funds, intellectual property, disruption scenarios, industrial espionage, etc. What kind of testing and how frequently depends on the maturity of the organization, its size, and how much they regard information security as a key part of their organization, rather than a costly afterthought, which unfortunately is still the case for a lot of organizations.

Major financial institutions will usually schedule a red team engagement every 1 – 1.5 years or so.

In between those, a number of other initiatives are held periodically to keep track of the current attack surface and threat landscape, and to understand where the business is going versus what technology, processes and training are required to keep risk at an acceptable level. As part of its own due diligence, an organization needs to ensure that networks and applications receive different levels of scrutiny using a combination of preventive and reactive efforts, e.g., architecture reviews, threat modeling, vulnerability scanning, source code review and attack path mapping, just to name a few… Read more »

This article first appeared in CISO MAG.
