Meet Andrea Blubaugh: Cloud Expert of the Month – February 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Andrea Blubaugh.

Andrea has more than 15 years of experience facilitating the design, implementation and ongoing management of data center, cloud and WAN solutions. Her reputation for architecting solutions for organizations of all sizes and verticals – from Fortune 100 to SMBs – has earned her numerous awards and honors. With a specific focus on the mid to enterprise space, Andrea works closely with IT teams as a true client advocate, consistently meeting, and often exceeding, expectations. As a result, she maintains strong client and provider relationships spanning the length of her career.

When did you join Cloud Girls and why?  

Wow, it’s been a long time! I believe it was 2014 or 2015 when I joined Cloud Girls. I had come to know Manon through work and was impressed by her and excited to join a group of women in the technology space.

What do you value about being a Cloud Girl?  

Getting to know and develop friendships with the fellow Cloud Girls over the years has been a real joy. It’s been a great platform for learning on both a professional and personal level.

What advice would you give to your younger self at the start of your career?  

I would reassure my younger self in her decisions and encourage her to keep taking risks. I would also tell her to not sweat the losses so much. They tend to fade pretty quickly.

What’s your favorite inspirational quote?  

“Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do, so throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover.”  –Mark Twain

What one piece of advice would you share with young women to encourage them to take a seat at the table?  

I was very fortunate early on in my career to work for a startup whose leadership saw promise in my abilities that I didn’t yet see myself. I struggled with the decision to take a leadership role as I didn’t feel “ready” or that I had the right or enough experience. I received some good advice that I had to do what ultimately felt right to me, but that turning down an opportunity based on a fear of failure wouldn’t ensure there would be another one when I felt the time was right. My advice is if you’re offered that seat, and you want that seat, take it.

What’s one item on your bucket list and why?..[…] Read more »…..

 

 

What types of cybersecurity skills can you learn in a cyber range?

What is a cyber range?

A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.

A cyber range is a valuable tool because it provides experience using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description of how to use a particular tool or handle a certain scenario, a cyber range allows students to do it themselves.

What skills can you learn in a cyber range?

A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.

SIEM, IDS/IPS and firewall management

Deploying certain cybersecurity solutions — such as a SIEM, an IDS/IPS and a firewall — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.

A cyber range can walk students through the steps of properly configuring the most common solutions. These steps include deployment locations, configuration settings and the rules and policies used to identify and block potentially malicious content.
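As a minimal illustration of that kind of exercise, the Python sketch below audits a toy firewall ruleset for two classic misconfigurations: an overly permissive "allow any" rule and a missing default-deny at the end. The rule format is made up for the example; a real lab would work against an actual firewall or IDS product rather than this simplified model.

```python
# Illustrative sketch only: a toy firewall-rule audit of the kind a cyber
# range exercise might walk through. The rule format is hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    src: str      # source address or "any"
    dst: str      # destination address or "any"
    port: str     # destination port or "any"

def audit_ruleset(rules: list[Rule]) -> list[str]:
    """Return human-readable findings for common misconfigurations."""
    findings = []
    # A permissive catch-all rule exposes every service behind the firewall.
    for i, r in enumerate(rules):
        if r.action == "allow" and r.src == "any" and r.dst == "any" and r.port == "any":
            findings.append(f"Rule {i}: 'allow any any any' is overly permissive")
    # Best practice is to end with an explicit default-deny rule.
    if not rules or rules[-1].action != "deny":
        findings.append("Ruleset does not end with a default-deny rule")
    return findings

if __name__ == "__main__":
    ruleset = [
        Rule("allow", "10.0.0.0/8", "any", "443"),
        Rule("allow", "any", "any", "any"),   # deliberate misconfiguration
    ]
    for finding in audit_ruleset(ruleset):
        print(finding)
```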

Incident response

After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.

A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.
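A hedged sketch of one such exercise appears below: scanning a log file for known indicators of compromise. The IOC values and the log path are hypothetical placeholders, not real threat data.

```python
# Minimal sketch: scan a log file for known indicators of compromise (IOCs).
# The IOC values and log path below are hypothetical examples.

import re

KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}        # example watchlist
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # example hash value

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_PATTERN = re.compile(r"\b[a-fA-F0-9]{32}\b")

def find_iocs(line: str) -> list[str]:
    """Return any known-bad IPs or hashes found in a single log line."""
    hits = [ip for ip in IP_PATTERN.findall(line) if ip in KNOWN_BAD_IPS]
    hits += [h for h in MD5_PATTERN.findall(line) if h.lower() in KNOWN_BAD_HASHES]
    return hits

if __name__ == "__main__":
    with open("auth.log", encoding="utf-8", errors="ignore") as logfile:
        for lineno, line in enumerate(logfile, start=1):
            for ioc in find_iocs(line):
                print(f"line {lineno}: matched IOC {ioc}")
```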

Operating system management: Linux and Windows

Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.

A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.
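For illustration, the short sketch below checks a few Linux kernel (sysctl) hardening settings by reading /proc/sys, the kind of check a range lab might have students automate. The expected values are common recommendations used here as assumptions, not a complete hardening baseline.

```python
# Illustrative sketch: verify a handful of Linux sysctl hardening values.
# The expected values are common recommendations, not a full baseline.

from pathlib import Path

EXPECTED = {
    "net/ipv4/ip_forward": "0",                 # don't route packets unless acting as a router
    "net/ipv4/conf/all/accept_redirects": "0",  # ignore ICMP redirects
    "kernel/randomize_va_space": "2",           # full address-space layout randomization
}

def check_sysctl(expected: dict[str, str]) -> None:
    for key, want in expected.items():
        path = Path("/proc/sys") / key
        try:
            got = path.read_text().strip()
        except OSError:
            print(f"{key}: not available on this system")
            continue
        status = "OK" if got == want else f"MISMATCH (expected {want}, found {got})"
        print(f"{key}: {status}")

if __name__ == "__main__":
    check_sysctl(EXPECTED)
```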

Endpoint controls and protection

As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.

Penetration testing

This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.

A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
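As a simple example of that first step, the sketch below performs a basic TCP connect scan with Python's standard socket library. The target address is a hypothetical lab VM; a scan like this should only ever be run against systems you are authorized to test.

```python
# Minimal sketch of a TCP connect scan, the kind of first step a cyber range
# pen-testing exercise might have students reproduce before reaching for a
# full-featured scanner. Only run against authorized lab systems.

import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the list of ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = "10.10.10.5"   # hypothetical vulnerable lab VM
    print(scan_ports(target, range(1, 1025)))
```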

Network management

Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.

A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.

Malware analysis

Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization’s incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.

Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
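The sketch below illustrates just the "searching for strings" step: it extracts printable ASCII runs from a binary, much like the Unix strings utility. The sample path is hypothetical, and real malware should only ever be handled inside an isolated analysis environment.

```python
# Toy sketch of the "searching for strings" step in basic malware analysis:
# extract printable ASCII runs from a binary, similar to the `strings` tool.
# The sample path is hypothetical.

import re
import sys

STRING_RE = re.compile(rb"[ -~]{6,}")   # runs of 6+ printable ASCII bytes

def extract_strings(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [m.group().decode("ascii") for m in STRING_RE.finditer(data)]

if __name__ == "__main__":
    sample = sys.argv[1] if len(sys.argv) > 1 else "suspect_sample.bin"
    for s in extract_strings(sample):
        # URLs, registry keys and IP addresses often stand out in this output.
        print(s)
```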

Threat hunting

Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization’s environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.

A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.

Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
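As one toy example of such a heuristic, the sketch below flags parent/child process pairs that occur rarely in a set of process-creation events. The CSV layout (parent and child columns) is a hypothetical export format, not a specific product's schema.

```python
# Hedged sketch of a simple threat-hunting heuristic: flag parent/child
# process pairs that appear rarely across process-creation events.
# The CSV layout is a hypothetical export format.

import csv
from collections import Counter

def rare_process_pairs(path: str, threshold: int = 3) -> list[tuple[str, str]]:
    """Return (parent, child) pairs seen fewer than `threshold` times."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(row["parent"], row["child"])] += 1
    return [pair for pair, n in counts.items() if n < threshold]

if __name__ == "__main__":
    for parent, child in rare_process_pairs("process_events.csv"):
        # Rare pairs such as winword.exe spawning powershell.exe deserve a closer look.
        print(f"unusual: {parent} -> {child}")
```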

Computer forensics

Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics…[…] Read more »….

 

In an Uncertain World, You Can Count on These Four Trends in 2021

As leaders look to the year ahead, planning and predictions have taken on a whole new meaning in a post-pandemic world. With so many unknowns in 2021, how can anyone claim to know what’s coming with confidence?

However, if 2020 taught us nothing else, it is that major, unforeseen disruption does result in one certainty: enterprises must be more responsive. The pandemic shone a bright and unflattering spotlight on where companies need to update their IT infrastructure – both to contend with current challenges and to become a modern company. Despite cloud adoption, resilience and business continuity gradually fell to the bottom of enterprises’ infrastructure to-do lists in recent years. Now, companies are playing catch-up to take hold of the future that has arrived on their doorstep, a few years early.

While 2021 feels largely uncharted, there are a few trends that are bound to define this year’s plans and investments. Companies that capitalize on these trends will position themselves not only to be competitive and stronger than ever, but to respond when the next disruption hits.

Reinforced mainframe recruiting and retention

We have witnessed the mainframe skills gap widen in recent years. As more mainframe pros retire, taking their knowledge with them, too little focus has been put on enabling emerging professionals to step in. Enterprises must recalibrate: the mainframe is not going away, and the workforce needs to shift. Mainframe spend continues to grow at above 3% yearly, and upwards of 50% of organizations in several industries (financial services, insurance, public sector and more) state it is running more of their business-critical applications than ever before.

To fill these vacant positions with new talent, enterprises need to adjust their approach to the mainframe. Emerging IT pros and college graduates don’t want to work on outdated interfaces. They prefer interfaces that are intuitive and allow for clicking and dragging. They want to work for a company with a highly collaborative and communicative culture, as well as modern processes and tools such as DevOps and related tool chains. DevOps enables a culture of collaboration, reduces siloes, automates for efficiency and, yes, can be applied to the mainframe. Enterprises that modernize the mainframe and related operations will close the skills gap in 2021 and usher in a new wave of innovation.

A call for COBOL programmers

COBOL (Common Business-Oriented Language) is not a dead programming language – far from it. A 2018 report for the Social Security Administration found that the administration maintained more than 60 million lines of COBOL. In fact, many government agencies rely on COBOL programs, but – no surprise – there are too few programmers to tackle them. The lack of COBOL programmers has been adding stress to the mainframe, especially for government agencies, which will only escalate over the next six to eight months.

COBOL development isn’t inherently the issue – most IT pros who can code Python can learn COBOL. The bigger problem is understanding how the programs work and what they’re doing. It requires a person who has application understanding and ample tribal knowledge. The second issue is good quality assurance. Many enterprises lack the ability to test applications and ensure nothing is at risk of breaking. COBOL coders need an easy way to digest 10,000 lines of code, break it down and understand it. They need the right tools to avoid working unreasonable hours, battling rampant inefficiency and risking project failure. With these tools, enterprises can tackle COBOL programs in a way that is manageable for people both new to and experienced with COBOL.

A shift towards value stream management

As enterprises work to better align the IT and business sides of the business, including embracing agile and DevOps, value stream management is stepping into the spotlight. Up until recently, most organizations were focused on workload automation and scheduling, where they could automate certain parts of a system and schedule related jobs. However, workload automation solutions no longer fully meet most enterprises’ needs as IT moves toward DevOps: teams want to move faster and require more visibility as they orchestrate automations across multiple systems, technologies and platforms.

With value stream management, enterprises take a step beyond automating a particular “job” to instead orchestrating the automation of multiple jobs and tasks, across multiple applications and systems, to streamline a process or value stream. Many value streams are currently “in the dark,” managed manually by resources focused on ensuring a job runs to completion. With orchestration, enterprises can design and visualize their value streams, create workflows tying them together and collect metrics to figure out where improvements can be made. This visibility will allow enterprises to deliver value faster and innovate more quickly – key advantages in becoming a more responsive enterprise. This orchestration helps eliminate silos and create greater transparency, enabling IT pros to find issues and remove bottlenecks faster. Enterprises can even orchestrate a DevOps toolchain, so they can kick off the creation of code and orchestrate its delivery to meet demands.

The hyperautomation takeover

What is the difference between automation and hyperautomation? According to Gartner, hyperautomation applies advanced technologies like robotic process automation (RPA), artificial intelligence (AI), machine learning (ML) and more to enable the automation of virtually any repetitive task. In the age of efficiency and productivity, it is not hard to see why this trend is taking off.

The pandemic highlighted several areas within the enterprise that would benefit from hyperautomation. For instance, many companies put new workflows in place for COVID-19 tracking. HR departments need to monitor which employees are physically safe, which are not and what their IT needs are, especially with a remote workforce. While many companies were once reluctant to dabble in workflow automation with their content services solutions, never mind value stream management, COVID-19 has forced companies – and HR specifically – to reevaluate where they need automation capabilities, especially with extra processes added by the pandemic.

Automation will also be increasingly pertinent as companies apply DevOps across the organization, especially when integrating the mainframe into the DevOps toolchain. Using hyperautomation, enterprises can integrate tools that allow for continuous delivery and bring processes into a modern culture…[…] Read more »

 

3 key reasons why SOCs should implement policies over security standards

In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.

How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.

Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and how they can be dramatically improved with new approaches and tooling.

How Security Standards Look Today

Security standards affect all aspects of a business, from directly affecting development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.

There are many challenges that come with this approach, all rooted in 3 main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the reactive, rather than proactive, issue handling that security standards promote.

We can greatly improve the security landscape by directly addressing these key issues:

1. Information Security and Compliance is Siloed

In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.

Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.

If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.

2. Security Standards are Too Generic

Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.

With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.

Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.

3. Current Practices are Reactive, Rather Than Proactive

The existing gap between dev and security teams, along with the generic nature of security standards, prevents organizations from being truly proactive when it comes to security measures.

Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.

The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.

Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.

DevSecOps – A New Way Forward

The emergence of DevSecOps is showing that generic top-to-bottom security standards may soon be less important than they are now.

First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.

In outlining the rise and success of DevSecOps, a recent article gave three defining criteria of a true DevSecOps environment:

  1. Developers are in charge of security testing.
  2. Security experts act as consultants to developers when additional knowledge is required.
  3. Fixing security issues is managed by the development team.

Ongoing security-related issues are owned by the development team..[…] Read more »….

 

 

Making CI/CD Work for DevOps Teams

Many DevOps teams are advancing to CI/CD, some more gracefully than others. Recognizing common pitfalls and following best practices helps.

Agile, DevOps and CI/CD have all been driven by the competitive need to deliver value faster to customers. Each advancement requires some changes to processes, tools, technology and culture, although not all teams approach the shift holistically. Some focus on tools hoping to drive process changes when process changes and goals should drive tool selection. More fundamentally, teams need to adopt an increasingly inclusive mindset that overcomes traditional organizational barriers and tech-related silos so the DevOps team can achieve an automated end-to-end CI/CD pipeline.

Most organizations begin with Agile and advance to DevOps. The next step is usually CI, followed by CD, but the journey doesn’t end there because bottlenecks such as testing and security eventually become obvious.

At benefits experience platform provider HealthJoy, the DevOps team sat between Dev and Ops, maintaining a separation between the two. The DevOps team accepted builds from developers in the form of Docker images via Docker Hub. They also automated downstream Ops tasks in the CI/CD pipeline, such as deploying the software builds in AWS.

Sajal Dam, HealthJoy

“Although it’s a good approach for adopting CI/CD, it misses the fact that the objective of a DevOps team is to break the barriers between Dev and Ops by collaborating with the rest of software engineering across the whole value stream of the CI/CD pipeline, not just automating Ops tasks,” said Sajal Dam, VP of engineering at HealthJoy.

Following are a few of the common challenges and advice for dealing with them.

People

People are naturally change resistant, but change is a constant when it comes to software development and delivery tools and processes.

“I’ve found the best path is to first work with a team that is excited about the change or new technology and who has the time and opportunity to redo their tooling,” said Eric Johnson, EVP of Engineering at DevOps platform provider GitLab. “Next, use their success [such as] lower cost, higher output, better quality, etc. as an example to convert the bulk of the remaining teams when it’s convenient for them to make a switch.”

Eric Johnson, GitLab

The most fundamental people-related issue is having a culture that enables CI/CD success.
“The success of CI/CD [at] HealthJoy depends on cultivating a culture where CI/CD is not just a collection of tools and technologies for DevOps engineers but a set of principles and practices that are fully embraced by everyone in engineering to continually improve delivery throughput and operational stability,” said HealthJoy’s Dam.

At HealthJoy, the integration of CI/CD throughout the SDLC requires the rest of engineering to closely collaborate with DevOps engineers to continually transform the build, testing, deployment and monitoring activities into a repeatable set of CI/CD process steps. For example, they’ve shifted quality controls left and automated the process using DevOps principles, practices and tools.

Component provider Infragistics changed its hiring approach. Specifically, instead of hiring experts in one area, the company now looks for people with skill sets that meld well with the team.

“All of a sudden, you’ve got HR involved and marketing involved because if we don’t include marketing in every aspect of software delivery, how are they going to know what to market?” said Jason Beres, SVP of developer tools at Infragistics. “In a DevOps team, you need a director, managers, product owners, team leads and team building where it may not have been before. We also have a budget to ensure we’re training people correctly and that people are moving ahead in their careers.”

 

Jason Beres, Infragistics

 

Effective leadership is important.

“[A]s the head of engineering, I need to play a key role in cultivating and nurturing the DevOps culture across the engineering team,” said HealthJoy’s Dam. “[O]ne of my key responsibilities is to coach and support people from all engineering divisions to continually benefit from DevOps principles and practices for an end-to-end, automated CI/CD pipeline.”

Processes

Processes should be refined as necessary, accelerated through automation and continuously monitored so they can be improved over time.

“When problems or errors arise and need to be sent back to the developer, it becomes difficult to troubleshoot because the code isn’t fresh in their mind. They have to stop working on their current project and go back to the previous code to troubleshoot,” said Gitlab’s Johnson. “In addition to wasting time and money, this is demoralizing for the developer who isn’t seeing the fruit of their labor.”

Johnson also said teams should start their transition by identifying bottlenecks and common failures in their pipelines. The easiest indicators to check pipeline inefficiencies are the runtimes of the jobs, stages and the total runtime of the pipeline itself. To avoid slowdowns or frequent failures, teams should look for problematic patterns with failed jobs.
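A small sketch of that kind of check follows: sum job runtimes per stage and list the slowest stages first. The job records are hypothetical; in practice they would come from the CI system's API or exported pipeline metrics.

```python
# Sketch of a basic pipeline bottleneck check: total job runtimes per stage,
# sorted slowest-first. The job records below are hypothetical examples.

from collections import defaultdict

jobs = [
    {"stage": "build",  "name": "compile",           "seconds": 240},
    {"stage": "test",   "name": "unit-tests",        "seconds": 180},
    {"stage": "test",   "name": "integration-tests", "seconds": 960},
    {"stage": "deploy", "name": "deploy-staging",    "seconds": 120},
]

stage_totals = defaultdict(int)
for job in jobs:
    stage_totals[job["stage"]] += job["seconds"]

# Print stages from slowest to fastest so bottlenecks stand out first.
for stage, total in sorted(stage_totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage}: {total // 60} min {total % 60} s")
```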

At HealthJoy, the developers and architects have started explicitly identifying and planning for software design best practices that will continually increase the frequency, quality and security of deployments. To achieve that, engineering team members have started collaborating across the engineering divisions horizontally.

“One of the biggest barriers to changing processes outside of people and politics is the lack of tools that support modern processes,” said Stephen Magill, CEO of continuous assurance platform provider MuseDev. “To be most effective, teams need to address people, processes and technology together as part of their transformations.”

Technology

Different teams have different favorite tools that can serve as a barrier to a standardized pipeline which, unlike a patchwork of tools, can provide end-to-end visibility and ensure consistent processes throughout the SDLC with automation.

“Age and diversity of existing tools slow down migration to newer and more standardized technologies. For example, large organizations often have ancient SVN servers scattered about and integration tools are often cobbled together and fragile,” said MuseDev’s Magill. “Many third-party tools pre-date the DevOps movement and so are not easily integrated into a modern Agile development workflow.”

Integration is critical to the health and capabilities of the pipeline and necessary to achieve pipeline automation.

Stephen Magill, MuseDev

“The most important thing to automate, which is often overlooked, is automating and streamlining the process of getting results to developers without interrupting their workflow,” said MuseDev’s Magill. “For example, when static code analysis is automated, it usually runs in a manner that reports results to security teams or logs results in an issue tracker. Triaging these issues becomes a labor-intensive process and results become decoupled from the code change that introduced them.”

Instead, such results should be reported directly to developers as part of code review since developers can easily fix issues at that point in the development process. Moreover, they can do so without involving other parties, although Magill underscored the need for developers, QA, and security to mutually have input into which analysis tools are integrated into the development process.

GitLab’s Johnson said the upfront investment in automation should be a default decision and that the developer experience must be good enough for developers to rely on the automation.

“I’d advise adding things like unit tests, necessary integration tests, and sufficient monitoring to your ‘definition of done’ so no feature, service or application is launched without the fundamentals needed to drive efficient CI/CD,” said Johnson. “If you’re running a monorepo and/or microservices, you’re going to need some logic to determine what integration tests you need to run at the right times. You don’t want to spin up and run every integration test you have in unaffected services just because you changed one line of code.”
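A hedged sketch of that idea: map changed file paths to the services whose integration tests need to run, falling back to the full suite for shared code. The path-to-service mapping below is a made-up example, not GitLab's own mechanism.

```python
# Sketch of "run only the affected integration tests": map changed paths to
# services and their test suites. The mapping is an illustrative assumption.

SERVICE_PREFIXES = {
    "services/billing/": "billing",
    "services/auth/": "auth",
    "shared/": None,   # shared code: fall back to running everything
}

def tests_to_run(changed_files: list[str]) -> set[str]:
    affected: set[str] = set()
    for path in changed_files:
        for prefix, service in SERVICE_PREFIXES.items():
            if path.startswith(prefix):
                if service is None:
                    return {"all"}       # shared change: run the full suite
                affected.add(service)
    return affected or {"all"}           # unknown paths: be conservative

if __name__ == "__main__":
    print(tests_to_run(["services/billing/invoice.py"]))   # {'billing'}
    print(tests_to_run(["shared/models.py"]))               # {'all'}
```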

At Infragistics, the lack of a standard communication mechanism became an issue. About five years ago, the company had a mix of Yammer, Slack and AOL Instant Messenger.

“I don’t want silos. It took a good 12 months or more to get people weaned off those tools and on to one tool, but five years later everyone is using [Microsoft] Teams,” said Infragistics’ Beres. “When everyone is standardized on a tool like that, the conversation is very fluid.”

HealthJoy encourages its engineers to stay on top of the latest software principles, technologies and practices for a CI/CD pipeline, which includes experimenting with new CI/CD tools. They’re also empowered to affect grassroots transformation through POCs and share knowledge of the CI/CD pipeline and advancements through collaborative experimentation, internal knowledge bases, and tech talks.

In fact, the architects, developers and QA team members have started collaborating across the engineering divisions to continually plan and improve the build, test, deploy, and monitoring activities as integral parts of product delivery. And the DevOps engineers have started collaborating in the SDLC and using tools and technologies that allow developers to deliver and support products without the barrier the company once had between developers and operations..[…] Read more »…..

 

“All sectors can benefit from a simulated targeted attack”

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part.

On the surface, a Red Team exercise appears like a scene straight out of a Hollywood movie. Spies masquerading as employees walking straight into the office so instinctively that no one bats an eye. Plugging things into your devices that are not supposed to be there. Tapping cameras, telephones, microphones, rolling out emails, or even walking around with a banana so you may assume the new guy/girl didn’t have time to grab a proper lunch. By the time you figure out they weren’t supposed to be where they were, it’s already too late. And the only sigh of relief is the fact that they were on your side — and they were working for you.

So, before your company humors itself with a Red Team assessment, it might be of use to talk to an expert about it. And for that, we have Tom Van de Wiele, Principal Security Consultant at F-Secure. With nearly 20 years of experience in information security, Tom specializes in red team operations and targeted penetration testing for the financial, gaming, and service industries. When not breaking into banks, Tom acts as an adviser on topics such as critical infrastructure and IoT as well as incident response and cybercrime. With a team that has a 100% success rate in overcoming the combination of targeted organizations’ physical and cybersecurity defenses to end up in places they should never be, Tom is possibly one of the best red team experts in the world. In an exclusive interview with Augustin Kurian of CISO MAG, Tom discusses key questions a company should ask before it engages in a Red Team assessment.

It is often said that Red Teaming is much better than regular penetration testing. What are your thoughts about that?

Red teaming, penetration testing, source code review, vulnerability scanning, and other facets of testing play a key part in trying to establish the level of control and maturity of an organization. They all have different purposes, strengths, and limitations. A penetration test is usually limited and only focused on a certain aspect of the business, e.g. a certain network, application, building, or IT asset; a red team test is based on the attacker’s choice and discretion on what to target and when, keeping in mind the actual objectives and goals of what the client wants to have simulated and what is relevant to them. That means anything with the company logo on it could be in scope for the test — keeping in mind ethics, local and international laws, and good taste.

In general, Red Team Testing is only for organizations that have already established a certain maturity and resilience when it comes to opportunistic and targeted attacks. This resilience can be expressed in many ways, hence we want to make sure that we are performing it at the right time and place for our clients, to ensure they get value out of it. The goals are three-fold: to increase the detection capabilities of the organization tailored towards relevant attack scenarios, to ensure that certain attack scenarios become impossible, and to improve response and containment times so that a future attack can be dealt with swiftly and with limited impact. Ultimately, all efforts should be focused on an “assume breach” mentality while increasing the cost of attack for a would-be attacker.

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part. For instance, just because you haven’t been hit by ransomware in the past doesn’t mean you won’t get impacted by one in the future. “Forcing” organizations to look at their own structure and how they handle their daily operations and business continuity as part of threat modeling sometimes brings surprising results in positive or negative form. But at the end of the day, everyone is better off knowing what the risks might be of certain aspects of the business, so that an organization can make better business decisions, for better or for worse, while they structure a plan on how to handle whatever it is that is causing concern to stakeholders.

When should a company realize that it is an apt time to hold a Red Team assessment? What kinds of industries should invest in Red Teaming? If so, how frequent should the Red Teaming assessment be? Should it be a yearly process, half-yearly, quarterly, or a continuous one? How often do you do one for your clients?

All sectors can benefit from a simulated targeted attack to test the sum of their security controls, as all business sectors have something to protect or care about, be it customer data, credibility, funds, intellectual property, disruption scenarios, industrial espionage, etc. What kind of testing and how frequently depends on the maturity of the organization, its size, and how much they regard information security as a key part of their organization, rather than a costly afterthought, which unfortunately is still the case for a lot of organizations.

Major financial institutions will usually schedule a red team engagement every 1 – 1.5 years or so.

In between those, a number of other initiatives are held on a periodical basis in order to keep track of the current attack surface, the current threat landscape as well as trying to understand where the business is going versus what technology, processes, and training are required to ensure risk can be kept at an acceptable level. As part of an organization’s own due diligence, it needs to ensure that networks and applications receive different levels of scrutiny using a combination of preventive and reactive efforts, e.g., architecture reviews, threat modeling, vulnerability scanning, source code review, and attack path mapping, just to name a few..[…] Read more »…..

This article first appeared in CISO MAG (www.cisomag.com).

Cybersecurity Predictions For 2021

Here we are again for the annual predictions of the trends and events that will impact the cybersecurity landscape in 2021. Let’s try to predict which will be the threats and bad actors that will shape the landscape in the next 12 months. I’ve put together a list of the seven top cybersecurity trends that you should be aware of.

#1 Ransomware attacks on the rise

In the past months we have observed an unprecedented surge of ransomware attacks that hit major businesses and organizations across the world. The number of attacks will continue to increase in 2021, and threat actors will use prominent botnets like Trickbot to deliver their ransomware. Security experts will also observe a dramatic increase in human-operated attacks, in which threat actors exploit known vulnerabilities in targeted systems in order to gain access to the target networks. Once they have gained access to these networks, operators will manually deploy the ransomware. School districts and municipalities will be privileged targets of cybercriminal organizations because they have limited resources and poor cyber hygiene.

In the first quarter of 2021, a growing number of organizations will continue to allow their employees to remotely access their resources in response to the ongoing COVID-19 pandemic, thus enlarging their surface of attacks.

Most human-operated attacks will be targeted; ransomware operators will carefully choose their victims in order to maximize the return on their efforts.

The ransomware-as-a-service model will allow networks of affiliates to arrange their own campaigns that will hit end users and SMEs worldwide.

#2 The return of cyber attacks on cryptocurrency industry

The number of cyber-attacks against organizations and businesses in the cryptocurrency industry will surge again in the first months of 2021 due to a new increase in the value of currencies such as Bitcoin.

Cryptocurrency exchanges and platforms will be targeted by both cybercrime organizations and nation-state actors attempting to monetize their efforts.

If the values of the major cryptocurrencies continue to increase, we will observe new malware specifically designed to steal cryptocurrencies from victims’ wallets, along with new phishing campaigns targeting users of cryptocurrency platforms.

#3 Crimeware-as-a-service even more efficient

In the Crimeware-as-a-Service (CaaS) model, cybercriminals offer their advanced tools and services for sale or rent to other, less skilled criminals. CaaS is having a significant effect on the threat landscape because it lowers the bar for inexperienced threat actors to launch sophisticated cyber attacks.

The CaaS model will continue to enable both technically inexperienced criminals and APT groups to rapidly arrange sophisticated attacks. The most profitable services that will be offered using this model in 2021 are ransomware and malware attacks.

CaaS allows advanced threat actors to rapidly arrange hit-and-run operations and makes attribution difficult. In 2021, major botnet operations, such as Emotet and Trickbot, will continue to infect devices worldwide.

In the next months we will witness the growth of remote access markets that allow attackers to exchange access credentials to compromised networks and services. These services expose organizations to a broad range of cyber threats, including malware, ransomware and e-skimming.

#4 Cyberbullying: too many people suffer in silence

Words can cause more damage than weapons; we cannot underestimate this threat, and technology can exacerbate the danger. Cyberbullying refers to the practice of using technology to harass, or bully, someone else.

The term cyberbullying is an umbrella term for different kinds of online abuse, some of which are rapidly increasing, such as doxing, cyberstalking, and revenge porn.

Authorities and media are approaching the problem with increasing interest, but evidently it is not enough.

This criminal practice represents one of the greatest dangers of the Internet; it can have a devastating impact on teenagers.

In the upcoming months, the problem of cyberbullying will impact the online gaming community more than ever, reaching worrisome levels.

#5 State-sponsored hacking, all against all

In 2021, cyber attacks carried out by state-sponsored hackers will cause significant damage to the targeted organizations.

The number of targeted attacks against government organizations and critical infrastructure will increase, pushing states to promote a global dialogue about the risks connected to these campaigns.

The healthcare and pharmaceutical sectors, as well as the academic and financial industries, will be under attack.

Nation-state actors aim at gathering intelligence on strategic Intellectual Property.

Most of the campaigns that will be uncovered by security firms will be carried out by APT groups linked to Russia, China, Iran, and North Korea. This is just the tip of the iceberg, because the level of sophistication of these campaigns will allow them to avoid detection for long periods, with dramatic consequences.

Nation-state actors will be also involved in long-running disinformation campaigns aimed at destabilizing the politics of other states.

#6 IoT industry under attack

The rapid evolution of the internet-of-things (IoT) industry and the implementation of 5G networks will push businesses to become ever more reliant on IoT technology.

The bad news is that a large number of smart devices fail to implement security by design, and most are poorly configured, exposing organizations and individuals to the risk of being hacked.

Threat actors will develop new malware to target IoT devices that could be abused in multi-purpose malicious campaigns. Ransomware operators will also focus their efforts on the development of specific malware variants to target these systems.

IoT ransomware is designed to take over connected systems and force them to work incorrectly (i.e., changing the level of chemical elements in production processes or manipulating the level of medicine delivered by an insulin pump), forcing victims to pay the ransom in order to restore ordinary operations.

#7 Data breaches will continue to flood cybercrime underground market

Thousands of data breaches will be disclosed in 2021 by organizations worldwide..[…] Read more »….

 

Beyond standard risk feeds: Adopting a more holistic API solution

In July 2020, the gaming company Nintendo was compromised in a data breach that commentators described as unprecedented.

The breach, dubbed “the gigaleak,” exposed internal emails and identifying information, as well as a deluge of proprietary source code and other internal documents.  But the compromise wasn’t discovered by observing network traffic or even dark web analysis — it was first identified through a post on 4chan.

Less-regulated online spaces like imageboards, messaging apps, decentralized platforms, and other obscure sites are increasingly relevant for detecting these types of corporate security compromises. Serious threats can be easily missed if security teams aren’t looking beyond standard digital risk sources like technical and dark web data feeds.

Overlooked risks can cost companies millions in financial and reputational damage — but existing commercial threat intelligence solutions often lack data coverage, especially from these alternative web spaces.

How does this impact corporate security operations, and how can data coverage gaps be addressed?

An evolving corporate risk landscape

Security risk detection is no longer limited to highly anonymized online spaces like the dark web or technical feeds like network traffic data.

While these sources remain crucial, corporate security teams also need to assess obscure social sites, forums, and imageboards, messaging apps, decentralized platforms, and paste sites. These spaces are frequently used to circulate leaked data, as with the Nintendo breach, and discuss or advertise hacking tactics like malware and phishing.

Example of leaked data on RaidForums, a popular hacking website on the deep web—posted/discovered by Echosec Systems

Beyond malware and breach detection, these sources can indicate internal threats, fraud, theft, disinformation, brand impersonation, potentially damaging viral content, and other threats implicating a company or industry.

The rise of hacktivism and extremism on less-regulated networks also poses an increased risk to companies and executives. For example, disinformation or violence targeting high-profile personnel may be discussed and planned on these sites.

Why are these alternative sources becoming more relevant for threat detection?

To start, surface and deep web networks are more accessible for threat actors even though the dark web may offer more anonymity. They also have further reach than the dark web — a relatively small and isolated webspace — if the goal is to spread disinformation and leaked data.

Obfuscation tactics in text-based content are also becoming more sophisticated. For example, special characters (e.g. !4$@), intentional typos, code language, or acronyms can be used to hide targeted threats and company names. Adversaries are often less concerned with detection on surface and deep websites using these techniques.
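A rough sketch of how such obfuscation can be partially unwound before matching against a watchlist follows: normalize common character substitutions and strip the remaining special characters. The substitution table and watchlist terms are illustrative assumptions only.

```python
# Rough sketch of de-obfuscating text before watchlist matching, since
# substitutions like "!4$@" or deliberate typos can hide company names.
# The substitution table and watchlist terms are illustrative only.

import re

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s", "!": "i"})

WATCHLIST = {"nintendo", "acme corp"}   # hypothetical monitored names

def normalize(text: str) -> str:
    text = text.lower().translate(LEET_MAP)
    text = re.sub(r"[^a-z0-9 ]", "", text)   # drop remaining special characters
    return re.sub(r"\s+", " ", text).strip()

def watchlist_hits(text: str) -> set[str]:
    cleaned = normalize(text)
    return {term for term in WATCHLIST if term in cleaned}

if __name__ == "__main__":
    post = "huge leak from n1nt3nd0 dropping tonight"
    print(watchlist_hits(post))   # {'nintendo'}
```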

Decentralization is also becoming a popular hosting method for threat actors concerned with censorship on mainstream networks and takedowns on the dark web. Decentralization means that content or social media platforms are hosted on multiple global or user-operated servers so that networks are theoretically impossible to dismantle.

 

CEO-targeted death threat on the decentralized social network Mastodon — discovered by Echosec Systems

 

While the dark web was once considered a mecca for detecting security threats, these factors are extending relevant intelligence sources to a wider range of alternative sites.

New barriers to threat detection

Emerging online spaces offer valuable security data, but the changing threat landscape is posing new challenges for corporate security. Many alternative threat intelligence sources are obscure enough that analysts may not know they exist or to look there for threats. Some surface and deep websites, like forums and imageboards, emerge and turn over quickly, making it hard to keep track of what’s currently relevant.

Additionally, many commercial, off-the-shelf APIs provide access to technical security feeds and common sources like the dark web and mainstream social media — but do not offer this alternative data. This creates a functional gap for security teams who realize the value of obscure online sources but may be forced to navigate them manually.

APIs enable security teams to funnel data from online sources directly into their security tooling and interfaces rather than collecting data through manual searches on-site.
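The sketch below shows what that funneling pattern can look like in practice: poll an endpoint, then hand matches off to downstream tooling. The endpoint URL, query parameters and response fields are all hypothetical placeholders; a real vendor API will differ.

```python
# Hedged sketch of pulling threat data through an API and handing it to
# downstream tooling. The endpoint, parameters and response shape are
# placeholders, not any specific vendor's API.

import requests

API_URL = "https://api.example-threatintel.com/v1/posts"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                    # placeholder credential

def fetch_mentions(query: str) -> list[dict]:
    resp = requests.get(
        API_URL,
        params={"q": query, "limit": 50},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])   # assumed response shape

def forward_to_siem(record: dict) -> None:
    # Stand-in for pushing the record into an alerting or ticketing system.
    print(f"ALERT: {record.get('source')} - {record.get('title')}")

if __name__ == "__main__":
    for post in fetch_mentions("acme corp leak"):
        forward_to_siem(post)
```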

 

Leaked image of a security operations Centre on social media — discovered by Echosec Systems

 

For most corporate security teams and operations centers, manual data gathering — which often requires creating dummy accounts — is unsustainable, requiring a significant amount of time and resources.

Efficient threat intelligence access is essential in an industry where security teams are often understaffed and overwhelmed by alerts. According to a recent survey by Forrester Consulting, the average security operations team sees 11,000 daily alerts but only has the resources to address 72% of them.

Putting aside the issue of niche data access, industry research suggests that commercial threat intelligence vendors vary widely in their data coverage — overlapping 4% at most even when tracking the same specific threat groups. This raises concerns about how many critical alerts are missed by security teams and operations centers — and how holistic their data coverage actually is, even when using more than one vendor.

Holistic APIs: The future of addressing corporate risk

How do security professionals and operations centers comprehensively access relevant data and accelerate analysis and triage? To address these issues, security teams must rethink their API coverage.

This means adopting commercial threat intelligence solutions that are transparent about their data coverage. Vendors must be able to offer a wider variety of standard and alternative threat sources than is commonly available through off-the-shelf APIs. To achieve this, vendors often must source data in unique ways — such as developing proprietary web crawlers to sit in less-regulated chat applications and forums.

When standard threat intelligence sources are combined with fringe online data in an API, analysts can do their jobs faster than they could by merging conventional feeds with manual navigation. They also get more contextual value within their tooling than they would by viewing different sources separately. It also means that previously overlooked risks on obscure sites are included in a more holistic security strategy.

An API also retains content that has been deleted on the original site since being crawled, allowing for more thorough investigations than those possible with manual searches. This is important on more obscure networks like 4chan where content turns over quickly.

 


 

When collected and catalogued appropriately, a wider variety of online data can be used to train effective machine learning models. These can support faster and more accurate threat detection for overwhelmed security teams. In fact, some emerging APIs have machine learning functionality already built-in so analysts can narrow in on relevant data faster.
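As a toy illustration of that idea, the sketch below trains a TF-IDF plus logistic-regression classifier (scikit-learn) to triage posts as threat-related or benign. The handful of labelled examples only shows the shape of the pipeline; a usable model needs far more, and far better, data.

```python
# Toy sketch of a text classifier for triaging posts as threat-related or
# benign. The labelled examples are illustrative only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "selling vpn access to a fortune 500 network",
    "full database dump of customer emails for sale",
    "anyone have tips for growing tomatoes indoors",
    "what is the best budget laptop for students",
]
labels = [1, 1, 0, 0]   # 1 = potential threat, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_posts = ["corporate vpn credentials dump, dm for price"]
print(model.predict(new_posts))          # predicted class for each new post
print(model.predict_proba(new_posts))    # class probabilities for triage ranking
```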

As alert volumes grow and threat actors migrate to a greater variety of online spaces, security professionals are likely to become more concerned with their data coverage — and how to integrate alternative data sources effectively into workflows…[…] Read more »….

 

 

The Inevitable Rise of Intelligence in the Edge Ecosystem

A new frontier is taking shape where smart, autonomous devices running data on 5G networks process information that can lead to near real-time insights enterprises need.

The implementation and adoption of 5G wireless, the cloud, and smarter devices is setting the stage for advanced capabilities to emerge at the edge, according to experts and stakeholders. Communications providers such as Verizon continue to flesh out the newest generation of wireless, which promises to offer more robust data capacity and mobile solutions. In brief, the edge has the potential to be a place where greater data processing and analytics happens with near real-time speed, even in seemingly small devices. On the hardware and services side, IBM, Nokia Enterprise, DXC Technology, and Intel all see potential for these converging resources to evolve the edge in 2021 in exponential ways — if all the right pieces fall into place.

The edge is poised to support highly responsive compute, far from core data centers, but Bob Gill, research vice president with Gartner, says the landscape needs to become more cohesive.  “As long as all we have are vertical, monolithic, bespoke stacks, edge isn’t going to scale,” he says, referring to the differing resources created to work at the edge that might not mesh well with other solutions.

Gill defines the edge as the place where the physical and digital worlds interact, which can include sensors and industrial machine controllers. He says it is a form of distributed computing with assets placed in locations that can optimize latency and bandwidth. Retailers, internet of things, and the industrial world have already been working at the edge for more than a decade, Gill says. The current activity at the edge may introduce the world to even more possibilities. “What’s changed is the huge plethora of services from the cloud along with the rising intelligence and number of devices at the edge,” he says. “The edge completes the cloud.”

The focus of the evolution at the edge is to push intelligence to locations where bandwidth, data latency, and autonomy might otherwise be concerns when connecting to the cloud or core computing. With more autonomy, Gill says devices at the edge will be able to operate even if their connections are down.

This might include robots in manufacturing or automated resources in warehousing and logistics, as well as transportation, oil, and gas. Organizations will need some normalization of platforms and solutions at the edge, he says, in order to see the full benefit of such resources. “They’re looking for standardized toolsets and a way that everything isn’t a bespoke one-off,” Gill says.  This could include using open source frameworks deployed to create solutions that can be tweaked.

Gill expects there to be a move toward a standardized approach in the next five years. He says enterprise leadership should ask questions about ways the edge can help the organization achieve goals while also eliminating risk. “The c-suite should be saying, ‘What is the business benefit I’m getting out of this? Is it something that’s replicable?’”

Edge mimics public cloud

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities regarding compute, the network, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on demand compute.

The population of devices and scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.

Part of that acceleration came from medical facilities, logistics, and distribution, where the need is great for such implementations. Loucks sees investment and opportunities for companies to move quickly at the edge with such resources as professional services robots that work alongside people. Such robots need fast, low-latency connections made possible through 5G and have edge AI chips to assist with computer vision, letting them “see” their environment, he says.

Loucks says there are an estimated 650 million edge AI chips in the wild this year with that number expected to scale up fast. “We are predicting [there will be] around 1.6 billion edge AI chips by 2024 as the chips get smaller with lower power consumption,” he says.

The COVID accelerator

World events have played a part in advancing the resources and capabilities at the edge, says Paul Silverglate, vice chairman and Deloitte’s US technology sector leader. “COVID has been an accelerator and a challenge as it relates to computing at the edge,” he says. Remote working, digital transformation, and cloud migration have all been pushed faster than expected in response to the repercussions of the pandemic. “We’ve gone 10s of years into the future,” Silverglate says.

That future may already be happening as Verizon sees the components of the edge coming together, says director of IoT and real-time enterprise Thierry Sender. “From a Verizon standpoint, we now have partners for enabling edge deeply integrated into our 5G network and wireless overall,” he says, “which means 4G devices get the benefit of the capabilities.” For example, Sender says for private infrastructure, Verizon has a relationship with Microsoft to deliver on compute resources that support mission critical applications large enterprises would have in warehouses or manufacturing. That ties together different bespoke solutions that enterprises use together to solve their needs.

The edge elements coming together in 2020 are building blocks for exponential change, Sender says. “2021 is the year of transformation,” he says. “That’s where a lot of the solutions will begin to truly manifest themselves.” Sender also says 2022 will be a year of disruption as industries adapt to real-time operational and customer insights that affect their businesses. “Every industry is being impacted with this edge integration to network,” Sender says.

This transformative move is well under way, says Evaristus Mainsah, general manager of the IBM Cloud private ecosystem. “What we’re seeing is lots of data moving out to edge locations.” That is thanks to more devices carrying enough compute to conduct analytics, he says, reducing the need to move data to a data center or to the cloud to process. By 2023, he expects, 50% of new on-prem infrastructure will be in edge locations, compared with 10% now. Enterprise data processing outside of central data centers will also grow from 10% now to 75% in 2025, Mainsah says. “Think of it as a movement of data from traditional data center or cloud locations out into edges.”

There is a generation shift taking place, says Karl Bream, head of strategy for Nokia’s enterprise business, which will take some time and see more agility, automation, and efficiency. “The network is becoming higher capacity, much more reliable, much lower latency, and can perform better in situations where you’re controlling high value assets,” he says. Bream calls this an inflection point, though networks alone cannot achieve the next evolution. Data privacy and security remain concerns, he says, as many enterprises must decide if they can allow data to reside offsite.

Tradeoffs and choices

There are tradeoffs and choices to be made, but possibilities are growing fast at the edge. “We’re seeing web companies putting edge type scenarios into place to put storage closer and closer to the device,” Bream says..[…] Read more »…..

 

Social Engineering: Life Blood of Data Exploitation (Phishing)

What do Jeffrey Dahmer, Ted Bundy, John Wayne Gacy, Dennis Rader, and Frank Abagnale all have in common, aside from the obvious fact that they are all criminals? They are also all master manipulators who utilize the art of social engineering to outwit their unsuspecting victims into providing them with the object or objects that they desire. They appear as angels of light but are no more than ravenous wolves in sheep’s clothing. There are six components of an information system: Humans, Hardware, Software, Data, Network Communication, and Policies, with the human being the weakest link of the six.

By Zachery S. Mitcham, MSA, CCISO, CSIH, VP and Chief Information Security Officer, SURGE Professional Services-Group
Social engineering is the art of utilizing deception to manipulate a subject into providing the manipulator with the object or objects they are seeking to obtain. Pretexting is often used in order to present a false perception of credibility via sources universally known to be valid. It is a dangerous combination to be gullible and greedy. Social engineers prey on the gullible and greedy, using the full range of human emotions to exploit their weaknesses via various scams, the most popular of which is phishing. They have the uncanny ability to influence their victims to comply with their demands.

Phishing is an age-old process of scamming a victim out of something by utilizing bait that appears to be legitimate. Prior to the age of computing, phishing was conducted mainly through chain mail but has evolved over the years in cyberspace via electronic mail. One of the most popular phishing scams is the Nigerian 419 scam, which is named after the Nigerian criminal code that addresses the crime.

Information security professionals normally eliminate the idea of social norms when investigating cybercrime.  Otherwise, you will be led into morose mole tunnels going nowhere. They understand that the social engineering cybercriminal capitalizes on unsuspecting targets of opportunity. Implicit biases can lead to the demise of the possessor. Human behavior can work to your disadvantage if left unchecked. You profile one while unwittingly becoming a victim of the transgressions of another. These inherent and natural tendencies can lead to breaches of security. The most successful cybersecurity investigators have a thorough understanding of the sophisticated criminal mind.

Victims of social engineering often feel sad and embarrassed. They are reluctant to report the crime depending on its magnitude. And then the CISO comes to the rescue! In order to get to the root cause and determine the damage caused to the enterprise, the CISO must put the victim at ease by letting them know that they are not alone in their unwitting entanglement.

These are some tips that can assist you with an anti-social engineering strategy for your enterprise: employ sociological education tools by developing a comprehensive Information Security Awareness and Training program addressing all six basic components that make up the information system. The majority of security threats that exist on the network are a direct result of insider threats caused by humans, whether unintentional or deliberate. The most effective way an organization can mitigate the damage caused by insider threats is to develop an effective security awareness and training program that is ongoing and mandatory.

Deploy enterprise technological tools that protect your human capital against themselves.

Digital Rights Management (DRM) and Data Loss Prevention (DLP) serve as effective defensive tools that protect enterprise data from exfiltration in the event that it falls into the wrong hands...[…] Read more »…..

This article first appeared in CISO MAG (www.cisomag.com).