3 key reasons why SOCs should implement policies over security standards

In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.

How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.

Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and three ways they can be dramatically improved with new approaches and tooling.

How Security Standards Look Today

Security standards affect all aspects of a business, from directly shaping development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.

There are many challenges that come with this approach, all rooted in 3 main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the tendency of security standards to promote reactive rather than proactive issue handling.

We can greatly improve the security landscape by directly addressing these key issues:

1. Information Security and Compliance is Siloed

In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.

Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.

If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.

2. Security Standards are Too Generic

Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.

With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.

Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.

3. Current Practices are Reactive, Rather Than Proactive

The existing gap between dev and security teams along with the general nature of security standards, prevent organizations from being truly proactive when it comes to security measures.

Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.

The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.

Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.

DevSecOps – A New Way Forward

The emergence of DevSecOps suggests that generic top-to-bottom security standards may soon be less important than they are now.

First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.

In outlining the rise and success of DevSecOps, a recent article gave three defining criteria of a true DevSecOps environment:

  1. Developers are in charge of security testing.
  2. Security experts act as consultants to developers when additional knowledge is required.
  3. Fixing security issues is managed by the development team.
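
In practice, the first criterion might look like developers writing security checks alongside their ordinary unit tests. A minimal sketch in Python (the header names are standard HTTP security headers, but the required set here is a hypothetical team policy):

```python
# A developer-owned security test: verify that an HTTP response carries
# the security headers the team has agreed on.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers):
    """Return the required security headers absent from a response."""
    # Normalize casing so "x-content-type-options" matches the policy entry.
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_HEADERS if h not in present}

# A response missing its CSP header fails the check:
headers = {
    "strict-transport-security": "max-age=63072000",
    "x-content-type-options": "nosniff",
}
print(sorted(missing_security_headers(headers)))  # ['Content-Security-Policy']
```

A check like this lives in the test suite developers already own, so a missing header fails the build long before a compliance review would catch it.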




Making CI/CD Work for DevOps Teams

Many DevOps teams are advancing to CI/CD, some more gracefully than others. Recognizing common pitfalls and following best practices helps.

Agile, DevOps and CI/CD have all been driven by the competitive need to deliver value faster to customers. Each advancement requires some changes to processes, tools, technology and culture, although not all teams approach the shift holistically. Some focus on tools hoping to drive process changes when process changes and goals should drive tool selection. More fundamentally, teams need to adopt an increasingly inclusive mindset that overcomes traditional organizational barriers and tech-related silos so the DevOps team can achieve an automated end-to-end CI/CD pipeline.

Most organizations begin with Agile and advance to DevOps. The next step is usually CI, followed by CD, but the journey doesn’t end there because bottlenecks such as testing and security eventually become obvious.

At benefits experience platform provider HealthJoy, the DevOps team sat between Dev and Ops, maintaining a separation between the two. The DevOps team accepted builds from developers in the form of Docker images via Docker Hub. They also automated downstream Ops tasks in the CI/CD pipeline, such as deploying the software builds in AWS.

Sajal Dam, HealthJoy


“Although it’s a good approach for adopting CI/CD, it misses the fact that the objective of a DevOps team is to break the barriers between Dev and Ops by collaborating with the rest of software engineering across the whole value stream of the CI/CD pipeline, not just automating Ops tasks,” said Sajal Dam, VP of engineering at HealthJoy.

Following are a few of the common challenges and advice for dealing with them.


People are naturally change resistant, but change is a constant when it comes to software development and delivery tools and processes.

“I’ve found the best path is to first work with a team that is excited about the change or new technology and who has the time and opportunity to redo their tooling,” said Eric Johnson, EVP of Engineering at DevOps platform provider GitLab. “Next, use their success [such as] lower cost, higher output, better quality, etc. as an example to convert the bulk of the remaining teams when it’s convenient for them to make a switch.”

Eric Johnson, GitLab


The most fundamental people-related issue is having a culture that enables CI/CD success.
“The success of CI/CD [at] HealthJoy depends on cultivating a culture where CI/CD is not just a collection of tools and technologies for DevOps engineers but a set of principles and practices that are fully embraced by everyone in engineering to continually improve delivery throughput and operational stability,” said HealthJoy’s Dam.

At HealthJoy, the integration of CI/CD throughout the SDLC requires the rest of engineering to closely collaborate with DevOps engineers to continually transform the build, testing, deployment and monitoring activities into a repeatable set of CI/CD process steps. For example, they’ve shifted quality controls left and automated the process using DevOps principles, practices and tools.

Component provider Infragistics changed its hiring approach. Specifically, instead of hiring experts in one area, the company now looks for people with skill sets that meld well with the team.

“All of a sudden, you’ve got HR involved and marketing involved because if we don’t include marketing in every aspect of software delivery, how are they going to know what to market?” said Jason Beres, SVP of developer tools at Infragistics. “In a DevOps team, you need a director, managers, product owners, team leads and team building where it may not have been before. We also have a budget to ensure we’re training people correctly and that people are moving ahead in their careers.”


Jason Beres, Infragistics



Effective leadership is important.

“[A]s the head of engineering, I need to play a key role in cultivating and nurturing the DevOps culture across the engineering team,” said HealthJoy’s Dam. “[O]ne of my key responsibilities is to coach and support people from all engineering divisions to continually benefit from DevOps principles and practices for an end-to-end, automated CI/CD pipeline.”


Processes should be refined as necessary, accelerated through automation and continuously monitored so they can be improved over time.

“When problems or errors arise and need to be sent back to the developer, it becomes difficult to troubleshoot because the code isn’t fresh in their mind. They have to stop working on their current project and go back to the previous code to troubleshoot,” said Gitlab’s Johnson. “In addition to wasting time and money, this is demoralizing for the developer who isn’t seeing the fruit of their labor.”

Johnson also said teams should start their transition by identifying bottlenecks and common failures in their pipelines. The easiest indicators of pipeline inefficiency are the runtimes of individual jobs and stages, along with the total runtime of the pipeline itself. To avoid slowdowns or frequent failures, teams should look for problematic patterns in failed jobs.
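
That kind of bottleneck hunting can be approximated with a few lines of analysis over whatever job metadata a CI system exposes. A sketch, assuming jobs arrive as dicts with stage, duration, and status fields (actual field names vary by platform):

```python
from collections import defaultdict

def slowest_stages(jobs):
    """Aggregate job runtimes (seconds) by stage and rank stages slowest-first."""
    totals = defaultdict(float)
    failures = defaultdict(int)
    for job in jobs:
        totals[job["stage"]] += job.get("duration") or 0.0
        if job["status"] == "failed":
            failures[job["stage"]] += 1
    return sorted(
        ((stage, totals[stage], failures[stage]) for stage in totals),
        key=lambda row: row[1],
        reverse=True,
    )

# Sample data standing in for a CI API response:
jobs = [
    {"stage": "build", "name": "compile", "duration": 120.0, "status": "success"},
    {"stage": "test", "name": "unit", "duration": 340.0, "status": "success"},
    {"stage": "test", "name": "integration", "duration": 610.0, "status": "failed"},
    {"stage": "deploy", "name": "staging", "duration": 95.0, "status": "success"},
]
for stage, seconds, failed in slowest_stages(jobs):
    print(f"{stage}: {seconds:.0f}s total, {failed} failed job(s)")
```

Here the test stage dominates total runtime and also accounts for the failures, which is exactly the kind of signal that tells a team where to focus first.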

At HealthJoy, the developers and architects have started explicitly identifying and planning for software design best practices that will continually increase the frequency, quality and security of deployments. To achieve that, engineering team members have started collaborating across the engineering divisions horizontally.

“One of the biggest barriers to changing processes outside of people and politics is the lack of tools that support modern processes,” said Stephen Magill, CEO of continuous assurance platform provider MuseDev. “To be most effective, teams need to address people, processes and technology together as part of their transformations.”


Different teams have different favorite tools, which can be a barrier to a standardized pipeline. Unlike a patchwork of tools, a standardized pipeline can provide end-to-end visibility and ensure consistent processes throughout the SDLC with automation.

“Age and diversity of existing tools slow down migration to newer and more standardized technologies. For example, large organizations often have ancient SVN servers scattered about and integration tools are often cobbled together and fragile,” said MuseDev’s Magill. “Many third-party tools pre-date the DevOps movement and so are not easily integrated into a modern Agile development workflow.”

Integration is critical to the health and capabilities of the pipeline and necessary to achieve pipeline automation.

Stephen Magill, MuseDev


“The most important thing to automate, which is often overlooked, is automating and streamlining the process of getting results to developers without interrupting their workflow,” said MuseDev’s Magill. “For example, when static code analysis is automated, it usually runs in a manner that reports results to security teams or logs results in an issue tracker. Triaging these issues becomes a labor-intensive process and results become decoupled from the code change that introduced them.”

Instead, such results should be reported directly to developers as part of code review since developers can easily fix issues at that point in the development process. Moreover, they can do so without involving other parties, although Magill underscored the need for developers, QA, and security to mutually have input into which analysis tools are integrated into the development process.

GitLab’s Johnson said the upfront investment in automation should be a default decision and that the developer experience must be good enough for developers to rely on the automation.

“I’d advise adding things like unit tests, necessary integration tests, and sufficient monitoring to your ‘definition of done’ so no feature, service or application is launched without the fundamentals needed to drive efficient CI/CD,” said Johnson. “If you’re running a monorepo and/or microservices, you’re going to need some logic to determine what integration tests you need to run at the right times. You don’t want to spin up and run every integration test you have in unaffected services just because you changed one line of code.”

At Infragistics, the lack of a standard communication mechanism became an issue. About five years ago, the company had a mix of Yammer, Slack and AOL Instant Messenger.

“I don’t want silos. It took a good 12 months or more to get people weaned off those tools and on to one tool, but five years later everyone is using [Microsoft] Teams,” said Infragistics’ Beres. “When everyone is standardized on a tool like that the conversation is very fluid.”

HealthJoy encourages its engineers to stay on top of the latest software principles, technologies and practices for a CI/CD pipeline, which includes experimenting with new CI/CD tools. They’re also empowered to affect grassroots transformation through POCs and share knowledge of the CI/CD pipeline and advancements through collaborative experimentation, internal knowledge bases, and tech talks.

In fact, the architects, developers and QA team members have started collaborating across the engineering divisions to continually plan and improve the build, test, deploy, and monitoring activities as integral parts of product delivery. And the DevOps engineers have started collaborating in the SDLC and using tools and technologies that allow developers to deliver and support products without the barrier the company once had between developers and operations.


“All sectors can benefit from a simulated targeted attack”


On the surface, a Red Team exercise looks like a scene straight out of a Hollywood movie. Spies masquerading as employees walk straight into the office so naturally that no one bats an eye. They plug things into your devices that are not supposed to be there, tap cameras, telephones, and microphones, send out emails, or even walk around with a banana so you assume the new guy/girl didn’t have time to grab a proper lunch. By the time you figure out they weren’t supposed to be where they were, it’s already too late. And the only sigh of relief is the fact that they were on your side — and they were working for you.

So, before your company humors itself with a Red Team assessment it might be of use that you talk to an expert about it. And for that, we have Tom Van de Wiele, Principal Security Consultant at F-Secure. With nearly 20 years of experience in information security, Tom specializes in red team operations and targeted penetration testing for the financial, gaming, and service industry. When not breaking into banks Tom acts as an adviser on topics such as critical infrastructure and IoT as well as incident response and cybercrime. With a team that has a 100% success rate in overcoming the combination of targeted organizations’ physical and cybersecurity defenses to end up in places they should never be, Tom is possibly one of the best red team experts in the world. In an exclusive interview with Augustin Kurian of CISO MAG, Tom discusses key questions a company should ask before it engages in a Red Team assessment.

It is often said that Red Teaming is much better than regular penetration testing. What are your thoughts about it?

Red teaming, penetration testing, source code review, vulnerability scanning, and other facets of testing play a key part in trying to establish the level of control and maturity of an organization. They all have different purposes, strengths, and limitations. A penetration test is usually limited and focused only on a certain aspect of the business, e.g. a certain network, application, building, or IT asset; a red team test is based on the attacker’s choice and discretion on what to target and when, keeping in mind the actual objectives and goals of what the client wants to have simulated. That means anything with the company logo on it could be in scope for the test — keeping in mind ethics, local and international laws, and good taste.

In general, Red Team Testing is only for organizations that have already established a certain maturity and resilience when it comes to opportunistic and targeted attacks. This resilience can be expressed in many ways, hence we want to make sure that we are performing it at the right time and place for our clients, to ensure they get value out of it. The goals are three-fold: to increase the detection capabilities of the organization tailored towards relevant attack scenarios, to ensure that certain attack scenarios become impossible, and to shorten the response and containment time so that a future attack can be dealt with swiftly and with limited impact. Ultimately, all efforts should be focused on an “assume breach” mentality while increasing the cost of attack for a would-be attacker.

Knowing that red teaming and target-based attack simulations are at the proverbial finish line for an organization, it is still beneficial to have a red team as an end-goal as part of a real simulation. It forces organizations to look at their own security from a threat-based approach, rather than a risk-based approach, where the past defines the future for the most part. For instance, just because you haven’t been hit by ransomware in the past doesn’t mean you won’t get impacted by one in the future. “Forcing” organizations to look at their own structure and how they handle their daily operations and business continuity as part of threat modeling sometimes brings surprising results, positive or negative. But at the end of the day, everyone is better off knowing the risks of certain aspects of the business, so that an organization can make better business decisions, for better or for worse, while it structures a plan for handling whatever is causing concern to stakeholders.

When should a company realize that it is an apt time to hold a Red Team assessment? What kinds of industries should invest in Red Teaming? If so, how frequent should the Red Teaming assessment be? Should it be a yearly process, half-yearly, quarterly, or a continuous one? How often do you do one for your clients?

All sectors can benefit from a simulated targeted attack to test the sum of their security controls, as all business sectors have something to protect or care about, be it customer data, credibility, funds, intellectual property, disruption scenarios, industrial espionage, etc. What kind of testing and how frequently depends on the maturity of the organization, its size, and how much they regard information security as a key part of their organization, rather than a costly afterthought, which unfortunately is still the case for a lot of organizations.

Major financial institutions will usually schedule a red team engagement every 1 – 1.5 years or so.

In between those, a number of other initiatives are held on a periodical basis in order to keep track of the current attack surface and the current threat landscape, as well as trying to understand where the business is going versus what technology, processes, and training are required to ensure risk can be kept at an acceptable level. As part of an organization’s own due diligence, it needs to ensure that networks and applications receive different levels of scrutiny using a combination of preventive and reactive efforts, e.g. architecture reviews, threat modeling, vulnerability scanning, source code review, and attack path mapping, just to name a few.

This article first appeared in CISO MAG.

<Link to CISO MAG site: www.cisomag.com>

Cybersecurity Predictions For 2021

Here we are again for the annual predictions of the trends and events that will impact the cybersecurity landscape in 2021. Let’s try to predict which threats and bad actors will shape the landscape in the next 12 months. I’ve put together a list of the seven top cybersecurity trends that you should be aware of.

#1 Ransomware attacks on the rise

In the past months we have observed an unprecedented surge of ransomware attacks that hit major businesses and organizations across the world. The number of attacks will continue to increase in 2021, with threat actors using prominent botnets like Trickbot to deliver their ransomware. Security experts will also observe a dramatic increase in human-operated attacks, in which threat actors exploit known vulnerabilities in targeted systems in order to gain access to the target networks. Once they have gained access to these networks, operators will manually deploy the ransomware. School districts and municipalities will be privileged targets of cybercriminal organizations because they have limited resources and poor cyber hygiene.

In the first quarter of 2021, a growing number of organizations will continue to allow their employees to remotely access their resources in response to the ongoing COVID-19 pandemic, thus enlarging their attack surface.

Most of the human-operated attacks will be targeted; ransomware operators will carefully choose their victims in order to maximize their efforts.

The ransomware-as-a-service model will allow networks of affiliates to arrange their own campaigns, hitting end users and SMEs worldwide.

#2 The return of cyber attacks on cryptocurrency industry

The number of cyber-attacks against organizations and businesses in the cryptocurrency industry will surge again in the first months of 2021 due to a new increase in the value of currencies such as Bitcoin.

Cryptocurrency exchanges and platforms will be targeted by both cybercrime organizations and nation-state actors attempting to monetize their efforts.

If the values of the major cryptocurrencies increase, we will observe new malware specifically designed to steal cryptocurrencies from victims’ wallets, along with new phishing campaigns targeting users of cryptocurrency platforms.

#3 Crimeware-as-a-service even more efficient

In the Crimeware-as-a-Service (CaaS) model, cybercriminals offer their advanced tools and services for sale or rent to other, less skilled criminals. CaaS is having a significant effect on the threat landscape because it lowers the bar for inexperienced threat actors to launch sophisticated cyber attacks.

The CaaS model will continue to enable both technically inexperienced criminals and APT groups to rapidly arrange sophisticated attacks. The most profitable services that will be offered using this model in 2021 are ransomware and malware attacks.

CaaS allows advanced threat actors to rapidly arrange hit-and-run operations and makes their attribution difficult. In 2021, major botnet operations, such as Emotet and Trickbot, will continue to infect devices worldwide.

In the next months we will witness the growth of Remote Access Markets that allow attackers to exchange access credentials to compromised networks and services. These services expose organizations to a broad range of cyber threats, including malware, ransomware, and e-skimming.

#4 Cyberbullying, too many people suffer in silence

Words can cause more damage than weapons; we cannot underestimate this threat, and technology can exacerbate the danger. Cyberbullying refers to the practice of using technology to harass, or bully, someone else.

The term cyberbullying is an umbrella for different kinds of online abuse, some of which are rapidly increasing, such as doxing, cyberstalking, and revenge porn.

Authorities and media are approaching the problem with increasing interest, but evidently it is not enough.

This criminal practice represents one of the greatest dangers of the Internet, and it can have a devastating impact on teenagers.

In the upcoming months, the problem of cyberbullying will impact the online gaming community most of all, reaching worrisome levels.

#5 State-sponsored hacking, all against all

In 2021, cyber attacks carried out by state-sponsored hackers will cause significant damage to target organizations.

The number of targeted attacks against government organizations and critical infrastructure will increase, pushing states to promote a global dialog on the risks connected to these campaigns.

The healthcare and pharmaceutical sectors, as well as the academic and financial industries, will be under attack.

Nation-state actors aim at gathering intelligence on strategic intellectual property.

Most of the campaigns that will be uncovered by security firms will be carried out by APT groups linked to Russia, China, Iran, and North Korea. This is just the tip of the iceberg, because the level of sophistication of these campaigns will allow them to avoid detection for long periods, with dramatic consequences.

Nation-state actors will also be involved in long-running disinformation campaigns aimed at destabilizing the politics of other states.

#6 IoT industry under attack

The rapid evolution of the internet-of-things (IoT) industry and the implementation of 5G networks will push businesses to become ever more reliant on IoT technology.

The bad news is that a large number of smart devices fail to implement security by design, and most deployments are poorly configured, exposing organizations and individuals to the risk of being hacked.

Threat actors will develop new malware to target IoT devices that could be abused in multi-purpose malicious campaigns. Ransomware operators will also focus their efforts on the development of specific malware variants to target these systems.

IoT ransomware is designed to take over connected systems and force them to work incorrectly (i.e., changing the level of chemical elements in production processes or manipulating the dose of medicine in an insulin pump), forcing victims into paying the ransom in order to restore ordinary operations.

#7 Data breaches will continue to flood cybercrime underground market

Thousands of data breaches will be disclosed in 2021 by organizations worldwide.


Beyond standard risk feeds: Adopting a more holistic API solution

In July 2020, the gaming company Nintendo was compromised in a data breach that commentators described as unprecedented.

The breach, dubbed “the gigaleak,” exposed internal emails and identifying information, as well as a deluge of proprietary source code and other internal documents.  But the compromise wasn’t discovered by observing network traffic or even dark web analysis — it was first identified through a post on 4chan.

Less-regulated online spaces like imageboards, messaging apps, decentralized platforms, and other obscure sites are increasingly relevant for detecting these types of corporate security compromises. Serious threats can be easily missed if security teams aren’t looking beyond standard digital risk sources like technical and dark web data feeds.

Overlooked risks can cost companies millions in financial and reputational damage — but existing commercial threat intelligence solutions often lack data coverage, especially from these alternative web spaces.

How does this impact corporate security operations, and how can data coverage gaps be addressed?

An evolving corporate risk landscape

Security risk detection is no longer limited to highly anonymized online spaces like the dark web or technical feeds like network traffic data.

While these sources remain crucial, corporate security teams also need to assess obscure social sites, forums, and imageboards, messaging apps, decentralized platforms, and paste sites. These spaces are frequently used to circulate leaked data, as with the Nintendo breach, and discuss or advertise hacking tactics like malware and phishing.

Example of leaked data on RaidForums, a popular hacking website on the deep web—posted/discovered by Echosec Systems

Beyond malware and breach detection, these sources can indicate internal threats, fraud, theft, disinformation, brand impersonation, potentially damaging viral content, and other threats implicating a company or industry.

The rise of hacktivism and extremism on less-regulated networks also poses an increased risk to companies and executives. For example, disinformation or violence targeting high-profile personnel may be discussed and planned on these sites.

Why are these alternative sources becoming more relevant for threat detection?

To start, surface and deep web networks are more accessible for threat actors even though the dark web may offer more anonymity. They also have further reach than the dark web — a relatively small and isolated webspace — if the goal is to spread disinformation and leaked data.

Obfuscation tactics in text-based content are also becoming more sophisticated. For example, special characters (e.g. !4$@), intentional typos, code language, or acronyms can be used to hide targeted threats and company names. Adversaries are often less concerned with detection on surface and deep websites using these techniques.
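
A crude illustration of why these tactics defeat naive keyword matching, and how a normalization pass can help. The substitution table and the company name "acmecorp" are hypothetical examples; production systems use far more robust techniques:

```python
# Map common leetspeak-style substitutions back to letters before
# matching watchlist terms.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "$": "s", "@": "a", "!": "i",
})

def normalize(text):
    """Lowercase the text and undo simple character substitutions."""
    return text.lower().translate(SUBSTITUTIONS)

def watchlist_hits(text, watchlist):
    """Return watchlist terms present in the normalized text."""
    cleaned = normalize(text)
    return [term for term in watchlist if term in cleaned]

# "acmecorp" never appears verbatim, but normalization exposes it:
post = "big l3@k of @cmeC0rp d4ta tonight"
print(watchlist_hits(post, ["acmecorp", "leak"]))  # ['acmecorp', 'leak']
```

Intentional typos, code language, and acronyms are much harder to undo than single-character swaps, which is one reason this kind of matching is only a first line of defense.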

Decentralization is also becoming a popular hosting method for threat actors concerned with censorship on mainstream networks and takedowns on the dark web. Decentralization means that content or social media platforms are hosted on multiple global or user-operated servers so that networks are theoretically impossible to dismantle.


CEO-targeted death threat on the decentralized social network Mastodon — discovered by Echosec Systems


While the dark web was once considered a mecca for detecting security threats, these factors are extending relevant intelligence sources to a wider range of alternative sites.

New barriers to threat detection

Emerging online spaces offer valuable security data, but the changing threat landscape is posing new challenges for corporate security. Many alternative threat intelligence sources are obscure enough that analysts may not know they exist or to look there for threats. Some surface and deep websites, like forums and imageboards, emerge and turn over quickly, making it hard to keep track of what’s currently relevant.

Additionally, many commercial, off-the-shelf APIs provide access to technical security feeds and common sources like the dark web and mainstream social media — but do not offer this alternative data. This creates a functional gap for security teams who realize the value of obscure online sources but may be forced to navigate them manually.

APIs enable security teams to funnel data from online sources directly into their security tooling and interfaces rather than collecting data through manual searches on-site.
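
Such a funnel is usually a thin normalization layer between a vendor feed and internal tooling. Every field and record below is a hypothetical placeholder; a real integration would follow the vendor's documented API and authentication:

```python
def to_alert(record):
    """Normalize one vendor record into the fields internal tooling expects."""
    return {
        "source": record.get("network", "unknown"),
        "summary": record.get("text", "")[:140],
        "severity": "high" if record.get("mentions_executive") else "medium",
    }

def ingest(records):
    """Convert a batch of raw feed records into alert-queue entries."""
    return [to_alert(r) for r in records]

# Sample records standing in for an API response:
sample = [
    {"network": "mastodon", "text": "threatening post...", "mentions_executive": True},
    {"network": "pastebin", "text": "credential dump..."},
]
for alert in ingest(sample):
    print(alert)
```

Because every source is mapped to the same alert shape, analysts triage dark web, paste site, and decentralized-network hits in one queue instead of juggling per-source interfaces.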


Leaked image of a security operations Centre on social media — discovered by Echosec Systems


For most corporate security teams and operations centers, manual data gathering — which often requires creating dummy accounts — is unsustainable, requiring a significant amount of time and resources.

Efficient threat intelligence access is essential in an industry where security teams are often understaffed and overwhelmed by alerts. According to a recent survey by Forrester Consulting, the average security operations team sees 11,000 daily alerts but only has the resources to address 72% of them.
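Put concretely, those survey figures imply a sizeable daily blind spot:

```python
daily_alerts = 11_000      # average alerts per day, per the Forrester survey above
addressed_share = 0.72     # share the team has resources to address
unaddressed = round(daily_alerts * (1 - addressed_share))
# roughly 3,080 alerts go uninvestigated every single day
```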

Putting aside the issue of niche data access, industry research suggests that commercial threat intelligence vendors vary widely in their data coverage — overlapping 4% at most even when tracking the same specific threat groups. This raises concerns about how many critical alerts are missed by security teams and operations centers — and how holistic their data coverage actually is, even when using more than one vendor.
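That kind of overlap claim can be made precise with a Jaccard-style comparison of two vendors' indicator feeds. The sketch below uses made-up indicators purely to show the calculation:

```python
def jaccard(a, b):
    """Overlap of two indicator sets as |A ∩ B| / |A ∪ B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical indicator feeds from two vendors tracking the same threat group
vendor_a = {"evil.example", "bad.example", "malware.example"}
vendor_b = {"evil.example", "phish.example", "c2.example", "drop.example"}

overlap = jaccard(vendor_a, vendor_b)
# One shared indicator out of six distinct ones, about 17% here;
# the research cited above found as little as 4% between real vendors.
```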

Holistic APIs: The future of addressing corporate risk

How do security professionals and operations centers comprehensively access relevant data and accelerate analysis and triage? To address these issues, security teams must rethink their API coverage.

This means adopting commercial threat intelligence solutions that are transparent about their data coverage. Vendors must be able to offer a wider variety of standard and alternative threat sources than is commonly available through off-the-shelf APIs. To achieve this, vendors often must source data in unique ways — such as developing proprietary web crawlers to monitor less-regulated chat applications and forums.

When standard threat intelligence sources are combined with fringe online data in an API, analysts can do their jobs faster than they could by merging conventional feeds with manual navigation. Analysts also get more contextual value within their tooling than they would by viewing different sources separately. It also means that previously overlooked risks on obscure sites are included in a more holistic security strategy.

An API also retains content that has been deleted on the original site since being crawled, allowing for more thorough investigations than those possible with manual searches. This is important on more obscure networks like 4chan where content turns over quickly.




When collected and catalogued appropriately, a wider variety of online data can be used to train effective machine learning models. These can support faster and more accurate threat detection for overwhelmed security teams. In fact, some emerging APIs have machine learning functionality already built-in so analysts can narrow in on relevant data faster.
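As a hedged sketch of that idea (a from-scratch toy, not any vendor's built-in model), a tiny multinomial Naive Bayes classifier over labeled posts shows how catalogued text data can support triage:

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count words per class for a tiny multinomial Naive Bayes."""
    counts = {"threat": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(model, text):
    counts, totals = model
    vocab = set(counts["threat"]) | set(counts["benign"])
    words = text.lower().split()
    best_label, best_score = None, float("-inf")
    for label in counts:
        # log prior + Laplace-smoothed log likelihood per word
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in words:
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A production model would use far richer features and training data; the point is only that consistently catalogued posts are what make such training possible at all.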

As alert volumes grow and threat actors migrate to a greater variety of online spaces, security professionals are likely to become more concerned with their data coverage — and how to integrate alternative data sources effectively into workflows… [Read more »]



The Inevitable Rise of Intelligence in the Edge Ecosystem

A new frontier is taking shape where smart, autonomous devices on 5G networks process data into the near real-time insights enterprises need.

The implementation and adoption of 5G wireless, the cloud, and smarter devices is setting the stage for advanced capabilities to emerge at the edge, according to experts and stakeholders. Communications providers such as Verizon continue to flesh out the newest generation of wireless, which promises to offer more robust data capacity and mobile solutions. In brief, the edge has the potential to be a place where greater data processing and analytics happens with near real-time speed, even in seemingly small devices. On the hardware and services side, IBM, Nokia Enterprise, DXC Technology, and Intel all see potential for these converging resources to evolve the edge in 2021 in exponential ways — if all the right pieces fall into place.

The edge is poised to support highly responsive compute, far from core data centers, but Bob Gill, research vice president with Gartner, says the landscape needs to become more cohesive.  “As long as all we have are vertical, monolithic, bespoke stacks, edge isn’t going to scale,” he says, referring to the differing resources created to work at the edge that might not mesh well with other solutions.

Gill defines the edge as the place where the physical and digital worlds interact, which can include sensors and industrial machine controllers. He says it is a form of distributed computing with assets placed in locations that can optimize latency and bandwidth. Retailers, internet of things, and the industrial world have already been working at the edge for more than a decade, Gill says. The current activity at the edge may introduce the world to even more possibilities. “What’s changed is the huge plethora of services from the cloud along with the rising intelligence and number of devices at the edge,” he says. “The edge completes the cloud.”

The focus of the evolution at the edge is to push intelligence to locations where bandwidth, data latency, and autonomy might otherwise be concerns when connecting to the cloud or core computing. With more autonomy, Gill says devices at the edge will be able to operate even if their connections are down.

This might include robots in manufacturing or automated resources in warehousing and logistics, as well as transportation, oil, and gas. Organizations will need some normalization of platforms and solutions at the edge, he says, in order to see the full benefit of such resources. “They’re looking for standardized toolsets and a way that everything isn’t a bespoke one-off,” Gill says.  This could include using open source frameworks deployed to create solutions that can be tweaked.

Gill expects there to be a move toward a standardized approach in the next five years. He says enterprise leadership should ask questions about ways the edge can help the organization achieve goals while also eliminating risk. “The c-suite should be saying, ‘What is the business benefit I’m getting out of this? Is it something that’s replicable?’”

Edge mimics public cloud

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities regarding compute, the network, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on demand compute.
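A minimal sketch of that edge-side reduction, with invented field names and a crude pseudonym standing in for real anonymization: raw readings are summarized locally, and only aggregates and anomalies move to the cloud.

```python
import statistics
import zlib

def summarize_window(device_id, readings, threshold=80.0):
    """Reduce a window of raw sensor readings to the small payload worth
    sending upstream: aggregates plus anomalies, under a device pseudonym.
    The raw samples and the device's precise identity stay at the edge."""
    return {
        "device": zlib.crc32(device_id.encode()) % 10_000,  # pseudonym, not the raw ID
        "count": len(readings),
        "mean": round(statistics.fmean(readings), 2),
        "anomalies": [r for r in readings if r > threshold],
    }
```

The design choice is the point: the full-resolution stream never leaves the device, which is exactly what avoids the latency and privacy costs of shipping everything to the cloud.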

The population of devices and scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.

Part of that acceleration came from medical facilities, logistics, and distribution, where the need for such implementations is great. Loucks sees investment and opportunities for companies to move quickly at the edge with such resources as professional services robots that work alongside people. Such robots need the fast, low-latency connections made possible through 5G and have edge AI chips to assist with computer vision, letting them “see” their environment, he says.

Loucks says there are an estimated 650 million edge AI chips in the wild this year with that number expected to scale up fast. “We are predicting [there will be] around 1.6 billion edge AI chips by 2024 as the chips get smaller with lower power consumption,” he says.

The COVID accelerator

World events have played a part in advancing the resources and capabilities at the edge, says Paul Silverglate, vice chairman and Deloitte’s US technology sector leader. “COVID has been an accelerator and a challenge as it relates to computing at the edge,” he says. Remote working, digital transformation, and cloud migration have all been pushed faster than expected in response to the repercussions of the pandemic. “We’ve gone 10s of years into the future,” Silverglate says.

That future may already be happening as Verizon sees the components of the edge coming together, says director of IoT and real-time enterprise Thierry Sender. “From a Verizon standpoint, we now have partners for enabling edge deeply integrated into our 5G network and wireless overall,” he says, “which means 4G devices get the benefit of the capabilities.” For example, Sender says for private infrastructure, Verizon has a relationship with Microsoft to deliver on compute resources that support mission critical applications large enterprises would have in warehouses or manufacturing. That ties together different bespoke solutions that enterprises use together to solve their needs.

The edge elements coming together in 2020 are building blocks for exponential change, Sender says. “2021 is the year of transformation,” he says. “That’s where a lot of the solutions will begin to truly manifest themselves.” Sender also says 2022 will be a year of disruption as industries adapt to real-time operational and customer insights that affect their businesses. “Every industry is being impacted with this edge integration to network,” Sender says.

This transformative move is well under way, says Evaristus Mainsah, general manager of the IBM Cloud private ecosystem. “What we’re seeing is lots of data moving out to edge locations.” That is thanks to more devices carrying enough compute to conduct analytics, he says, reducing the need to move data to a data center or to the cloud to process. By 2023, he expects 50% of new on-prem infrastructure to be in edge locations, compared with 10% now. Enterprise data processing outside of central data centers will also grow from 10% now to 75% in 2025, Mainsah says. “Think of it as a movement of data from traditional data center or cloud locations out into edges.”

There is a generational shift taking place, says Karl Bream, head of strategy for Nokia’s enterprise business, one that will take some time and bring more agility, automation, and efficiency. “The network is becoming higher capacity, much more reliable, much lower latency, and can perform better in situations where you’re controlling high value assets,” he says. Bream calls this an inflection point, though networks alone cannot achieve the next evolution. Data privacy and security remain concerns, he says, as many enterprises must decide if they can allow data to reside offsite.

Tradeoffs and choices

There are tradeoffs and choices to be made, but possibilities are growing fast at the edge. “We’re seeing web companies putting edge type scenarios into place to put storage closer and closer to the device,” Bream says… [Read more »]


Social Engineering: Life Blood of Data Exploitation (Phishing)

What do Jeffrey Dahmer, Ted Bundy, Wayne Gacy, Dennis Rader, and Frank Abagnale all have in common, aside from the obvious fact that they are all criminals?  They are also all master manipulators who utilize the art of social engineering to outwit their unsuspecting victims into providing them with the object or objects that they desire.  They appear as angels of light but are no more than ravenous wolves in sheep’s clothing. There are six components of an information system: Humans, Hardware, Software, Data, Network Communication, and Policies; with the human being the weakest link of the six.

By Zachery S. Mitcham, MSA, CCISO, CSIH, VP and Chief Information Security Officer, SURGE Professional Services-Group
Social engineering is the art of utilizing deception to manipulate a subject into providing the manipulator with the object or objects they are seeking to obtain. Pretexting is often used in order to present a false perception of credibility via sources universally known to be valid. It is a dangerous combination to be gullible and greedy. Social engineers prey on the gullible and greedy using the full range of human emotions to exploit their weaknesses via various scams, the most popular of which is phishing.  They have the uncanny ability to influence their victim to comply with their demands.

Phishing is an age-old process of scamming a victim out of something by utilizing bait that appears to be legitimate. Prior to the age of computing, phishing-style scams were conducted mainly through chain letters, but the practice has evolved over the years in cyberspace via electronic mail. One of the most popular phishing scams is the Nigerian 419 scam, which is named after the section of the Nigerian criminal code that addresses the crime.

Information security professionals normally eliminate the idea of social norms when investigating cybercrime.  Otherwise, you will be led into morose mole tunnels going nowhere. They understand that the social engineering cybercriminal capitalizes on unsuspecting targets of opportunity. Implicit biases can lead to the demise of the possessor. Human behavior can work to your disadvantage if left unchecked. You profile one while unwittingly becoming a victim of the transgressions of another. These inherent and natural tendencies can lead to breaches of security. The most successful cybersecurity investigators have a thorough understanding of the sophisticated criminal mind.

Victims of social engineering often feel sad and embarrassed. They are reluctant to report the crime depending on its magnitude. And the CISO comes to the rescue! In order to get to the root cause and determine the damage caused to the enterprise, the CISO must put the victim at ease by letting them know that they are not alone in their unwitting entanglement.

These are some tips that can assist you with an anti-social engineering strategy for your enterprise: Employ sociological education tools by developing a comprehensive Information Security Awareness and Training program addressing all six basic components that make up the information system. The majority of security threats that exist on the network are a direct result of insider threats caused by humans, whether unintentional or deliberate. The most effective way an organization can mitigate the damage caused by insider threats is to develop an effective security awareness and training program that is ongoing and mandatory.

Deploy enterprise technological tools that protect your human capital against themselves.

Digital Rights Management (DRM) and Data Loss Prevention (DLP) serve as effective defensive tools that protect enterprise data from exfiltration in the event that it falls into the wrong hands… [Read more »]

This article first appeared in CISO MAG.


How to find weak passwords in your organization’s Active Directory


Confidentiality is a fundamental information security principle. According to ISO 27001, it is defined as ensuring that information is not made available or disclosed to unauthorized individuals, entities or processes. There are several security controls designed specifically to enforce confidentiality requirements, but one of the oldest and best known is the use of passwords.

In fact, aside from being used since ancient times by the military, passwords were adopted quite early in the world of electronic information. The first recorded case dates to the early 1960s, in an operating system created at MIT. Today, the use of passwords is commonplace in most people’s daily lives, either to protect personal devices such as computers and smartphones or to prevent unwanted access to corporate systems.

With such an ancient security control, it’s only natural to expect it has evolved to the point where passwords are a completely effective and secure practice. The hard truth is that even today, the practice of stealing passwords as a way to gain illegitimate access is one of the main techniques used by cybercriminals. Recent statistics, such as Verizon’s 2020 Data Breach Investigations Report, leave no room for doubt: 37% of hacking-related breaches are tied to passwords that were either stolen or used in gaining unauthorized access.

For instance, in a quite recent case, Nippon Telegraph & Telephone (NTT) — a Fortune 500 company — disclosed a security breach in its internal network, where cybercriminals stole data on at least 621 customers. According to NTT, crackers breached several layers of its IT infrastructure and reached an internal Active Directory (AD) to steal data, including legitimate accounts and passwords. This led to unauthorized access to a construction information management server.

Figure 1: Diagram of the NTT breach (source: NTT)

As with other directory services, Microsoft Active Directory remains a prime target for cybercriminals, since it is used by many businesses to centralize accounts and passwords for both users and administrators. Well, there’s no point in making cybercrime any easier, so today we are going to discuss how to find weak passwords in Microsoft Active Directory.

Active Directory: Password policy versus weak passwords

First, there is a point that needs to be clear: Active Directory indeed allows the implementation of a GPO (Group Policy Object) defining rules for password complexity, including items such as minimum number of characters, mandatory use of special characters, uppercase and lowercase letters, maximum password age and even preventing a user from reusing previous passwords. Even so, it is still important to know how to find weak passwords, since the GPO may (for example) not have been applied to all Organizational Units (OUs).

But this is not the only problem. Even with the implementation of a good password policy, the rules apply only to items such as size, complexity and history, which is not a guarantee of strong passwords. For example, users tend to use passwords that are easy to memorize, such as Password2020! — which, although it technically meets the rules described above, cannot be considered safe and can be easily guessed by a cybercriminal.
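The gap between policy compliance and actual strength is easy to demonstrate. The sketch below uses illustrative rules and weak-password patterns (not Active Directory's actual implementation) to show a password passing typical complexity checks while still being an obvious guess:

```python
import re

# Illustrative complexity rules: length, upper, lower, digit, special character
COMPLEXITY = [r".{8,}", r"[A-Z]", r"[a-z]", r"\d", r"[^A-Za-z0-9]"]

# Illustrative "easy to guess" patterns an attacker would try first
WEAK = re.compile(r"password|welcome|qwerty|\d{4}\D*$", re.IGNORECASE)

def meets_policy(pw):
    """True if the password satisfies every complexity rule."""
    return all(re.search(rule, pw) for rule in COMPLEXITY)

def looks_weak(pw):
    """True if the password matches a known guessable pattern."""
    return bool(WEAK.search(pw))
```

Here `Password2020!` satisfies every rule yet matches two weak patterns at once, which is exactly why complexity policies alone are not enough.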

Finding weak passwords in Active Directory can be simpler than you think. The first step is to know what you are looking for when auditing password quality. For this example, we will look for weak, duplicate, default or even empty passwords using the DSInternals PowerShell Module, which can be downloaded for free here.

DSInternals is an extremely interesting tool for Microsoft Administrators and has specific functionality for password auditing in Active Directory. It has the ability to discover accounts that share the same passwords or that have passwords available in public databases (such as the famous HaveIBeenPwned) or in a custom dictionary that you can create yourself to include terms more closely related to your organization.
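Conceptually, the check against a breached-password list is just a lookup in a sorted file of hashes. The Python sketch below illustrates the idea with SHA-1 for portability; DSInternals itself compares NTLM hashes against the sorted NTLM download mentioned later.

```python
import bisect
import hashlib

def sha1_upper(pw):
    """Hash a password the way breached-password lists are commonly keyed."""
    return hashlib.sha1(pw.encode("utf-8")).hexdigest().upper()

def is_compromised(pw, sorted_hashes):
    """Binary-search a sorted list of breached-password hashes --
    the reason the 'sorted by hash' download format matters."""
    h = sha1_upper(pw)
    i = bisect.bisect_left(sorted_hashes, h)
    return i < len(sorted_hashes) and sorted_hashes[i] == h
```

Sorting buys an O(log n) lookup, which is what makes scanning every account against a multi-gigabyte leak list feasible.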

Once installed, the Active Directory password-audit functionality in DSInternals is quite simple to use. Just follow the syntax below:

Test-PasswordQuality [-Account] <DSAccount> [-SkipDuplicatePasswordTest] [-IncludeDisabledAccounts] [-WeakPasswords <String[]>] [-WeakPasswordsFile <String>] [-WeakPasswordHashesFile <String>] [-WeakPasswordHashesSortedFile <String>] [<CommonParameters>]

The Test-PasswordQuality cmdlet receives the output from the Get-ADDBAccount and Get-ADReplAccount cmdlets, so that offline (ntds.dit) and online (DCSync) password analyses can be done. A good option to obtain a list of leaked passwords is to use the ones provided by HaveIBeenPwned, which are fully supported in DSInternals. In this case, be sure to download the list marked “NTLM (sorted by hash)”… [Read more »]


Prevent burnout with some lessons learned from golf

Even in the wake of Covid-19 and its effect on the world, business doesn’t stop. For many of us, having an extended “holiday” at home has only added more stress to our lives, and getting back to business means catching up what we missed.

At this point in time, whether you’re in a leadership position or an employee, it’s even more important to be aware of and do what we can to prevent burnout.

While we may not all have time to get in a round at the golf course while bringing business back up to speed, here are some lessons golf can teach us about preventing burnout.

#1 – Play With the Right People

When you’re on the golf course, the people you’re with have a lot to do with whether it’s a fun or a stressful experience! Nobody wants to play a round with the guy who complains all the time, or who criticizes your every shot.

The same is true in the workplace. While you can’t always choose who you work with or who you have to spend time with on the job, your colleagues can make a big difference to job satisfaction, which can, in turn, be a large factor in burnout.

Working with people who don’t share your vision, work well in a team, or contribute positively to company culture can cause stress on top of normal work pressure.

More stress just leads one more step down the path to burnout. On the other hand, working with supportive, passionate, and action-oriented people can spur you on when you’re feeling a little low.

Lesson: Surround yourself with positive people wherever possible.

#2 – Use Technology to Your Advantage

Golf is booming with new technologies that can do everything from analyzing your swing to giving you in-depth details about the course you’re about to play. Using these correctly can supercharge your game!

Similarly, there are technologies available in business that can make life easier. Struggling with time management? There’s an app for that. Need to streamline your business processes? Software is available. Not sure what the problem is? Data analytics can help you find out.

Choosing the right piece of software or app is important, though. You can’t tee off with a putter! Analyze where your business could do with some help and figure out exactly what you need before committing.

Lesson: Choosing the right technology can streamline your business and reduce stress.

#3 – Change Things Up

There’s a saying along the lines of: if you do things the same way you’ve always done them, expect to get the same results you always have!

Playing the same round of golf at the same club at the same time every week won’t do much for your game. Complacency is easy to come by.

But switch it up and visit a different club, or play with a different partner, and you may notice that you feel a little more excited and into it.

If you’re feeling like you’re headed towards a burnout, the worst thing you can do is… Keep going!

See where you can mix things up a little. Work from home, or a nearby coffee shop. Sit at a different desk near other people in the office. Try a new way of doing your work.

Lesson: Make a change – your environment, people, or method.

#4 – Appreciate Your Environment

Have you ever seen a golf course that wasn’t beautiful? Rolling green hills, tall trees, and often, a spectacular view make golf courses some of the most peaceful and stunning environments around.

When last did you spend some time marveling at the view or the scenery when you played a round? In the same vein, when last did you look around your workplace and consider what you really like about it and give some gratitude?

It might sound ridiculous, but focusing on the positives around you can change your mindset and remind you of the good things in life (and work!).

Lesson: Make a note of what you’re grateful for in your working environment… [Read more »]



How Object Storage Is Taking Storage Virtualization to the Next Level

We live in an increasingly virtual world. Because of that, many organizations not only virtualize their servers, they also explore the benefits of virtualized storage.

Gaining popularity 10-15 years ago, storage virtualization is the process of sharing storage resources by bringing physical storage from different devices together in a centralized pool of available storage capacity. The strategy is designed to help organizations improve agility and performance while reducing hardware and resource costs. However, this effort, at least to date, has not been as seamless or effective as server virtualization.

That is starting to change with the rise of object storage – an increasingly popular approach that manages data storage by arranging it into discrete and unique units, called objects. These objects are managed within a single pool of storage instead of a legacy LUN/volume block store structure. The objects are also bundled with associated metadata to form a centralized storage pool.

Object storage truly takes storage virtualization to the next level. I like to call it storage virtualization 2.0 because it makes it easier to deploy increased storage capacity through inline deduplication, compression, and encryption. It also enables enterprises to effortlessly reallocate storage where needed while eliminating the layers of management complexity inherent in storage virtualization. As a result, administrators do not need to worry about allocating a given capacity to a given server with object storage. Why? Because all servers have equal access to the object storage pool.

One key benefit is that organizations no longer need a crystal ball to predict their utilization requirements. Instead, they can add the exact amount of storage they need, anytime and in any granularity, to meet their storage requirements. And they can continue to grow their storage pool with zero disruption and no application downtime.

Greater security

Perhaps the most significant benefit of storage virtualization 2.0 is that it can do a much better job of protecting and securing your data than legacy iterations of storage virtualization.

Yes, with legacy storage solutions, you can take snapshots of your data. But the problem is that these snapshots are not immutable. And that fact should have you concerned. Why? Because, although you may have a snapshot when data changes or is overwritten, there is no way to recapture the original.

So, once you do any kind of update, you have no way to return to the original data. Quite simply, you are losing the old data snapshots in favor of the new. While there are some exceptions, this is the case with the majority of legacy storage solutions.

With object storage, however, your data snapshots are indeed immutable. Because of that, organizations can now capture and back up their data in near real-time—and do it cost-effectively. An immutable storage snapshot protects your information continuously by taking snapshots every 90 seconds so that even in the case of data loss or a cyber breach, you will always have a backup. All your data will be protected.
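The mechanics behind that immutability can be sketched in a few lines: a toy content-addressed store where every write creates a new version and nothing is ever overwritten. This is illustrative only, not any vendor's implementation.

```python
import hashlib

class ImmutableStore:
    """Toy content-addressed object store: writes add versions, never overwrite."""

    def __init__(self):
        self._objects = {}    # content hash -> bytes, written once, never mutated
        self._versions = {}   # key -> list of content hashes, oldest first

    def put(self, key, data):
        """Store data under key; returns the content-derived object id."""
        digest = hashlib.sha256(data).hexdigest()
        self._objects[digest] = data
        self._versions.setdefault(key, []).append(digest)
        return digest

    def get(self, key, version=-1):
        """version=-1 is the latest; earlier snapshots stay addressable."""
        return self._objects[self._versions[key][version]]
```

Because object ids are derived from content, an "update" can only ever add a new object, which is the property that makes snapshot-based recovery from overwrites or ransomware possible.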

Taming the data deluge

Storage virtualization 2.0 is also more effective than the original storage virtualization when it comes to taming the data tsunami. Specifically, it can help manage the massive volumes of data—such as digital content, connected services, and cloud-based apps—that companies must now deal with. Most of this new content and data is unstructured, and organizations are discovering that their traditional storage solutions are not up to managing it all.

It’s a real problem. Unstructured data eats up a vast amount of a typical organization’s storage capacity. IDC estimates that 80% of data will be unstructured in five years. For the most part, this data takes up primary, tier-one storage on virtual machines, which can be a very costly proposition.

It doesn’t have to be this way. Organizations can offload much of this unstructured data via storage virtualization 2.0, with immutable snapshots and centralized pooling capabilities.

The net effect is that by moving the unstructured data to object storage, organizations won’t have it stored on VMs and won’t need to back it up in the traditional sense. With object storage taking immutable snaps and replicating to another offsite cluster, it will eliminate 80% of an organization’s backup requirements/window.

This dramatically lowers costs. Because instead of having 80% of storage in primary, tier-one environments, everything is now stored and protected on object storage.

All of this also dramatically reduces the recovery time of unstructured data from days and weeks to less than a minute, regardless of whether it’s TB or PB of data. And because the network no longer moves the data around from point to point, it’s much less congested. What’s more, the probability of having failed data backups goes away, because there are no more backups in the traditional sense.

The need for a new approach

As storage needs increase, organizations need more than just virtualization… [Read more »]