How gamification boosts security awareness training effectiveness

Ransomware and its partner in crime, phishing, are very much in the spotlight of late. According to APWG's Phishing Activity Trends Report, the volume of phishing doubled in 2020 and continues to rise.

In response, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has launched a campaign to address ransomware. It takes a two-pronged approach: improving security readiness and raising awareness. Its campaign encourages organizations to implement best practices, tools and resources to mitigate the risk of falling victim to ransomware.

“Anyone can be the victim of ransomware, and so everyone should take steps to protect their systems,” said Director of CISA Brandon Wales.

But protection doesn’t just mean buying new security systems and implementing better processes and policies. The CISA campaign emphasizes training as an integral part of anti-ransomware and anti-phishing success.

Employee security awareness training

“The greatest technology in the world won’t do you much good if your users are not well educated on security and don’t know what to do and not do,” said Greg Schulz, an analyst at Server StorageIO Group. “Too many security issues are attributed to humans falling prey to social engineering.”

But the effectiveness of security training varies significantly depending on the approach. According to the Computer Economics report Security Training Adoption and Best Practices 2021, in some cases security training goes no further than requiring all users to sign off on having read organizational security policies and procedures. How much of that are they likely to retain?

“All it takes is one weak link among the workforce, and state-of-the-art security technology is breached,” said Frank Scavo, president of Computer Economics. “The goal of security training should be to erect a human firewall of informed and ever-vigilant users: an army of personnel with high awareness of social engineering methods provides an extra safeguard against attack.”

Lunch-and-learn anti-phishing awareness briefings are a little better than only making employees read policy; they may yield a small reduction in phishing success, but not nearly enough. Similarly, traditional classroom and textbook learning has only a modest effect on the number of phishing victims.

What it takes is a multi-faceted response to phishing and ransomware via interactive learning. The introduction of gamification, in particular, to the field of security awareness training has been shown to boost results.

Gamification of cybersecurity training

“People can easily tune out when subjected to static security awareness training,” said Schulz. “But make it an interactive learning experience or game, and people are more likely to engage while being entertained and educated on the do’s and don’ts of modern IT security threat risks.”

What exactly is gamification? Gabe Zichermann, author of “The Gamification Revolution,” defined it as “taking what’s fun about games and applying it to situations that maybe aren’t so fun.”

Gamification is essentially about finding ways to engage people emotionally to motivate them to behave in a particular way or to pursue a specific goal. In training, it’s used to make learning a lot more fun.

Effective gamification techniques applied to security training use quizzes, interactive videos, cartoons and short films with characters and plots that entertain while getting across the important facts about phishing and other scams — and how to avoid them.

In “The Forrester Wave: Security Awareness and Training Solutions, Q1 2020,” Jinan Budge, an analyst at Forrester Research, said, “Successful vendors deliver the ABCs of security: awareness, behavior and culture. Look for providers that truly understand how training contributes to your overall security culture and don’t just check the training requirement box.”

Later in the same report, she added: “Choose vendors that create positive content with inclusive, clear and compelling images and that engage users with alternative content types like gamification, microlearning and virtual reality (VR). Some vendors offer true gamification that involves teams, competition and advanced graphic design, engaging discerning audiences on a deeper level than multiple-choice tests or phishing simulations.”

 

Key Steps To Secure Every Identity in the Age of Digital Transformation

Identity management is in crisis. The need for a secure, seamless authentication method for all digital identities has reached a critical point with the widespread remote working practices and accelerated digital transformation ushered in by the pandemic. Microsoft’s CEO went so far as to say that the pandemic drove ‘two years’ worth of digital transformation in two months’ as organizations of all shapes and sizes prepared for the new normal.

The past year’s accelerated digital transformation has highlighted the need to leave behind total reliance on passwords, reflecting the fact that they are less effective than the alternatives already available. Compromised passwords are far and away the most common cause of data breaches, with 80% of breaches involving the misuse or abuse of credentials. Gartner predicts that 60% of large enterprises and 90% of midsize businesses will be using passwordless authentication by 2024. Not only will this reduce the security issues around passwords, but it will also free up the time IT teams spend dealing with authentication issues.

Businesses are turning to Zero Trust security with multi-factor authentication (MFA) as a step toward passwordless authentication, a key factor in an identity-first cybersecurity strategy. MFA is now a must for authenticating end users, but it is not enough on its own. A comprehensive approach to identity management will not just keep users safe and secure but will also secure your machines, devices and interactions. IT leaders need to incorporate certificate-based services in order to defend all the identities on their network. This article will walk through three facets of a holistic approach to identity management that help IT practitioners go beyond user identity management and ensure all the machines, devices, and interactions on their network are trusted identities.

Enabling trusted identities with automated PKI

It isn’t enough to authenticate just your users; businesses need to authenticate all their identities – whether systems, machines, or digital processes like signing contracts or wiring money – and ensure trusted and secure interactions among them.

This is where we see the benefits of a solution such as Public Key Infrastructure (PKI). With PKI, you can issue certificates for a variety of different machines, so you no longer need to worry about unverified devices or applications on your network. In the era of flexible, hybrid, remote and ‘work from anywhere’ models, this is an invaluable part of PKI’s appeal: you can rest easy that a user is not bringing an unsafe device into the corporate network, even if this is a personal device which they are using for work.

When deployed and managed correctly by a knowledgeable security partner on your behalf, PKI is an effective authentication method in terms of both cost and security. By selecting a PKI security partner, overworked IT teams can automate their PKI management and relieve the pressure of implementing the technology manually without in-house expertise.

Securing digital interactions

Phishing emails and other forms of email-based compromise are increasingly worrying for IT teams and are frequent causes of data breaches. Millions of phishing attacks are attempted every day, with the FBI reporting that in 2020 there were 11 times more phishing complaints recorded than in 2016. While many of these are random and ineffective, the ones that successfully target a specific individual at a company are often hugely lucrative for cybercriminals. A business email compromise (BEC) attack costs a business an average of $3.9 million.

These high stakes are another reason why PKI should be deployed across not just access requirements but also the digital interactions across your business. By issuing personalized, secure certificates for emails, contracts, and other business documents, you can sign and encrypt essential online interactions. Certificates can also be issued across authentication tools such as hardware tokens or smart cards. This will help to streamline digital transformation not just for users but for their online interactions, making it that much harder for hackers to operate with impunity on your network.

A user empowerment mentality

It’s also of paramount importance that the authentication solutions you use are assessed on their long-term impact. Choosing an automated, user-centric solution is the best way to simplify the whole process while ensuring that your IT teams and wider employees are able to do their jobs with as little interference as possible. Placing the user front and center of the process is the best way to ensure this is done. A user-centric policy also accounts for the diversity in the types of users that identity management must cater to – differing login locations, devices, and combinations of remote and on-premises work can all be included in a user-centric identity management policy.

 

Cloud Native Driving Change in Enterprise and Analytics

A pair of keynote talks at the DeveloperWeek Global conference held online this week hashed out the growing trends among enterprises going cloud native and how cloud native can affect the future of business analytics. Dan McKinney, senior engineer and developer relations lead with Cloudsmith, focused on cloud native supporting the continuous software pipelines in enterprises. Roman Stanek, CEO of GoodData, spoke on the influence cloud native can have on the analytics space. Their keynotes highlighted how software development in the cloud is creating new dynamics within organizations.

In his keynote, Stanek spoke about how cloud native could transform analytics and business intelligence. He described how developers might take ownership of business intelligence, looking at how data is exposed, workflows, and platforms. “Most people are just overloaded with PDF files and Excel files and it’s up to them to visualize and interpret the data,” Stanek said.

There is a democratization underway of data embedded into workflows and Slack, he said, but being able to expose data from applications or natively integrated in applications is the province of developers. Tools exist, Stanek said, for developers to make such data analytics more accessible and understandable by users. “We want to help people make decisions,” he said. “We also want to get them data at the right time, with the right context and volume.”

Stanek said he sees more developers owning business applications, insights, and intelligence up to the point where end users can make decisions. “This industry is heading away from an isolated industry where business people are copying data into visualization tools and data preparation tools and analytics tools,” he said. “We are moving into a world where we will be providing all of this functionality as a headless functionality.” The rise of headless compute services, which do not have local keyboards, monitors, or other means of input and are controlled over a network, may lead to different composition tools that allow business users to build their own applications with low-code/no-code resources, Stanek said.

Enterprise understanding of what constitutes cloud is evolving as well. Though cloud native and cloud hosted sound similar, McKinney said they can be different resources. “The cloud goes way beyond just storing and hosting,” he said. “It is at the heart of a whole new range of technical possibilities.” Many enterprises are moving from on-prem and cloud-hosted solutions to completely cloud-native solutions for continuous software, McKinney said, as cloud providers expand their offerings. “It is opening up new ways to build and deploy applications.”

The first wave of applications migrated to the cloud were cloud hosted, he said. “At a very high level, a cloud-hosted application has been lifted and shifted onto cloud-based server instances.” That gave them access to basic features from cloud providers and offered some advantages to on-prem applications, McKinney said. Still, the underlying architecture of the applications remained largely the same. “Legacy applications migrated to the cloud were never built to take advantage of the paradigm shift that cloud providers present,” he said. Such applications cannot take advantage of shared services or pools of resources and are not suitable for scaling. “It doesn’t have the elasticity,” McKinney said.

The march toward the cloud has since accelerated: the next wave of applications to take advantage of the cloud were constructed natively, he said. Applications born and deployed with the technology of cloud providers in mind typically make use of continuous integration, orchestrators, container engines, and microservices, McKinney said. “Cloud-native applications are increasingly architected as smaller and smaller pieces and they share and reuse services wherever possible.”

Enterprises favor cloud-native solutions now for such reasons as the total cost of ownership, performance and security of the solution, and accommodating distributed teams, McKinney said. There is a desire, he said, to shift from capital expense on infrastructure to operational expense on running costs. These days the costs of cloud-native applications can be calculated fairly easily, McKinney said. Cloud-native resources offer fully managed service models, which can maintain the application itself. “You don’t have to think about what version of the application you have deployed,” he said. “It’s all part of the subscription.”

The ability to scale up with the cloud to meet increased demand was one of the first drivers of migration, McKinney said, but cloud-native applications can go beyond simple scaling. “Cloud-native applications can scale down to the level of individual functions,” he said. “It’s more responsive, efficient, and able to better suit increasing demands — particularly spike loads.”

 

Securing cloud endpoints: What you should know

What is endpoint security in the cloud?

Endpoint security solutions, such as endpoint protection platforms (EPP) and endpoint detection and response (EDR), were once considered a separate discipline from cloud security. These technologies have since merged to create solutions for endpoint protection in the cloud.

Traditional endpoint security was only sufficient when employees all worked on-premises, accessing workloads through company computers. However, changes to the market, including greater competition, the need for 24/7 accessibility, and rising IT costs, have led more organizations to embrace cloud computing to enable a more open and accessible IT environment. The cloud is accessible from any device, which is good for work flexibility but can complicate security.

Challenges for cloud security include:

  • Cloud systems introduce new types of endpoints, including SaaS applications, cloud storage buckets, managed databases and compute instances (such as EC2 instances or Azure VMs). Each of these is, for all intents and purposes, an endpoint that attackers can gain access to and compromise.
  • The number and types of endpoints accessing the cloud are constantly growing, with devices ranging from laptops to smartphones and tablets. As the Internet of Things (IoT) grows, so does the list of devices and the associated vulnerabilities.
  • External bring-your-own-device (BYOD) endpoints do not provide sufficient visibility into their state or contents. You cannot know what potential security threat may be hidden in a connected device.
  • It is difficult to manage and monitor endpoint behavior and access. Even if your security policy stipulates a list of approved devices and installed apps, you need the right tools to monitor and enforce endpoint security. To ensure you are protected, you need to find a way to extend security to include monitoring remote endpoint access and behavior.

Cloud endpoint security challenges

Let’s take a closer look at security challenges affecting endpoints in public and private clouds.

Public cloud endpoint security

Public cloud resources are more vulnerable to attackers because they are outside the control of IT departments and typically have access to public networks. All public cloud providers use a shared responsibility model, in which the cloud provider secures cloud infrastructure, while cloud users must secure their workloads and data and are responsible for secure configuration.

Many organizations use multiple computing models, including public Infrastructure-as-a-Service (IaaS) such as Amazon EC2, Platform-as-a-Service (PaaS) such as AWS Lambda and Software-as-a-Service (SaaS) such as Salesforce and Microsoft Office 365.

It can be challenging to identify endpoints, understand access controls and establish secure configurations, as these work differently for each cloud provider. Without specialized tools, you cannot centrally view and control all your public cloud resources; you have to find them one by one across multiple cloud environments.

Another dimension of cloud security, which is unique to the public cloud, is that attacks can not only compromise sensitive resources but also increase cloud costs as attackers leverage cloud infrastructure to create their own, malicious resources.

Private cloud endpoint security

The private cloud may seem more secure because it is fully controlled by the organization and runs in a local data center. However, private clouds are also vulnerable to attack.

Security issues that can impact private clouds include:

  • Insider attacks — a malicious employee, or an attacker who holds or compromises a legitimate account within the private cloud, can use it to wage an attack. Endpoints are usually connected to other resources and networks, which enables lateral movement by malicious insiders.
  • Phishing — social engineering is a common way to compromise endpoints. For example, in a spearphishing attack, hackers study a victim’s behavior in your organization, then send a crafted, trusted-looking email and trick them into clicking a link that grants attackers access or distributes malicious code.
  • Non-compliance — organizations must ensure that endpoint controls are properly configured and sensitive data is adequately protected. If the necessary control measures are not implemented and there are audits or actual violations, the organization may lose certification or incur fines.
  • Data exfiltration — intellectual property, sensitive or business-critical data or security controls can be leaked to external sources. This is often the result of endpoint vulnerabilities. Data can be stolen by malicious software deployed on endpoints by attackers, transmitted via tunneling over traditional communication protocols (e.g. DNS) or using other methods, such as cloud storage, FTP or Tor.
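As an illustration of how endpoint telemetry can flag exfiltration over DNS, the sketch below applies a common heuristic: query names carrying encoded data tend to be unusually long or high-entropy. The thresholds and field choices are illustrative assumptions, not values from any particular product.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in a string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_dns_query(qname: str,
                         max_label_len: int = 52,
                         min_entropy: float = 3.5) -> bool:
    """Heuristic check for DNS tunneling: flag very long or
    high-entropy labels. Thresholds here are illustrative and
    would need tuning against real traffic."""
    for label in qname.rstrip(".").split("."):
        if len(label) > max_label_len:
            return True
        if len(label) >= 20 and shannon_entropy(label) > min_entropy:
            return True
    return False
```

A rule like this produces false positives on legitimate long hostnames (e.g., CDN labels), which is why such heuristics are usually combined with volume and frequency analysis rather than used alone.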

4 Cloud endpoint security best practices

The following best practices can help you enhance endpoint security in the cloud.

Centralize your security strategy

To identify threats across multiple cloud platforms and apply a security strategy that meets the needs of each platform, security teams should centralize security controls and gain data visibility across all cloud environments.

Information about security measures and tools should be shared between the teams responsible for each platform. Having a common protocol for secure implementation of services ensures consistency, and facilitates secure integration of multi-cloud architectures.

Secure user endpoints

Most users access cloud services through a web browser. Therefore, it is important to implement a high degree of client-side security, ensuring user browsers are up-to-date and free of vulnerabilities.

You should also consider implementing an endpoint security solution to protect your end-user devices. This is critical given the explosive growth of mobile devices and remote work, as users increasingly access cloud services through non-company-owned devices.

Combine endpoint security with additional security solutions, including firewalls, mobile device security and intrusion detection/prevention systems (IDS/IPS).

Network segmentation

EDR tools typically respond to events by isolating endpoints. This type of response quickly deters threat actors. But by creating a segmented network from the outset, you can provide additional protection and prevent attacks before they begin. You can use network segmentation to restrict access to specific services and datastores. This reduces the risk of data loss and limits the extent of damage from a successful attack.

Using Ethernet Switched Path (ESP) technology, network structure can be hidden to further protect the network. This makes it more difficult for attackers to laterally move from one network segment to another.

Preventing cloud phishing by securing credentials

Many security breaches are caused by leaked credentials. Users may intentionally share their credentials with others, store their credentials on public devices or use weak passwords that are easy to crack.

Credential phishing is also a major risk. Many users are easily tricked into using fake portals through malicious scripts and email scams. These users may provide their credentials without realizing that something is suspicious. Once a malicious attacker obtains these credentials, they can gain access to applications, application data and corporate systems.

To protect against these situations, you can implement endpoint protection that detects anomalous use of credentials. For example, if someone logs in from an unexpected geographic location or from multiple IPs at once, you’ll receive an alert.
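A minimal sketch of the "multiple IPs at once" check described above, written as a stand-alone function over login events. The event shape, window size, and IP threshold are illustrative assumptions; a production system would feed this from real authentication logs.

```python
from collections import defaultdict

def flag_concurrent_ip_logins(events, window=300, max_ips=1):
    """Alert when a user logs in from more than `max_ips` distinct
    source IPs within a sliding `window` of seconds.
    `events` is an iterable of (timestamp, user, source_ip) tuples."""
    history = defaultdict(list)   # user -> [(timestamp, ip), ...]
    alerts = []
    for ts, user, ip in sorted(events):
        history[user].append((ts, ip))
        # IPs seen for this user inside the window ending at ts
        recent_ips = {i for t, i in history[user] if ts - t <= window}
        if len(recent_ips) > max_ips:
            alerts.append((user, ts, sorted(recent_ips)))
    return alerts
```

For example, two logins by the same user from different IPs within five minutes would produce one alert, while logins from a single IP would not.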

You should also implement a secure password and login policy. If possible, set a session timeout policy and force users to change their passwords periodically. Implement multi-factor authentication (MFA) whenever possible. If you can’t change the authentication scheme because you are using third-party services, implement an internal policy that defines password complexity and lifecycle.
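An internal password-complexity policy like the one described can be enforced with a simple validator. The specific rules below (length plus four character classes) are an illustrative policy sketch, not a recommendation for any particular standard.

```python
import re

def password_acceptable(pw: str, min_len: int = 12) -> bool:
    """Check a candidate password against an illustrative policy:
    minimum length plus uppercase, lowercase, digit, and symbol."""
    checks = [
        len(pw) >= min_len,
        re.search(r"[A-Z]", pw),       # at least one uppercase letter
        re.search(r"[a-z]", pw),       # at least one lowercase letter
        re.search(r"\d", pw),          # at least one digit
        re.search(r"[^A-Za-z0-9]", pw) # at least one symbol
    ]
    return all(checks)
```

A validator like this would typically run at account creation and password change, alongside checks against known-breached password lists.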

Cloud endpoint security best practices recap

Here are suggested best practices that can help improve endpoint security in a cloud environment:

  1. Centralize security for cloud endpoints: make sure you apply consistent policies across all your cloud environments and the on-premise data center.
  2. Secure user endpoints: users frequently access the cloud from remote, personal devices. Ensure that only devices with a known level of security hygiene can access your cloud.

Machine identities: What they are and how to use automation to secure them

Security teams who aim to control secure access to networked applications and sensitive data often focus on the authentication of user credentials. Yet, the explosive growth of connected devices and machines in today’s enterprises exposes critical security vulnerabilities within machine-to-machine communications, where no human is involved. 

That’s where machine identity comes in. Machine identity is the digital credential or “fingerprint” used to establish trust, authenticate other machines, and encrypt communication.

Much more than a digital ID number or a simple identifier such as a serial number or part number, machine identity is a collection of authenticated credentials certifying that a machine is authorized to access online resources or a network.

Machine identities are a subset of a broader digital identity foundation that also includes all human and application identities in an enterprise environment. Machine identity goes beyond easily recognizable use cases like authenticating a laptop that is accessing the network remotely through Wi-Fi. It is required for the millions or billions of daily communications between systems where no human is involved, like routing messages across the globe through various network appliances or application servers generating or using data stored across multiple data centers.

Why Machine Identity Management Needs to Be Automated 

As the number of processes and devices requiring machine-to-machine communication grows, the number of machine identities to track also grows. According to the Cisco Annual Internet Report, by 2023, there will be 29.3 billion networked devices globally, up from 18.4 billion in 2018. That is more than 10 billion new devices in just five years!

Improper identity management not only makes enterprises more vulnerable to cybercriminals, malware and fraud, it also exposes organizations to risks related to employee productivity, customer experience issues, compliance shortfalls and more. While there is no stronger, more versatile authentication and encryption solution than PKI-based digital identity, the challenge for busy IT teams is that manually deploying and managing certificates is time-consuming and can result in unnecessary risk if a mistake is made. 

Whether an enterprise deploys certificates to enable device authentication for a single control network or manages millions of certificates across all its networked device identities, the end-to-end process of certificate issuance, configuration and deployment can overwhelm the workforce. 

The bottom line? Manual machine identity management is neither sustainable nor scalable.

In addition, manually managing certificates puts enterprises at significant risk of neglected certificates expiring unexpectedly. This can result in certificate-related outages, critical business systems failures and security breaches and attacks.

In recent years, expired certificates have resulted in many high-profile website and service outages. These mistakes have cost billions of dollars in lost revenue, contract penalties, lawsuits and the incalculable cost of lost customer goodwill and tarnished brand reputations. 
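Automated expiry monitoring is one of the simplest safeguards against this failure mode. The sketch below uses Python's standard `ssl` module to compute days remaining on a certificate and to check a live endpoint; the hostname, port, and warning threshold are illustrative assumptions.

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate's notAfter timestamp,
    e.g. 'Jun  1 12:00:00 2030 GMT' as returned by getpeercert()."""
    expires = ssl.cert_time_to_seconds(not_after)
    current = now if now is not None else time.time()
    return (expires - current) / 86400

def check_cert(hostname, warn_days=30):
    """Fetch a server's certificate over TLS and warn if it
    expires within warn_days. Requires network access."""
    import socket
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    if remaining < warn_days:
        print(f"WARNING: {hostname} certificate expires in {remaining:.0f} days")
        return False
    return True
```

Run on a schedule against an inventory of hosts, a check like this turns a silent expiry into an actionable alert weeks in advance; commercial certificate lifecycle managers automate the renewal step as well.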

How to Automate Machine Identity Management

With such high stakes, IT professionals are rethinking their certificate lifecycle management strategies. Organizations need an automated solution that ensures all their digital certificates are correctly configured, installed and managed without human intervention. Yes, automation helps reduce risk, but it also aids IT departments in controlling operational costs and streamlining time-to-market for products and services.

In response to market forces and hacking attacks, PKI has become even more versatile. Consistent high uptime, interoperability and governance are still crucial benefits. But modern PKI solutions can also improve administration and certificate lifecycle management through:

●    Crypto-agility: Updating cryptographic strength and revoking and replacing at-risk certificates with quantum-safe certificates rapidly in response to new or changing threats.

●    Visibility: Viewing certificate status with a single pane of glass across all use cases.

●    Coordination: Using automation to manage a broad portfolio of tasks.

●    Scalability: Managing certificates numbering in the hundreds, thousands, or even millions.

●    Automation: Completing individual tasks while minimizing manual processes.

 

Why Detection-As-Code Is the Future of Threat Detection

As security moves to the cloud, manual threat detection processes are unable to keep pace. This article will discuss how detection engineering can advance security operations just as DevOps improved the app development world. We’ll explore detection-as-code (DaC) and enumerate several compelling benefits of this trending approach to threat detection.

What is detection-as-code?

Detection-as-code is a systematic, flexible, and comprehensive approach to threat detection powered by software, in the same way that infrastructure-as-code (IaC) and configuration-as-code use machine-readable definition files and descriptive models to compose infrastructure at scale.

It is a structured approach to analyzing security log data used to identify attacker behaviors. Using software engineering best practices to write expressive detections and automate responses, security teams can build scalable processes to identify sophisticated threats across rapidly expanding environments.

Done right, detection engineering — the set of practices and systems to deliver modern and effective threat detection — can advance security operations just as DevOps improved the app development world.

Similar to a CI/CD workflow, a detection engineering workflow might include the following steps:

  • Observe a suspicious or malicious behavior
  • Model it in code
  • Write various test cases
  • Commit to version control
  • Deploy to staging, then production
  • Tune and update
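The "model it in code" step can be as small as a pure function over a parsed log event. The sketch below follows the style of Python-based detection platforms; the event shape mimics a cloud console-login audit record, but the field names are illustrative assumptions rather than a guaranteed schema.

```python
# A minimal detection-as-code rule: a predicate over one log event,
# plus a helper that renders the alert title when the rule fires.

def rule(event: dict) -> bool:
    """Fire on console logins that did not use MFA."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("additionalEventData", {}).get("MFAUsed") == "No"
    )

def title(event: dict) -> str:
    """Human-readable alert title generated from the matching event."""
    user = event.get("userIdentity", {}).get("arn", "unknown")
    return f"Console login without MFA by {user}"
```

Because the rule is plain code, it can be unit-tested, code-reviewed, committed to version control, and promoted through staging to production exactly as the workflow above describes.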

You can see that the detection engineering CI/CD workflow is not so much about treating detections as code but about improving detection engineering to be an authentic engineering practice; one that is built on modern software development principles.

The concept of detection-as-code grew out of security’s need for automated, systematic, repeatable, predictable, and shareable approaches. It is essential because threat detection was not previously fully developed as a systematic discipline with effective automation and predictably good results.

Threat detection programs that are precisely adjusted for particular environments and systems have the most potent effect. By using detections as well-written code that can be tested, checked into source control, and code-reviewed by peers, security teams can produce higher-quality alerts that reduce burnout and quickly flag questionable activity.

What are the benefits of detection-as-code?

The benefits of detection-as-code include the ability to:

  1. Build custom, flexible detections using a programming language
  2. Adopt a Test-Driven Development (TDD) approach
  3. Incorporate with version control systems
  4. Automate workflows
  5. Reuse code

Writing detections in a universally recognized, flexible, and expressive language like Python offers several advantages. Instead of using domain-specific languages with too many limitations, you can write more custom and complex detections to fit the precise needs of your enterprise. These language rules are also often more readable and easy to understand. This characteristic can be crucial as complexity increases.

An additional benefit of using an expressive language is the ability to draw on a rich set of built-in or third-party libraries, developed by or familiar to security practitioners, for communicating with APIs, which improves the effectiveness of detections.

Quality assurance for detection code can illuminate detection blind spots, test for false positives, and promote detection efficacy. A TDD approach enables security teams to anticipate an attacker’s approach, document what they learn, and create a library of insights into the attacker’s strategy.

Over and above code correctness, a TDD approach improves the quality of detection code and enables more modular, extensible, and flexible detections. Engineers can easily modify their code without fear of breaking alerts or weakening security.
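A TDD workflow for detections can be sketched with plain assertions (a real project would use pytest). The rule and its field names below are illustrative; the point is that the tests encode both the attacker behavior to catch and the benign cases that must not alert.

```python
# Tests are written first, then the rule is implemented to satisfy them.

def failed_admin_login(event: dict) -> bool:
    """Fire on failed logins to privileged accounts (illustrative rule)."""
    return (
        event.get("outcome") == "failure"
        and event.get("user", "").startswith("admin")
    )

def test_detects_failed_admin_login():
    assert failed_admin_login({"user": "admin-jsmith", "outcome": "failure"})

def test_ignores_successful_admin_login():
    # False-positive guard: successful logins must not alert.
    assert not failed_admin_login({"user": "admin-jsmith", "outcome": "success"})

def test_ignores_regular_user_failure():
    # False-positive guard: ordinary users failing login must not alert.
    assert not failed_admin_login({"user": "jdoe", "outcome": "failure"})

for t in (test_detects_failed_admin_login,
          test_ignores_successful_admin_login,
          test_ignores_regular_user_failure):
    t()
```

Each false-positive guard documents a deliberate scoping decision, so future engineers can modify the rule without silently breaking or broadening the alert.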

When writing or modifying detections, version control allows practitioners to revert to previous states swiftly. It also confirms that security teams are using the most updated detection. Additionally, version control can provide needed meaning for specific detections that trigger an alert or help identify changes in detections.

Over time, detections must change as new or additional data enters the system. Change control is an essential process to help teams adjust detections as needed. An effective change control process will also ensure that all changes are documented and reviewed.

Security teams that have been waiting to shift security left will benefit from a CI/CD pipeline. Starting security operations earlier in the delivery process helps to achieve these two goals:

  • Eliminate silos between teams that work together on a shared platform and code-review each other’s work.
  • Provide automated testing and delivery systems for your security detections. Security teams remain agile by focusing on building precision detections.

Finally, DaC promotes code reusability across broad sets of detections. As security detection engineers write detections over time, they start to identify patterns as they emerge. Engineers can reuse existing code to meet similar needs across different detections without starting completely over.
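One way to picture that reuse, under the same illustrative event schema as before: shared helpers live in one place and multiple detections import them instead of re-implementing the logic. All names below are hypothetical.

```python
# Shared helper library reused across detections (names are illustrative).

PRIVILEGED_EVENTS = {"CreateUser", "AttachUserPolicy", "PutUserPolicy"}

def actor(event: dict) -> str:
    """Shared helper: extract the acting user from any event."""
    return event.get("userIdentity", {}).get("userName", "unknown")

def is_privileged_change(event: dict) -> bool:
    """Shared helper: does this event modify IAM-style privileges?"""
    return event.get("eventName") in PRIVILEGED_EVENTS

# Two separate detections reuse the helpers rather than duplicating them.
def rule_privilege_change_by_non_admin(event: dict) -> bool:
    return is_privileged_change(event) and not actor(event).startswith("admin-")

def rule_privilege_change_out_of_hours(event: dict) -> bool:
    hour = int(event.get("eventTime", "00:00")[:2])  # "HH:MM" assumed
    return is_privileged_change(event) and not (9 <= hour < 17)
```

A fix or improvement to `is_privileged_change` then propagates to every detection that uses it, which is exactly the leverage the pattern is meant to provide.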

Reusability is an essential part of detection engineering, allowing teams to share functions across different detections or to change and adjust detections for particular use cases.

 

How to Become an IT Industry Thought Leader

Many IT executives — and prospective executives — would like to share their knowledge and insights as industry thought leaders. Here’s a look at what you need to know to get started.

An IT industry thought leader or influencer is someone who uses their expertise and perspective to offer specialized guidance, inspire innovation, and motivate followers to business success. A thought leader’s followers can include colleagues, business partners, on-site and virtual conference audiences, and website and book readers, as well as social media followers.

Getting started as an IT thought leader requires industry experience, a winning personality, lessons to teach, and an eager audience. The most successful thought leaders address the two major needs of IT and business executives: making money and saving money, said Juan Orlandini, chief architect at Insight Enterprises, an IT systems and services provider.

Leaders, Not Followers

IT thought leaders don’t follow the crowd. “You have to be able to set a path that’s your own — a path that’s informed by the prevailing wisdom, not driven by it,” Orlandini said. “IT thought leaders, regardless of what level they’re at, also need to remain current and relevant with the changing landscape,” he added.

Beyond deep technical and/or business knowledge, becoming an IT thought leader requires a significant amount of self-reflection. Points to consider include motivations, such as career advancement, enterprise recognition, increasing product or services sales, building close ties with business partners, and perhaps even the desire to help improve and advance the IT community. “Understanding those motivations will help you plan your approach,” said Jeff Ton, a strategic IT advisor to IT solutions provider InterVision.

Gaining widespread recognition requires the ability to communicate ideas through multiple channels. “Critically assess your ability to write, to speak in front of audiences, to be the subject of an interview,” Ton advised. Getting started doesn’t require perfection, but if there are any glaring weaknesses, it’s important to address them. “For example, if the thought of public speaking makes your palms sweaty, find low-risk opportunities to speak to groups,” he suggested. If you need to hone your writing skills, seek out guest blog opportunities. “If conversation is more your thing, identify tech-related podcasts and propose being a guest on the program,” Ton recommended.

Ask your enterprise’s marketing department to help you get your thought leadership career off on the right foot. “Marketing can help you find opportunities to amplify your voice,” Ton said. “They can also help you with editing, fine-tuning your message, graphics, social media, and much more.”

Personal and Career Benefits

Most thought leaders launch their quest with the goal of enhancing their careers. “Within your company, other executives will gain an understanding of the way you think about your role, the business, and the industry,” Ton said. “They will begin to see you as more than just the ‘IT person’,” he noted.

Becoming a thought leader creates an instant credibility that can be used to build strong connections to C-suite executives, said Ari Lightman, a professor of digital media and marketing at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. It also creates a sense of pride in the IT department that there’s a leader who can help other in-house strategic thinkers work through challenging issues, he explained.

As they raise their industry profile, thought leaders are frequently targeted by enterprises searching for a new CIO. “They may think of reaching out to you prospectively because they know your name and know your reputation,” said Rich Temple, vice president and CIO at the Deborah Heart and Lung Center. “It becomes that much easier for potential employers to see your body of work and get to know you,” he noted. “That can be a real differentiator for you in a competitive job search.”

 

Why You Need a Data Fabric, Not Just IT Architecture

Data fabrics offer an opportunity to track, monitor and utilize data, while IT architectures track, monitor and maintain IT assets. Both are needed for a long-term digitalization strategy.

As companies move into hybrid computing, they’re redefining their IT architectures. IT architecture describes a company’s entire IT asset base, whether on-premises or in-cloud. This architecture is stratified into three basic levels: hardware such as mainframes, servers, etc.; middleware, which encompasses operating systems, transaction processing engines, and other system software utilities; and the user-facing applications and services that this underlying infrastructure supports.

IT architecture has been a recent IT focus because as organizations move to the cloud, IT assets also move, and there is a need to track and monitor these shifts.

However, with the growth of digitalization and analytics, there is also a need to track, monitor, and maximize the use of data that can come from a myriad of sources. An IT architecture can’t provide data management, but a data fabric can. Unfortunately, most organizations lack well-defined data fabrics, and many are still trying to understand why they need a data fabric at all.

What Is a Data Fabric?

Gartner defines a data fabric as “a design concept that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable and inferenced metadata assets to support the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.”

Let’s break it down.

Every organization wants to use data analytics for business advantage. To use analytics well, you need data agility that enables you to easily connect and combine data from any source your company uses, whether the source is an enterprise legacy database or data culled from social media or the Internet of Things (IoT). You can’t achieve data integration and connectivity without data integration tools, and you must also find a way to connect and relate disparate data in meaningful ways if your analytics are going to work.

This is where the data fabric enters. The data fabric contains all the connections and relationships between an organization’s data, no matter what type of data it is or where it comes from. The goal of the fabric is to function as an overall tapestry that interweaves all data so that data in its entirety is searchable. This has the potential not only to optimize data value, but to create a data environment that can answer virtually any analytics query. The data fabric does what an IT architecture can’t: it tells you what your data means and how different data elements relate to one another. Without a data fabric, a company’s ability to leverage data and analytics is limited.

Building a Data Fabric

When you build a data fabric, it’s best to start small and in a place where your staff already has familiarity.

That “place” for most companies will be with the tools that they are already using to extract, transform and load (ETL) data from one source to another, along with any other data integration software such as standard and custom APIs. All of these are examples of data integration you have already achieved.

Now, you want to add more data to your core. You can do this by continuing to use the ETL and other data integration methods you already have in place as you build out your data fabric. In the process, take care to also add the metadata about your data: its origin point, how it was created, which business and operational processes use it, what its form is (e.g., a single field in a fixed record, or an entire image file), and so on. By maintaining the data’s history, as well as all its transformations, you are in a better position to check data for reliability and to ensure that it is secure.
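As an illustrative sketch of that idea, the snippet below carries lineage metadata alongside a dataset as it moves through an ETL step. The field names and the `DataAsset` structure are assumptions for illustration, not a standard.

```python
# Sketch: tracking origin, form, consumers, and transformation history
# for a data asset in a fabric. All names here are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataAsset:
    name: str
    origin: str                  # where the data was first created
    form: str                    # e.g. "fixed-record field", "image file"
    used_by: list[str]           # business/operational processes using it
    transformations: list[str] = field(default_factory=list)

    def record_transformation(self, step: str) -> None:
        """Append a timestamped entry to the asset's transformation history."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.transformations.append(f"{stamp} {step}")

# Example: an orders feed pulled from a legacy ERP system.
orders = DataAsset(name="orders", origin="ERP export",
                   form="fixed-record field",
                   used_by=["billing", "forecasting"])
orders.record_transformation("normalized currency codes to ISO 4217")
```

Keeping this history next to the data is what later lets you audit reliability and security, rather than reconstructing lineage after the fact.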

As your data fabric grows, you will probably add data tools that are missing from your workbench. These might be tools that help with tracking data, sharing metadata, applying governance to data, etc. A recommendation in this area is to look for all-inclusive data management software that contains not only all the tools you’ll need to build a data fabric, but also important automation such as built-in machine learning.

The machine learning observes how data in your data fabric works together, and which combinations of data are used most often in different business and operational contexts. When you query the data, the ML assists in pulling together the data most likely to answer your queries.

 

9 best practices for network security

Network security is the practice of protecting the network and data to maintain the integrity, confidentiality and accessibility of the computer systems in the network. It covers a multitude of technologies, devices and processes, and makes use of both software- and hardware-based technologies.

Every organization, no matter its industry or infrastructure size, requires comprehensive network security solutions to protect it from the various cyberthreats active in the wild today.

Network security layers

When we talk about network security, we need to consider layers of protection:

Physical network security

Physical network security controls prevent unauthorized persons from gaining physical access to the office and to network devices such as firewalls and routers. Physical locks, ID verification and biometric authentication are a few of the measures used to address such issues.

Technical network security

Technical security controls protect the devices in the network and the data stored on them or in transit. Technical security must also protect data and systems from unauthorized personnel as well as from malicious activity by employees.

Administrative network security

Administrative security controls cover the security policies and compliance processes that govern user behavior, including user authentication, privilege levels and how changes to the existing infrastructure are implemented.

Network security best practices

Now that we have a basic understanding and overview of network security, let’s focus on some of the network security best practices you should be following.

1. Perform a network audit

The first step to securing a network is to perform a thorough audit to identify weaknesses in the network’s posture and design. Performing a network audit identifies and assesses:

  • Presence of security vulnerabilities
  • Unused or unnecessary applications
  • Open ports
  • Anti-virus/anti-malware and malicious traffic detection software
  • Backups

In addition, third-party vendor assessments should be conducted to identify additional security gaps.
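As a minimal sketch of one audit step, the snippet below checks which TCP ports on a host accept connections. A real audit would use a dedicated scanner with authorization; the host and port values here are illustrative placeholders.

```python
# Sketch: probe a list of TCP ports on a host and report which ones
# accept a connection. Only run against hosts you are authorized to audit.

import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example (hypothetical internal host and well-known service ports):
# open_ports("192.168.1.10", [22, 80, 443, 3389])
```

Unexpected entries in the result, say, an exposed remote-desktop port on a server that should only serve HTTPS, are exactly the kind of finding an audit is meant to surface.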

2. Deploy network and security devices

Every organization should have a firewall and a web application firewall (WAF) to protect its website from various web-based attacks and to ensure the safe storage of its data. To maintain optimum security and monitor traffic, various additional systems should be used, such as intrusion detection and prevention (IDS/IPS) systems, security information and event management (SIEM) systems and data loss prevention (DLP) software.

3. Disable file sharing features

Though file sharing sounds like a convenient method for exchanging files, it’s advisable to enable file sharing only on a few independent and private servers. File sharing should be disabled on all employee devices.

4. Update antivirus and anti-malware software

Businesses purchase desktop computers and laptops with the latest version of antivirus and anti-malware software but fail to keep them updated with new rules and signatures. Keeping antivirus and anti-malware software up to date ensures that devices are running the latest bug fixes and security updates.

5. Secure your routers

A security breach or security event can be triggered simply by someone hitting the reset button on a network router. It is therefore paramount to move routers to a more secure location, such as a locked room or closet. Video surveillance equipment or CCTV can also be installed in the server or network room. In addition, routers should be configured to change their default passwords and network names, which attackers can easily find online.

6. Use a private IP address

To prevent unauthorized users or devices from accessing the critical devices and servers in the network, assign those devices and servers private IP addresses. This practice makes it easier for the IT administrator to spot unauthorized connection attempts and to monitor devices on the network for suspicious activity.
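For reference, Python’s standard `ipaddress` module can verify whether an address falls in the private RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) that an administrator would typically assign to internal servers. The example addresses below are illustrative.

```python
# Check whether an address is in a private (non-globally-routable) range
# using the standard library's ipaddress module.

import ipaddress

def is_private(addr: str) -> bool:
    """True when the address is in a private, non-routable range."""
    return ipaddress.ip_address(addr).is_private

# Examples: 10.0.0.5, 172.16.4.2 and 192.168.1.10 are private; 8.8.8.8 is not.
```

Note that `is_private` also returns True for loopback and link-local addresses, so treat it as a broad "not publicly routable" check rather than a strict RFC 1918 test.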

7. Establish a network security maintenance system

A proper network security maintenance system should be established, involving processes such as:

  1. Performing regular backups
  2. Updating software
  3. Scheduling changes to network names and passwords

Once a network security maintenance system is established, document it and circulate it to your team.

 

What Will Be the Next New Normal in Cloud Software Security?

Accelerated moves to the cloud made sense at the height of the pandemic — organizations may face different concerns in the future.

Organizations that accelerated their adoption of cloud native apps, SaaS, and other cloud-driven resources to cope with the pandemic may have to weigh other security matters as potential “new normal” operations take shape. Though many enterprises continue to make the most of remote operations, hybrid workplaces might be on the horizon for some. Experts from cybersecurity company Snyk and SaaS management platform BetterCloud see new scenarios in security emerging for cloud resources in a post-pandemic world.

The swift move to remote operations and work-from-home situations naturally led to fresh concerns about endpoint and network security, says Guy Podjarny, CEO and co-founder of Snyk. His company recently issued a report on the State of Cloud Native Application Security, exploring how cloud-native adoption affects defenses against threats. As more operations were pushed remote and to the cloud, security had to discern between authorized personnel who needed access from outside the office versus actual threats from bad actors.

Decentralization was already underway at many enterprises before COVID-19, though that trend may have been further catalyzed by the response to the pandemic. “Organizations are becoming more agile and the thinking that you can know everything that’s going on hasn’t been true for a long while,” Podjarny says. “The pandemic has forced us to look in the mirror and see that we don’t have line of sight into everything that’s going on.”

This led to distribution of security controls, he says, to allow for more autonomous usage by independent teams who are governed in an asynchronous manner. “That means investing more in security training and education,” Podjarny says.

A need for a security-based version of digital transformation surfaced, he says, with more automated tools that work at scale, offering insight on distributed activities. Podjarny says he expects most security needs that emerged amid the pandemic will remain after businesses can reopen more fully. “The return to the office will be partial,” he says, expecting some members of teams to not be onsite. This may be for personal work-life reasons, or because organizations want to take advantage of a smaller office footprint, Podjarny says.

That could lead to some issues, however, with the governance of decentralized activities and related security controls. “People don’t feel they have the tools to understand what’s going on,” he says. The net changes that organizations continue to make in response to the pandemic, and what may come after, have been largely positive, Podjarny says. “It moves us towards security models that scale better and adapted the SaaS, remote working reality.”

The rush to cloud-based applications such as SaaS and platform-as-a-service at the onset of the pandemic brought on some recognition of the necessity to offer ways to maintain operations under quarantine guidelines. “Employees were just trying to get the job done,” says Jim Brennan, chief product officer with BetterCloud. Spinning up such technologies, he says, enabled staff to meet those goals. That compares with the past where such “shadow IT” actions might have been regarded as a threat to the business. “We heard from a lot of CIOs where it really changed their thinking,” Brennan says, which led to efforts to facilitate the availability of such resources to support employees.

Meeting those needs at scale, however, created a new challenge. “How do I successfully onboard a new application for 100 employees? One thousand employees? How do I do that for 50 new applications? One hundred new applications?” Brennan says many CIOs and chief security officers have sought greater visibility into the cloud applications that have been spun up within their organizations and how they are being used. BetterCloud produced a brief recently on the State of SaaS, which looks at SaaS file security exposure.

Automation is being put to work, Brennan says, to improve visibility into those applications. This is part of the emerging landscape that even sees some organizations decide that the concept of shadow IT — the use of technology without direct approval — is a misnomer. “A CIO told me they don’t believe in ‘shadow IT,’” he says. In effect, the CIO regarded all IT, authorized or not, as a means to get work done.