API Security 101: The Ultimate Guide

APIs (application programming interfaces) are a driving force in modern application development because they enable applications and services to communicate with each other. APIs provide a variety of functions that let developers more easily build applications that can share and extract data.

Companies are rapidly adopting APIs to improve platform integration, connectivity, and efficiency and to enable digital innovation projects. Research shows that the average number of APIs per company increased by 221% in 2021.

Unfortunately, over the last few years, API attacks have increased massively, and security concerns continue to impede innovations.

What’s worse, according to Gartner, API attacks will keep growing. They’ve already emerged as the most common type of attack in 2022. Therefore, it’s important to adopt security measures that will keep your APIs safe.

What is an API attack?

An API attack is malicious usage or manipulation of an API. In API attacks, cybercriminals look for business logic gaps they can exploit to access personal data, take over accounts, or perform other fraudulent activities.

What Is API security and why is it important?

API security is a set of strategies and procedures aimed at protecting an organization against API vulnerabilities and attacks.

APIs process and transfer an organization’s sensitive data and other critical assets, and they are now a primary target for attackers, hence the recent increase in the number of API attacks.

That’s why an effective API security strategy is a critical part of the application development lifecycle. It is the only way organizations running APIs can ensure those data conduits are secure and trustworthy.

A secure API improves the integrity of data by ensuring the content is not tampered with and is available only to users, applications, and servers with proper authentication and authorization to access it. API security techniques also help mitigate API vulnerabilities that attackers can exploit.

When is the API vulnerable?

Your API is vulnerable if:

  • The API host’s purpose is unclear, and you can’t tell which version is running, what data is collected and processed, or who should have access (for example, the general public, internal employees, and partners).
  • There is no documentation, or the documentation that exists is outdated.
  • Older API versions are still in use, and they haven’t been patched.
  • Integrated services inventory is either missing or outdated.
  • The API contains a business logic flaw that lets bad actors access accounts or data they shouldn’t be able to reach.

What are some common API attacks?

API attacks are quite different from other cyberattacks and are harder to spot. This difference is why you need to understand the most common API attacks, how they work, and how to prevent them.

BOLA (broken object level authorization) attack

This most common form of attack happens when a bad actor changes parameters across a sequence of API calls to request data that person is not authorized to have. For example, a nefarious user might authenticate with one UserID and then enumerate other UserIDs in subsequent API calls to pull back account information they’re not entitled to access.

Preventive measures:

Look for API tracking that can retain information over time about what different users in the system are doing. BOLA attacks can be very “low and slow,” drawn out over days or weeks, so you need API tracking that can store large amounts of data and apply AI to detect attack patterns in near real time.
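Beyond tracking, the underlying fix is to enforce authorization on every object reference, not just at login. Below is a minimal sketch of such a check, assuming a hypothetical Flask service; the route, in-memory data store, and current_user_id helper are illustrative, not from the article.

```python
# Minimal sketch of an object-level authorization check (hypothetical Flask service;
# the data store and current_user_id helper are illustrative placeholders).
from flask import Flask, abort, jsonify

app = Flask(__name__)

ACCOUNTS = {"acct-1": {"owner": "user-123", "balance": 100}}

def current_user_id():
    # Placeholder: in a real service this comes from a verified session or token.
    return "user-123"

@app.route("/accounts/<account_id>")
def get_account(account_id):
    account = ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    # The BOLA defense: confirm the authenticated caller owns the object,
    # rather than trusting the identifier supplied in the URL.
    if account["owner"] != current_user_id():
        abort(403)
    return jsonify({"id": account_id, "balance": account["balance"]})
```

With a check like this, enumerating account IDs yields 403 responses instead of other users’ data, and those repeated 403s are exactly the kind of pattern the tracking described above can surface.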

Improper assets management attack

This type of attack happens when there are undocumented APIs running (“shadow APIs”) or older APIs that were developed, used, and then forgotten without being removed or replaced with newer, more secure versions (“zombie APIs”). Undocumented APIs present a risk because they run outside the processes and tooling meant to manage APIs, such as API gateways. You can’t protect what you don’t know about, so your inventory needs to be complete, even when developers have left something undocumented. Older APIs are unpatched and often use older libraries. They are also undocumented and can remain undetected for a long time.

Preventive measures:

Set up a proper inventory management system that includes all the API endpoints, their versions, uses, and the environment and networks they are reachable on.

Always check that the API needs to be in production in the first place, that it is not an outdated version, that no sensitive data is exposed, and that data flows as expected throughout the application.
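As a rough illustration of what such an inventory could capture, here is a minimal sketch in Python; the fields and the review rule are assumptions based on the checks described above, not a prescribed schema.

```python
# A minimal sketch of an API inventory record; the fields are illustrative assumptions
# drawn from the checks above (endpoint, version, purpose, environment, documentation).
from dataclasses import dataclass

@dataclass
class ApiInventoryEntry:
    endpoint: str      # e.g. "https://api.example.com/v1/users"
    version: str       # e.g. "v1"
    purpose: str       # what the API is for and who should reach it
    environment: str   # "production", "staging", ...
    documented: bool   # is up-to-date documentation available?
    deprecated: bool   # flagged for retirement?

inventory = [
    ApiInventoryEntry("https://api.example.com/v1/users", "v1", "internal user admin",
                      "production", documented=False, deprecated=True),
]

# Anything undocumented or deprecated but still reachable in production deserves review.
for entry in inventory:
    if entry.environment == "production" and (entry.deprecated or not entry.documented):
        print(f"Review needed: {entry.endpoint} ({entry.version})")
```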

Insufficient logging & monitoring

API logs can contain personal information that attackers can exploit. Logging and monitoring functions give security teams the raw data needed to establish normal user behavior patterns. When an attack happens, the threat can be detected by identifying unusual patterns.

Insufficient monitoring and logging results in untraceable user behavior patterns, thereby allowing threat actors to compromise the system and stay undetected for a long time.

Preventive measures:

Always have a consistent logging and monitoring plan so you have enough data to use as a baseline for normal behavior. That way you can quickly detect attacks and respond to incidents in real time. Also, ensure that any data that goes into the logs is monitored and sanitized.
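For instance, a minimal sketch of structured request logging with basic sanitization might look like the following; the field names and redaction list are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of structured API request logging with basic sanitization.
# The event fields and SENSITIVE_FIELDS list are illustrative assumptions.
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api.audit")

SENSITIVE_FIELDS = {"password", "token", "ssn"}

def log_request(event: dict) -> None:
    # Redact sensitive values before they ever reach the log store.
    sanitized = {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v) for k, v in event.items()}
    logger.info(json.dumps(sanitized))

log_request({"user_id": "user-123", "path": "/accounts/acct-1", "status": 200, "token": "abc"})
```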

What are API security best practices?

Here’s a list of API best practices to help you improve your API security strategy:

  • Train employees and security teams on the nature of API attacks. Lack of knowledge and expertise is the biggest obstacle in API security. Your security team needs to understand how cybercriminals carry out API attacks and what the relevant call/response pairs look like so they can better harden APIs. Use the OWASP API Top 10 list as a starting point for your education efforts.
  • Adopt an effective API security strategy throughout the lifecycle of the APIs.
  • Turn on logging and monitoring and use the data to detect patterns of malicious activities and stop them in real-time.
  • Reduce the risk of sensitive data being exposed. Ensure that APIs return only as much data as is required to complete their task (see the sketch after this list). In addition, implement data filtering, data access limits, and monitoring.
  • Document and manage your APIs so you’re aware of all the APIs that exist in your organization and how they are built and integrated, so you can secure and manage them effectively.
  • Have a retirement plan for old APIs and remove or patch those that are no longer in use.
  • Invest in software specifically designed for detecting API call manipulations. Traditional solutions cannot detect the subtle probing associated with API reconnaissance and attack traffic.
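To illustrate the data-filtering point in the list above, here is a minimal sketch of an allow-list approach to shaping API responses; the record and field names are hypothetical.

```python
# A minimal sketch of response filtering: return only the fields the client needs.
# The record and PUBLIC_FIELDS allow-list are illustrative assumptions.
USER_RECORD = {
    "id": "user-123",
    "display_name": "Alex",
    "email": "alex@example.com",
    "password_hash": "redacted-example",
    "internal_notes": "redacted-example",
}

PUBLIC_FIELDS = {"id", "display_name"}

def to_public_view(record: dict) -> dict:
    # Fields not on the allow-list never leave the service.
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

print(to_public_view(USER_RECORD))   # {'id': 'user-123', 'display_name': 'Alex'}
```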

 

CIO Agenda: Cloud, Cybersecurity, and AI Investments Ahead

Enterprises that employed “business composability” were more likely to succeed during the volatility caused by the pandemic, according to Gartner. That volatility is here to stay, so now is the time to get ready for it.

Nearly two years after a massive disruption hit enterprises, a few lessons are evident. Some organizations quickly adapted to the circumstances, recognized the opportunities available, and acted to capitalize on them. Other organizations were caught unprepared for the unexpected and struggled to keep going. Some of them shut down.

What separated the successful organizations from the organizations that subsisted or didn’t make it at all? One factor might be what Gartner is calling “business composability,” or “the mindset, technologies, and a set of operating capabilities that enable organizations to innovate and adapt quickly to changing business needs.” This composability was a major theme at the Gartner IT Symposium/Xpo Americas, and Gartner is promoting the concept of business composability as the way for businesses to thrive through disruption in 2022 and beyond.

“Business composability is an antidote to volatility,” says Monika Sinha, research VP at Gartner. “Sixty-three percent of CIOs at organizations with high composability reported superior business performance, compared with peers or competitors in the past year. They are better able to pursue new value streams through technology, too.”

Sinha compares the concept of composability to the way toy Legos work. She told InformationWeek in an interview that composability is about creating flexible and adaptive organizations with departments that can be re-arranged to create new value streams. She says organizations should target the following three domains of business composability:

1. Composable thinking

“This is the ability to be dynamic in your thinking as an organization,” Sinha says. This kind of thinking recognizes that business conditions often change, and it empowers the teams closest to the action to respond to the new conditions. “Traditional business thinking views change as a risk, while composable thinking is the means to master the risk of accelerating change and to create new business value.”

2. Composable business architecture

This is the ability of organizations to create dynamic ways of working, Sinha says. For instance, during the pandemic, some retailers were able to pivot quickly to providing curbside pickup, and some healthcare providers pivoted to providing telehealth appointments.

“Organizations looked at different types of models in terms of delivery,” she says. “In these types of organizations, it is really about creating ‘agile’ at scale, and agile types of working in the organization.”

Sinha notes that digital business initiatives fail when business leaders commission projects from IT and then shirk accountability for the implementation of results, treating it as another IT project. “High-composability enterprises embrace distributed accountability for digital outcomes, reflecting a shift that most CIOs have been trying to make for several years, as well as create multidisciplinary teams that blend business and IT units to drive business results,” Sinha says.

3. Composable technology

This is the IT architecture or technology stack, says Sinha. Technology is a catalyst for business transformation and thinking, and developing a flexible and modular technology architecture enables bringing together the parts needed to support transformation.

Distributed cloud and artificial intelligence are the two main technologies that a majority of high-composability enterprises have already deployed or plan to deploy in 2022, according to Gartner’s CIO Agenda survey. Gartner notes that these technologies are a catalyst for business composability because they enable modular technology capabilities.

Tech investments for 2022

Another major technology at the top of the list of planned investments for 2022 is cyber and information security, with 66% of respondents saying they expect to increase associated investments in the next year.

“Many organizations were dabbling with composability before the pandemic,” Sinha says. “What we saw was that those that were composable came out ahead after the pandemic. The pandemic highlighted the importance and the value of composability.”

 

Cloud Native Driving Change in Enterprise and Analytics

A pair of keynote talks at the DeveloperWeek Global conference held online this week hashed out the growing trends among enterprises going cloud native and how cloud native can affect the future of business analytics. Dan McKinney, senior engineer and developer relations lead with Cloudsmith, focused on cloud native supporting the continuous software pipelines in enterprises. Roman Stanek, CEO of GoodData, spoke on the influence cloud native can have on the analytics space. Their keynotes highlighted how software development in the cloud is creating new dynamics within organizations.

In his keynote, Stanek spoke about how cloud native could transform analytics and business intelligence. He described how developers might take ownership of business intelligence, looking at how data is exposed, workflows, and platforms. “Most people are just overloaded with PDF files and Excel files and it’s up to them to visualize and interpret the data,” Stanek said.

There is a democratization of data underway, with data embedded into workflows and tools such as Slack, he said, but exposing data from applications, or integrating it natively in applications, is the province of developers. Tools exist, Stanek said, for developers to make such data analytics more accessible and understandable by users. “We want to help people make decisions,” he said. “We also want to get them data at the right time, with the right context and volume.”

Stanek said he sees more developers owning business applications, insights, and intelligence up to the point where end users can make decisions. “This industry is heading away from an isolated industry where business people are copying data into visualization tools and data preparation tools and analytics tools,” he said. “We are moving into a world where we will be providing all of this functionality as a headless functionality.” The rise of headless compute services, which do not have local keyboards, monitors, or other means of input and are controlled over a network, may lead to different composition tools that allow business users to build their own applications with low-code/no-code resources, Stanek said.

Enterprise understanding of what constitutes cloud is evolving as well. Though cloud native and cloud hosted sound similar, McKinney said they can be different resources. “The cloud goes way beyond just storing and hosting,” he said. “It is at the heart of a whole new range of technical possibilities.” Many enterprises are moving from on-prem and cloud-hosted solutions to completely cloud-native solutions for continuous software, McKinney said, as cloud providers expand their offerings. “It is opening up new ways to build and deploy applications.”

The first wave of applications migrated to the cloud was cloud hosted, he said. “At a very high level, a cloud-hosted application has been lifted and shifted onto cloud-based server instances.” That gave them access to basic features from cloud providers and offered some advantages over on-prem applications, McKinney said. Still, the underlying architecture of the applications remained largely the same. “Legacy applications migrated to the cloud were never built to take advantage of the paradigm shift that cloud providers present,” he said. Such applications cannot take advantage of shared services or pools of resources and are not suitable for scaling. “It doesn’t have the elasticity,” McKinney said.

The march toward the cloud has since accelerated, with the next wave of applications constructed natively to take advantage of the cloud, he said. Applications born and deployed with the technology of cloud providers in mind typically make use of continuous integration, orchestrators, container engines, and microservices, McKinney said. “Cloud-native applications are increasingly architected as smaller and smaller pieces, and they share and reuse services wherever possible.”

Enterprises favor cloud-native solutions now for such reasons as the total cost of ownership, performance and security of the solution, and accommodating distributed teams, McKinney said. There is a desire, he said, to shift from capital expense on infrastructure to operational expense on running costs. These days the costs of cloud-native applications can be calculated fairly easily, McKinney said. Cloud-native resources offer fully managed service models, which can maintain the application itself. “You don’t have to think about what version of the application you have deployed,” he said. “It’s all part of the subscription.”

The ability to scale up with the cloud to meet increased demand was one of the first drivers of migration, McKinney said, but cloud-native applications can go beyond simple scaling. “Cloud-native applications can scale down to the level of individual functions,” he said. “It’s more responsive, efficient, and able to better suit increasing demands — particularly spike loads.”

 

Why You Need a Data Fabric, Not Just IT Architecture

Data fabrics offer an opportunity to track, monitor and utilize data, while IT architectures track, monitor and maintain IT assets. Both are needed for a long-term digitalization strategy.

As companies move into hybrid computing, they’re redefining their IT architectures. IT architecture describes a company’s entire IT asset base, whether on-premises or in-cloud. This architecture is stratified into three basic levels: hardware such as mainframes, servers, etc.; middleware, which encompasses operating systems, transaction processing engines, and other system software utilities; and the user-facing applications and services that this underlying infrastructure supports.

IT architecture has been a recent IT focus because as organizations move to the cloud, IT assets also move, and there is a need to track and monitor these shifts.

However, with the growth of digitalization and analytics, there is also a need to track, monitor, and maximize the use of data that can come from a myriad of sources. An IT architecture can’t provide data management, but a data fabric can. Unfortunately, most organizations lack well-defined data fabrics, and many are still trying to understand why they need a data fabric at all.

What Is a Data Fabric?

Gartner defines a data fabric as “a design concept that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable and inferenced metadata assets to support the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.”

Let’s break it down.

Every organization wants to use data analytics for business advantage. To use analytics well, you need data agility that enables you to easily connect and combine data from any source your company uses – whether the source is an enterprise legacy database or data culled from social media or the Internet of Things (IoT). You can’t achieve data integration and connectivity without using data integration tools, and you also must find a way to connect and relate disparate data to each other in meaningful ways if your analytics are going to work.

This is where the data fabric enters. The data fabric contains all the connections and relationships between an organization’s data, no matter what type of data it is or where it comes from. The goal of the fabric is to function as an overall tapestry that interweaves all data so that data in its entirety is searchable. This has the potential not only to optimize data value, but to create a data environment that can answer virtually any analytics query. The data fabric does what an IT architecture can’t: it tells you what your data does and how different data elements relate to each other. Without a data fabric, companies’ abilities to leverage data and analytics are limited.

Building a Data Fabric

When you build a data fabric, it’s best to start small and in a place where your staff already has familiarity.

That “place” for most companies will be with the tools that they are already using to extract, transform and load (ETL) data from one source to another, along with any other data integration software such as standard and custom APIs. All of these are examples of data integration you have already achieved.

Now, you want to add more data to your core. You can do this by continuing to use the ETL and other data integration methods you already have in place as you build out your data fabric. In the process, take care to also add the metadata about your data, including the origin point for the data, how it was created, what business and operational processes use it, what its form is (e.g., a single field in a fixed record, or an entire image file), etc. By maintaining the data’s history, as well as all its transformations, you are in a better position to check data for reliability and to ensure that it is secure.
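As a rough sketch of what “adding the metadata about your data” can look like in practice, the example below carries provenance alongside each record through a toy ETL step; the source name, fields, and transform are illustrative assumptions, not a specific product’s approach.

```python
# A minimal sketch of carrying provenance metadata through a simple ETL step.
# Source names, fields, and the transformation are illustrative assumptions.
from datetime import datetime, timezone

def extract():
    # Pretend these rows came from a legacy sales database.
    return [{"order_id": 1, "amount_usd": "19.99"}]

def transform(rows, source_name):
    out = []
    for row in rows:
        record = {
            "order_id": row["order_id"],
            "amount_usd": float(row["amount_usd"]),
            # Metadata travels with the data so the fabric can trace lineage later.
            "_meta": {
                "source": source_name,
                "extracted_at": datetime.now(timezone.utc).isoformat(),
                "transformations": ["amount_usd: str -> float"],
            },
        }
        out.append(record)
    return out

def load(records):
    for record in records:
        print(record)   # stand-in for writing to the target store and its catalog

load(transform(extract(), source_name="legacy_sales_db"))
```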

As your data fabric grows, you will probably add data tools that are missing from your workbench. These might be tools that help with tracking data, sharing metadata, applying governance to data, etc. A recommendation in this area is to look for all-inclusive data management software that contains not only all the tools that you’ll need to build a data fabric, but also important automation such as built-in machine learning.

The machine learning observes how data in your data fabric works together and which combinations of data are used most often in different business and operational contexts. When you query the data, the ML assists in pulling together the data that is most likely to answer your queries.

 

5 minutes with Vishal Jain – Navigating cybersecurity in a hybrid work environment

Are you ready for hybrid work? Though the hybrid office will create great opportunities for employees and employers alike, it will create some cybersecurity challenges for security and IT operations. Here, Vishal Jain, Co-Founder and CTO at Valtix, a Santa Clara, Calif.-based provider of cloud native network security services, speaks to Security magazine about the many ways to develop a sustainable cybersecurity program for the new hybrid workforce.

Security: What is your background and current role? 

Jain: I am the co-founder and CTO of Valtix. My background is primarily in building products and technology at the intersection of networking, security, and cloud; I built Content Delivery Networks (CDNs) during the early days of Akamai and most recently worked on Software-Defined Networking (SDN) at a startup that built ACI for Cisco.

 

Security: There’s a consensus that for many of us, the reality will be a hybrid workplace. What does the hybrid workforce mean for cybersecurity teams?

Jain: The pandemic has accelerated trends that had already begun before 2019. We’ve just hit an inflection point on the rate of change – taking on much more change in a much shorter period of time. The pandemic is an inflection point for cloud tech adoption. I think about this in three intersections of work, apps, infrastructure, and security:

  1. Work and Apps: A major portion of the workforce will continue to work remotely, communicating using collaboration tools like Zoom, WebEx, etc. Post-pandemic, video meetings will be the new norm, whereas in the old model in-person meetings were the norm. The defaults have changed. Similarly, the expectation now is that any app is accessible anywhere from any device.
  2. Apps and Infrastructure: Default is cloud. This also means that expectation on various infrastructure is now towards speed, agility, being infinite and elastic and being delivered as a service.
  3. Infrastructure and Security: This is very important for cybersecurity teams: how do they take a discipline like security from a static environment (the traditional enterprise) and apply it to a dynamic environment like the cloud?

Security: What solutions will be necessary for enterprise security to implement as we move towards this new work environment?

Jain: In this new work environment where any app is accessible anywhere from any device, enterprise security needs to focus on security of users accessing those apps and security of those apps themselves. User-side security and securing access to the cloud is a well-understood problem now, plenty of innovation and investments have been made here. For security of apps, we need to look back at intersections 2 and 3, mentioned previously.

Enterprises need to understand security disciplines but implementation of these is very different in this new work environment. Security solutions need to evolve to address security & ops challenges. On the security side, definition of visibility has to expand. On the operational side of security, solutions need to be cloud-native, elastic, and infinitely scalable so that enterprises can focus on applications, not the infrastructure.

Security: What are some of the challenges that will need to be overcome as part of a hybrid workplace?

Jain: Engineering teams typically have experience working across distributed teams, so engineering and the product side of things are not super challenging as part of a hybrid workplace. On the other hand, selling becomes very different; getting both customers and the sales team used to this different world is a challenge enterprises need to focus on. Habits and culture are always the hardest part to change. This is true in security too. There is a tendency to bring in old solutions to secure this new world. Security practitioners may try to bring in the same tech and products they have been using for 10 years, but deep down they know it’s a bad fit.

 

What Will Be the Next New Normal in Cloud Software Security?

Accelerated moves to the cloud made sense at the height of the pandemic — organizations may face different concerns in the future.

Organizations that accelerated their adoption of cloud native apps, SaaS, and other cloud-driven resources to cope with the pandemic may have to weigh other security matters as potential “new normal” operations take shape. Though many enterprises continue to make the most of remote operations, hybrid workplaces might be on the horizon for some. Experts from cybersecurity company Snyk and SaaS management platform BetterCloud see new scenarios in security emerging for cloud resources in a post-pandemic world.

The swift move to remote operations and work-from-home situations naturally led to fresh concerns about endpoint and network security, says Guy Podjarny, CEO and co-founder of Snyk. His company recently issued a report on the State of Cloud Native Application Security, exploring how cloud-native adoption affects defenses against threats. As more operations were pushed remote and to the cloud, security had to discern between authorized personnel who needed access from outside the office versus actual threats from bad actors.

Decentralization was already underway at many enterprises before COVID-19, though that trend may have been further catalyzed by the response to the pandemic. “Organizations are becoming more agile and the thinking that you can know everything that’s going on hasn’t been true for a long while,” Podjarny says. “The pandemic has forced us to look in the mirror and see that we don’t have line of sight into everything that’s going on.”

This led to distribution of security controls, he says, to allow for more autonomous usage by independent teams who are governed in an asynchronous manner. “That means investing more in security training and education,” Podjarny says.

A need for a security-based version of digital transformation surfaced, he says, with more automated tools that work at scale, offering insight on distributed activities. Podjarny says he expects most security needs that emerged amid the pandemic will remain after businesses can reopen more fully. “The return to the office will be partial,” he says, expecting some team members not to be onsite. This may be for personal or work-life reasons, or because organizations want to take advantage of less office space, Podjarny says.

That could lead to some issues, however, with the governance of decentralized activities and related security controls. “People don’t feel they have the tools to understand what’s going on,” he says. The net changes that organizations continue to make in response to the pandemic, and what may come after, have been largely positive, Podjarny says. “It moves us towards security models that scale better and adapted the SaaS, remote working reality.”

The rush to cloud-based applications such as SaaS and platform-as-a-service at the onset of the pandemic brought on some recognition of the necessity to offer ways to maintain operations under quarantine guidelines. “Employees were just trying to get the job done,” says Jim Brennan, chief product officer with BetterCloud. Spinning up such technologies, he says, enabled staff to meet those goals. That compares with the past where such “shadow IT” actions might have been regarded as a threat to the business. “We heard from a lot of CIOs where it really changed their thinking,” Brennan says, which led to efforts to facilitate the availability of such resources to support employees.

Meeting those needs at scale, however, created a new challenge. “How do I successfully onboard a new application for 100 employees? One thousand employees? How do I do that for 50 new applications? One hundred new applications?” Brennan says many CIOs and chief security officers have sought greater visibility into the cloud applications that have been spun up within their organizations and how they are being used. BetterCloud produced a brief recently on the State of SaaS, which looks at SaaS file security exposure.

Automation is being put to work, Brennan says, to improve visibility into those applications. This is part of the emerging landscape that even sees some organizations decide that the concept of shadow IT — the use of technology without direct approval — is a misnomer. “A CIO told me they don’t believe in ‘shadow IT,’” he says. In effect, the CIO regarded all IT, authorized or not, as a means to get work done.

 

Meet Leanne Hurley: Cloud Expert of the Month – April 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Leanne Hurley.

After starting out at the front counter of a two-way radio shop in 1993, Leanne worked her way from face-to-face customer service, to billing, to training, and finally into sales. She has been in sales since 1996 and has (mostly!) loved every minute of it. Leanne started selling IaaS (whether co-lo, managed hosting, or cloud) during the dot-com boom and has expanded her expertise since joining SAP. Now, she enjoys leading a team of sales professionals as she works with companies to improve business outcomes and accelerate digital transformation utilizing SAP’s Intelligent Enterprise.

When did you join Cloud Girls and why?

I was one of the first members of Cloud Girls in 2011. I joined because having a strong network and community of women in technology is important.

What do you value about being a Cloud Girl?  

I value the relationships and women in the group.

What advice would you give to your younger self at the start of your career?

Stop doubting yourself. Continue to ask questions and don’t be intimidated by people that try to squash your tenacity and curiosity.

What’s your favorite inspirational quote?

“You can have everything in life you want if you will just help other people get what they want.”  – Zig Ziglar

What one piece of advice would you share with young women to encourage them to take a seat at the table?

Never stop learning and always ask questions. In technology, women (and men too, for that matter) avoid asking questions because they think it reveals some sort of inadequacy. That is absolutely false. Use your curiosity and thirst for knowledge as a tool; it will serve you well all your life.

You’re a new addition to the crayon box. What color would you be and why?

I would be Sassy-molassy because I’m a bit sassy.

What was the best book you read this year and why?

I loved American Dirt because it humanized the US migrant plight and reminded me how blessed and lucky we all are to have been born in the US.

What’s the most useless talent you have? Why?

 

Protecting Remote Workers Against the Perils of Public WI-FI

In a physical office, front-desk security keeps strangers out of work spaces. In your own home, you control who walks through your door. But what happens when your “office” is a table at the local coffee shop, where you’re sipping a latte among total strangers?

Widespread remote work is likely here to stay, even after the pandemic is over. But the resumption of travel and the reopening of public spaces raises new concerns about how to keep remote work secure.

In particular, many employees used to working in the relative safety of an office or private home may be unaware of the risks associated with public Wi-Fi. Just like you can’t be sure who’s sitting next to your employee in a coffee shop or other public space, you can’t be sure whether the public Wi-Fi network they’re connecting to is safe. And the second your employee accidentally connects to a malicious hotspot, they could expose all the sensitive data that’s transmitted in their communications or stored on their device.

Taking scenarios like this into account when planning your cybersecurity protections will help keep your company’s data safe, no matter where employees choose to open their laptops.

The risks of Wi-Fi search

An employee leaving Wi-Fi enabled when they leave the house may seem harmless, but it leaves them incredibly vulnerable. Wi-Fi-enabled devices can reveal the network names (SSIDs) they normally connect to when they are on the move. An attacker can then use this information to impersonate a known, unencrypted “trusted” network. Many devices will automatically connect to these “trusted” open networks without verifying that the network is legitimate.

Often, attackers don’t even need to emulate known networks to entice users to connect. According to a recent poll, two-thirds of people who use public Wi-Fi set their devices to connect automatically to nearby networks, without vetting which ones they’re joining.

If your employee automatically connects to a malicious network — or is tricked into doing so — a cybercriminal can unleash a number of damaging attacks. The network connection can enable the attacker to intercept and modify any unencrypted content that is sent to the employee’s device. That means they can insert malicious payloads into innocuous web pages or other content, enabling them to exploit any software vulnerabilities that may be present on the device.

Once such malicious content is running on a device, many technical attacks are possible against other, more important parts of the device software and operating system. Some of these provide administrative or root level access, which gives the attacker near total control of the device. Once an attacker has this level of access, all data, access, and functionality on the device is potentially compromised. The attacker can remove or alter the data, or encrypt it with ransomware and demand payment in exchange for the key.

The attacker could even use the data to emulate and impersonate the employee who owns or uses the device. This sort of fraud can have devastating consequences for companies. Last year, a Florida teenager was able to take over multiple high-profile Twitter accounts by impersonating a member of the Twitter IT team.

A multi-layered approach to remote work security

These worst-case scenarios won’t occur every time an employee connects to an unknown network while working remotely outside the home — but it only takes one malicious network connection to create a major security incident. To protect against these problems, make sure you have more than one line of cybersecurity defenses protecting your remote workers against this particular attack vector.

Require VPN use. The best practice for users who need access to non-corporate Wi-Fi is to require that all web traffic on corporate devices go through a trusted VPN. This greatly limits the attack surface of a device, and reduces the probability of a device compromise if it connects to a malicious access point.

Educate employees about risk. Connecting freely to public Wi-Fi is normalized in everyday life, and most people have no idea how risky it is. Simply informing your employees about the risks can have a major impact on behavior. No one wants to be the one responsible for a data breach or hack.

 

 

Meet Andrea Blubaugh: Cloud Expert of the Month – February 2021

Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications and more. Now, we are excited to shine a spotlight on one of our members each month.

Our Cloud Expert of the Month is Andrea Blubaugh.

Andrea has more than 15 years of experience facilitating the design, implementation and ongoing management of data center, cloud and WAN solutions. Her reputation for architecting solutions for organizations of all sizes and verticals – from Fortune 100 to SMBs – earned her numerous awards and honors. With a specific focus on the mid to enterprise space, Andrea works closely with IT teams as a true client advocate, consistently meeting, and often exceeding expectations. As a result, she maintains strong client and provider relationships spanning the length of her career.

When did you join Cloud Girls and why?  

Wow, it’s been a long time! I believe it was 2014 or 2015 when I joined Cloud Girls. I had come to know Manon through work and was impressed by her and excited to join a group of women in the technology space.

What do you value about being a Cloud Girl?  

Getting to know and develop friendships with the fellow Cloud Girls over the years has been a real joy. It’s been a great platform for learning on both a professional and personal level.

What advice would you give to your younger self at the start of your career?  

I would reassure my younger self in her decisions and encourage her to keep taking risks. I would also tell her not to sweat the losses so much. They tend to fade pretty quickly.

What’s your favorite inspirational quote?  

“Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do, so throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover.”  –Mark Twain

What one piece of advice would you share with young women to encourage them to take a seat at the table?  

I was very fortunate early on in my career to work for a startup whose leadership saw promise in my abilities that I didn’t yet see myself. I struggled with the decision to take a leadership role as I didn’t feel “ready” or that I had the right or enough experience. I received some good advice that I had to do what ultimately felt right to me, but that turning down an opportunity based on a fear of failure wouldn’t ensure there would be another one when I felt the time was right. My advice is if you’re offered that seat, and you want that seat, take it.

What’s one item on your bucket list and why?

 

 

How Object Storage Is Taking Storage Virtualization to the Next Level

We live in an increasingly virtual world. Because of that, many organizations not only virtualize their servers, they also explore the benefits of virtualized storage.

Gaining popularity 10-15 years ago, storage virtualization is the process of sharing storage resources by bringing physical storage from different devices together in a centralized pool of available storage capacity. The strategy is designed to help organizations improve agility and performance while reducing hardware and resource costs. However, this effort, at least to date, has not been as seamless or effective as server virtualization.

That is starting to change with the rise of object storage – an increasingly popular approach that manages data storage by arranging it into discrete and unique units, called objects. These objects are managed within a single pool of storage instead of a legacy LUN/volume block store structure. The objects are also bundled with associated metadata to form a centralized storage pool.
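Many object stores expose an S3-compatible API, so a minimal sketch of storing an object together with its metadata might look like the following (using boto3; the endpoint, bucket, keys, and credentials setup are illustrative assumptions, not a specific vendor’s product).

```python
# A minimal sketch of writing and reading an object with attached metadata through
# an S3-compatible API via boto3. Endpoint, bucket, and metadata keys are illustrative;
# credentials are assumed to be configured in the environment.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

s3.put_object(
    Bucket="example-bucket",
    Key="reports/2021/q4.pdf",
    Body=b"...report bytes...",
    Metadata={"department": "finance", "retention": "7y"},  # metadata stored with the object
)

obj = s3.head_object(Bucket="example-bucket", Key="reports/2021/q4.pdf")
print(obj["Metadata"])   # {'department': 'finance', 'retention': '7y'}
```

The point of the sketch is the flat namespace: capacity is addressed as a single pool of objects plus metadata rather than LUNs or volumes carved out per server.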

Object storage truly takes storage virtualization to the next level. I like to call it storage virtualization 2.0 because it makes it easier to deploy increased storage capacity through inline deduplication, compression, and encryption. It also enables enterprises to effortlessly reallocate storage where needed while eliminating the layers of management complexity inherent in storage virtualization. As a result, with object storage administrators do not need to worry about allocating a given capacity to a given server. Why? Because all servers have equal access to the object storage pool.

One key benefit is that organizations no longer need a crystal ball to predict their utilization requirements. Instead, they can add the exact amount of storage they need, anytime and in any granularity, to meet their storage requirements. And they can continue to grow their storage pool with zero disruption and no application downtime.

Greater security

Perhaps the most significant benefit of storage virtualization 2.0 is that it can do a much better job of protecting and securing your data than legacy iterations of storage virtualization.

Yes, with legacy storage solutions, you can take snapshots of your data. But the problem is that these snapshots are not immutable. And that fact should have you concerned. Why? Because, although you may have a snapshot when data changes or is overwritten, there is no way to recapture the original.

So, once you do any kind of update, you have no way to return to the original data. Quite simply, you are losing the old data snapshots in favor of the new. While there are some exceptions, this is the case with the majority of legacy storage solutions.

With object storage, however, your data snapshots are indeed immutable. Because of that, organizations can now capture and back up their data in near real-time—and do it cost-effectively. An immutable storage snapshot protects your information continuously by taking snapshots every 90 seconds so that even in the case of data loss or a cyber breach, you will always have a backup. All your data will be protected.
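Immutability is often exposed through write-once retention controls on objects. As a rough sketch only, an S3-compatible store that supports object lock could be used like this; the bucket, key, and retention date are illustrative assumptions, not the 90-second snapshot mechanism described above.

```python
# Minimal sketch of a write-once (immutable) object using S3 Object Lock via boto3.
# Assumes the bucket was created with object lock enabled and credentials are configured;
# names and the retention date are illustrative.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

s3.put_object(
    Bucket="backups-example",
    Key="snapshots/db-2021-12-01T00-00.img",
    Body=b"...snapshot bytes...",
    ObjectLockMode="COMPLIANCE",                                  # retention cannot be shortened or removed
    ObjectLockRetainUntilDate=datetime(2022, 12, 1, tzinfo=timezone.utc),
)
```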

Taming the data deluge

Storage virtualization 2.0 is also more effective than the original storage virtualization when it comes to taming the data tsunami. Specifically, it can help manage the massive volumes of data—such as digital content, connected services, and cloud-based apps—that companies must now deal with. Most of this new content and data is unstructured, and organizations are discovering that their traditional storage solutions are not up to managing it all.

It’s a real problem. Unstructured data eats up a vast amount of a typical organization’s storage capacity. IDC estimates that 80% of data will be unstructured in five years. For the most part, this data takes up primary, tier-one storage on virtual machines, which can be a very costly proposition.

It doesn’t have to be this way. Organizations can offload much of this unstructured data via storage virtualization 2.0, with immutable snapshots and centralized pooling capabilities.

The net effect is that by moving the unstructured data to object storage, organizations won’t have it stored on VMs and won’t need to back it up in the traditional sense. With object storage taking immutable snapshots and replicating to another offsite cluster, it eliminates 80% of an organization’s backup requirements/window.

This dramatically lowers costs, because instead of having 80% of storage in primary, tier-one environments, everything is now stored and protected on object storage.

All of this also dramatically reduces the recovery time for unstructured data from days and weeks to less than a minute, regardless of whether it’s terabytes or petabytes of data. And because the network no longer moves the data around from point to point, it’s much less congested. What’s more, the probability of having failed data backups goes away, because there are no more backups in the traditional sense.

The need for a new approach

As storage needs increase, organizations need more than just virtualization.