Are you ready for hybrid work? Though the hybrid office will create great opportunities for employees and employers alike, it will create some cybersecurity challenges for security and IT operations. Here, Vishal Jain, Co-Founder and CTO at Valtix, a Santa Clara, Calif.-based provider of cloud native network security services, speaks to Security magazine about the many ways to develop a sustainable cybersecurity program for the new hybrid workforce.
Security: What is your background and current role?
Jain: I am the co-founder and CTO of Valtix. My background is primarily in building products and technology at the intersection of networking, security and cloud; I built Content Delivery Networks (CDNs) during the early days of Akamai and most recently worked on Software-Defined Networking (SDN) at a startup that built ACI for Cisco.
Security: There’s a consensus that for many of us, the reality will be a hybrid workplace. What does the hybrid workforce mean for cybersecurity teams?
Jain: The pandemic has accelerated trends that had already begun before 2019. We’ve just hit an inflection point in the rate of change – taking on much more change in a much shorter period of time. The pandemic is an inflection point for cloud tech adoption. I think about this as three intersections among work, apps, infrastructure and security:
Work and Apps: A major portion of the workforce will continue to work remotely, communicating using collaboration tools like Zoom, WebEx, etc. Post-pandemic, video meetings will be the new norm, replacing the old model where in-person meetings were the default. The defaults have changed. Similarly, the expectation now is that any app is accessible anywhere, from any device.
Apps and Infrastructure: The default is cloud. This also means that the expectation for infrastructure has shifted toward speed, agility, elasticity, effectively infinite capacity and delivery as a service.
Infrastructure and Security: This is the crucial question for cybersecurity teams: how do they take a discipline like security from a static environment (the traditional enterprise) and apply it to a dynamic environment like the cloud?
Security: What solutions will be necessary for enterprise security to implement as we move towards this new work environment?
Jain: In this new work environment where any app is accessible anywhere from any device, enterprise security needs to focus on security of users accessing those apps and security of those apps themselves. User-side security and securing access to the cloud is a well-understood problem now, plenty of innovation and investments have been made here. For security of apps, we need to look back at intersections 2 and 3, mentioned previously.
Enterprises still need the same security disciplines, but implementing them is very different in this new work environment. Security solutions need to evolve to address both security and operational challenges. On the security side, the definition of visibility has to expand. On the operational side, solutions need to be cloud-native, elastic and infinitely scalable so that enterprises can focus on applications, not the infrastructure.
Security: What are some of the challenges that will need to be overcome as part of a hybrid workplace?
Jain: Engineering teams typically have experience working across distributed teams, so engineering and the product side of things are not especially challenging in a hybrid workplace. Selling, on the other hand, becomes very different; getting both customers and the sales team used to this different world is a challenge enterprises need to focus on. Habits and culture are always the hardest things to change. This is true in security too. There is a tendency to bring in old solutions to secure this new world. Security practitioners may try to bring in the same tech and products they have been using for 10 years, but deep down they know it’s a bad fit.
Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications, and more. Now, we are excited to shine a spotlight on one of our members each month.
Angela has been a Cloud Girl since 2015. After establishing a career in the cloud consulting space, she pivoted to cybersecurity and compliance in 2018 and now serves as the Director of Assessment and Innovation at RSI. Angela’s main focus is enabling clients to achieve new business heights while also securing their organizations through technology, operations, and governance. Angela lives in Broomfield with her husband and two boys, spends much of her time serving on the Cloud Girls Board, and is always looking for new ways to enable women in technology and security.
When did you join Cloud Girls and why?
I joined Cloud Girls in 2015 after I left my job in a cloud company to start my own business. I was introduced to Manon by a number of colleagues in the industry and she told me I would be a great candidate for the group. Like many others, I was looking to connect with other women in the field who could offer me guidance and support in my journey.
What do you value about being a Cloud Girl?
The Cloud Girls have been a great source of inspiration and support throughout my career. We have representation from incredible companies and the vibe is never competitive because we’re all committed to supporting each other and the next generation of women in tech.
How did you find a career in tech? Did you choose it, or did you end up here and how?
I entered the tech field purely by accident. I was an Executive Assistant looking for a new job and was approached by the CEO of a telecom company that had recently acquired a data center and had just launched their cloud computing division. At the time, I hoped to grow into a marketing role, which I did. I spearheaded the rebrand of the VoIP line and was tasked with coordinating the computing rebrand. By taking on these challenges, I was really forced to step outside of my comfort zone and learn new things. My job ultimately led to my consulting career. I was a partner/reseller for a number of tech services and in 2018, I attended a privacy workshop with the hopes of networking with my target client base. Instead, I became so intrigued by the subject matter that I spent all of my time learning about privacy and pondering the tie-ins to cybersecurity. I ultimately pivoted to cybersecurity and left my consulting career to become a compliance advisor and practitioner. Now, I’m expanding on that journey by pursuing my degree in cybersecurity.
How do you avoid being complacent in your role?
Cybersecurity is a field that changes every day. To be honest, it’s incredibly difficult to keep up with the latest tech, incidents, legislation, and chatter. I’ve found that by selecting the domain areas that really energize me, I’m better able to stay in touch with the landscape. For areas I have known deficiencies, I have a collection of resources I can use for additional information. Sometimes it’s trusted online sources and sometimes it’s my professional network.
What one piece of advice would you share with young women to encourage them to take a seat at the table?
Don’t be afraid to mess up. I have seen so many girls and women refuse challenges because they hold themselves to a standard of always performing well. If I hadn’t failed many of the challenges presented to me, I wouldn’t have the career I have now. It takes stepping out of your comfort zone and falling down just to get back up a few times to really find your path in life and work.
Which superpower would you like to have? Why?
I would freeze time! There are so many things I want to do and learn and never enough hours in the day.
What was the best book you read this year and why?
Rising Strong by Brene Brown. The last few years have been so turbulent for everyone, and I think Brown does an excellent job of conveying that feeling of “belonging everywhere and nowhere.” It’s one of the few books I’ve read that made me feel better about being in my own skin, living a life that I can admire, and being an ally for others.
In a tech-driven world, the security industry is still facing a talent shortage, and finding skilled candidates to fill any of the thousands of open positions available is one of the greatest challenges facing hiring managers.
To close the skills gap, organizations are focusing not only on finding new talent, but also on upskilling their security teams through courses offered by training providers or relevant industry certifications.
But what are organizations looking for? Which combination of soft and hard skills is the most sought after in 2021?
Top in-demand cybersecurity skillsets
The most in-demand skillsets for security professionals are listed here in no particular order. These are what organizations are most likely looking for when choosing the right person to safeguard their systems, networks, data, programs and digital assets.
1. IT and networking skills
Being able to analyze and resolve high-level security issues on a network requires solid technical skills. This includes system administration and networking skills, as well as understanding how to adopt security controls to protect digital assets from cyber threats.
Other skills include assessing the security of wired and wireless networks and implementing the latest security best practices in troubleshooting, maintaining and updating information systems.
Building a foundation of technical skills is important for many types of cybersecurity careers. Common entry-level certifications focused on networking and security basics include:
2. Analytical skills
Analysis is an essential skill for security professionals tasked with examining computer systems to foresee problems, assess risks and consider solutions to prevent, detect and respond to cyberattacks. This requires not only technical proficiency in using security tools to identify complex cyberthreats, but also soft skills, such as problem-solving, critical thinking and the ability to communicate and persuade management to adopt stricter safety protocols.
Analysts can take on different roles, such as cybersecurity analyst, information security analyst, computer systems analyst and malware analyst.
Certifications suited to technically and analytically minded professionals include:
3. Threat intelligence skills
Security professionals need to evaluate threats and their associated risks to a system and organization. Most companies have many tools in place to identify threats, but these are useless without professionals who can properly analyze, rank and mitigate the threats discovered.
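The article doesn't prescribe a particular method, but ranking discovered threats is commonly done with a simple likelihood-times-impact score. The sketch below is illustrative only (the threat names and 1–5 scales are assumptions, not from the article), showing how a team might triage a list of threats by risk:

```python
# Illustrative sketch: triage threats with a simple likelihood x impact score.
# The scales and example threats below are hypothetical.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        # A coarse but widely used risk heuristic.
        return self.likelihood * self.impact

def rank_threats(threats):
    """Return threats ordered from highest to lowest risk score."""
    return sorted(threats, key=lambda t: t.risk_score, reverse=True)

threats = [
    Threat("Phishing campaign", likelihood=5, impact=3),
    Threat("Ransomware on file servers", likelihood=3, impact=5),
    Threat("Lost unencrypted laptop", likelihood=2, impact=4),
]

for t in rank_threats(threats):
    print(f"{t.name}: risk {t.risk_score}")
```

In practice the scoring model would be calibrated to the organization (and frameworks such as NIST SP 800-30 use richer likelihood/impact scales), but the ranking step itself stays this simple.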
Popular certifications related to threat intelligence include:
4. Incident response skills
Quickly responding to an incident is key to minimizing the damage to an organization. But it’s also important to investigate the situation thoroughly and provide recommendations to address loopholes in an organization’s security posture. Other skills include the ability to create an effective incident response plan (IRP) to reduce the risk of IT service downtime when an incident occurs.
Popular learning paths and certifications related to incident response include:
5. Auditing skills
IT auditors conduct system and security audits at organizations so that vulnerabilities and flaws within them are found, documented, tested and resolved. Auditing can uncover vulnerabilities introduced into the organization by people, technology or processes, and whether there are risks or other complications associated with them.
Possessing auditing skills means not only having knowledge of basic system infrastructure, data analytics and risk management, but also having exceptional interpersonal and communication skills to effectively present findings to technical and non-technical personnel.
For those considering a career as an IT/IS auditor, a few certifications and career paths are available, including:
6. Penetration testing skills
Using exploitation techniques for testing purposes is a sought-after cybersecurity skill. Pentesters generally have hands-on skills and a passion for breaking things. Their discoveries help organizations improve digital security measures and resolve security vulnerabilities and weaknesses. They do exactly what a malicious hacker would do when attempting to break into a system — with permission, of course.
7. Digital forensics skills
Forensic investigations are an important part of incident response. Forensic practitioners use various tools to recover deleted, damaged or otherwise manipulated data from a range of devices, such as computers, tablets, phones and flash drives. Digital forensics professionals require sound investigative practices, strong data interpretation and effective presentation skills to produce evidence in a court of law.
8. Governance, risk management and compliance skills
Effective governance, risk management and compliance (GRC) is critical to business operations. GRC professionals are asked to be able to develop and implement strategies and solutions that are both aligned with business objectives and consistent with industry regulations (HIPAA, CCPA, GDPR, ISO 27000 series, NIST CSF and NIST RMF).
Related certifications and training for GRC professionals include:
9. Cloud security skills
Most organizations use cloud services — be it software as a service (SaaS), platform as a service (PaaS) or infrastructure as a service (IaaS) — so cybersecurity professionals who can deploy, configure and manage a virtualized environment and its security are in demand.
You’ve just been hired to lead the security program of a prominent multinational organization. You’re provided a seasoned team and budget, but you can’t help looking around and asking yourself, “How will I possibly protect every asset of this company, every day, against every threat, globally?” After all, this is the expectation of most organizations, their customers and shareholders, as well as regulators and lawmakers. In my experience, one of the top challenges security leaders face is trying to optimize a modest security budget to protect a highly complex and ever-expanding organizational attack surface. In fact, Accenture found that 69% of security professionals say staying ahead of attackers is a constant battle and the cost is unsustainable. For most, this challenge is extremely discouraging. However, success is not necessarily promised to those with resources – it’s more about how resourceful you can be.
As organizations worldwide digitally transform at a breakneck pace, the stakes are increasing for cybersecurity programs. Cyberattacks no longer just take down websites and internal email. They can disrupt the availability of every revenue-generating digital business process, threatening the very existence of many organizations. With this heightened risk, organizations must shift from a prevention-first mindset to one that balances aggressive prevention measures with a keen focus on enabling efficient consequence management. This shouldn’t be read as a response-only strategy, but it does mean:
Designing business processes to minimize single points of failure and reduce sensitivity to technology and data latency, recognizing that technology and data risk is extremely high in today’s environment.
Focusing asset protection programs disproportionately on the assets that underpin the most critical business processes or present the greatest risk.
Architecting technology to anticipate and recover from persistent, sophisticated attacks, as the “zero trust” approach suggests.
Establishing an organizational culture that acknowledges, anticipates, accepts and thrives in a pervasive threat environment.
Most cybersecurity leaders today only focus on, or are limited to focusing on, the third of these four items. Many are aggressively pursuing zero trust related modernization programs to increase the technology resilience of their organization’s systems and networks. However, the other three strategic imperatives are not achieved due to a lack of organizational knowledge, access, influence or governance.
The same can be said for physical security leaders, who likely do their best to focus on the second item, but may not understand the interdependency between an organization’s physical and digital assets. Unless a building is labeled as a data center, they may be unlikely to protect those physical assets that are most critical to their organization’s digital operations.
All security programs, both digital and physical, struggle to achieve the fourth item, limited by their lack of business access and influence. So, how do security organizations move from being security-centric to business-centric? The journey starts by taking a converged approach.
Implementing a converged security organization is perhaps one of the most resourceful and beneficial business decisions an organization can make when seeking to enhance security risk management. In this era of heightened consequences and sophisticated security threats, the need for integration between siloed security and risk management teams is imperative. The need for collaboration between those two teams and the business is equally imperative.
In my role as the Chief Security Officer of Dell Technologies, I oversee a converged organization with responsibility for physical security, cybersecurity, product security, privacy and enterprise resiliency programs, including business continuity, disaster recovery and crisis management. As discussed in a recent article, organizations that treat different aspects of security – such as physical and cybersecurity – as separate endeavors, can unintentionally undermine one area and in turn, weaken both areas. With a converged organization, the goal is to bring those once-separate entities together in a more impactful manner. I’ve seen convergence lead to greater effectiveness in corporate risk management practices. But, the benefits don’t stop there. It also increases financial and operational efficiency, improves stakeholder communications and strengthens customer trust.
Over the course of this series, I will walk you through how security, privacy and resiliency teams with seemingly different capabilities and goals can work together to advance one another’s priorities, all while marching towards one common goal – greater organizational outcomes. First up, let’s discuss the benefits gained from converging enterprise resiliency and security programs.
The road less traveled – benefits of converging resiliency and security
While I’ve observed an increase in organizations merging cybersecurity and physical security programs, I’ve seen fewer organizations bring resiliency into the mix, despite it being potentially more important. In fact, an ASIS study found that only 19% of organizations converged cybersecurity, physical security and business continuity into a single department.
In my experience, converging resiliency programs with all security programs enables organizations to consistently prepare for and respond to any security incident or crisis – natural disaster, global pandemic or cyberattack – with a high degree of resiliency. More importantly, converging these programs empowers security organizations to achieve the strategic imperatives mentioned earlier.
Now, let’s look at some of the more specific benefits:
Business continuity programs help prioritize security resources
As discussed earlier, one of the main challenges for security leaders is trying to find resourceful ways to adequately secure the breadth of a company’s assets, often with a less-than-ideal budget that limits implementing leading security practices across every asset. By converging business continuity, a core component of a resiliency program, with cybersecurity and physical security programs, security leaders can identify the most critical business processes and the digital and physical assets that underpin them. This in turn provides clear priorities for security focus and investment.
Non-converged security organizations generally prioritize their focus through the lens of regulatory and litigation risk, rather than having a deep understanding of business operational risk and its ties to revenue generation. For a physical security leader, this may look like prioritizing physical security resources in countries that have stronger regulatory oversight and more stringent fine structures, or those that contain the most employees. For a cybersecurity leader, it may mean focusing on databases that contain the most records of personal information, a costly data element to lose. While these approaches are not wrong, they are incomplete. In fact, the most critical business assets don’t often look like those most commonly prioritized by security. It requires a business lens to find the assets that the business depends upon to thrive, rather than focusing on the assets that might lead to a lawsuit if left unprotected. It means thinking about business risk more holistically.
Business continuity planners have perfected the art of applying a business lens to explore complex, interdependent business processes, some of which even sit with third parties. When organizations don’t plan for continuity well, it isn’t until an incident strikes that they discover most of the company’s revenue depended on an overlooked single point of failure.
However, business continuity alone is typically only looking for issues of availability. By converging resiliency and security programs, business impact assessments and security reviews can merge, resulting in more holistic assessments that consider both business and security risk across the full spectrum of availability, confidentiality and integrity issues. As a further sweetener, business stakeholders can have a single conversation with the converged risk program, reducing distractions that pull them from their primary business focus.
By integrating these two programs, converged security organizations can ensure their priorities are closely aligned with the business’ priorities. Whether it be digital assets, buildings or people, an organization’s most critical assets are clearly identified and traced to critical business processes through robust business continuity planning, then secured. Tying these programs together enables security leaders to protect what matters most, the most, which is the most important benefit of converging security and resiliency programs.
Security makes business continuity programs smarter
For the modern security professional, the only thing better than spotting a difficult-to-find critical business asset in need of protection is for a business to improve its processes and reduce the number of assets needing protection in the first place. By embedding security context into the continuity planning process, business continuity programs become smarter. With this knowledge, converged organizations can more effectively propose process engineering opportunities that optimize security budgets and reduce organizational risk. This is particularly true where the resiliency team has deeper access and insights to the supported organization than the security team.
Typically, business continuity planners are introduced to business processes and underlying assets only after they are in place, which means planners discover existing resiliency risks. Contrast that with modern security programs embedded in business and digital transformation projects from the beginning. By merging security and business continuity programs, the value proposition shifts from “smart discovery” of business process reengineering opportunities to one of resilient and secure business process engineering from the initial design point, helping organizations get it right the first time.
This type of value can extend from the most tactical processes to more strategic business initiatives, such as launching a new design center overseas. Converged security organizations can share a holistic, converged risk picture to inform business decision making. A typical converged risk assessment for such a project may consider historical storm patterns, geopolitical instability, national economic espionage, domestic terrorism, labor risk and so on. This holistic view results in better risk decisions and better business outcomes.
Security and crisis management go together like peanut butter and jelly
Crisis management is another core capability of resiliency programs. The benefit of converging crisis management and security programs is twofold. First, security is often the cause of the crisis. Historically, organizational crises would be a broad mix of mismanagement, natural, political, brand, labor and other issues. In the last year alone, the world has seen a dramatic rise in cyberattacks.
Second, this is the area where the culture of the two organizations is most closely aligned, allowing for low-friction integration and improvement. Crisis management professionals are accustomed to preparing for and managing through low-likelihood, high-impact events and facilitating critical decisions quickly, with imperfect information. If you ask a security leader what the motion of their organization looks like, you will likely get an identical answer. Leaders can unify and augment these skillsets and capabilities by bringing crisis management and security programs together. And, this is becoming more important in a world where consequence management – how capably a company responds when things go wrong – can be the difference between a glancing blow and a knockout.
Disaster recovery programs thrive when paired with security
Disaster recovery teams focus on identifying critical data and technology, and ensuring it is architected and tested to handle common continuity disruptions. In a mature resiliency program, this means close relationships between continuity planners and application owners. Often, however, resiliency programs struggle to gain deep access and influence within technology organizations, or the disaster recovery technology-centric arm of the program is challenged to integrate with the more business-centric continuity planning arm. A converged resiliency and security program eases these challenges.
Disaster recovery programs often sit within the technology organizations themselves, and in those cases, technology integration is not a challenge. However, these programs can sometimes struggle to maintain close access to the business organizations they support. In these cases, converging resiliency and physical security programs enables teams to leverage the strong business relationships and closer business access that physical security programs often have. By integrating these programs, physical security teams can create the inroads needed so disaster recovery programs can deliver the most value in a business-connected manner.
Conversely, for disaster recovery programs that sit within business or resiliency teams, they can often struggle to gain traction with an organization’s technical owners. In these cases, converging disaster recovery with a cybersecurity program can be a game changer. Cybersecurity core programs focus on application, database and system security, and have an existing engagement model with those the disaster recovery teams need to influence. By integrating with cybersecurity programs, disaster recovery teams can leverage existing processes and organizational relationships to accelerate their impact. The integration of these programs also provides a more efficient unified engagement model for the technology asset owners, creating overall efficiency for the organization.
Finally, the cause of disaster recovery events is increasingly cybersecurity related. Disaster recovery teams must adjust their architectures and programs to account for ransomware, destructive malware attacks and other evolving threats. The expertise needed to do this well rests with cybersecurity organizations who, once converged, are well positioned to help with this journey.
Security brings digital expertise to resiliency programs
Consider this: When a hurricane strikes, the location and severity of the storm’s eye depends on the time of day, the topography and numerous meteorological factors. It doesn’t target you specifically. Organizations are informed of the hurricane’s arrival days in advance. And, the organization is not the only victim of the hurricane, so external support is mobilized and resources are provided. Given all these factors, organizations infrequently experience the most severe possible outcomes. Now, consider a typical cyber crisis: When a ransomware attack strikes, it is without warning, usually targeting and impacting the most critical business assets and is designed to hit at the most inopportune time. Moreover, the victim is often blamed, which means outside help is scarce. Of course, organizations should continue planning for hurricanes, earthquakes, pandemics and other natural disasters, but the evolution of digital crises makes the resiliency threat landscape more complex. The results of these troubling trends: Cybercrime will have cost the world $6 trillion by the end of this year, up from $3 trillion in 2015. Natural disasters globally cost $84 billion in 2015.
Business continuity professionals have thrived for decades by helping their organizations predict and prepare for natural disasters and physical security incidents. To date, the best practice to prepare resilient data centers is to evaluate redundant electrical grid availability, historical weather patterns, earthquake trends and, most importantly, to confirm that the backup data center doesn’t reside within a certain physical distance of the primary data center. Cyber threats have added new challenges to this equation, as even two ideally positioned, geographically distanced, modern data centers often rely on the same underlying cyber systems and networks. It’s not uncommon to find ransomware attacks, which travel at the speed of light and aren’t bound by physical distance, devastating organizations when both primary and backup data centers are encrypted for ransom or, worse, deleted by destructive malware. This is only one example that highlights the new resiliency risks faced by the world’s recent dramatic increase in digital dependency and cyber threats. By converging cybersecurity and resiliency programs, organizations are better positioned to contend with this challenging new reality.
Algorithms are the heartbeat of applications, but they may not be perceived as entirely benign by their intended beneficiaries.
Most educated people know that an algorithm is simply any stepwise computational procedure. Most computer programs are algorithms of one sort or another. Embedded in operational applications, algorithms make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings — encroaching on customer privacy, refusing them a home loan, or perhaps targeting them with a barrage of objectionable solicitations — stakeholders’ understandable reaction may be to swat back in anger, and possibly with legal action.
Regulatory mandates are starting to require algorithm auditing
Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes, especially those powered by artificial intelligence (AI), deep learning (DL), and machine learning (ML).
Many of these concerns revolve around the possibility that algorithmic processes can unwittingly inflict racial biases, privacy encroachments, and job-killing automations on society at large, or on vulnerable segments thereof. Surprisingly, some leading tech industry execs even regard algorithmic processes as a potential existential threat to humanity. Other observers see ample potential for algorithmic outcomes to grow increasingly absurd and counterproductive.
Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity — coupled with their daunting size, complexity and obscurity — presents the human race with a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions?
Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation may force your hand in this regard. GDPR prohibits any “automated individual decision-making” that “significantly affects” EU citizens.
Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data — including behavior, location, movements, health, interests, preferences, economic status, and so on — into automated decisions. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision. And that requires that an audit log be kept for review and that auditing tools support rollup of algorithmic decision factors.
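To make that requirement concrete, here is a minimal sketch of what a per-decision audit record might capture. The `DecisionAuditRecord` structure and its fields are invented for illustration; GDPR does not prescribe any particular schema:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    """Hypothetical audit-log entry for one automated decision."""
    model_id: str
    model_version: str
    decision: str
    # The input variables that factored into the decision, so a
    # reviewer can later roll up the specific factors behind it.
    inputs: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        # Stable key ordering keeps log entries diff-friendly.
        return json.dumps(asdict(self), sort_keys=True)

record = DecisionAuditRecord(
    model_id="loan-approval",
    model_version="2.3.1",
    decision="declined",
    inputs={"income_band": "B", "region": "EU", "credit_score": 640},
)
print(record.to_json())
```

The point of persisting the inputs alongside the outcome is that a later reviewer can reconstruct which factors drove a particular decision without re-running the model.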
Considering how influential GDPR has been on other privacy-focused regulatory initiatives around the world, it wouldn’t be surprising to see laws and regulations mandate these sorts of auditing requirements placed on businesses operating in most industrialized nations before long.
Anticipating this trend by a decade, the US Federal Reserve’s SR 11-7 guidance on model risk management, issued in 2011, mandates that banking organizations conduct audits of ML and other statistical models in order to be alert to the possibility of financial loss due to algorithmic decisions. It also spells out the key aspects of an effective model risk management framework, including robust model development, implementation, and use; effective model validation; and sound governance, policies, and controls.
Even if your organization is not responding to any specific legal or regulatory requirement to root out evidence of bias, unfairness, or discrimination in its algorithms, doing so may be prudent from a public relations standpoint. If nothing else, it would signal enterprise commitment to ethical guidance that encompasses application development and machine learning DevOps practices.
But algorithms can be fearsomely complex entities to audit
CIOs need to get ahead of this trend by establishing internal practices focused on algorithm auditing, accounting, and transparency. Organizations in every industry should be prepared to respond to growing demands that they audit the complete set of business rules and AI/DL/ML models that their developers have encoded into any processes that impact customers, employees, and other stakeholders.
Of course, that can be a tall order to fill. For example, GDPR’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity — coupled with their daunting size, complexity, and obscurity — presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms — be they machine learning, convolutional neural networks, or whatever — are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.
Most organizations — even the likes of Amazon, Google, and Facebook — might find it difficult to keep track of all the variables encoded into their algorithmic business processes. What could prove even trickier is the requirement that they roll up these audits into plain-English narratives that explain to a customer, regulator, or jury why a particular algorithmic process took a specific action under real-world circumstances. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding.
Throwing more algorithm experts at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. Explaining what goes on inside an algorithm is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. One can’t easily trace their precise path to a final answer.
Algorithmic auditing is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware fabrics.
Most of the people you’re trying to explain this stuff to may not know a machine-learning algorithm from a hole in the ground. More often than we’d like to believe, there will be no single human expert — or even (irony alert) algorithmic tool — that can frame a specific decision-automation narrative in simple, but not simplistic, English. Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made.
Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance — such as a legal proceeding, contractual dispute, or showstopping technical glitch — will compel impacted parties to revisit those automated decisions.
And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory.
Establishing a standard approach to algorithmic auditing
CIOs should recognize that they don’t need to go it alone on algorithm accounting. Enterprises should be able to call on independent third-party algorithm auditors. Auditors may be called on to review algorithms prior to deployment as part of the DevOps process, or post-deployment in response to unexpected legal, regulatory, and other challenges.
Some specialized consultancies offer algorithm auditing services to private and public sector clients. These include:
BNH.ai: This firm describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.” It provides enterprise-wide assessments of AI liabilities and model governance practices; AI incident detection and response; model- and project-specific risk certifications; and regulatory and compliance guidance. It also trains clients’ technical, legal and risk personnel to perform algorithm audits.
O’Neil Risk Consulting and Algorithmic Auditing: ORCAA describes itself as a “consultancy that helps companies and organizations manage and audit algorithmic risks.” It works with clients to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. It helps clients institute “early warning systems” that flag when a problematic algorithm (ethical, legal, reputational, or otherwise) is in development or in production, so the matter can be escalated to the relevant parties for remediation. It serves as an expert witness to assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. It helps organizations develop strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools. It works with regulators to translate fairness laws and rules into specific standards for algorithm builders. And it trains client personnel on algorithm auditing.
Currently, there are few hard-and-fast standards in algorithm auditing. What gets included in an audit and how the auditing process is conducted are more or less defined by every enterprise that undertakes it, or by the specific consultancy being engaged to conduct it. Looking ahead to possible future standards in algorithm auditing, Google Research and OpenAI teamed with a wide range of universities and research institutes last year to publish a research study that recommends third-party auditing of AI systems. The paper also recommends that enterprises:
Develop audit trail requirements for “safety-critical applications” of AI systems;
Conduct regular audits and risk assessments associated with the AI-based algorithmic systems that they develop and manage;
Institute bias and safety bounties to strengthen incentives and processes for auditing and remediating issues with AI systems;
Share audit logs and other information about incidents with AI systems through their collaborative processes with peers;
Share best practices and tools for algorithm auditing and risk assessment; and
Conduct research into the interpretability and transparency of AI systems to support more efficient and effective auditing and risk assessment.
Other recent AI industry initiatives relevant to standardization of algorithm auditing include:
Google published an internal audit framework that is designed to help enterprise engineering teams audit AI systems for privacy, bias, and other ethical issues before deploying them.
AI researchers from Google, Mozilla, and the University of Washington published a paper that outlines improved processes for auditing and data management to ensure that ethical principles are built into DevOps workflows that deploy AI/DL/ML algorithms into applications.
Cloud Girls is honored to have amazingly accomplished, professional women in tech as our members. We take every opportunity to showcase their expertise and accomplishments – promotions, speaking engagements, publications and more. Now, we are excited to shine a spotlight on one of our members each month.
Our Cloud Expert of the Month is Andrea Blubaugh.
Andrea has more than 15 years of experience facilitating the design, implementation and ongoing management of data center, cloud and WAN solutions. Her reputation for architecting solutions for organizations of all sizes and verticals – from Fortune 100 to SMBs – has earned her numerous awards and honors. With a specific focus on the mid-market to enterprise space, Andrea works closely with IT teams as a true client advocate, consistently meeting, and often exceeding, expectations. As a result, she maintains strong client and provider relationships spanning the length of her career.
When did you join Cloud Girls and why?
Wow, it’s been a long time! I believe it was 2014 or 2015 when I joined Cloud Girls. I had come to know Manon through work and was impressed by her and excited to join a group of women in the technology space.
What do you value about being a Cloud Girl?
Getting to know and develop friendships with the fellow Cloud Girls over the years has been a real joy. It’s been a great platform for learning on both a professional and personal level.
What advice would you give to your younger self at the start of your career?
I would reassure my younger self in her decisions and encourage her to keep taking risks. I would also tell her not to sweat the losses so much. They tend to fade pretty quickly.
What’s your favorite inspirational quote?
“Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do, so throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover.” –Mark Twain
What one piece of advice would you share with young women to encourage them to take a seat at the table?
I was very fortunate early on in my career to work for a startup whose leadership saw promise in my abilities that I didn’t yet see myself. I struggled with the decision to take a leadership role as I didn’t feel “ready” or that I had the right or enough experience. I received some good advice that I had to do what ultimately felt right to me, but that turning down an opportunity based on a fear of failure wouldn’t ensure there would be another one when I felt the time was right. My advice is if you’re offered that seat, and you want that seat, take it.
What’s one item on your bucket list and why?
A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.
A cyber range is a valuable tool because it provides experience with using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description about using a particular tool or handling a certain scenario, a cyber range allows students to do it themselves.
What skills can you learn in a cyber range?
A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.
SIEM, IDS/IPS and firewall management
Deploying certain cybersecurity solutions — such as SIEM, IDS/IPS and a firewall — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.
A cyber range can walk through the steps of properly configuring the most common solutions. These include deployment locations, configuration settings and the rules and policies used to identify and block potentially malicious content.
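As a rough illustration of how such rules and policies compose, here is a toy first-match-wins rule evaluator with a default-deny posture. The rule set, addresses and ports are invented for the example; real firewalls offer far richer matching criteria:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source CIDR this rule matches
    dst_port: Optional[int]  # None matches any destination port

# Rules are evaluated top to bottom; the first match wins.
RULES = [
    Rule("deny",  "203.0.113.0/24", None),  # known-bad range, any port
    Rule("allow", "10.0.0.0/8",     443),   # internal clients, HTTPS only
]

def evaluate(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule.src)
                and rule.dst_port in (None, dst_port)):
            return rule.action
    return "deny"  # nothing matched: default-deny posture

print(evaluate("10.1.2.3", 443))     # allow
print(evaluate("203.0.113.9", 443))  # deny
```

The ordering matters: putting the deny rule first ensures the bad range is blocked even if it would also match a later allow rule, which is exactly the kind of configuration subtlety a cyber range exercise can demonstrate.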
Incident response
After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.
A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.
Operating system management: Linux and Windows
Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.
A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.
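A minimal sketch of what such a configuration audit might look like: collected settings are compared against a hardening baseline and any drift is flagged. The setting names and baseline values here are invented placeholders, not tied to any specific operating system:

```python
# Hypothetical hardening baseline: expected values for key settings.
BASELINE = {
    "PasswordRequired": "yes",
    "FirewallEnabled": "yes",
    "GuestAccount": "disabled",
}

def audit(settings: dict) -> list:
    """Return the names of settings that deviate from the baseline."""
    return sorted(name for name, expected in BASELINE.items()
                  if settings.get(name) != expected)

# Settings as collected from one host.
host = {
    "PasswordRequired": "yes",
    "FirewallEnabled": "no",        # drift: should be "yes"
    "GuestAccount": "disabled",
}
print(audit(host))  # ['FirewallEnabled']
```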
Endpoint controls and protection
As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.
Penetration testing
This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.
A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
Network design and management
Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.
A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.
Malware analysis
Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization’s incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.
Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
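The "searching for strings" step mentioned above can be sketched in a few lines. This toy version mimics the classic `strings` utility, run against an invented byte blob standing in for a malware sample:

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list:
    """Pull printable-ASCII runs out of a binary blob, the way the
    classic `strings` utility does during basic malware triage."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Invented "sample": opaque bytes with an embedded C2 URL and mutex name,
# the kind of indicators an analyst hunts for first.
sample = (b"\x00\x01MZ\x90\x00"
          b"http://c2.example.test/beacon\x00\xff"
          b"Global\\EvilMutex\x00")
for s in extract_strings(sample):
    print(s)
```

Runs shorter than `min_len` (like the `MZ` header fragment) are dropped, which is why a minimum length keeps the output readable on real binaries.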
Threat hunting
Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization’s environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.
A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.
Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
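As a toy illustration of interpreting such data, the sketch below flags hosts whose failed-login counts cross a simple threshold. The events and threshold are invented; a real hunt would baseline behavior per host rather than use a fixed cutoff:

```python
from collections import Counter

# Simplified auth-log events: (host, outcome). In practice these would
# come from SIEM exports, endpoint logs or similar sources.
events = [
    ("ws-01", "fail"), ("ws-01", "ok"),
    ("ws-02", "fail"), ("ws-02", "fail"), ("ws-02", "fail"),
    ("ws-02", "fail"), ("ws-02", "fail"),
    ("ws-03", "ok"),
]

def suspicious_hosts(events, threshold=5):
    """Hosts whose failed-login count meets the threshold. A single
    failure (ws-01) is a likely false positive; a burst (ws-02) is
    worth a closer look."""
    fails = Counter(host for host, outcome in events if outcome == "fail")
    return sorted(host for host, n in fails.items() if n >= threshold)

print(suspicious_hosts(events))  # ['ws-02']
```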
Computer forensics
Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics.
In the not-so-distant past, banking and healthcare industries were the main focus of security concerns as they were entrusted with guarding our most sensitive personal data. Over the past few years, security has become increasingly important for companies across all major industries. This is especially true since 2017 when the Economist reported that data has surpassed oil as the most valuable resource.
How do we respond to this increased focus on security? One option would be to simply increase the security standards being enforced. Unfortunately, it’s unlikely that this would create substantial improvements.
Instead, we should be talking about restructuring security policies. In this post, we’ll examine how security standards look today and 5 ways they can be dramatically improved with new approaches and tooling.
How Security Standards Look Today
Security standards affect all aspects of a business, from directly shaping development requirements to regulating how data is handled across the entire organization. Still, those security standards are generally enforced by an individual, usually an infosec or compliance officer.
There are many challenges that come with this approach, all rooted in 3 main flaws: 1) the gap between those building the technology and those responsible for enforcing security procedures within it, 2) the generic nature of infosec standards, and 3) the reactive, rather than proactive, way security standards handle issues.
We can greatly improve the security landscape by directly addressing these key issues:
1. Information Security and Compliance is Siloed
In large companies, the people implementing security protocols and those governing security compliance are on separate teams, and may even be separated by several levels of organizational hierarchy.
Those monitoring for security compliance and breaches are generally non-technical and do not work directly with the development team at all. A serious implication of this is that there is a logical disconnect between the enforcers of security standards and those building systems that must uphold them.
If developers and compliance professionals do not have a clear and open line of communication, it’s nearly impossible to optimize security standards, which brings us to the next key issue.
2. Security Standards are Too Generic
Research has shown that security standards as a whole are too generic and are upheld by common practice more than they are by validation of their effectiveness.
With no regard for development methodology, organizational resources or structure, or the specific data types being handled, there’s no promise that adhering to these standards will lead to the highest possible level of security.
Fortunately, addressing the issue of silos between dev and compliance teams is the first step for resolving this issue as well. Once the two teams are working together, they can more easily collaborate and improve security protocols specific to the organization.
3. Current Practices are Reactive, Rather Than Proactive
The existing gap between dev and security teams, along with the generic nature of security standards, prevents organizations from being truly proactive when it comes to security measures.
Bridging the gap between development and security empowers both sides to adopt a shift-left mentality, making decisions about and implementing security features earlier in the development process.
The first step is to work on creating secure-by-design architecture and planning security elements earlier in the development lifecycle. This is key in breaking down the silos that security standards created.
Gartner analyst John Collins claims cultural and organizational structures are the biggest roadblocks to the progression of security operations. Following that logic, in restructuring security practices, security should be wrapped around DevOps practices, not just thrown on top. This brings us to the introduction of DevSecOps.
DevSecOps – A New Way Forward
The emergence of DevSecOps is showing that generic top-to-bottom security standards may soon be less important than they are now.
First, what does it mean to say, “security should be wrapped around DevOps practices”? It means not just allowing, but encouraging, the expertise of SecOps engineers and compliance professionals to impact development tasks in a constantly changing security and threat landscape.
The Internet of Things (IoT) is transforming our homes, businesses and public spaces – mostly for the better – but without proper precautions IoT devices can be an attractive target for malicious actors and cyberattacks.
Security threats involving IoT devices often stem from the fact that many IoT devices have single-purpose designs and may lack broader capabilities to defend themselves in a hostile environment. For example, a doorbell, a toaster or a washing machine frequently does not contain as much storage, memory and processing capability as a typical laptop computer.
By some estimates, there will be more than 21 billion connected devices on the market by 2025, and the proliferation of this technology will only continue to impact our daily lives in a multitude of ways.
But as more connected products are invented and introduced for both business and consumer use, the security challenges related to these connected IoT devices continue to increase, in part due to a lack of consistent security controls. Even if the networks that the connected devices operate on are considered secure, IoT device security is still only as good as the security of the products themselves.
Because the IoT industry has predominantly lacked a globally recognized, repeatable standard for manufacturers, channel owners, regulators and other key parties to turn to, IoT device security continues to be a major challenge. It’s therefore especially important for companies to not only be aware of potential vulnerabilities, but also to take action to build more secure products – before they ever get into the hands of the end user.
Below are 10 design and development approaches/best practices that can help mitigate IoT security issues and ensure that IoT delivers on its promise to improve our lives.
10. Hiding live ports: The best practice for hiding live ports is to actually not hide them at all – and definitely not to use easy-to-peel-off plastic covers. Live debug ports such as USB and JTAG may provide a hacker access into the firmware of the device. If live debug ports are required, they should be disabled so that only authorized systems/users can re-enable them. However, if hiding them is required, it’s important to make it as difficult as possible for someone to access them – and to avoid plastic caps whenever possible.
9. Common/default passwords: Most people don’t change their passwords from the default, making it easy for hackers to gain access to devices. In the future, passwords may be replaced altogether, but for now, they should at least be unique, random and distinct for each consumer device. During setup, users should be prompted to change the password the device was shipped with to further bolster security.
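A sketch of generating a unique, random credential per device at provisioning time, using Python's `secrets` module; the length and alphabet here are arbitrary illustrative choices, not a standard:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def device_password(length: int = 16) -> str:
    """Generate a unique, random per-device credential at provisioning
    time, instead of shipping every unit with the same default."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each device rolling off the line gets its own credential,
# which would then be printed on that unit's label or packaging.
passwords = [device_password() for _ in range(3)]
print(passwords)
```

Using `secrets` rather than `random` matters here: the former draws from a cryptographically secure source, so credentials cannot be predicted from one another.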
8. Relying solely on network security: Introducing layers of security can be a great way to avoid compromised data. The security principle of defense in depth dictates that when multiple layers are in place, attacks are more effectively thwarted. While network security is helpful, if the device is solely reliant on this for communication, it can lead to further compromised information.
7. Sending without encryption: Avoid sending any information without encryption, because without it, communications between devices are simply not secure. Everything should be encrypted, with approved encryption algorithms, so that when information leaves the device and goes to the server, internet, or any other access point in a home, it is protected from unauthorized access and modification. For IoT devices communicating over wireless technologies, it is important to also encrypt application data within the network tunnel. Adding application security to the mix is highly recommended and preferred to help mitigate these issues.
6. Overriding security and certificate checks: Simply put – small, compact digital certificates are a proven way for IoT devices to trust each other and for servers to authenticate IoT devices. However, oftentimes, proper certificate validation at the IoT device is overridden, diluted or negated, nullifying the security provided by digital certificates. This can lead to undesired security consequences, such as man-in-the-middle attacks. Keep these checks as part of your security measures to ensure certificates are up to date, valid and issued by trusted authorities.
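In Python's standard `ssl` module, for example, the default context already performs both checks; the hardening task is simply to leave them enabled. The "insecure" context below is shown only as the anti-pattern to avoid:

```python
import ssl

# The library default already does the right thing: verify the
# certificate chain and check the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# The common anti-pattern that nullifies certificate security and
# enables man-in-the-middle attacks. Shown only as what NOT to ship:
insecure = ssl.create_default_context()
insecure.check_hostname = False        # disables hostname matching
insecure.verify_mode = ssl.CERT_NONE   # trusts any certificate
```

Code review and linting rules that flag `CERT_NONE` (or its equivalents in other TLS stacks) are a cheap way to keep these overrides out of production firmware.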
5. Public visibility: There is no need for a device to advertise unique information, such as (but not limited to) a serial number, that allows it to be identified over unsecured connections, whether Wi-Fi, Bluetooth or beacons. The best practice is to stay incognito and employ randomization techniques over the airwaves. The “less is more” approach is necessary to protect privacy and prevent tracking. However, when device-identifying information is needed for device discovery, registration and verification, it should be exchanged securely and only with authenticated and authorized devices. A local display may need to be made available for configuration; in that case, it is important to protect configuration access with secure unique passwords, tokens or other standardized authentication mechanisms.
4. Access to devices’ private keys: The security of digital certificates is only guaranteed when the private key is sufficiently protected from disclosure and unauthorized modification. This can be difficult to accomplish on IoT devices that lack specialized hardware for protecting sensitive information. However, low-cost secure elements are now available and can be embedded into IoT devices to protect sensitive keys that are injected into them at manufacturing time. Today’s technology also allows the size of the key to be reduced and compressed, so that devices can attest to their identity without revealing private information. Such private information should be kept in secure elements.
3. Blockchain for added security: Blockchain empowers IoT devices to defend themselves in hostile environments by making autonomous decisions with a high degree of confidence. Cryptographically signed transactions allow devices to determine the authenticity of transactions before acting on them. Using such transactions, IoT devices can also assert their ownership, i.e., to whom they belong. So, if a rogue entity attempts to take ownership of the device, the IoT device can reject the access attempt. In addition, the distributed data contained in a blockchain is cryptographically hashed and anonymized, providing “out-of-the-box” privacy for devices and the users who interact with them.
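The tamper-evidence idea behind this can be sketched with a minimal hash chain. This toy uses plain SHA-256 hashes rather than real signed transactions or a distributed ledger, so it only illustrates why altering an earlier record breaks every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Stable serialization so the same content always hashes the same.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "tx": transaction}
    block["hash"] = block_hash({"prev": prev, "tx": transaction})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every link. Tampering with any earlier transaction
    invalidates all subsequent blocks, which is what lets a device
    trust the ownership record."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(
                {"prev": b["prev"], "tx": b["tx"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"owner": "factory", "device": "sensor-7"})
append_block(chain, {"owner": "acme-corp", "device": "sensor-7"})
print(verify(chain))                   # True
chain[0]["tx"]["owner"] = "rogue"      # a rogue entity rewrites history
print(verify(chain))                   # False: tampering detected
```

A real deployment would additionally sign each transaction with the owner's private key, so that authenticity (not just integrity) can be checked.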
When secure facilities say “no devices allowed,” that’s not necessarily the case.
Exceptions are being granted for personal medical devices, health monitors and other operation-associated devices, especially in defense areas where human performance monitoring devices can be core to the mission.
The problem: most of these devices have radio frequency (RF) communication interfaces such as Bluetooth, Bluetooth Low Energy (BLE), Wi-Fi, Cellular, IoT or proprietary protocols that can make them vulnerable to RF attacks, which by their nature are “remote attacks” from beyond the building’s physical perimeters.
Questions are now being asked about the ability to allow some devices in some areas, some of the time, resulting in the need for stratified policy and sophisticated technology which can accurately distinguish between approved and unapproved electronic devices in secure areas.
The invisible dangers of RF devices
RF-enabled devices are prevalent in the enterprise. According to Ericsson’s Internet of Things Forecast, there are 22 billion connected devices, and 15 billion of them have radios. Furthermore, as the avalanche of IoT devices grows, cyber threats will become increasingly common.
Wireless devices in the enterprise today include light bulbs, headsets, building control systems, and HVAC systems. Wearables are increasingly vulnerable and risky. Wearables with data-exfiltrating capabilities include Fitbits, smartwatches and other personal devices with embedded radios and a variety of audio/video capture, pairing and transmission capabilities.
Understanding the current policy device landscape
The RF environment has become increasingly complicated over the past five years because more and more devices have RF interfaces that can’t be disabled. Secure facilities with very strict RF device policies are turning the “No Device Policy” into a more stratified approach: a “Some Device Policy.” Examples of a stratified policy include whitelisting devices with RF interfaces, such as medical wearables, Fitbits and vending machines. Some companies are geofencing certain areas in facilities, such as Sensitive Compartmented Information Facilities (SCIFs) in defense facilities.
Current policies are outdated
While some government and commercial buildings have secure areas where no cell phones or other RF-emitting devices are allowed, detecting and locating radio-enabled devices is largely based on the honor system or one-time scans. Bad actors do not follow the honor system, and one-time scans are just that: they happen once and cannot monitor 24×7.
Benefits of implementing RF device security policy
In a world where security teams need to detect and locate unauthorized cellular, Bluetooth, BLE, Wi-Fi and IoT devices, there are solutions available and clear benefits to enforcing device security policies.