Building a risk management program

In today’s world, every organization needs some form of vulnerability assessment and risk management program. While this can seem daunting, by focusing on a few key concepts an organization of any size can develop a strong security posture and a firm grasp of its risk profile. In this article, we’ll discuss how to build the technical foundation for a comprehensive security program and, crucially, the tools and processes needed to develop that foundation into a mature vulnerability assessment and risk management program.

 

Build the Foundation

It’s impossible to implement effective security, let alone manage risk, without a clear understanding of the environment. That means, essentially, taking an inventory of hosts, applications, resources, and users.

In the current computing environment, that inventory is apt to include assets that reside in the cloud as well as those hosted in an organization’s own data center. Organizations have little control over the devices of remote employees who access data on a bring-your-own-device (BYOD) basis, which adds another layer of risk. There are also the software-as-a-service (SaaS) applications the organization uses. It’s essential to know what data is kept where. With SaaS in particular, teams must have a clear contractual understanding of who is responsible for the security of the data, so they can allocate resources accordingly.

 

Manage the puzzle

Once the environment is scoped, managing it relies on three main components: visibility, control, and timely maintenance. 

Whether the issue is software vulnerabilities, vulnerable configurations, obsolete packages, or a range of other problems, a vulnerability scanner will show the security operations team what’s at risk and let them prioritize their response. That said, scanners, external or internal, are not the only option. At the high end, a penetration testing team can probe the environment to a depth that vulnerability scanners can’t match. At the low end, establishing a process to monitor public vulnerability feeds and verify whether newly disclosed issues affect the environment can provide a baseline. It may not give as deep a picture as scanning or penetration testing, but the modest cost in SecOps time is often well worth it.
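
For teams starting with feed monitoring, even a small script can provide that baseline. The sketch below is a minimal example, assuming the public NVD 2.0 REST API and a hypothetical keyword-style software inventory; a production version would match on CPE identifiers rather than keywords.

```python
# Minimal feed-monitoring sketch: query NIST's NVD 2.0 REST API for CVEs
# published in a window and match them against a crude keyword inventory.
# The inventory list and keyword matching are illustrative assumptions.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
inventory = ["openssl", "log4j", "apache http server"]  # hypothetical

for product in inventory:
    resp = requests.get(
        NVD_URL,
        params={
            "keywordSearch": product,
            "pubStartDate": "2022-01-01T00:00:00.000",
            "pubEndDate": "2022-03-31T23:59:59.999",
            "resultsPerPage": 20,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"), ""
        )
        print(f'{product}: {cve["id"]} - {summary[:90]}')
```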

Protecting users is a major point and doesn’t always get the attention it deserves. Ultimately, that starts with user education and establishing a culture that reinforces security. Users are often the threat surface that presents the greatest risk, but with proper education and attitude they can become an effective layer of a defense-in-depth strategy.

Another important step in protecting users is adding multi-factor authentication (MFA). In particular, MFA methods that require a physical or virtual token tend to be more secure than those that rely on text messaging or email. While MFA adds a minor annoyance to a user’s login, it can drastically reduce the threat posed by compromised accounts and lower the organization’s overall risk profile.

User endpoints are another area of concern. While the default endpoint protection included in the main desktop operating systems (Windows and macOS) is quite effective, it is also the defense every malware writer in the world tests against. That makes investment in an additional layer of endpoint protection worthwhile.

The last major piece here is a patch management program. This requires underlying processes that manage not only the patches but also the assets themselves. Fortunately, multiple tools are available to enhance and automate the process, and a regular patch cycle can get vulnerabilities fixed before exploits are even developed for them.

Ideally, the patch management process includes a change management system that’s able to smoothly accommodate emergency situations where a security hotfix must go in outside the normal window.

Pulling it all together

With the foundation laid, the final step involves communication. Simply assessing risk is not useful if there is no reliable way to organize people to act on it.

Bridging the information security teams, who are responsible for recognizing, analyzing, and mitigating threats to the organization, and the information technology teams, who are responsible for maintaining the organization’s infrastructure, is vital. Whether an organization achieves this with a process or a tool is up to it. In either case, communication is essential, along with an ability to react across teams. This applies to non-technical teams as well — if folks are receiving phishing emails, security operations should know.

These mechanisms need to be in place from the executive offices down to the sales or production floor, as reducing risk really is everyone’s responsibility. Moreover, the asset and patch management system needs a mechanism to prioritize patches based on business risk. Unless the IT team has the resources to deploy every single patch that comes their way, they will have to prioritize, and that prioritization needs to be based on the threat to business rather than arbitrary severity scores.

An Investment

There is no “one size fits all” solution for risk assessment and management. For example, for a restaurant that doesn’t accept reservations or orders online, a relatively insecure website doesn’t present much business risk. While it may be technically vulnerable, the business is not at risk of losing valuable data…

 

Building Data Literacy: What CDOs Need to Know

Data literacy is the ability to read, work with, analyze, and communicate with data.

As businesses have become increasingly digital, all business functions are generating valuable data that can guide their decisions and optimize their performance.

Employees now have data available to augment their experience and intuition with analytical insights. Leading organizations are using this data to answer their every question — including questions they didn’t know they had.

The chief data officer’s (CDO) role in data literacy is to be the organization’s chief evangelist and educator, ensuring that literacy efforts succeed.

Standardizing basic data training across the organization and creating a center of excellence for self-service in all departments can help ensure everyone can benefit from data literacy.

“As the leader of data and analytics, CDOs can no longer afford to work exclusively with data scientists in siloed environments,” explains Paul Barth, Qlik’s global head of data literacy. “They must now work to promote a culture of data literacy in which every employee is able to use data to the benefit of their role and of their employer.”

Cultural Mindset on Data

This culture starts with a change in mindset: It’s imperative that every employee, from new hires fresh out of college all the way to the C-suite, can understand the value of data.

At the top, CDOs can make the strongest case for improving data literacy by highlighting the benefits of becoming a data-driven organization.

For example, McKinsey found that, among high-performing businesses, data and analytics initiatives contributed at least 20% to earnings before interest and taxes (EBIT), and according to Gartner, enterprises will fail to identify potential business opportunities without data-literate employees across the organization.

Abe Gong, CEO and co-founder of Superconductive, adds that for an organization to be data literate, there needs to be a critical mass of data-literate people on the team.

“A CDO’s role is to build a nervous system with the right process and technical infrastructure to support a shared understanding of data and its impact across the organization,” he says. “They promote data literacy at the individual level as well as building that organizational nervous system of policies, processes, and tools.”

Data Literacy: Start with Specific Examples

From his perspective, the way to build data literacy is not by building some giant end-to-end system or attempting a massive overhaul, but by coming up with specific, discrete examples that really work.

“I think you start small with doable challenges and a small number of stakeholders on short timelines,” he says. “You get those to work, then iterate and add complexity.”

In his view, data-literate organizations simply think better together and can draw conclusions and respond to new information in a way that they couldn’t if they didn’t understand how data works.

“As businesses prepare for the future of work and the advancements that automation will bring, they need employees who are capable of leading with data, not guesswork,” Barth notes. “When the C-suite understands this, they will be eager to make data literacy a top priority.”

He says CDOs need to take the lead and properly educate staff about why they should appreciate, pay attention to, and work with data.

“Data literacy training can greatly help in this regard and can be used to highlight the various tools and technologies employees need to ensure they can make the most of their data,” he adds.

As CDOs work to break down the data barriers and limitations that are present in so many firms, they can empower more employees with the necessary skills to advance their organization’s data strategy.

“And as employees become more data literate, they will be better positioned to help their employers accelerate future growth,” Barth says.

Formalizing Data Initiatives and Strategies

Data literacy should start with a formal conversation between people charged with leading data initiatives and strategies within the organization.

The CDO or another data leader should craft a thoughtful communication plan that explains why the team needs to become data literate and why a data literacy program is being put into place.

“While surveys suggest few are confident in their data literacy skills, I would advise against relying on preconceptions or assumptions about team members’ comfort in working with data,” Barth says. “There are a variety of free assessment tools in the market, such as The Data Literacy Project, to jumpstart this process.”

However, training is only the beginning of what businesses need to build a data-literate culture: Every decision should be supported with data and analysis, and leaders should be prepared to model data-driven decision-making in meetings and communications.

“The only playbook that I have seen work for an incoming CDO is to do a fast assessment of where the opportunities are and then look for ways to create immediate value,” Gong adds. “If you can create some quick but meaningful wins, you can earn the trust you need to do deeper work.”

For opportunities, CDOs should look for places where the organization can make better use of its data on a short timeline — usually weeks, not months.

“Once you’ve built a library of those wins and trust in your leadership, you can have a conversation about infrastructure — both technical and cultural,” he says. “Data literacy is part of the cultural infrastructure you need.”

 

Top 9 effective vulnerability management tips and tricks

The world is currently in frenetic flux. With rising geopolitical tensions, an ever-present rise in cybercrime, and continuous technological evolution, it can be difficult for security teams to keep a clear bearing on what’s key to keeping their organization secure.

With the advent of “Log4Shell,” the Log4j vulnerability, sound vulnerability management practices have jumped to the top of the list of skills needed to maintain an ideal state of cybersecurity. The full impact of Log4j is expected to be realized throughout 2022.

As of 2021, missing security updates are a top-three security concern for organizations of all sizes — approximately one in five network-level vulnerabilities are associated with unpatched software.

Not only are attacks on the rise, but their financial impacts are as well. According to Cybersecurity Ventures, costs related to cybercrime are expected to balloon 15% year over year into 2025, totaling $11 trillion.

Vulnerability management best practices

Whether you’re performing vulnerability management for the first time or revisiting your current practices in search of new perspectives or process efficiencies, there are some useful, proven strategies for reducing vulnerabilities.

Here are the top nine (We decided to just stop there!) tips and tricks for effective vulnerability management at your organization.

1. Vulnerability remediation is a long game

Extreme patience is required when it comes to vulnerability remediation. Your initial review of vulnerability counts, categories, and recommended remediations may instill a false sense of confidence: you may expect a large reduction after only a few meetings and a few patching activities. Reality will unfold far differently.

Consider these factors as you begin initial vulnerability management efforts:

  • Take small steps: The initial goal should be incremental progress in reducing total vulnerabilities by severity, not an unrealistic expectation of total elimination. Ideally, as the months and quarters roll on, the technology estate accumulates new vulnerabilities at a slightly lower pace than they are remediated.
  • Patience is a virtue: Adopting a patient mindset is unequivocally necessary to avoid mental defeat, burnout, and complacency. Remediation progress will be slow, and a methodical approach must be sustained.
  • Learn from challenges: Roadblocks, as they are encountered, serve as opportunities to try alternate remediation strategies. Plan around what can be solved today or in the current week.

Avoid focusing on all the major problems preventing remediation and think with a growth mindset to overcome these challenges.

2. Cross-team collaboration is required

Achieving a large vulnerability reduction requires effective collaboration across technology teams. The high vulnerability counts across the IT estate likely exist due to several cultural and operational factors within the organization that pre-exist remediation efforts, including:

  • Insufficient staff to maintain effective vulnerability management processes
  • Legacy systems that cannot be patched because they run on very expensive hardware, or provide a specific function that is cost-prohibitive to replace
  • Ineffective patching solutions that do not or cannot apply necessary updates completely (e.g., the solution can patch web browsers but not Java or Adobe)
  • Misguided beliefs that specialized classes of equipment cannot be patched or rebooted; as a result, they are not revisited for extended periods

Part of your remediation efforts should focus on addressing systemic issues that have historically prevented effective vulnerability remediation while gaining support within or across the business to begin addressing existing vulnerabilities.

Determine how the various teams in your organization can serve as a force multiplier. For example, can the IT support desk or other technical teams assist directly in applying patches or decommissioning legacy devices? Can your vendors assist in applying patches or fine-tuning configurations of difficult-to-patch equipment?

These groups can assist in overall reduction while further plans are developed to address additional vulnerabilities.

3. Start by focusing on low-hanging fruit

Focus your initial efforts on the low-hanging fruit when building a plan to address vulnerabilities. Missing browser updates and updates to third-party software like Java and Adobe are likely to yield the largest initial reductions.

If software like Google Chrome or Firefox is missing the previous two years of security updates, it likely signifies the software is not being used. Some confirmation may be required, but the right response is likely removal of the software, not the application of patches.

To prevent a recurrence, there will likely be a need to revisit workstation and server imaging processes to determine if legacy, unapproved or unnecessary software is being installed as new devices are provisioned.

4. Leverage your end-users when needed

Don’t forget to leverage your end users as a possible remediation vector. A single email you spend 30 minutes carefully crafting, with instructions on how users can self-update difficult-to-patch third-party applications, can save many hours of time and effort compared with working through technical teams, where the end result may be fewer vulnerabilities remediated.

However, end-user involvement should be an infrequent and short-term approach as the underlying problems outlined in cross-team collaboration (tip #2) are addressed.

This also provides an indirect approach to increasing security awareness via end-user engagement. Users are more likely to prioritize security when they are directly involved in the process.

5. Be prepared to get your hands dirty

Many of the vulnerabilities that exist will require a manual fix, including but not limited to:

  • Unquoted service paths in program directories (see the sketch after this list)
  • Weak or no passwords on periphery devices like printers
  • Updating SNMP community strings
  • Windows registry settings that are not set
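
As one illustration of this hands-on work, the first item above can be hunted with a short script. This is a minimal sketch, assuming a Windows host and simple string heuristics; it only reports suspects and does not fix anything.

```python
# Minimal sketch: flag Windows services whose ImagePath contains a space
# before the executable name but is not wrapped in quotes (the classic
# unquoted-service-path weakness). Reporting only; no changes are made.
import winreg

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]
    for i in range(subkey_count):
        name = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, name) as svc:
                image_path, _ = winreg.QueryValueEx(svc, "ImagePath")
        except OSError:
            continue  # no ImagePath value on this service key
        path = str(image_path)
        head = path.lower().split(".exe")[0]  # text up to the executable
        if " " in head and not path.lstrip().startswith('"'):
            print(f"Possible unquoted service path: {name}: {path}")
```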

During project downtime — or when the security function is between remediation planning cycles — focus on providing direct assistance where possible. Direct intervention provides an opportunity to learn more about the business and the people operating the technology in the environment. It also provides direct value when an automated process fails to remediate, or cannot remediate, identified vulnerabilities.

This may also be required when already stressed IT teams cannot assist in remediation activity.

6. Targeted patch applications can be effective for specific products

Some vulnerabilities may require the application of a specific update to address large numbers of findings that automatic updates continuously fail to fix. This is often seen with Microsoft security updates that did not apply completely or correctly in scattered months across several years and devices.

Search for and test the application of cumulative security updates. One targeted patch update may remediate dozens of vulnerabilities.

Once tested, use automated patch application tools like SCCM or remote monitoring and management (RMM) tools to stage and deploy the specific cumulative update.

7. Limit scan scope and schedules 

Vulnerability management seeks to identify and remediate vulnerabilities, not cause production downtime. Vulnerability scanning tools can unintentionally disrupt information systems and networks via the probing traffic generated towards organization devices or equipment.

When an organization is onboarding a new scanning tool or spinning up a new vulnerability management practice, it is best to start by scanning a small network subset that represents the asset types deployed across the network.

Over time, scanning can be rolled out to larger portions of the network as successful scanning activity on a smaller scale is consistently demonstrated.
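
A simple way to define that first subset is to sample a host or two per asset type from the existing inventory. The sketch below is illustrative, assuming a hypothetical flat inventory; real scan scoping would also weigh network segments and change windows.

```python
# Minimal sketch: build a pilot scan scope by sampling one host per asset
# type, so the first scan exercises every device class at small scale.
import random
from collections import defaultdict

# Hypothetical asset inventory: (hostname, asset_type)
inventory = [
    ("web-01", "web server"), ("web-02", "web server"),
    ("db-01", "database"), ("prn-07", "printer"),
    ("ws-114", "workstation"), ("ws-115", "workstation"),
]

by_type = defaultdict(list)
for host, asset_type in inventory:
    by_type[asset_type].append(host)

pilot_scope = [random.choice(hosts) for hosts in by_type.values()]
print(pilot_scope)  # e.g. ['web-01', 'db-01', 'prn-07', 'ws-115']
```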

8. Leverage analytics to focus remediation activity 

The native reporting provided by vulnerability scanning tools typically lacks the analytics needed for value-add vulnerability reduction. Consider implementing programs like Power BI, which can help the organization focus on the following (a lightweight alternative is sketched after this list):

  • New vulnerabilities by type or category
  • Net new vulnerabilities
  • Risk severity ratings for groups of or individual vulnerabilities
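
If a full BI tool is more than you need at first, a few lines of pandas can produce the same cuts. A minimal sketch, assuming two hypothetical scanner CSV exports with host, plugin_id, and severity columns:

```python
# Minimal sketch: diff two monthly scan exports to surface net-new
# findings, then summarize them by severity. Column names are assumptions.
import pandas as pd

prev = pd.read_csv("scan_2022_02.csv")  # hypothetical prior export
curr = pd.read_csv("scan_2022_03.csv")  # hypothetical current export

key = ["host", "plugin_id"]  # what identifies a single finding
merged = curr.merge(prev[key], on=key, how="left", indicator=True)

net_new = merged[merged["_merge"] == "left_only"]
print("Net-new findings:", len(net_new))
print(net_new.groupby("severity").size().sort_values(ascending=False))
```
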
9. Avoid overlooking compliance pitfalls or licensing issues

Ensure you fully understand any licensing requirements in relation to enterprise usage of third-party software and make plans to stay compliant.

As software evolves, its creators may look to harness new revenue streams, which has real-world impacts on vulnerability management efforts. A classic example is Java, which is highly prevalent in organizations across the globe. As of 2019, a paid license subscription is required to receive Java security updates.

Should a third party decide to perform an onsite audit of license usage, the company may find itself tackling a lawsuit on top of managing third-party software security updates…

 

Key Steps for Public Sector Agencies To Defend Against Ransomware Attacks

Over the past two years, the pandemic has fundamentally altered the business world and the modern work environment, leaving organizations scrambling to maintain productivity and keep operational efficiency intact while securing the flow of data across different networks (home and office). While this scenario has undoubtedly created new problems for businesses in terms of keeping sensitive data and IP safe, the “WFH shift” has opened up even greater risks and threat vectors for the US public sector.

Federal, state, and local governments, education, healthcare, finance, and nonprofit organizations are all facing privacy and cybersecurity challenges the likes of which they’ve never seen before. Since March 2020, there’s been an astounding increase in the number of cyberattacks, high-profile ransomware incidents, and government security shortfalls, and many more incidents go undetected or unreported. This is in part because employees now access their computers and organizational resources and applications from everywhere but the office, opening up new security threats for CISOs and IT teams.

Cyberthreats are expected to grow exponentially this year, particularly as the world faces geopolitical upheaval and international cyberwarfare. Whether it’s a smaller municipality or a local school system, no target is too small these days, and everyone is under attack due to bad actors now having more access to sophisticated automation tools.

The US public sector must be prepared to meet these new challenges and focus on shoring up vulnerable and critical technology infrastructures while implementing new cybersecurity and backup solutions that secure sensitive data.

Previous cyber protection challenges

As data volumes grow and methods of access change, safeguarding US public sector data, applications, and systems involves addressing complex and often competing considerations. Government agencies have focused on securing a perimeter around their networks; however, with a mobile workforce combined with the increase in devices, endpoints, and sophisticated threats, data is still extremely vulnerable. Hence the massive shift toward a Zero Trust model.

Today, there is an over-reliance on legacy and poorly integrated IT systems, leaving troves of hypersensitive constituent data vulnerable, and government agencies have become increasingly appealing targets for cybercriminals. Many agencies still rely on five-decade-old technology infrastructure and deal with a multitude of systems that need to interact with each other, which makes it even more challenging to lock down these systems. Critical infrastructure industries have more budget restraints than ever; they need flexible and affordable solutions to maintain business continuity and protect against system loss.

Protecting your organization’s data assets

The private sector, which owns and operates most US critical infrastructure, will continue being instrumental in helping government organizations (of all sizes) modernize their cyber defenses. The US continues to make strides in creating specific efforts that encourage cyber resilience and counter these emerging threats.

Agencies and US data centers must focus first on solutions that attest to data protection frameworks like HIPAA, CJIS, and NIST 800-171, and then develop several key pillars for data protection built around the Zero Trust concept. These include safety (ensuring organizational data, applications, and systems are always available), accessibility (allowing employees to access critical data anytime and anywhere), and privacy and authenticity (controlling who has access to the organization’s digital assets).

New cloud-based data backup, protection, and cybersecurity solutions that are compliant with the appropriate frameworks and certified will enable agencies to maximize operational uptime, reduce the threat of ransomware, and ensure the highest levels of data security possible across all public sector computing environments.

Conclusion

First and foremost, the public sector and US data centers must prioritize using compliant and certified services to ensure that specific criteria are met…

 

What Digital Transformation Truly Means, with Srini Alagarsamy

Searching for Digital Transformation on Google fetches over five hundred million results, a good number of which aim to define the term. What follows are my thoughts from the vantage point of having strategized and executed digital transformations in leading organizations. Over history, firms have had to transform many times, from the invention of money to the advent of electricity, and through the industrial, railroad, communication, and internet revolutions. Transformations have generally occurred roughly every 50 years, which makes transformation an iterative process: a journey that firms must embrace, understanding that it is about continually getting better rather than reaching an end state. While a few firms help shape new consumer preference categories, the vast majority must adjust each time, reimagining their business model around those newly formed preferences and designing products, interactions, and business processes that all revolve around the customer.

Digital transformation is the latest iteration of business transformation, with firms adapting to new consumer preferences focused on digital channels. These preferences in the past two decades have largely tended to be at-the-glass (mobile, web) experiences. While digital twins have existed for some time, there is still a marked separation between the physical and virtual worlds. With the advent of Web 3.0 and Spatial Web, we are entering a different era of experience where these boundaries will continue to get blurred. Consumer preferences will shift from at-the-glass to inside-the-jar experiences.

In many firms, digital transformation conversations begin with discussions around Cloud, Agile, DevOps, AI, ML, Data Science, etc. While these are key building blocks, they are only a means to an end. They are the How, not the Why or the What. This is akin to picking up the hammer before knowing where the nails are, and in my humble opinion it needs to change. Every company that aims to drive digital transformation ought to ask Why it needs to change and What the customer will gain as part of that change. If a company goes directly to the How, it should call its efforts digitization or digitalization, not digital transformation.

To truly embrace digital transformation, deliberate analysis of consumer needs, planning, and execution is important.

 

Start with Why:

Every firm must ask these questions before embarking on a digital transformation:

  •     Will this create a fundamental shift in customer experience for the good?
  •     Will this create net new opportunities for the firm or its customers?
  •     Will this create significant operational efficiencies for the firm?
  •     Will this create marketplace differentiation?

 

Define the What:

  •     Based on the Why, create the manifestations that reach customers best. These could be products, platforms, services, experiences.

 

Apply the How:

What must the firm do to bring the What to life effectively and efficiently? Some objective questions to ask here:

  •     Which enablers will get us there? Agile, Cloud, DevOps?
  •     Is a cultural transformation needed before you digitally transform?
  •     Should we build, buy, partner, or a combination of the three?

 

In summary, transformation is a continuous journey, and successful firms will constantly transform themselves in ways that fundamentally alter the value they offer their customers. Done well, transformation should not feel like a project or a program; it becomes business as usual. My suggestion is to start with the Why.

 

Srini Alagarsamy | Vice President, Digital Software Solutions at GM Financial

I am a technologist both by passion and profession. My first interaction with computers was in my late teens but I soon fell in love with developing software and experimenting with hardware. Professionally, I have been fortunate to be part of world-class organizations, driving major business and digital transformation initiatives. Through this blog, I intend to share perspectives I have gained and lessons I have learned about business and leadership.

Finding the right MSSP for securing your business and training employees

Over the past year, small businesses have had to navigate the pandemic’s many challenges — from changes in business models and supply shortages to hiring and retaining employees. On top of these pandemic-driven challenges, SMBs also faced a growing business risk: cybersecurity incidents.

Cybercriminals often target SMBs due to the limited security resources and training that leave these businesses vulnerable. A Verizon study found that 61% of all SMBs reported at least one cyberattack during 2020, with 93% of small business attacks focused on monetary gain. Unfortunately, the high costs incurred during a cyberattack force many SMBs to close after an incident.

Cybersecurity is no longer just “nice to have” for SMBs, but many business owners don’t know where to start. And while measures like a VPN or antivirus system can help, they aren’t enough by themselves. Managed security service providers (MSSPs) are a valuable resource for SMBs, allowing them to bring in the expertise needed to secure infrastructure that they might not be able to afford in this highly competitive labor market.

When looking for an MSSP, hundreds of options often leave businesses overwhelmed. To learn more about the value MSSPs should and can bring to the table, I spoke with Frank Rauch and Shay Solomon at Check Point Software Technologies.

Koziol: What should small and medium business owners look for when selecting a cybersecurity MSSP? What are the must-haves and the nice-to-haves?

Rauch: We are living in a time where businesses, SMBs especially, cannot afford to leave their security to chance. SMBs are a prime target for cybercriminals, as SMBs inherently struggle with the expertise, resources and IT budget needed to protect against today’s sophisticated cyberattacks. We are now experiencing the fifth generation of cyberattacks: large-scale, multi-vector, mega attacks targeting businesses, individuals and countries. SMBs should be looking for a true leader in cybersecurity. They should partner with an MSSP that can cover all customer sizes and all use cases. To make it easy, we can focus on three key areas:

  1. Security. The best MSSPs have security solutions that are validated by renowned third parties. They should prove their threat prevention capabilities and leverage a vast threat intelligence database that can help prevent threats at a moment’s notice.
  2. Capabilities. MSSPs should be offering a broad set of solutions, no matter the size—from large enterprises to small businesses, data centers, mobile, cloud, SD-WAN protection, all the way to IoT security. Having this broad range of expertise will ensure that your MSSP is ready to cover your business in all instances.
  3. Individualized. This may be one of the most critical areas. Your MSSP should be offering flexible growth-based financial models and provide service and support 24/7 with real-time prevention. Collaborative business processes and principles will ensure success and security in the long run.

Koziol: How can SMBs measure the value of bringing in an MSSP? Or, the risks of inaction?

Rauch: The biggest tell-tale sign of a match made in heaven is if you’re receiving your security needs through one single vendor. If not, those options are out there! Getting the best security through one experienced, leading vendor can reduce costs, simplify support, and ensure consistency across all products. This ranges from simply protecting your sensitive data all the way to securing the business through a centralized security management platform. How can you protect what you can’t see?

It makes sense to keep an eye on how many cybersecurity attacks you’re preventing each month. How long is it taking you to create, change and manage your policies? Are you scaling to your liking? Can you adapt on the fly if need be? Are your connected devices secure? These are just some examples that you should be able to measure with simplicity.

Koziol: How has the shift in remote/hybrid workforce changed how cybersecurity MSSPs support SMBs?

Rauch: The shift to a larger work-from-home practice has caused attackers to move their attacks outside the corporate network. It is more important now than ever for MSSPs to provide their SMBs with a complete portfolio — endpoint, mobile, cloud, email and office — that allows them to connect reliably, scale rapidly and stay protected, no matter the environment.

The best MSSPs should have been ready for this day. At any moment, day or night, your organization can be victimized by devastating cybercrime. You can’t predict when cyberattacks will happen, but you can use proactive practices and security services to quickly mitigate their effects or prevent them altogether. The shift to a hybrid workforce exposed the holes in the existing security infrastructure.

On the bright side, security incidents present an opportunity to comprehensively reevaluate and improve information security programs. They show threat vectors that we previously overlooked and raise awareness across the organization to enhance existing or implement new controls. So at the very least, this shift has been an eye-opener for MSSPs.

Koziol: Should MSSPs offer security awareness and training as part of their offering? Why?

Solomon: Absolutely, yes. At the end of the day, knowledge is power. Cyberattacks are evolving, and training can help keep SMB employees protected and educated. According to a study from VIPRE, 47% of SMB leaders reported keeping data secure as their top concern. At the same time, many SMBs lack sufficient skills and capacity to drive improved security on their own.

The only way to fight cybercrime effectively is by sharing experiences and knowledge. Due to the cyber shortage, Check Point Software, along with 200 global training partners, recently announced a free cybersecurity training program called Check Point Mind. It offers many training and cybersecurity awareness programs to give SMBs (or any business) the chance to extend their skills with comprehensive cybersecurity training programs led by world-class professionals.

Koziol: How can working with an MSSP on security awareness education improve a business’s overall security posture?

Solomon: Raising awareness with employees is a crucial step that’s often overlooked. Employees need to be able to identify a phishing attempt and know how to react. In our experience, we see a majority of employees attacked using emails. They receive an email that looks like an official email from someone with authority, asking them to open attachments or click on a link that contains malicious intent.

If employees go through a training course that teaches them what to look for in an attack, this will surely reduce the chance of that employee falling victim to the phishing attempt.

Koziol: What questions should SMBs be asking their current or future MSSPs about cybersecurity?

Solomon: Building on what was mentioned earlier, it is never too late to reevaluate and improve information security programs. Asking questions and investing in a better security posture shows us threat vectors that we previously might have overlooked and raises awareness across the organization to the need to improve existing or implement new controls. SMBs must proactively approach their MSSPs to ensure they are getting the best bang for their buck—security solutions that require minimal configuration and simple onboarding. In addition, they need to ensure they are taking the proper steps when evaluating security architecture, advanced threat prevention, endpoint, mobile, cloud, email and office.

Koziol: What’s ahead for MSSPs in the cybersecurity space? What should SMB owners expect to see next?

Rauch: One of the key areas we’ll see continuously growing is the need for a next-generation cybersecurity solution that enables organizations to proactively protect themselves against cyberthreats: incident detection and response management. As attacks continue to evolve and grow in numbers, unified visibility is a must-have across multiple vectors that a cyberthreat actor could use to attack a network.

A common challenge we see is an overwhelming volume of security data generated by an array of stand-alone point security solutions. What’s needed is a single dashboard, or, in other words, unified visibility, that enables a lean security team to maximize their efficiency and effectiveness. SMBs should take the opportunity to check security investments. The highest level of visibility, reached through consolidation, will guarantee the best effectiveness…

 

Talent Shortage: Are Universities Delivering Well-Prepared IT Graduates?

The tech talent crunch is impacting organizations of all sizes. The lack of qualified IT specialists is a rising concern with no end in sight.

Taking into account accelerated retirement plans of the Baby Boomers and the “great resignation” spurred by the pandemic, tech companies will be even more reliant on the upcoming generation of university graduates to fill the ranks of data specialists, AI experts, and software engineering pros.

Josh Drew, Boston regional director at Robert Half, a staffing and talent solutions company, says he regularly sees first-year computer science graduates take job opportunities in the development space with salaries ranging from $90,000 to north of $100,000.

“If you look at the opportunities of coming directly out of school and the skillset they leave with, I think there is a clear indication the university formula is working,” he says.

He added that the pandemic, which forced almost all university students into a totally virtual learning space, has also prepared them for more flexible, partly remote work.

“They’ve been doing online classes instead of turning their assignments in to teachers — they’re uploading it and hosting it on sites, sharing through Google or Slack,” Drew says. “The model fits well with the hybrid workplace in the sense that it’s not always on-site turning in hard work — it’s working in a virtual world or outside of the classroom.”

Changing Tech Landscape

However, there is some concern that the most in-demand skills are not being taught and that universities aren’t providing graduates with soft skills like communication.

Catherine Southard, vice president of engineering at D2iQ, says her company hasn’t had much success finding new grads with experience in Kubernetes and the Go programming language, in which D2iQ’s product is primarily developed.

“Part of that is because the tech landscape changes so quickly. It would be great for a representative from tech companies — maybe a panel of CTOs — to sit down with curriculum developers every couple of years and talk through industry trends and where technology is headed, and then brainstorm how to bridge the gap between university and industry,” she says.

Southard added that students can research jobs that look interesting, then see what tech stack those companies are using. They can then equip themselves to land those jobs by studying up on that technology using free online resources or courses.

Importance of Internship Programs

She sees another area of improvement in support for internship programs. Historically, D2iQ had a program in the US, but it was expensive to operate, and it didn’t lead to long-term employee retention, except for a couple of stand-out talents.

She noted larger tech companies can sponsor internship programs, but for startups, Southard would like to see universities splitting some of the operating costs as an investment in their students.

“We have had success hiring student workers in our German office, and that is an excellent setup for all involved,” she says. “We get smart, motivated students, the students get real-world experience, and our engineers can focus on more challenging problems as the students are able to perform more basic tasks.”

She explained that a lot of people looking to change careers participate in code camps lasting a couple of months. These camps give them the necessary skills to hit the ground running as developers; universities might do well to look at what those programs are doing and create a similar curriculum.

“A four-year degree is great, and there are lots of benefits to it, but it’s really not required anymore to be a developer,” Southard points out. “Universities should make sure their graduating students are as immediately employable as code camp graduates.”

Code Camps

Drew pointed out that in the Boston area, he’s seen the growth of these code camps, as well as different academies and even school contests geared around topics like ethical hacking.

“More than ever, the curriculum within the IT space is definitely like real-world applications,” he says.

He’s seen the development of e-commerce and entrepreneurial programs where students build and develop products to sell on websites.

“They’re bridging the gap with soft skills and teamwork, collaborating with others and often in a virtual environment,” Drew says.

Southard also notes that most physical science students will only have a few required English courses, and no communication courses, but success in the workplace will come down to their attitude and their abilities to collaboratively solve problems and communicate clearly.

“If universities could have a peer review process on programming assignments using standard industry tooling such as Github, that would help build some of these skills and better prepare students for their career,” Southard says.

From the perspective of Kevin Chandra, the Gen-Z co-founder and CEO of Typedream, his university experience at the University of Southern California adequately prepared him for a career in the real world.

“Our universities teach us the fundamentals of computer science; the reason that they do this is because technologies change very quickly,” he says. “If universities were to teach industry-standard technologies, by the time you graduate it will all have changed.”

Exposure to Tech Community

Chandra says what he thinks is currently lacking in universities is providing IT students with more exposure to the tech community.

“I have learned from Twitter, Substack blogs, and podcasts about relevant trends, technologies, and marketing strategies much more than I could ever have from outdated books,” he says. “I wish universities invited thought leaders to lecture students at universities.”

Mohit Tiwari, co-founder and CEO at Symmetry Systems, agreed with Chandra that universities excel at teaching young IT professionals the type of fundamental technical skill sets that can set students up for decades.

“For example, students trained in programming languages, distributed systems, and data engineering can now work on critical infrastructure problems like privacy and cloud security,” he says.

More broadly, universities are also a critical staging area before the students are launched into production.

“The goal is providing those students with a safe place to make mistakes and learn in a cohort with mentors to help, and not to load them with every skill they will need for 30 years,” Tiwari added.

Tiwari says tech companies could do a better job reaching out to IT students and forming relationships with higher education institutions by creating open-source testbeds that reflect real-world deployments, which students can use as projects.

Chandra says he feels big tech companies, especially in the US, do a good job reaching out to university grads by providing internship opportunities starting from freshman year…

 

Is a Merger Between Information Security and Data Governance Imminent?

As with any merger, it is always difficult to predict an outcome until the final deal papers are signed and press releases hit the wires. However, there are clear indications that a tie-up between these two is essential, and we will all be the better for it. Data Governance has historically focused on the use of data in a business, legal, and compliance context rather than how it should be protected, while the opposite is true for Information Security.

The idea of interweaving Data Governance and Information Security is not entirely new. Gartner discussed this in its Data Security Governance model; EDRM integrated multiple stakeholders, including Information Security, Privacy, Legal, and Risk, into an overarching Unified Data Governance model; and an integrated approach to Governance, Risk, and Compliance has long been an aspiration in the eGRC market. Organizations with more mature programs likely have some level of integration between these functions already, but many continue to struggle with the idea and often treat them as separate, siloed programs.

As programs go, Information Security is ahead of Data Governance in the level of attention it receives in the boardroom, brought about primarily by newsworthy events that demonstrated what security and privacy practitioners had been warning about for a long time. These critical risks to the public and private sectors inspired significant, sweeping frameworks and industry standards (PCI, NIST, ISO, ISACA, SOC 2) and regulatory legislation (HIPAA, GDPR, NYDFS), and gave Chief Information Security Officers (CISOs) a platform for change.

By contrast, data governance has been more fragmented in its definition, organization, development, and funding. Many organizations accept the value of data governance, particularly as a proactive means to minimize risk while enabling the expansive use of information required in today’s business environment. However, enterprises still struggle to balance information risk and value and to establish the right enablers and controls.

Drivers

Risks and affirmative obligations associated with information are the primary drivers for the intersection of data governance and information security. Information security is so critical because the loss of certain types of data (through exfiltration, or through loss of access due to ransomware) carries legal and compliance consequences, along with impacting normal business operations. And a lack of effective legal and compliance controls often leads to increased information security and privacy risk.

Additional common drivers include:

  • Volume, velocity, mobility, and sensitivity of information
  • Volume and complexity of legal, compliance, and privacy requirements
  • Hybrid technology and business environments
  • Multinational governance models and operations
  • Headline and business interruption risks

Finally, an underlying driver is the need to leverage investments in technology, practices, and personnel across an organization.  The interrelationships of so many information requirements simply demands a more coordinated approach.

Merging the models

We chose Information Risk Management to define a construct that encompasses the overarching disciplines and requirements. First, we did so because it places the focus on information. For example, the same piece of information that requires protection may also have retention and discovery requirements. Second, risk management recognizes the need to balance the value and use of information from a business perspective while also providing appropriate governance and protection. Risk management also serves as an important means to evaluate priorities in investment, resources, and audit functions.

Figure 1: Information Risk Management

The primary objective is to integrate processes, people, and solutions into a framework that addresses common requirements, and does so “in depth” for both. Security people, practices, and technologies have long been deployed at many levels (in depth) to protect the organization. The same has not often been the case for governance (legal, compliance, and privacy) obligations. New practices and technologies are enablers for intersecting programs, and they support alignment among key constituencies, including Information Security, IT, Legal, Privacy, Risk, and Compliance. Done right, this provides leverage in an organization’s human and technology investments, improves risk posture, and increases the rate and reach of new practices and solutions.

Meshing the disciplines and elements of each program is not meant as a new organizational construct; rather, it should start with a firm understanding of information requirements from key stakeholders, and from there establish synergies. The list below, not meant to be exhaustive, provides examples of shared enabling practices and technologies:

Figure 2: Shared Enablers and Requirements

Conclusion

Integrating data governance, information security, and privacy frameworks allows an enterprise to gain leverage from areas of common investment and provides a more comprehensive enterprise risk management strategy. By improving proactive information management, organizations increase preventative control effectiveness and decrease reliance on detection and response activities. It also develops cross-functional capabilities across Privacy, Legal, Compliance, IT, and Information Security…

 

 

Master Data Management (MDM) Framework With Arvind Joshi

Introduction – Reference Data vs. Master Data

It is very common for people to use ‘Reference Data’ and ‘Master Data’ interchangeably without understanding and appreciating the differences.

Reference data – External data that define the set of permissible values to be used by other data fields. Reference data gain in value when they are widely re-used and widely referenced. Typically, they do not change much in definition, apart from occasional revisions. Examples – Country Code, Asset Category, Vendor_ID, Currency Code, Industry Code, Security_ID (CUSIP, SEDOL, ISIN).
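
To make the “permissible values” idea concrete, here is a minimal sketch; the code sets and field names are illustrative assumptions, not real reference tables:

```python
# Minimal sketch: reference data as sets of permissible values used to
# validate fields on a transaction record. Values shown are a tiny,
# hypothetical subset of real code lists.
ISO_COUNTRY_CODES = {"US", "CA", "GB", "DE", "JP"}
CURRENCY_CODES = {"USD", "CAD", "GBP", "EUR", "JPY"}

def validate_trade(trade: dict) -> list[str]:
    """Return reference-data violations found on a trade record."""
    errors = []
    if trade.get("country_code") not in ISO_COUNTRY_CODES:
        errors.append(f"Unknown country code: {trade.get('country_code')}")
    if trade.get("currency_code") not in CURRENCY_CODES:
        errors.append(f"Unknown currency code: {trade.get('currency_code')}")
    return errors

print(validate_trade({"country_code": "US", "currency_code": "ZZZ"}))
# ['Unknown currency code: ZZZ']
```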

Master data – Internal dimensional data that directly participate in transactions, like Customer_ID, Product_ID, Dept_ID, and Employee_ID. Master data are critical for the business and fall generally into four groupings: concepts, people, places, and things. Further categorizations within those groupings are called subject areas, domain areas, or entity types.

For example:

  • Within concepts, there are deals, contracts, warranties, and licenses.
  • Within people, there are customers, employees, and relationship managers.
  • Within places, there are office locations and geographic divisions.
  • Within things, there are products, business lines/units, and accounts.

Some domain areas may be further divided. Customer may be further segmented based on relationships, market cap, incentives, and history. A company may have normal customers, as well as premiere and executive customers. Product may be further segmented by sector, industry, and geography/region.

The requirements and data life cycle for a product in the Financial Services Industry (FSI) are likely very different from those in the Insurance Industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.

Considerations – deciding why and what to manage

Master data is used by multiple applications, so any error in master data will have a ripple effect in all downstream applications consuming it. For example, an incorrect address in the customer master may mean orders, invoices/bills, confirms, and marketing literature are all sent to the wrong address. Similarly, an incorrect price on a product master can be a trade disaster, and an incorrect account number in an account master can lead to huge penalties.

Most organizations have more than one set of master data. This would be fine if the combined master data could simply be the union of the multiple sets, but very likely some customers and products will appear in both sets, usually with different formats and different identifying keys. In most cases, customer IDs and product codes are assigned by the application that creates the master records, so the chances of the same customer or the same product having the same identifier in both databases are pretty remote.
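
Reconciling those records typically means matching on normalized attributes rather than on the system-assigned keys. A minimal sketch with hypothetical records and a deliberately crude match key (real master data management tools use much richer fuzzy matching):

```python
# Minimal sketch: build a crosswalk between two systems' customer IDs by
# matching on normalized name plus postal code. Records are hypothetical.
import re

def match_key(name: str, postal: str) -> tuple[str, str]:
    """Lowercase the name, strip punctuation/whitespace, keep postal code."""
    return re.sub(r"[^a-z0-9]", "", name.lower()), postal.strip()

crm = [{"id": "C-1001", "name": "Acme Corp.", "postal": "10001"}]
billing = [{"id": "784233", "name": "ACME CORP", "postal": "10001"}]

index = {match_key(r["name"], r["postal"]): r["id"] for r in crm}
crosswalk = {}
for rec in billing:
    key = match_key(rec["name"], rec["postal"])
    if key in index:
        crosswalk[rec["id"]] = index[key]  # billing ID -> CRM ID

print(crosswalk)  # {'784233': 'C-1001'}
```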

Identifying master data entities is not complex, but not all data that fits the definition of master data needs to be managed as such. The following criteria can be used to classify and identify master data attributes.

  • Interactions: Master data are the nouns and transactional data are the verbs in data interactions. Reviewing these interactions can help identify and define master data. Facts (verbs) and dimensions (nouns) are represented in a similar way in a data warehouse. For example, in trading systems, master data is part of the trade record. An employee reports to their manager, who in turn reports up through another employee (a hierarchical relationship). Products can be part of multiple market segments and roll-ups.
  • Data life cycle: Master data can be categorized based on the way it is created, read, updated, deleted, and searched. This life cycle differs across master-data element types and industries. For example, how a customer is created depends largely upon business rules, industry segment, and data systems. There may be multiple customer creation paths, directly through customer onboarding or through the operational systems. Additionally, how a customer element is created is certainly different from how a product element is created.

Framework

Arvind Joshi – Director, Data Management and Analytics Lead at Scotiabank

 

Arvind serves as the Data Governance Officer for U.S. Finance, and as such he is the primary point of contact for all U.S. Finance data matters including with Fed regulators. In his time with Scotiabank, Arvind has championed data as a strategic asset and participated in data governance team, project, and leadership meetings like US Data Council, US Operating Committee and US Finance Committee. His team is responsible for the execution of day-to-day data governance and management activities, remediation of data quality issues, and implementation of change management processes. His team works closely with U.S. Data Office colleagues to establish and maintain strong data management capabilities, such as data quality measurement and monitoring, data issue management, and data lineage.

 

Chase CIO Gill Haus Discusses Recruitment, Agile, and Automation

The world of banking and finance faces aggressive change in innovation, increasing the need to adapt to new evolutionary cycles in financial technology. As customers want more resources and guidance with their finances, institutions such as JPMorgan Chase must nimbly respond in a way that belies their large size.

Gill Haus, CIO of consumer and community banking (Chase) at JPMorgan Chase, spoke with InformationWeek about his institution’s approach to finding the right tech talent to meet demands for innovation, the growing importance of automation, and the personal directives he follows.

When looking at technology recruitment, what skillsets is Chase seeking, both to meet current needs and also for what may come next?

At the root of what we do, we are in the business of building complex features and services for our customers. We have about 58 million digitally active customers; they depend heavily on the services we provide. Technology is behind all those products and services we offer. We are looking for the quintessential engineers that have a background in Java, machine learning engineers, and those that have mobile experience as well. We also have technologies that are in “heritage” — systems that we’ve had for many years, and we’re looking for engineers that understand how to use those technologies, not just to support them but to modernize them. The key to our practice is to make sure also that we have those engineers, and talent in general, that are adaptable … because the market is constantly changing.

Why this is important is not just so we can have talent come in and help us build great solutions; it is also a great opportunity for talent to grow themselves. We provide our employees opportunities to use those new technologies, whether it’s public cloud, private cloud, or machine learning, and to grow the breadth of their experiences, whether they’re working on mobile technologies, backend systems, or some other solution that touches millions and millions of customers. We offer opportunities at every level; for entry-level software engineers, we have programs like our software engineer program, where we bring in talent from universities and boot camps to do training. We offer things across the organization where our talent can contribute and learn with teams to build solutions, learn how to use other technology, and become more adaptable.

Gill Haus, JPMorgan Chase

Are there particular technologies or methodologies that have come into play of late that Chase has wanted to adopt or look at?

We’ve made a large move to be an agile organization to organize around our products versus organizing around our businesses. The reason for that is we need to be able to build solutions quickly and those local teams — the product, technology, data, and design leaders — they’re more able to see what’s happening in the market, make decisions quickly, decide what to build or what service to provide, and make sure we’re applying that for our customer versus being organized in a way that makes it more difficult to operate.

The move to an agile work style is really key for us to compete.

The other [part] is the skills themselves. At our scale, machine learning absolutely. We have tons of data about our customers, on how customers are using our products. Customers ask us to provide them insights or guidance. If you go into our mobile app, we have something called Snapshot that tells you how you’re spending money compared to other people like you, ways you can save. Machine learning is the essence and power behind making that happen.

Mobile engineering is also incredibly important for us because more and more of our customers are moving to be digitally active in the mobile space. We want to be where our customers are.

What isn’t often talked about is that a lot of our backend services, the main Java programming that we do, empower all of this. From APIs to public cloud: when you deposit money, you’re using those rails. When you are executing machine learning models, you’re still using a lot of those rails.

While we are focused on a lot of the new, we’re also focused on modernizing the core that we have because that is so fundamental to the services we provide.

In terms of scouting tech talent, is there an emphasis on finding brand new graduates of schools that offer the latest skills, retraining existing staff to make use of their institutional knowledge as well?

All the above. The purpose-driven culture we have is really a big factor for us. Money is at the center of people’s lives. If you can create a positive experience for customers in using their money, whether they are able to save more, to pay for something they didn’t expect, or prevent fraud for them, it provides an incredible positive benefit to that individual. That’s important. Many of the people joining, or already at the firm, want to have that positive impact.

One of our software engineering programs is called Tech Connect, which is how we get in software engineers who might not have come in through the traditional software engineering degrees. It’s a way for them to go through training here and find a role within the organization. We also have the software engineering program where we look at entry level candidates coming in from colleges with computer science and other engineering degrees. For employees that we have here, we have programs like Power Up, which is at 20 JPMorgan Chase technology centers where over 17,000 employees meet on an annual basis. There they learn all different types of technologies, from machine learning, to data, to cloud. That allows us not only to train the people who are here but also makes it compelling to join the firm…