4 strategies for balancing cybersecurity and business continuity planning during the coronavirus outbreak

As cybersecurity conferences worldwide cancel events, the impact of the coronavirus (COVID-19) on the industry hits close to home. At least two people who attended the annual RSA cybersecurity conference were officially diagnosed with the virus, with one placed in a medically induced coma. Compounding this industry impact, many companies, including Apple and Google, have started requiring nonessential employees to work from home.

While companies brace for the coming changes that COVID-19 seems to be bringing, cybersecurity and compliance professionals find themselves struggling to balance workforce, member and data security. With this in mind, organizations should consider the following business continuity planning and cybersecurity strategies as they create their coronavirus preparedness plans.


What are the current governmental directives regarding COVID-19?

In late February 2020, the Centers for Disease Control and Prevention (CDC) released its “Interim Guidance for Businesses and Employers.” This reads in part:

Important Considerations for Creating an Infectious Disease Outbreak Response Plan

All employers should be ready to implement strategies to protect their workforce from COVID-19 while ensuring continuity of operations. During a COVID-19 outbreak, all sick employees should stay home and away from the workplace, respiratory etiquette and hand hygiene should be encouraged, and routine cleaning of commonly touched surfaces should be performed regularly.

Employers should:

  • Ensure the plan is flexible and involve your employees in developing and reviewing your plan.
  • Conduct a focused discussion or exercise using your plan, to find out ahead of time whether the plan has gaps or problems that need to be corrected.
  • Share your plan with employees and explain what human resources policies, workplace and leave flexibilities, and pay and benefits will be available to them.

The Occupational Safety and Health Administration (OSHA) and Health and Human Services (HHS) issued a joint guidance of their own which stated, in part:

  • Employers should explore whether they can establish policies and practices, such as flexible worksites (e.g., telecommuting) and flexible work hours (e.g., staggered shifts), to increase the physical distance among employees and between employees and others

Although many companies already allow employees to work remotely, many others require employees to remain on-site when handling sensitive information. Unfortunately, those employees and organizations may not be able to avoid required quarantines of sick individuals, or may need to work remotely as part of physical-distancing measures to prevent the spread of COVID-19.

This means that companies need to start preparing new business continuity and security models now in order to limit business disruption.

Review your business impact analysis for cybersecurity controls

When people think about business impact analysis (BIA) and cybersecurity, they normally consider the potential impact of an organization’s essential functions being taken down by a malicious actor. While this remains true in terms of business continuity during an outbreak, the risks also shift.

Some considerations to include might be:

  • Availability of critical IT staff
  • Workforce member home wireless security
  • Use of virtual private networks (VPNs)
  • Enforcement of encryption processes
  • Managing user access to applications with multi-factor authentication
  • Monitoring user and entity behavior analytics (UEBA)
  • Limiting user access according to the principle of least privilege
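Several of these controls can be monitored programmatically. As one illustration, a least-privilege review boils down to comparing the permissions a user actually holds against what their role requires; the role names and permission sets below are hypothetical:

```python
# Illustrative least-privilege audit sketch -- roles and permissions
# are hypothetical stand-ins for an organization's real access model.
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "engineer": {"read_reports", "deploy_code"},
    "admin": {"read_reports", "deploy_code", "manage_users"},
}

def excess_permissions(user_role: str, granted: set) -> set:
    """Return permissions granted beyond what the user's role requires."""
    required = ROLE_PERMISSIONS.get(user_role, set())
    return granted - required

# Example: an analyst who somehow holds a deployment permission
print(excess_permissions("analyst", {"read_reports", "deploy_code"}))
```

Running such a check on a schedule turns the bullet point above into an auditable process rather than a one-time policy statement.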



A Radical Plan for Enterprise Transformation

If you want the big rewards of new technology implementations, you need the right approach and a full commitment.

Want your organization to be on track for the kind of growth that digital-native startups enjoy? It may sometimes seem like these organizations have a head start because they don’t already have infrastructure in place. There’s a certain freedom that comes with starting fresh. Every decision they make is a new choice, based on today’s technology market. But established organizations can learn a few lessons from how these startups have built their infrastructure.

The real leaders among established businesses are those organizations that will jump in and leverage today’s technology and revamp their tech infrastructure to be adaptive for multiple projects and purposes. They won’t be held back by their existing tech infrastructure.

That’s among the conclusions and best practices pulled out of a new report from Accenture, based on engagements with C-level execs at more than 8,300 companies, with half in IT roles and half in non-IT roles, including 885 CEOs.

But few organizations fit into the “Leaders” category, which represents just 10% of the overall study group, according to Accenture. These Leaders experience much greater success, growing revenues at more than twice the rate of companies in the bottom 25%, which Accenture calls the “Laggards.”

Does it really make that much of a difference? Accenture said that in 2018, for example, Laggards left 15% of their potential revenue behind. If both Leaders and Laggards continue on their current trajectories, Laggards will leave 46% of their potential annual revenue on the table in 2023, Accenture said.

Yet in this fast-moving tech environment, it can be tricky to be sure you are moving fast enough and also making the right decisions.

A new form of ‘silo’

“Why does it happen? Primarily due to fragmented decision-making,” said Accenture in the report, written by Accenture Chief Technology and Innovation Officer Paul Daugherty, Group Chief Executive of Accenture Technology Services Bhaskar Ghosh, and Global Managing Director for IT and Business Research James Wilson. “Compelled to move as rapidly as possible, C-level executives are putting business unit, product, or geography heads in charge of the tech investment decisions affecting their areas. It works well in the short run. But it results in several (or many) fully rooted highly customized systems operating in isolated pockets of the organization.”

These systems can’t work together, and interoperability is key to driving the innovation of modern cloud-based, data-driven systems. That means the innovation can’t be shared or scaled across the business, and it gets harder to update each system.

How do you make sure you are more like a Leader and not like a Laggard? Accenture said organizations face a set of decision points it calls PATHS, an acronym for progress, adaptation, timing of tech adoption, human + machine workforce, and strategy.

The study defines “progress” in this context as how extensively or broadly to apply new technologies to evolve business processes across the enterprise. Organizations have a few choices in how they pursue this. One option could be to transform the low-hanging business processes, such as the customer-facing ones. Another option would be to build innovation centers or hubs to transform multiple processes. Both of these options would result in progress.

However, the Leaders tend to choose the third option: reimagining business processes for the future and targeting multiple business processes with the same technology. That option is harder and probably more painful, but successfully executing it yields the most rewards.

Accenture offers similar options for the four other decision points. The first two options may look like progress, but the optimal option in each category is what leads to organizations becoming Leaders.

The cloud option

The firm defines “adaptation” as how we adapt our current IT investments to changing business needs. The first option is to patch legacy systems and the second option is to lift and shift to the cloud. But the optimal option, according to Accenture, is to decouple from legacy and transform with the cloud.

As for “timing of tech adoption,” Accenture says this is about how to properly sequence and map adoption of new technologies. The first option is to experiment with new technologies on the leading edge, and the second is to double down on industry-specific, customized technology. But the optimal option is to identify fundamental or general-purpose technologies and prioritize their adoption in terms of timing and processes targeted.

Accenture defines “human + machine workforce” as how to activate and enable the workforce to use and be augmented by technology. The first option is to rely on traditional, periodic training about new technology via standard classroom or online learning modules. The second option is to individualize training to allow employees to learn at their own pace. But the optimal option is to deliver experiential, personalized, tech-augmented training for working with technologies of the future such as AI and XR (augmented, virtual and mixed reality).

Finally, Accenture said “strategy” in this context refers to how to intentionally manage the intersection of business strategy and technology strategy. The first option is to let business units rapidly and independently address their pain points, and the second option is to devise a technology strategy to explore ambitious business goals like new business models and adjacent markets. But the optimal option is to “build boundaryless, adaptable, and radically human IT systems that explicitly enable scale and strategic agility,” according to Accenture.

By embracing this kind of strategy, leaders become increasingly agile and able to innovate at scale within the enterprise.


Connecting To Secure Wireless Networks In Windows 10


Though they offer undeniable benefits in mobility, cost and convenience, wireless networks are less desirable from a security perspective. There is always a risk that signals can be intercepted as they traverse the open air.

Unsecured or “open” wireless networks, like those found in public cafes and airports, offer cybercriminals an easy launching pad for attacks. Sensitive data can be compromised in many different ways on unsecured wireless networks through the use of malware, snooping or man-in-the-middle tactics.

Given a choice, it is always preferable to restrict your connectivity on Windows 10 devices to fully secured wireless networks. Such networks use various wireless security protocols to encrypt the connections and, more importantly, restrict access to authorized individuals and their devices.



Different types of wireless security protocols

There are four main types of wireless security protocols currently in existence: WEP, WPA, WPA2 and WPA3. Their evolution is the result of incremental upgrades to wireless network security, pioneered by the Wi-Fi Alliance over the last 22 years.

Though primitive implementations of wireless data technology date back to the 1970s, Wi-Fi as we know it (the 802.11 protocol) first came about in 1997. The earliest Wi-Fi security protocol was also unveiled the same year.

WEP — Wired Equivalent Privacy

As the first generation of wireless network security, WEP has been outdated for almost two decades. Due to the simplistic way WEP applies the RC4 encryption algorithm, hackers can easily crack its encryption using basic network analysis tools like AirCrack, AirSnort and Kismet.
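WEP’s core flaw is not exotic: it pairs the RC4 stream cipher with short, frequently reused initialization vectors, so keystreams repeat and can be recovered. The toy sketch below, illustrative only and never for production use, shows how simple the RC4 cipher at WEP’s heart actually is:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Toy RC4 implementation for illustration only -- do not use in production."""
    # Key-scheduling algorithm (KSA): permute S based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with the keystream
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Stream ciphers are symmetric: encrypting twice with the same key round-trips
ciphertext = rc4(b"wepkey", b"secret payload")
assert rc4(b"wepkey", ciphertext) == b"secret payload"
```

Because the same key always yields the same keystream, any IV reuse lets an eavesdropper XOR two ciphertexts together and cancel the keystream entirely, which is exactly the property the cracking tools above exploit.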

When it comes to WEP and Windows 10, the protocol is no longer supported by default due to its deprecated status, and this has been the case since at least Windows 7. You can still select the protocol when creating a new network on Windows 10; it’s just not at all recommended.

WPA — Wi-Fi Protected Access

Due to the discovery of numerous security vulnerabilities in WEP, including weaknesses in the Cyclic Redundancy Check (CRC) mechanism it used for integrity checking, WPA was developed as a new standard in 2003. Instead of CRC, the new system used the Temporal Key Integrity Protocol (TKIP).

TKIP-based WPA was considered more robust, as it used unique encryption keys for each data packet sent across the network, making the resulting traffic considerably harder and slower to crack.

But the system was far from secure, as it still employed the RC4 encryption used by its predecessor. WPA served largely as a stopgap measure for the Wi-Fi Alliance as it was developing a stronger, more secure Wi-Fi security standard. WPA was quickly replaced by WPA2 in 2006.


WPA2 — Wi-Fi Protected Access 2

Until the announcement of WPA3 in 2018, WPA2 was the most advanced form of wireless security. Two major things set it apart from its predecessor: the mandatory usage of Advanced Encryption Standard (AES) algorithms and the replacement of TKIP with Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP).

While CCMP is a superior protocol with vastly improved security over its predecessors, WPA2 is still vulnerable to brute-force attacks and rainbow table attacks, which rely on vast databases of precomputed hashes.
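That brute-force exposure follows from how WPA2-Personal derives its pairwise master key: PBKDF2-HMAC-SHA1 over the passphrase and SSID, which anyone holding a captured handshake can recompute offline at their leisure. A minimal sketch, with a hypothetical SSID and deliberately weak passphrase:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """WPA2-Personal pairwise master key: PBKDF2-HMAC-SHA1, 4096 rounds, 32 bytes."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# A captured handshake lets an offline attacker test candidate passphrases
# without ever touching the network again:
target = wpa2_psk("sunshine1", "HomeNetwork")   # hypothetical weak passphrase
wordlist = ["password", "letmein", "sunshine1"]
cracked = next((w for w in wordlist if wpa2_psk(w, "HomeNetwork") == target), None)
print(cracked)  # a short dictionary recovers the weak passphrase
```

Note that the SSID acts as the salt, which is why precomputed rainbow tables target common network names like “linksys” or “NETGEAR”; a unique SSID plus a long random passphrase defeats both attack styles in practice.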

Both WPA and WPA2 provide two separate authentication variants: Personal, for individual and home use, and Enterprise, for use in an office context. In Personal mode there is a single shared authentication key; in Enterprise mode, the system administrator can set multiple authentication keys for different users.

Connecting to a WPA or WPA2 network is a fairly straightforward process in Windows 10. The system automatically detects all available wireless networks in the vicinity. The user simply has to select the network from the list and provide the security key (Wi-Fi password) when prompted.

To check your current security protocol, click the Wi-Fi connection icon in the taskbar, open the network’s Properties and scroll to the Wi-Fi details; the security type is displayed prominently there.
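The same information is available from the command line via `netsh wlan show interfaces`, whose output can be parsed programmatically. The sketch below runs against a hypothetical captured sample of that output rather than a live interface:

```python
# Hypothetical sample of `netsh wlan show interfaces` output, captured as text.
# On a real Windows 10 machine this would come from running the command itself.
SAMPLE = """\
    Name                   : Wi-Fi
    SSID                   : HomeNetwork
    Authentication         : WPA2-Personal
    Cipher                 : CCMP
"""

def field(output: str, name: str) -> str:
    """Pull a 'Name : value' field out of netsh-style output."""
    for line in output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == name:
            return value.strip()
    return ""

print(field(SAMPLE, "Authentication"))  # -> WPA2-Personal
```

A check like this is handy for fleet auditing: a script can flag any machine still associated with a WEP- or WPA-secured network.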

WPA3 — The future

The next generation of wireless security has yet to reach widespread implementation. It aims to reduce the reliance on user-chosen passwords, a known weakness of WPA2, where the network is only as secure as the passphrase protecting it.

In WPA3, this is no longer a necessity, as it uses a new key exchange protocol called Simultaneous Authentication of Equals (SAE). Because SAE blunts offline attacks based on precomputed hash databases, attackers have to interact directly with the router or access point for every password guess.

Even if the security key is compromised, the protocol does not allow access to historic data transmitted through the network. WPA3 is also expected to make public or open wireless networks even more secure.

Different ways to connect to secure wireless networks

In Windows 10, users have multiple choices when it comes to connecting their PCs to a nearby secure Wi-Fi network, with varying levels of convenience and complexity.


The most straightforward option is using the taskbar. The wireless icon is usually located in its right corner; clicking it displays a list of available connections. Select the appropriate network and provide the authentication key to connect.


Another option is to use the Network & Internet page in the Settings menu. Head to the Wi-Fi section, select “Manage known networks” and choose “Add a new network.” Provide the network name and select the appropriate security type. Input the security key (Wi-Fi password) and save the settings to connect.


Get Serious About SaaS Management in the Enterprise

While a software-as-a-service model leaves processing in the service provider’s hands, there’s plenty of work left for the IT group when it comes to administering the relationship and supporting employee users.

For many business leaders, the advantages of software as a service (SaaS) are plentiful compared to on-premises or other cloud delivery architectures. These benefits include faster times to deployment, low administration overhead, infinite scalability, low CAPEX investment and flexible licensing/payment models. That said, SaaS comes with a host of administration tasks, which — if left unchecked — can severely limit the benefits that SaaS offers.

You should establish processes that not only foster communication and collaboration between the SaaS provider and the in-house IT department, but also internal processes that ensure application and data performance, usability and security. Let’s look at what tasks are required to maintain a healthy SaaS portfolio within your organization as well as some new tools that can help streamline administration efficiency.

SaaS administration tasks between business and service provider

Because you rely on a third-party SaaS provider to manage the underlying cloud infrastructure, data storage and application delivery methods, it’s critical to stay in sync. One way to do this is to make sure the service provider has multiple points of contact within your business. All too often, a single member of your IT staff establishes the lines of communication with a SaaS vendor; once that employee moves on from the company, the remaining IT staff must scramble to reestablish the relationship, and missed communications can result in unplanned maintenance windows or overlooked announcements of new features and other important information. A better way to manage service provider communication is to assign administration tasks to a team rather than to a single team member.

Those tasked with managing SaaS contracts must fully understand how to handle licensing and service level agreements (SLA). Because every SaaS contract is different, the process of adding and removing licenses — as well as proper management of unused licenses — is critical to squeezing out the most value for your money. Understand the various license tiers and what differentiates them from a features perspective. Also, be sure to develop a strategy to reduce the number of idle or unused licenses that waste money.
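Reclaiming idle licenses is one of the easier wins here and is straightforward to automate. A minimal sketch, assuming last-login records pulled from a hypothetical SaaS admin API:

```python
from datetime import date, timedelta

# Hypothetical last-login records pulled from a SaaS provider's admin API
last_login = {
    "alice@example.com": date.today() - timedelta(days=3),
    "bob@example.com": date.today() - timedelta(days=120),
    "carol@example.com": date.today() - timedelta(days=95),
}

def idle_licenses(records, threshold_days=90):
    """Return users whose license has sat unused past the threshold."""
    cutoff = date.today() - timedelta(days=threshold_days)
    return sorted(user for user, seen in records.items() if seen < cutoff)

print(idle_licenses(last_login))  # candidates for reclamation or downgrade
```

The 90-day threshold is an arbitrary example; the right window depends on the contract’s true-up terms and how seasonal the application’s usage is.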

Plans should also be developed to ensure the proper balance between speed-to-delivery of a service and a reduction of idle-license spend. Finally, understand your leverage when it comes to missed SLAs: make certain you receive what was agreed upon when services become unavailable under the SLA’s terms.

Lastly, be prepared for SaaS license renewals as well as the possibility of a full service termination. Getting ahead of this will lessen the risk of a service disruption due to a misstep in the renewal or termination process. The weeks and months before a renewal are also a great time to reassess the value of every service in the company’s application portfolio. That way, steps can be put in place to help with renegotiating contracts, retraining employees on changed application usability or migrating data from one cloud provider to another.

Administration of SaaS tools within the business

The IT department must also look inwardly when getting serious about SaaS administration tasks. The initial setup and customization of the cloud-delivered application must be performed by a well-trained admin to be sure it’s done according to the provider’s best-practice standards. Additionally, administrators must stay informed of any feature adds/removals, maintenance windows and IT security-related information.

Onboarding and processes for SaaS services should be implemented to quickly bring on new employees. Even more importantly, when employees leave, administrators must be able to remove access for security purposes. SaaS platform integrations into existing user management tools or SaaS management platforms can help to automate and increase the speed/accuracy of these steps.
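Offboarding in particular benefits from automation, since a departing employee’s access must be revoked everywhere at once. A minimal sketch, with hypothetical app names and an in-memory stand-in for the per-app revocation calls (a real deployment would use each provider’s API or a SCIM-based identity platform):

```python
# Hypothetical SaaS portfolio and access records; in practice these would be
# fetched from and written back to each provider's admin API.
APPS = ["crm", "helpdesk", "storage"]
access = {app: {"alice", "bob"} for app in APPS}

def offboard(user: str) -> list:
    """Remove the user from every app; return the apps that were touched."""
    touched = []
    for app, users in access.items():
        if user in users:
            users.discard(user)
            touched.append(app)
    return touched

print(offboard("bob"))  # every app bob had access to, now revoked
```

Driving this from a single authoritative trigger, such as the HR system marking an employee as departed, removes the window in which a former employee retains access to any one forgotten tool.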

The SaaS applications and services that are approved and supported by the IT department must be well known to the business’s user base. An easy-to-access, easy-to-understand portfolio of supported SaaS apps should be made available. This portfolio not only helps eliminate shadow IT; it’s also a great starting point for identifying overlapping, underutilized and abnormally expensive tools within the organization.


5G Networks Present New Risks and Security Challenges

The talk of the town, the next big thing, a revolutionary breakthrough – the 5G technology lives up to all these clichés. It captures the imagination with potential use cases capitalizing on the impressively high speed, low latency, and mind-blowing network capacity.

Contributed by David Balaban

The state of 5G deployment currently ranges from large-scale field testing to commercial roll-outs in small pockets around the world. Next-generation connectivity is already available in dozens of cities in the U.S., Europe and East Asia. Moreover, these advanced telco systems are expected to become the backbone of digital economies soon.

Just like any new technology, 5G networks can be low-hanging fruit for threat actors who seek to expand their malicious reach. Therefore, it’s in the best interest of governments to assess and tackle the entirety of potential security issues prior to the ubiquitous implementation of the tech.

These concerns have recently incited some expert discussions in the EU. In October, EU member states released a report on “coordinated risk assessment of 5G networks security”. It came in response to a recommendation issued by the European Commission, the executive branch of the EU, in March 2019. Here are the key takeaways from the officials’ findings.

Supplier monopoly deemed a major risk

The report emphasizes the possible pitfalls of using a single supplier of 5G equipment, namely the Chinese technology giant Huawei. Interestingly, the document contains no direct references to the company in question, although the collaboration is officially underway. Network infrastructure with the solo contractor at its core is susceptible to a number of issues, including a shortage of telecommunications gear, dependencies on the supplier’s commercial well-being, and primitive malware attacks.

Considering this paradigm, the researchers claim network operators will have to rely too heavily on the contractor that may undergo commercial pressure and therefore fail to carry through with its obligations. The adverse influence may stem from economic sanctions affecting the supplier, as well as from a merger or acquisition. Consequently, such cooperation has a single point of failure (SPOF) that might undermine the successful adoption of the technology and stability of the network down the road.

An extra factor is a strong link between the supplier and the government of the country it is based in. It means there is a chance of state-level interference with the equipment provider’s activities. Furthermore, a lack of democratic checks and balances and the absence of data protection agreements between the EU and the said country are serious roadblocks endangering the future partnership.

According to the officials, one more facet of the peril comes down to a tightening connection between the EU’s telco networks and third-party software systems. The elevated scope of access the supplier will have to the region’s 5G infrastructure and the transferred data is a lure for cybercriminals who may take significant efforts to exploit these systems.

Additional security challenges – the big picture

Aside from the obvious caveats arising from the increased role of hardware and software suppliers, the joint report provides a lowdown on other possible security effects of 5G network deployment across the EU. A summary of these challenges is as follows.

More entry points for attackers

The architecture of 5th generation wireless networks is largely based on software. This hallmark makes them particularly vulnerable to security imperfections resulting from vendors’ inappropriate software development processes. Critical flaws may allow malefactors to inject backdoors into the applications and thereby maintain long-lasting surreptitious access to different layers of the targeted 5G infrastructure.

5G network slicing issue

Given that 5G will enable numerous services and applications operating within different virtualized environments, such as enterprise and government networks, the importance of securing these logically segregated ecosystems is going to grow. Unless reliably isolated and protected, these network segments (dubbed “slices”) can be exposed to data leaks.

This article first appeared in CISO MAG.


Data Privacy Day 2020 Encourages Consumers to “Own Their Privacy”

The theme of Data Privacy Day 2020 is “Own Your Privacy.”

Data Privacy Day began in the United States and Canada in January 2008 as an extension of the Data Protection Day celebration in Europe and is officially led by NCSA in North America.

With the California Consumer Privacy Act taking effect this year and other states considering similar legislation, data privacy has become a growing concern for businesses and consumers alike, says StaySafeOnline. The organization cites a recent survey by Pew Research Center that found that a majority of Americans think their personal data is less secure now than five years ago and that data collection by businesses and government poses more risks than benefits. “Yet while these concerns increase, few people understand what is being done with the data that is collected and how it is used and shared by businesses, which can monitor, store and sell the data for profit. That is why the theme of Data Privacy Day 2020 is ‘Own Your Privacy,’” notes the organization.

To celebrate the 13th annual Data Privacy Day on January 28, NCSA and a range of privacy experts will broadcast “Live from LinkedIn” in San Francisco for the third consecutive year. This year’s event, titled “Data Privacy Day 2020: A Vision for the Future,” will bring together data privacy experts from industry, government and nonprofits for a morning of TED-style talks and panels on global and national data privacy regulations, says StaySafeOnline.

“With new privacy legislation going into effect this year, Data Privacy Day 2020 couldn’t be a more timely opportunity for helping businesses and consumers understand the importance of respecting and protecting personal information,” said Kelvin Coleman, executive director of NCSA. “With the amount of consumer data collected and stored online, Data Privacy Day encourages businesses to improve data privacy and security practices and educate consumers about the many ways they can make their personal information more private.”

Steve Durbin, managing director of the Information Security Forum, notes that, “The requirement for maintaining data privacy has increased as privacy regulations have been adopted by many more jurisdictions since they were first introduced. Fines for breaching data privacy regulations have multiplied, and penalties can be more severe than fines. Increased public awareness and media interest have led to potential commercial and reputational consequences for non-compliance. The risk of private data being compromised has increased as systems are increasingly accessible via connected devices and vulnerable to cyber-attacks.”

“With all of the focus on breaches and the loss of personal data, it is understandable that the main attention for organizations today seems to have shifted to data privacy – after all, we are seeing a growth in legislative requirements to protect personal information along with the associated fines and sanctions for non-compliance,” Durbin adds. “Most governments have created regulations that impose conditions on the protection and use of personally identifiable information (PII), with penalties for organizations who fail to sufficiently protect it. As a result, data privacy and the protection of PII, which is afforded protection under the General Data Protection Regulation (GDPR) in the European Union (EU), the California Consumer Privacy Act (CCPA) and the New York Privacy Act, appear to be here to stay.”

“What is clear is that privacy is becoming more of an issue in the United States,” he says. “And there is a very real need for a Federal law to avoid States introducing their own variations and interpretations on privacy which adds a further compliance burden to already overstretched businesses looking to understand and comply with their obligations across the various regions in which they are transacting business. The good news is that the formal enactment of the CCPA is going to add momentum to endeavors within the United States to formalize a sweeping federal law on data privacy.”

Joseph Carson, chief security scientist at Thycotic, says, “The reality today is that almost everyone is being tracked and monitored 24/7 with thousands of cameras recording your expressions, fashion, interactions and speech to determine what you need, what you might be thinking and who you are meeting. Algorithms can even determine what your next action might be.”

“Privacy should be universal,” Carson adds. “However, we tend to have different definitions of privacy in the digital world as opposed to the physical world. EU GDPR has been a ground-breaking change that set new regulations around digital privacy, empowering citizens with clear-cut rights around consent and transparency of their personal information online.”


The Enterprise Guide to Successful AI

In a survey of 1,000 Canadians, 31% of respondents said that companies that use AI in their operations and customer communications are the future.

People recognize the potential of AI and companies can no longer afford to be ignorant. AI is now disrupting every industry and the question is whether established companies will take proactive steps to ensure that disruption doesn’t happen to them.

The long-term path to success with AI requires companies to approach the integration of AI through an “AI Triple Win” framework of utility, privacy/security, and trust.

The AI Triple Win Framework

To achieve business success, the framework incorporates three key, foundational components:

  1. Utility: AI must solve pain points, add value, and serve genuine needs.
  2. Privacy and Security: Companies must incorporate privacy as a fundamental principle in every aspect of their work as opposed to an afterthought, and data must be held safely.
  3. Trust: Companies must achieve AI for Good, not simply AI for profit.

Let’s consider each pillar in more detail.

Pillar #1: Utility

Using AI simply because competitors are using it is a misguided goal. Whether creating utility means answering customer questions within seconds, serving consumers more relevant website ads, creating product delivery efficiencies, or entertaining people while they wait for a taxi, every AI tool must serve a genuine need. Companies must have clarity on the role AI can play in growing their business, and that requires utility.

Within companies that focus on retail and customer service, AI tools help people find clothes that fit properly (Levi’s), and answer questions about products and services (Sephora, Lowe’s). Alibaba, a leader in applying advanced technologies in the retail space, has even employed smart racks and mirrors to help people see themselves in new styles without ever trying the clothes on, a boon for accessibility.

Similarly, within the food and quick-service restaurant (QSR) category, both Campbell’s Soup and Knorr use AI to help customers customize recipes based on ingredients currently in their home. Taco Bell uses a Slack chatbot to take orders. In addition, Domino’s Pizza allows consumers to place orders by sending a message that contains only the word “Pizza.”

Consumers are ready for AI customer experiences

Our research has shown that Canadians feel positive about AI in the customer service space. Many believe that AI has the potential to improve customer service (40%) and can provide the same or better customer service than a person (20%). Further, 59% of people would feel comfortable with AI providing recommendations on what to purchase.

Given that 36% of people say Canadian businesses should invest in using AI technologies to run their business, many consumers are clearly ready for companies to use AI.

Pillar #2: Privacy and Security

Unfortunately, few companies have made the second pillar, privacy, a key differentiator. DuckDuckGo, a search engine that purposefully does not track its users’ movements (unlike Google), is enjoying increased consumer interest. Snips is an up-and-coming voice assistant alternative to Alexa and Siri that focuses on privacy and security. And Purism builds digital technologies with security as the main feature.

What companies can do, however, is make privacy and security key components of their publicly displayed company policies. Plain language allows anyone to understand what data a company is collecting and for what purpose (Apple, Encircle), what changes have been made to privacy policies (Fitbit), and how to withdraw consent for the collection of data (Danske Bank).

Consumers are ready to bring AI into their personal lives

Our research shows that people are comfortable with the possibilities that AI facilitates. People are comfortable trusting AI to regulate the temperature inside their homes (72%), organize their schedules (64%), and provide companionship to people who need it (58%). At the same time, however, people don’t blindly trust brands to respect their privacy and always maintain security. Some 43% of people worry about the AI on their phone, and a whopping 78% believe that AI will further erode privacy.

We’ve already seen that people understand and want the benefits of artificial intelligence in their personal and work lives. They simply want companies to implement those processes in a way that respects their privacy and maintains their security.

Pillar #3: Trust

The third pillar of successful applications of AI is trust, an overriding aim to achieve AI for Good. In today’s world of transparency and instant global communication, revenue grabs are simply not sustainable. Companies must act in ways that are genuinely good for their customers.

Fortunately, many companies build consumer trust not only by providing good quality products and services, but also by actively and intentionally striving to do the right thing. Nike and Under Armour are prime examples in that they have taken a higher-level approach to implementing AI in their business. Rather than simply using AI to facilitate customer service and purchase decisions, Nike and Under Armour mapped AI tools against their mission statements to create apps and virtual assistants that go beyond their products and services and help people lead healthier lives.

Consumers don’t yet trust companies to do the right thing

Unfortunately, companies using AI still have a long way to go to achieve a broader level of trust from consumers. Our research found that:

  • 20% of people believe companies using AI don’t have any ethical standards for AI in place
  • 31% worry companies might misuse AI to their own advantage
  • 41% believe companies using AI are focused on reducing their costs at the expense of people
  • 28% say Canadian businesses will use AI in ways that harm customers financially

Even though technology has impacted our lives for centuries, making millions of jobs extinct (where are the buggy builders and lamplighters today?) and creating millions of new ones (hello, data miners and user experience designers), people still worry that companies using AI will treat people unfairly, causing job losses and personal financial problems. The prediction that AI and robotics will create almost 60 million more jobs than they destroy by 2022 doesn’t always feel personally relevant. People need to trust that companies will treat their employees and their consumers fairly today. […] Read more »


Cybersecurity Weekly: Colorado BEC scam, CyrusOne ransomware, new California privacy law

A town in Colorado loses over $1 million to BEC scammers. Data center provider CyrusOne suffers a ransomware attack. California adopts the strictest privacy law in the United States. All this, and more, in this week’s edition of Cybersecurity Weekly.

1. California adopts strictest privacy law in U.S.

A new privacy rights law took effect on January 1, 2020, governing the way businesses collect and store Californian consumers’ data. The California Consumer Privacy Act mandates strict requirements for companies to notify consumers about how their data will be used and monetized, and to offer them a hassle-free opt-out process.
Read more »

2. Starbucks API key exposed online

Developers at Starbucks recently left an API key exposed that could be used by an attacker to access the company’s internal systems. This issue could allow attackers to execute commands on systems, add/remove users and potentially take over the AWS instance. The security researcher who reported the incident to Starbucks was awarded a $4,000 bounty.
Read more »

3. Cybercriminals filling up on gas pump transaction scams

Gas stations will become liable for card-skimming at their pay-at-the-pump stations starting in October. In the meantime, cybercriminals are targeting these stations with a vengeance, according to security researchers. This is because pay-at-the-pump stations are one of the only PoS systems that don’t yet comply with PCI DSS regulations.
Read more »

4. Travelex currency exchange suspends services after malware attack

On New Year’s Eve, the U.K.-based currency exchange Travelex was forced to shut down its services as a “precautionary measure” in response to a malware attack. The company is manually processing customer requests while the network stays down during the incident response and recovery process.
Read more »

5. Xiaomi cameras connected to Google Nest expose video feeds from others

Google temporarily banned Xiaomi devices from its Nest Hub following a security incident with the Chinese camera manufacturer. Several posts on social media over the past week have showcased users gaining access to other random security cameras. Google warned users to unlink their cameras from their Nest Hub until a patch arrives.
Read more »

6. Colorado town wires over $1 million to BEC scammers

The Town of Erie, Colorado, recently lost more than $1 million to a business email compromise attack after scammers used an electronic payment information form on the town’s own website to request a change to the payment information on the building contract for a nearby bridge construction project.
Read more »

7. Maze ransomware operators sued for publishing victim’s stolen data

The anonymous hackers behind the Maze ransomware are being sued for illegally accessing a victim’s network, stealing data, encrypting computers and publishing the stolen data after a ransom was not paid. Lawyers say the lawsuit may serve to reserve the victim’s claim to monetary damages if money is later recovered by the government.
Read more »

8. Landry’s restaurant chain suffers payment card theft via PoS malware

A malware attack struck point-of-sale systems at the Landry’s restaurant chain, allowing cybercriminals to steal customers’ credit card information. Due to the end-to-end encryption technology used by the company, attackers were only able to steal payment data “in rare circumstances.” […] Read more »



Watch Out: 7 Digital Disruptions for IT Leaders

Here are seven digital disruptions that you may not see coming.

Be like Apple, not Kodak. Years ago, Kodak was the first to offer digital film. But instead of pursuing the market that would disrupt the one it already commanded, Kodak doubled down on its traditional business, buying a chemical company to support conventional film. Other companies went on to market digital film, then digital cameras, then mobile devices with cameras built in. Kodak chose the wrong path.

Apple went down the path of disrupting its own successful product, the iPod MP3 player, to develop and sell the iPhone. It turned out to be the right decision.

Gartner VP, analyst and chief fellow Daryl Plummer recounted these stories in the introduction to his keynote address, “7 Digital Disruptions You Might Not See Coming,” at the recent Gartner IT Symposium. So how can you be Apple instead of Kodak?

“It’s really about protecting yourself from what might happen to you,” Plummer said. “Futureproofing yourself means that you are ready for the things that are coming, and even if you don’t know what they are, you can adapt.”

What disruptions may be coming down the pike that you aren’t expecting? Plummer provided a peek at the following seven digital disruptions that you may not see coming:

1. Emotional experiences

Inexpensive sensors can now track physical biometrics, and organizations are working on providing hyper-personalized digital experiences, according to Gartner. The firm is forecasting that by 2024, AI identification of emotions will influence more than half of the online ads that you see.

This trend will reach beyond marketing to consumers. It could also be used in HR applications and be applied to employee evaluations, for instance.

Gartner recommends that CIOs identify emotional trigger-based opportunities with employees and customers, add emotional states evaluation to 360 Review processes, and mitigate privacy concerns with opt-in for-pay emotion mining.

2. AI decency, trust, and ethics

How do we know that the decisions AI is making are fair when there are many examples of questionable results that exhibit bias? What about fake news and deep fakes? Plummer said that this trend will disrupt trust models, certification of developers, auditing rules, and societal norms for trust. Gartner is predicting that by 2023, a self-regulating association for oversight of AI and machine learning designers will be established in at least four of the G7 countries.

CIOs should prescribe principles that establish an AI trust framework for developers and users.

3. Distributed cloud

Plummer said that in its most basic form, this trend means that the responsibility for cloud will shift entirely to the provider. About 75% of private clouds won’t work out in the long run because the DIY effort won’t match what is available in the public cloud. OpenShift, Cloud Foundry, and Azure Stack are taking us along this path to distributed cloud.

The trend will disrupt private cloud, hybrid cloud, data location, and data residency.

CIOs should demand packaged hybrid services, identify latency-sensitive use cases, and request explanation of economics of cloud operations.

4. Democratization of space

While it cost 4% of the entire U.S. budget to put a man on the moon, putting a satellite into orbit now costs just $300,000, Plummer said. That has led to low Earth orbit getting mighty crowded with hundreds of satellites. It also raises a host of new questions: What rules apply to data residency in space? What laws apply? What about crime in space? Countries and companies will be competing in space, and the cheaper it gets to launch a satellite, the more crowded orbit will become.

This trend will disrupt the economics of space-based systems, connectivity, and legal issues.

Technology providers will need to explore low Earth orbit (LEO) connectivity options as space-based compute options become real.

5. Augmented humans

People will have technology such as chips and storage embedded in their bodies, and it will drive disruptions such as PC thought control, brain computer interfaces, and mind-link technology.

To prepare, tech providers should enhance disabled access to compute technology using brain-computer interfaces and begin the shift from lifestyle to lifeline technologies, according to Gartner. […] Read more »


Trial Before the Fire: How to Test Your Incident Response Plan to Ensure Consistency and Repeatability

While many organizations go to great lengths to set up effective security operations incident response plans, few proactively test their processes to ascertain how they will work when faced with a real threat.

Fifty-nine percent of incident response (IR) professionals admit that their organizations follow a reactive approach, according to a report from Carbon Black. Essentially, teams assume their processes work reasonably well to address the incident at hand … until they don’t. While organizations must have IR plans in place, it’s even more important that they a) work consistently and b) are updated and improved over time.

Testing incident response processes within the security operations center (SOC) should yield two important results: a clear understanding of whether your plan is likely to work and a list of gaps that should be addressed. There is no point in testing if the findings play no role in optimizing your processes.

Lessons learned from your tests must be properly documented for them to have real, lasting value for your security operations team. Plus, you don’t want to find out your emergency plans don’t work when disaster strikes. What makes sense on paper or the whiteboard often doesn’t work as planned when put into practice.

Schools run fire drills, so everyone knows what to do when the bells go off. So, why aren’t we applying this logic more broadly in cybersecurity?

What is incident response?

IR refers to the systematic response to and management of events following a cyberattack or data breach. It involves a series of actions and activities aimed at reducing the impact of such an event.

A typical IR plan includes six phases, which help the affected organization contain an incident and recover from it: preparation, identification, containment, eradication, recovery and lessons learned.
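The six phases form an ordered lifecycle, and teams sometimes encode that ordering in tooling so an incident ticket can only advance through the phases in sequence. A minimal sketch in Python (the names and the `next_phase` helper are illustrative, not from any specific IR product):

```python
from enum import Enum


class IRPhase(Enum):
    """The six phases of a typical incident response plan, in order."""
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    LESSONS_LEARNED = 6


def next_phase(current: IRPhase):
    """Return the phase that follows `current`, or None once
    lessons learned have been captured and the incident is closed."""
    members = list(IRPhase)
    idx = members.index(current)
    return members[idx + 1] if idx + 1 < len(members) else None
```

Modeling the phases explicitly makes it harder to skip a step; for instance, a ticketing integration could refuse to mark an incident as recovered before containment and eradication are recorded.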

When building an effective IR plan, security teams should determine the following:

  • The purpose of the plan.
  • Details on how to use the plan.
  • Your ability to respond to different incident types – including unauthorized access, malicious code, denial of service and inappropriate usage – and whether your information assets would be affected by such events.
  • Event handling protocols for each incident type and how to respond. This should include a checklist of which playbook needs to be triggered in the event of a cyberattack or breach. (A playbook, also known as a runbook, is common to the SOC and defines the flow of activities associated with a specific security issue and subsequent investigation and response. The goal is to build a consistent set of activities followed in every case, no matter the analyst assigned to it.)
  • Your ability to set up a “war room” for critical decision makers to receive and share information across the organization.
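The playbook checklist above (which playbook fires for which incident type) can be sketched as a simple mapping from incident type to an ordered list of activities. All names here are hypothetical placeholders, not part of any real SOC platform:

```python
# Hypothetical mapping of incident types to response playbooks.
# Each playbook is the ordered set of activities every analyst follows,
# so the response is consistent no matter who is assigned.
INCIDENT_PLAYBOOKS = {
    "unauthorized_access": ["disable_account", "reset_credentials", "review_access_logs"],
    "malicious_code":      ["isolate_host", "capture_memory_image", "run_av_scan"],
    "denial_of_service":   ["enable_rate_limiting", "contact_upstream_provider"],
    "inappropriate_usage": ["notify_hr", "preserve_evidence"],
}


def trigger_playbook(incident_type: str):
    """Return the ordered activity list for an incident type.

    Raising on unknown types forces the plan to be updated whenever a
    new incident category appears, instead of failing silently.
    """
    try:
        return INCIDENT_PLAYBOOKS[incident_type]
    except KeyError:
        raise ValueError(f"No playbook defined for incident type: {incident_type}")
```

For example, `trigger_playbook("malicious_code")` returns the containment steps in the order they should be executed, which is exactly the kind of lookup a paper test or tabletop exercise should verify against the written plan.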

Testing the waters

Once you have a clear, documented plan in place, you should periodically test it through simulations to assess effectiveness and make continuous improvements. So, how can you put your processes to the test? Most security operations teams today use three methods:

1)     Paper tests

Paper tests are the most theoretical option and likely the first step for security operations teams that don’t have well-documented processes. However, paper tests leave too much room for error and should only be used to look for small process changes.

2)     Tabletop exercises

These scenarios consist of company stakeholders sitting around a, you guessed it, table and running through a mock security event. While these exercises may appear informal, you should prepare well in advance, make sure the right individuals from across the organization participate, and keep the scenario as realistic as possible. Allow for up to half a day to put key processes through their paces and troubleshoot as you go.

3)     Simulated attacks

The most effective way to pressure test your processes is to simulate a real-world attack to see how your organization will respond. […] Read more »