GDPR

Digital thought clones manipulate real-time online behavior

In The Social Dilemma, the Netflix documentary that has been in the news recently for its radical revelations, former executives at major technology companies like Facebook, Twitter, and Instagram share how their ex-employers developed sophisticated algorithms that not only predict users’ actions but also know which content will keep them hooked on their platforms.


That technology companies prey on their users’ digital activities without their consent or awareness is well known. But Associate Professor Jon Truby and Clinical Assistant Professor Rafael Brown at the Centre for Law and Development at Qatar University have pulled back the curtain on another element that technology companies are pursuing to the detriment of people’s lives, and investigated what we can do about it.

“We had been working on the digital thought clone paper a year before the Netflix documentary aired. So, we were not surprised to see the story revealed by the documentary, which affirms what our research has found,” says Prof Brown, one of the co-authors.

Their paper identifies “digital thought clones”: digital twins that constantly collect personal data in real time, then analyze that data to predict and manipulate people’s decisions.

Activity from apps, social media accounts, gadgets, GPS tracking, online and offline behavior and activities, and public records are all used to formulate what they call a “digital thought clone”.

Processing personalized data to test strategies in real-time

The paper defines a digital thought clone as “a personalized digital twin consisting of a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes.”

“Currently existing or future artificial intelligence (AI) algorithms can then process this personalized data to test strategies in real-time to predict, influence, and manipulate a person’s consumer or online decisions using extremely precise behavioral patterns, and determine which factors are necessary for a different decision to emerge and run all kinds of simulations before testing it in the real world,” says Prof Truby, a co-author of the study.

An example is predicting whether a person will make the effort to compare online prices for a purchase, and if they do not, charging a premium for their chosen purchase. This digital manipulation reduces a person’s ability to make choices freely.

Outside of consumer marketing, imagine if financial institutions used digital thought clones to make financial decisions, such as judging whether a person would repay a loan.

What if insurance companies judged medical insurance applications by predicting the likelihood of future illnesses based on diet, gym membership, the distance applicants walk in a day (drawn from their phone’s location history), their social circle (generated from their phone contacts and social media groups), and other variables?

The authors suggest that the current views on privacy, where information is treated either as a public or private matter or viewed in contextual relationships of who the information concerns and impacts, are outmoded.

A human-centered framework is needed

A human-centered framework is needed, where a person can decide from the very beginning of their relationship with digital services whether their data should be protected forever or until they freely waive that protection. This rests on two principles: the ownership principle, under which data belongs to the person and certain data is inherently protected; and the control principle, which requires that individuals be allowed to change what type of data is collected and whether it is stored. In this framework, people must be asked beforehand whether their data can be shared with any entity they have not authorized.

The European Union’s landmark General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) of 2018 can serve as a foundation for governments everywhere to legislate on digital thought clones and all that they entail.

But the authors also raise critical moral and legal questions over the status of these digital thought clones. “Does privacy for humans mean their digital clones are protected as well? Are users giving informed consent to companies if their terms and conditions are couched in misleading language?” asks Prof Truby.

A legal distinction must be made between the digital clone and the biological source. Whether the digital clone can be said to have attained consciousness will be relevant to the inquiry but far more important would be to determine whether the digital clone’s consciousness is the same as that of the biological source.

The world is at a crossroads: should it continue to do nothing and allow for total manipulation by the technology industry or take control through much-needed legislation to ensure that people are in charge of their digital data? It’s not quite a social dilemma.

Four easy steps for organizations to hand over data control

To stay connected with patients, healthcare providers are turning to telehealth services. In fact, 34.5 million telehealth services were delivered from March through June, according to the Centers for Medicare and Medicaid Services. The shift to remote healthcare has also affected the rollout of new regulations that would give patients secure and free access to their health data.


The shift to online services shines a light on a major cybersecurity issue within all industries (but especially healthcare, where people have zero control over their data): consent.

Hand over data control

Data transparency allows people to know what personal data has been collected, what data an organization wants to collect, and how it will be used. Data control gives the end-user choice and authority over what is collected and even where it is shared. Together, the two create a competitive edge: 85% of consumers say they will take their business elsewhere if they do not trust how a company is handling their data.

Regulations such as the GDPR and the CCPA have been enacted to hold companies accountable unlike ever before – providing greater protection, transparency and control to consumers over their personal data.

The U.S. Department of Health and Human Services’ (HHS) regulation, which is set to go into effect in early 2021, would provide interoperability, allowing patients to access, share and manage their healthcare data as they do their financial data. Healthcare organizations must provide people with control over their data and where it goes, which in turn strengthens trust.

How to earn patients’ trust

Organizations must improve their ability to earn patients’ confidence and trust by putting comprehensive identity and access management (IAM) systems in place. Such systems need to offer the ability to manage privacy settings, account for data download and deletion, and enable data sharing with not just third-party apps but also other people, such as additional care providers and family members.

The right digital identity solution should orchestrate user identity journeys, such as registration and authentication, in a way that unifies security configuration with user experience choices.

It should also enable the healthcare organization to protect patients’ personal data while offering end-users unified control over their data consents and permissions. Below are the four key steps companies should take to earn trust when users hand over data control; a minimal consent-ledger sketch follows the list:

  • Identify where digital transformation opportunities and user trust risks intersect. Since users are becoming more skeptical, organizations must analyze “trust gaps” while they are discovering clever new ways to leverage personal data.
  • Consider personal data as a joint asset. It’s easy for a company to say consumers own their own personal data, but business leaders have incentives to leverage that data for the value it brings to their business. This changes the equation. All the stakeholders within an organization need to come together and view data as a joint asset in which all parties, including end-users, have a stake.
  • Lean into consent. Given the realities of regulations, a business can often choose to ask end-users for consent rather than simply collecting and using data. Seek to offer the option: it builds trust with skeptical consumers and helps prove your right to use that data.
  • Take advantage of consumer identity and access management (CIAM) for building trust. Identity management platforms automate and provide visibility into the entire customer journey across many different applications and channels. They also allow end-users to retain the controls to manage their own profiles, passwords, privacy settings and personal data.
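To make the consent and control principles concrete, here is a minimal sketch of an append-only consent ledger of the kind a CIAM system might keep. All class, field, and purpose names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative consent ledger; names and fields are hypothetical,
# not any specific CIAM product's API.

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # e.g., "share-with-care-provider"
    granted: bool
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentLedger:
    """Append-only log, so every grant and revocation stays auditable."""

    def __init__(self):
        self._records = []

    def record(self, user_id, purpose, granted):
        self._records.append(ConsentRecord(user_id, purpose, granted))

    def is_granted(self, user_id, purpose):
        # The most recent decision for this user and purpose wins.
        for rec in reversed(self._records):
            if rec.user_id == user_id and rec.purpose == purpose:
                return rec.granted
        return False  # no consent on file means no consent

ledger = ConsentLedger()
ledger.record("patient-42", "share-with-care-provider", True)
ledger.record("patient-42", "share-with-care-provider", False)  # revoked
print(ledger.is_granted("patient-42", "share-with-care-provider"))  # False
```

The append-only design mirrors the “lean into consent” step: being able to show when consent was granted or withdrawn is what lets a business prove its right to use the data.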

Providing data transparency and data control to the end-user enhances the relationship between business and consumer. Organizations can achieve this trust with consumers in a comprehensive fashion by applying consumer identity and access management that scales across all of their applications. To see these benefits before mandates like the HHS rule go into effect, organizations need to act now.

CPRA: More opportunity than threat for employers

Increasingly demanded by consumers, data privacy laws can create onerous burdens on even the most well-meaning businesses. California presents plenty of evidence to back up this statement, as more than half of organizations that do business in California still aren’t compliant with the California Consumer Privacy Act (CCPA), which went into effect earlier this year.


As companies struggle with their existing compliance requirements, many fear that a new privacy ballot initiative – the California Privacy Rights Act (CPRA) – could complicate matters further. While it’s true that if passed this November, the CPRA would fundamentally change the way businesses in California handle both customer and employee data, companies shouldn’t panic. In fact, this law presents an opportunity for organizations to change their relationship with employee data to their benefit.

CPRA, the Californian GDPR?

Set to appear on the November 2020 ballot, the CPRA, also known as CCPA 2.0 or Prop 24 (its name on the ballot), builds on what is already the most comprehensive data protection law in the US. In essence, the CPRA will bring data protection in California nearer to the current European legal standard, the General Data Protection Regulation (GDPR).

In the process of “getting closer to the GDPR,” the CCPA would gain substantial new components. Besides enhancing consumer rights, the CPRA also creates new provisions for employee data as it relates to employers, as well as for data that businesses collect from B2B partners.

Although controversial, the CPRA is likely to pass. August polling shows that more than 80% of voters support the measure. However, many businesses do not. This is because, at first glance, the CPRA appears to create all kinds of legal complexities in how employers can and cannot collect information from workers.

Fearful of having to meet the same demanding requirements as their European counterparts, many organizations instinctively recoil at the prospect of the CPRA becoming law. In reality, if the CPRA passes, it might not be as scary as some businesses think.

CPRA and employment data

The CPRA is actually a lot more lenient than the GDPR in how it polices the relationship between employers and employees’ data. Unlike its EU equivalent, the proposed Californian law already contains many exceptions acknowledging that worker-employer relations are not like consumer-vendor relations.

Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners.

However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements.

Nonetheless, employers should act now

While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.

This is especially pertinent now that businesses are collecting more data than ever on their employees. With workplace monitoring company Prodoscore reporting a 600% rise in interest from prospective customers since the pandemic began, we are seeing rapid growth in companies looking to monitor how, where, and when their employees work.

This trend emphasizes the fact that the information flow between companies and their employees is mostly one-sided (i.e., from the worker to the employer). Currently, businesses have no legal requirement to be transparent about this information exchange. That will change for California-based companies if the CPRA comes into effect and they will have no choice but to disclose the type of data they’re collecting about their staff.

The only sustainable solution for impacted businesses is to be transparent about their data collection with employees and work towards creating a “culture of privacy” within their organization.

Creating a culture of privacy

Rather than viewing employee data privacy as some perfunctory obligation where the bare minimum is done for the sake of appeasing regulators, companies need to start thinking about worker privacy as a benefit. Presented as part of a benefits package, comprehensive privacy protection is a perk that companies can offer prospective and existing employees.

Privacy benefits can include access to privacy protection services that extend beyond the workplace. Packaged alongside privacy awareness training and education, these can form a “privacy plus” benefits package offered to employees alongside standard perks like health or retirement plans. Doing so will build a culture of privacy that helps companies stay in regulatory compliance while also making it easier to attract qualified talent and retain workers.

It’s also worth bearing in mind that creating a culture of privacy doesn’t necessarily mean that companies have to stop monitoring employee activity. In fact, employees are less worried about being watched than about the possibility of their employers misusing their data. Their fears are well-founded: although over 60% of businesses today use workforce data, only 3 in 10 business leaders are confident that this data is treated responsibly.

For this reason, companies that want to keep employee trust and avoid bad PR need to prioritize transparency. This could mean drawing up a “bill of rights” that lets employees know what data is being collected and how it will be used.

Research into employee satisfaction backs up the value of transparency. Studies show that while only 30% of workers are comfortable with their employer monitoring their email, the number of employees open to the use of workforce data goes up to 50% when the employer explains the reasons for doing so. This number further jumps to 92% if employees believe that data collection will improve their performance or well-being or come with other personal benefits, like fairer pay.

On the other hand, most employees would leave an organization if its leaders did not use workplace data responsibly. Moreover, 55% of candidates would not even apply for a job with such an organization in the first place.

Final thoughts

With many exceptions for workplace data management already built in and more likely to come down the line, most employers should be able to navigate the CPRA’s stipulations with relative ease.

That being said, if it becomes law this November, employers shouldn’t squander the two-year window they have to prepare for new compliance requirements. Rather than seeing this time as breathing space before a regulatory crackdown, organizations should use it to be proactive in their approach to how they manage their employees’ data. Beyond simply ensuring they comply with the law, businesses should look at how they can turn employee privacy into an asset.

As data privacy stays at the forefront of employees’ minds, businesses that can show they have a genuine privacy culture will be able to gain an edge when it comes to attracting and retaining talent and, ultimately, coming out on top.

Phishers are targeting employees with fake GDPR compliance reminders

Phishers are using a bogus GDPR compliance reminder to trick recipients – employees of businesses across several industry verticals – into handing over their email login credentials.


The lure

“The attacker lures targets under the pretense that their email security is not GDPR compliant and requires immediate action. For many who are not versed in GDPR regulations, this phish could be merely taken as more red tape to contend with rather than being identified as a malicious message,” Area 1 Security researchers noted.

In this evolving campaign, the attackers targeted mostly email addresses they could glean from company websites and, to a lesser extent, emails of people who are high in the organization’s hierarchy (execs and upper management).

Any pretense can serve a phishing email, but when targeting businesses, the lure is most effective when it can pass as an email sent from inside the organization. The attackers therefore attempted to make it look like the email was coming from the company’s “security services”, though some initial mistakes on their part would reveal to careful targets that it was sent from an outside email account (a Gmail address).

“On the second day of the campaign the attacker began inserting SMTP HELO commands to tell receiving email servers that the phishing message originated from the target company’s domain, when in fact it came from an entirely different origin. This is a common tactic used by malicious actors to spoof legitimate domains and easily bypass legacy email security solutions,” the researchers explained.
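As a rough illustration of what a receiving server can do about this, the sketch below compares the HELO identity recorded in a (simplified) Received header against the reverse DNS of the connecting host. The header format, function, and domain names are assumptions for the example; in practice this job belongs to SPF, DKIM, and DMARC validation.

```python
import re

# Toy check: flag a message whose HELO identity claims our domain while
# the connecting host (per our own Received header) resolves elsewhere.
# The header format here is simplified for illustration; production
# mail filtering should rely on SPF/DKIM/DMARC verdicts instead.

RECEIVED_RE = re.compile(
    r"from\s+(?P<helo>\S+)\s+\((?P<rdns>\S+)\s+\[(?P<ip>[0-9.]+)\]\)")

def helo_mismatch(received_header, our_domain):
    m = RECEIVED_RE.search(received_header)
    if not m:
        return False  # can't tell from this header; defer to other layers
    claims_us = m.group("helo").endswith(our_domain)
    resolves_elsewhere = not m.group("rdns").endswith(our_domain)
    return claims_us and resolves_elsewhere

hdr = "from mail.target-corp.example (relay7.unrelated-host.example [203.0.113.7])"
print(helo_mismatch(hdr, "target-corp.example"))  # True -> suspicious
```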

The phishing site

Following the link in the email takes victims to the phishing page, initially hosted on a compromised, outdated WordPress site.

The link is “personalized” with the target’s email address, so the HTML form on the malicious webpage auto-populates the username field with the correct email address (found in the URL’s “email” parameter). Despite the “generic” look of the phishing page, this capability can convince some users to log in.
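The mechanics behind that prefill are simple, as the hedged sketch below shows. The “email” parameter name comes from the campaign as described; the URL and function are invented for illustration.

```python
from urllib.parse import urlparse, parse_qs

# The phishing link carries the victim's address in its "email" query
# parameter; the page reads it back out and drops it into the login
# form's username field. The URL below is a placeholder.

def prefill_username(phishing_url):
    query = parse_qs(urlparse(phishing_url).query)
    return query.get("email", [""])[0]

url = "https://compromised-blog.example/login.html?email=victim@target-corp.example"
print(prefill_username(url))  # victim@target-corp.example
```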

Once the password is submitted, a script sends the credentials to the phishers and the victim is shown an error page.

As always, users/employees are advised not to click on links in unsolicited emails and to avoid entering their credentials into unfamiliar login pages.

The state of GDPR compliance in the mobile app space

Among the rights bestowed upon EU citizens by the General Data Protection Regulation (GDPR) is the right to access their personal data stored by companies (i.e., data controllers) and information about how this personal data is being processed. A group of academics from three German universities has decided to investigate whether and how mobile app vendors respond to subject access requests, and the results of their four-year undercover field study are dispiriting.

The results of the study

“In three iterations between 2015 and 2019, we sent subject access requests to vendors of 225 mobile apps popular in Germany. Throughout the iterations, 19 to 26 % of the vendors were unreachable or did not reply at all. Our subject access requests were fulfilled in 15 to 53 % of the cases, with an unexpected decline between the GDPR enforcement date and the end of our study,” they shared.

“The remaining responses exhibit a long list of shortcomings, including severe violations of information security and data protection principles. Some responses even contained deceptive and misleading statements (7 to 13 %). Further, 9 % of the apps were discontinued and 27 % of the user accounts vanished during our study, mostly without proper notification about the consequences for our personal data.”


The researchers – Jacob Leon Kröger from TU Berlin (Weizenbaum Institute), Jens Lindemann from the University of Hamburg, and Prof. Dr. Dominik Herrmann from the University of Bamberg – made sure to test a representative sample of iOS and Android apps: popular and less popular, from a variety of app categories, and from vendors based in Germany, the EU, and outside of the EU.

They disguised themselves as an ordinary German user, created accounts needed for the apps to work, interacted with each app for about ten minutes, and asked app providers for information about their stored personal data (before and after GDPR enforcement).

They also used a different request text for each round of inquiries. The first was more informal, while the last two were more elaborate and included references to relevant data protection laws and a warning that the responsible data protection authorities would be notified if no response was received.

“While we cannot precisely determine their individual influence, it can be assumed that both the introduction of the GDPR as well as the more formal and threatening tone of our inquiry in [the latter two inquiries] had an impact on the vendors’ behavior,” they noted.

Solving the problem

Smartphones are ubiquitous and most users use a variety of mobile apps, which usually collect personal user data and share it with third parties.

In theory, the GDPR should force mobile app vendors to give users information about this data and how it’s used. In practice, though, many app vendors are evidently hoping that users won’t care enough to make a stink when they don’t receive a satisfactory reply, and that GDPR regulators won’t have the resources to enforce the regulation.

“We (…) suspected that some vendors merely pretended to be poorly reachable when they received subject access requests – while others actually had insufficient resources to process incoming emails,” the researchers noted.

“To confirm this hypothesis, we tested how the vendors that failed to respond to our requests reacted to non-privacy related inquiries. Using another (different) fake identity, we emailed the vendors who had not replied [to the first inquiry] and [to the third inquiry], expressing interest in promoting their apps on a personal blog or YouTube channel. Out of the group of initial non-responders, 31 % [first inquiry] and 22 % [third inquiry] replied to these dummy requests, many of them within a few hours, proving that their email inbox was in fact being monitored.”

The researchers believe the situation for users can be improved by authorities doing random compliance checks and offering better support for data controllers through industry-specific guidelines and best practices.

“In particular, there should be mandatory standard interfaces for providing data exports and other privacy-related information to data subjects, obviating the need for the manual processing of GDPR requests,” they concluded.
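No such interface standard exists yet, but a machine-readable subject access response might look something like the sketch below. Every field name here is hypothetical, offered only to make the researchers’ suggestion tangible.

```python
import json

# Hypothetical, illustrative schema for a standardized subject access
# export; nothing like this is currently mandated by the GDPR.

export = {
    "data_subject": "user-1234",
    "controller": "ExampleApp GmbH",
    "generated_on": "2020-09-01",
    "data_categories": {
        "account": {"email": "user@example.org", "created": "2019-03-01"},
        "usage": [{"event": "login", "at": "2019-04-02T10:15:00Z"}],
    },
    "processing_purposes": ["service provision", "analytics"],
    "third_party_recipients": ["analytics-provider.example"],
}

print(json.dumps(export, indent=2))
```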

How AI can alleviate data lifecycle risks and challenges

The volume of business data worldwide is growing at an astounding pace, with some estimates showing the figure doubling every year. Over time, every company generates and accumulates a massive trove of data, files and content – some inconsequential and some highly sensitive and confidential in nature.

Throughout the data lifecycle there are a variety of risks and considerations to manage. The more data you create, the more you must find a way to track, store and protect against theft, leaks, noncompliance and more.

Faced with massive data growth, most organizations can no longer rely on manual processes for managing these risks. Many have instead adopted a vast web of tracking, endpoint detection, encryption, access control and data policy tools to maintain security, privacy and compliance. But, deploying and managing so many disparate solutions creates a tremendous amount of complexity and friction for IT and security teams as well as end users. The problem with this approach is that it comes up short in terms of the level of integration and intelligence needed to manage enterprise files and content at scale.

Let’s explore several of the most common data lifecycle challenges and risks businesses are facing today and how to overcome them:

Maintaining security – As companies continue to build up an ocean of sensitive files and content, the risk of data breaches grows exponentially. Smart data governance means applying security across the points at which the risk is greatest. In just about every case, this includes both ensuring the integrity of company data and content, as well as any user with access to it. Every layer of enterprise file sharing, collaboration and storage must be protected by controls such as automated user behavior monitoring to deter insider threats and compromised accounts, multi-factor authentication, secure storage in certified data centers, and end-to-end encryption, as well as signature-based and zero-day malware detection.

Classification and compliance – Gone are the days when organizations could require users to label, categorize or tag company files and content, or task IT to manage and manually enforce data policies. Not only is manual data classification and management impractical, it’s far too risky. You might house millions of files that are accessible by thousands of users – there’s simply too much, spread out too broadly. Moreover, regulations like GDPR, CCPA and HIPAA add further complexity to the mix, with intricate (and sometimes conflicting) requirements. The definition of PII (personally identifiable information) under GDPR alone encompasses potentially hundreds of pieces of information, and one mistake could result in hefty financial penalties.

Incorrect categorization can lead to a variety of issues, including data theft and regulatory penalties. Fortunately, machines can do in seconds, and often with better accuracy, what might take a human years. AI and ML technologies are helping companies quickly scan files across data repositories to identify sensitive information such as credit card numbers, addresses, dates of birth, social security numbers, and health-related data, and to apply automatic classifications. They can also track files across popular data sources such as OneDrive, Windows File Server, SharePoint, Amazon S3, Google Cloud, GSuite, Box, Microsoft Azure Blob, and generic CIFS/SMB repositories to better visualize and control your data.
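To give a flavor of how such scanning works under the hood, here is a deliberately minimal sketch: regular expressions propose candidates, and a Luhn checksum weeds out digit strings that cannot be valid card numbers. The patterns are simplified assumptions; real classifiers layer ML models and context on top.

```python
import re

# Minimal PII-scanning sketch. Regexes find candidate SSNs and card
# numbers; the Luhn checksum filters out digit runs that can't be
# real card numbers. Production systems add ML and context signals.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate):
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def classify(text):
    findings = [("ssn", m.group()) for m in SSN_RE.finditer(text)]
    findings += [("card", m.group()) for m in CARD_RE.finditer(text)
                 if luhn_ok(m.group())]
    return findings

print(classify("card 4111 1111 1111 1111, ssn 078-05-1120"))
```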

Retention – As data storage costs have plummeted over the past 10 years, many organizations have fallen into the trap of simply “keeping everything” because it’s (deceptively) cheap to do so. This approach carries many security and regulatory risks, as well as potential costs. Our research shows that exposure of just a single terabyte of data could cost you $129,324; now think about how many terabytes of data your organization stores today. The longer you retain sensitive files, the greater the opportunity for them to be compromised or stolen.

Certain types of data must be stored for a specific period of time in order to adhere to various customer contracts and regulatory criteria. For example, HIPAA regulations require organizations to retain documentation for six years from the date of its creation. GDPR is less specific, stating that data shall be kept for no longer than is necessary for the purposes for which it is being processed.

Keeping data any longer than absolutely necessary is not only risky, but those “affordable” costs can add up quickly. AI-enabled governance can track these set retention periods and minimize risk by automatically securing or eliminating old or redundant files that are no longer required (or allowed). With streamlined data retention processes, you can decrease storage costs, reduce security and noncompliance exposure, and optimize data processing performance.
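As a concrete illustration, retention enforcement can be as simple as sweeping storage for files past their period. This sketch keys expiry off file modification time and only flags candidates; the directory, period, and review-before-delete choice are all assumptions for the example.

```python
import time
from pathlib import Path

# Sketch: flag files older than a retention period, keyed off mtime.
# Real systems key off record creation dates and legal-hold status,
# and would route hits to review rather than printing them.

RETENTION_DAYS = 6 * 365                      # e.g., a HIPAA-style six years
CUTOFF = time.time() - RETENTION_DAYS * 86_400

def past_retention(root):
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < CUTOFF:
            yield path

for stale in past_retention("/srv/records"):  # placeholder path
    print(f"past retention, review for deletion: {stale}")
```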

Ongoing monitoring and management – Strong governance gets easier with good data hygiene practices over the long term, but with so many files to manage across a variety of repositories and storage platforms, it can be challenging to track risks and suspicious activities at all times. Defining dedicated policies for which data types can be stored in which locations, which users can access them, and the parties with which they may be shared will help you focus your attention on further minimizing risk. AI can multiply these efforts by eliminating manual monitoring processes, providing better visibility into how data is being used, and alerting when sensitive content may have been shared externally or with unapproved users. This makes it far easier to identify and respond to threats and risky behavior, enabling you to take immediate action on compromised accounts and to move or delete sensitive content that is being shared too broadly or stored in unauthorized locations.
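As a small illustration of the “dedicated policies” idea, the sketch below evaluates a sharing event against a declarative policy. The policy keys, event fields, and values are all invented for the example.

```python
# Illustrative policy check for data location and external sharing.
# Policy keys, event fields, and names are invented for the example.

POLICY = {
    "allowed_locations": {"emea-dc1", "us-east-vault"},
    "external_sharing_ok": {"public-marketing"},   # data classes
}

def violations(event):
    issues = []
    if event["location"] not in POLICY["allowed_locations"]:
        issues.append(f"stored in unapproved location: {event['location']}")
    if (event["shared_externally"]
            and event["data_class"] not in POLICY["external_sharing_ok"]):
        issues.append(f"data class shared externally: {event['data_class']}")
    return issues

print(violations({
    "location": "shadow-nas",
    "shared_externally": True,
    "data_class": "patient-records",
}))
```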

The key to data lifecycle management

The sheer volume of data, files and content businesses are now generating and managing creates massive amounts of complexity and risk. You have to know what assets exist, where they’re stored, which users have access to them, when they’re being shared, which files can be deleted, which need to be retained in accordance with regulatory requirements, and so on. Falling short in any one of these areas can lead to major operational, financial and reputational consequences.

Fortunately, recent advances in AI and ML are enabling companies to streamline data governance to find and secure sensitive data at its source, sense and respond to potentially malicious behaviors, maintain compliance and adapt to changing regulatory criteria, and more. As manual processes and piecemeal point solutions fall short, AI-enabled data governance will continue to dramatically reduce complexity for both users and administrators, and deliver the level of visibility and control that businesses need in today’s data-centric world.

340 GDPR fines for a total of €158,135,806 issued since May 2018

Since the GDPR’s rollout in May 2018, European data protection authorities have issued 340 fines. Every one of the 28 EU nations, plus the United Kingdom, has issued at least one GDPR fine, Privacy Affairs finds.


Whilst GDPR sets out the regulatory framework that all EU countries must follow, each member state legislates independently and is permitted to interpret the regulations differently and impose its own penalties on organizations that break the law.

Nations with the highest fines

  • France: €51,100,000
  • Italy: €39,452,000
  • Germany: €26,492,925
  • Austria: €18,070,100
  • Sweden: €7,085,430
  • Netherlands: €3,490,000
  • Spain: €3,306,771
  • Bulgaria: €3,238,850
  • Poland: €1,162,648
  • Norway: €985,400

Nations with the most fines

  • Spain: 99
  • Hungary: 32
  • Romania: 29
  • Germany: 28
  • Bulgaria: 21
  • Czech Republic: 13
  • Belgium: 12
  • Italy: 11
  • Norway: 9
  • Cyprus: 8

The second-highest number of fines comes from Hungary, whose National Authority for Data Protection and Freedom of Information has issued 32 fines to date. The largest, €288,000, was issued to an ISP for improper and non-secure storage of customers’ personal data.

UK organizations have been issued just seven fines, totalling over €640,000, by the Information Commissioner. The average penalty within the UK is €160,000. This does not include the potentially massive fines for Marriott International and British Airways that are still under review.

British Airways could face a fine of €204,600,000 for a 2018 data breach that compromised the personal data of 500,000 customers.

Similarly, Marriott International suffered a breach that exposed 339 million people’s data. The hotel group faces a fine of €110,390,200.

The largest GDPR fines

The largest GDPR fine to date was issued to Google by French authorities in January 2019. The €50 million penalty was imposed for “lack of transparency, inadequate information and lack of valid consent regarding ads personalization.”

Highest fines issued to private individuals:

  • €20,000 issued to an individual in Spain for unlawful video surveillance of employees.
  • €11,000 issued to a soccer coach in Austria who was found to be secretly filming female players while they were taking showers.
  • €9,000 issued to another individual in Spain for unlawful video surveillance of employees.
  • €2,500 issued to a person in Germany who sent emails to several recipients, where each could see the other recipients’ email addresses. Over 130 email addresses were visible.
  • €2,200 issued to a person in Austria for having unlawfully filmed public areas using a private CCTV system. The system filmed parking lots, sidewalks, a garden area of a nearby property, and it also filmed the neighbors going in and out of their homes.

70% of organizations experienced a public cloud security incident in the last year

70% of organizations experienced a public cloud security incident in the last year – including ransomware and other malware (50%), exposed data (29%), compromised accounts (25%), and cryptojacking (17%), according to Sophos.


Organizations running multi-cloud environments are more than 50% more likely to suffer a cloud security incident than those running a single cloud.

Europeans suffered the lowest percentage of security incidents in the cloud, an indicator that compliance with GDPR guidelines is helping to protect organizations from compromise. India, on the other hand, fared the worst, with 93% of organizations hit by an attack in the last year.

“Ransomware, not surprisingly, is one of the most widely reported cybercrimes in the public cloud. The most successful ransomware attacks include data in the public cloud, according to the State of Ransomware 2020 report, and attackers are shifting their methods to target cloud environments that cripple necessary infrastructure and increase the likelihood of payment,” said Chester Wisniewski, principal research scientist, Sophos.

“The recent increase in remote working provides extra motivation to disable cloud infrastructure that is being relied on more than ever, so it’s worrisome that many organizations still don’t understand their responsibility in securing cloud data and workloads. Cloud security is a shared responsibility, and organizations need to carefully manage and monitor cloud environments in order to stay one step ahead of determined attackers.”

The unintentional open door: How attackers break in

Accidental exposure continues to plague organizations, with misconfigurations exploited in 66% of reported attacks. Such errors drive the majority of incidents and are all too common given the complexity of cloud management.

Additionally, 33% of organizations report that cybercriminals gained access through stolen cloud provider account credentials. Despite this, only a quarter of organizations say managing access to cloud accounts is a top area of concern.

Data further reveals that 91% of accounts have overprivileged identity and access management roles, and 98% have multi-factor authentication disabled on their cloud provider accounts.
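On AWS, for instance, the MFA gap is straightforward to audit. Below is a minimal sketch using boto3, assuming credentials with iam:ListUsers and iam:ListMFADevices permissions; the root account and federated identities need separate checks.

```python
import boto3

# Sketch: list IAM users that have no MFA device attached. Assumes
# AWS credentials with iam:ListUsers / iam:ListMFADevices; the root
# account and federated identities must be audited separately.

iam = boto3.client("iam")

def users_without_mfa():
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            mfa = iam.list_mfa_devices(UserName=user["UserName"])
            if not mfa["MFADevices"]:
                yield user["UserName"]

for name in users_without_mfa():
    print(f"no MFA configured: {name}")
```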


Public cloud security incident: The silver lining

96% of respondents admit to concern about their current level of cloud security, an encouraging sign that it’s top of mind and important.

Appropriately, “data leaks” top the list of security concerns for nearly half of respondents (44%); identifying and responding to security incidents is a close second (41%). Notwithstanding this silver lining, only one in four respondents views a lack of staff expertise as a top concern.