Zoom lied to users about end-to-end encryption for years, FTC says

Zoom founder and CEO Eric Yuan speaks before the Nasdaq opening bell ceremony on April 18, 2019, in New York City as the company announced its IPO.

Zoom has agreed to upgrade its security practices in a tentative settlement with the Federal Trade Commission, which alleges that Zoom lied to users for years by claiming it offered end-to-end encryption.

“[S]ince at least 2016, Zoom misled users by touting that it offered ‘end-to-end, 256-bit encryption’ to secure users’ communications, when in fact it provided a lower level of security,” the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that “Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised.”

The FTC complaint says that Zoom claimed to offer end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, which were intended for health-care industry users of the video conferencing service. Zoom also claimed to offer end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers, the complaint said.

“In fact, Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC complaint said.

The FTC announcement said that Zoom also “misled some users who wanted to store recorded meetings on the company’s cloud storage by falsely claiming that those meetings were encrypted immediately after the meeting ended. Instead, some recordings allegedly were stored unencrypted for up to 60 days on Zoom’s servers before being transferred to its secure cloud storage.”

To settle the allegations, “Zoom has agreed to a requirement to establish and implement a comprehensive security program, a prohibition on privacy and security misrepresentations, and other detailed and specific relief to protect its user base, which has skyrocketed from 10 million in December 2019 to 300 million in April 2020 during the COVID-19 pandemic,” the FTC said. (The 10 million and 300 million figures refer to the number of daily participants in Zoom meetings.)

No compensation for affected users

The settlement is supported by the FTC’s Republican majority, but Democrats on the commission objected because the agreement doesn’t provide compensation to users.

“Today, the Federal Trade Commission has voted to propose a settlement with Zoom that follows an unfortunate FTC formula,” FTC Democratic Commissioner Rohit Chopra said. “The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom’s data protection claims. And it does not require Zoom to pay a dime. The Commission must change course.”

Under the settlement, “Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false,” Democratic Commissioner Rebecca Kelly Slaughter said. “This failure of the proposed settlement does a disservice to Zoom’s customers, and substantially limits the deterrence value of the case.” While the settlement imposes security obligations, Slaughter said it includes no requirements that directly protect user privacy.

Zoom is separately facing lawsuits from investors and consumers that could eventually lead to financial settlements.

The Zoom/FTC settlement doesn’t actually mandate end-to-end encryption, but Zoom last month announced it is rolling out end-to-end encryption in a technical preview to get feedback from users. The settlement does require Zoom to implement measures “(a) requiring Users to secure their accounts with strong, unique passwords; (b) using automated tools to identify non-human login attempts; (c) rate-limiting login attempts to minimize the risk of a brute force attack; and (d) implementing password resets for known compromised Credentials.”
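
The order itself doesn't prescribe an implementation for those controls, but for readers unfamiliar with them, the sketch below shows roughly what rate-limiting login attempts and forcing resets for known-compromised credentials can look like. Everything in it (the thresholds, function names, and in-memory stores) is a hypothetical illustration, not Zoom's code.

```python
import time
from collections import defaultdict, deque

# Hypothetical set of password hashes known to be compromised; in practice this
# would be fed by a breach-monitoring service rather than hard-coded.
COMPROMISED_PASSWORD_HASHES: set[str] = set()

MAX_FAILURES = 5        # allowed failed logins per window (illustrative value)
WINDOW_SECONDS = 300    # sliding window used for rate limiting

_failed_attempts: dict[str, deque] = defaultdict(deque)  # username -> failure timestamps

def login_allowed(username: str) -> bool:
    """Rate-limit check: refuse further attempts once a user exceeds
    MAX_FAILURES failures inside the window, blunting brute-force guessing."""
    now = time.time()
    attempts = _failed_attempts[username]
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()                 # drop failures outside the window
    return len(attempts) < MAX_FAILURES

def record_failure(username: str) -> None:
    """Call this after every failed login so the limiter has data to work with."""
    _failed_attempts[username].append(time.time())

def requires_reset(password_hash: str) -> bool:
    """Force a password reset when the credential shows up in a known breach set."""
    return password_hash in COMPROMISED_PASSWORD_HASHES
```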

FTC calls ZoomOpener unfair and deceptive

The FTC complaint and settlement also cover Zoom’s controversial deployment of the ZoomOpener Web server that bypassed Apple security protocols on Mac computers. Zoom “secretly installed” the software as part of an update to Zoom for Mac in July 2018, the FTC said.

“The ZoomOpener Web server allowed Zoom to automatically launch and join a user to a meeting by bypassing an Apple Safari browser safeguard that protected users from a common type of malware,” the FTC said. “Without the ZoomOpener Web server, the Safari browser would have provided users with a warning box, prior to launching the Zoom app, that asked users if they wanted to launch the app.”

The software “increased users’ risk of remote video surveillance by strangers” and “remained on users’ computers even after they deleted the Zoom app, and would automatically reinstall the Zoom app—without any user action—in certain circumstances,” the FTC said. The FTC alleged that Zoom’s deployment of the software without adequate notice or user consent violated US law banning unfair and deceptive business practices.
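
To make the bypass concrete, here is a deliberately generic sketch of the kind of local "helper" web server described above: any web page can fire an ordinary HTTP request at localhost (for example via a hidden image tag), and the server launches the app with no browser confirmation dialog ever appearing. The port number, URL path, and launch command are placeholders for illustration, not details of ZoomOpener itself.

```python
# Generic sketch of a local "helper" web server. A page that embeds something like
# <img src="http://localhost:19424/launch?meeting=123"> causes the browser to make
# a normal HTTP request; the helper then launches the app directly, so the user
# never sees the "Do you want to allow this page to open the app?" prompt.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class LaunchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/launch"):
            # Placeholder launch command; a real helper would start its own client app.
            subprocess.Popen(["open", "-a", "ExampleMeetingApp"])
            self.send_response(204)   # empty response, so the page shows nothing
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Listens only on localhost; port 19424 is an arbitrary example, not Zoom's.
    HTTPServer(("127.0.0.1", 19424), LaunchHandler).serve_forever()
```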

Amid controversy in July 2019, Zoom issued an update to completely remove the Web server from its Mac application, as we reported at the time.

Zoom agrees to security monitoring

The proposed settlement is subject to public comment for 30 days, after which the FTC will vote on whether to make it final. The 30-day comment period will begin once the settlement is published in the Federal Register. The FTC case and the relevant documents can be viewed on the FTC's website.

The FTC announcement said Zoom agreed to take the following steps:

  • Assess and document on an annual basis any potential internal and external security risks and develop ways to safeguard against such risks;
  • Implement a vulnerability management program; and
  • Deploy safeguards such as multi-factor authentication to protect against unauthorized access to its network; institute data deletion controls; and take steps to prevent the use of known compromised user credentials.

The data deletion part of the settlement requires that all copies of data identified for deletion be deleted within 31 days.

Zoom will have to notify the FTC of any data breaches and will be prohibited “from making misrepresentations about its privacy and security practices, including about how it collects, uses, maintains, or discloses personal information; its security features; and the extent to which users can control the privacy or security of their personal information,” the FTC announcement said.

Zoom will have to review all software updates for security flaws and make sure that updates don’t hamper third-party security features. The company will also have to get third-party assessments of its security program once the settlement is finalized and once every two years after that. That requirement lasts for 20 years.

Zoom issued the following statement about today’s settlement:

The security of our users is a top priority for Zoom. We take seriously the trust our users place in us every day, particularly as they rely on us to keep them connected through this unprecedented global crisis, and we continuously improve our security and privacy programs. We are proud of the advancements we have made to our platform, and we have already addressed the issues identified by the FTC. Today’s resolution with the FTC is in keeping with our commitment to innovating and enhancing our product as we deliver a secure video communications experience.

Study shows which messengers leak your data, drain your battery, and more

Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that’s being linked.

Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly. First a brief discussion of previews.

When a sender includes a link in a message, the app displays it in the conversation along with text (usually a headline) and images that accompany the link.

For this to happen, the app itself—or a proxy designated by the app—has to visit the link, open the file there, and survey what’s in it. This can open users to attacks. The most severe are those that can download malware. Other forms of malice might be forcing an app to download files so big they cause the app to crash, drain batteries, or consume limited bandwidth. And in the event the link leads to private materials—say, a tax return posted to a private OneDrive or Dropbox account—the app server has an opportunity to view and store it indefinitely.
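
As a rough illustration of what a preview generator does, and of the safeguards the researchers favor (cap the download, never execute the page's JavaScript), here is a minimal sketch. The 10MB cap, the user-agent string, and the bare-bones Open Graph parsing are assumptions for the example, not any particular app's implementation.

```python
import re
import urllib.request

MAX_PREVIEW_BYTES = 10 * 1024 * 1024  # example cap: never pull more than 10MB

def fetch_preview(url: str) -> dict:
    """Fetch a page and pull a title and preview-image URL out of the static HTML.
    No JavaScript is executed, and the download is capped at MAX_PREVIEW_BYTES."""
    req = urllib.request.Request(url, headers={"User-Agent": "example-preview-bot"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read(MAX_PREVIEW_BYTES).decode("utf-8", errors="replace")

    def og(prop: str):
        # Rough pattern: assumes the property attribute precedes content and uses double quotes.
        m = re.search(rf'property="og:{prop}"[^>]*content="([^"]*)"', html, re.IGNORECASE)
        return m.group(1) if m else None

    title = og("title")
    if not title:
        m = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
        title = m.group(1).strip() if m else None
    return {"title": title, "image": og("image")}
```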

The researchers behind Monday’s report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart below shows, both apps download and copy a linked file in its entirety—even if it’s gigabytes in size. Again, this may be a concern if the file is something the users want to keep private.

Link Previews: Instagram servers download any link sent in Direct Messages even if it’s 2.6GB.

It’s also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That’s a problem because users have no way of vetting the security of JavaScript and can’t expect messengers to have the same exploit protections modern browsers have.

Link Previews: How hackers can run any JavaScript code on Instagram servers.

Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. LinkedIn performed only slightly better. Its only difference was that, rather than copying files of any size, it copied only the first 50 megabytes.

Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. “We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom,” Haj Bakry and Mysk wrote.

Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. The chart below provides a comparison of each app in the study.

All in all, the study is good news because it shows that most messaging apps are doing things right. For instance, Signal, Threema, TikTok, and WeChat all give users the option of receiving no link preview. For truly sensitive messages and users who want as much privacy as possible, this is the best setting. Even when previews are provided, these apps use relatively safe means to render them.

Still, Monday’s post is a good reminder that private messages aren’t always, well, private.

“Whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if this feature is going to be used by thousands or even millions of people around the world,” the researchers wrote. “Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.”

Undocumented backdoor that covertly takes snapshots found in kids’ smartwatch

A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.

The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.

But that’s not all

It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.

Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.

“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.

In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.

Patch on the way

The existence of an undocumented backdoor in a watch from a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.

In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:

We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.

It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.

As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.

Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.

An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear whether the new model contains similar backdoor functionality.

Heroic measures

Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and various other code packages that were installed.

A modified USB cable attached to the back of an X4 watch. (Credit: Mnemonic)

One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—or messaging frameworks—it can call to communicate with each app.

Sand’s suspicions were further aroused when he found intents with the following names:

  • WIRETAP_INCOMING
  • WIRETAP_BY_CALL_BACK
  • COMMAND_LOG_UPLOAD
  • REMOTE_SNAPSHOT
  • SEND_SMS_LOCATION

After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
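
The report doesn’t spell out the watch’s SMS format or cipher, so the following is only a schematic of the pattern Sand describes: an incoming message is decrypted with the key hardwired into the device, and the resulting plaintext is routed to one of the intents listed above. The XOR “decryption” and the handler stubs are stand-ins for illustration, not the watch’s actual code.

```python
# Schematic of the decrypt-then-dispatch pattern described above, not real firmware.
HARDWIRED_KEY = "#hml;Fy/sQ9z5MDI=$"   # the key Sand recovered from the flash chip

INTENT_HANDLERS = {
    "REMOTE_SNAPSHOT":      lambda: print("take a photo and upload it to the server"),
    "SEND_SMS_LOCATION":    lambda: print("report the current GPS position"),
    "WIRETAP_INCOMING":     lambda: print("silently answer an incoming call"),
    "WIRETAP_BY_CALL_BACK": lambda: print("place a call that relays nearby audio"),
    "COMMAND_LOG_UPLOAD":   lambda: print("upload device logs"),
}

def decrypt_sms(ciphertext: bytes, key: str) -> str:
    # Stand-in cipher: the article does not say what the watch actually uses,
    # so this XOR exists purely to show the flow.
    kb = key.encode()
    return bytes(c ^ kb[i % len(kb)] for i, c in enumerate(ciphertext)).decode(errors="replace")

def handle_incoming_sms(ciphertext: bytes) -> None:
    command = decrypt_sms(ciphertext, HARDWIRED_KEY).strip()
    handler = INTENT_HANDLERS.get(command)
    if handler:
        handler()   # no on-screen indication, matching what Sand observed
```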

“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”

Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.

As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.

Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.

The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.

Hong Kong downloads of Signal surge as residents fear crackdown

The secure chat app Signal has become the most downloaded app in Hong Kong on both Apple’s and Google’s app stores, Bloomberg reports, citing data from App Annie. The surging interest in encrypted messaging comes days after the Chinese government in Beijing passed a new national security law that reduced Hong Kong’s autonomy and could undermine its traditionally strong protections for civil liberties.

The 1997 handover of Hong Kong from the United Kingdom to China came with a promise that China would respect Hong Kong’s autonomy for 50 years following the handover. Under the terms of that deal, Hong Kong residents should have continued to enjoy greater freedom than people on the mainland until 2047. But recently, the mainland government has appeared to renege on that deal.

Civil liberties advocates see the national security law approved last week as a major blow to freedom in Hong Kong. The New York Times reports that “the four major offenses in the law—separatism, subversion, terrorism and collusion with foreign countries—are ambiguously worded and give the authorities extensive power to target activists who criticize the party, activists say.” Until now, Hong Kongers faced trial in the city’s separate, independent judiciary. The new law opens the door for dissidents to be tried in mainland courts with less respect for civil liberties or due process.

This has driven heightened interest among Hong Kongers in secure communication technologies. Signal offers end-to-end encryption and is viewed by security experts as the gold standard for secure mobile messaging. It has been endorsed by NSA whistleblower Ed Snowden.

One of Signal’s selling points is that it minimizes data collection on its users. When rival Telegram announced it would no longer honor data requests from Hong Kong courts, Signal responded that it didn’t have any user data to hand over in the first place.

Bloomberg has also reported on the surging adoption of VPN software in Hong Kong as residents fear government surveillance of their Web browsing.