Apple lets some Big Sur network traffic bypass firewalls

Firewalls aren’t just for corporate networks. Large numbers of security- or privacy-conscious people also use them to filter or redirect traffic flowing in and out of their computers. Apple recently made a major change to macOS that frustrates these efforts.

Beginning with macOS Catalina, released last year, Apple added a list of 50 Apple-specific apps and processes that were to be exempted from firewalls like Little Snitch and Lulu. The undocumented exemption, which didn’t take effect until firewalls were rewritten to implement changes in Big Sur, first came to light in October. Patrick Wardle, a security researcher at Mac and iOS enterprise developer Jamf, further documented the new behavior over the weekend.

“100% blind”

To demonstrate the risks that come with this move, Wardle—a former hacker for the NSA—showed how malware developers could exploit the change to make an end run around a tried-and-true security measure. He set Lulu and Little Snitch to block all outgoing traffic on a Mac running Big Sur and then ran a small Python script that had exploit code interact with one of the apps Apple exempted. The script had no trouble reaching a command-and-control server he set up to simulate one commonly used by malware to exfiltrate sensitive data.

“It kindly asked (coerced?) one of the trusted Apple items to generate network traffic to an attacker-controlled server and could (ab)use this to exfiltrate files,” Wardle, referring to the script, told me. “Basically, ‘Hey, Mr. Apple Item, can you please send this file to Patrick’s remote server?’ And it would kindly agree. And since the traffic was coming from the trusted item, it would never be routed through the firewall… meaning the firewall is 100% blind.”

Wardle tweeted a portion of a bug report he submitted to Apple during the Big Sur beta phase. It specifically warns that “essential security tools such as firewalls are ineffective” under the change.

Apple has yet to explain the reason for the change. Firewall misconfigurations are a common reason software fails to work properly. One possibility is that Apple made the move to cut down on support requests and to improve the Mac experience for people not schooled in writing effective firewall rules. It’s not unusual for firewalls to exempt their own traffic; Apple may be applying the same rationale.

But the inability to override the settings violates a core tenet that people ought to be able to selectively restrict traffic flowing from their own computers. In the event that a Mac does become infected, the change also gives hackers a way to bypass what for many is an effective mitigation against such attacks.

“The issue I see is that it opens the door for doing exactly what Patrick demoed… malware authors can use this to sneak data around a firewall,” Thomas Reed, director of Mac and mobile offerings at security firm Malwarebytes, said. “Plus, there’s always the potential that someone may have a legitimate need to block some Apple traffic for some reason, but this takes away that ability without using some kind of hardware network filter outside the Mac.”

People who want to know which apps and processes are exempt can open the macOS Terminal and run the following command:
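```
sudo defaults read /System/Library/Frameworks/NetworkExtension.framework/Resources/Info.plist ContentFilterExclusionList
```

On Big Sur builds that still ship the list, the command prints a property-list array of the exempted bundle identifiers and binary paths.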

NKEs

The change came as Apple deprecated macOS kernel extensions, which software developers used to make apps interact directly with the OS. The deprecation included NKEs—short for network kernel extensions—that third-party firewall products used to monitor incoming and outgoing traffic.

In place of NKEs, Apple introduced a new user-mode framework called the Network Extension Framework. To run on Big Sur, all third-party firewalls that used NKEs had to be rewritten to use the new framework.

Apple representatives didn’t respond to emailed questions about this change. This post will be updated if they respond later. In the meantime, people who want to override the new exemption will have to find alternatives. As Reed noted above, one option is to rely on a network filter that runs outside the Mac. Another is PF, the Packet Filter firewall built into macOS, as sketched below.
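Because PF filters packets in the kernel, below the Network Extension layer where the exclusion list applies, its rules hold no matter which process generated the traffic. As a minimal, illustrative sketch—not a vetted ruleset—a rule like the following would drop everything destined for Apple’s well-known 17.0.0.0/8 address block:

```
# Illustrative PF rule, not a vetted ruleset: drop all outbound
# traffic to Apple's 17.0.0.0/8 address space at the packet level.
block out quick from any to 17.0.0.0/8
```

A real deployment would load rules like this through an anchor in /etc/pf.conf and maintain a far more carefully curated address table.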

Zoom lied to users about end-to-end encryption for years, FTC says

Zoom founder and CEO Eric Yuan speaks before the Nasdaq opening bell ceremony on April 18, 2019, in New York City as the company announced its IPO.

Zoom has agreed to upgrade its security practices in a tentative settlement with the Federal Trade Commission, which alleges that Zoom lied to users for years by claiming it offered end-to-end encryption.

“[S]ince at least 2016, Zoom misled users by touting that it offered ‘end-to-end, 256-bit encryption’ to secure users’ communications, when in fact it provided a lower level of security,” the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that “Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised.”

The FTC complaint says that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, which were intended for health-care industry users of the video conferencing service. Zoom also claimed it offered end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers, the complaint said.

“In fact, Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC complaint said.
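The distinction at the heart of the complaint is easy to see in code. In a true end-to-end design, only the endpoints ever hold the keys; in the server-mediated design the FTC describes, the server mints or stores the meeting key and can therefore decrypt the traffic. Here is a minimal sketch of the two models—illustrative only, using the pyca/cryptography library, with no connection to Zoom’s actual implementation:

```python
# Illustrative sketch of the difference the FTC complaint turns on.
# Uses the pyca/cryptography package; all names here are hypothetical
# and have no connection to Zoom's actual implementation.
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def server_mediated_key() -> bytes:
    """The server mints and keeps the meeting key, so the server
    can decrypt anything sent through it."""
    return Fernet.generate_key()  # lives on the server

def end_to_end_key() -> bytes:
    """Two clients derive a shared key via X25519. The server only
    relays public keys and ciphertext; it never sees the secret."""
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())  # identical on both sides
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"meeting").derive(shared)
    return base64.urlsafe_b64encode(key)

message = b"confidential meeting content"
for scheme in (server_mediated_key, end_to_end_key):
    print(scheme.__name__, Fernet(scheme()).encrypt(message)[:24], b"...")
```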

The FTC announcement said that Zoom also “misled some users who wanted to store recorded meetings on the company’s cloud storage by falsely claiming that those meetings were encrypted immediately after the meeting ended. Instead, some recordings allegedly were stored unencrypted for up to 60 days on Zoom’s servers before being transferred to its secure cloud storage.”

To settle the allegations, “Zoom has agreed to a requirement to establish and implement a comprehensive security program, a prohibition on privacy and security misrepresentations, and other detailed and specific relief to protect its user base, which has skyrocketed from 10 million in December 2019 to 300 million in April 2020 during the COVID-19 pandemic,” the FTC said. (The 10 million and 300 million figures refer to the number of daily participants in Zoom meetings.)

No compensation for affected users

The settlement is supported by the FTC’s Republican majority, but Democrats on the commission objected because the agreement doesn’t provide compensation to users.

“Today, the Federal Trade Commission has voted to propose a settlement with Zoom that follows an unfortunate FTC formula,” FTC Democratic Commissioner Rohit Chopra said. “The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom’s data protection claims. And it does not require Zoom to pay a dime. The Commission must change course.”

Under the settlement, “Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false,” Democratic Commissioner Rebecca Kelly Slaughter said. “This failure of the proposed settlement does a disservice to Zoom’s customers, and substantially limits the deterrence value of the case.” While the settlement imposes security obligations, Slaughter said it includes no requirements that directly protect user privacy.

Zoom is separately facing lawsuits from investors and consumers that could eventually lead to financial settlements.

The Zoom/FTC settlement doesn’t actually mandate end-to-end encryption, but Zoom last month announced it is rolling out end-to-end encryption in a technical preview to get feedback from users. The settlement does require Zoom to implement measures “(a) requiring Users to secure their accounts with strong, unique passwords; (b) using automated tools to identify non-human login attempts; (c) rate-limiting login attempts to minimize the risk of a brute force attack; and (d) implementing password resets for known compromised Credentials.”
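The rate-limiting requirement in item (c) is a standard brute-force defense, and a small sketch shows the idea. The sliding-window thresholds below are arbitrary illustrations, not figures from the settlement:

```python
# Minimal sliding-window rate limiter for login attempts. The
# 5-attempts-per-300-seconds threshold is an arbitrary illustration,
# not a figure taken from the FTC settlement.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_ATTEMPTS = 5
attempts = defaultdict(deque)  # username -> timestamps of recent attempts

def allow_login_attempt(username: str) -> bool:
    now = time.monotonic()
    window = attempts[username]
    # Discard attempts that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # throttle: too many recent attempts
    window.append(now)
    return True

# Example: the sixth rapid attempt against one account is rejected.
for i in range(6):
    print(i + 1, allow_login_attempt("alice@example.com"))
```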

FTC calls ZoomOpener unfair and deceptive

The FTC complaint and settlement also cover Zoom’s controversial deployment of the ZoomOpener Web server that bypassed Apple security protocols on Mac computers. Zoom “secretly installed” the software as part of an update to Zoom for Mac in July 2018, the FTC said.

“The ZoomOpener Web server allowed Zoom to automatically launch and join a user to a meeting by bypassing an Apple Safari browser safeguard that protected users from a common type of malware,” the FTC said. “Without the ZoomOpener Web server, the Safari browser would have provided users with a warning box, prior to launching the Zoom app, that asked users if they wanted to launch the app.”
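The mechanism is a classic localhost-helper pattern: a tiny web server listens on a fixed port, and any webpage can reach it with an ordinary HTTP request, sidestepping the browser’s launch prompt. A rough sketch of the pattern—the port, path, and app name here are illustrative, not Zoom’s actual values:

```python
# Rough sketch of the localhost-helper pattern ZoomOpener embodied: a
# tiny web server on a fixed port that any webpage can poke with an
# HTTP request, bypassing the browser's "open this app?" confirmation.
# The port, path, and app name are illustrative, not Zoom's values.
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess

class LaunchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/launch"):
            # A real helper parsed meeting parameters here and launched
            # the app directly -- no browser confirmation involved.
            subprocess.Popen(["open", "-a", "SomeApp"])  # hypothetical app
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

# HTTPServer(("127.0.0.1", 8765), LaunchHandler).serve_forever()
```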

The software “increased users’ risk of remote video surveillance by strangers” and “remained on users’ computers even after they deleted the Zoom app, and would automatically reinstall the Zoom app—without any user action—in certain circumstances,” the FTC said. The FTC alleged that Zoom’s deployment of the software without adequate notice or user consent violated US law banning unfair and deceptive business practices.

Amid controversy in July 2019, Zoom issued an update to completely remove the Web server from its Mac application, as we reported at the time.

Zoom agrees to security monitoring

The proposed settlement is subject to public comment for 30 days, after which the FTC will vote on whether to make it final. The 30-day comment period will begin once the settlement is published in the Federal Register. The FTC case and the relevant documents can be viewed on the FTC’s website.

The FTC announcement said Zoom agreed to take the following steps:

  • Assess and document on an annual basis any potential internal and external security risks and develop ways to safeguard against such risks;
  • Implement a vulnerability management program; and
  • Deploy safeguards such as multi-factor authentication to protect against unauthorized access to its network; institute data deletion controls; and take steps to prevent the use of known compromised user credentials.

The data deletion part of the settlement requires that all copies of data identified for deletion be deleted within 31 days.

Zoom will have to notify the FTC of any data breaches and will be prohibited “from making misrepresentations about its privacy and security practices, including about how it collects, uses, maintains, or discloses personal information; its security features; and the extent to which users can control the privacy or security of their personal information,” the FTC announcement said.

Zoom will have to review all software updates for security flaws and make sure that updates don’t hamper third-party security features. The company will also have to get third-party assessments of its security program once the settlement is finalized and once every two years after that. That requirement lasts for 20 years.

Zoom issued the following statement about today’s settlement:

The security of our users is a top priority for Zoom. We take seriously the trust our users place in us every day, particularly as they rely on us to keep them connected through this unprecedented global crisis, and we continuously improve our security and privacy programs. We are proud of the advancements we have made to our platform, and we have already addressed the issues identified by the FTC. Today’s resolution with the FTC is in keeping with our commitment to innovating and enhancing our product as we deliver a secure video communications experience.

Study shows which messengers leak your data, drain your battery, and more

Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that’s being linked.

Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly. First, a brief discussion of previews.

When a sender includes a link in a message, the app displays the conversation along with text (usually a headline) and images that accompany the link.

For this to happen, the app itself—or a proxy designated by the app—has to visit the link, open the file there, and survey what’s in it. This can open users to attacks. The most severe are those that can download malware. Other forms of malice might be forcing an app to download files so big they crash the app, drain the battery, or burn through limited bandwidth. And in the event the link leads to private materials—say, a tax return posted to a private OneDrive or Dropbox account—the app server has an opportunity to view and store it indefinitely.
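A capped, streaming fetch is the usual defense against the oversized-download problem described above. The sketch below shows roughly how a preview generator might bound what it pulls; the 50MB cap mirrors LinkedIn’s limit as reported in the study, and the parsing is deliberately naive:

```python
# Sketch of a size-capped link-preview fetch. The cap and the naive
# Open Graph parsing are illustrative; a real preview service would
# also sandbox rendering and never execute fetched JavaScript.
import re
import requests

MAX_PREVIEW_BYTES = 50 * 1024 * 1024  # 50MB, LinkedIn's cap per the study

def fetch_preview(url: str) -> dict:
    resp = requests.get(url, stream=True, timeout=10)
    body = b""
    for chunk in resp.iter_content(chunk_size=65536):
        body += chunk
        if len(body) > MAX_PREVIEW_BYTES:
            resp.close()  # refuse to mirror arbitrarily large files
            break
    html = body.decode("utf-8", errors="replace")

    def og(prop: str):
        # Naive regex extraction; real code would use an HTML parser.
        m = re.search(
            rf'<meta[^>]+property=["\']og:{prop}["\'][^>]+content=["\']([^"\']+)',
            html)
        return m.group(1) if m else None

    return {"title": og("title"), "image": og("image")}

print(fetch_preview("https://example.com"))
```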

The researchers behind Monday’s report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart below shows, both apps download and copy a linked file in its entirety—even if it’s gigabytes in size. Again, this may be a concern if the file is something users want to keep private.

Link Previews: Instagram servers download any link sent in Direct Messages even if it’s 2.6GB.

It’s also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That’s a problem because users have no way of vetting the security of JavaScript and can’t expect messengers to have the same exploit protections modern browsers have.

Link Previews: How hackers can run any JavaScript code on Instagram servers.

Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. LinkedIn performed only slightly better: rather than copying files of any size, it copied just the first 50 megabytes.

Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. “We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom,” Haj Bakry and Mysk wrote.

Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. The chart below provides a comparison of each app in the study.

Chart: Talal Haj Bakry and Tommy Mysk

All in all, the study is good news because it shows that most messaging apps are doing things right. For instance, Signal, Threema, TikTok, and WeChat all give users the option of receiving no link preview. For truly sensitive messages and users who want as much privacy as possible, this is the best setting. Even when previews are provided, these apps use relatively safe means to render them.

Still, Monday’s post is a good reminder that private messages aren’t always, well, private.

“Whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if this feature is going to be used by thousands or even millions of people around the world,” the researchers wrote. “Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.”

Undocumented backdoor that covertly takes snapshots found in kids’ smartwatch

A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.

The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.

But that’s not all

It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.

Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.

“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.

In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.

Patch on the way

The existence of an undocumented backdoor in a watch from a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.

In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:

We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.

It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.

As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.

Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.

An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear whether it contains similar backdoor functionality.

Heroic measures

Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and other various code packages that were installed.

A modified USB cable attached to the back of an X4 watch.
Mnemonic

One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—or messaging frameworks—it can call to communicate with each app.

Sand’s suspicions were further aroused when he found intents with the following names:

  • WIRETAP_INCOMING
  • WIRETAP_BY_CALL_BACK
  • COMMAND_LOG_UPLOAD
  • REMOTE_SNAPSHOT
  • SEND_SMS_LOCATION

After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
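Mnemonic didn’t publish the watch’s exact cipher or message format, so the details below are assumptions, but the core weakness is easy to illustrate: when every command is protected only by a key baked into the firmware, anyone who extracts that key—as Sand did—can mint valid commands:

```python
# Hypothetical reconstruction, NOT the watch's actual scheme: Mnemonic
# didn't publish the cipher or SMS syntax. This only illustrates why a
# hardwired shared key is fragile once it has been dumped from flash.
import base64
import hashlib
from cryptography.fernet import Fernet

HARDWIRED_KEY = "#hml;Fy/sQ9z5MDI=$"  # the key Sand recovered, per the article

def cipher_from(device_key: str) -> Fernet:
    # Key derivation here is an assumption made for the sketch.
    digest = hashlib.sha256(device_key.encode()).digest()
    return Fernet(base64.urlsafe_b64encode(digest))

# Anyone holding the key can forge any command the firmware understands...
forged_sms = cipher_from(HARDWIRED_KEY).encrypt(b"REMOTE_SNAPSHOT")
# ...and the watch, holding the same key, would accept it.
assert cipher_from(HARDWIRED_KEY).decrypt(forged_sms) == b"REMOTE_SNAPSHOT"
```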

“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”

Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.

As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.

Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.

The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.

Apple’s T2 security chip has an unfixable flaw

The 2014 Mac mini is pictured here alongside the 2012 Mac mini. They looked the same, but the insides were different in some key—and disappointing—ways.

A recently released tool lets anyone exploit an unusual Mac vulnerability to bypass Apple’s trusted T2 security chip and gain deep system access. The flaw is one researchers have also been using for more than a year to jailbreak older models of iPhones. But the fact that the T2 chip is vulnerable in the same way creates a host of new potential threats. Worst of all, while Apple may be able to slow down potential hackers, the flaw is ultimately unfixable in every Mac that has a T2 inside.

In general, the jailbreak community hasn’t paid as much attention to macOS and OS X as it has to iOS, because Apple’s desktop operating systems don’t have the same restrictions and walled gardens that are built into the mobile ecosystem. But the T2 chip, launched in 2017, created some limitations and mysteries. Apple added the chip as a trusted mechanism for securing high-value features like encrypted data storage, Touch ID, and Activation Lock, which works with Apple’s “Find My” services. But the T2 also contains a vulnerability, known as Checkm8, that jailbreakers have already been exploiting in Apple’s A5 through A11 (2011 to 2017) mobile chipsets. Now Checkra1n, the same group that developed the tool for iOS, has released support for T2 bypass.

On Macs, the jailbreak allows researchers to probe the T2 chip and explore its security features. It can even be used to run Linux on the T2 or play Doom on a MacBook Pro’s Touch Bar. The jailbreak could also be weaponized by malicious hackers, though, to disable macOS security features like System Integrity Protection and Secure Boot and install malware. Combined with another T2 vulnerability that was publicly disclosed in July by the Chinese security research and jailbreaking group Pangu Team, the jailbreak could also potentially be used to obtain FileVault encryption keys and to decrypt user data. The vulnerability is unpatchable, because the flaw is in low-level, unchangeable code for hardware.

“The T2 is meant to be this little secure black box in Macs—a computer inside your computer, handling things like Lost Mode enforcement, integrity checking, and other privileged duties,” says Will Strafach, a longtime iOS researcher and creator of the Guardian Firewall app for iOS. “So the significance is that this chip was supposed to be harder to compromise—but now it’s been done.”

Apple did not respond to WIRED’s requests for comment.

There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can’t remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn’t “persistent”; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn’t reboot every time the device does. To be certain that a Mac hasn’t been compromised by the jailbreak, the T2 chip must be fully restored to Apple’s defaults. Finally, the jailbreak doesn’t give an attacker instant access to a target’s encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn’t a silver bullet.

“There are plenty of other vulnerabilities, including remote ones that undoubtedly have more impact on security,” a Checkra1n team member tweeted on Tuesday.

In a discussion with WIRED, the Checkra1n researchers added that they see the jailbreak as a necessary tool for transparency about T2. “It’s a unique chip, and it has differences from iPhones, so having open access is useful to understand it at a deeper level,” a group member said. “It was a complete black box before, and we are now able to look into it and figure out how it works for security research.”

The exploit also comes as little surprise; it’s been apparent since the original Checkm8 discovery last year that the T2 chip was also vulnerable in the same way. And researchers point out that while the T2 chip debuted in 2017 in top-tier iMacs, it only recently rolled out across the entire Mac line. Older Macs with a T1 chip are unaffected. Still, the finding is significant because it undermines a crucial security feature of newer Macs.

Jailbreaking has long been a gray area because of this tension. It gives users freedom to install and modify whatever they want on their devices, but it is achieved by exploiting vulnerabilities in Apple’s code. Hobbyists and researchers use jailbreaks in constructive ways, including to conduct more security testing and potentially help Apple fix more bugs, but there’s always the chance that attackers could weaponize jailbreaks for harm.

“I had already assumed that since T2 was vulnerable to Checkm8, it was toast,” says Patrick Wardle, an Apple security researcher at the enterprise management firm Jamf and a former NSA researcher. “There really isn’t much that Apple can do to fix it. It’s not the end of the world, but this chip, which was supposed to provide all this extra security, is now pretty much moot.”

Wardle points out that for companies that manage their devices using Apple’s Activation Lock and Find My features, the jailbreak could be particularly problematic, both in terms of possible device theft and other insider threats. And he notes that the jailbreak tool could be a valuable jumping-off point for attackers looking to take a shortcut to developing potentially powerful attacks. “You likely could weaponize this and create a lovely in-memory implant that, by design, disappears on reboot,” he says. This means that the malware would run without leaving a trace on the hard drive and would be difficult for victims to track down.

The situation raises much deeper issues, though, with the basic approach of using a special, trusted chip to secure other processes. Beyond Apple’s T2, numerous other tech vendors have tried this approach and had their secure enclaves defeated, including Intel, Cisco, and Samsung.

“Building in hardware ‘security’ mechanisms is just always a double-edged sword,” says Ang Cui, founder of the embedded device security firm Red Balloon. “If an attacker is able to own the secure hardware mechanism, the defender usually loses more than they would have if they had built no hardware. It’s a smart design in theory, but in the real world it usually backfires.”

In this case, you’d likely have to be a very high-value target to register any real alarm. But hardware-based security measures do create a single point of failure that the most important data and systems rely on. Even if the Checkra1n jailbreak doesn’t provide unlimited access for attackers, it gives them more than anyone would want.

This story originally appeared on wired.com.

Chinese-made drone app in Google Play spooks security researchers

A DJI Phantom 4 quadcopter drone.

The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.

The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.

Wide array of sensitive user data

Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google’s terms and that, until recently, it covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.

According to the reports, the suspicious behaviors include:

  • The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
  • A recently removed component that collected a wealth of phone data, including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address, and MAC, and Bluetooth addresses. These details and more were sent to MobTech, maker of a software developer kit used until the most recent release of the app.
  • Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
  • Advanced obfuscation techniques that make third-party analysis of the app time-consuming.

This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.

DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.

“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. Geofencing erects virtual barriers around areas that the Federal Aviation Administration or other authorities bar drones from entering. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.

A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.

Obfuscated, acquisitive, and always on

In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.
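The phone-home pattern the researchers describe—ask a developer-controlled server what to fetch, then download and run whatever it points to—is simple to express, which is exactly why it alarms analysts. A generic sketch of the pattern follows; the endpoint and JSON fields are hypothetical, not taken from the app:

```python
# Generic sketch of the self-update pattern described in the reports:
# the client asks a developer-controlled server what to fetch, so the
# payload can change at any moment. Endpoint and JSON fields are
# hypothetical, not taken from DJI Go 4.
import requests

UPDATE_ENDPOINT = "https://updates.example.com/check"  # hypothetical

def check_for_update(current_version: str) -> None:
    resp = requests.get(UPDATE_ENDPOINT,
                        params={"version": current_version}, timeout=10)
    cmd = resp.json()
    if cmd.get("action") == "install":
        # The download URL arrives at runtime; nothing in the shipped
        # app constrains what it may point to. That runtime flexibility
        # is the property that mimics botnet command and control.
        payload = requests.get(cmd["url"], timeout=30).content
        with open("update.bin", "wb") as f:
            f.write(payload)
        # ...followed by an install step the user may or may not see.

# check_for_update("4.3.36")  # would contact the hypothetical server
```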

Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.

Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.

The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.

Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:

In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.

In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.

DJI responds

DJI officials have published an exhaustive and vigorous response saying that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.

“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:

  • When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
  • Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
  • Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
  • DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
  • The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
  • The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
  • The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
  • This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
  • DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.

Don’t forget the Android app mess

The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.

People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic-restart behavior means it’s not sufficient to simply curtail use of the app for the time being). Ultimately, users of the app find themselves in a position similar to that of TikTok users; that app has also aroused suspicions, both because of behavior some consider sketchy and because of its ownership by China-based ByteDance.

There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.