Apple needs to guard free speech while it's protecting kids

This article was first featured in Yahoo Finance Tech, a weekly newsletter highlighting our original content on the industry. Get it sent directly to your inbox every Wednesday by 4 p.m. ET.

Apple might not be able to protect both kids and free speech

Apple (AAPL) has made privacy the cornerstone of its sales and marketing campaigns. But major changes to how it protects children have put the tech giant at odds with security researchers who say authoritarian governments could use Apple’s new initiative to silence dissidents.

Apple’s new features have a laudable goal: detecting and reporting child sexual abuse material, a problem a 2019 New York Times report showed has exploded in recent years due to the rise of encryption, social media, and smartphones.

Researchers, however, are raising concerns that governments could force Apple to use its image scanning capabilities to search for content ranging from LGBTQ+ photos to items denouncing oppressive regimes.

Apple has vowed never to acquiesce to such demands and to provide authorities only with verified instances of abusive photos. But researchers aren’t convinced it can keep such promises.

“I think Apple miscalculated,” Matthew Green, associate professor at the Johns Hopkins Information Security Institute, told Yahoo Finance. “I think it would be great if Apple tried it on a limited scale and actually looked at the results of how many people they caught, how many people actually got prosecuted.”

The company that prides itself on user privacy and security is now in the delicate position of trying to protect children, while preserving the safety of those imperiled by totalitarian governments.

Apple has built a reputation around privacy

Apple finds itself in its current predicament because it has pushed its stance on privacy to the forefront of its marketing campaigns. It has put up billboards across the country and run TV ads to burnish its reputation as the tech company that cares the most about user privacy.

It’s not just talk, either. Apple famously fought the Justice Department’s attempt to force the company to create a backdoor to gain access to one of the iPhones used by one of the shooters in the 2015 San Bernardino attack. Apple’s rationale was that by creating such a security hole, it would endanger the security of all iPhones, putting users around the world at risk.

Apple has tussled with the Justice Department in the past to protect the iPhone's security features and user privacy. REUTERS/Loren Elliott

So how does it continue to uphold that reputation while helping to fight the spread of abusive images? According to Green, the company should have started on a smaller scale rather than with a blanket rollout across all of its products in the U.S.

If the methods Apple plans to use worked in that instance, Green said, it would be worth the potential privacy problems.

But the tech industry has been woefully behind in dealing with child sexual abuse material. A 2020 Times report found a wide disparity between companies like Facebook (FB), which have been praised for their efforts in rooting out such content, and others, like Apple, Microsoft (MSFT), and Yahoo (which, like Yahoo Finance, is owned by Verizon Media), that don’t hunt down nearly as much.

What Apple’s new features do and what they don’t do

How do Apple’s new features fix that? By creating a multi-step system for identifying images of child abuse. First, instead of using actual images of minors, Apple works with agencies like the National Center for Missing & Exploited Children (NCMEC) to create what are essentially numerical representations of those photos, called hashes.

Hashes are strings of numbers that can’t be used to view the photos or be reverse-engineered to reconstruct them, but they can be matched against those known images.
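To make the idea concrete, here is a minimal sketch in Swift of what matching against a database of hashes looks like in general. It isn’t Apple’s implementation, which relies on its own perceptual hashing plus additional cryptography; the names below (ImageHash, KnownHashDatabase) are purely illustrative.

```swift
import Foundation

// Illustrative sketch only. Apple's real system uses its own perceptual
// hashing and cryptography; these names and types are hypothetical.
typealias ImageHash = Data  // an opaque string of bytes, not a viewable photo

struct KnownHashDatabase {
    private let knownHashes: Set<ImageHash>

    init(_ hashes: [ImageHash]) {
        self.knownHashes = Set(hashes)
    }

    // Matching is a lookup: the same known image yields the same hash,
    // but the hash alone reveals nothing about what the picture shows.
    func contains(_ candidate: ImageHash) -> Bool {
        knownHashes.contains(candidate)
    }
}
```

The key property is that the lookup never needs the original photos, only their hashes.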

When Apple sends out its next major software update, your device will receive the hashes that correspond to known abusive images. Here’s the important part: when you connect to iCloud and begin uploading photos, your phone automatically searches for matches on your device, meaning Apple doesn’t know the results of the search at this point.

If it detects matches, your device generates a voucher. When you upload your photos to iCloud, Apple’s servers scan these vouchers, and if the number of vouchers matching known abusive content crosses a certain threshold, Apple begins a manual review. If the review confirms illicit images, Apple shares the content with NCMEC.
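The voucher-and-threshold step can be sketched the same way. This is a simplification under assumed names and an illustrative threshold value, not Apple’s API; in the actual design the vouchers are encrypted, so Apple’s servers learn nothing until enough matches accumulate.

```swift
// Simplified sketch of the voucher-and-threshold flow described above.
// Type names and the threshold value are hypothetical; real vouchers
// are encrypted and only become readable past the threshold.
struct SafetyVoucher {
    let photoID: String
    let matchedKnownImage: Bool  // set on the device as the photo is queued for iCloud
}

// Server-side check: manual review starts only after enough matching
// vouchers accumulate for a single account.
func shouldStartManualReview(_ vouchers: [SafetyVoucher], threshold: Int = 25) -> Bool {
    vouchers.filter { $0.matchedKnownImage }.count >= threshold
}

// Example: a handful of matches stays below the bar, so no review begins.
let vouchers = (0..<10).map { SafetyVoucher(photoID: "photo-\($0)", matchedKnownImage: true) }
print(shouldStartManualReview(vouchers))  // false
```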

Essentially, Apple’s technology allows it to find exploitative images without looking at users’ more innocuous photos. What’s more, Apple doesn’t start scanning photos until you connect to iCloud. Without that connection, the scans don’t even run on your phone, let alone on Apple’s servers.

Why security researchers are nervous

If Apple is only dealing with heinous photos, why are security researchers so nervous? According to NYU Tandon School of Engineering professor Justin Cappos, they worry governments like China’s could strong-arm Apple into changing the hashes stored on users’ phones to find images or documents from pro-democracy groups.

Apple is rolling out the new technologies in the U.S. for now, but says it will expand them to other countries in the future. It hasn’t commented on China, which is one of its largest sales markets.

“The problem is that filtering for child abuse, the technical way of doing that is the same way you filter for images of a revolutionary flag, or images or documents that contain the manifesto of an organization arguing for change,” Cappos said.

China’s Communist Party could prove an especially big threat to Apple’s new system, particularly in light of the leverage the government can exert over the tech giant.

An iPhone store is seen on Nanjing Road Pedestrian Street in Shanghai on June 14, 2021. Costfoto/Barcroft Media via Getty Images

“The government has a huge amount of control not only of [Apple’s] infrastructure in China, but also their manufacturing infrastructure,” Green said. “So there's just this huge amount of pressure that can be applied to Apple to do things that Apple might not be personally comfortable with.”

Apple famously complies with the Chinese government’s laws even if they appear antithetical to the company’s stance on privacy.

A 2017 cybersecurity law forced Apple to store Chinese iCloud users’ data with a third-party, state-owned company, and the Chinese government held the encryption keys to that data. That same year, Apple removed virtual private network apps from its Chinese App Store on government orders, because the apps could be used to circumvent China’s Great Firewall. In other words, researchers’ concerns aren’t exactly unfounded.

"I think that the minute you build a technological capability that governments will want to use, and then say, ‘Don't worry, it could be used for this bad purpose. But we promise it won't be.’ I think that's really dangerous,” Green said.

“This is not a slippery slope. We're just going to be at the bottom of that slippery slope or somewhere in the middle of that slippery slope, and we'll get there,” added Green.

But broader user privacy can also lead to the further spread of exploitative images. Encrypted messaging services allow users to send information, photos, and videos to each other without fear of government snooping. However, the same encryption can be used to hide illicit content.

“The only thing I can really think that even remotely could do something like this is if Apple refused to actually collect and give this information to law enforcement, but instead decided to display some warning or something to the user, and the information about it never left the phone,” Cappos said.

For now, Apple’s means of scanning for child abuse may be the only way to stop the spread of such material while preserving privacy. But it might be impossible for Apple to expand the same capabilities to all of its users around the world while assuring the same protections.

Daniel Howley is tech editor at Yahoo Finance.

Got a tip? Email Daniel Howley at dhowley@yahoofinance.com or via encrypted mail at danielphowley@protonmail.com, and follow him on Twitter at @DanielHowley.

