Sundar Pichai Calls for Government Regulation of Artificial Intelligence
Sundar Pichai, CEO of Alphabet and Google
Alphabet and Google CEO Sundar Pichai this week threw his support behind a European Union proposal for a temporary ban on the use of facial recognition technology in public areas while regulators assess the risks associated with the technology.
Speaking at an event Monday hosted by the Brussels-based think tank Bruegel, Pichai told the audience: “It’s important that governmental regulation tackles it [facial recognition] sooner rather than later and gives a framework for it.”
Pichai also wrote an opinion piece in the Financial Times Monday that called for greater government regulation of artificial intelligence – a technology in which the company has heavily invested over the last several years.
Five-Year Ban Considered
On Friday, Reuters reported that the European Union is considering a five-year ban on the use of facial recognition technology in public areas in order to work out ways to prevent abuses and protect the privacy of citizens who have not given consent.
“Building on these existing provisions, the future regulatory framework could go further and include a time-limited ban on the use of facial recognition technology in public spaces,” according to the EU document seen by Reuters.
European regulators would then use this time to develop a methodology for assessing the effects of facial recognition technology, as well as to create ways to reduce the risk to citizens, Reuters reports.
Facial Recognition Stirs Concerns
In the last year, the use of facial recognition technology in public areas has raised concerns in the EU. In France, for example, the government has tried to use the technology to connect citizens with public services, with officials attempting to balance privacy concerns. In the U.K., the use of cameras in public spaces to identify possible criminal activity has met with backlash.
One of the biggest concerns about facial recognition data is its potential use for identity theft, which is a direct violation of the European Union’s General Data Protection Regulation. Some of the other challenges include data harvesting, unauthorized tracking and misuse of data for credential stealing (see: Facial Recognition: Big Trouble With Big Data Biometrics).
Governments must chart the course when it comes to facial recognition, Pichai noted in his presentation. Google has held off on offering its own facial recognition technology as a general-purpose API because of the risks of abuse, he said.
“Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities. This is especially true in areas that are high risk and high value,” Pichai noted.
Microsoft President Brad Smith offers a different point of view. He told Reuters that a ban on this technology would do more harm than good. “There is only one way at the end of the day to make technology better and that is to use it,” Smith said.
Pichai noted in his Financial Times opinion piece: “Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”
Concerns about the potential negative consequences of AI include the development of so-called “deepfake” videos and nefarious uses of facial recognition by government agencies, Pichai wrote, adding that international collaboration will be critical for regulation.
While the European Union has started to develop regulatory proposals, Pichai wrote that governments don’t need to start from scratch. Instead, he proposed that rules be built on existing regulations, such as GDPR, which can act as a foundation for regulation.
Last August, Sweden’s Data Protection Authority fined a school for violating GDPR after officials launched a facial recognition pilot program to track students’ attendance without obtaining proper consent (see: Facial Recognition Use Triggers GDPR Fine).
Instead of introducing blanket rules for all sectors, Pichai urged governments to come up with case-by-case regulation for different sectors.
“For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits,” Pichai wrote.
Earlier this month, the U.S. government announced a hands-off approach aimed at helping to reduce barriers for development and adoption of AI.
In a memorandum published Jan. 7, the White House announced that “federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”