Google says it wants to help people do positive things with artificial intelligence.
The search giant on Monday announced a new challenge for nonprofits, universities and other organizations working on AI projects that will benefit society. The contest is called the AI Global Impact Challenge, and the company has pledged $25 million in grants.
The challenge is part of a new Google initiative called “AI for Social Good.” Google says the program is meant to help solve big, pressing problems, like issues around crisis relief, environmental conservation or sex trafficking. But it also comes at a time when Google has been under increased scrutiny over how its own artificial intelligence could be used, including in controversial military work or reported efforts to build a censored search engine in China.
At an event Monday, Google’s artificial intelligence chief Jeff Dean didn’t address those controversies directly, but he referenced Google’s AI ethical principles, which outline how the company says it will and will not use the technology.
“We’re all grappling with questions of how AI should be used,” Dean said at Google’s offices in Sunnyvale, California. “AI truly has the potential to improve people’s lives.”
Contest winners will also get access to Google’s technical resources and be paired with an expert from Google to help develop their projects. The application process opens Monday, and the company will announce winners next spring at Google’s annual I/O developer conference.
“The gist of the program is to encourage people to leverage our technology. Google can’t work on everything,” Yossi Matias, vice president of engineering, said in an interview last week. “There are many problems out there we may not even be aware of.”
Meanwhile, Google has faced backlash for some of its own AI projects. The company’s cloud division, under executive Diane Greene, has gone after lucrative military contracts. But employees have challenged Google’s decision to take part in Project Maven, a Defense Department initiative aimed at developing better AI for the US military. More than 4,000 employees reportedly signed a petition addressed to CEO Sundar Pichai demanding the company cancel the project. In June, Google said it wouldn’t renew the Maven contract or pursue similar contracts.
A week later, Pichai released ethical guidelines for the company’s development of AI. He said Google would not create technology to be used in weapons, but added that the company would still pursue other work with the military.
The company has also drawn criticism for Duplex, AI software that can book restaurant reservations and hair appointments through the Google Assistant, the company’s digital helper. Duplex sparked intense debate because the software speaks with a strikingly lifelike voice, complete with verbal tics like “um” and “uh.” Critics worried the software could deceive people, but Google later clarified that it would disclose when a call is automated.
But the AI for Social Good initiative isn’t a response to recent headlines, Matias said, adding that the program had been in the works for a long time. He declined to comment on the backlash surrounding Maven or Dragonfly, the reported censored search project for China.
At a press conference following his presentation Monday, Dean also said the program doesn’t have anything to do with Google’s recent controversies. “It’s not really a reaction,” he said.
At the event, Google sought to highlight less controversial projects. One initiative involved a Google team working on “bioacoustics”: the project collects underwater audio data on whale species and works with shipping companies to help avoid collisions with the animals. Another project, with Iowa State University and the Iowa Department of Transportation, aims to improve road safety and traffic management.