A hot potato: Google has come a long way since its early days, when "Don't be evil" was its guiding principle, and its drift from that ethos has been noted many times before. In the latest break from its founding values, the company has quietly removed a key passage from its AI principles that previously committed it to avoiding the use of AI in potentially harmful applications, including weapons.
This change, first noticed by Bloomberg, marks a shift from the company’s earlier stance on responsible AI development.
The now-deleted section, titled "AI applications we will not pursue," had explicitly stated that Google would refrain from developing technologies "that cause or are likely to cause overall harm," with weapons cited as a specific example.
In response to inquiries about the change, Google pointed to a blog post published by James Manyika, a senior vice president at Google, and Demis Hassabis, who leads Google DeepMind.
The post said that democracies should lead AI development, guided by core values such as freedom, equality, and respect for human rights. It also called for collaboration among companies, governments, and organizations sharing these values to create AI that protects people, promotes global growth, and supports national security.
This shift in Google’s AI principles has not gone unnoticed by experts in the field. Margaret Mitchell, former co-lead of Google’s ethical AI team and current chief ethics scientist at Hugging Face, told Bloomberg she was concerned about the implications of removing the “harm” clause. “[It] means Google will probably now work on deploying technology directly that can kill people,” she said.
Google’s revision of its AI principles is part of a larger trend among tech giants to abandon previously held ethical positions. Companies like Meta Platforms and Amazon have recently scaled back their diversity and inclusion efforts, citing outdated or shifting priorities. Moreover, Meta announced last month that it was ending its third-party fact-checking program in the U.S.
Although Google maintained until very recently that its AI was not used to harm humans, the company has been gradually moving toward closer collaboration with military entities. In recent years it has provided cloud services to the U.S. and Israeli militaries, decisions that sparked internal protests from employees.
Google surely expects backlash over its latest position, but it has more than likely concluded that the benefits of the revised stance outweigh the negatives. For starters, the tech giant can now compete more directly with rivals already involved in military AI projects. The shift could also bring in research and development funding from government sources, potentially accelerating Google's AI advancements.