Why I'm Disappointed in Google's New AI Direction
Google has made a troubling decision. That’s what I thought after reading the headline, “Google Lifts a Ban on Using Its AI for Weapons and Surveillance,” on my Google News feed. My first thought was of Google’s former motto, “Don’t be evil.” Though those words were removed years ago, many hoped that some vestige of that ethos remained internally. Now, it appears Google is continuing to dismantle its ethical framework.
In 2018, as the AI race accelerated, Google published its “AI Principles,” outlining applications it would not pursue. I applauded Google for these commitments:
We will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
While some might consider these the bare minimum for a company of Google’s size, many wondered whether the company would uphold them. That cautious optimism has now given way to “I told you so”: Google has revised these “principles.” As Wired reported, Google executives cited the growing use of AI, evolving standards, and geopolitical competition as the reasons for the revision.
What changed? Conspicuously absent is the entire 2018 section listing AI applications Google would not pursue. The revised principles now center on three themes: Bold Innovation; Responsible Development and Deployment; and Collaborative Progress, Together:
Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity’s biggest challenges.
Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it an imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve.
Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively.
This shift illustrates what happens when a company grows to Google’s scale: it bends to prevailing political winds. Recognizing this has been a personal evolution. For decades I followed tech news and got excited about gadgets, viewing them as just that: fun gadgets. I overlooked the ethical responsibilities that come with them. The past ten years have shown me the far-reaching effects these “gadgets” have at scale. All of these technologies (smartphones, tablets, AI, software, and services) have a dual nature. Products can be announced in shiny, inspiring keynotes promising to “make the world a better place,” but they also have a darker side. This is Google’s dark side on display today, and I strongly disapprove. This decision opens the door to AI-powered weapons systems, expanded surveillance, and other ethically troubling applications.
Google’s “AI Principles” in 2018
Google’s “AI Principles” in 2025
Proverbs 16:25: “There is a way that seems right to a man, but its end is the way to death.”
