From "Don't Be Evil" to "Well, Actually...": How Google Quietly Ditched Its Moral Compass

Feb 11, 2025
By Yvette Schmitter

Let me tell you something about ethics in tech, and y'all know I'm about to serve some tea with a side of "hold up, wait a minute." Back in 2015, over 1,000 of the world's leading AI experts and researchers signed an open letter warning us about a "military artificial intelligence arms race" and begged – BEGGED – for a ban on offensive autonomous weapons. Did we listen? Phew... let me tell you what's been happening since then.

While these brilliant minds were waving red flags, Google was out here acting like the Mother Teresa of Silicon Valley with their "Don't be evil" motto. But wait until you hear what happened next.

The Tea

Here's the tea: Google went from "we would NEVER build weapons" to "well, what's your definition of a weapon?" faster than you can say "stock options." Back in 2018, they got caught up in this little situation called Project Maven – using AI to analyze drone footage for the Pentagon. Their employees said "NOT TODAY" and raised holy hell. Google responded with some fancy AI Principles promising they wouldn't build harmful tech or weapons. As a New Yorker, I watch what they do, not what they say, because those principles have gotten more flexible than my Pilates instructor (and she is flexible!).

Ok, let me paint you a picture of what's REALLY going on here. Get ready, because I'm just going to rip the Band-Aid off.

You've got Google out here doing the corporate equivalent of "I'm not touching you" with their military contracts. They're not building weapons, they're building "defense systems." They're not enabling warfare, they're "enhancing security." MMMHMM. Yeah, right.

And why? Because they're watching Amazon and Microsoft cash those government checks like they're at the club making it rain. But here's what I need you to understand:

  • They're playing word games while AI gets more powerful by the day
  • They've got employees choosing between their principles and their paychecks
  • They're trying to convince us that "defensive technology" can't be used offensively (and if you believe that, I've got a bridge to sell you in Brooklyn, and another one in London.)

Let me be clear: This isn't just about business – this is about the SOUL of artificial intelligence. Those 1,000+ experts who signed that letter in 2015? They weren't just being dramatic. They saw exactly where this bullet train was headed, and it's right on schedule. What kind of future are we manifesting when we let profit margins dictate our moral boundaries? When we ignore the very people who BUILT these technologies when they tell us "this is dangerous"?

Now, I want you to feel me on this one. When Google – GOOGLE! – starts backing away from their ethical commitments, that's not just a company decision. That's a moment in history. That's a signal to every other tech company that it's okay to put profits over principles.

Think about it:

  • Every time a tech giant weakens their ethical stance, they're writing a check your children's children will have to cash.
  • The line between "good" and "evil" in tech is getting blurrier than my vision without my long-distance glasses.
  • We're setting precedents way faster than we're setting protections.

The trust we put in these companies is evaporating like my patience with this whole situation.

And let me tell you something else: When we let companies decide their own ethical boundaries, we're essentially letting the fox guard the henhouse while wearing a "Trust Me" t-shirt.

What To Do

Now, I'm going to tell you what you're going to do, because I love you and I want better for us. And unlike that 2015 warning letter that we collectively ignored (how's that working out for us?), this time we're going to take action:

1. Get Loud: Your voice does matter, and it matters more than you know. When these companies try to slide their ethical changes under the radar, make some NOISE. Post about it on Bluesky. Write about it. Talk about it at dinner parties. Make it impossible to ignore, because the rug we continue to sweep stuff under is about to trip us up - big time!

2. Demand Better: We need ethical guidelines that are carved in stone, not written in sand. We need diversity in AI development that looks like the WORLD (you see, I didn't say America). And we need it - yesterday.

3. Check Your Tech: Start asking questions about the technology you use. Who made it? What data trained it? Who benefits from it? Who might it harm? Because... if you're not asking these questions, you're complicit and part of the problem.

4. Build the Future You Want: Support companies and initiatives that align with your values. Put your money where your morals are. I deleted my Facebook and Instagram accounts. And if you're in tech, be the change you want to see – even if it means being the uncomfortable voice in the room. Say something, even if your voice cracks - still speak!

Because here's what I know for certain: The future of AI isn't just about algorithms and data sets. It's about the world we're creating for generations to come. It's about whether we'll use one of humanity's greatest achievements to lift everyone up or push some people down.

When I think about the world I want to leave behind, I think about AI that serves humanity – ALL of humanity. I think about technology that recognizes the beauty in our differences and works to preserve them. I think about ethical guidelines that protect the vulnerable and empower the voiceless.

So, let me ask you this: What kind of ancestor do you want to be? What story do you want your great-grandchildren to tell about the choices we made in this crucial moment? What side of history do YOU want to be on? Because make no mistake – this is our moment. This is our chance to stand up and say, "Not on our watch."

The future is coming, whether we're ready or not. But here's the kicker - WE get to decide what that future looks like. And I, for one, am not about to let Silicon Valley or the tech oligarchs write that story without our input.

So, now, what are YOU going to do about it?