Google's Grand Experiment: From Energy to Ecosystem—A 13-Year Observation

Thirteen years ago, as a GeekSquad Advanced Repair Agent, I saw Chromebooks for what they were: cheap, $200 laptops with a measly 16GB of storage. Thin clients, the IT crowd called them. I called them underwhelming. Many others thought the same. Back then, Google’s cloud ambitions manifested as these bare-bones machines—a far cry from the integrated ecosystems I was used to, dominated by Macs and PCs. I knew Google made Android and those software services—Search, Docs, Sheets, Slides—they were fine. But the big picture? I missed it. Google didn’t build hardware like Apple did. To me, they were just… energy. Pure potential, no form. Steve Jobs said computers are a bicycle for the mind. Google was the kinetic energy pushing the bike, not the bike itself. In this analysis, I want to trace Google’s evolution from that pure energy to a company building the bike, the road, the pedals—even the rider.

The shift in my perspective came when I grasped the consumer side of cloud computing—servers, racks, the whole ‘someone else’s computer’ spiel. Suddenly, Chromebooks started to make sense. Cost-effective, they said. All the heavy lifting on Google’s servers, they said. Naïve as I was, I hadn’t yet fully registered Google’s underlying ad-driven empire, the real reason behind the Chromebook push. That revelation led me to a stint as a regional Chromebook rep. A role masquerading as tech, but really, it was sales with a side of jargon. The training retreat? Let’s call it an indoctrination session. The ‘Moonshot Thinking’ video from Google [X]—all inspiration, no product—was the hook. Suddenly, streaming movies and collaborative docs weren’t just features, they were visions. ‘Moonshot thinking,’ I told myself, swallowing the Kool-Aid. Cloud computing, in that moment, seemed revolutionary. I even had ‘office hours’ with Docs project managers, peppering them with questions about real-time collaboration. ‘What if someone pastes the same URL?’ I asked, probably driving them nuts. But I was hooked. Cloud computing, I thought, was the future—or so they wanted me to believe.

That journey, from wide-eyed newbie to… well, slightly less wide-eyed observer, has taught me one thing: Google’s Achilles' heel is execution. They’ve got the vision, the talent, the sheer audacity—but putting it all together? That’s where they stumble. Only in the last three years have they even attempted to wrangle their disparate hardware, software, cloud, and AI efforts into a coherent whole. Too little, too late? Perhaps. Look at the Pixel team: a frantic scramble to catch up, complete with a Jobsian purge of the ‘unpassionate.’ Rick Osterloh, a charming and knowledgeable figurehead, no doubt—but is he a ruthless enough leader? That’s the question. He’s managed to corral the Platforms and Devices division. Yet, the ecosystem still feels… scattered. The Pixel hardware, for all its promise, still reeks of a ‘side project’—a lavish, expensive, and perpetually unfinished side project. The pieces are there, scattered across the table. Can Google finally assemble the puzzle, or will they forever be a company of impressive parts, but no cohesive whole?

After over a decade of observing Google’s trajectory, certain patterns emerge. Chromebooks (bless their budget-friendly hearts), for instance, have settled comfortably into the budget lane: affordable laptops for grade schoolers and retirees. Hardly the ‘sexy’ category Apple’s M-series or those Windows Copilot+ ARM machines occupy, is it? Google’s Nest, meanwhile, envisioned ambient computing years ago. Yet, Amazon’s Alexa+ seems to be delivering on that promise while Google’s vision gathers dust. And let’s talk apps: Google’s own, some of the most popular on both Android and iOS, often perform better on iOS. Yes, that’s changing—slowly. And the messaging app graveyard? Overblown, some say. I say, try herding a family group chat through Google’s ever-shifting messaging landscape. Musical chairs, indeed. But, credit where it’s due, Google Messages is finally showing some long-term commitment. Perhaps the ghosts of Hangouts and Allo are finally resting in peace.

The long view, after thirteen years of observing Google’s sprawling ambitions, reveals a complex picture: immense potential, yet a frustrating pattern of fragmented execution. They’ve built impressive pieces—the AI, the cloud, the hardware—but the promised cohesive ecosystem has remained elusive. Whispers of “Pixel Sense,” Google’s rumored answer to true AI integration, offer a glimmer of hope. (And let’s be clear, these are just rumors—I’m not grading on a curve here.) But, after years of watching disjointed efforts, I find myself cautiously optimistic about the direction Rick Osterloh (knowledgeable, and, some might say, charming) and his newly unified Platforms and Devices division are taking. There’s a sense that, finally, the pieces might be coming together. The vision of a seamlessly integrated Google experience—hardware, software, AI, and cloud—is tantalizingly close. Will they finally deliver? Or will Google continue to be a company of impressive tech demos and unfulfilled promises? Time will tell. But for the first time in a long time, I’m willing to entertain the possibility that Google might just pull it off.

Apple is postponing yet another product because of Siri + Apple Intelligence woes.

More bad news on the Apple front. Apple customers are getting so desperate for new hardware that some are suggesting Apple should just ship the unfinished Apple Smart Home Hub hardware.

Apple should take their time.

You know it’s important if Bloomberg puts out a Google News Showcase for the story.

Siri, You're Breaking Our Hearts (and Our Settings)

It seems Apple’s AI woes continue. Just when we thought things couldn’t get more embarrassing than delaying “Apple Intelligence,” we get this gem. John Gruber, over on Mastodon, shared a screenshot of his recent conversation with Siri, and let’s just say it’s not a good look:

MacOS 15.3.1. Asked Siri “How do you turn off suggestions in Messages?”

Siri responds with instructions that:

(a) Tell me to go to a System Settings panel that doesn’t exist. There is no “Siri & Spotlight”. There is “Apple Intelligence & Siri" and a separate one for “Spotlight”.

(b) Are for Mail, not Messages, even though I asked about Messages, and Siri’s own response starts with “To turn off Siri suggestions in Messages”

Gruber simply asked Siri how to turn off suggestions in Messages on his Mac running macOS 15.3.1. Siri, in its infinite wisdom, first sent him on a wild goose chase to a non-existent “Siri & Spotlight” panel in System Settings. Then, it proceeded to give him instructions for Mail, not Messages!

This is beyond a simple bug; it’s a fundamental failure in understanding basic user requests. And remember, this is the same Siri that Apple wants us to believe is the foundation for their upcoming “Apple Intelligence” revolution.

Gruber himself pointed out the irony, highlighting how Apple touted “product knowledge” as a key feature of their AI, yet Siri can’t even navigate its own settings.

“Product knowledge” is one of the Apple Intelligence Siri features that, in its statement yesterday, Apple touted as a success. But what I asked here is a very simple question, about an Apple Intelligence feature in one of Apple’s most-used apps, and it turns out Siri doesn’t even know the names of its own settings panels.

It’s becoming increasingly clear that Apple’s AI ambitions have outpaced their reality. This latest Siri stumble, coupled with the “Apple Intelligence” delay, paints a picture of a company struggling to keep up in the AI race.

Perhaps it’s time for Apple to take a step back, focus on getting the basics right, and then, maybe then, they can start talking about revolutionizing our AI experience.

Mark Gurman Exposes Apple Intelligence Delay: My Relief and Google's Gain

If I had purchased the iPhone 16 as I’d planned after seeing what Apple teased with Apple Intelligence at WWDC 2024, I’d be furious. Mark Gurman has the scoop on Apple’s upgraded Siri experience, and it’s not good:

Apple Inc.’s turmoil in its AI division reached new heights on Friday, with the company delaying promised updates to the Siri digital assistant for the foreseeable future.

Apple said that features introduced last June, including Siri’s ability to tap into a user’s personal information to answer queries and have more precise control over apps, will now be released sometime in “the coming year.” The iPhone maker hadn’t previously set a public deadline for the capabilities, but they were initially planned for the iOS 18.4 software update this April.

Bloomberg News reported on Feb. 14 that Apple was struggling to finish developing the features and the enhancements would be postponed until at least May — when iOS 18.5 is due to arrive. Since then, Apple engineers have been racing to fix a rash of bugs in the project. The work has been unsuccessful, according to people involved in the efforts, and they now believe the features won’t be released until next year at the earliest.

Apple has reached a new low. Expectations are high for Apple because they made their bed by showcasing such a future-forward AI experience at their annual WWDC event last year. Honestly, as I wrote that, I realized that’s not even the main issue. This is the main issue…

Apple, with its ‘crack marketing team,’ as Craig Federighi once called it, created this impressive ad. The problem? None of these features exist. The upgraded, personal Siri, capable of providing helpful on-device information in a manner similar to Gemini or ChatGPT, is not yet available. I nearly purchased an iPhone 16, hoping this feature would arrive by the end of 2024. It didn’t, and I’m sure many people, unaware of the delay, bought the iPhone 16 expecting these features. Apple is one of the few tech companies that can announce a product with features that later slip without widespread customer backlash on platforms like Reddit or a wave of device returns. Consumers invest significant money in these devices, and I believe they are beginning to realize Apple is not immune to such issues. I used to believe Apple would only announce features ready for immediate and complete delivery. Fool me once, shame on you; fool me twice, shame on me. This time, I avoided being fooled, thanks to Mark Gurman’s reporting on Apple Intelligence.

In the meantime, I’m glad I stuck with Team Pixel and I’m looking forward to the continuation of Google’s Gemini advancements and the rumored on-device (Apple Intelligence-esque) “Pixel Sense” and “Pixie”.

My Mom is Going to Love Scam Detection

Aisha Sharif, Product Manager for Pixel Phone, on The Keyword:

Scam Detection for phone calls, powered by Gemini Nano, protects you from fraud with on-device AI while keeping your conversations private to you. This Pixel-exclusive feature detects conversation patterns in calls commonly used by scammers in real time and will notify you if it senses anything suspicious.

And Scam Detection is now available in Google Messages, too. It uses on-device AI to flag conversational text patterns commonly associated with scams, so it can identify messages that seem harmless, but turn dangerous over time. You’ll receive a real-time warning so you can easily block and report the conversation.

My mom recently retired, and if there’s one thing that raises her stress level, it’s scammers. I bought her a Pixel 8a this past year, after she’d had a Pixel 5 for some time, and she absolutely loves the Call Screening and spam detection features in the Phone app. Just the other day she sent me a screenshot of an E-ZPass texting scam that has been going around. She almost fell victim to it because E-ZPass is an actual highway toll system that operates in her area. Thankfully, she didn’t respond to it or click the link, but now that Scam Detection is coming to Google Messages, she’ll get a large badge alerting her that a message like that is likely a scam.

More from the Google Online Security Blog:

Scam Detection in Google Messages uses powerful Google AI to proactively address conversational scams by providing real-time detection even after initial messages are received. When the on-device AI detects a suspicious pattern in SMS, MMS, and RCS messages, users will now get a message warning of a likely scam with an option to dismiss or report and block the sender.

As part of the Spam Protection setting, Scam Detection on Google Messages is on by default and only applies to conversations with non-contacts. Your privacy is protected with Scam Detection in Google Messages, with all message processing remaining on-device. Your conversations remain private to you; if you choose to report a conversation to help reduce widespread spam, only sender details and recent messages with that sender are shared with Google and carriers.

Scam Detection is only available in English in the U.S., U.K. and Canada and will expand to more countries soon.
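To make that flow a little more concrete, here’s a rough Kotlin sketch of the decision logic Google describes above, as I understand it. To be clear, none of these class or function names are real Google Messages or Gemini Nano APIs; they’re invented purely to illustrate the “on-device only, non-contacts only, reporting is opt-in” behavior.

```kotlin
// Hypothetical sketch only: these names are not real Google Messages or Gemini Nano APIs.

data class Conversation(
    val senderIsContact: Boolean,
    val senderDetails: String,
    val recentMessages: List<String>
)

enum class Verdict { LIKELY_SCAM, OK }

// Stand-in for the on-device classifier (Gemini Nano in the real feature).
// Message content never leaves the phone during this step.
fun classifyOnDevice(messages: List<String>): Verdict =
    if (messages.any { "overdue toll" in it.lowercase() }) Verdict.LIKELY_SCAM else Verdict.OK

// Only runs if the user explicitly chooses "Report & block": per Google's description,
// just the sender details and recent messages with that sender are shared.
fun reportToGoogleAndCarrier(convo: Conversation) {
    println("Reporting ${convo.senderDetails} with ${convo.recentMessages.size} recent messages")
}

fun checkConversation(convo: Conversation, spamProtectionEnabled: Boolean) {
    // On by default, and only applied to conversations with non-contacts.
    if (!spamProtectionEnabled || convo.senderIsContact) return

    if (classifyOnDevice(convo.recentMessages) == Verdict.LIKELY_SCAM) {
        println("⚠️ Likely scam: warn the user, offer Dismiss or Report & block")
        // If the user taps Report & block: reportToGoogleAndCarrier(convo)
    }
}

fun main() {
    val tollScam = Conversation(
        senderIsContact = false,
        senderDetails = "+1 555 0100",
        recentMessages = listOf("E-ZPass: you have an overdue toll, pay now at ...")
    )
    checkConversation(tollScam, spamProtectionEnabled = true)
}
```

The key point is that the classification step happens entirely on the phone, and nothing gets shared with Google or a carrier unless the user explicitly taps report.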

Also, something I found interesting: a cybersecurity firm conducted a funded evaluation of fraud protection features on a number of smartphones and found that Android smartphones, led by the Pixel 9 Pro, scored highest for built-in security features and anti-fraud efficacy. The full report is available as a PDF.

Today, we’re also introducing our newest Labs experiment for Search: AI Mode.

Sundar Pichai on Threads:

You’ll get AI responses using Gemini 2.0’s advanced reasoning, thinking, multimodal capabilities + new ways to explore even more of the web. We’re rolling out AI Mode to Google One AI Premium subscribers today, opt in on Labs. And just like AI Overviews, AI Mode will get better with time and feedback.

A pretty big shakeup to the web and how search works. Placing AI directly on one of the most popular websites on the planet is also a great way to train your AI and fuel even more advancements.

Amazon is launching Alexa.com and a new app for Alexa+ (another subscription 😮‍💨), which is basically a home for all of Amazon’s AI stuff. It seems meh to me, but it feels aimed at the many iOS users who rely on Echo devices.

Google's SafetyCore: Your Phone's New AI Bouncer (with a Side of Truth)

Screenshot: the Google Play Store listing for the Android System SafetyCore app, with options to uninstall and join the beta program.

So there has been a lot of online chatter about Google’s newly released app called Android System SafetyCore, which is being downloaded onto a lot of Android devices, mostly those running Android 9 and up. And there’s been a lot of misinformation about it, so I figured I’d put some correct information out on the web. In short, it’s like having a bouncer for your phone. You can read the entire blog post from Google about it on their Security Blog, but I’ll try to explain it in simpler terms. Google lists the following new protections:

  1. Enhanced detection protects you from package delivery and job scams.
  2. Intelligent warnings alert you about potentially dangerous links.
  3. Controls to turn off messages from unknown international senders.
  4. Sensitive Content Warnings give you control over seeing and sending images that may contain nudity.
  5. More confirmation about who you’re messaging.

So this bouncer uses AI to spot shady stuff like spam, scams, malware, and even those NSFW pics (yikes!) in your messages and apps. The best part? It does all this without snitching to Google or anyone else. Think of it like a super-smart security guard who can spot trouble without calling the cops. In other words, it’s not sending your information to anyone. ANYONE.

Now, some people have been mistakenly thinking, “Isn’t this like that client-side scanning thing Apple tried to pull?” Nah, not even close. That was all about scanning your pics and reporting potentially illegal stuff, which was a major privacy no-no.

Android’s SafetyCore is different. It keeps everything on your phone and doesn’t share anything with anyone. It’s more like Apple’s Communication Safety feature in iMessage, which warns kids about sensitive content but doesn’t share anything with Apple.

Unfortunately, like I said, there’s been some misinformation floating around about SafetyCore, with some folks calling it “spyware.” But that’s simply not true. As the privacy-focused folks at GrapheneOS put it:

The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

The GrapheneOS team are experts in this field, and they clearly state that SafetyCore doesn’t share your data. So, you can rest assured that your privacy is protected.

The GrapheneOS team does wish Google would make the whole thing open source, which would increase transparency and trust. I agree with this too, but here’s the thing: apps have been able to do this AI security stuff for a while now, but they usually send your data to their servers. SafetyCore keeps everything local, which is a big win for privacy.
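If you’re wondering what “on-device models usable by applications” actually looks like in practice, here’s a hypothetical Kotlin sketch of that pattern. SafetyCore doesn’t have a public API that I’m aware of, so LocalClassifier, ContentLabel, and everything else here are invented for illustration; the point is simply that the app gets a label back locally and decides what to show you, with no network call involved.

```kotlin
// Hypothetical illustration of the pattern GrapheneOS describes. None of these types
// are real SafetyCore APIs; they're invented to show the shape of local classification.

enum class ContentLabel { SPAM, SCAM, MALWARE_LINK, SENSITIVE_IMAGE, CLEAN }

interface LocalClassifier {
    // Runs entirely on-device: the content is never sent to a server.
    fun classify(content: ByteArray): ContentLabel
}

class MessagingApp(private val classifier: LocalClassifier) {
    // The app asks for a label and decides what to do with it; nothing is reported anywhere.
    fun onIncomingMessage(body: String): String =
        when (classifier.classify(body.toByteArray())) {
            ContentLabel.CLEAN -> body
            else -> "⚠️ This message looks suspicious.\n$body"
        }
}

fun main() {
    // Toy stand-in for the on-device model, just for the demo.
    val dummyModel = object : LocalClassifier {
        override fun classify(content: ByteArray) =
            if ("free prize" in String(content).lowercase()) ContentLabel.SCAM else ContentLabel.CLEAN
    }
    println(MessagingApp(dummyModel).onIncomingMessage("You won a free prize! Click here."))
}
```

Cloud-based filters perform that same classification step on a server, which is exactly the difference the GrapheneOS folks are pointing at.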

Okay, so circling back to that “not snitching to the cops” part: SafetyCore isn’t about reporting illegal stuff to the authorities. It’s simply about giving your phone the ability to spot potentially harmful stuff and give you a heads up. Remember that and take that how you will.

So, there you have it! The TL;DR is Google’s Android System SafetyCore seems like a pretty sweet deal for boosting your phone’s security and privacy. Don’t let the misinformation scare you away from a potentially useful tool. It’ll be interesting to see how it evolves and how other apps start using it.

Gemini can now reference past chats. Google announced this updated feature for Gemini Advanced subscribers (via the Google One AI Premium plan) on Gemini for web and mobile.

No more having to start a new chat from scratch with the same info.

Google is updating Gemini at an impressively rapid pace.