Category: AI
You are viewing all posts from this category, beginning with the most recent.
Chrome at a Crossroads: Antitrust Fallout and the Future of Google's Browser
The tech world is watching closely as the U.S. Department of Justice (DOJ) and Google battle over the future of the internet giant’s search dominance. Following a landmark ruling that found Google illegally maintained a monopoly in online search, the focus has shifted to potential remedies, and one proposal has sent ripples through the industry: forcing Google to sell its ubiquitous Chrome browser.
The Antitrust Verdict and Proposed Fixes
In August 2024, U.S. District Judge Amit Mehta delivered a significant blow to Google, ruling that the company had indeed acted unlawfully to protect its monopoly in the general search market. A key finding centered on Google’s multi-billion dollar agreements with companies like Apple, Mozilla, and various Android device manufacturers to ensure Google Search was the default option, effectively boxing out competitors (CBS News, TechPolicy.Press).
Now, in the ongoing remedy phase of the trial (as of April 2025), the DOJ is arguing for significant structural changes to restore competition. Their most drastic proposal? Requiring Google to divest Chrome, the world’s most popular web browser with billions of users (Mashable). The DOJ contends that Chrome serves as a critical “entry point” for searches (accounting for roughly 35% according to some reports) and that selling it is necessary to level the playing field (CBS News, MLQ.ai). Other proposed remedies include banning Google from paying for default search engine placement and requiring the company to share certain user data with rivals to foster competition.
Potential Suitors Emerge
The possibility of Chrome hitting the market, however remote, has prompted several companies to express interest during court testimony. Executives from legacy search player Yahoo, and newer AI-focused companies OpenAI and Perplexity, have all indicated they would consider purchasing Chrome if Google were forced to sell (Courthouse News Service, India Today).
Yahoo Search’s General Manager, Brian Provost, testified that acquiring Chrome would significantly boost their market share and accelerate their own browser development plans (Tech Times). OpenAI’s Head of Product for ChatGPT, Nick Turley, suggested acquiring Chrome could help “onramp” users towards AI assistants (Courthouse News Service). Perplexity’s Chief Business Officer, Dmitry Shevelenko, stated his belief that Perplexity could operate Chrome effectively without diminishing quality or charging users, despite an initial reluctance to testify for fear of retribution from Google (The Verge via Reddit).
An Independent Future for Chrome?
While the prospect of established companies like Yahoo or disruptive forces like OpenAI taking over Chrome is intriguing, it raises concerns about simply swapping one dominant player’s control for another. Google, naturally, opposes the divestiture, arguing it would harm users, innovation, and potentially jeopardize the open-source Chromium project that underpins Chrome and many other browsers (Google Blog).
There’s a strong argument to be made that the ideal outcome wouldn’t involve another tech giant acquiring Chrome. Instead, perhaps a company like Perplexity, which is challenging the traditional search paradigm, could be a suitable steward. Even better, envisioning Chrome transitioning to an independent entity, perhaps governed similarly to the non-profit Mozilla Foundation (which oversees Firefox) or maintained purely as an open-source project like Chromium itself, feels like the most pro-competitive and pro-user path. This would prevent the browser – a critical piece of internet infrastructure, especially vital to the Android ecosystem – from becoming a tool solely to funnel users into a specific ecosystem, whether it’s Google’s, Microsoft’s, OpenAI’s, or Yahoo’s. An independent Chrome could focus purely on being the best possible browser, fostering true competition and innovation across the web.
The remedy hearings are expected to conclude in the coming weeks, with Judge Mehta likely issuing a decision by late summer 2025. However, Google is expected to appeal any adverse ruling, meaning the final fate of Chrome and the resolution of this antitrust saga could still be years away (PBS NewsHour). Until then, the tech industry, developers, and billions of users worldwide will be watching with anticipation.
Wait, Google's AI is Trying to Talk to Dolphins Now? Cool!
Okay, so you know how dolphins make all those cool clicking and whistling sounds? Turns out, it’s super complex, like their own secret language. Scientists have been trying to crack the code for ages, but it’s tough because the ocean is noisy, and dolphin chatter is complicated. Enter Google AI with something called DolphinGemma. It’s basically a smart AI model they’ve built specifically to listen in and try to understand what dolphins might be saying.
So, how does it actually work? Instead of scientists needing tons of custom, expensive gear, the article mentions they actually used Google Pixel smartphones as part of the setup! This is pretty clever because using phones makes the whole system easier to maintain, uses less power, and shrinks the cost and size – huge wins when you’re doing research out in the middle of the ocean. Then, the DolphinGemma AI does the heavy lifting on the sound analysis. Think of it like giving the AI a massive playlist of dolphin sounds recorded underwater. The AI listens to everything – all the clicks, whistles, and background noise. It uses clever machine learning tricks (based on Google’s Gemma AI tech) to start picking out patterns all by itself. It learns to spot the difference between random ocean noise and actual dolphin calls, and even starts figuring out which sounds might be important.
The really neat part is that DolphinGemma learns directly from the raw sound waves. It doesn’t need humans to tell it “this is a whistle” or “that’s a click” beforehand. It just listens and learns, kind of like how a baby learns language by hearing it all the time. This means it might catch subtle things in their calls that humans could easily miss. The goal is to get good enough to identify different types of calls and maybe even tell individual dolphins apart just by their voice! The AI’s predictive power can also help researchers react faster during interactions, making the study more fluid.
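The label-free idea in that last paragraph, finding structure in unlabeled audio, can be illustrated with a deliberately tiny sketch. To be clear, this is not how DolphinGemma actually works (that is a Gemma-based audio model); it is just a stdlib-only toy showing that two synthetic “call types” can be pulled apart by clustering a single crude acoustic feature, with no labels ever provided:

```python
import math
import random

def tone(freq, n=512, rate=8000):
    """Synthetic stand-in for one recorded call: a pure tone at `freq` Hz."""
    return [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

def zero_crossing_rate(signal):
    """Crude acoustic feature: fraction of sample pairs where the waveform flips sign."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)

def cluster_1d(values, iters=25):
    """Tiny 2-means on one feature, initialized at the extremes for stability."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo = sum(v for v, l in zip(values, labels) if l == 0) / max(labels.count(0), 1)
        hi = sum(v for v, l in zip(values, labels) if l == 1) / max(labels.count(1), 1)
    return labels

random.seed(0)
# An unlabeled mix: ten low "whistle-like" clips, then ten high "click-like" clips.
clips = [tone(400 + random.uniform(-30, 30)) for _ in range(10)]
clips += [tone(3000 + random.uniform(-30, 30)) for _ in range(10)]
labels = cluster_1d([zero_crossing_rate(c) for c in clips])
# The two synthetic call types land in separate clusters, no annotation needed.
```

A real system like DolphinGemma learns its features end-to-end from raw waveforms rather than using a hand-picked one, but the underlying principle is the same: regularities in unlabeled sound can be discovered rather than hand-annotated.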
Beyond just dolphins, think about where else this kind of smart listening tech could be useful. The basic idea of teaching AI to pick out meaningful patterns in complex sounds could definitely be applied elsewhere in nature. Imagine, for instance, using something similar to better understand all the different animal sounds happening in a busy rainforest environment. It might help track hard-to-see animal populations or get a better sense of the ecosystem’s health just by listening in.
Now, are we going to have full conversations with Flipper tomorrow? Probably not! Getting a complete “dolphin dictionary” is still way off. But, this DolphinGemma thing is a huge step. Understanding dolphins better could tell us so much about how they live, hang out, and how they’re doing in the ocean. Plus, the tech they built for this could help us understand other chatty animals too. It’s pretty amazing to see AI being used to connect us more with the awesome creatures we share the planet with!
Inside Google's Deep Mind
“Inside Google’s Two-Year Frenzy to Catch Up With OpenAI” by Paresh Dave and Arielle Pardes for WIRED
To build the new ChatGPT rival, codenamed Bard, former employees say Hsiao plucked about 100 people from teams across Google. Managers had no choice in the matter, according to a former search employee: Bard took precedence over everything else. Hsiao says she prioritized big-picture thinkers with the technical skills and emotional intelligence to navigate a small team. Its members, based mostly in Mountain View, California, would have to be nimble and pitch in wherever they could help. “You’re Team Bard,” Hsiao told them. “You wear all the hats.”
In January 2023, Pichai announced the first mass layoffs in the company’s history—12,000 jobs, about 7 percent of the workforce. “No one knew what exactly to do to be safe going forward,” says a former engineering manager. Some employees worried that if they didn’t put in overtime, they would quickly lose their jobs. If that meant disrupting kids’ bedtime routines to join Team Bard’s evening meetings, so be it.
I remember, not long ago, when Bard was announced, feeling like Google was being left behind. Bard felt both overly safe and rushed, and plenty of people were calling OpenAI’s ChatGPT the next Google, if not a full replacement for Google. Reading through this profile convinces me that OpenAI, the new player in the game of innovation with fewer guardrails, was exactly what Google needed to ignite competition within its own walls.
Josh Woodward, lead on Google Labs, explains the vigor within Google:
“Around 6:30 one evening in March 2024, two Google employees showed up at Josh Woodward’s desk in the yellow zone of Gradient Canopy. Woodward leads Google Labs, a rapid-launch unit charged with turning research into entirely new products, and the employees were eager for him to hear what they had created. Using transcripts of UK Parliament hearings and the Gemini model with long context, they had generated a podcast called Westminster Watch with two AI hosts, Kath and Simon. The episode opened with Simon speaking in a cheery British accent: “It’s been another lively week in the House, with plenty of drama, debate, and even a dash of history.” Woodward was riveted. Afterward, he says, he went around telling everyone about it, including Pichai.
The text-to-podcast tool, known as NotebookLM Audio Overviews, was added to the lineup for that May’s Google I/O conference. A core team worked around the clock, nights and weekends, to get it ready, Woodward told WIRED. “I mean, they literally have listened at this point to thousands and thousands” of AI-generated podcasts, he said.”
I personally thought NotebookLM was incredibly impressive when I attended I/O 2024. It felt like Google Duplex, but on steroids. It felt like Google had been woken from a slumber of complacency by competition that was really gunning for it. Just looking at their AI Journey page shows how quickly things ramped up after 2021. I think Google is still just getting started. My prediction is that within the next two years, Google is going to ship even more fully fleshed-out products across their Platform and Services team.
Google Please Do This At Google I/O 🙏🏾
John Gruber provides a genius script for what Google should do at Google I/O 2025:
Presenter: This is a live demo, on my Pixel 9. I need to pick my mom up at the airport and she sent me an email with her flight information. [Invokes Gemini on phone in hand…] Gemini, when is my mom’s flight landing?
Gemini: Your mom’s flight is on time, and arriving at SFO at 11:30.
Presenter: I don’t always remember to add things to my calendar, and so I love that Gemini can help me keep track of plans that I’ve made in casual conversation, like this lunch reservation my mom mentioned in a text. [Invokes Gemini…] What’s our lunch plan?
Gemini: You’re having lunch at Waterbar at 12:30.
Presenter: How long will it take us to get there from the airport?
People have been piling on Apple for the massive misstep of Apple Intelligence: marketing a product that was never ready in order to sell another product. As John Gruber said in his blog post, Google should do a live demo of the exact things Apple said Siri would be able to do, in front of a live audience on a live stream, to truly prove that Gemini and the Google Pixel are great products and a match made in tech heaven.
As John Gruber said, Apple just handed Google a potential marketing gift.
Google's Gemini Transition: A Necessary Step, But Execution is Key
Honestly, I wasn’t sure what Google’s long-term plans were for Assistant. Given their history with sunsetting projects, there was a bit of skepticism. But then came Gemini, and the shift is happening. Let’s see how it unfolds.
Google said in a blog post that ‘millions’ of users have transitioned to Gemini. While this is a positive sign, the scale within the broader Android user base is something to consider. The focus on ‘most requested features’ like music and timers highlights the practical aspects of the update. The potential for ‘free-flowing, multimodal conversations’ and ‘deep research’ remains an area of interest. The introduction of features like Gemini 2.0 Flash Thinking and ‘Memory’ adds to the evolving capabilities of the platform. AI development is complex, and balancing innovation with timely delivery is a challenge.
The AI revolution, sparked by OpenAI, has certainly changed the landscape. While Apple has hinted at contextually intelligent assistants and fumbled so far, Google’s approach with Gemini 2.0, Android, and Gemini Nano suggests a comprehensive strategy. They have the AI capabilities, the software, and the hardware, which is a significant advantage. The timeline for execution is a point of interest. Google’s announcements at I/O, including Project Astra, initially suggested a year-end rollout. The current mid-year update indicates a delay. While delays are common in tech, timely execution is always preferred.
Ultimately, Google’s move to replace Assistant with Gemini is a necessary step in the age of AI. They have the pieces, but the execution will determine their success. If they can deliver on the promise of a truly intelligent and helpful assistant, they could redefine how we interact with our devices. But if they stumble, they risk falling behind in a rapidly evolving market. The next few months will be crucial.
Google's Grand Experiment: From Energy to Ecosystem—A 13-Year Observation
Thirteen years ago, as a GeekSquad Advanced Repair Agent, I saw Chromebooks for what they were: cheap, $200 laptops with a measly 16GB of storage. Thin clients, the IT crowd called them. I called them underwhelming. Many others thought the same. Back then, Google’s cloud ambitions manifested as these bare-bones machines—a far cry from the integrated ecosystems I was used to, dominated by Macs and PCs. I knew Google made Android, and its software services—Search, Docs, Sheets, Slides—were fine. But the big picture? I missed it. Google didn’t build hardware like Apple did. To me, they were just… energy. Pure potential, no form. Steve Jobs said computers are a bicycle for the mind. Google was the kinetic energy pushing the bike, not the bike itself. In this analysis, I want to trace Google’s evolution from that pure energy to a company building the bike, the road, the pedals—even the rider.
The shift in my perspective came when I grasped the consumer side of cloud computing—servers, racks, the whole ‘someone else’s computer’ spiel. Suddenly, Chromebooks started to make sense. Cost-effective, they said. All the heavy lifting on Google’s servers, they said. Naïve as I was, I hadn’t yet fully registered Google’s underlying ad-driven empire, the real reason behind the Chromebook push. That revelation led me to a stint as a regional Chromebook rep. A role masquerading as tech, but really, it was sales with a side of jargon. The training retreat? Let’s call it an indoctrination session. The ‘Moonshot Thinking’ video from Google [X]—all inspiration, no product—was the hook. Suddenly, streaming movies and collaborative docs weren’t just features, they were visions. ‘Moonshot thinking,’ I told myself, swallowing the Kool-Aid. Cloud computing, in that moment, seemed revolutionary. I even had ‘office hours’ with Docs project managers, peppering them with questions about real-time collaboration. ‘What if someone pastes the same URL?’ I asked, probably driving them nuts. But I was hooked. Cloud computing, I thought, was the future—or so they wanted me to believe.
That journey, from wide-eyed newbie to… well, slightly less wide-eyed observer, has taught me one thing: Google’s Achilles' heel is execution. They’ve got the vision, the talent, the sheer audacity—but putting it all together? That’s where they stumble. Only in the last three years have they even attempted to wrangle their disparate hardware, software, cloud, and AI efforts into a coherent whole. Too little, too late? Perhaps. Look at the Pixel team: a frantic scramble to catch up, complete with a Jobsian purge of the ‘unpassionate.' Rick Osterloh, a charming and knowledgeable figurehead, no doubt—but is he a ruthless enough leader? That’s the question. He’s managed to corral the Platform and Services division. Yet, the ecosystem still feels… scattered. The Pixel hardware, for all its promise, still reeks of a ‘side project’—a lavish, expensive, and perpetually unfinished side project. The pieces are there, scattered across the table. Can Google finally assemble the puzzle, or will they forever be a company of impressive parts, but no cohesive whole?
After over a decade of observing Google’s trajectory, certain patterns emerge. Chromebooks (bless their budget-friendly hearts), for instance, have settled comfortably into the budget lane: affordable laptops for grade schoolers and retirees. Hardly the ‘sexy’ category Apple’s M-series or those Windows Copilot+ ARM machines occupy, is it? Google’s Nest, meanwhile, envisioned ambient computing years ago. Yet, Amazon’s Alexa+ seems to be delivering on that promise while Google’s vision gathers dust. And let’s talk apps: Google’s own, some of the most popular on both Android and iOS, often perform better on iOS. Yes, that’s changing—slowly. And the messaging app graveyard? Overblown, some say. I say, try herding a family group chat through Google’s ever-shifting messaging landscape. Musical chairs, indeed. But, credit where it’s due, Google Messages is finally showing some long-term commitment. Perhaps the ghosts of Hangouts and Allo are finally resting in peace.
The long view, after thirteen years of observing Google’s sprawling ambitions, reveals a complex picture: immense potential, yet a frustrating pattern of fragmented execution. They’ve built impressive pieces—the AI, the cloud, the hardware—but the promised cohesive ecosystem has remained elusive. Whispers of “Pixel Sense,” Google’s rumored answer to true AI integration, offer a glimmer of hope. (And let’s be clear, these are just rumors—I’m not grading on a curve here.) But, after years of watching disjointed efforts, I find myself cautiously optimistic about the direction Rick Osterloh (knowledgeable, and, some might say, charming) and his newly unified Platform and Services division are taking. There’s a sense that, finally, the pieces might be coming together. The vision of a seamlessly integrated Google experience—hardware, software, AI, and cloud—is tantalizingly close. Will they finally deliver? Or will Google continue to be a company of impressive tech demos and unfulfilled promises? Time will tell. But for the first time in a long time, I’m willing to entertain the possibility that Google might just pull it off.
Apple is postponing yet another product because of Siri + Apple Intelligence woes.
More bad news on the Apple front. Apple customers are getting so desperate for new hardware that some are suggesting Apple should just ship the unfinished Smart Home Hub hardware.
Apple should take their time.
Siri, You're Breaking Our Hearts (and Our Settings)
It seems Apple’s AI woes continue. Just when we thought things couldn’t get more embarrassing than delaying “Apple Intelligence,” we get this gem. John Gruber, over on Mastodon, shared a screenshot of his recent conversation with Siri, and let’s just say it’s not a good look:
MacOS 15.3.1. Asked Siri “How do you turn off suggestions in Messages?”
Siri responds with instructions that:
(a) Tell me to go to a System Settings panel that doesn’t exist. There is no “Siri & Spotlight”. There is “Apple Intelligence & Siri” and a separate one for “Spotlight”.
(b) Are for Mail, not Messages, even though I asked about Messages, and Siri’s own response starts with “To turn off Siri suggestions in Messages”
Gruber simply asked Siri how to turn off suggestions in Messages on his MacOS 15.3.1. Siri, in its infinite wisdom, first sent him on a wild goose chase to a non-existent “Siri & Spotlight” panel in System Settings. Then, it proceeded to give him instructions for Mail, not Messages!
This is beyond a simple bug; it’s a fundamental failure in understanding basic user requests. And remember, this is the same Siri that Apple wants us to believe is the foundation for their upcoming “Apple Intelligence” revolution.
Gruber himself pointed out the irony, highlighting how Apple touted “product knowledge” as a key feature of their AI, yet Siri can’t even navigate its own settings.
“Product knowledge” is one of the Apple Intelligence Siri features that, in its statement yesterday, Apple touted as a success. But what I asked here is a very simple question, about an Apple Intelligence feature in one of Apple’s most-used apps, and it turns out Siri doesn’t even know the names of its own settings panels.
It’s becoming increasingly clear that Apple’s AI ambitions have outpaced their reality. This latest Siri stumble, coupled with the “Apple Intelligence” delay, paints a picture of a company struggling to keep up in the AI race.
Perhaps it’s time for Apple to take a step back, focus on getting the basics right, and then, maybe then, they can start talking about revolutionizing our AI experience.

Mark Gurman Exposes Apple Intelligence Delay: My Relief and Google's Gain
If I had purchased the iPhone 16 as I had planned to after seeing what Apple teased with Apple Intelligence at WWDC 2024, I’d be furious. Mark Gurman has the scoop on Apple’s upgraded Siri experience, and it’s not good:
Apple Inc.’s turmoil in its AI division reached new heights on Friday, with the company delaying promised updates to the Siri digital assistant for the foreseeable future.
Apple said that features introduced last June, including Siri’s ability to tap into a user’s personal information to answer queries and have more precise control over apps, will now be released sometime in “the coming year.” The iPhone maker hadn’t previously set a public deadline for the capabilities, but they were initially planned for the iOS 18.4 software update this April.
Bloomberg News reported on Feb. 14 that Apple was struggling to finish developing the features and the enhancements would be postponed until at least May — when iOS 18.5 is due to arrive. Since then, Apple engineers have been racing to fix a rash of bugs in the project. The work has been unsuccessful, according to people involved in the efforts, and they now believe the features won’t be released until next year at the earliest.
Apple has reached a new low. Expectations are high for Apple because they made their bed by showcasing such a future-forward AI experience at their annual WWDC event last year. Honestly, as I wrote that, I realized that’s not even the main issue. This is the main issue…
Apple, with its ‘crack marketing team,’ as Craig Federighi once called it, created this impressive ad. The problem? None of these features exist. The upgraded personal Siri, capable of providing helpful on-device information in a manner similar to Gemini or ChatGPT, is not yet available. I nearly purchased an iPhone 16, hoping this feature would be available by the end of 2024. Unfortunately, it wasn’t, and I’m sure many people, unaware of the feature’s delay, purchased the iPhone 16 expecting these features. Apple is one of the few tech companies that can release a product with delayed features without widespread customer backlash on platforms like Reddit or device returns. Consumers invest significant money in these devices, and I believe they are beginning to realize Apple is not immune to such issues. I used to believe Apple would only announce features ready for immediate and complete delivery. Fool me once, shame on you; fool me twice, shame on me. This time, I avoided being fooled, thanks to Mark Gurman’s reporting on Apple Intelligence.
Just to provide some visual context on the disconnect between Apple marketing and the reality of Apple AI. This TV ad for Apple Intelligence was released 5 months ago. It still cannot do what is shown here. https://t.co/uC2qmHaVpe
— Mark Gurman (markgurman) March 2, 2025
In the meantime, I’m glad I stuck with Team Pixel and I’m looking forward to the continuation of Google’s Gemini advancements and the rumored on-device (Apple Intelligence-esque) “Pixel Sense” and “Pixie”.
