How Google is changing to compete with ChatGPT

What recent changes inside Google say about the future of Gemini and Search. Also: Meta’s head of AI research on AGI, agents, and the compute wars.

Two men have become considerably more powerful inside Google.
The first is Demis Hassabis, who already heads up Google’s AI research and is now better positioned to compete with ChatGPT. The second is Nick Fox, a company veteran who now oversees the company’s cash cow: Search.
Before, Hassabis didn’t have control of the product team that put the models his researchers developed into the world. Now, he oversees the end-to-end experience of Gemini, from the research driving the models to the chatbot people use to access them.
Based on my convos with Google employees, this organizational change is meant to help the company improve the Gemini product faster. Another goal is to avoid past blunders, like when Gemini generated images of racially diverse Nazis. As I wrote at the time:
After talking to sources at Google, I’ve come to the conclusion that these bad Gemini responses slipped through testing because everyone felt rushed to ship. An illustrative example: the photo generation in the Gemini app is not actually powered by the Gemini model. It’s an older, text-to-photo model that was tacked onto the Gemini user-facing experience to get the feature out the door faster.
I’m told that there’s also a lack of alignment between Demis Hassabis’ research team building the underlying models and Prabhakar Raghavan’s search organization that’s putting them into user-facing products.
Alignment problem solved: CEO Sundar Pichai announced on Thursday that the Gemini product group led by VP Sissie Hsiao, who previously reported to Raghavan, is being moved to Hassabis’ Google DeepMind org. “Bringing the teams closer together will improve feedback loops, enable fast deployment of our new models in the Gemini app, make our post-training work proceed more efficiently and build on our great product momentum,” Pichai wrote in a memo to employees that was published on Google’s website.
This org chart change mirrors what Mark Zuckerberg did at Meta earlier this year, when he moved the company’s AI research group, FAIR, into the social media product division led by Chris Cox. As the saying goes, you ship your org chart. If your goal is to turn AI models into actual products that people use, it makes sense to collapse the divide between your research and product teams. (OpenAI is going through this shift in its own messy way.)
Meanwhile, the most lucrative software business in history has a new leader. Nick Fox is the new SVP of Google’s “Knowledge & Information” org, which includes search, ads, maps, and commerce. The group’s previous leader, Prabhakar Raghavan, is staying on as “chief technologist,” which is the kind of title you get at Google when you’re tired and have made the company billions of dollars.
Fox has a low-key profile, even by Google executive standards. He started in 2003, when the company first moved offices to Mountain View. After helping build Google’s ad business in the early days, he worked on some of the company’s more adventurous bets, including its wireless service, Google Fi, and communications products. (RIP Duo and Allo.) During the discovery phase of the Department of Justice’s antitrust lawsuit, he appeared frequently in internal emails about the business of Search.
While he’s a close confidant to Pichai and a known leader inside Google, Fox has never managed a team anywhere near as large as Raghavan’s org. However, people who have worked with him say he’s a diehard Googleologist who knows the dark art of navigating the politics in the company’s upper ranks. “I frequently turn to Nick to tackle our most challenging product questions and he consistently delivers progress with tenacity, speed, and optimism,” wrote Pichai in his note to employees.
In that context, Fox’s move makes sense. For the first time in a while, Search itself is a challenging question. Fox is taking over an organization that can’t afford to coast anymore. Let’s see if he’s up to the challenge.
While I expect to see more of Fox publicly in his new role as the face of Search, it’s becoming increasingly clear that Hassabis is poised to become Pichai’s eventual successor. My sources at Google all seem to agree the CEO job will one day be his (if he wants it, which is another question entirely). Otherwise, the most likely candidates would be Cloud boss Thomas Kurian or YouTube chief Neal Mohan.
1:1 with Meta’s AI leader
Earlier this week, I caught up with Joelle Pineau, Meta’s VP of AI research. Since she oversees one of the top AI research labs in the world, I wanted to hear what she thought about artificial general intelligence (AGI), the rise of AI agents, and more. You can read part of our chat below.
The following conversation has been edited for length and clarity:
What does AGI mean?
I think of AGI as the ability for machines to accomplish a broad set of tasks at a level equal to or exceeding human abilities.
I think there’s a movement toward building instantiations of AGI. I call them agents. Agents are essentially a surface for AGI. You can think of ChatGPT as a very primitive language conversational agent. We’re going to see increasingly more sophisticated agents.
We still have a really long road to go to general intelligence that requires learning a lot about what humans like, what humans don’t like. The question of alignment to individual values, to societal values, is wide open, and we need a lot of people to participate in this conversation. So I do think having agents that people can interact with and play with is going to be super important.
The promise with agents is that AI will be able to reliably do something like interact with my bank account or book a trip for me. When can we actually rely on agents to take actions on our behalf?
I’m a scientist. I may be a little bit more careful than most. But I think it’s going to be a little while until we talk about reliable behavior for our agents. Honestly, we need to build agents that make some mistakes so we can learn from these mistakes. This assumption that the first generation will suddenly be able to do all of these things with very high reliability, I think is optimistic.
How do we interleave the agent’s autonomy with human autonomy? There’s a version of the agent that confirms absolutely everything with you. “Do you want me to send that email? Do you want me to book that trip? Do you want me to make that financial transaction?” At some point, that gets very unpleasant and you’re like, “Why do I have an agent if I have to do all the work of confirming rather than just getting it done?”
There’s a version of it where that agent takes on a huge amount of the decision-making in your life, including financial decisions, including health decisions, including education decisions. The path where we do that reliably is pretty far out. I think, early on, we’ll have agents that aim to do a lot of that.
What is your team seeing on reasoning breakthroughs, especially in light of OpenAI’s latest model and the chain-of-thought approach it shared?
There’s maybe a bit of a misconception among the broad public that reasoning is one thing. You can reason about math. There’s reasoning that’s more like planning. There’s both what we call discrete reasoning, meaning you search through a bunch of symbols to find a solution, and you have reasoning that’s more linguistic reasoning: how many “R’s” are there in [the word] “strawberries,” for example. That’s where chain-of-thought reasoning tends to be quite good. Then there’s multimodal reasoning. Are you trying to ask questions about visual or audio or video content?
I think for the math type of reasoning, the o1 approach makes sense. It’s something that our teams are also familiar with. I don’t think there were any major breakthroughs compared to what’s in the research literature. We don’t have a lot of data that shows that people come to Meta AI to solve university-level math problems. So reasoning about textual information, multimodal information, is something we spend a lot more time on ourselves.
On the fundamental architecture side, do you see anything coming after transformers, or is the current LLM architecture the path forward for the foreseeable future?
We have a pretty good sense of the few really important things that transformers don’t do very well. They kind of get around it somehow by scaling. We also have another proof point, which is the human brain. The human brain has about 85 billion neurons. Llama 3.1 has 405 billion parameters. So there’s something we’re not getting here.
For a company like Meta, we can afford to have an amazing scaling effort on transformers, and we can afford to try other things. I think JEPA, which is Yann LeCun’s architecture, is one hypothesis. We’ve shared some work on our Transfusion model, which is another hypothesis, and there will be more. We’re going to scale them and see where they work, where they don’t work, and we’ll keep on doing that work. And at some point, I suspect something other than transformers will be more useful.
AI researchers have lots of options for where they can work. Obviously, money is a factor, but beyond that, what is the thing that the best researchers you’re trying to hire tell you attracts them to work at a company?
A lot of these researchers genuinely believe that you need to partner with people across the community, and so our open-source strategy is a big one. I would say a year or two ago, the startups had a good amount of compute equivalent to the big companies. We’re on an arc that very soon those startups are not going to be very competitive on compute. We go from a research lab to having a model that’s in the hands of billions of people overnight. Researchers really care about that.
I also think Meta, despite everything that may be written, is a really healthy company. There are people who really care about solving some super hard problems, and decision-making is done in a way that’s transparent. You talk to leadership, you present data, and that speaks when you make decisions. It’s not, from what I hear, as political as other places. And that matters.
Job board
Some interesting (non-Google) tech career moves you may have missed lately:
- OpenAI hired Sébastien Bubeck, Microsoft’s VP of AI research. It also hired Dane Stuckey to be its CISO. He previously held the same role at Palantir.
- Lori Goler, Meta’s longtime head of people, is leaving next summer after more than 16 years at the company. One of her deputies, Janelle Gale, will take on her role.
- Carol Surface, Apple’s chief people officer, is leaving. Deirdre O’Brien, who also oversees retail, will once again oversee HR.
- One of Elon Musk’s trusted leaders, Omead Afshar, has been tapped to oversee Tesla operations in North America and Europe.
- Netflix has named Jeet Shroff, formerly of Epic Games, as its new VP of game technology and portfolio development. (Sounds like strategy changes are underway: that org was recently hit with layoffs.)
- A top Uber engineering executive, Anirban Kundu, is now Instacart’s CTO. With him and CPO Sundeep Jain both leaving Uber, the company has named Sachin Kansal as its new CPO and Praveen Neppalli Naga as CTO.
Elsewhere
- As part of The Verge’s wonderfully nostalgic 2004 story package, here’s my ode to what Facebook represented when it launched 20 years ago and how it has changed (us all) over time. (Facebook cofounder Dustin Moskovitz: “I like this piece.”)
- Yes, Meta did, in fact, fire about two dozen employees for abusing their $25 Grubhub meal stipends to buy things like detergent. And then there were layoffs.
- OpenAI might say that it has reached AGI to get out of its contract with Microsoft. Both companies have hired banks to help work through Microsoft’s equity in OpenAI’s planned shift to a for-profit structure. And Mira Murati’s sudden exit may have played a role in Apple backing out of OpenAI’s latest funding round.
- Drop the “coin,” it’s cleaner: One of Sam Altman’s other startups, World, is still pushing to scan everyone’s eyeballs in exchange for crypto. (Full keynote here.)
- I challenge you to read this story about Magic Leap founder Rony Abovitz’s new startup and understand what it’s about.
- Things aren’t going well over at Automattic, the parent company of WordPress.
- The worst presidential candidate endorsement you’ll read from any person in tech.
If you aren’t already subscribed to Command Line, don’t forget to sign up and get future issues delivered directly to your inbox. You’ll also get full access to the archive, featuring scoops about companies like Meta, Google, OpenAI, and more.
As always, I want to hear from you, especially if you have a tip or feedback. You can ping me securely on Signal. I’m happy to keep you anonymous.
Thanks for subscribing.