Big tech is suffering from the incumbent's disease.
What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.
Google et al. were at one point pinnacle technology companies too, but that was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.
Were I the CEO of a company like that, I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative, however, is to definitely fail.
For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for the answers people look for.
Once you have a golden goose, the risk taking innovators who built the thing are replaced by risk averse managers who protect it. Not killing the golden goose becomes priority 1, 2, and 3.
I think this is the steel man of the “founder mode” conversation that people were obsessed with a year ago: people obsessed with “process” who are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company’s goals.
This, but also: it's not just the managers in the teams that build/"protect" it. Really it's the leadership above them, echoing your parent comment.
I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products based on nothing but a title and a very short conversation about it. Of course none of these estimates make any sense in any way. But all of 2026 is going to be decided on this. Sort of.
Now, if you just let competent people build things as they come up - you know, the kind of things I'd do if you just told me what was important and let me get on with it (with both a team and the various AI tooling we're allowed to use) - then we'd be able to build way more than if you made us estimate and then later commit to it.
It's way different if you make me commit to building feature X when I have zero idea if and how to make it possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.
Case in point: In my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've achieved with years worth of accumulated arguing for team capacity. All in a few weeks.
Feels like this is the fundamental flaw with a lot of things not just in the private sector, but the public one too.
Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.
It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?
I take your broader point but personally I feel like it’s ok if the FDA is cautious. The incentives that bias towards rejection may be “not killing people”.
What about the people who die because a safe and effective drug that could have saved their life got rejected? The problem is that there's a fundamental asymmetry here - those deaths are invisible but deaths from a bad drug that got approved are very visible.
I mean, drugs are different from consumer technology. Instagram isn’t great but it doesn’t cause birth defects. Also, things like the compassionate release of HIV drugs in studies show the govt can see the nuance here with enough pressure.
I deliberately chose the FDA here specifically because of this. The problem here is that on a societal level, we have to be willing to tolerate some risk. If a drug could have saved many, but is rejected because of occasional complications, that sounds like a poor cost benefit analysis.
You have a flawed understanding of the FDA pharmaceutical approval process. There is no bias towards either rejection or approval. If a drug application checks all the required boxes then it will be approved.
I think the reason why some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trials data they will accept. A couple decades ago they sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time consuming but we can have pretty high confidence that the results are solid.
For patients who can't wait there is the Expanded Access (compassionate use) program.
Setting up a separate insulated internal organization to pursue disruptive innovations is basically what Clayton Christensen recommended in "The Innovator's Dilemma" back in 1997. It's what IBM did to successfully develop the original PC.
Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).
But X isn't really an insulated org... it has close ties with other parts of Google. It shares the corporate infra, and it's not hard to get inside and poke around. It has to be, because it's intended to create new products that get commercialized through Google or other Alphabet companies.
A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.
I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communication back to the mothership, maintaining partial ownership and occasionally acquiring these external startups.
I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.
MSFT were the masters of this technique (spin off a startup, acquire it after it proves viable) for decades, but sadly they stopped.
Even internal to MS I worked on 2 teams that were 95% independent from the mothership, on one of them (Microsoft Band) we even went to IKEA and bought our own desks.
Pretty successful in terms of getting a product to market (Band 1 and 2 all up had, IIRC, $50M in funding compared to the Apple Watch's billion), but the big-company politics still got us in the end.
Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.
There are varying degrees of insulation. I'm not convinced that Calico is a good example of Christensen's recommendations. It seems like a vanity research project sponsored by a Google founder rather than an internal startup intended to bring a disruptive innovation to market.
And then they sat on it for half a decade because they worried it would disrupt their search empire. Google's invention of transformers is a top-10 example of the innovator's dilemma.
Used it to do things? This seems like a weird question. OpenAI took about the same amount of time to go big as well (Sam was excited about OpenAI in 2017, but it took 5+ years for it to pan out into something used by people).
I think the point is that they hoarded the technology for internal use instead of opening it up to the public, like OpenAI did with ChatGPT, thus kicking off the current AI revolution.
As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.
So I started at Google in 2020 (after Sam closed our lab down in 2017 to focus on OpenAI), and if they were hoarding it, I at least had no clue about it. To be clear, my perspective is still limited.
Fair enough, maybe a better way to put it is: why was the current AI boom sparked by ChatGPT and not something from Google? It's clear in retrospect that Google had similar capabilities in LaMDA, the precursor to Gemini. As I recall it was even announced a couple years before ChatGPT but wasn't released (as Bard?) until after ChatGPT.
LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)
Google had been burned badly in multiple previous launches of ML-based products and their leadership was extremely cautious about moving too quickly. It was convenient for Google that OpenAI acted as a first mover so that Google could enter the field after there was some level of cultural acceptance of the negative behaviors. There's a whole backstory where Noam Shazeer had come up with a bunch of nice innovations and wanted to launch them, but was only able to do so by leaving and launching through his startup- and then returned to Google, negotiating a phenomenal deal (Noam has been at Google for 25 years and has been doing various ML projects for much of that time).
Thanks for the helpful context. Google being risk averse is definitely a common criticism I've heard. I can't think of what previous problematic launches could have been, but ironically all the ones I remember offhand were after the release of Gemini!
I think "hoarding" is the wrong connotation. They were happy to have it be a fun research project alongside AlphaGo while they continued making money from ads.
Pre-ChatGPT, OpenAI produced impressive RL results, but their pivot to transformers was not guaranteed. With all the internet's data, infinite money, and ~800x more people, Google's internal LLMs were meh at best, probably because innovators like Radford would constantly have been snubbed by entrenched leaders (which almost happened at OpenAI).
For “as insulated as possible”, I’d personally start a whole new corporate entity, like Verizon did with Visible.
It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.
IIRC Intuit did that for QBO. Put a new team off-site and everything. The story I read is old (maybe from a business book) and my searches turned up nothing.
From what I remember it was also about splitting the financial reporting - so the upstart team isn't compared to the incumbent but to other early-stage teams. Lets them focus on the key metrics for their stage of the game.
Which is amusing if you look at Apple's product lines: across each there are several decisions and examples with specs/features that are clearly about delineation and preventing cannibalization.
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible.
Didn't Netflix do this when they went from DVDs to online streaming?
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.
Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new "team of the future" and put existing employees in "not the team of the future".
Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.
I've been on the other end of this: brought into a company, onto a team meant to replace an older technology stack, while the existing devs continued with what was labeled legacy. There were a lot of bad vibes.
> For example Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for answers people look for.
Search is not a commodity. Search providers other than Google are only marginally used because Google is so dominant. At the same time, once LLM companies can start providing a better solution to the actual job of finding answers to user queries, Google's dominance is disrupted and their future business is no longer guaranteed. Maintaining Google's search infra to serve as a search backbone is not a big enough business for Google.
I get better results than Google on segments of Common Crawl using a desktop computer and a research model.
Given that Google has decades of scrapes, and more than four GPUs to work with, they can do a better job than me. That I beat them right now is nothing short of embarrassing, bordering on an existential threat.
> Search, after BERT, is very much a commodity.
> I get better results than Google on segments of Common Crawl using a desktop computer and a research model.
For data which hasn't changed since knowledge cutoff - for sure, but for real life web search, being able to get fresh data is a hard requirement.
Your intuition is right. I work at a big corp right now and the average age in the operations department is probably just under 50. That's not to say age is bad, however... these people have never worked anywhere else.
They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.
Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.
You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.
The younger people that come into the org all leave within 1-2 years because no one will believe them when they (rightly) sound the whistle saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."
Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss 3 months from now and we'll railroad you in our planning meetings.
When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.
> Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.
Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (two of them fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to... tell themselves that they'd made the right choice, that we were better positioned for success, and whatever other BS. There was never really any official posting about this on the company Slack (I wonder why?), but some of the C-suite liked to post about them on their LinkedIn - and a lot of very nice locations, even international.
Just burning those VC bucks.
> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.
I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...
I'm being nice about this and assuming Google (which also owns YouTube, and it shows in their decisions about content) is just run by a bunch of naive people and isn't directly controlled by three-letter agencies. They have to take a stronger stance against censorship/selective information.
"Oh, but that doesn't happen" - it does. Google results have been manipulated before to an extent that probably can't be attributed purely to SEO, and YouTube removed tons of "covid misinformation" about things we all now know to be true.