We are talking here about the most basic things: nothing AI related. Basic billing. The fact that it is not working says a lot about the future of the product and the company culture in general (obviously they are not product-oriented).
Given how many paid offerings Google has, and the complexity and nuance of some of those offerings (e.g. AdSense), I am pretty surprised that Google doesn't have a functioning drop-in solution for billing across the company.
If they do, it's failing here. The idea of a penny-pinching megacorp like Google failing technically even in the penny-pinching arena is a surprise to me.
Even though my post complaining about Google's billing being an incoherent mess got so many upvotes, I'll be the first to say that there is nothing basic about "give me money".
Apart from what happens to the money once it gets to Google (putting it in the right accounts, in the right business, categorizing it, etc.), the process changes depending on who you're ASKING for money.
1. Getting money from an individual is easy. Here's a credit card page.
2. Getting money from a small business is slightly more complicated. You may already have an existing subscription (Google Workspace); just attach to it.
3. As your customers get bigger, it gets more squishy. Then you have enterprise agreements, where it becomes a whole big mess. There are special prices, volume discounts, all that stuff. And then invoice billing.
The point is that yes, we all agree that getting someone to plop down a credit card is easy. Which is why Anthropic and OpenAI (who didn't have 20 years of enterprise billing bloat) were able to start with the simplest use case and work their way slowly up.
But I AM sensitive to how hard this is for companies as large and varied as Google or MS. Remember the famous Bill Gates email where even he couldn't figure out how to download something from Microsoft's website.
It's just that they are also LARGE companies; they have the resources to solve these problems, they just don't seem to have strong enough leadership to bop everyone on the head until billing is simple.
And my guess is also that consumers are such a small part of how they're making money that it's just not a priority (you best believe that these models are probably beautifully integrated into the cloud accounts so you can start paying them from day one).
My first thought was this is the whole thing about managers at Google trying to get employees under other managers fired and their own reports promoted -- but it feels too similar to how fucked up all the account and billing stuff is at Microsoft. This is what happens when you try to "fix" something by layering on more complexity and exceptions.
From past experience, the advertising side of the business was very clear with accounts and billing. GCP was a whole other story. The entire thing was poorly designed, very confusing, a total mess. You really needed some justification to be using it over almost anything else (like some Google service which had to go through GCP). It's kind of like an anti-sales team, where you buy one thing because you have to and know you never want to touch anything from the brand ever again.
We made the bet 2 years ago to build AI Studio on top of the Google Cloud infra. One of the real challenges is that Google is extremely global: we support devs in hundreds of countries with dozens of different billing methods and the like. I wish the problem space were simpler, but on the first day I joined Google we kicked off the effort to make sure we could bring billing into AI Studio, so January cannot come soon enough : )
I think a better prediction would be that the current (or future?) generation of software engineers will migrate to building and developing AI systems, basically working for OpenAI, Anthropic, etc.
The future of computing could very well be AI (and related fields) + Robotics + Hardware, instead of Software + Hardware.
Yes there is. For coding, for example, you need to learn how to use the tools efficiently, otherwise you'll get garbage... and end up either discarding everything and claiming AI is crap, or pushing it to prod and having to deal with the garbage code there.
Just open codex or claude code, add an md file with basic instructions, and tell it what you want. There is no "tooling" or "workflows" around it; a "swarm of agents" is not a thing.
And if your agent is running in background for hours then you are doing something wrong and wasting time.
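For concreteness, a minimal instructions file of the kind described above might look like this. The filenames follow tool conventions (codex reads AGENTS.md, Claude Code reads CLAUDE.md), but the contents below are purely illustrative assumptions about a Rust-style project:

```shell
# Create a minimal instructions file for a coding agent.
# AGENTS.md is the codex convention; Claude Code looks for CLAUDE.md instead.
# The build/test commands named inside are illustrative, not prescriptive.
cat > AGENTS.md <<'EOF'
# Notes for the coding agent
- Build with `cargo build`; run `cargo test` before declaring a task done.
- Keep diffs small and focused; explain non-obvious changes in comments.
- Do not touch files under vendor/.
EOF
cat AGENTS.md
```

That is the whole setup: one file the agent reads at the start of a session.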
Can you explain what you mean by this? iPhone was the end of Blackberry. It seems reasonable that a smarter, cheaper, faster model would obsolete anything else. ChatGPT has some brand inertia, but not that much given it's barely 2 years old.
That's an odd take. Teams doesn't have the leading market share in videoconferencing, Zoom does. I can't judge what it's like because I've never yet had to use Teams - not a single company that we deal with uses it, it's all Zoom and Chime - but I do hear friends who have to use it complain about it all the time. (Zoom is better than it used to be, but for all that is holy please get rid of the floating menu when we're sharing screens)
They may want to use a 3rd party, or just wait for AI to be more stable and see how people actually use it, instead of adding slop to the core of their product.
This is revisionist history. Apple wanted to fully jump in. They even rebranded AI as Apple Intelligence and announced a horde of features which turned out to be vaporware.
While it is obviously much easier than creating a real OS, some people have created desktop-manager web apps, with resizable and movable windows and apps such as terminals, notepads, file explorers, etc.
This is still a challenging task and requires lots of work to get this far.
You could just tell it to check out the readme, but I suspect it would have checked it anyway, or figured out the type of project and how it is structured, as a first step of any other command you give it, since without that it is impossible to add to or update the project.
For a Rust developer, neglecting their ability to debug cargo build issues puts their career at risk. For someone like that, letting AI handle it would be a really shortsighted move.
But Simon isn’t a Rust developer - he’s a motivated individual with a side project. He can now speedrun the part he’s not interested in. That doesn’t affect anyone else’s decisions; you can still choose to learn the details. The ability to skip it if you wish is a huge win for everyone.
> He can now speedrun the part he’s not interested in.
The reductio that people tend to be concerned about is, what if someone is not interested in any aspect of software development, and just wants to earn money by doing it? The belief is that the consequences then start becoming more problematic.
Some people will always look for ways to "cheat". I don't want to hold back everyone else just because a few people will harm themselves by using this stuff as a replacement for learning and developing themselves.
I don't understand the argument that post is making.
I agree that people using LLMs in a lazy way that has negative consequences - like posting slop on social media - is bad.
What's not clear to me is the scale of the problem. Is it 1 in 100 people who do this, or is it more like 1 in 4?
Just a few people behaving badly on social media can be viewed by thousands or even millions more.
Does that mean we should discard the entire technology, or should we focus on teaching people how to use it more positively, or should we regulate its use?
You’re not properly accounting for the risk of getting blocked on one of these 5 second tasks. Do an expected value calculation and things look very different.
Across a day of doing these little “run one command” tasks, even getting blocked by one could waste an hour. That makes the expected value calculation of each single task tilt much more in favor of a hands off approach.
Secondly, you’re not valuing the ability to take yourself out of the loop - especially when the task to be done by AI isn’t on the critical path, so it doesn’t matter if it takes 5 minutes or 5 milliseconds. Let AI run a few short commands while you go do something else that’ll definitely take longer than the difference - maybe a code review - and you’ve doubled your parallelism.
These examples are situational and up to the individual to choose how they operate, and they don’t affect you or your decisions.
The most important thing is to have it successfully build the software, to prove to both me and itself that a clean compile is possible before making any further changes.
Suggestion: make a “check.sh” script that builds everything, lints everything, and runs all (fast) tests. Add a directive in the agent system prompt to call it before & after doing anything. If it fails, it will investigate why.
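A sketch of such a script, assuming a Rust project since that's what the thread is about; the cargo commands are placeholders for whatever your own build, lint, and test steps are:

```shell
# check.sh — a single entry point for build + lint + fast tests,
# so the agent has one command to run before and after every change.
# The cargo invocations below assume a Rust project; swap in your toolchain.
cat > check.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail            # stop at the first failure so the agent sees it
cargo build                  # does it compile?
cargo clippy -- -D warnings  # lint, treating warnings as errors
cargo test                   # fast tests only; keep slow suites elsewhere
EOF
chmod +x check.sh
bash -n check.sh && echo "check.sh syntax ok"
```

Keeping everything behind one script also means the system-prompt directive stays a single line ("run ./check.sh before and after any change") no matter how the underlying steps evolve.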
In situations like this it's better to ask the agent to write a short document about how to run the project. Then you read it and delete the useless parts. Then you ask the agent to follow that document and improve it until the software builds. By the final step, you have a README.md personalized to your needs.