Hacker News | risyachka's comments

There is nothing to learn, the entry barrier is zero. Any SWE can just start using it when they really need to.

Some of us will need time to learn to give less of a shit about quality.

Or you could learn how to do it the right way with quality intact. But it’s definitely your choice.

Had no idea it just came out! I was using it today to install an OS on my old Raspberry Pi, and the UX was very smooth!

Am I missing something, or is this essentially the same as the GPT Apps that were introduced a while ago and have been discussed 10,000 times?

It turns the concept of GPT Apps into an open standard rather than something ChatGPT-only.

We are talking here about the most basic things: nothing AI-related, just basic billing. The fact that it is not working says a lot about the future of the product and the company culture in general (obviously they are not product-oriented).

There’s nothing basic about billing.

Given how many paid offerings Google has, and the complexity and nuance of some of those offerings (e.g. AdSense), I am pretty surprised that Google doesn't have a functioning drop-in solution for billing across the company.

If they do, it's failing here. The idea of a penny pinching megacorp like Google failing technically even in the penny pinching arena is a surprise to me.


It is basic in the sense that it is difficult to run a business where billing doesn't work. It's not basic in the "easy" sense.

I mean, this problem has been solved. There's nothing new to it. You just take a few weeks and implement it properly. No surprises will come up.

Even though my post complaining about Google's billing being an incoherent mess got so many upvotes, I'll be the first to say that there is nothing basic about "give me money".

Apart from what happens to the money when it gets to Google (putting it in the right accounts, in the right business, categorizing it, etc.), the process changes depending on who you're ASKING for money.

1. Getting money from an individual is easy. Here's a credit card page.

2. Getting money from a small business is slightly more complicated. They may already have an existing subscription (Google Workspace); just attach to it.

3. As your customers get bigger, it gets more squishy. Then you have enterprise agreements, where it becomes a whole big mess. There are special prices, volume discounts, all that stuff. And then invoice billing.

The point is that yes, we all agree that getting someone to plop down a credit card is easy. Which is why Anthropic and OpenAI (who didn't have 20 years of enterprise billing bloat) were able to start with the simplest use case and work their way slowly up.

But I AM sensitive to how hard this is for companies as large and varied as Google or MS. Remember the famous Bill Gates email where even he couldn't figure out how to download something from Microsoft's website.

It's just that they are also LARGE companies; they have the resources to solve these problems. They just don't seem to have the strong leadership to bop everyone on the head until they make billing simple.

And my guess is also that consumers are such a small part of how they're making money that it just isn't a priority (you best believe that these models are probably beautifully integrated into the cloud accounts so you can start paying them from day one).


My first thought was this is the whole thing about managers at Google trying to get employees under other managers fired and their own reports promoted -- but it feels too similar to how fucked up all the account and billing stuff is at Microsoft. This is what happens when you try to "fix" something by layering on more complexity and exceptions.

From past experience, the advertising side of the business was very clear with accounts and billing. GCP was a whole other story. The entire thing was poorly designed, very confusing, a total mess. You really needed some justification to be using it over almost everything else (like some Google service which had to go through GCP.) It's kind of like an anti-sales team where you buy one thing because you have to and know you never want to touch anything from the brand ever again.


We made the bet 2 years ago to build AI Studio on top of the Google Cloud infra. One of the real challenges is that Google is extremely global, we support devs in hundreds of countries with dozens of different billing methods and the like. I wish the problem space was simple but on the first day I joined Google we kicked off the efforts to make sure we could bring billing into AI Studio, so January cannot come soon enough : )

Everyone uses or will use AI; there is no learning curve, so this is not an advantage.

I think a better prediction would be that the current (or future?) generation of software engineers will migrate to building and developing AI systems, basically working for OpenAI, Anthropic, etc.

The future of computing could very well be AI (and related fields) + Robotics + Hardware, instead of Software + Hardware.


Yes there is: for coding, for example, you need to learn how to use the tools efficiently, otherwise you'll get garbage... and end up either discarding everything and claiming AI is crap, or pushing it to prod and having to deal with the garbage code there.

Just open Codex or Claude Code, add an md file with basic instructions, and tell it what you want. There is no "tooling" or "workflows" around it. A "swarm of agents" is not a thing.

And if your agent is running in the background for hours, then you are doing something wrong and wasting time.


> it's over for the other labs.

It's not over, and never will be, even for two-decade-old accounting software; it definitely will not be over for other AI labs.


Can you explain what you mean by this? The iPhone was the end of BlackBerry. It seems reasonable that a smarter, cheaper, faster model would obsolete anything else. ChatGPT has some brand inertia, but not that much given it's barely 2 years old.


Yeah, the iPhone was the end of BlackBerry, but the Google Pixel was not the end of the iPhone.

The new Gemini is not THAT far a jump that you'd switch your org to a new model if you've already invested in e.g. OpenAI.

The difference must be night and day to call it "it's over".

Right, they're all marginally different. Today Google fine-tuned their model to be better; tomorrow it will be a new Kimi, after that DeepSeek.


Ask yourself why Microsoft Teams won. These are business tools first and foremost.


That's an odd take. Teams doesn't have the leading market share in videoconferencing, Zoom does. I can't judge what it's like because I've never yet had to use Teams - not a single company that we deal with uses it, it's all Zoom and Chime - but I do hear friends who have to use it complain about it all the time. (Zoom is better than it used to be, but for all that is holy please get rid of the floating menu when we're sharing screens)

It looks more like a strategic decision tbh.

They may want to use a 3rd party, or just wait for AI to be more stable and see how people actually use it, instead of adding slop to the core of their product.


> It looks more like a strategic decision tbh.

Announcing a load of AI features on stage and then failing to deliver them doesn't feel very strategic.


In contrast to Microsoft, who puts Copilot buttons everywhere and succeeds only in annoying their customers.


This is revisionist history. Apple wanted to fully jump in. They even rebranded AI as Apple Intelligence and announced a slew of features that turned out to be vaporware.


But Apple Intelligence is a thing, and they are struggling to deliver on its promises.


It's always amusing when "an app like Windows XP" is considered hard or challenging somehow.

It's literally the most basic HTML/CSS; I'm not sure why it's even included in benchmarks.


While it is obviously much easier than creating a real OS, some people have created desktop-manager web apps, with resizable and movable windows and apps such as terminals, notepads, file explorers, etc.

This is still a challenging task and requires lots of work to get this far.


Those things are LLMs, with text and language at the core of their capabilities. UIs are, notably, not text.

An LLM being able to build up interfaces that look recognizably like a UI from a real OS? That sure suggests a degree of multimodal understanding.


UIs made in the HyperText Markup Language are, in fact, text.


You could just tell it to check out the README, but I suspect it would have checked it out anyway, or figured out the type of project and how it is structured, as a first step of any other command you give it, since without that it's impossible to add to or update the project.


For a Rust developer, neglecting their ability to debug cargo build issues puts their career at risk. For someone like that, letting AI handle it would be a really shortsighted move.

But Simon isn't a Rust developer - he's a motivated individual with a side project. He can now speedrun the part he's not interested in. That doesn't affect anyone else's decisions; you can still choose to learn the details. The ability to skip it if you wish is a huge win for everyone.


> He can now speedrun the part he’s not interested in.

The reductio that people tend to be concerned about is, what if someone is not interested in any aspect of software development, and just wants to earn money by doing it? The belief is that the consequences then start becoming more problematic.


Those people are their own worst enemies.

Some people will always look for ways to "cheat". I don't want to hold back everyone else just because a few people will harm themselves by using this stuff as a replacement for learning and developing themselves.


Do you genuinely believe that this only applies to "a few people"?

This new post gets at the issue: https://news.ycombinator.com/item?id=45868271


I don't understand the argument that post is making.

I agree that people using LLMs in a lazy way that has negative consequences - like posting slop on social media - is bad.

What's not clear to me is the scale of the problem. Is it 1 in 100 people who do this, or more like 1 in 4?

Just a few people behaving badly on social media can be viewed by thousands or even millions more.

Does that mean we should discard the entire technology, or should we focus on teaching people how to use it more positively, or should we regulate its use?


>> He can now speedrun the part he’s not interested in

In this case it's more like slowrunning. Building a Rust project is one command, and ChatGPT will tell you that command in 5 seconds.

Running an agent for that is 1000x less efficient.

At this point it's not optimizing or speeding things up; it's running an agent for the sake of running an agent.
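To make the comparison concrete: for a typical Cargo-based Rust project, the "one command" in question is just the standard build invocation (this assumes a conventional project layout with a Cargo.toml at the root, and requires a Rust toolchain to actually run):

```shell
# From the project root: compile the project and all its dependencies.
cargo build

# Or, to compile and run the test suite in one step:
cargo test
```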


The best thing about having an agent figure this out is you don't even need to be at your computer while it works. I was cooking dinner.


You’re not properly accounting for the risk of getting blocked on one of these 5 second tasks. Do an expected value calculation and things look very different.

Across a day of doing these little “run one command” tasks, even getting blocked by one could waste an hour. That makes the expected value calculation of each single task tilt much more in favor of a hands off approach.

Secondly, you’re not valuing the ability to take yourself out of the loop - especially when the task to be done by AI isn’t on the critical path, so it doesn’t matter if it takes 5 minutes or 5 milliseconds. Let AI run a few short commands while you go do something else that’ll definitely take longer than the difference - maybe a code review - and you’ve doubled your parallelism.

These examples are situational and up to the individual to choose how they operate, and they don’t affect you or your decisions.


The most important thing is to have it successfully build the software, to prove to both me and itself that a clean compile is possible before making any further changes.


Suggestion: make a “check.sh” script that builds everything, lints everything, and runs all (fast) tests. Add a directive in the agent system prompt to call it before & after doing anything. If it fails, it will investigate why.
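A minimal sketch of such a check.sh, assuming a Cargo-based Rust project; the specific commands are placeholders for whatever build, lint, and fast-test tools your project actually uses, and running it requires a Rust toolchain:

```shell
#!/usr/bin/env bash
# check.sh -- build, lint, and fast-test gate for the agent to run
# before and after every change. Exits non-zero on the first failure
# so the agent has something concrete to investigate.
set -euo pipefail

cargo build                   # does it still compile?
cargo clippy -- -D warnings   # lint, treating warnings as errors
cargo fmt --check             # fail on formatting drift
cargo test --lib              # fast unit tests only, skip slow suites

echo "all checks passed"
```

Keeping the script to fast checks matters: if it takes minutes to run, the agent will be tempted (or instructed) to skip it between small edits.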


In situations like this it's better to ask the agent to write a short document about how to run the project. Then you read it and delete the useless parts. Then you ask the agent to follow that document and improve it until the software builds. By the final step, you have a README.md personalized to your needs.


Not really. The difference between high quality web app and native app is very noticeable.

And between an average native app and an average web view, it is night and day.

99% of web apps in a desktop browser are laggy, and on mobile they feel like crap.

Sure, if you are an expert in the top 1% you can probably get it working really well. But that is true for maybe 1 in 100 developers, if not fewer.

