
people say this like it's a criticism, but damn is it ever nice to start writing a simple crud form and just have copilot autocomplete the whole thing for me.
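
for a concrete (made-up) example - express and an in-memory store are just my assumptions here, not anything special - i'll type the first route of something like this and copilot fills in the rest of the verbs:

    // the sort of thing copilot completes once it sees the first route
    // (express + in-memory store assumed, purely for illustration)
    import express, { Request, Response } from "express";

    interface Task {
      id: number;
      title: string;
      done: boolean;
    }

    const app = express();
    app.use(express.json());

    let nextId = 1;
    const tasks: Task[] = [];

    // create
    app.post("/tasks", (req: Request, res: Response) => {
      const task: Task = { id: nextId++, title: req.body.title, done: false };
      tasks.push(task);
      res.status(201).json(task);
    });

    // read
    app.get("/tasks", (_req, res) => {
      res.json(tasks);
    });

    // update
    app.put("/tasks/:id", (req, res) => {
      const task = tasks.find(t => t.id === Number(req.params.id));
      if (!task) return res.status(404).end();
      task.title = req.body.title ?? task.title;
      task.done = req.body.done ?? task.done;
      res.json(task);
    });

    // delete
    app.delete("/tasks/:id", (req, res) => {
      const i = tasks.findIndex(t => t.id === Number(req.params.id));
      if (i === -1) return res.status(404).end();
      tasks.splice(i, 1);
      res.status(204).end();
    });

    app.listen(3000);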


Yep. I find the hype around AI to be wildly overblown, but that doesn’t mean that what it can do right now isn’t interesting & useful.

If you told me a decade ago that I could have a fuzzy search engine on my desktop that I could use to vaguely describe some program that I needed & it would go out into the universe of publicly available source code & return something that looks as close to the thing I’ve asked for as it can find then that would have been mindblowing. Suddenly I have (slightly lossy) access to all the code ever written, if I can describe it.

Same for every other field of human endeavour! Who cares if AI can “think” or “do new things”? What it can do is amazing & sometimes extremely powerful. (Sometimes not, but that’s the joy of new technology!)


Why do you think the things you describe being excited about don't warrant the current level of AI hype? I agree with your assessment, and sometimes I think there is too much cynicism and not enough excitement.


the current level of AI hype amongst a lot of people, but especially investors and bosses, is that you can already give an AI a simple prompt and get it to spit out a fully functional, user-ready application for you. and we're so incredibly far off that.

the things that AI is able to do are incredible, but hype levels are just totally detached from reality.


> is that you can already give an AI a simple prompt and get it to spit out a fully functional, user-ready application for you.

But it can already do that. Isn't that the whole "one-shotting" thing?

The problem is, of course, that it won't be optimized, maintainable or have anyone responsible you can point to if something with it goes wrong. It almost certainly (unless you carefully prompted it to) won't have a test suite, which means any changes (even fixes) to it are risky.

So it's basically a working mockup generator.
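
Even a smoke test as small as the sketch below is something you usually have to prompt for explicitly - and here "createTask" and "./app" are hypothetical names for whatever the generated code exports, not anything a model actually produced:

    // A hypothetical smoke test using Node's built-in test runner.
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { createTask } from "./app";

    test("createTask assigns an id and defaults to not done", () => {
      const task = createTask("write the tests yourself");
      assert.equal(typeof task.id, "number");
      assert.equal(task.done, false);
    });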

I am so, so tired of "semi-technical" youtubers showing off new models with one-shots. The vast majority of actual devs who use this stuff need it to work over long-term context windows and over multiple iterations.


The thing is, we've already had "working mockup generators" — a.k.a. prototyping tools — for decades now.

If you come at the problem from the direction of "I draw a user interface; you guess what it's supposed to do and wire it up for me", then all you need to solve that problem (to a first-order approximation) is some plain-old 1970s "AI" heuristics.

The buzz around current AI coding prompting seems to be solely generated by the fact that while prototyping tools require you to at least have some training as a designer (i.e. understanding the problem you're solving on the level of inputs and outputs), these tools allow people with no experience in programming or design to get results. (Mainly by doing for UIs what genAI image/video tools do for art: interpolating the average of many ingested examples of how a designer would respond to a client request for X, with no regard for the designer's personal style†.)

† Unless prompted to have such regard... but if you know enough to tell the AI how to design everything, then you may as well just design everything. Just as, if you know art well enough to prompt an AI into developing a unique art style, then you likely know art well enough to just make that same art yourself with less effort than it takes to prompt and re-prompt and patch-erase-infill-prompt the AI into drawing what you want.


from what i can tell, the one-shot thing only works on youtube.

you might produce something that looks usable at first, but the actual application functionality will be significantly broken in most ways. it maybe works enough to do a demo for your video, but it won't work enough to actually distribute to end-users. and of course, as you say, it's not testable or maintainable in any way, so fixing what's broken is a bigger project than just writing it properly in the first place.


I think the cynicism is only in software dev circles, and it’s probably a response to the crazy hype.

Remember the hype isn’t just “wow it’s so cool and amazing and useful”, it’s also “I can’t wait to fire all my dumb meat-based employees”


Because to justify the current hype and spending, these companies have to have a product that will generate trillions of dollars and create mass unemployment. Which they don't have.


The current AI hype is causing a lot of leaders to put their organizations on the path to destruction.


Oh sure, there’s also way too much cynicism in some quarters. But that’s all part of the fun.


They go beyond merely "return something that looks as close to the thing I’ve asked for as it can find". E.g.: Say we asked for "A todo app that has 4 buttons on the right that each play a different animal sound effect for no good reason and also you can spin a wheel and pick a random task to do". That isn't something that already exists, so in order to build that, the LLM has to break that down, look for appropriate libraries and source and decide on a framework to use, and then glue those pieces together cohesively. That didn't come from a singular repo off GitHub. The machine had to write new code in order to fulfill my request. Yeah, some of it existed in the training data somewhere, but not arranged exactly like that. The LLM had to do something in order to glue those together in that way.
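
The glue it ends up writing looks something like the sketch below (plain DOM TypeScript as an illustration; the real output's framework, file names, and structure would vary from run to run):

    // Sketch of the glue the model has to invent for that prompt;
    // the sound file names and element id are made up for illustration.
    const tasks: string[] = [];

    function addTask(title: string): void {
      tasks.push(title);
    }

    // Spin the wheel: pick a random task to do.
    function spinWheel(): string | undefined {
      if (tasks.length === 0) return undefined;
      return tasks[Math.floor(Math.random() * tasks.length)];
    }

    // Four buttons on the right that each play an animal sound, for no good reason.
    const animalSounds: Record<string, string> = {
      cow: "moo.mp3",
      duck: "quack.mp3",
      cat: "meow.mp3",
      dog: "woof.mp3",
    };

    for (const [animal, file] of Object.entries(animalSounds)) {
      const button = document.createElement("button");
      button.textContent = animal;
      button.onclick = () => { void new Audio(file).play(); };
      document.querySelector("#sound-bar")?.appendChild(button);
    }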

Some people can't see past how the trick is done (take training data and do a bunch of math/statistics on it), but the fact that LLMs are able to build the thing is in-and-of-itself interesting and useful (and fun!).


I’m aware. But the first part is “find me something in the vector space that looks something like the thing I’m asking for”. Then the rest is vibes. Sometimes the vibes are good, sometimes they are ... decidedly not.

If the results are useful, then that’s what matters. Although I do suspect that some AI users are spending more time pulling the AI one-armed bandit handle than it would take them to just solve their problem the old fashioned way a lot of the time - but if pulling the one-armed bandit gets them a solution to their problem that they wouldn’t work up the motivation to solve themselves then that counts too, I guess.


Back in the 90s you could drag and drop a vb6 applet in Microsoft word. Somehow we’ve regressed..

Edit: for the young, wysiwyg (what you see is what you get) was common for all sorts of languages from c++ to Delphi to html. You could draw up anything you wanted. Many had native bindings to data sources of all kinds. My favourite was actually HyperCard because I learned it in grade school.


Wysiwyg kind of fell apart once we had to stop assuming everyone had an 800x600 or 1024x768 screen, because what you saw was no longer what others got.


Not entirely; these RAD tools also had flexible layout choices, and obviously you could test for various window sizes (although the maximum was the one supported by your graphics card). Too bad many chose the lazy way and just enforced a fixed window size at 800x600.


Most of the internet still assumes you're using a 96 DPI monitor. Though the rise of the mobile phone has changed that, it seems like the vast majority of the content consumed on mobile lends itself to being scaled to any DPI - e.g. movies, pictures, YouTube, etc.


Not a big issue with Qt layouts (still have to test the result though)


I can imagine adding breakpoints to a wysiwyg editor being not terribly difficult. They decouple presentation from logic pretty well.
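
Roughly - and this is my own sketch, not how any particular editor does it - the generated output just needs a mapping from named width ranges to layouts, and the logic layer never has to know:

    // What "breakpoints" would mean for generated output: the editor emits
    // one layout per named width range and the runtime swaps between them.
    // Names and widths here are invented for illustration.
    type Breakpoint = "phone" | "tablet" | "desktop";

    const breakpoints: Record<Breakpoint, string> = {
      phone: "(max-width: 599px)",
      tablet: "(min-width: 600px) and (max-width: 1023px)",
      desktop: "(min-width: 1024px)",
    };

    function currentBreakpoint(): Breakpoint {
      for (const [name, query] of Object.entries(breakpoints)) {
        if (window.matchMedia(query).matches) return name as Breakpoint;
      }
      return "desktop";
    }

    // The logic layer never changes; only which designed layout gets shown.
    function applyLayout(): void {
      document.body.dataset.layout = currentBreakpoint();
    }

    window.addEventListener("resize", applyLayout);
    applyLayout();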


I still miss my days of programming Visual Basic 6. Nothing since then ever compares.


4gl or RAD is still here, but now it’s called low- or no-code.


I agree. I am "writing" simple crud apps for my own convenience and entertainment. I can use unfamiliar frameworks and languages for extra fun and education.

Good times!


Before copilot what I'd do is diagnose and identify the feature that resembles the one that I'm about to build, and then I'd copy the files over before I start tweaking.

Boilerplate generation was never, ever the bottleneck.


I've been using AI like this as well. The code-complete / 'randomly pop up a block of code while typing' feature was cool for a bit but soon became annoying. I just use it to generate a block of boilerplate code or to ask it questions. I do 90% of the 'typing the code' bit myself, but that's not where most programmers' time is spent.


i'm not sure when you tried it, but if you've had copilot disabled it might be worth giving it another go. in my totally anecdotal experience, over the last few months it's gotten significantly better at shutting up when it can't provide anything useful.


It is, because the frontend ecosystem is not just React. There are plenty of projects where LLMs still give weird suggestions just because the app is not written in React.


I've probably commented the same thing like 20 times, but my rule of thumb and use with AI / "vibe coding" is two-fold:

* Scaffolding first and foremost - It's usually fine for this; I typically ask "give me the industry standard project structure for x language as designed by a Staff level engineer" blah blah just give me a sane project structure to follow and maintain so I don't have to wonder after switching around to yet another programming language (I'm a geek, sue me). A sketch of a typical result is below.

* Code that makes sense at first glance and is easy to maintain / manage, because if you blindly take code you don't understand, you'll regret it the moment you need to be called in for a production outage and you don't know your own codebase.
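
For a TypeScript/Node service (my pick purely for illustration; the prompt works the same for any language), the structure it hands back usually lands somewhere around:

    my-service/
      src/
        index.ts        entry point
        routes/         HTTP handlers
        services/       business logic
        models/         shared types
      tests/
      package.json
      tsconfig.json
      README.md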


"Anything that can be autogenerated by a computer shouldn't have to be, it can be automated"


People say inbreeding like it’s a criticism too.



