This is roughly how I’m starting to think about it. This may be doomsday or it may be an amazing opportunity. I don’t know. If it’s doomsday, it doesn’t matter whether I act as if it’s doomsday or if I look for opportunities. Either way I’m fucked. But if I act as if it’s doomsday, but it’s actually an opportunity, I’ll miss all the possible opportunities that come up. So I don’t see any downside in looking at this as an opportunity. I may as well.
And once I started to look at it that way, I started seeing all these potential ways to use AI to do what I do now, only better and more easily. So I’m going to keep looking for opportunities and going after them. If I’m going down, I’m going down swinging.
Pascal’s wager is full of holes[1]. I agree with you that these ideas are similar, and precisely because of that, your parent comment deserves scrutiny rather than blind acceptance. For example:
> If it’s doomsday, it doesn’t matter whether I act as if it’s doomsday or if I look for opportunities. Either way I’m fucked.
There are different degrees of “being fucked”. Not everyone is disadvantaged equally in a doomsday scenario: it does not literally happen in a day, and it does not necessarily spell doom for those who caused it or for those outside their sphere of influence. It follows that someone who looked for alternatives rather than fully embracing the system might stand a better chance.
Furthermore, a scenario which is not yet inevitable may become so because people think it is inevitable and give up or act in a way that reinforces it. That’s the definition of a self-fulfilling prophecy.
Note I’m not advocating a specific approach to this case. My aim is to highlight that while the initial argument seems logically airtight, it is deeply flawed and you shouldn’t take it as the true solution.
I keep seeing lots of parallels between belief in God and belief in black-box AI systems. Especially in forums like LessWrong. Ironic, because most people there would be offended if you told them they are religious.
I’m the same; I really think “AI” worship is a new type of religion. People quite honestly believe we’re about to meet a God-like intelligence and inherit eternal life from such a meeting. I guess anything is possible, but where have I heard this before?
I'm sure I'll flip back to AI pessimism, but I'm on the upswing of my anxiety cycle so: when you think about software engineering do you imagine yourself as a sailmaker or a ship designer?
> If it’s doomsday, it doesn’t matter whether I act as if it’s doomsday or if I look for opportunities
If everyone acted like this all the time, there'd be no point in unionizing or voting. If you suspect it might be doomsday, you should join other people who feel the same and do something about it.
It's probably too late to stop it from filling the internet with misinformation and spam, but it's not too late to stop it from taking the jobs of the people who produced the training sets (i.e. writers, open-source coders, etc.)
Since the training data is sourced from the web and from users, I've been wondering how difficult it would be to collectively poison these systems with junk data. I'm guessing it wouldn't take that many people to do it.
This isn't a categorical imperative. This is a decision for this individual situation. By "doomsday" I mean an economic doomsday for privileged people like me who have had a pretty good ride for a while, not a literal doomsday for everyone. I also don't see any way of possibly stopping this.
Here's what I think. At first, it will be used as a tool to augment devs. But quickly it will learn how to replace a lot of devs. Which ones? What's the most common code on GitHub? Probably YAFRA (Yet another fucking react app). It's going to learn on those first and replace those first. If you write code against/for bespoke hardware, for example (where there are essentially zero projects that the AI can learn from), you're probably going to have another couple decades before you're replaced.
Not saying one is harder than the other or criticizing anyone's skills, etc. Just looking at what the AI has available to learn from.
It’s here, it’s real, and we have to deal with it as a society.
The goal of the “AI Ethics” professional class to have all AI be approved by well-compensated PhDs in corporate labs has always been a paternalistic farce.
Common people deserve access so they can figure out what their lives will be like now, and then we can make collective decisions about where we go.
> I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.
> And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.
The more it serves us, the more indulgent, flaccid and incapable we will become. When our butler inevitably degrades, we will have forgotten how to serve ourselves. At least with bicycles we were still pedalling.
More like a moped: powered, with small wheels, and definitely not off-road. Still good if you don't mind going the same way again and again. Or going "in the wild" only to find out someone is already there.
BTW, animals do make tools. Even some insects (ChatGPT told me last night :)).
So, here's my big problem with AI: To quote Strong Bad, "I don't trust any device I can't mash Ctrl-Alt-Del on."[0] And the good bits of AI, as it exists currently, are inscrutable NN statistical models living in the cloud (i.e., someone else's computer). You pay for access, drink a verification can, submit your query, and then a miracle occurs[1], and you get your result.
Today, if I want to be a programmer, I can procure all the equipment I need to do some serious programming -- for cheap! I have control over every step of the process. If creating software becomes something that you use AI to do rather than do yourself, it will require surrendering much of the process to mysterious cloud entities, like William Gibson's electronic voodoo gods floating out on the edges of cyberspace. Unless we have open-source-equivalent AI models that we can download, play with, run on our own hardware that actually give good results. I think I will wait for those to become available on hardware I actually have before faffing about with AI.
> [Wozniak] was overjoyed when he learned that the skill he had put so much effort into suddenly became massively easier. He wasn't worried about not being able to earn an above-average salary from this anymore; he was happy about all the new cool things he and everyone else would be able to build.
If you want to get things done, then having your skills obsoleted is good.
This is true, but it also ignores the economic reality: your employer wants to get things done, but money is their constraint more than time. If they can replace you, they are obligated to do so.
This is a nice idea, but in economic reality, you won’t be getting anything done if your boss lets you go.
"Boss" always means "somebody like me from the same industry that is just more entrepreneurial and risk-accepting than me, who prefers a safe job".
Become a boss, literally.
Yes, I know, some industries are entrenched with old interests and gatekeeping. Then don't become a boss at building gas stations or housing or whatever. There are new, far more lucrative opportunities coming up.
Something else that frames this conversation well is to consider the experience of a non-programmer inspecting the AI results to some query. We view this all through very adept and skilled eyes that know how to code and how to read code. Imagine you don't know anything about programming...
* You don't know what a function is (or a method / class)
* You don't even understand variables, static types, bools/ints/strings
* You have no idea about an entry point such as main, how returns work
* You have no idea about OOP, organising code over multiple files
* You don't know what a socket is, you don't understand I/O, file handling
To top it all, code looks alien to you and you have no idea how to read it.
The only individuals who would be able to bridge the knowledge above would likely become programmers anyhow.
* You don't have experience with basic technical decision making -- e.g. why would you pick Java vs. C vs. Python vs. Go vs. any other language, and how would that decision change if you were building a web app vs. mobile app vs. embedded software?
* You don't have a familiar set of tools and established preferences for the aforementioned decisions -- i.e. if an experienced programmer is building a web app, there are probably specific languages and tools they're inclined to reach for.
* You don't know about application deployment and productionization. How do you get your project somewhere people can use it? You don't know about deploying code to servers or running it in Docker or K8s or PaaS.
* You don't have the practiced logical thought process for things like algorithm design, debugging, figuring out user bug reports, identifying edge cases, predicting user behavior, etc.
* You don't have the experience with things that experienced programmers find "obvious" -- e.g. if you're building a web app, you need an authentication system, registration flow, password reset, etc.
I think the main effort to address this will be LLM-driven low-code solutions. Imagine something like Airtable, but with an integrated LLM assistant that can make changes to your app and write code to implement business rules and automation. This gets rid of problems like choosing your tech stack, deployment, etc. and it minimizes the amount of code the LLM has to create.
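As a rough sketch of what I'm imagining (the rule text, record fields, and function name here are all invented for illustration, not any real product's API), the user would type a rule in plain English and the assistant would generate and wire up something like this behind the scenes:

    # Hypothetical rule an LLM assistant might generate from the prompt:
    # "Flag any order over $500 from a customer less than 30 days old for manual review."
    def needs_manual_review(order: dict) -> bool:
        return order["total"] > 500 and order["customer_age_days"] < 30

    # The low-code platform would run this against each new record,
    # so the user never touches a tech stack or a deployment pipeline.
    orders = [
        {"id": 1, "total": 750.0, "customer_age_days": 5},
        {"id": 2, "total": 120.0, "customer_age_days": 400},
    ]
    print([o["id"] for o in orders if needs_manual_review(o)])  # [1]

The interesting part isn't the code, which is trivial, but that the user only ever sees the plain-English rule.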
I imagine this will have effects comparable to those of spreadsheet software: non-programmers will be able to build custom tools with a newfound ease and some will end up learning a lot of programming skills in the process.
And, much like with spreadsheets, these tools will end up becoming business critical in unexpected ways and will turn out to be a huge pain in the ass to maintain. They'll find that parts of their business logic don't work as expected and that they can't figure out how to prompt the LLM to get it to implement the rules the right way. The users of these tools will basically have to learn some programming anyway or they'll have to hire experienced programmers to figure the problems out for them.
Yeah, I think this nails it. I think about the non-trivial parts of writing code, and I think about all the previous failed attempts at this sort of thing, and I remain unimpressed so far.
I think specifically about things like optimizations and wonder how an LLM would deal with that. In the last week I’ve written code that dealt with file I/O slowness, thread contention, thread deadlock, etc. In order to even begin fixing these problems, I had to first identify what the cause was by debugging them. Then I had to reason about how the program works to think up a good fix for the issue in our specific case.
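For a flavor of the simplest of those, here's a contrived sketch of a lock-ordering deadlock; it's a toy in Python rather than our actual code, but it's the shape of the problem I mean:

    import threading, time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker_1():
        with lock_a:              # takes A first...
            time.sleep(0.1)       # give the other thread time to grab B
            with lock_b:          # ...then waits on B
                pass

    def worker_2():
        with lock_b:              # takes B first...
            time.sleep(0.1)       # give the other thread time to grab A
            with lock_a:          # ...then waits on A: classic lock-ordering bug
                pass

    t1 = threading.Thread(target=worker_1)
    t2 = threading.Thread(target=worker_2)
    t1.start(); t2.start()
    t1.join(); t2.join()          # hangs: each thread holds the lock the other needs

The mechanical fix (acquire locks in a consistent order) is easy once you see it; finding which two code paths disagree in a large codebase is the actual work.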
There are no stack overflow answers for some of the problems I fixed this week. Heck, at one point in the past, I even wrote some code that we thought about getting patented. It was an interesting solution to the problem. And the problem itself was not a common one (but not unheard of). There’s a small amount of literature on it, but not a ton. I don’t see LLMs being a threat to this type of work. But in talking about it, I do see how some other type of AI might assist with it.
The biggest effort when I write code is making it readable for other developers. The day non-tech people start using AI to produce non-critical software and start interacting with code as a black box, simply telling the AI what they want and pointing out the edge cases to correct, how much simpler will development become for AIs?
I really doubt if AI will have any impact on developer jobs. If I want to build an alternative to TurboTax, how much can AI help? I like to think of software as automating a large number of bizarre corner cases which are a result of us being human. And corner cases are what AI usually fails at.
In 2019, this was sci-fi
In 2023, it can help engineers understand the complexity of taxes, translate some tax rules into code, help refine edge cases and fix mistakes.
In 2030, ???
Things move insanely fast lately. I would not bet on anything.
It honestly seems sort of like a straight path toward an LLM that can hold a whole codebase in context and can produce new functionality on command. Fundamentally, software is just taking inputs and producing outputs; we will almost certainly be able to define those and let the LLM do everything in between within a year or two.
And brain surgery is just cutting someone's head open with a scalpel. Programming languages are stupidly information dense. I'm highly skeptical an AI will be useful for non-trivial boilerplate any time soon; they lack the reasoning power to get the details right more than occasionally for common tasks, and basically never for uncommon ones.
While I take your meaning here, I would suggest also that natural languages are even more stupidly information-dense. And while the rules of their logic are not as hard and immutable as the semantics of programming languages, they do have plenty.
I don't agree that natural language is more information dense than most programming languages. There's a bunch of shared context, work, and assumptions that must be understood before effective use of either, but natural language is typically pretty loose and relies heavily on context for real meaning, which is still subjective. Conversely, PLs have no subjectivity once executed/compiled. Natural language is very verbose when trying to express the concepts of a PL with the same specificity: just look at how long entry-level tutorials for writing hello world in any PL are, or try explaining all the details of even a basic function call in C to a CS101 class. You need to say a lot of words to describe the same exact thing.
PLs benefit dramatically from being targeted at a specific domain in a way that NL just can't.
Programming languages are stupidly information dense indeed, but if that's the biggest roadblock, I don't see why throwing more compute at the problem won't eventually solve it. GPT-4 is already handling a very stupid amount of information density.
My point isn't the information density, it's the fact that every detail matters. LLMs aren't great at details or facts and therefore aren't really suited for writing software right now. It's not a computational limit, it's intrinsic to how LLMs work.
Something a lot of people aren't considering is the added power the LLM will have once we are building systems from the ground up specifically for the AI to use. Right now it's doing a decent job with languages and tools that are built for human usability. I imagine soon we'll have languages, systems, and tools specifically built to be extremely usable by AI.
Consistency is still a huge problem. I tried to build a D&D character with ChatGPT the other day. Absolute nightmare. For example, it said the character gets a racial bonus to two attributes. I told it to give me my attributes taking into account racial bonuses. It applied the bonus to one attribute but not the other.
Maybe at some stage software will also hit its limits. I know that is hard to imagine, but maybe the reason software has stagnated a little is that we’re running out of actual things it can do for us from a societal and economic standpoint?
People seem to assume tech progress is always some linear or exponential thing where things just keep getting better.
They can’t imagine we could also just hit a plateau and have no progress for a decade until some other major breakthrough happens.
AI art is hitting a plateau or has hit a plateau. Progress came quickly for months, but nowadays each update is just slightly better detail, more resolution, and better hands. That’s it.
If you have AI anxiety, just read my comment history.
I am not saying that some linear or exponential curve is certain, but so far I don't see any sign of slowing down. The research papers actually seem full of new avenues and very promising low-hanging fruit.
> AI art is hitting a plateau or has hit a plateau.
I am not sure I am following. Are you saying AI art has hit a plateau since Stable Diffusion and DALL-E 2 got released? That was less than a year ago. Look for a plateau at the scale of 5 to 10 years, not 6 months.
Is there a name for the technology advancement fallacy? I see it all the time: people think that with enough time, technology will simply make every problem surmountable. It doesn't matter if it's limited by physics, conservation of energy, or reality.
I don't think your second list item here is true, or uniquely true of GPT/LLMs anyway, so I'm not sure we're any further along than we were in 2019 for this specific problem.
Help engineers understand taxes: it is pretty much part of the GPT-4 demo. You can paste the tax code into it and ask questions about it. As a matter of fact, it helped me with some tax questions I had this year, although it was ChatGPT and I didn't paste the tax document into it.
Translate some tax rules into code? I did something similar last week for a complex billing project I am working on. It didn't give me the best variable names and made up a few functions, but the formulas were correct and I reused a good chunk of it.
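To be concrete about what "translating rules into code" looks like, here is a made-up example in the same spirit (the brackets and numbers are invented for illustration, not real tax law and not my actual billing logic):

    # Hypothetical progressive brackets: (upper bound, marginal rate).
    BRACKETS = [
        (10_000, 0.10),
        (40_000, 0.20),
        (float("inf"), 0.30),
    ]

    def tax_owed(income: float) -> float:
        """Apply each rate only to the slice of income inside its bracket."""
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if income <= lower:
                break
            owed += (min(income, upper) - lower) * rate
            lower = upper
        return owed

    print(tax_owed(50_000))  # 1000 + 6000 + 3000 = 10000.0

GPT-4 got this kind of structure right for me from a plain-English description of the rules; what it fumbled were the names and the occasional helper function that didn't exist.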
Fixing the logic of the code it just wrote? I think there are enough examples of that on Twitter.
I use it a fair amount and it does indeed hallucinate things here and there. I know not to take things at face value.
But whenever I have to deal with a complex question such as taxes, I often find it very useful for breaking down a problem, rephrasing and providing context around my question, and offering different leads to follow or cross-reference.
I see it as GPT spitting out a skeleton of an answer onto which I can attach some meat. It is very useful when I don't really know the domain and therefore how to attack a problem.
As an SRE I'm not too worried on a 25 year horizon. Yes I pulled that number out of my ass.
Depending on the quarter only 10% to 25% of my time is writing code. The other is dealing with: finding high level reliability concerns, figuring out what solutions to implement, diving into complex acute production issues, evaluating trade-offs for fixes, finding ways to make things more efficient, negotiating what to prioritize with other teams/orgs/leadership, etc.
Could an AI do some of these things? Sure. Are they going to be able to figure out the right stakeholder to talk to, negotiate with them, get buy-in and decide on overall designs/solutions? No.
I'd be a bit concerned if I were a frontend developer, however (though any decent one could pivot to something else).
Why? Frontend development is one of the most challenging roles, and for any non-trivial UI application, there is a ton of "non-code" user context that isn't even available to an AI. AI might be able to spit out template websites, but it can also spit out CloudFormation templates.
It is extremely challenging! (Based on my brief foray into it.) But leveraging LLMs to create a really good starting point for a new feature would reduce the number of frontend developers needed.
I guess what I mean to say is that as SRE/Systems, we code so little already that its being able to spit out a CF template wouldn't reduce the need for cross-functional ops-type work.
I think a review of the state of frontend tooling will show that efficiency with respect to developer hours is not a widely shared priority. I say this with only 50% intention of starting a flame war.
Well, the presumed doomsday is presumably going to hit everybody, because how hard is it to retrain from a just-made-redundant programmer to an SRE if your life depends on it? So the jobs that are still needed will get flooded with people from adjacent specialties, because people want to eat.
Or so it goes; no idea if this is actually going to happen.
okdood64 says: "As an SRE I'm not too worried on a 25 year horizon. Yes I pulled that number out of my ass."
I so wish you had not done that: the number 25 is now tainted in my workmates' collective memory, and none of them is willing to touch it, yet I fear we shall need it again soon.
You should have instead "pulled that number out of a hat".
> I really doubt if AI will have any impact on developer jobs.
It already has. Every single one of my coworkers is using Copilot. If you mean in terms of the job market, the shift to more declarative supervision of the tool rather than imperative implementation of the work yourself means that the bar for skill is far lower.
There will come a time when product managers and UX designers just use AI to generate Figma mocks, and AI generates an entire scalable platform from just that. Will they need developers? Sure, but maybe 1. Certainly not 8-10.
> Every single one of my coworkers is using Copilot
And all of you still have a job. Copilot has been out for a while now; if it were transformative, we would have seen companies release a ton of products, fix all the bugs, and then fire most of their developers over the past half year, but that didn't happen.
Making developers a bit more productive won't have an impact on developer jobs. It changes your day-to-day job a bit, but that is like switching to another IDE; it won't transform the industry. Even if it reduces the need for developers by an entire 10%, it is not that big of a deal: it would be worth many billions, but the average developer won't notice much of an impact. It is like the productivity improvement of giving developers another screen, a better computer, a better IDE, etc. Those are nice, but they didn't transform the industry.
Fortunately, in every company I have worked at, the main limiting factor to hiring engineers seems to be money and not a lack of things to build. For a time, it might just translate into companies building more features and being even more wasteful than today.
I think capitalism might take care of this. Let's say we have tools that make devs 10x more productive. A company could lay off 90% of their engineers and produce as much as they did before generative AI. What if their competitor decides they're going to retain all of their engineers and produce 10x more than they did before? That's going to force everyone to staff up to be able to keep up with the 10x company.
This has never saved anyone's job from automation. Ask the coal mining industry in the US. In fact, capitalism will do the opposite because investors own most companies and investors like to see reductions in head counts.
> What if their competitor decides they're going to retain all of their engineers and produce 10x more than they did before?
There are very few companies where "more code" = "more profit". Code is a cost center and support function of the business. Look at all the companies that recently fired 20%+ of their coders and are more profitable than they were before.
If your coders are 10x as productive and your business is growing in a healthy way, your bottleneck is going to be sales and marketing. It probably already was sales and marketing. You're not going to retain all the people you don't need.
> If I want to build an alternative to TurboTax, how much can AI help?
The alternative would be built to help users leverage AI to do their taxes. Upload your docs and it’ll get you 90% of the way there, or something like that. AI will help developers build that alternative faster. Almost every piece of software will need to be reimagined from the ground up in this way. That is the job security.
I think the article ends up making the point in favour of AI anxiety:
> In particular, I started to realize that every little thing that we decide not to do as a company could have been done,
If every single company has more development bandwidth available, then at some multiple it would start driving demand, and then wages, down quickly. The multiple doesn't have to come just from AI improvements, as a lot of non-developers can start using the AI to implement things.
It is very good if you intend to found a company; you have multiplied the things your company can do by leaps and bounds. Not so good for salaried developers.
While I appreciate that smart people will use this as simply the next phase in bicycles for the mind, I can't help but see the current AI revolution as the first tremors of the singularity.
At the rate this is developing, I can't help but think AIs will be designing smarter AIs within a decade or two. People reassure themselves that these AIs are just synthesizing the internet datasets -- that they don't have "souls". I would say that doesn't matter -- whether or not a submarine can swim, a race of advanced submarines could dominate the ocean.
I'm not fully convinced that the singularity requires intentionally designing a general AI, or that a glorified chatbot with some Darwinian forces applied to its existence couldn't get there.
People keep thinking The Matrix or I Have No Mouth, and I Must Scream, but I just keep thinking Her -- that even a benevolent AI revolution means humans are pointless. Or worse, Blade Runner, since I’m quite sure that if we do create a general AI we’ll still murder it after it answers 15 questions or steps out of line.
That anxiety is much harder to shake than the one about my job - I thought we had more time.
Honestly, the threat from AI is not novel, in the sense that everyone has always been under threat of being replaced by cheaper labor or eliminated altogether. AI is simply the newest one. Before this, it was jobs moving to India, RAD tools for non-devs, etc. At the end of the day, whoever is unable to stay relevant is going to get steamrolled by the machine (no pun intended) of progress.
Guys, AI is not doomsday. If anything, it will be a boon to society when it reaches mass adoption. I'm thinking of how much faster we can develop new technologies, how much quicker we can research things. MEDICAL research that can be progressed that much quicker, lives that can be saved, animals rescued from extinction, etc. Hell, even the Computer from Star Trek. All of those things have potential applications with AI.
It's just a new tool. Just like when computers arrived, mathematicians were not replaced just because computers could do the calculations faster. When autotune arrived, artists were not replaced - they just added that to their repertoire.
Yes, some jobs will be lost. Yes, there will be a changing of the guard. That's just the nature of things - it's just how it's always been.
> Then at some point he describes his first time writing a game in software. He goes on about how running all the variations he tried in software would take a skilled engineer months of work, and he had done it in a couple hours - and was ecstatic.
I am sure everyone felt ecstatic at some point, regardless of their trade or field, when they worked on something all day or longer and got it done. The sheer joy it brings afterward makes your mind completely calm; past and future cease to exist for that moment. There are always going to be people like that.
AI or no AI, 20 years from now, people will still be writing software because it just makes them happy. Sure, the nature of problems will change, but these are the people who will keep pushing the field forward.
“I never think of the future - it comes soon enough.” - Grandpa Einstein.
This is great. The world will change dramatically, and it is OK to be afraid of that and the unknowns, but the best approach is to take advantage of all the good things that are coming and do not spend too much time being sad about the things that are going away.
That depends on what the good things are that are coming and what the things are that are going away.
And it also depends on the bad things that are coming, and what new things are going to come.
You are making an ethical argument based on utility; those two do not combine well. The ethical argument should be made on an ethical foundation, not on the temporary outcome.
It's interesting how one of the most obvious problem spaces, that of ethical reasoning, is the one least amenable to pure logic without adding a very large dose of humanity.
Example: killing is bad. But killing to save someone from endless misery (called euthanasia) can be a good thing. But it still requires - where I live - two doctors to come to the conclusion that someone is beyond medical help and suffering needlessly. And then there needs to be a clear declaration on the capability of the person making the request that they are able to make such a request in the first place.
It isn't a matter of ethics. Ethics implies that I have control of the situation (do I kill the person who wants to die or not; do I stop AI or not).
I am not in a position to decide to stop technological progress. So I am choosing to control what I can, which is my attitude.
Those that are in control of that progress and whether or not they release certain capabilities to the general public are the ones that will have to deal with the ethical bits, and society as a whole can do so as well. Politicians can make laws based on ethics.
Broken window fallacy. If 100m highly competent people were involved in managing cancer, and we cured cancer, these 100m highly competent people could then turn their attention and skills to something else important. There are plenty of health care crises in the world.
Can any ChatGPT fans point to some actual case studies where the time to market for producing a non-trivial piece of software was dramatically reduced?
It appears[1] to be a single file (~200 lines of code) in an otherwise boilerplate React project. I know JS/TS very well, I also have no iOS development experience, and I could code this in a few hours easily. I don't even know React particularly well.
On top of that, you don't know how much plagiarism went into this. It could be line-for-line an exact copy of someone else's code. A human can copy and paste code, too.
An interesting proof of concept would be a larger application with a multi-component UI, some business logic, and a database access layer. Almost everything that you could sell would have those qualities.
Nothing I wrote was personal or rude. I argued against the content of your comment, not you as a person. HN is only interesting because people disagree with each other here all the time.
> go build it
I don't know if my comment implied this or not, but I can't build it because I don't think it's possible. I don't think ChatGPT can help with non-trivial applications yet, and LLMs may never be able to because they can't "understand" things, they can only guess what billions of other people would write next.
That's why Copilot works (pretty) well, but writing an entire novel with a cohesive plot doesn't.
At one point around the turn of the 20th century, two guys riding on horses side by side came upon an automobile going about the same speed. The one guy said to the other, “I don’t get what people see in these automobiles. We are going just as fast on our horses. These autos aren’t going to last very long.”
And they were on a relatively flat part of the tech curve…
I've finally gotten around to reading Homo Deus by Yuval Noah Harari. It's been very interesting reading this alongside everything that's happening with AI at the moment.
Would be curious to hear from others that have read it, but I find it difficult to fault his core arguments (or at least what I interpret them to be).
The problem white collar humans have right now is that they're highly specialized. They're incredibly good at being very effective cogs. This is exactly what AI is getting so good at doing (in certain verticals). Traditional capitalism effectively demands that if a company can pay the owner of an algorithm 10% of what it would pay for a human to do the same thing (for even 80% of the quality), then that's what will eventually happen.
Can government regulate it? They can sure try, but then either the companies or AI hosting providers will move to a country that doesn't have the same restrictions and it will happen anyway.
Then people will say "it will just open up other industries". I'm not sure it will. What other industries will the swaths of copywriters, lawyers, accountants retrain for?
I just don't understand everyone saying "It's going to make everyone's lives easier". In the short term, sure, but if AI gets to where its owners want it to get to, then a lot of people are going to find themselves professionally worthless.
It's entirely possible that this is just not something we're prepared for, and it's almost guaranteed at this point that there's no stopping it.
What's really interesting is this book was released in 2016... and Yuval was using Microsoft's Cortana as the example of this upcoming AI...
I can't wait for the LLMs to 'invent' their own more efficient programming language, and a dumbed-down IDE for us meatbags. Maybe they can even figure out a better way to do silicon than we do now.