Technically, "enable_ai" doesn't imply that all AI features are really turned off. Without context, it might imply that some basic AI features exist and "enable_ai" just enables further features. "disable_ai" is unambiguous.
Enable/disable are the only two dichotomies in the whole of all possible states regarding this AI feature, so I'll have to bite: What's your "Technically," referring to here?
First of all, enable/disable is a dichotomy, and is not a set of two dichotomies.
Second, imagine an editor that has AI running in the background, scanning your files. "Enable_AI" could just mean enabling the visibility of the feature so you can actually use the results. On the other hand, background AI tasks running, even for training purposes, would look more suspicious with "disable_AI" set to true than with "Enable_AI" set to false.
In other words, Enable_AI COULD have the connotation (to some) of just enabling the visibility of the feature, whereas Disable_AI gives more of a sense of shutting it off.
Imagine for example you're in a court of law. Which one sounds more damning?
=======
Prosecutor: You still have AI tasks running in the background but AI_Enable is set to false?
Defendant: But Enable_AI just means enabling the use of the output!
====
====
Prosecutor: You still have AI tasks running in the background, but AI_Disable is TRUE?
> Enable_AI COULD have the connotation (to some) of just enabling the visibility of the feature, whereas Disable_AI gives more of a sense of shutting it off.
Personally, I don't feel much difference between the two. I doubt that an average reasonable person would either.
Well, I do feel a distinct connotational difference, but then again, I could be the only one I suppose. And if the average person doesn't care, then why argue about it at all? And how many average people will be using Zed anyway?
My pet peeve is the CGO_ENABLED compiler option in Go. It's set to 0 or 1 to enable/disable cgo (I can never remember which maps to which).
If it were just CGO=true or CGO=false, I think so much confusion could have been avoided.
I think similar thinking applies here. It's convoluted to disable something by setting ai_disable=true, because I read it as setting "false" to true instead of just setting a boolean.
> It's set to 0 or 1 to enable/disable cgo (I can never remember which maps to which)
That's crazy. Boolean logic is the most fundamental notion in computer science; I can still remember learning it in my very first course in my very first year.
This follows a convention that was well established and felt pretty ancient when I learned about environment variables in the nineties (i.e. 30 years ago). Variables that are flags enabling/disabling something use 1 to enable, and 0 to disable. I'd not be surprised if this has been pretty much standard behavior since the seventies.
I always thought that an unset boolean env var should define the default behavior for a production environment, and that setting any of these to a non-empty value flips it (AUTH_DISABLED, MOCK_ENABLED, etc.). I thought env vars are always considered optional by convention.
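A minimal Go sketch of that convention, just to illustrate; AUTH_DISABLED and MOCK_ENABLED are only the example names from this comment, not real settings anywhere.

  package main

  import (
      "fmt"
      "os"
  )

  func main() {
      // Unset (empty) means "use the production default";
      // any non-empty value flips the behavior.
      authDisabled := os.Getenv("AUTH_DISABLED") != ""
      mockEnabled := os.Getenv("MOCK_ENABLED") != ""

      fmt.Println("auth disabled:", authDisabled)
      fmt.Println("mocks enabled:", mockEnabled)
  }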
I don't doubt any of that but why stick to such old conventions when there are explicit and immediately clear options?
I don't think me writing an if condition like
  if boolean != true
instead of
  if boolean == false
should pass code review. I don't think my pet peeve is necessarily different from that. I understand there's a historical convention, but I don't think there's any real reason to keep sticking to it (a small sketch at the end of this comment makes the double negative concrete).
Hell, some of the other compiler options are flags with no 0 or 1, why could this not have been --static or any flag? I'm genuinely curious.
Moreover, 0 here maps to false, but in program exit codes 0 maps to success, which in my mind maps to true; given that discrepancy, it does not appear to be the right mental model.
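To make the double-negative point concrete, here's a tiny Go sketch. The Settings struct and its field names are made up purely for illustration; they're not Zed's actual config or anything from the Go toolchain.

  package main

  import "fmt"

  // Hypothetical settings, only to illustrate the naming point.
  type Settings struct {
      EnableAI  bool // positive flag: true means the feature is on
      DisableAI bool // negated flag: forces a double negative at the call site
  }

  func main() {
      s := Settings{EnableAI: true, DisableAI: false}

      if s.EnableAI { // reads directly: "if AI is enabled"
          fmt.Println("AI is on (positive flag)")
      }
      if !s.DisableAI { // reads as "if not disabled" -- the convoluted double negative
          fmt.Println("AI is on (negated flag)")
      }
  }

Both conditions mean the same thing here; the second one is the "setting false true" reading that the negated name forces on every caller.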
Sorry to say, but this is the general trend and nature of technology. Technology can only advance to the level that it does because it does isolate people. The isolation effect cannot be fixed by social means, because it has the pressure gradient of technological and economic development on its side. The very adoption of technology encourages isolation and hence more dependence, which in turn increases its economic power because people begin to need it. And, I dare say, they even develop a psychological dependence on it.
The only way out is a strict restriction on the development of technology, especially AI. Sadly, those who develop it and fund it grew up with it, and it has become a comforting and crucial part of their lives, so it is simply impossible to convince them that technology has a systemic (rather than merely social) downside. They convince themselves that we just need to learn how to use it, because a true systemic decrease in life quality via technology would imply that their entire world is wrong, and most people cannot handle that psychologically.
Why? I see no arguments, only propositions. (I will offer few arguments myself below. Not flaming.)
Technological advancement is in my opinion unrelated to the way current “social” technologies impact “social relations”. Things are not going well, I’d agree. But I can imagine one hundred beautiful features (or historical technological advances) that have improved social relations, trust and general well-being.
Current big tech is dystopian and extraction based, but that’s not the general trend of the last two centuries. In the late ‘90s, early ‘00s I was actually very optimistic about technology and the state of the world (poverty, global village, war, climate).
Antisocial tech has put us back a long way. But that’s not technology in general, ‘just’ Google, Apple, Meta and the app-o-sphere being or doing evil by extracting attention in finite time via small machines. The big machines have brought us much. And even then both ways; for good and for bad.
1. Even early computers of the 90s and 00s tended to reduce face-to-face contact. At least a lot of children started to spend more time on the computer than hanging out in real life. (The latter wasn't obliterated, but reduced.)
2. Airline and rapid travel encourage people to move away from friends and family, because they can be visited or reunited with on occasion more often.
3. Not sure how you were enthusiastic about the climate – it's been getting steadily worse since the use of fossil fuel technology, which in addition makes it harder for people to engage in subsistence farming in many places due to unpredictability. People have to rely more on industrial farming.
4. Industrial farming and large-scale farming put many family farms out of business, meaning less human dependence on individuals and more on technology.
5. YouTube, etc. brings more knowledge to the world but many tutorials mean people can be more independent and rely on individuals less for their knowledge.
6. Personal cars mean people do not have to rely on each other for a lot of manual labor like hauling stuff, and they now drive instead of walk to the grocery store, which means a lower likelihood of encountering others you know.
7. All communications technologies in general mean less in-person communication, or a greater ability to move away from communities. The internet means less going to the library, etc.
1. time spent in front of a screen means you aren't in a face-to-face interaction with someone, but the same can be said of books. When many people still connected to a local BBS, it was another way to get to know people around you, and meetups were common.
2 & 6 & 7. access to airlines (and travel in general) was a net positive for meeting new people. Suddenly people could meet and get to know far more people than the handful of folks in the town they grew up in. Travel is probably one of the best ways to meet new people and gain relationships, and being able to pack up and move to where your new friends/love interests are is a good thing, while communication tech lets you keep in touch with people who are in different cities/states/countries and maintain those relationships.
3 & 4. People have to depend more on others to do their farming for them, but if you're working the fields you can't be out meeting real people face to face either. You're much more likely to have a social encounter at a grocery store than a grain silo. The hours you aren't spending growing your own food mean you have more time to be with the people you love.
5. independence is good, and learning new skills means going out to new places to practice them or for supplies and equipment, where you can meet other people with similar interests. It's the parasocial aspect of YouTube that's most harmful.
> 2 & 6 & 7. access to airlines (and travel in general) was a net positive for meeting new people. Suddenly people could meet and get to know far more people than the handful of folks in the town they grew up in.
Debatable, because social relationships also become more frivolous.
> 3 & 4. People have to depend more on others to do their farming for them, but if you're working the fields you can't be out meeting real people face to face either.
But at least you can develop closer relationships with fewer people. Again, it's a matter of what place on the spectrum is ideal.
> 5. independence is good and learning new skills means going out to new places to practice them or for supplies and equipment where you can meet other people with similar interests
Independence is good only up to a point. Too much independence is a natural consequence of advancing technology and becomes pathological.
>1. time spent in front of screen means you aren't in a face to face interaction with someone
Even if you are video chatting? I video chat with family and friends all the time to keep in touch over longer distances. I feel technology is helping there a lot.
If there were no video chatting, people would have more incentive to meet in person or not move away as much. Although a small proportion of people will end up with video chatting instead of nothing at all, the GENERAL trend will be more distance between people, even if in SOME cases it means less distance and more meaningful communication.
That's the key also: a small subset of people who benefit in the short-term does not mean that the technology doesn't move things in a worse direction in the long term. After all, the introduction of new technologies like video chatting sometimes just solves problems created by older technologies, possibly leading to a situation of decreasing LOCAL maxima, each of which seems like it is an improvement because it is, after all, a local maximum.
Hmm, so it seems technology is empowering the individual to the level of killing society? I mean it in the sense that we came to this point over millennia of socially fueled evolution, and now technology allows us to get rid of all this "legacy". I'm only thinking out loud here, but it seems conservatism would have a better target with this, or at least one closer to reality, instead of only attacking the consequences with magical thinking.
That is true. But the downside is that technology is also at the same time pushing biological life aside, because its development is fundamentally unsustainable. So it also means eventual complete subservience to it without any true freedom.
Thanks for your replies. I understand the worldview. We differ on a few points of view.
Large-scale farming frees up hands for more specialization. Specialization leads to interdependence and (in my naïveté) peace. ‘We’ did get a very large part of the world out of poverty. That was part of my optimism. And I thought we would reach peak oil faster and go sustainable faster (batteries are still the major future potential upside for me).
Perhaps in a ‘might have been’-scenario 9/11 and the end of the end of history (Fukuyama), plus the antisocial tech are the turning points. Haven’t thought that shift from techno optimism to political, social and cultural negativity (in me, but it seems a trend as well) through enough. The whole bitcoin shebang, the return of the 80s American Psycho capitalism and consumerism, the wars just rub me the wrong way. I might be turning hippie in my second half of life.
As a child of the 80s I’ve never felt technology reducing social interaction. But that might have been a temporal sweet spot. Massive amounts of screen time, massive amounts of outside time (friends, sports).
Kind of ridiculous. If you've ever actually read Ayn Rand, she was a competent writer with a talent for simile, but she had a remarkably simplistic and glorified view of pure capitalism that was obviously a psychosis from her early days, when her father's pharmacy was nationalized. Her characterization of pure capitalism and rationality as the only good, and of everyone who disagrees as lazy (as in her book Atlas Shrugged), as well as her complete ignoring of the commons and of non-human creatures, has become a religion for some. Trump himself of course loves Ayn Rand.
Rather reprehensible, although her literature gives insight into the mind of the enemy.
>Top Gun: Maverick and Taylor Sheridan’s nostalgic, libertarian-inflected Yellowstone. And a longstanding Christian culture industry has backed projects like the 2023 film Sound of Freedom, a dramatization of child trafficking that grossed more than $242 million for Provo’s Angel Studios. The Christian drama The Forge earned $30 million on a $6 million budget last year.
There’s good evidence that this is an untapped market which normal studios are too afraid to tap.
I don’t think it’s ridiculous at all. If you listen to people on the right they have felt unrepresented in popular entertainment for a long time, and these successes are evidence that they are willing to pay for entertainment which aligns with their values.
Given the length of _Atlas Shrugged_ (1168 pages) I would wager my entire earthly possessions that he’s never read it. But possibly he’s been advised to like Ayn Rand in theory.
I love math, completed a PhD, and am very self-disciplined. But even so, I don't think I would have been able to learn much on my own with video lectures, at least not at the start. For some reason, it seems like you need to reach a "critical mass" of knowledge first before you can do that, and I've observed that a crucial component is being in a program with others, and definitely having a very experienced mentor.
Without a very experienced mentor, I think it's very difficult to get to the independent-learning stage with math. That's the key. You need someone to go through your work, correct you, and make sure you don't go off in a very wrong direction.
So my advice is to find at least a graduate student in math to help you. It's like a piano teacher: if you've ever taken piano, you know it's absolutely mandatory to have a teacher. People who self-learn from the start end up being able to play, but not very well.
Edit: one other crucial component is time. If you're really interested in knowing something like linear algebra, analysis, or calculus with fluency, expect to spend at least 10 hours per week on it for a year. Two hours per week will give you a cursory and very weak understanding only.
> But even so, I don't think I would have been able to learn much on my own with video lectures, at least not at the start.
This was exactly my situation. Videos can give you a lot of structured, well presented information. And for MIT courses you'd get this knowledge from the very best. The problem is that no matter how well the subject matter is presented, I would hit some conceptual snag that I couldn't resolve just by repeating the sections in the video.
Years ago, to clear up the concepts, I would go to math stack exchange, write down exactly what I wanted to understand using mathjax, and hope that someone would provide a detailed enough explanation. Most of the time I did learn from the answers, but sometimes the answer would be too succinct. In such cases there would be a need for a back and forth, and stackexchange is not really designed around that usage pattern. This hassle would eventually make me give up the whole endeavor.
Now however there are LLMs. They don't need mathjax to understand what I am talking about and they are pretty good at back and forth. In the past 6 months I have gone through 2 full MIT courses with practice sheets and exams.
So I would encourage anyone who went through the route of self learning via videos and found it to be too cumbersome and lacking to give it another go with your favorite LLM.
My only concern with using LLMs to learn new material is being certain that it's not leading me astray.
Too many times I've used LLMs for tasks at work and some of the answers I've gotten back are subtly wrong. I can skip past those suggestions because the subject is one I'm strong/experienced in, and I can easily tell that the LLM is just wrong or speaking nonsense.
But if I didn't have that level of experience, I don't think I would be able to tell where the LLM was wrong/mistaken.
I think LLMs are great for learning new things, but I also think you have to be skeptical of everything they say and double-check the logic of what they're telling you.
I have the same doubts; it's like the old rule about reading a newspaper story. When it's outside your area of expertise, you think the writer is a genius. When it's something you know a lot about, you realize they're an idiot.
But it might still help, especially if you think about the LLM as a fellow student rather than as a teacher. You try to catch it out, spot where it's misunderstood. Explain to it what you understand and see if it corrects you?
LLMs are indeed excellent as conversation partners for helping with difficult concepts or for working through problem sheets. They’ve really opened up self-learning for me again in math. You can use them to go much deeper into concepts than the course you’re taking - e.g. I was relearning some basic undergrad probability and stats but ended up exploring a bit of measure theory using Gemini as well. I would go so far as to say that an LLM can be more effective at explaining things than a randomly selected graduate student (though some grad students with a particular talent for teaching will be better).
What the LLM still does not provide is accountability (an LLM isn’t going to stop you from skipping a problem set) and the human social component. But you could potentially get that from a community of other self-learners covering the same material, if you’re able to pull one together.
Even if they don't skip, they adopt weird hand positions that are hard to correct. There is just too much motor movement that needs to be done right that cannot really be explained or learned by watching a video or reading a book. It's actually similar to math in a certain way, where motor memory is replaced by subtle steps in logical reasoning.
Not sure why you added "but even so", getting a PhD is fundamentally about believing in the necessity of the mentor/mentee relationship for learning. It's not at all surprising that you would find:
> You need someone to go through your work, correct you, and make sure you don't go off in a very wrong direction.
I've learned enough to publish (well received) technical books in areas I've never taken a single course in, and have personally found that in-classroom experiences were never as valuable as I had hoped they would be. Of course starting from absolute 0 is challenging, but one good teacher early on can be enough.
Though I also don't think video lectures alone are adequate. Rather than focusing on "exercises", I've found I get the biggest boost in learning when I need to build something or solve a real problem with the mathematical tools I'm studying. Learning a bit, using it to build a real project, and then coming back when you need to unblock the next hurdle is very effective.
On top of this, books are just better for learning than videos (or lectures in general). Lectures are only useful for getting the lay of the land, and getting a feel for how types of problems are worked out. Especially with mathematics, you need time to look at an equation, read ahead, flip back, write it in a notebook, etc. until you really start to get it. You really can't possibly get any of these ideas in 45-60 minutes of someone talking about it.
That's why, for me, online lectures don't really change the autodidact game all that much. Reading books and solving problems seems to have been the standard way to learn things well for at least the last several hundred years, and lectures don't improve on that too much.
Because the "even so" was for the "self-motivated" part, not the "getting the PhD" part.
> I've learned enough to publish (well received) technical books in areas I've never taken a single course in,
I'm talking about pure math here, not other technical fields which are more hands on and don't require as much mentorship. Programming is easier to self-learn than math for sure, because it is not very abstract compared to math. It's also guided by whether the code works or not.
Well the post is "Mathematics for Computer Science" which I don't think anyone considers "pure math". Most of my writing has been in the area of applied mathematics, the closest I've gotten to pure math would be some stuff on measure theory.
So yea, it might be a challenge to self teach something like cluster algebras, but at that level much of the work in the field is academic communication anyway.
I would say that you need to start at a lower level when self learning with a simpler resource. Something like Openstax. People get far too obsessed with the name attached to a resource than whether it is the right method of learning.
I am about finished with my CS PhD and I taught databases at the university during covid. I, personally, would have failed in the remote learning environment we were providing.
I am amazed at those who fought or even flourished through that.
I’m currently enrolled in an online MS program, and I have never struggled so much in courses. The lack of a social component might be what’s causing that. The material is mostly a recap of undergrad and things I already knew, so the coursework should not be so difficult for me, but it’s been incredibly difficult.
Then again, William & Mary had some incredible teachers, and maybe the online program through a different school just isn’t very good at designing assignments and teaching by comparison. But I feel that there was a difference in how I could succeed at challenging assignments when I was among other students in a social setting. The work in undergrad was highly rigorous, though exploring it alongside other real-life students made it a very different undertaking.
I'm a fourth-year W&M student considering an online MSCS program post-grad (possibly the same one you're in) - I'd love to hear more about your experience in it, as compared to traditional undergrad, if you'd be willing to share?
I've found you have to be very careful with an LLM as a teacher since, especially when it's the one explaining, it is wrong more often than you might think, and there's no way to know.
The best use of an LLM I've found in learning is to explain to it my understanding of what I've learned and have it critique what I've said. This has greatly reduced the amount of backtracking I need to do as I start to realize I've misunderstood a foundational concept later on, when things stop making sense. Often simply having the model respond with "Not quite, ..." is enough to make me realize I need to go back and re-read a section.
The other absolute godsend is just being able to take a picture of an equation in a book and ask for some help understanding it notationally. This is especially helpful when going between fields that use different notation (e.g. statistics -> physics)
Of course there are bad teachers out there. The question wasn't "are there human teachers as bad as an LLM"; it was whether an LLM is as good as a good human teacher.
> We just need the Wille—the will—to ask it.
That's the thing. It is a very good search resource, but that's not what a teacher is. A good teacher will help you get to the right questions, not just get you the right answers. And the student often won't know the right questions until they already know quite a bit. You need a sufficiently advanced, if incomplete, mental model of the subject to know what you don't know. An LLM can't really model what you're thinking, what you're stuck on, and what questions you should be asking.
> You need a sufficiently advanced, if incomplete, mental model of the subject to know what you don't know.
I believe that with a few common prompts and careful reflection on the LLM's responses, this challenge can be easily overcome. Also, nobody truly knows what you're stuck on or what you're thinking unless you figure out that the unknown exists and seek it out. However, I do agree with your point that "a good teacher will help you get to the right questions," since a great teacher is an active agent; they can present the unknown parts first, actively forcing you to think about them.
- When people see some things as beautiful (best), other things become ugly (ordinary). ... Being and non-being create each other. — Laozi, Tao Te Ching
Perhaps the emphasis on the greatness of an LLM gives the impression that it undermines the greatness of a great human teacher, which has already led to a few downvotes. I want to clarify that I never intended to undermine that. I have encountered a few great teachers in my life, whether during my school years or those teaching in the form of MOOCs. A great teacher excels at activating the students' wille to seek the unknown and teaching more than just knowledge. Also, the LLM relies heavily on these very people to create the useful materials it trains on.
Metaphorically speaking, the LLM is learning from almost all great teachers to become a great 'teacher' itself. In that sense, I find no problem saying "LLM could be the teacher, one of the best already."
Yes, it would not be good if the same four days were selected for everyone. The third day off should be freely chosen; otherwise everyone else is out on the same day too, which is irritating!
Darn, it's all rather obvious. Personally, I could never sustain a regimented five-day workweek. That just left Saturday to decompress, and Sunday to be on edge about going back. There's no time for actual life, especially in North America, where people typically get shitty amounts of vacation time. Is that life?
Personally, I'd rather be poor, at least defined by modern economic standards. And I did choose to be. And after having quit my high-paying job, I'm so much happier.
Seriously my job is relatively relaxed, but I am still losing my mind because it is just an endless stretch of 5 day weeks for... 20 years? The only version of reality where I could accept that is if I had kids and family all dependent on my work, but without that I am unsure how people aren't filled with ennui when they look into that future.
Well, I think that's part of the lock-in. Not kids exactly, but the fact that society has made raising kids a hugely expensive activity over time (whereas in much earlier times it was not). Because in modern society, having a family means paying for services to keep it functioning (extracurricular activities, romantic vacations for the parents, daycare), and that was encouraged by the system not because it makes us the most happy, but because it is the most efficient for technological innovation.
One does not necessarily need a healthy environment for the next generation of scientists if there is already a surplus of them willing to work in rather sordid conditions, and if there is a surplus of scientific discoveries that can be easily capitalized upon with few workers. Or if there is a surplus of science being done elsewhere. The system optimizes for short-term production and growth, and a good environment for science everywhere is not necessarily optimal for that.
Kids play a part in trapping you in that lifestyle though. If you’re fortunate enough to have a job where you can save money, as long as you don’t have kids you still have the option of just packing your bags and pissing off somewhere for a while.
Lifestyle creep is another way people get trapped in the rat race. Buying silly stuff like expensive cars on finance to show off.
Working from home makes it more bearable. It feels like I use my free time to make money instead of going to work. It's not like I do hobbies 16 hours a day
I wish the job market was better so I could find a more engaging job.
Yeah, it's tough to find an engaging job that provides a balance because of the insane competition that will only get worse due to the increase in efficiency with AI. Efficiency up to a point implies comfort, and beyond that point implies wage-slavery.
I sustained a five day workweek for more than two decades, but now that I have young kids it's hard. Weekdays are work + childcare, then two hours of being a vegetable, then sleep. Weekends are childcare, then two hours of being a vegetable, then sleep.
A 4 day work week would really help by giving me one day with 8 hours of me time, but that's not something any job here is going to provide. Fortunately, having sustained that 5 day work week for so long with "North America" compensation, I can comfortably go to a 0 day work week. It would be better for the economy if I continued to participate via a 3 or 4 day work week, but any job that would give me that would pay so little as to not be worth it.
> It would be better for the economy if I continued to participate via a 3 or 4 day work week
Not necessarily even. Better for the GDP but not necessarily the long-term economy, which might actually be more likely to thrive in the long run with more happy, balanced people out there.
Honestly I have not asked. I've poked around in the HR guidelines for companies I've worked for (big tech companies) and the provisions for less than full-time work are either non-existent or so full of unfriendly exceptions and approvals that I assumed it wasn't a realistic option. I have also never seen or heard of anyone doing it at those companies across the hundreds of people I have worked with.
I'd love to hear if anyone at a FAANG pulled it off, how they did it, and what the financial impact was.
I sincerely don't understand the surprise at all. AI is perfect for authoritarian regimes. Typically, they are not as efficient as more traditional democracies because they don't allow markets to function as freely as democracies do (even though in democracies there's a lot of corruption as well). But AI can get around that because it replaces a lot of labour and can do things efficiently without the market.
I'd expect AI to be one of the prime weapons of choice of authoritarian regimes in the future, and no amount of regulation can stop it.
The only thing us normal people will ever get out of AI is increased efficiency at the cost of everyone else competing at an increased efficiency level via the prisoner's dilemma. And the people profiting off that series of prisoner's dilemma games will exactly be companies like Anthropic and dictators or quasi-dictators that can use their existing power to leverage the dominating effect of AI.
The most despicable and dark beauty of this scheme is that we will be completely distracted by the overwhelming information and novelty of it.
What I have found with these fonts (and I have tried them all) is that one isn't really much better than another; instead, I have to switch between them (and others) because eventually I get sick of every single one of them.
I don't know. I like all the fonts, they're good. But looking at them for long periods of time makes me tired of looking at them and I just need to switch. You might as well ask me why I get tired of a certain food if I eat it too often.
Oh, OK. I asked because sometimes people don't like a certain aspect of a font, can't stand it, and need to switch. I'm also the exact opposite of you: I can use a font I like for a decade without getting tired of it. Same for a good color scheme for my terminals / IDE.
So it was genuine curiosity on my part. Sorry if it sounded rude or accusatory or similar.
> It's quite interesting to find someone who can use the same one over and over.
I like to solve some problems once, and once I solve them sufficiently, I prefer not to touch them, so I can focus on other things. It's not that I'm opposed to better solutions; I just don't actively seek them.
Same is true for tools. I prefer to master a single tool over the years rather than jump from tool to tool.
I checked it out, it looks pretty great! That being said, I actually quit my full-time developer job, so I can no longer justify that sort of expense, haha!
Honestly, BBB means nothing to me. I see it and I shrug. Reviews, checking the business out yourself, and using a dozen other cues are more useful than BBB.
It is because it is a direct attack against human creativity. It separates people into two very disparate classes: those who want to use and develop it to become more efficient and rich, and those who hate it with a passion because for them, it takes away the beauty of humanity at the forefront of creativity.
Unlike with blockchain, the philosophy and morality of these two classes, one represented by efficiency and one represented by human passion, are diametrically opposed in every respect.
OK - so I deeply value human creativity and I disagree with your first statement. At least I think we don't currently know whether it will work out this way.
My hunch is that human creativity is incredibly resilient and will route around damage. (But employability in creative professions? That's a slightly different topic - an orthogonal one strictly speaking)
I think we already do, at least for many people. AI-generated stuff reduces the value that humans put on human creation because human-only creation is harder to find. AI takes the joy of discovery out of many processes and activities. It's of course harder to get jobs in creative fields now such as translation and graphic design. It's a devaluing of creativity and it's discouraging to quite a few creatives. The very fact that many artists already feel depressed about AI is already itself a huge negative impact on creativity.
I can't tell you how many people have told me how depressed they are about AI and how they have less impetus to create things. Although one can certainly do it still for joy, it's harder for many in an environment that is so ruthless.
On top of "hard to find" I think it also displaces the market for real art, there are lots of blogs (or splogs?) like this one that are full of AI slop art, like this one
maybe the whole article is AI generated, but the second image from the top is just awful. If people get the idea that crap like that is acceptable, how can anybody sell real work?
So far as I can tell, the AI generated image on it is actually pretty funny, and it really problematizes the idea that you could sell art to that kind of market.
I find the "vibe coding" idea offensive because I've often been on projects where somebody junior thought he did 80% of the work and then I have to do the other 80% of the work and it's been a very expensive and extensive project of figuring out all the little things and sometimes all of the big things they did wrong.
I really like working with the AI Assistant in IntelliJ IDEA in that it's like pair programming with a junior who is really smart in some ways but weak in other ways. I get back an answer within seconds and can make up my mind whether it is right or wrong or somewhere in between.
Things like Windsurf and Junie, on the other hand, seem to be mostly a waste of time, as they go off and do stuff for 5-20 minutes and when they get back it is usually pretty screwed up and a lot of effort to understand what's wrong with it and fix it... It's very much that "do the last 20% that is 80% of the work" experience.
There is a lot of discourse around creativity and LLMs that I find really annoying on lots of levels.
There are the people who don't have any idea of what creativity is, which leads to ideas like "LLMs (by definition) can't be creative" (this comes across way too much like Roger Penrose saying he can do math because he's a thetan), and there are the many people who don't get that "genius is 99% perspiration and 1% inspiration." There are also the people who are afraid of getting "ripped off" who don't get that if they got a fair settlement for what was stolen from them, it would probably be about $50, not a living wage. [1] They also don't seem to get that Google's web crawler has been ripping people off since 2001, and only now are they worried. Maybe I have 50% sympathy for the idea that visual art is devalued by LLMs, since I feel that my work is devalued when people are seduced into thinking that the job is 80% done, not 20% done, by the LLM.
[1] Arrived at by dividing some quantity of money that is input to or output from the AI machine by the number of content pieces that are put into it.
> There are the people who don't have any idea of what creativity is which leads to ideas like: "LLMs (by definition) can't be creative"
It's not that LLMs can't be creative. It's that we shouldn't allow them to because creativity is more than just about output. It's about human expression. End of story.
I have been an artist since I was a child and I disagree with you. Some of my favorite works of human creativity have made use of AI, or been inspired by the field.
Yes, there will always be exceptions, especially at the beginning. But economically, human-only art will suffer in the long term as AI becomes more sophisticated and fewer people have the opportunity to make a living from art.
But one or two exceptions, especially on HN (where people are highly addicted to technology), does not make a case for AI.
There's also the topic of labor that ties in here. Creators (and I'd argue most computer related jobs) are now having to compete against technology for wages.
Absolutely right, which will make it harder to make a living from creativity. A lot of people do make a living from it, such as graphic designers, who will have to turn to other jobs to keep eating. And that is discouraging, even if they can still do art in their spare time.
Agreed - with one caveat: "Discouraging" is an understatement, to put it lightly. I think the top 25th percentile of people in software tend to underestimate how difficult it is to switch careers for the vast majority of all people, and what the consequences of that are.