You're right, but a plugin for a common compositor like Plasma's KWin would still make it accessible to a large number of users. Shouldn't be too hard to do either. Maybe I'll do it this weekend!
The first part of the comment is very valuable. “I looked at it and it made me feel extremely strange almost immediately”. That is very good to know.
The second bit I’m less sure about. What do they mean by “check to make sure this can't trigger migraines or seizures”? Like what check are they expecting? Literature research? Or experiments? The word “check” makes it sound as if they think this is some easy-to-do thing, like how you could “double check” the spelling of a word using a dictionary.
On the other other hand, the average system in that survey presumably cost more than what the Steam Machine will retail for, if we're correct in interpreting this as being a competitor to dedicated consoles.
But even if it's double the price of the PS5/Xbox, it's still likely to be less than the price (at the time of purchase) of the mean PC in the hardware survey. For every gamer out there struggling along on a $500 mini-PC, there's another who plunked down $5,000 to play Cookie Clicker at 8K/240 FPS.
I'm not sure this is something I really worry about. Whenever I use an LLM I feel dumber, not smarter; there's a sensation of relying on a crutch instead of having done the due diligence of learning something myself. I'm less confident in the knowledge and less likely to present it as such. Is anyone really cocksure on the basis of LLM received knowledge?
> As a ChatGPT user I notice that I’m often left with a sense of certainty.
They have almost the opposite effect on me.
Even with knowledge from books or articles I've learned to multi-source and question things, and my mind treats the LLMs as a less reliable averaging of sources.
I remember back when I was in secondary school, something commonly heard was
"Don't just trust wikipedia, check it's resources, because it's crowdsourced and can be wrong".
Now, almost 2 decades later, I rarely hear this stance and I see people relying on wikipedia as an authoritative source of truth. i.e, linking to wikipedia instead of the underlying sources.
In the same sense, I can see that "Don't trust LLMs" will slowly fade away and people will blindly trust them.
> "Don't just trust wikipedia, check it's resources, because it's crowdsourced and can be wrong"
This comes from decades of teachers misremembering what the rule was, until it morphed into the Wikipedia-specific form we see today. The actual rule is that you cannot cite an encyclopaedia in an academic paper, full stop.
Wikipedia is an encyclopaedia and therefore should not be cited.
Wikipedia is the only encyclopaedia most people have used in the last 20 years, therefore Wikipedia = encyclopaedia in most people's minds.
There's nothing wrong with using an encyclopaedia for learning or introducing yourself to a topic (in fact this is what teachers told students to do). And there's nothing specifically wrong about Wikipedia either.
I remember all of our encyclopedias being decades out of date growing up. My parents bought a set of Encyclopaedia Britannica in 1976 or something like that, so by the time I was reading the encyclopedia for research on papers in the late 90s and early 00s, it was without a doubt less factual than even the earliest incarnation of Wikipedia was.
Either way, you are correct, we weren't allowed to cite any encyclopedia, but they were meant to be jumping off points for papers. After Wikipedia launched when I was in 9th grade, we weren't allowed to even look at it (blocked from school computers).
I agree about blocking ChatGPT though. Kids (and most adults, honestly) aren't "smart" enough to understand the limitations, and they trust it, and Wikipedia, without question.
The original rule when I was a lad (when wikipedia was a baby) was, "don't trust stuff on the internet, especially Wikipedia where people can change it at will."
Today they might have better trust for Wikipedia-- and I know I use it as a source of truth for a lot of things-- but back in my day teachers were of the opinion that it couldn't be trusted. This was for like middle and high school, not college or university, so we would cite encyclopedias and that sort of thing, since we weren't reading cutting edge papers back then (maybe today kids read them, who knows).
Edit: Also, I think the GP comment was proven correct by all of the replies claiming that Wikipedia was never controversial, because it was very clear to everyone my age when Wikipedia was created/founded that teachers didn't trust the internet or Wikipedia at the time.
There was a period of time when Wikipedia was more scrutinized than print encyclopedias, because people did not understand the power of having 1000s of experts and the occasional non-expert editing an entry for free instead of underpaying one pseudo-expert. They couldn't comprehend how an open source encyclopedia would even work, or trust that humans could effectively collaborate on the task. They imagined that 1000s of self-interested chaos monkeys would spend all of their energy destroying what 2-3 hard-working people had spent hours creating, instead of the inverse. Humans are very pessimistic about other humans. In my experience, when humans are given the choice to cooperate or fight, most choose to cooperate.
All of that said, I trust Wikipedia more than I trust any LLMs but don't rely on either as a final source for understanding complex topics.
> the power of having 1000s of experts and the occasional non-expert editing an entry
When Wikipedia was founded, it was much easier to change articles without notice. There may not have been 1000s of experts at the time, like there are today. There are also other things that Wikipedia does to ensure articles are accurate today that it may not have done or been able to do decades ago.
I am not making a judgment of Wikipedia, I use it quite a bit; I am just stating that it wasn't trusted when it first came out, specifically because it could be changed by anyone. No one understood it then, but today I think people understand that it's probably as trustworthy as, or more so than, a traditional encyclopedia is/was.
> In my experience when humans are given the choice to cooperate or fight, most choose to cooperate.
Personally, my opinion of human nature falls somewhere in the middle of those two extremes.
I think when humans are given the choice to cooperate or fight, most choose to order a pizza.
A content creator I used to follow was fond of saying "Chill out, America isn't headed towards another civil war. We're way too fat and lazy for that."
Sure, but I hope you get my point. Fighting takes effort, cooperation takes effort. Most people have other things to worry about and don't care about whatever it is you're fighting or cooperating over. People aren't motivated enough to try to sabotage the Wikipedia articles of others, even if they could automate it. There's just nothing in it for them.
> "They imagined that 1000s of self-interested chaos monkeys would spend all of their energy destroying what 2-3 hard working people has spent hours creating instead of the inverse."
Isn't that exactly what happens on any controversial Wikipedia page?
There aren't that many controversial topics at any given time. One of Wikipedia's solutions was to lock pages until a controversy subsided. Perma-controversy has been managed in other ways, like avoiding the statement of opinion as fact, the use of clear and uncontroversial language, using discussion pages to hash out acceptable and unacceptable content, competent moderators... Rage burns itself out, and people get bored with vandalism.
It doesn't always work. There are many topics that are perpetual edit wars because both (multiple) sides see the proliferation of their perspective as a matter of life and death. In many cases, one side is correct in this assessment and the others are delusional, but it's not always easy to align the side that's correct with the people who effectively control the page, because editors indeed do have their own biases (whether because of ideology, a philosophy, a political party, a nation, or whatever else). For those topics, Wikipedia can never be a source of "truth".
More colloquially, people would say that Wikipedia could not be trusted because "anyone can edit the pages or write whatever they want."
Of course that's demonstrative of the genetic fallacy. Anyone can write or publish a book, too. So it always comes down to "how can you trust information?" That's where individual responsibility to think critically comes in. There's not really anything you can do about the fact that a lot of people will choose not to think.
Yeah you weren't allowed to cite encyclopedias when I was a kid because:
1) encyclopedias are a tertiary source. They cite information collected by others. (Primary source: the actual account/document etc, Secondary source: books or articles about the subject, Tertiary source: Summaries of secondary sources.)
2) The purpose of writing a research paper was... doing research, and looking up an entry in an encyclopedia is a very superficial form of research.
Also the overall quality of Wikipedia articles has improved over the years. I remember when it was much more like HHG with random goofy stuff in articles, poor citations, etc. Comparing it to, for instance, Encarta was often fun.
Encyclopedias are tertiary sources, compilations of information generated by others. They are neither sources of first hand information (primary sources) nor original analysis (secondary sources). You can't cite encyclopedias because there's nothing to cite. The encyclopedia was not the first place the claim was made, even if it was the first place you happened to read it. You don't attribute a Wayne Gretzky quote to Michael Scott no matter how clearly he told you Wayne Gretzky said it.
What about scholarly encyclopedias? For example, the Stanford Encyclopedia of Philosophy. The articles are written in the style of a survey article, and whether they're merely tertiary, I can't tell. If the intention behind a citation is a reference for a concept (an "existence proof" of it) rather than identifying its source or providing evidence, then a tertiary source such as a textbook seems adequate.
There is some nuance. Wikipedia is a tertiary source for the subjects of its articles. However, it is a primary source for what is on Wikipedia. You can cite an encyclopedia the same way you would cite the dictionary (which is also a tertiary source) as a way of establishing that information is in circulation.
Likewise, primary sources for some claims may be tertiary sources for others. If you read the memoirs of a soldier in WW1 who is comparing his exploits to those of a Roman general from antiquity, he is a primary source for the WW1 history and a tertiary source for the Roman history.
Survey articles and textbooks are generally tertiary. They may include analysis which is secondary and citable, but even then only the parts which are original are citable.
As a more general rule, you can't cite a piece of information from a work which is itself citing that piece of information (or ought to be).
You gave some good context I missed. The (even) more technical (read: pretentious) explanation is that it's a tertiary source. As a general rule of thumb secondary sources are preferred over primary sources, but both are acceptable in the right academic context.
I do understand the "latest version" argument, and it is a weakness, but it's also a double-edged sword: it means Wikipedia can also be more up-to-date than (almost) any other source for the information. That's why I say there's "nothing specifically wrong about Wikipedia either": it can be held in similar regard to other tertiary sources and encyclopaedias, with all the problems that come with those.
> (Wikipedia has the additional problem that, by default, the version cited is the ever-changing "latest" version, not a fixed and identified version.)
That's only true if citing means copying the URL directly. If you use Wikipedia's "Cite this page" feature or an external reference management tool (e.g. Zotero), the current revision ID (the `oldid` URL parameter) is appended to the URL, pinning the citation to a fixed version.
> Now, almost 2 decades later, I rarely hear this stance and I see people relying on Wikipedia as an authoritative source of truth, i.e., linking to Wikipedia instead of the underlying sources.
That's a different scenario. You shouldn't _cite wikipedia in a paper_ (instead you should generally use its sources), but it's perfectly fine in most circumstances to link it in the course of an internet argument or whatever.
Well, also years of Wikipedia proving to be more accurate than anything in print, and rarely (and not for very long) misrepresenting source materials. For LLMs to get that same respect, they would have to pull off all of the same reassuring qualities.
There’s also the fact that both Wikipedia and LLMs are non-stationary. The quality of Wikipedia has grown immensely since its inception, and LLMs will get more accurate (if not explicitly “smarter”).
I think you would need a complicated set of metrics to claim something like "improved" that wasn't caveated to death. An immediate conflict: total number of articles vs. impressions of articles labeled with POV biases. If both go up, has the site improved?
I find I trust Wikipedia less these days, though still more than LLM output.
I can't think of a better accidental metric than that!
I'll go ahead and speculate that the number of incoherent sentences per article has gone down substantially over the last decade, probably due to the relevant tooling getting better over the same period.
> I can see that "Don't trust LLMs" will slowly fade away and people will blindly trust them.
That's already happening. I don't even think we had a very long "Don't trust LLMs" phase, if we did it was very short.
The "normies" already trust whatever they spit out. At leadership meetings at my work, if I say anything that goes against the marketing hype for LLMs, such as talking about "Don't trust LLMs", it's met with eye rolls and I'm not forward thinking enough, blah blah.
Management-types have 100% bought into the hype and are increasingly difficult to convince otherwise.
I can’t speak to your specific experience, but I do some of this kind of eye-rolling when people bring short term limitations on LLMs into long term strategy.
I’m reminded of when people at work assured me the internet was never going to impact media consumption because 28.8kbps is not nearly enough for video.
Problem is they also included newspapers in authoritative sources - except foreign ones that is - and Wikipedia at least has some kind of peer review process.
It's genuinely as authoritative as most other things called authoritative.
Except when they glaringly get things wrong, like "character X on show Y said catchphrase Z", and two queries produce two different values of X, one right, one wrong. The more I use Gemini summaries for things I know a bit about, the worse my opinion of them gets.
I know you are not serious, but what would constitute an acceptable source?
I could paste its content into an LLM for rephrasing or summarizing or whatever, or just simply ask an LLM about it and put it on my personal website. Would that be an acceptable source?
What even is an acceptable source for such things?
I don't think the cases are really the same. With Wikipedia, people have learned to trust that the probability of the information being at least reasonably good is pretty high, because there's an editing crucible around it and the ability to correct misinformation surgically. No one can hot-patch an LLM in 5 minutes.
The best LLM-powered solutions are as little LLM and as much conventional search engine / semantic database lookups and handcrafted coaxing as possible. But even then, the conversational interface is nice and lets you do less handcrafting in the NLP department.
Using Perplexity or Claude in "please source your answer" mode is much more like a conventional search engine than looking up data embedded in 5 trillion (or whatever) parameters.
A big reason for this is that Wikipedia's source is often a book or a journal article that is either offline or behind an academic paywall. Checking the source is effectively impossible without visiting a college campus's library. The likelihood that the cited information is wrongly summarizing the contents is low enough and the cost is high enough that doing so regularly would be irrational.
A bigger problem in this respect with Wikipedia is that it often cites secondary sources hidden behind an academic fire/paywall. It very often cites review articles, and some of these aren't necessarily entirely accurate.
It wasn't just Wikipedia, which was a relatively recent addition to the web, everything online was a 'load of rubbish'.
In turn-of-the-century boomer world, reality was what you saw on TV. If you saw something with your own eyes that contradicted the world view presented by the media, then one's eyes were to be disbelieved. The only reputable sources of news were the mainstream media outlets. The only credible history books would be those with reviews from the mainstream media, with anything else just being the 'ramblings of a nutter'.
In short, we built a beautiful post-truth world and now we are set on outsourcing our critical thinking to LLMs.
> Is anyone really cocksure on the basis of LLM received knowledge?
I work for a company with an open source product, and the number of support requests we get from people who ask the chatbot to do their config and then end up with something nonfunctioning is quite significant. It goes as far as users complaining our API is down because the chatbot hallucinated the endpoint.
LLMs do love to make up endpoints and parameters, but I have found that ones with web access are pretty good at copy/pasting configs if they can find them, so it might be worth a few minutes of exploring what people are actually finding that's causing it to make up an endpoint. I have not (yet!) seen an instance where making something easier for LLMs to parse didn't also help human comprehension.
I work in DevSecOps, and devs sometimes come to us with AI-slop summaries and writeups about our own tooling. Any time I see emojis in a message, I know I'm about to have a laugh.
This captures my experience quite well. I can "get a lot more done," but it's not really me doing the things, and I feel like a bit of a fraud. And as the workday and the workweek roll on, I find myself needing to force myself to look things up and experiment rather than just asking the LLM. It's quite clear that for most people LLMs will make them more dependent. People with better discipline I think will really benefit in big ways, and you'll see this become a new luxury belief; the disciplined geniuses around us will genuinely be perplexed why people are saying that LLMs have made them less capable, much in the same way they wonder why people can't just limit their drug use recreationally.
I've been thinking about that comparison as well. A common fantasy is that civilization will collapse and the guy who knows how to hunt and start a fire will really excel. In practice, this never happens and he's sort of left behind unless he also has other skills relevant to the modern world.
And, for instance, I have barely any knowledge of how my computer works, but it's a tool I use to do my job. (and to have fun at home.)
Why are these different than using LLMs? I think at least for me the distinction is whether or not something enables me to perform a task, or whether it's just doing the task for me. If I had to write my own OS and word processor just to write a letter, it'd never happen. The fact that the computer does this for me facilitates my task. I could write the letter by hand, but doing it in a word processor is way better. Especially if I want to print multiple copies of the letter.
But for LLMs, my task might be something like "setting up apache is easy, but I've never done it so just tell me how do it so I don't fumble through learning and make it take way longer." The task was setting up Apache. The task was assigned to me, but I didn't really do it. There wasn't necessarily some higher level task that I merely needed Apache for. Apache was the whole task! And I didn't do it!
Now, this will not be the case for all LLM-enabled tasks, but I think this distinction speaks to my experience. In the previous word processor example, the LLM would just write my document for me. It doesn't allow me to write my document more efficiently. It's efficient only in the sense that I no longer need to actually do it myself, except maybe to act as an editor (and most people don't even do much of that work). My skill in writing either atrophies or never fully develops since I don't actually need to spend any time doing it or thinking about it.
In a perfect world, I use self-discipline to have the LLM show me how to set up Apache, then take notes, and then research, and then set it up manually in subsequent runs; I'd have benefited from learning the task much more quickly than if I'd done it alone, but also used my self-discipline to make sure I actually really learned something and developed expertise as well. My argument is that most people will not succeed in doing this, and will just let the LLM think for them.
I remember seeing a tweet a while back that talked about how modernity separated work from physicality, and now you have to do exercise on purpose. I think the Internet plus car-driven societies had done something similar to being social, and LLMs are doing something similar to both thinking and the kind of virtue that enables one to master a craft.
So, while it's an imperfect answer that I haven't really nailed down yet, maybe the answer is just to realize this and make sure we're doing hard things on purpose sometimes. This stuff has enabled free time, we just can't use it to doomscroll.
>Internet plus car-driven societies had done something similar to being social,
That's an interesting take on the loneliness crisis that I had not considered. I think you're really onto something. Thanks for sharing. I don't want to dive into this topic too much since it's political and really off-topic for the thread, but thank you for suggesting this.
Radio and especially TV also had large social effects. People used to play cards, instruments, and other social things before TV. Then household TV watching maxed out at 9 hours/day in 2010 (versus 5 hours/day in 1950). (I would like to know the per-person viewing figures, and these are from Nielsen, who would want higher numbers.) [1]
Cars help people be social in my world. I would say that riding on a train in your own bubble with strangers is not a social activity, but others would disagree.
You don't just set up Apache to have Apache running, you set it up to serve web content! It is middleware; it is not in and of itself useful.
Isn't setting up Apache robbing yourself of the opportunity to learn about writing your own HTTP server? In C? And what a bad idea that is?
The LLM helping you configure a web server is no different than the web server helping you serve HTTP instead of implementing a web server from scratch. You've just (seemingly?) arbitrarily decided your preferred abstraction layer is where "real work" happens.
Okay, maybe LLMs might disappear tomorrow and so for some reason the particular skill of configuring Apache will become useful again, maybe! But I'm already using brainpower to memorize phone numbers in case my smartphone contacts disappear, so maybe I won't have room for those Apache configs ;-)
Computers have a bunch of abstractions, but they are leaky abstractions. Apache is not leaking that much, so you don't need to write an HTTP server (until you write an Apache module). Abstracting over Apache is something you can do when all you need is to host static pages on port 80/443; that's called a web host, or github.io.
> the distinction is whether or not something enables me to perform a task, or whether it's just doing the task for me.
I think school has taught us to believe that if we're assigned a task, and we take a shortcut to avoid doing the task ourselves, that's wrong. And yes, when the purpose is to learn the task or the underlying concepts, that's probably true. But in a job environment, the employer presumably only cares that the task got done in the most efficient way possible.
Edit to add: When configuring or using a particular program is tedious and/or difficult enough that you feel the need to turn to an LLM for help, I think it's an indication that a better program is needed. Having an LLM configure or operate a computer program for you is kind of like having a robot operate a computer UI that was designed for humans, as opposed to having a higher-level program just do the higher-level automation directly. In the specific case of the Apache HTTP Server, depending on what you need to do, you may find that Caddy is easy enough that you can configure it yourself without requiring the LLM. For common web server scenarios, a Caddyfile is very short, much shorter than a typical Apache or nginx configuration.
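For instance, here's a sketch of mine (hypothetical; the domain and paths are placeholders) of a complete Caddyfile for a static site plus a proxied API, with Caddy's automatic HTTPS doing the rest:

```
example.com {
    root * /var/www/site
    file_server
    reverse_proxy /api/* localhost:8080
}
```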
When I perform a task myself, it will be reproducible, so it is done once and for all for this employer. That probably won't be the case for the LLM, which will change or might be down next week.
> But for LLMs, my task might be something like "setting up apache is easy, but I've never done it so just tell me how do it so I don't fumble through learning and make it take way longer." The task was setting up Apache. The task was assigned to me, but I didn't really do it. There wasn't necessarily some higher level task that I merely needed Apache for. Apache was the whole task! And I didn't do it!
To play devil's advocate: Setting up Apache was your task. A) Either it was a one-off that you'll never have to do again, in which case it wasn't very important that you learn the process inside and out, or b) it is a task you'll have to do again (and again), and having the LLM walk you through the setup the first time acts as training wheels (unless you just lazily copy & paste and let it become a crutch).
I frequently have the LLM walk me through an unfamiliar task and, depending on several factors such as whether I expect to have to do it again soon, the urgency of the task, and my interest and/or energy at the moment, I will ask the LLM follow-up questions, challenge it on far-fetched claims, investigate alternative techniques, etc. Execute one command at a time, once you've understood what it's meant to do, what the program you're running does, how its parameters change what it does, and so on, and let the LLM help you get the picture.
The alternative is to try to piece together a complete picture of the process from official documentation like tutorials & user manuals, disparate bits of information in search results, possibly wrong and/or incomplete information from Q&A forums, and muddle through lots of trial and error. Time-consuming, labor-intensive, and much less efficient at giving you a broad-strokes idea of how the whole thing works.
I much prefer the back-and-forth with the LLM and think it gives me a better understanding of the big picture than the slow and frustrating muddling approach.
The alternative to LLMs wouldn't necessarily be to start from scratch; you'd likely just start with a documented version from your distro and change the documented settings as suggested, meanwhile using the documentation that is also provided by the distro.
I would say that with a computer you're using a tool to take care of mundane details and speed up the mechanics of tasks in your life. Such as writing a document, or playing a game. I can't think of a way I would be seriously disadvantaged by not having the ability to hand-write an essay or have games I can readily play without a computer. Computers are more like tools in the way a hammer is a tool. I don't mind being totally dependent on a computer for those tasks in the same way I don't mind that I need a hammer anytime I want to drive a nail.
But for many people, LLMs replace critical thinking. They offer the allure of outsourcing planning, research, and generating ideas. These skills seem more fundamental to me, and I would say there's definitely a loss somehow of one's humanity if you let those things atrophy to the point you become utterly dependent on LLMs.
>But for many people, LLMs replace critical thinking...[and] outsourc[e] planning, research, and generating ideas
Sure, but I guess you could say that any tech advancement outsources these things, right? I don't have to think about what gear to pick when I drive a car to maximize its performance, I don't have to think about "i before e" types of rules when spell check will catch it, I don't have to think about how to maintain a draft horse or think as much about types of dirt or terrain difficulties when I have a tractor.
Or, to add another analogy: compare a digital photo to film photography that you'd develop yourself, or to portrait painting before that; so much planning and critical thought has been lost.
And then there's another angle: does a project lead not outsource much of this to other people? This invites a "something human is being lost" critique in a social/developmental context, but people don't really lament that the CEO has somehow lost his humanity because he's outsourcing so much of the process to others.
I'm not trying to be clever or do gotchas or anything. I'm genuinely wrestling with this stuff. Because you might be right: dependence on LLMs might be bad. (Though I'd suggest that this critique is blunted if we're able to eventually move to hosting and running this stuff locally.) But I'm already dependent on a ton of tech in ways I probably can't even fully grasp.
I don't have any great answer. But when I think about this for myself, I realize there are different kinds of abstraction that qualitatively change the nature of the work.
I don't want my software developer's experience to turn into a real estate developer's experience. I don't want to go from being a technical knowledge worker to a financier or contract negotiator. I've realized I was never in it for the outcome. I was in it for the exploration and puzzles.
Similarly, I don't want to become a "Hollywood producer" cliche. This caricature was a common joke earlier in my tech career in Southern California. We detested the idea of becoming a "tech" person acting like a Steve Martin parody of a Hollywood wheeler-dealer: someone sitting in a cafe, pitching ideas that were nothing more than a reference to an existing work with an added gimmick or casting change.
To me, that caricature combines two negative aspects. One is the heavily derivative and cynical nature. The other is the stratospheric abstraction level, where folks at this level see themselves as visionaries rather than just patrons of someone else doing all the creative work.
I don't want to be a patron of an LLM or other black box.
It's appropriate to think this way with LLM output because LLMs are still terrible some significant portion of the time. If you don't actually know what you're doing, you have no way to distinguish between their output being correct or their output being able to pass the tests you can think of.
As a software developer, your job is to understand code and business constraints so you can solve problems the way most appropriate for the situation. If you aren't actually keeping up with those constraints as they change through time, you're not doing your job. And yeah, that's a kind of fraud. Maybe it's more on yourself than your employer most of the time, but... It's your job. If you don't want to do it, maybe it's more respectful of your own time, energy, and humanity to move on.
I mostly agree with this. LLMs are just another tool, and we've learned how to use and adapted to using many other tools throughout our history just fine.
With the caveat that our field in particular is one of the few that require continuous learning and adaptation, so tech workers are in a way better predisposed to this line of thinking and to tool adoption without some of the potential harmful side effects.
To pick on spell check: it has been shown that we can develop a dependency on it and thereby lose our own ability to spell and reason about language. But is that a bad thing? I don't know.
What I do know is humans have been outsourcing our thinking for a long time. LLMs are another evolution in that process, just another way to push off cognitive load onto a tool like we've done with stone tablets, books, paper notes, digital notes, google, etc.
I agree but I've personally seen some egregious examples of people who are not only extremely confident in their new "knowledge" and "ability" but simultaneously think everyone else is extremely stupid. It's been absolutely wild to watch people paste chatgpt output and claim they wrote it, over and over again, even though every time I actually read it and ask a few "what does this mean" questions they have no idea and simply ask chatgpt then confidently say the response. It's so bad it's like a pathology; I wouldn't believe it if I hadn't seen it with my own eyes.
Something is happening here. Hopefully it's just revealing something that was already there in society and it isn't something new.
If you feel dumber, it’s because you’re using the LLM to do raw work instead of using it for research. It should be a google/stackoverflow replacement, not a really powerful intellisense. You should feel no dumber than using google to investigate questions.
I find that it is terrible for research, and hallucinates 25% to 90% of its references.
If you tell it to find something and give it a detailed description of what you're looking for, it will pretend like it has verified that that thing exists, and give you a bulletpoint lecture about why it is such an effective and interesting thing that 1) you didn't ask for, and 2) is really it parroting your description back to you with embellishments.
I thought I was going to be able to use LLMs primarily for research, because I have read an enormous number of things (books, papers) in my life, and I can't necessarily find them again when they would be useful. Trying to track them down through LLMs is rarely successful and always agonizing, like pulling teeth that are constantly lying to you. A surprising outcome is that I often get so frustrated by the LLM and so detailed in how I'm complaining about its stupid responses that I remind myself of something that allows me to find the reference on my own.
I have to suspect that people who find it useful for research are researching things that are easily discoverable through many other means. Those are not the things that are interesting. I totally find it useful to find something in software docs that I'm too lazy to look up myself, but it's literally saving me 10 minutes.
I don't think this is entirely accurate. If you look at this: https://www.media.mit.edu/publications/your-brain-on-chatgpt..., it shows that search engines do engage your brain _more_ than LLM usage. So you'll remember more through search engine use (and crawling the web 'manually') than by just prompting a chatbot.
> Is anyone really cocksure on the basis of LLM received knowledge?
Some people certainly seem to be. You see this a lot on webforums; someone spews a lot of confident superficially plausible-looking nonsense, then when someone points out that it is nonsense, they say they got it from a magic robot.
I think this is particularly common for non-tech people, who are more likely to believe that the magic robots are actually intelligent.
Most of the time it feels like a crutch to me. There have been a few moments where it unlocked deep motivation (by giving me a feel for the size of a solution based on ChatGPT output), and one time a research project where, for any crazy idea I threw at it, it would imagine what that idea would entail in terms of semantics, and then I was inspired even more.
The jury is still out on what value these things will bring.
Here are just a few examples of how I have used them.
My fountain pen stopped working, so I tried the common solutions recommended, but they did not solve the problem. Claude told me to try using a mixture of window cleaner and water. It worked! (The solution must have been in the corpus used to train Claude.)
I switched from W2 to consulting, but I didn't know anything about taxes. ChatGPT gave me the right recommendation, saving me hours of research.
I wanted to evaluate the quality of some shirts I have, so I used ChatGPT to estimate stitches per inch and the quality of the buttonholes and other details.
I planned my diet using ChatGPT. I had it calculate calories, macros, and deficit. Could I have done it without an LLM? Of course, but it made the planning much faster.
It's not that it didn't occur to me. Sure I understand I'm missing the immediacy and the visceral effect here, and I presume the parallel impact on other senses. But then again if I was the sort of person that mattered to, my outlook would probably be different. I'm fine with others having different preferences.
I would say to me these videos work wonders in confirming a little bit that I'm not really missing out. There's a lot of FOMO and myth-making around drugs, I think experience reports and replications are a pretty good way to make everyone's decisions more informed whether it's "for them".
This could totally be some form of confirmation bias at work, but it works for me ...
The visuals are like a fraction of the experience. Personally, I get very little in terms of visuals. It’s insight, wisdom, love, and the releasing of emotional holding patterns that is the most prominent thing for me. You can read about ego death all you want, but until you actually experience that sort of thing it’s just nice words on a page. It’s why Buddha would say don’t take my word for it, do the practice and have the experience yourself.
My first LSD trip is probably the most important experience of my life, and sure I saw some fractals in the clouds, but that’s close to zero percent of what was important during it.
This exchange reminds me a bit of the experience of becoming a parent. The permanent reconfiguration of priorities from the intense oxytocin high is also quite impossible to explain to non-parents.
It is interesting to me as my first acid trip was 30 years ago but I have never gained anything profound from the experience.
My best trips were at psytrance parties as peak experiences in terms of fun.
I have tripped many times alone in a dark room and basically gained nothing from the experience besides falling into an existential void.
Personally, from so much experience, reading thousands of trip reports, and most of the psychedelic literature up to about 2005, I think the psychedelic experience is like a blank white canvas. Some people end up with a Monet painting experience and some people end up with a Dali painting experience. Some run into a Hieronymus Bosch the first time and never try it again. You can't really make overall statements about what the blank canvas is going to be before someone starts to paint.
For me, my best psychedelic experiences were better versions of my most fun nights drunk. Anything I have learned that is all that deep though I have learned from reading books.
Never having a psychedelic experience I think is like never being drunk. It is really missing out on an interesting life experience but at the same time it is not this profound loss.
Working out all these life problems like some kind of psychotherapy session is for sure something that never happened to me. That just led me to the existential void when attempted.
Yeah, you're right in that it is highly dependent on the person and the set and setting. For me, I went into that first experience seeking a catalyst for insight into the things that were holding me back in my life, and got it. Intention setting is super important, which is why in formal meditation practice and in yoga they teach you to set a samkalpa for your practice session [1].
I've certainly taken LSD and gone to a rave with 6k people before, but I usually end up wanting to go home to meditate after a while. Insight into that existential void (sunyata) is exactly what I'm seeking out. But there's of course nothing wrong with wanting to stay at the party and dance all night! They're both manifestations of the same thing if you can see it.
The type of person who is arrogant enough to read some trip reports on the internet and look at a couple gifs and thinks "yeah I totally get this" is exactly the type of person a trip will benefit the most.
The problem with the whole "tripping has made me wiser and more kind and loving" type stuff is that it's self-serving and doesn't really stand up to Occam's Razor. It's a bit like that xkcd post on homeopathy: If it actually worked at scale, health insurers would be doing it.
Experience has taught me to be wary of identity-conferring stuff that's easy and not hard to do. Taking drugs is not difficult.
> MDMA has limited approved medical uses in a small number of countries,[32] but is illegal in most jurisdictions.[33] MDMA-assisted psychotherapy is a promising and generally safe treatment for post-traumatic stress disorder when administered in controlled therapeutic settings.[34][35] In the United States, the Food and Drug Administration (FDA) has given MDMA breakthrough therapy status (though there are no current clinical indications in the US).[36] Canada has allowed limited distribution of MDMA upon application to and approval by Health Canada.[37] In Australia, it may be prescribed in the treatment of PTSD by specifically authorised psychiatrists.[38]
If you don't think an acid trip can be a difficult experience, you really don't know what you're talking about. I guess you would think therapy isn't difficult either, because you're just sitting on a couch.
That could be down to the fact that homeopathic treatments have largely been legal. The studies have been done and showed that a lot of it doesn't work so it isn't offered in traditional medicine. There were a lot of promising studies into the effects of LSD and Psilocybin before they were made illegal. Now with the loosening of restrictions we are able to get more research into the potential uses of psychedelics and there have been a lot of positive results. The research into MDMA for PTSD is really exciting, as well as Ketamine, LSD, and Psilocybin for different forms of depression for example.
They will never be a solution for every problem like some people evangelize but where they work, they give people with these conditions another avenue to try when other "legal" drugs have failed.
> more research into the potential uses of psychedelics and there have been a lot of positive results
You'd have to agree, the types of people who choose to research psychedelics professionally are the types of people who want to see, and demonstrate, positive results. These aren't unbiased research outcomes.
I don't use drugs, but the LSD situation is crazy: it is well down in any ranking of harm (both to the user and to others). The alleged harms have been proven fabricated (people going blind from staring at the sun) or incredibly overstated (suicides while tripping). It is way less dangerous than tobacco or alcohol, and has next to zero addiction potential. Its users praise the experience, and some studies show potential medical use. Yet it is furiously prohibited, demonized, and prosecuted.
We were having a debate among friends when a couple of people said they took MDMA once, and some of the most obvious alcoholics (drunk twice a week) went for their jugular, calling them junkies and "irresponsible" because drugs fry your brain.
MDMA fries your teeth. Gave in to 1/5th of a dose with people I was partying with, received one year of tooth-grinding. Never again.
Classic dental study: 89% of ecstasy users reported clenching or grinding; 60% had tooth wear into dentine vs 11% of non-users. https://pubmed.ncbi.nlm.nih.gov/10403088/
ChatGPT: “One pill, one year of grinding” – biologically plausible as a trigger, but not a universal rule.
"I think that MDMA, unlike other drugs, is potentially much more neurotoxic and dangerous than any drug that has comparable effects, like hallucinogens for example, for which we haven't shown long-term alterations. MDMA is therefore a special case. It's difficult to give recommendations for use. It's better not to take it regularly, and if someone asks me, in my opinion, I would say it's better to keep your distance from this drug so as not to run any risks."
Are you telling me you've never met an old LSD abuser whose brain was fried like an egg? LSD can also trigger legitimate lifelong psychotic states in some people.
There is a big difference between 'generally not harmful in very small singular doses' and 'all harm is fabricated'.
I've seen people with fried brains from copious amounts of drug abuse, taking everything under the sun, who will also sometimes take LSD. I've never seen someone who only takes psychedelics like LSD and mushrooms, even heroic amounts, have any cognitive problems from it.
> Are you telling me you've never met an old LSD abuser whose brain was fried like an egg?
I’ve known old LSD abusers with fried brains but never seen an LSD abuser go from non-fried to fried brains. Correlation is not causation, but it could be.
> LSD can also trigger legitimate lifelong psychotic states in some people.
These statements should be accompanied by the necessary caveat that just about anything can trigger psychotic states in people prone to psychosis.
Nah, you may be missing out or you may not be, but there is no way to explain psychedelic feelings. Just not possible. Sometimes the world changes to being full of wonder or curiosity, and you think why, or why can't I live like that always. The extent of this is not possible to experience without psychedelics or other strong mind-altering practices.
The visuals are like 10% of the experience. The last thing you could describe psychedelics as is underwhelming. It is not possible for you to understand what the experience is like without trying it for yourself.
And I am not advocating for trying them. I'm not one of these evangelists. But replication images are a very weak simulacrum of what the experience is actually like.
I was using Stable Diffusion 1.5 when it came out and had my first LSD trip shortly after that, probably a few months later. Anyway, what struck me was how similar the closed-eye visuals on LSD were to the images generated by Stable Diffusion when I was using it on low CFG, and also to some of my poorly trained textual inversions at the time. Watching the "training process" of a textual inversion in the early epochs made a lot of such images before the TI finally completed. Makes me wonder if the processes are somehow related, like if, in human brains, the reason we don't normally experience these "hallucinations" is that we have many robust subsystems that filter out the noise and make the mental model cohere on a stable world view.
Our culture is very image-centric. You have to understand that the drug induced image distortions are just a very specific side-effect that is part of a larger whole.
Hallucinogens act on deeper mechanisms that control from visual perception all the way to the sense of self. It can fundamentally change during the experience the way you see yourself and the world. It's not uncommon for users of LSD or DMT and psilocybin to describe the experience as getting in touch with the interconnectedness of all things. Also bad trips can be very terrifying because of how much you are exposed to the experience. Like dying or feeling the fleeting nature of existence very present in your skin.
All this to say that videos don't do any of this justice. It's just a fun way to represent the image distortions.
I get that, and I guess I try to extrapolate from the image-based examples to other senses and cognition in general. The image replications give me the idea that there is some generative extrapolation going on, based on actual sensory input as a seed, like the brain circuitry that goes and re-imagines the input consciously going haywire and growing and extrapolating into overdriven, bizarre directions.
I recently read "A Brief History of Intelligence" by Bennett, which spends quite a bit of time dwelling on "generative" simulation mechanisms in brain function and their role in cognition, from prediction to mentalizing, and I think I can get a rough sense of how this would all click together.
It makes sense why creative/artistic people may be drawn to this and could consider it a heightened form or a letting loose of their normal processes, etc.
But to me it's still not that attractive. I can never shake the idea that it's a bit like driving a system past specifications and assigning meaning to malfunctions, and essentially lying to yourself. I get it's not black and white, and obviously philosophy is rife with arguments and takes on what is true experience and cognition, but given the risks and downsides I'd rather not.
I'm very fine with other people occupying different points on the spectrum.
>generative extrapolation going on, based on actual sensory input as a seed
>brain circuitry that goes and re-imagines the input consciously going haywire and growing and extrapolating into overdriven, bizarre directions.
>assigning meaning to malfunctions, and essentially lying to yourself
The problem is that your description fully applies to "normal", non-chemically-altered cognition. Miscognitions propagate. The spec only goes as far as anatomically modern, i.e. cavefolk, where the error correction mechanism there is "get eaten by wild animals, having failed to reproduce".
We don't have sabertooth tigers any more, we have a planetary-scale material culture developed over millennia. It provides for our safety; it records and propagates imprints of what we think, say, and do; it makes meaningful actions out of human utterances and movements, by providing them with interpretations (shared collective cognitions).
It's a safe and rich environment, one where people get to live safe lives in the grasp of utter, insane delusion; we just can't agree on which ones exactly are the deluded ones. We consider that one is responsible primarily for one's own actions, so let's start with the self, shall we?
What is one to do, if one wants to say the words "I am not lying to myself" in the sense of an actual falsifiable statement, and not just as a form of "I'm significant... said the dust speck"?
I mean, how do you even know? Couldn't you just lie to yourself about that one, too, and carry on none the wiser?
You know how you can look at your eye with your eye, by means of routing photons through space in a clever way, with some help from that best friend of the psychonaut - the bathroom mirror?
Turns out you can also look at your mind with your mind, by routing concept-patterns though time in a clever way, by means of chemicals which alter the activation thresholds and signal propagation times throughout your body.
And what this gives you is a basis for comparison. Otherwise, you simply don't know. You're taking your introspection on faith, and that's massively irresponsible towards everyone else. Ask me how I know.
It’s kinda like if the feeling you see in the videos were replicated across all of your senses. Your senses kinda blur together in an indescribable way, and it’s extremely intense, kinda all consuming.
They're not the same thing, although visual migraines can appear at the start of a "real" migraine. It's a distortion of vision with glitchy geometric patterns that pulsate and move a bit. Here are some attempts to recreate them: https://duckduckgo.com/?t=ffab&q=visual+migraine+&ia=images&...
I would highlight `std::launder` as an example. It was added in C++17. Famously, most people have no idea what it is used for or why it exists. For low-level systems it was a godsend because there wasn’t an official way to express the intent, though compilers left backdoors open because some things require it.
It generates no code, it is a compiler barrier related to constant folding and lifetime analysis that is particularly useful when operating on objects in DMA memory. As far as a compiler is concerned DMA doesn’t exist, it is a Deus Ex Machina. This is an annotation to the compiler that everything it thinks it understands about the contents and lifetime of a bit of memory is now voided and it has to start over. This case is endemic in high-end database engines.
It should be noted that `std::launder` only works for different instances of the same type. If you want to dynamically re-type memory there is a different set of APIs for informing the compiler that DMA dropped a completely different type in the same memory address.
All of this is compiled down to nothing. It annotates for the compiler things it can’t understand just by inspecting the code.
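For anyone who hasn't run into it, the textbook motivating case (a sketch of mine, not code from this thread) is reusing storage that contains a const member:

```cpp
#include <new>  // placement new and std::launder live here

struct X { const int n; };

int observe() {
    X x{1};
    new (&x) X{2};  // reuse x's storage for a brand-new object
    // Under C++17 rules, the const member means the new object does not
    // "transparently replace" the old one, so a plain (&x)->n could
    // legally be constant-folded to 1 by the optimizer.
    return std::launder(&x)->n;  // guaranteed to read 2
}
```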
I don't think that's quite right. For DMA you would normally use an empty asm block, which is what's typically referred to as a "compiler barrier" and does tell the compiler to discard everything it knows about the contents of some memory. But std::launder doesn't have the same effect. It only affects type-based optimizations, mainly aliasing, plus the assumption that an object's const fields and vtable can't change.
GCC generates a store followed by a load from the same location, because of the asm block (compiler barrier) in between. But if you change `if (1)` to `if (0)`, making it use `std::launder` instead of an asm block, GCC doesn't generate a load. GCC still assumes that the value read back from the pointer must be 42, despite the use of `std::launder`.
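The snippet being discussed isn't quoted here, but the experiment described sounds roughly like this (my reconstruction, not the original code):

```cpp
#include <new>  // std::launder

int probe(int* p) {
    *p = 42;
    if (1) {  // change to if (0) to take the std::launder path instead
        // Opaque asm that clobbers memory: a full compiler barrier,
        // so GCC must emit the store and then a real load of *p.
        asm volatile("" ::: "memory");
    } else {
        // Type-based barrier only: GCC still assumes *p == 42
        // and folds the return value to a constant.
        p = std::launder(p);
    }
    return *p;
}
```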
This doesn't seem quite right. The asm block case is equivalent to adding a volatile qualifier to the pointer. If you add this qualifier then `std::launder` produces the same codegen.
I think the subtle semantic distinction is that `volatile` is a current property of the type whereas `std::launder` only indicates that it was a former property not visible in the current scope. Within the scope of that trivial function in which the pointer is not volatile, the behavior of `std::launder` is what I'd expect. The practical effect is to limit value propagation of types marked `const` in that memory. Or at least this is my understanding.
DMA memory (and a type residing therein) is often only operationally volatile within narrow, controlled windows of time. The rest of the time you really don't want that volatile qualifier to follow those types around the code.
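To illustrate the equivalence claimed above (my example, not the poster's code): making the access volatile forces the reload that std::launder alone doesn't:

```cpp
#include <new>  // std::launder

int probe_volatile(int* p) {
    *p = 42;
    volatile int* vp = std::launder(p);  // volatile access: the compiler
    return *vp;                          // must emit a real load, not fold 42
}
```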
One thing that I've found really useful is being able to annotate a pointer's alignment. I'm working on an interpreter, and I'm using tagged pointers (6 bits), so the data structure needs to have 128-byte alignment. I can define a function like `fn toInt(ptr: *align(128) LongString) u56` and the compiler will track and enforce the alignment.
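For comparison, here's roughly the same trick sketched in C++ (hypothetical `LongString`; the key difference is that the alignment contract lives in runtime asserts rather than in the pointer type, so the compiler doesn't track or enforce it the way Zig's `*align(128)` does):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical payload type. alignas(128) guarantees the low 7 bits of
// any LongString* are zero, leaving room for a 6-bit tag.
struct alignas(128) LongString {
    char bytes[128];
};

constexpr std::uintptr_t kTagMask = 0x3F;  // low 6 bits hold the tag

std::uintptr_t toTagged(const LongString* p, std::uintptr_t tag) {
    auto bits = reinterpret_cast<std::uintptr_t>(p);
    assert((bits & kTagMask) == 0);  // what *align(128) proves statically
    assert(tag <= kTagMask);
    return bits | tag;
}

const LongString* fromTagged(std::uintptr_t tagged) {
    return reinterpret_cast<const LongString*>(tagged & ~kTagMask);
}
```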
You might also find some of the builtin functions interesting as well[1], they have a lot of really useful functions that in other languages are only accessible via the blessed stdlib, such as @addrSpaceCast, @atomicLoad, @branchHint, @fieldParentPtr, @frameAddress, @prefetch, @returnAddress(), and more.
Pretty nice to see KDE's Konsole rank really high, especially if you take performance/test elapsed time into account. Considering it's a decades-older workhorse compared to the new star ghostty, not too shabby that it's kept up.
Agreed. It's nice to be able to just use the provided terminal when running KDE. It's very customisable and runs plenty fast. I also love being able to right click in Dolphin and tell it to open Konsole in the current folder. Also, I leave infinite scrollback turned on in Konsole and it works really well, swapping out to a file as the scrollback gets too large. Nothing worse than getting errors that I can't read because the terminal discarded them. I have Ctrl+Shift+X bound to clear everything, which I use before running just about any operation.