Excuse me for veering off topic but I saw this, was intrigued (I used to live in Lyon), browsed some of your previous comments here, saw the book Carmilla, which I'd never heard of, and am very happy. I will be reading J. S. Le Fanu, and am stunned to learn of him. Cheers!
Maybe this was tongue-in-cheek in a way that eludes me, but in case any innocent and curious bystanders are as confused as me by your comment, I'm not sure "Worse Is Better" refers to what you think it does. It isn't about "features and functionality", it's about how ease of implementation beats everything else. I can't see how that applies here, or what your comment means in that light.
I was more a happy Guix user and computing beginner than an expert; I didn't do anything overly fancy. I was starting to dabble with more advanced topics, but unfortunately had to leave that system for unrelated reasons. Looking forward to a glorious return though, I must say, as soon as I can manage it.
Don't hesitate to try the official documentation with your problems; it's excellent. If there's no answer there, post a full description of your issues to the mailing list; they're a great bunch too.
I'm reminded of definitely the most extreme writing on programming I've ever read, here https://llthw.common-lisp.dev/introduction.html, including but in no way limited to claims such as:
> The mind is capable of unconsciously understanding the structure of the computer through the Lisp language, and as such, is able to interface with the computer as if it was an extension to its own nervous system. This is Lisp Consciousness, where programmer and computer are one and the same; they drink of each other, and drink deep; and at least as long as the Lisp Hacker is there in the flow, riding the current of pure creativity and genius with their trusty companions Emacs and SLIME, neither programmer nor computer know where one ends and the other begins. In a manner of speaking, Lispers already know machine intelligence---and it is beautiful.
Has any other language produced such thoughts in the minds of human beings? Maybe yes, but I don't know of one. Maybe Forth, or Haskell, or Prolog, but I haven't found similar writing. Please do share.
I agree, and it gets even better: while low-level ML support in Common Lisp does not match Python's libraries, that often no longer matters, because LLMs are not embedded in applications; they are accessed via an HTTP request.
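To make the point concrete: from any language, an LLM call reduces to an HTTP POST with a JSON body. Here's a minimal sketch in Python's standard library; the endpoint and model name are placeholders for illustration, not a real service.

```python
import json
import urllib.request

def build_llm_request(prompt,
                      endpoint="https://llm.example.com/v1/chat",
                      model="example-model"):
    """Build (but don't send) a request for a hypothetical chat-style LLM API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_llm_request("Summarize this thread.")
# Sending it is one call: urllib.request.urlopen(req)
```

Nothing here is Python-specific: the same few lines exist in Common Lisp (e.g. with an HTTP client like dexador), which is exactly why the gap in native ML libraries matters less than it used to.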
It is not "an agent" in the sense you are implying here: it does not will, want, or plan; none of those words apply meaningfully. It doesn't reason or think, either.
I'll be excited if that changes, but there is absolutely no sign of it changing. To be explicit: the possibility of thinking machines is where it was before this whole thing started - maybe slightly higher, but more so because a lot of money is being pumped into research.
LLMs might still replace some software workers, or lead to some reorganising of tech roles, but for a whole host of reasons, none of which are related to machine sentience.
As one example - software quality matters less and less as users get locked in. If some juniors get replaced by LLMs and code quality plummets, causing major headaches and higher workloads for senior devs, then as long as sales don't dip, managers will be skipping around happily.
I didn't mean to imply AI was sentient or approaching sentience. Agency seems to be the key distinction between it and other technologies. You can have agency, apparently, without the traits you claim I imply.
Ah, ok, you must be using agency in some new way I'm not aware of.
Can you clarify what exactly you mean then when you say that "AI" (presumably you mean LLMs) has agency, and that this sets it apart from all other technologies? If this agency as you define it makes it different from all other technologies, presumably it must mean something pretty serious.
This is not my idea. Yuval Noah Harari discusses it in Nexus. Gemini (partially) summarizes it like this:
> Harari argues that AI is fundamentally different from previous technologies. It's not just a tool that follows instructions, but an "agent" capable of learning, making decisions, and even generating new ideas independently.
> If this agency as you define it makes it different from all other technologies, presumably it must mean something pretty serious.
Yes, AI does seem different and pretty serious. Please keep in mind the thread I was responding to said we should think of AI as we would a hammer. We can think of AI like a tool, but limiting our conception like that basically omits what is interesting and concerning (even in the context of the original blog post).
That's the thing, you don't have to say it, sort of by definition. It's always implied, as is its opposite. Imagine a video or article or whatever, titled:
"How to get a Mediocre Job and live an Unremarkable Life"
Where the content of the article or video was, simply:
"Be born a human. You'll have a 99.99% chance of succeeding!"
It'd be pretty grim, and it'd last about 15 seconds to read or view, so no one makes that video. Instead, they make videos about becoming pilots at 21, or owning your own house at 23, etc.
It's hard to accept these percentages as real, because they're about the opposite of what is presented on the content-farm platforms. That's partly, as I said, because the story is too grim, but also because the "unremarkable" people are not the ones producing "content".
I like Graham's writing, and defend it elsewhere in this thread, but that has such an obsequious and somehow macho smack to it, wow. One imagines Hercules chiseling his abs. If that's what his writing does for you, fair enough, but it sure is intense.
The commenter did not say Paul Graham writes quickly, so I'm not sure why you keep fixating on that point.
> I pity you and the likes of you who are coming here to shit all over as if there aren't any better things to do during the day.
Good lord. They said they like his writing, but found the particular tweet you shared pretentious. Your response to that light criticism is so disproportionate it reads as sycophantic. This is a thread about good writing; I think criticizing anything is fair game.
Successful people outgrowing their jodhpurs and losing their reason is a thing, sure, but that does not apply in this specific case. Tech writing is still writing, my friend.
Have you read ANSI Common Lisp? Or even the introduction to it?
I have criticisms of Mr. Graham, but the man can write, and consistently. Some of the essays can be a tad too terse for me at times, but when he gets it right, his stuff can be exquisite.
Another example that comes immediately crashing to mind is Donald Knuth - have you read any of his tech writing? It's glorious.
Anyone who wants to claim there's a hard line between writing worthy of "literary merit" and tech writing is going to have their work cut out for them with those two already.
You mean, the CEO is only pretending to make the decisions, while secretly passing every decision through their LLM?
If so, the danger there would be... Companies plodding along similarly? Everyone knows CEOs are the least capable people in business, which is why they have the most underlings to do the actual work. Having an LLM there to decide for the CEO might mean the CEO causes less damage by ensuring consistent mediocrity at all times, in a smooth fashion, rather than mostly mediocre but with unpredictable fluctuations either way.
All hail our LLM CEOs, ensuring mediocrity.
Or you might mean that an LLM could have illicitly gained control of a corporation, pulling the strings without anyone's knowledge, acting of its own accord. If you find the idea of inscrutable yes-men with an endless capacity to spout drivel running the world unpalatable, I've good news and bad news for you.