Hacker News | AlexanderDhoore's comments

Can someone with more knowledge give me a software overview of what AMD is offering?

Which SDKs do they offer that can do neural network inference and/or training? I'm just asking because I looked into this a while ago and felt a bit overwhelmed by the number of options. It feels like AMD is trying many things at the same time, and I’m not sure where they’re going with all of it.


I love teaching Scratch to kids. Some years ago, I used to do "CoderDojo", which is like a hobby club where kids can learn programming. Some kids go to soccer, others to art academy — and these kids learn programming. Super cool to teach.

However, most kids get stuck after they master Scratch. Especially kids around the age of 8–10. They learn Scratch. It's awesome. They make some advanced games and really get the hang of it.

Then they ask to do something more — some “real programming.” And that's where the hurdles start to pop up. First problem: my kids don't speak English, so most documentation and tutorials are out of reach. Second problem: suddenly they need to learn everything about computers — source files, graphics, networking... This is too big a hurdle for them to clear. Third problem: text-based programming. Most of them literally can't type on a keyboard properly. Text is also much less fun than visual programming.

What I've always wondered — and this project reminds me of it — is: can we make the transition smoother? Stay within the Scratch ecosystem, which they know, but start introducing extra concepts step by step, without the big jump.

GoboScript introduces "text-based programming" as a first step, while staying within the Scratch world. I would have liked it more if we could teach the kids a real-world programming language, like Python or JavaScript — because then they’re moving toward "real programming" step by step.

The next step would be: introduce other computer concepts like file systems or networking.

I would love to build this myself. Alas, no time. Maybe one day.


As a kid who didn't speak English, I used the AutoIT language - not related to AutoHotKey.

It's got fully localized offline documentation embedded in a plug-and-play IDE designed to always compile & run your code with a single F5 press, no configuration needed.

The language itself is fully fledged but mostly revolves around things that kids already know.

The tutorial makes you leave the CLI stage by chapter 5, because when kids want to make software they want to make UIs, they've never used CLIs to do fun things before.

It's also centered around automating desktop tasks. Moving the mouse, typing keystrokes, downloading or opening web pages, parsing the source, identifying windows on screen, moving them around, reading pixels, playing sounds...

https://www.autoitscript.com/site/autoit/


I loved AutoIT as a kid! And it was also fun playing around with stuff like https://www.perfectautomation.com/ (nice to see that it’s now freeware, and the newer replacement project is completely free / open source!).

But it was about the same time as I started digging around in Delphi 7, then discovered a RAD package pretty much exactly like Delphi, but with PHP instead (wild times), and as I was going down the webmaster route in parallel it was the thing I spent most of my days in. (That, and making bootleg Windows XP builds just for fun, of course.)


Fortunately, for Delphi we have an open-source successor:

https://www.lazarus-ide.org/

https://news.ycombinator.com/item?id=43913414


Lazarus is great and seems more stable than when I last tried it (~10 years ago, wow!). I don’t think I can come back to Pascal now, and I don’t really like this drag’n’drop GUI building method anymore, but it’s a great IDE otherwise!


I've done some research on this. I've asked classrooms of kids the words they would use to describe programming. When they are young in elementary school, they are taught Scratch, and describe programming as fun, exciting, challenging, favorite activity, something they look forward to.

Then I surveyed older kids, when they get to middle school and they transition immediately from Scratch to Python and Java using VS Code. The words students use to describe programming take a dark turn: hard, frustrating, scary, not for me, are the top sentiments. Programming starts up there next to recess in terms of K-6 approval rating, but plummets to math class status in just a few years.

I attribute the change to language and tool design. This change in sentiment happens exactly when they are taken from tools designed for kids and forced to use tools designed for professional programmers. There's a chasm between Scratch and "real" programming languages. As lauded as Python is for being a good beginner or learning language, it does not fill that gap. In fact, it's part of the problem -- people believing it's a good enough first language prevents other perhaps better suited languages from being adopted. It may be a good language for dev-brained individuals, but other people can get discouraged very easily by it and the tooling ecosystem. I teach graduate students who find it perplexing, so middle school students don't stand a chance.


I wish I could upvote this twice, and I will go further.

Computing as a whole has a human factors problem. The people most able to fix this problem - programmers - are also the people who least recognize that there is a problem to fix. Indeed programmers are invested, both literally and subconsciously, in the current order. Programmers implicitly place themselves at the top of a computing class hierarchy that strictly categorizes "developers" and "users" by whether they pay or are paid, and developments that facilitate "normies" programming to a useful level are staunchly opposed. The current front line of this debate is LLM programming, which is a goddamned revolution as far as non-programmers are concerned, and programmers hate that non-programmers might be able to write programs, and tell scary stories about security vulnerabilities. But it's not new - before they were sneering at LLMs, they were sneering at LabVIEW, and telling scary stories about maintainability. Or Excel.

The smartphone revolution was really a user interface revolution, but in making computers accessible it also infantilized the users. This philosophy is now baked in to such an extent that not only are you not allowed root on most computers, but writing an "app" has an exponentially higher barrier to entry than writing a shell script, or even a basic Hello World in C. Programming is becoming harder, not easier. Compare to the 80s, when a child or teen's "game console" might have been a ZX Spectrum or Commodore 64, which would boot right to BASIC - a language which, incidentally, remains a much better introduction to programming than Python.


This reminds me of a tweet from a while back: https://x.com/jonathoda/status/960952613507231744

  My thesis: The theory and practice of programming is permeated with the sensibilities of high-functioning autistics like myself. De-nerding programming will unlock great benefits for all of humanity. We too will benefit, for despite our hubris we are also way over our heads.
There's a great deal of truth to that. Programming languages are made by nerds, for nerds, and that's a problem in a world that is becoming more automated every day.

> Compare to the 80s, when a child or teen's "game console" might have been a ZX Spectrum or Commodore 64, which would boot right to BASIC

Fully agreed here! I had a very unsettling conversation with a student interested in making video games. He was 18 and in a CS degree program, and he told me he had been into video games his whole life, but he never knew you could use the computer he had at home growing up to make them.

This floored me because like you, I had experience booting up an old PC and having access to a programming language right there. Mine was QBasic on DOS, and I used it to make a rudimentary Zork type text adventure game. I was 6 or 7, and tons of people my age had that same experience getting into programming.

I would have thought in the 30 years since that time, with the proliferation of computing devices and especially game creation software, that it would be more accessible today to get into gaming. And in some ways it is, but in many ways it's also been heavily monetized and walled off, to the point that everyday people are discouraged from creating their own games. It's really quite sad, because we've actually learned a lot over the years about how to make computing more accessible to people, but we haven't invested enough in putting that knowledge into real products that people use.


Maybe something like Hedy will work better for you, instead of Scratch.

https://hedy.org/

https://youtube.com/watch?v=ztdxlkmxpIQ


Yeah, I first thought of Hedy too. It doesn't keep them within the ecosystem but it does address the gradual intro problem and the localization problem.


This is how I learned a long time ago with Game Maker. Everything was GUI based, but you could also add code blocks to do more powerful things.

Eventually, most things I built were nothing but code blocks.


I had pretty much the exact same experience with Game Maker too. In retrospect, it feels like a very powerful pedagogical tool. Even when I wasn't really trying to "learn coding" but rather just wanted to make some games, I ended up learning to code.

The fact that _most_ things could be done with drag-and-drop, but for some features you had to drop down to scripting, served as a really nice and gentle stepping stone to writing code.


Anecdote follows. The below matters little.

I did the same gradual move, and I can remember being excited to get home from school because I might have solved some problem by letting it tick over in my head.

But I do remember thinking GML was amazing (it was fugly, kid), and struggling with C, because the language was so different. (These days, leap to love2D and Lua instead).

Just the idea of multiple languages was so foreign and impossible to me. Writing a raycaster in GML was possible, writing an event loop in C was insane... And these days picking up a language tutorial for something new is a hobby.


Same, except for me it was Corel Click & Create.


It sounds like you want the ability to instantiate a scratch block that contains a text box, which in turn contains the function body for the block? It would then be possible to incrementally write as little or as much as desired in text.

Getting fancy, that block could use a backend interpreter/compiler of choice, so the language could be Squeak, Python, C, an LLM generator, ...


I know this is probably a bit more advanced, but this suggestion reminds me of Blueprints w/ magic nodes in Unreal Engine. There is a plugin for Magic Nodes (https://forums.unrealengine.com/t/magic-nodes/121220) where you can enter C++ into a blueprint node that integrates into the blueprint system. Similar kind of UI could work well


> It sounds like you want the ability to instantiate a scratch block that contains a text box, which in turn contains the function body for the block ?

That is the escape hatch from all visual development environments. Having seen Talend and W4 in action, I know the end state of the process: a single block with everything in it - I'm barely caricaturing here.

Maybe the specific needs of early learners will keep the system from degenerating too fast, but the moment code goes in that is not visually represented in the environment's visual paradigm, coherence goes downhill fast and one starts longing for properly managed scripts.


As I see it, that's the whole point in this case. Alexander wants to teach children how to program in text mode, but can't see the bridge from Scratch to text mode. With textboxes, the child can write small functions to start with. As they learn, they may well start making the blocks of code more complex. Eventually they might end up with a single block with everything in it, as you describe. At that point they ditch the Scratch "wrapper" and start using a typical text mode tool chain. Mission accomplished.

One of my children did something like this. In the days when Scratch was written in Squeak, he discovered that shift-clicking the 'r' in the Scratch logo dropped him into the underlying Squeak environment. He then started modifying and writing Scratch blocks and was eventually comfortable with text mode programming.


I’m familiar with the Grasshopper visual scripting environment for the Rhino CAD system, and what you’re describing happens there as well…but I don’t really perceive it as a negative. Users who aren’t comfortable with text programming continue to use the visual method, and users who are tend to migrate their more complicated functions to single blocks. There’s a limit of complexity beyond which the visual programming becomes an impediment to understanding. It’s OK if moving things to a text-based block will make the internal logic of that block inaccessible to some number of users, given that those users would struggle to understand the visual version of the function as well.


This is done rather well in TouchDesigner


I still think Scratch has some brilliant ideas that aren't quite captured in traditional text-based programming languages. The fact that the type rules are enforced all the way out to the editor (i.e. the editor will let you have a partially-formed program but not an invalid program, because it does static type checking per-edit and rejects edits that don't pass) is quite a powerful feature tucked into a toy / educational language.


Would something like Blockly [0] or MakeCode [1] fill that gap?

[0] https://developers.google.com/blockly/

[1] https://makecode.microbit.org/


An interesting usage of Blockly is BlockSCAD:

https://www.blockscad3d.com/editor/

which uses it to wrap up (most of) OpenSCAD for interactive 3D modeling.


For a textual teaching language, check out Hedy (https://hedy.org). It is multilingual (not just English) and introduces syntax gradually.


The solution I built for this is Leopard. It is a Scratch → JavaScript converter. You can take an existing Scratch project and convert it to JavaScript code and then keep working, or use the Leopard library to create a new project from (ahem) scratch, following all the same conventions as a Scratch project.

Check it out! https://leopardjs.com/


Flash, around the 4-7 era, had a really nice scripting environment for ActionScript where you could select instructions via a drop-down and edit fields for each instruction (with the fields being of dynamic input types). I've never seen something like it these days, but for 12-year-old me it was really fun and simple to use. It also came with an integrated offline manual.

Maybe you could try something like that?


I loved learning and then mentoring at CoderDojos. Incredible meetups where they really let kids learn in their own way with guidance of more experienced coders. Very fun and I never had a bad experience. The ones I went to were at University of Minnesota.


I teach programming to designers and architects at the local university. We're using Processing quite successfully, because it skips a lot of steps. My daughters are too young and are still doing Scratch (with the great micro:bit). But I think next would be Processing, or Arduino with Micropython. But yes, typing is a problem. My older daughter inputs almost all her text via voice input. At work we're doing a lot of low code for new architectures. I think agentic low code tools for kids would be nice.


Your use case is exactly what Hedy tackles; your experience is really similar to what the author of Hedy tells in conferences and interviews.



Have you tried showing them LabVIEW? It ticks tons of boxes:

- graphical

- more advanced (inherently parallel, more useful async data structures like events and queues)

- interfaces with tons of cool hardware

- built in network programming

- pretty powerful debugging

- free with community edition

Once they have the basics down, you could transition them out to a text-based language slowly, even using the C/MATLAB-based text nodes to start.


Am I the only one who sort of fears the day when Python loses the GIL? I don't think Python developers know what they’re asking for. I don't really trust complex multithreaded code in any language. Python, with its dynamic nature, I trust least of all.


You are not the only one who is afraid of changes and a bit change resistant. I think the issue here is that the reasons for this fear are not very rational. And the interest of the wider community is in dealing with technical debt. And the GIL is pure technical debt. Defensible 30 years ago, a bit awkward 20 years ago, and downright annoying and embarrassing now that world + dog has been doing all their AI data processing with Python at scale for the last 10 years. It had to go in the interest of future proofing the platform.

What changes for you? Nothing unless you start using threads. You probably weren't using threads anyway, because there is little to no point in using them in Python. Most Python code bases completely ignore the threading module and instead use non-blocking IO, async, or similar things. The GIL thing only kicks in if you actually use threads.

If you don't use threads, removing the GIL changes nothing. There's no code that will break. All those C libraries that aren't thread safe are still single threaded, etc. Only if you now start using threads do you need to pay attention.

There's some threaded python code of course that people may have written in python somewhat naively in the hope that it would make things faster that is constantly hitting the GIL and is effectively single threaded. That code now might run a little faster. And probably with more bugs because naive threaded code tends to have those.
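To make the "effectively single threaded" part concrete, here is a rough sketch you can run yourself (function names and sizes are mine): pure-Python CPU work split across threads takes roughly as long as running it sequentially on a GIL build, since only one thread executes bytecode at a time, while on a free-threaded build it should scale with cores.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def busy(n):
        # pure-Python CPU work, no I/O, so the GIL is the limiting factor
        total = 0
        for i in range(n):
            total += i * i
        return total

    N, WORKERS = 2_000_000, 4

    start = time.perf_counter()
    for _ in range(WORKERS):
        busy(N)
    print("sequential:", round(time.perf_counter() - start, 2), "s")

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(busy, [N] * WORKERS))
    print("threaded:  ", round(time.perf_counter() - start, 2), "s")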

But a simple solution to address your fears: simply don't use threads. You'll be fine.

Or learn how to use threads. Because now you finally can and it isn't that hard if you have the right abstractions. I'm sure those will follow in future releases. Structured concurrency is probably high on the agenda of some people in the community.


> What changes for you? Nothing unless you start using threads

Coming from the Java world, you don't know what you're missing. Looking inside an application and seeing a bunch of threadpools managed by competing frameworks, debugging timeouts and discovering that tasks are waiting more than a second to get scheduled on the wrong threadpool, tearing your hair out because someone split a tiny sub-10μs bit of computation into two tasks and scheduling the second takes a hundred times longer than the actual work done, adding a library for a trivial bit of functionality and discovering that it spins up yet another threadpool when you initialize it.

(I'm mostly being tongue in cheek here because I know it's nice to have threading when you need it.)


Just consider that mess job security!


> But a simple solution to address your fears: simply don't use threads. You'll be fine.

I'm not worried about new code. I'm worried about stuff written 15 years ago by a monkey who had no idea how threads work and just read something on Stack Overflow that said to use threading. This code will likely break when run post-GIL. I suspect there is actually quite a bit of it.


Software rots, software tools evolve. When Intel released performance primitives libraries which required recompilation to analyze multi-threaded libraries, we were amazed. Now, these tools are built into processors as performance counters and we have way more advanced tools to analyze how systems behave.

Older code will break, but it breaks all the time. A language changes how something behaves in a new revision, and suddenly 20-year-old bedrock tools are getting massively patched to accommodate both new and old behavior.

Is it painful, ugly, unpleasant? Yes, yes and yes. However change is inevitable, because some of the behavior was rooted in inability to do some things with current technology, and as hurdles are cleared, we change how things work.

My father's friend told me that the length of a variable's name used to affect compile/link times. Now we can test whether we have memory leaks in Rust. That was impossible 15 years ago due to the performance of the processors.


> Software rots

No it does not. I hate that analogy so much because it leads to such bad behavior. Software is a digital artifact that does not degrade. With the right attitude, you'd be able to execute the same binary on new machines for as long as you desired. That is not true of organic matter that actually rots.

The only reason we need to change software is that we trade that off against something else. Instructions are reworked, because chasing the universal Turing machine takes a few sacrifices. If all software has to run on the same hardware, those two artifacts have to have a dialogue about what they need from each other.

If we didn't want the universal machine to do anything new. If we had a valuable product. We could just keep making the machine that executes that product. It never rots.


Yes, it does.

If software is implicitly built on a wrong understanding, or on undefined behaviour, I consider it rotting when it starts to fall apart as those undefined behaviours get defined. We do not need to sacrifice a stable future because of a few 15-year-old programs. Let the people who care about the value that those programs bring manage the update cycle and fix it.


Software is written with a context, and the context degrades. It must be renewed. It rots, sorry.


You said it's the context that rots.


It's a matter of perspective, I guess...

When you look from the program's perspective, the context changes and becomes unrecognizable, IOW, it rots.

When you look from the context's perspective, the program changes by not evolving and keeping up with the context, IOW, it rots.

Maybe we anthropomorphize both and say "they grow apart". :)


We say the context has breaking changes.

We say the context is not backwards compatible.


Can you see how this comes off as a pedantic difference? If I ran a program 10 years ago and it worked, but I run it today and it doesn't work, we say the program is broken and needs to be updated. We don't say the world around it is broken and needs to revert back to its original state.


We do revert back to a previous context if that seems practical: revert back to a previous compiler or library version.


>> Software rots

> No it does not.

I'm thankful that it does, or I would have been out of work long ago. It's not that the files change (literal rot), it is that hardware, OSes, libraries, and everything else changes. I'm also thankful that we have not stopped innovating on all of the things the software I write depends on. You know, another thing changes - what we are using the software for. The accounting software I wrote in the late 80s... would produce financial reports that were what was expected then, but would not meet modern GAAP requirements.


That’s not what the phrase implies. If you have a C program from 1982, you can still compile it on a 1982 operating system and toolchain and it’ll work just as before.

But if you tried to compile it on today’s libc, making today’s syscalls… good luck with that.

Software “rots” in the sense that it has to be updated to run on today’s systems. They’re a moving target. You can still run HyperCard on an emulator, but good luck running it unmodded on a Mac you buy today.


> You can still run HyperCard on an emulator, but good luck running it unmodded on a Mac you buy today.

I grew up with HyperCard, so I had a moment of sadness here.


We all have our own personal HyperCard.


Fair point, but there is an interesting question posed.

Software doesn't rot, it remains constant. But the context around it changes, which means it loses usefulness slowly as time passes.

What is the name for this? You could say 'software becomes anachronistic'. But is there a good verb for that? It certainly seems like something that a lot more than just software experiences. Plenty of real world things that have been perfectly preserved are now much less useful because the context changed. Consider an Oxen-yoke, typewriters, horse-drawn carriages, envelopes, phone switchboards, etc.

It really feels like this concept should have a verb.


obsolescence


>execute the same binary

Only if you statically compile or don't upgrade your dependencies. Or don't allow your dependencies to innovate.


The other day I compiled a 1989 C program and it did the job.

I wish more things were like that. Tired of building things on shaky grounds.


Hello world without -O2 -Werror? I've done several compiler toolchain updates on larger code bases, and every new version of Clang, GCC or glibc will trip new compiler warnings. Worse, occasionally the code was taking advantage of some UB lying around, leading to runtime bugs.

I'm not complaining, the new stricter warnings are usually for the better; what I'm saying is that the bedrock of that world isn't as stable as it's sometimes portrayed.


If you go into mainframes, you'll compile code that was written 50 years ago without issue. In fact, you'll run code that was compiled 50 years ago and all that'll happen is that it'll finish much sooner than it did on the old 360 it originally ran on.


> A language changes how something behaves in a new revision, suddenly 20 year old bedrock tools are getting massively patched to accommodate both new and old behavior.

In my estimation, the only "20 year old bedrock tools" in Python are in the standard library - which currently holds itself free to deprecate entire modules in any minor version, and remove them two minor versions later - note that this is a pseudo-calver created by a coincidentally annual release cadence. (A bunch of stuff that old was taken out recently, but it can't really be considered "bedrock" - see https://peps.python.org/pep-0594/).

Unless you include NumPy's predecessors when dating it (https://en.wikipedia.org/wiki/NumPy#History). And the latest versions of NumPy don't even support Python 3.9 which is still not EOL.

Requests turns 15 next February (https://pypi.org/project/requests/#history).

Pip isn't 20 years old yet (https://pypi.org/project/pip/#history) even counting the version 0.1 "pyinstall" prototype (not shown).

Setuptools (which generally supports only the Python versions supported by CPython, hasn't supported Python 2.x since version 45 and is currently on version 80) only appears to go back to 2006, although I can't find release dates for versions before what's on PyPI (their own changelog goes back to 0.3a1, but without dates).


My only concern is that this kind of change in semantics for existing syntax is more worthy of a major revision than a point release.


Python already has a history of "misrepresenting" the scope of a change (like changing the behaviour of one of the core data types and calling it just a major version change — that's really a new language IMHO).

Still, that's only a marketing move, technically the choice was still the right one, just like this one is.


It's opt-in at the moment. It won't be the default behavior for a couple releases.

Maybe we'll get Python 4 with no GIL.

/me ducks


Agreed. This should have been Python 4.


If it is C-API code: Implicit protection of global variables by the GIL is a documented feature, which makes writing extensions much easier.

Most C extensions that will break are not written by monkeys, but by conscientious developers that followed best practices.


If code has been unmaintained for more than a few years, it's usually such a hassle to get it working again that 99% of the time I'll just write my own solution, and that's without threads.

I feel some trepidation about threads, but at least for debugging purposes there's only one process to attach to.


>Im worried about stuff written 15 years ago

Please don't - it isn't relevant.

15 years ago, new Python code was still dominantly for 2.x. Even code written back then with an eye towards 3.x compatibility (or, more realistically, lazily run through `2to3` or `six`) will have quite little chance of running acceptably on 3.14 regardless. There have been considerable removals from the standard library, `async` is no longer a valid identifier name (you laugh, but that broke Tensorflow once). The attitude taken towards """strings""" in a lot of 2.x code results in constructs that can be automatically made into valid syntax that appears to preserve the original intent, but which are not at all automatically fixed.

Also, the modern expectation is of a lock-step release cadence. CPython only supports up to the last 5 versions, released annually; and whenever anyone publishes a new version of a package, generally they'll see no point in supporting unsupported Python versions. Nor is anyone who released a package in the 3.8 era going to patch it if it breaks in 3.14 - because support for 3.14 was never advertised anyway. In fact, in most cases, support for 3.9 wasn't originally advertised, and you can't update the metadata for an existing package upload (you have to make a new one, even if it's just a "post-release") even if you test it and it does work.

Practically speaking, pure-Python packages usually do work in the next version, and in the next several versions, perhaps beyond the support window. But you can really never predict what's going to break. You can only offer a new version when you find out that it's going to break - and a lot of developers are going to just roll that fix into the feature development they were doing anyway, because life's too short to backport everything for everyone. (If there's no longer active development and only maintenance, well, good luck to everyone involved.)

If 5 years isn't long enough for your purposes, practically speaking you need to maintain an environment with an outdated interpreter, and find a third party (RedHat seems to be a popular choice here) to maintain it.


> Im not worried about new code. Im worried about stuff written 15 years ago by a monkey who had no idea how threads work and just read something on stack overflow that said to use threading. This code will likely break when run post-GIL. I suspect there is actually quite a bit of it.

I was with OP's point but then you lost me. You'll always have to deal with that coworker's shitty code, GIL or not.

Could they make a worse mess with multithreading? Sure. Is their single-threaded code as bad anyway because, at the end of the day, you can't even begin to understand it? Absolutely.

But yeah, I think Python people don't know what they're asking for. They think GIL-less Python is gonna give everyone free puppies.


> There's some threaded python code of course

A fairly common pattern for me is to start a terminal UI updating thread that redraws the UI every second or so while one or more background threads do their thing. Sometimes, it’s easier to express something with threads and we do it not to make the process faster (we kind of accept it will be a bit slower).

The real enemy is state that can be mutated from more than one place. As long as you know who can change what, threads are not that scary.
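Roughly this shape, as a toy sketch (the names and the fake workload are mine); the main thread owns all the writes and the redraw thread only reads:

    import threading
    import time

    progress = {"done": 0, "total": 100}   # only the main thread writes, the UI thread only reads
    finished = threading.Event()

    def redraw_loop():
        # the "terminal UI" thread: redraw roughly once a second until the work is done
        while not finished.wait(timeout=1.0):
            print(f"\r{progress['done']}/{progress['total']}", end="", flush=True)
        print(f"\r{progress['done']}/{progress['total']} done")

    ui = threading.Thread(target=redraw_loop)
    ui.start()

    for i in range(progress["total"]):     # the actual work, kept on the main thread here
        time.sleep(0.05)                   # stand-in for real work
        progress["done"] = i + 1

    finished.set()
    ui.join()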


> Nothing unless you start using threads.

Isn't it also promises/futures? They might start threads implicitly.


Why would you use those without threads?


More realistically, as happened in the ML/AI scene, the knowledgeable people will write the complex libraries and will hand these down to scientists and other less experienced or risk-averse developers (which is not a bad thing).

With the critical mass Python acquired over the years, the GIL becomes a very sore bottleneck in some cases. This is why I decided to learn Go, for example: a properly threaded (and green-threaded) programming language which is higher level than C/C++ but lower than Python, and which allows me to do things I can't do with Python. Compilation is another reason, but it was secondary with respect to threading.


Knowledgeable people? Pytorch has memory leaks by design, it uses std::shared_ptr for a graph with cycles. It also has threading issues.


I don't want to add more to your fears, but also remember that LLMs have been trained on decades worth of Python code that assumes the presence of the GIL.


This could, indeed, be quite catastrophic.

I wonder if companies will start adding this to their system prompts.


Suppose they do. How is the LLM supposed to build a model of what will or won't break without a GIL purely from a textual analysis?

Especially when they've already been force-fed with ungodly amounts of buggy threaded code that has been mistakenly advertised as bug-free simply because nobody managed to catch the problem with a fuzzer yet (and which is more likely to expose its faults in a no-GIL environment, even though it's still fundamentally broken with a GIL)?


GIL or no-GIL concerns only people who want to run multicore workloads. If you are not already spending time threading or multiprocessing your code, there is practically no change. Most race condition issues which you need to think about are there regardless of the GIL.


A lot of Python usage is leveraging libraries with parallel kernels inside, written in other languages. A subset of those is bottlenecked on Python-side speed. A sub-subset of those are people who want to try no-GIL to address the bottleneck. But if no-GIL becomes pervasive, it could mean Python becomes less safe for the "just parallel kernels" users.


Yes, sure. Thought experiment: what happens when these parallel kernels suddenly need to call back into Python? Let's say you have a multithreaded sorting library. If you are sorting numbers, then fine, nothing changes. But if you are sorting objects you need to use a single thread, because you need to call PyObject_RichCompare. These new parallel kernels will then try to call PyObject_RichCompare from multiple threads.


With the GIL, multithreaded Python gives concurrent I/O without worrying about data structure concurrency (unless you do I/O in the middle of it) - it's a lot like async in this way - data structure manipulation is atomic between "await" expressions (except the "await" is implicit, and you might have written one without realizing, in which case you have a bug). Meanwhile you still get to use threads to handle several concurrent I/O operations. I bet a lot of Python code is written this way and will start randomly crashing if the data manipulation becomes non-atomic.


Afaik the only guarantee there is, is that a bytecode instruction is atomic. Built-in data structures are mostly safe, I think, on a per-operation level. But combining them is not. I think by default every few milliseconds the interpreter checks for other threads to run, even if there is no IO or async action. See `sys.getswitchinterval()`.
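A quick way to see that granularity for yourself (a small sketch, nothing library-specific): `counter += 1` compiles to a separate load, in-place add and store, and the GIL makes no promise that a thread switch can't land between them, however rarely that happens in practice.

    import dis
    import sys

    counter = 0

    def bump():
        global counter
        counter += 1   # looks atomic, but it is several bytecode instructions

    dis.dis(bump)      # shows a load / in-place add / store sequence, not one instruction
    print(sys.getswitchinterval())  # default switch interval, 0.005 seconds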


Bytecode instructions have never been atomic in Python's past. It was always possible for the GIL to be temporarily released, then reacquired, in the middle of operations implemented in C. This happens because C code is often manipulating the reference count of Python objects, e.g. via the `Py_DECREF` macro. But when a reference count reaches 0, this might run a `__del__` function implemented in Python, which means the "between bytecode instructions" thread switch can happen inside that reference-counting-operation. That's a lot of possible places!

Even more fun: allocating memory could trigger Python's garbage collector, which would also run `__del__` functions. So every allocation was also a possible (but rare) thread switch.

The GIL was only ever intended to protect Python's internal state (esp. the reference counts themselves); any extension modules assuming that their own state would also be protected were likely already mistaken.


Well, I didn't think of this myself. It's literally what the official Python docs say:

> A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction and therefore all the C implementation code reached from each instruction is therefore atomic from the point of view of a Python program.

https://docs.python.org/3/faq/library.html#what-kinds-of-glo...

If this is not the case please let the official python team know their documentation is wrong. It indeed does state that if Py_DECREF is invoked the bets are off. But a ton of operations never do that.


This is the nugget of information I was hoping for. So indeed even GIL threaded code today can suffer from concurrency bugs (more so than many people here seem to think).


That doesn't match with my understanding of free-threaded Python. The GIL is being replaced with fine-grained locking on the objects themselves, so sharing data-structures between threads is still going to work just fine. If you're talking about concurrency issues like this causing out-of-bounds errors:

    if len(my_list) > 5:
        print(my_list[5])
(i.e. because a different thread can pop from the list in-between the check and the print), that could just as easily happen today. The GIL makes sure that only one thread runs Python bytecode at a time, but it's entirely possible that the GIL is released and execution switches to a different thread after the check but before the print, so there's no extra thread-safety issue in free-threaded mode.

The problems (as I understand it, happy to be corrected), are mostly two-fold: performance and ecosystem. Using fine-grained locking is potentially much less efficient than using the GIL in the single-threaded case (you have to take and release many more locks, and reference count updates have to be atomic), and many, many C extensions are written under the assumption that the GIL exists.


You start talking about GIL and then you talk about non-atomic data manipulation, which happen to be completely different things.

The only code that is going to break because of "no GIL" is C extensions, and for very obvious reasons: you can now call into C code from multiple threads, which wasn't possible before. Python code could always be called from multiple Python threads, even in the presence of the GIL.


When you launch processes to do work you get multi-core workload balancing for free.


This is a common mistake and very badly communicated. The GIL does not make Python code thread-safe. It only protects the internal CPython state. Multi-threaded Python code is not thread-safe today.


Internal CPython state also includes, say, a dictionary's internal state. So for practical purposes it is safe. Of course, TOCTOU, stale reads and various race conditions are not (and can never be) protected by the GIL.


Well, I think you can manipulate a dict from two different threads in Python, today, without any risk of segfaults.


It's memory safe, but it's not necessarily free of race conditions! It's not only C extensions that release the GIL, the Python interpreter itself releases the GIL after a certain number of instructions so that other threads can make progress. See https://docs.python.org/3/library/sys.html#sys.getswitchinte....

Certain operations that look atomic to the user are actually comprised of multiple bytecode instructions. Now, if you are unlucky, the interpreter decides to release the GIL and yield to another thread exactly during such instructions. You won't get a segfault, but you might get unexpected results.

See also https://github.com/google/styleguide/blob/91d6e367e384b0d8aa...


You can do so in free-threaded Python too, right? The dict is still protected by a lock, but one that’s much more fine-grained than the GIL.


Sounds good, yes.


This should not have been downvoted. It's true that the GIL does not make python code thread-safe implicitly, you have to either construct your code carefully to be atomic (based on knowledge of how the GIL works) or make use of mutexes, semaphores, etc. It's just memory-safe and can still have races etc.


You're not the only one. David Baron's note certainly applies: https://bholley.net/blog/2015/must-be-this-tall-to-write-mul...

In a language conceived for this kind of work it's not as easy as you'd like. In most languages you're going to write nonsense which has no coherent meaning whatsoever. Experiments show that humans can't successfully understand non-trivial programs unless they exhibit Sequential Consistency - that is, they can be understood as if (which is not reality) all the things which happen do happen in some particular order. This is not the reality of how the machine works, for subtle reasons, but without it merely human programmers are like "Eh, no idea, I guess everything is computer?". It's really easy to write concurrent programs which do not satisfy this requirement in most of these languages, you just can't debug them or reason about what they do - a disaster.

As I understand it Python without the GIL will enable more programs that lose SC.


Good engineering design is about making unbalanced tradeoffs where you get huge wins for low costs. These kinds of decisions are opinionated and require you to say no to some edge cases to get a lot back on the important cases.

One lesson I have learned is that good design cannot survive popularity and bureaucracy that comes with it. Over time people just beat down your door with requests to do cases you explicitly avoided. You’re blocking their work and not being pragmatic! Eventually nobody is left to advocate for them.

And part of that is the community has more resources and can absorb some more complexity. But this is also why I prefer tools with smaller communities.


I'm sure you'll be happy using the last language that has to fork() in order to thread. We've only had consumer-level multicore processors for 20 years, after all.


You have to understand that people come at Python from very different angles. Some people write web servers in Python, where speed equals money saved. Other people write little UI apps where speed is a complete non-issue. Yet others write AI/ML code that spends most of its time in GPU code. But then they want to do just a little data massaging in Python, which can easily bottleneck the whole thing. And some people write scripts that don't use a .env but rather os-libraries.


I don’t understand this argument. My python program isn’t the only program on the system - I have a database, web server, etc. It’s already multi-core.


Worst case is probably that it ends up like a "Python4": things break when people try to update to no-GIL, so they'd rather stay with the old version for decades.


What reliance did you have in mind? All sorts of calls in Python can release the GIL, so you already need locking, and there are race conditions just like in most languages. It's not like JS where your code is guaranteed to run in order until you "await" something.

I don't fully understand the challenge with removing it, but thought it was something about C extensions, not something most users have to directly worry about.


I hope at least the option remains to enable the GIL, because I don't trust me to write thread-safe code on the first few attempts.


It's called job security. We'll be rewriting decades of code that's broken by that transition.


While it certainly has its rough edges, I'm a big asyncio user. So I'll be over here happily writing concurrent python that's single threaded, ie. pretending my Python is nodejs.

For the web/network workloads most of us write, I'd highly recommend this.


Asyncio being able to use thread pools would reduce memory usage at the very least.


How so? As opposed to running multiple processes you mean?


Exactly, you could do like other runtimes and run a single process that can saturate all cores.

You wouldn’t be duplicating the interpreter, your code, config, etc.


This looks extremely promising: https://microsoft.github.io/verona/pyrona.html


As a Python dabbler, what should I be reading to ensure my multi-threaded code in Python is in fact safe?


The literature on distributed systems is huge. What you ought to do depends a lot on your use case. If you're lucky you can avoid shared state, as in no race conditions at either end of your executions.

https://www.youtube.com/watch?v=_9B__0S21y8 is fairly concise and gives some recommendations for literature and techniques, obviously making an effort in promoting PlusCal/TLA+ along the way but showcases how even apparently simple algorithms can be problematic as well as how deep analysis has to go to get you a guarantee that the execution will be bug free.


My current concern is a CRUD interface that transcribes audio in the background. The transcription is triggered by user action. I need the "transcription" field disabled until the transcript is complete and stored in the database, then allow the user to edit the transcription in the UI.

Of course, while the transcription is in action the rest of the UI (Qt via Pyside) should remain usable. And multiple transcription requests should be supported - I'm thinking of a pool of transcription threads, but I'm uncertain how many to allocate. Half the number of CPUs? All the CPUs under 50% load?

Advice welcome!


Use `concurrent.futures.ThreadPoolExecutor` to submit jobs, and `Future.add_done_callback` to flip the transcription field when the job completes.


Although keep in mind that the callback will be "called in a thread belonging to the process" (say the docs), presumably some thread that is not the UI thread. So the callback needs to post an event to the UI thread's event queue, where it can be picked up by the UI thread's event loop and only then perform the UI updates.

I don't know how that's done in Pyside, though. I couldn't find a clear example. You might have to use a QThread instead to handle it.


Thank you. Perhaps I should trigger the transcription thread from the UI thread, then? It is a UI button that initiates it after all.


The tricky part is coming back onto the UI thread when the background work finishes. Your transcription thread has to somehow trigger the UI work to be done on the UI thread.

It seems the way to do it in Qt is with signals and slots, emitting a signal from your QThread and binding it to a slot in the UI thread, making sure to specify a "queued connection" [1]. There's also a lower-level postEvent method [2] but people disagree [3] on whether that's OK to call from a regular Python thread or has to be called from a QThread.

So I would try doing it with Qt's thread classes, not with concurrent.futures.

[1] https://doc.qt.io/qt-5/threads-synchronizing.html#high-level...

[2] https://doc.qt.io/qt-6/qcoreapplication.html#postEvent

[3] https://www.mail-archive.com/pyqt@riverbankcomputing.com/msg...
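Roughly, in PySide6 terms (the transcription itself is stubbed out, and the widget here just stands in for your real transcription field):

    import sys
    from PySide6.QtCore import QThread, Signal
    from PySide6.QtWidgets import QApplication, QPlainTextEdit

    class TranscriptionThread(QThread):
        finished_text = Signal(str)                    # carries the finished transcript

        def __init__(self, audio_path):
            super().__init__()
            self.audio_path = audio_path

        def run(self):                                 # executes in the worker thread
            text = f"transcript of {self.audio_path}"  # placeholder for the real work
            self.finished_text.emit(text)

    class TranscriptField(QPlainTextEdit):
        def __init__(self):
            super().__init__()
            self.setReadOnly(True)                     # locked until the transcript arrives

        def show_transcript(self, text):               # slot, runs on the UI thread
            self.setPlainText(text)
            self.setReadOnly(False)

    app = QApplication(sys.argv)
    field = TranscriptField()
    worker = TranscriptionThread("recording.wav")
    # Cross-thread signal to a QObject slot: Qt's default AutoConnection resolves
    # to a queued connection here, so show_transcript runs on the UI thread.
    worker.finished_text.connect(field.show_transcript)
    worker.start()
    field.show()
    sys.exit(app.exec())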


Terrific, thank you. You've put me on the right track.


Thank you.


Just use multiprocessing. If each job is independent and you aren’t trying to spread it out over multiple workers, it seems much easier and less risky to spawn a worker for each job.

Use SharedMemory to pass the data back and forth.
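For what it's worth, a toy sketch of that route (the "transcription" is a placeholder string; real code would also want to signal completion and pick a sensible buffer size):

    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    def worker(shm_name):
        shm = SharedMemory(name=shm_name)              # attach to the parent's block
        result = b"transcript of recording.wav"        # placeholder for the real work
        shm.buf[:len(result)] = result
        shm.close()

    if __name__ == "__main__":
        shm = SharedMemory(create=True, size=4096)
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(bytes(shm.buf).rstrip(b"\x00").decode())
        shm.close()
        shm.unlink()                                   # free the block when done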


Honestly, unless you're willing to devote a solid 4+ hours to learning about multithreading, stick with asyncio.


I'm willing to invest an afternoon learning. That's been the premise of my entire career!


How does the language being dynamic negatively affect the complexity of multithreading?


I have a hypothesis that being dynamic has no particular effect on the complexity of multithreading. I think the apparent effect is a combination of two things: 1. All our dynamic scripting languages in modern use date from the 1990s before this degree of threading was a concern for the languages and 2. It is really hard to retrofit code written for not being threaded to work in a threaded context, and the "deeper" the code in the system the harder it is. Something like CPython is about as "deep" as you can go, so it's really, really hard.

I think if someone set out to write a new dynamic scripting language today, from scratch, that multithreading it would not pose any particular challenge. Beyond that fact that it's naturally a difficult problem, I mean, but nothing special compared to the many other languages that have implemented threading. It's all about all that code from before the threading era that's the problem, not the threading itself. And Python has a loooot of that code.


Is there so much legacy python multithreaded code anyway?

Considering everyone knew about the GIL, I'm thinking most people just wouldn't bother.


There is, and what's worse, it assumes a global lock will keep things synchronized.


Does it? The GIL only ensured each interpreter instruction is atomic. But any group of instructions is not protected. This makes it very hard to rely on the GIL for synchronization unless you really know what you are doing.


AFAIK a group of instructions is only non-protected if one of the instructions does I/O. Explicit I/O - page faults don't count.


If I understand that correctly, it would mean that running a function like this on two threads, f(1) and f(2), would produce a list of 1s and 2s without interleaving.

  def f(x):
      for _ in range(N):
          l.append(x)
I've tried it out and they start interleaving when N is set to 1000000.
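For anyone who wants to reproduce it, a filled-in version of that experiment (the thread setup and the interleaving check are my additions):

    import threading

    N = 1_000_000
    l = []

    def f(x):
        for _ in range(N):
            l.append(x)

    t1 = threading.Thread(target=f, args=(1,))
    t2 = threading.Thread(target=f, args=(2,))
    t1.start(); t2.start()
    t1.join(); t2.join()

    # count how often neighbouring items differ; anything above 1 means the appends interleaved
    switches = sum(1 for a, b in zip(l, l[1:]) if a != b)
    print(len(l), "items,", switches, "switches")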


Dynamic(ally typed) languages, by virtue of not requiring strict typing, often lead to more complicated function signatures. Such functions are generally harder to reason about. Because they tend to require inspection of the function to see what is really going on.

Multithreaded code is incredibly hard to reason about. And reasoning about it becomes a lot easier if you have certain guarantees (e.g. this argument / return value always has this type, so I can always do this to it). Code written in dynamic languages will more often lack such guarantees, because of the complicated signatures. This makes it even harder to reason about Multithreaded code, increasing the risk posed by multithreaded code.


When the language is dynamic there is less rigor. Statically checked code is more likely to be correct. When you add threads to "fast and loose" code things get really bad.


Unless your claim is that the same error can happen more times per minute because threading can execute more code in the same timespan, this makes no sense.


Some statically checked languages and tools can catch potential data races at compile time. Example: Rust's ownership and borrowing system enforces thread safety at compile time. Statically typed functional languages like Haskell or OCaml encourage immutability, which reduces shared mutable state — a common source of concurrency bugs. Statically typed code can enforce usage of thread-safe constructs via types (e.g., Sync/Send in Rust or ConcurrentHashMap in Java).


Do you understand what you're implying?

"Python programmers are so incompetent that Python succeeds as a language only because it lacks features they wouldn't know to use"

Even if it's circumstantially true, doesn't mean it's the right guiding principle for the design of the language.


I was thinking that too. I am really not a professional developer though.

Of course it would be nice to just write Python and have everything be 12x accelerated, but I don't see how there would not be any drawbacks that would interfere with what makes Python so approachable.


Leuven. It's a Dutch speaking town. The university is KU Leuven. https://en.wikipedia.org/wiki/KU_Leuven


Yes, it is, but you can't rewrite history.

It was internationally called Louvain before; the international recognition for Leuven only came after the events of 1968.

That's why “Louvain shall be our battle cry” became a popular march in Britain. [1]

And why it was called so in Australia. [2].

I guess that's why most of the English articles about Leuven includes the French name.

1. https://theo.kuleuven.be/apps/press/ecsi/belgian-culture-and...

2. https://trove.nla.gov.au/newspaper/article/15533652


> Yes, it is, but you can't rewrite history.

Yes you can.

What do you call the capital of China, The People's Republic of China? What do you call the city divided by the Bosphorus? What do you call the country of which that city is the largest? What do you call the country of which Prague is the capital?


Any country can rename cities however they want, but they can't erase the fact that they were called something else in the past, and that people remember the previous name.

Saint Petersburg was renamed to Petrograd, then Leningrad and then Saint Petersburg again but when Germans sieged the town in 1941, it was Leningrad.

But if your point is more about how people call them, well you're going down another rabbit hole with me.

Just for Belgium: all the major international cities have a name in the 3 official languages (Dutch, French and German). Cairo in Egypt is called Caïro (D), Le Caire (F) and Kairo (G).

There, I've never met anyone using "Beijing" for the capital of China; it's always "Pékin" (F) and "Peking" (D). [1] [2]

And within Belgium itself, many towns have their names in the 3 languages: Liège (F), Luik (D) and Lüttich (G). (But others have only one, like Knokke, Dinant or Eupen.)

This can lead to many interesting situations, especially with the peculiar linguistic situation in Belgium :

- Anyone speaking their native language will use the name from that language: Mechelen in Dutch, Malines in French and Mecheln in German.

- Speaking in French with a Dutch native, they usually use the Dutch name. If they are willing to use the French name, they'll switch to it if you didn't catch it in Dutch; otherwise, they'll only use the Dutch name. [3]

- Speaking English with a French or Dutch native, they'll use the English name if it's Brussels and their language's name any other time, wherever it's located. For cities in the Dutch-speaking part, French natives will always use Anvers (F) for Antwerpen (D), Gand (F) for Gent (D) (even though it is Ghent in English). For cities in the French-speaking part, Dutch speakers will always use Namen (D) for Namur (F) or Bergen (D) for Mons (F).

- French natives often need to specify which city they are talking about when using "Louvain", because it can refer to Leuven or Louvain-la-Neuve.

- And everyone wants cities to be called only by the name from their linguistic region... hence the first post that said that it's now called Leuven.

1. https://www.lesoir.be/635347/article/2024-11-12/attaque-la-v...

2. https://www.hln.be/economie/china-kondigt-weer-tijdelijke-ma...

3. Which brings me to my true story of meeting a guy from France who only knew the town as Luik and had never heard of Liège. Nobody in Liège calls it Luik, just like nobody in Leuven calls it Louvain.


I think you missed their point, for instance you call it Constantinople when talking about events before the Ottomans took it over, and Istanbul after.


It was only officially renamed in 1930.


Leuven (Afrikaans, Dutch, Finnish), Louvain (French, Romanian), Lováin (Irish), Lovaina (Catalan, Portuguese, Spanish), Lovaň (Czech), Lovanio (Italian), Löwen (German), Louvéni – Λουβαίνη (Greek), Lovin (Walloon), Léiwen (Luxembourgish), Lovanium (Latin), Lowanium (Polish), 魯汶 (Chinese)

(see wikipedia)


The French university was re-founded in Louvain-la-Neuve, an atypical and fairly successful planned city. It has some small community character and a whimsical architecture that's more trying to be cozy rather than impressive. Recommended for architecture enthusiasts.


It is now. Belgian history is the true definition of insanity. There were Belgians who were quite happy to see this university go up in flames.


It (Leuven) has always been a Dutch speaking town. Well, the majority of the population that is. The administration (city, university) started using French at some point (19th century - mid 20th century).


Clarification: it used French from the 19th century until the mid 20th century. Anyway, the fact that the university was still giving some courses in French in 1968, in what was officially a Flemish university, caused the "Leuven Vlaams" student riots.

I live in Leuven and you can clearly see that in the Middle Ages, they spoke Dutch here. The switch to French and back is visible in the street names (because often they carry all 3 names). A funny example: the "grasmusstraat" (now) was "Rue Erasmus" in the 19th century and "grasmusstroike" in the Middle Ages. Apparently the French civil servants didn't know that a grasmus is a little bird and renamed it after a famous alumnus.


Being a European, I'd love to try this. Many businesses operate completely locally. I think there is a market for a Europe-only cloud provider.

How do I try this? Do they have a free tier?


You can sign up on their website: https://www.stackit.de/en/

While there are no free credits, the services are priced pay-per-use, billed to the minute, with a much simpler pricing model than large hyperscalers like AWS. See the compute (EC2-equivalent) prices here: https://www.stackit.de/en/pricing/cloud-services/iaas/stacki...

You can find the docs here: https://docs.stackit.cloud/stackit/en/knowledge-base-8530170...


Note that you need to be incorporated in Germany, Austria or Switzerland to use it, and they don't allow individuals to open accounts, only companies.

"The European cloud" that doesn't allow sign ups from Europe is extremely ironic.

I don't know how they keep getting all this press without actually delivering anything


Lidl got SAP's award for best customer a few years before admitting they had wasted half a billion on an SAP implementation.

It's the same thing again.


> award for best customer

I've never heard of this. Does it mean best cash cow?


I expect nothing less from SAP


I came here to state exactly this. As a Dutch individual who has 'cloud' high on his CV, I would like to create an account and test this to see whether it is something worth investing my time in and adding to my CV. But... they won't allow that.

Ah well, next!


Their pricing page is funny. Can I have 2 RAMs please?

My physics teacher would get spitting mad at them for not specifying the unit.

Of course their billing is also in 'hours' instead of 'hourly'.


Hetzner and OVH are top of mind, Gandi is nice too. Not Amazon-scale, by far, but European companies hosting in Europe with decent service.


OVH is a joke (their data center burned because they had wooden roofs), Gandi is no more, and Scaleway gave up. There is no French host anymore. Only Hetzner is left in this business.


OVH is still in business, even if they had a fire 3 years ago. Both AWS and Google have had rather large fires.


OVH is quite good, actually. We are using their K8S offering and S3 to build a service. It works well.


How exactly did Scaleway give up? They keep releasing new cloud and serverless products.

There's also IONOS.


The fire occurred in their old datacenter, built in an era when OVH was aggressively cheap and experimental. It is in no way representative of today's OVH.


Dassault has a cloud offering with 3DS Outscale


Exoscale has a simple sign up, with a credit of EUR 20 to get you started.

(I work there, and my job tomorrow is to get my 2 apprentices new accounts so they can start following the self-paced training in the Exoscale Academy.)


See also: https://www.scaleway.com/

They have three zones: Paris, Amsterdam and Warsaw.

Not sure if they have a free tier, but I still pay about 1€/month for two (really) small instances that I used for testing their service (and kept around for personal stuff).


Is it normal in the US for software developers to work 60-70 hour weeks? I understand this is the case in hip startup culture, but what about normal, boring companies? I work as an embedded software developer in Belgium, and here 40 hours is normal.


Depends on the company. At a place like Microsoft or Google, where the company doesn't really need to try in order to reap the benefits of its monopolies, a lot of people get away with working 20-hour weeks. Amazon and Meta are known as harder places to work, so maybe 50-60 isn't rare, although many just do 40. At startups you have to work long hours, which can be anywhere from 50 to 70. No one is actually doing the 100-hour weeks they glorify, because it's impossible to sustain.

The truth, though, is that it's a broken system. In my opinion even a startup should be able to make it on 40 hours. If they have to put in insane hours for just a slim chance of surviving, it's an indication that there isn't really fair competition and that the market is too skewed towards existing players.



No, not typically. In my experience most people work 40-45 hours at the boring companies I've been at.


In my experience it's usually 35-40 hours of "butt in seat" time, but only somewhere between 1 minute and 5 hours of work actually happens. The rest of the time is dopamine-switching between news, personal communications, and other forms of non-work entertainment.

I count checking emails, work instant messaging, and working through bureaucracy (paperwork flows) as work, not just hands-on-keyboard time spent on software solutions.

Also, in my experience there are people who focus only on work at work, and they usually drag others into performing their job function.


If the only reason I'm doing something is for the company, whatever the task is ... it's on the clock.


No, that's an insane workload. Your employer doesn't even deserve 40 hours, let alone 70. Jesus, people, live your lives instead of toiling for the rich people who will take from you until you keel over.


Work fortifies the spirit!


This is kind of wishful thinking. I'm not saying you're wrong, but there's no real way to prove you're right. Sometimes open source wins, but not every time. The whole machine learning field is still too young to have a clear answer.


(author here): I am currently writing a book about programming with LLMs. I have absolutely put my money where my mouth is over the last year, and there is no doubt in my mind that we will see incredible tools in 2024.

Already the emergent tools and frameworks are impressive, and the fact that you can make them yours by adding a couple of prompting lines and really tailor them to your codebase is the killer factor.

My tooling ( https://github.com/go-go-golems/geppetto ) sucks ass UI-wise, yet I get incredible value out of it. It's hard to quantify as a 10X, because my code architecture has changed to accommodate the models.

In some ways, the trick to coding with LLMs is to... not have them produce code directly, but intermediate DSL representations. There's much more to it, hence the book.
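To make that concrete, here's a rough sketch of the pattern (not how geppetto actually works): the model is asked for a tiny YAML spec instead of code, and a plain, deterministic generator turns that spec into boilerplate. The call_llm helper and the YAML schema are invented for illustration.

    # Sketch of the "LLM emits a DSL, not code" idea. `call_llm` is a
    # hypothetical stand-in for whatever completion API you use, and the
    # tiny YAML "DSL" is made up for the example.
    import yaml  # pip install pyyaml

    def call_llm(prompt: str) -> str:
        """Placeholder for a real completion call (OpenAI, local model, ...)."""
        raise NotImplementedError

    PROMPT = (
        "Describe the HTTP endpoint as YAML with keys: "
        "name, method, path, params (list of {name, type})."
    )

    def endpoint_from_spec(spec_text: str) -> str:
        """Turn the small DSL the model produced into boilerplate code."""
        spec = yaml.safe_load(spec_text)
        args = ", ".join(f"{p['name']}: {p['type']}" for p in spec.get("params", []))
        return f"def {spec['name']}({args}):\n    # {spec['method']} {spec['path']}\n    ...\n"

    # Review happens on the short YAML spec; the deterministic generator
    # above turns it into code you fully control:
    # print(endpoint_from_spec(call_llm(PROMPT)))

The nice part is that the spec is short enough to review at a glance, while the code generation itself stays deterministic and under your control.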


Can you name some examples where open source won?

In the past I wanted to believe this could be the future, where open source would somehow win (at least in some parts). What I see is that even the biggest projects are mere tools in the hands of the big corporations. Linux, Postgres, etc. All great, but they have been assimilated. I cannot really consider them a win.

It seems to me that it goes back and forth, and it also seems to me that the advancements in LLMs will go a similar route.


Why do you not consider that Linux has won?

It dominates everywhere from fairly small embedded systems to supercomputers, with the one notable exception of the desktop, a shrinking market and mostly a historical anomaly (Microsoft cornered it before Linux was a viable player in that space).

I wouldn't say it is a "mere tool in the hands of big corporations". Sure, these days most Linux developers are paid by corporations (a good thing, since that allows them to work full time on Linux), but the important point is that those corporations don't control Linux. They can pay people to work on specific areas, but they don't get to decide what gets merged or what the acceptance criteria are.

More generally, beyond Linux, huge swathes of new technology are expected to be open source or no one looks at them (think language and frameworks).

In the late 90s / early 2000s it became obvious that software development would no longer be about writing things from scratch but about building on existing components. But there were two competing models for this. There was Microsoft's vision, which envisioned a market of binary components that people would buy and use to compose applications (that gave rise to the likes of ActiveX, DCOM and OLE), and there was the open source community's vision, which saw us building on components supplied in source form. It's clear that today the second vision has won. Even proprietary software now uses huge quantities of open source internally (take a look at the "about" screen on your smartphone, TV or router).

LLMs may be the exception here for the moment (mainly due to the compute power needed for training).


Open source has won hands down for developers. It’s basically a giant tool bin and parts yard for people who build things with software. It’s also useful to extremely tech savvy people who like to DIY homelab type stuff.

In the consumer realm it has lost equally decisively.

The reason, I think, is that the distance between software nerds can use and software the general public can (or wants to) use is significantly larger than anyone understood. Getting something to work gets it only about 5% of the way to being usable.

Even worse, making it usable requires a huge amount of the kind of nit-picky UI/UX work that programmers hate to do. This means they have to be paid to do it, which means usable software is many, many times more expensive than merely technical software.

The situation is hopeless unless people start paying for open software, which is hard because the FOSS movement taught everyone that software should be free.


"In the consumer realm it has lost" is kind of weird to me, though. I'd say, don't think about "developers" v. everyone else but "anyone doing anything creative, as opposed to merely consuming, with computers and related devices" and it's not at all clear that the creators are the losers?


This seems almost like you're framing the observation "in the consumer realm it has lost" as some moral failing, and it feels dangerously close to bad faith. What's the point? Do you really want to willfully ignore the artists using Procreate (a closed-source iOS app), the chip designers using proprietary EDA tools, the DJs using proprietary DJ software like Serato, or the musicians using proprietary DAWs like Ableton?

If anything, it's the creatives who use more proprietary software than the folks doing generic office work, who can get away with using LibreOffice and reading PDFs in evince.


The generous reading, which I think is mostly correct, is that consumers mostly don't directly run open source software, e.g. LibreOffice on Linux. A ton of the software they run has significant open source components but it's packaged up as a proprietary SaaS or an app store app.


SaaS is how most people use open source, which is, very ironically, the least open way of using software. Closed-source commercial (local) software is considerably more open and offers far more privacy and freedom.


I wouldn't say that open-source SaaS is the "least open" way of using software.

If you're using a hosted service based on open source software, you know that you can leave. You can grab your data and self-host. You can move it to another host that has reused or forked the code. You can run it locally. You have options.

If you're using local closed-source software, your files might not even be usable without an Internet connection. Think Spotify (closed source, local) where even your "offline" playlists won't load if you don't allow the software to phone home once in a while.


Self-hostable SaaS does restore a lot of freedom, but it's not the most common paradigm. Most SaaS is closed.

Spotify is SaaS. It’s an app that runs locally but the data and most of what it really does lives in the cloud.


Yes, this was exactly my point. Not sure why I got downvoted so much for it. And note what companies such as HashiCorp and Elastic are doing.


>Linux, Postgres, etc. All great! But have been assimilated.

If you're a company, you almost certainly want support, certifications, and the related benefits of a commercial product. And it's nice to have that avenue to fund developers. But the side effect is that the software is still free-as-in-beer open source that anyone can download. In general, I'd say open source infrastructure software has won pretty thoroughly (even if not universally).


> Linux, Postgres, etc. All great! But have been assimilated. I cannot really consider them a win.

Are you trying to argue that free software becomes less free because specific groups of people contribute to it?


Linux absolutely won the OS wars.

We just didn't fully realize what that would look like before it happened.


It won the OS wars on the server side, no question there! It works great, serving all the SaaS platforms out there...

But as a desktop? Most of my colleagues are using Apple hardware / macOS. Personally, I'm writing this on Ubuntu - I've been on Linux since forever.

Another one: Android. Built on top of Linux, its core is open source, but is it REALLY in the spirit of open source? On most new phones you cannot put LineageOS (I always check the list before I buy).

How does it help that I have the source of Linux and Android, but I'm still not able to build the OS for most mobiles?

Don't get me wrong, I'm not disillusioned with open source. Not at all. I love and appreciate the open source I have (this includes e.g. Rust). I'm just cautious about what the author of the article is foreseeing. You might have a vibrant community; that doesn't mean its output won't get wrapped inside some SaaS or big corp product.


Complaining about companies releasing their work under a liberal open source license is a first-world-problem if I've ever seen one ;)


To make the game more fun, think about letting the scenarios mess with each other. Right now, they kinda just happen on their own. But imagine if one user's scenario could throw a curveball into the next person's situation. Like, you can try to mess up someone else's plans. It's a party game, after all. That could add a cool and funny twist to keep things interesting.


I think there would be another great way to take advantage of AI here, taking inspiration from the Jackbox games. In the Jackbox game I played the most, there were intermediate games where there was a chance you would lose a finger, leaving you unable to pick some of the choices in later questions. In a similar vein, I think it would be cool if you could pick up negative traits over multiple prompts that interact with what you're trying to answer.


I absolutely love that idea. The AI could pick a negative trait when the player survives. Maybe they gain a positive trait when they die? That way it might balance out a little.
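A rough sketch of how that could be wired up, with all trait names and prompt wording invented for illustration; the actual model call is left out and only the trait bookkeeping is shown:

    # Per-player traits that persist across rounds and get folded into
    # the next prompt. Trait lists and wording are made up for the sketch.
    import random
    from dataclasses import dataclass, field

    @dataclass
    class Player:
        name: str
        traits: list[str] = field(default_factory=list)

    NEGATIVE = ["lost a finger", "terrified of water", "blurry vision"]
    POSITIVE = ["adrenaline rush", "lucky charm", "steady hands"]

    def apply_outcome(player: Player, survived: bool) -> None:
        """Survivors pick up a handicap; victims get a perk for the next round."""
        pool = NEGATIVE if survived else POSITIVE
        player.traits.append(random.choice(pool))

    def build_prompt(player: Player, scenario: str) -> str:
        """Fold the accumulated traits into the next scenario prompt."""
        traits = "; ".join(player.traits) or "no notable traits"
        return (
            f"Scenario: {scenario}\n"
            f"{player.name} has these traits: {traits}.\n"
            "Judge whether their plan works, taking the traits into account."
        )

    # Usage:
    # p = Player("Alice")
    # apply_outcome(p, survived=True)   # e.g. picks up "lost a finger"
    # print(build_prompt(p, "escape a sinking submarine"))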


"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146


Expertise in one area often leads people to believe they are experts in everything else too.


Funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain-specific experts.


This is comparing a foot to a mile


I don't intend this as criticism at all, but it's quite amusing how routine iOS updates and new iPhone releases have become. I recall being in high school when the first iPhone was introduced, and the sheer novelty of smartphones was awe-inspiring. Nowadays, they've become so commonplace that I find myself getting more enthusiastic about new additions to the Python standard library!


And even though iOS/macOS updates happen frequently and bring little new, they still manage to break lots of stuff each time.

Like how many devs need to recompile their old apps to keep them "compatible", or how the network stack keeps changing and breaking our VPNs and such, or how even programs like Office sometimes won't work.


I recently rewatched the original keynote where Steve Jobs first announced the iPhone. The thing that struck me was how wild the crowd went when he demoed pinch-to-zoom, something which we basically take for granted nowadays.


It killed the segregated internet for phones. They just figured out how to display normal web pages. And the double-tap to zoom in on a column was a pretty big feature too.

I kept waiting for the Apple Watch to have its iPhone 3GS moment, where they came out with one noticeably thinner and with the same or better battery life. But it never came. If anything, they're a tiny bit taller than the original. Instead they went with a smaller and a larger version, which is not quite the same.

I still struggle with keeping it on while doing anything that requires work gloves. Make it thinner already.


Something weird is going on with Apple Watch.

When the iPhone 12 came out in 2020, we got a new design language. Sharp flat edges were back. Every year since then we’ve been waiting for the watch to get a design update. But still in 2023 it’s the same old fat bubbly design which looks _very_ dated.

I wonder if Apple is waiting for some piece of tech to get better (batteries?), so they can launch a sleeker flatter watch.


They had a patent on putting auxiliary batteries in the watch band. I was so stoked, and then nothing came of it. I'm guessing poor performance or a fire hazard. I waited way too long before I gave up on that ever hitting the manufacturing floor.


Often companies sit on patents and do nothing with them simply to ensure their competitors can't use that innovative feature.


> And the double tap to zoom in on a column was a pretty big feature too.

By itself this feature is not a very big deal: Opera Mobile/Mini had it for quite some time before the iPhone; I certainly used it a lot. But the whole package the iPhone brought was a game changer for the industry.


I remember the era. We had to double-click or basically press a zoom button. It was still so bad, even with zoom, that my first job ended up making mobile/responsive sites because nobody wanted to zoom in and out all the time.


Anything multi-touch at that point was pretty impressive. Up until then, touch screens had been single-finger only, and often resistive.


Android 14 isn't much better: https://developer.android.com/about/versions/14/features

Grammatical inflection is interesting, though. I'd prefer formality over gender, like the difference between du and Sie in German.


Android "peaked" around v8, maybe v9. Up until then, jumping from one version to the next one really felt worthy of a new major release.

10-13 (I haven't used v14 yet) were all just completely forgettable (mostly UI changes, no significant new features).


There were some tightened security features on both Android and iOS in that phase.

Basically, a lot of really dark patterns were killed: a free casual game reading all your files; a game with media access "tracking" you through the location information in your photos, without any file or location permission; a social media app pasting your password from the clipboard to its servers; a messaging app you once gave cam/mic access to recording you while in the background and running your audio through its speech-to-text.

So a lot of the changes have been about making this behavior really visible. Sure, any app can access your cam/mic, but there's a green light every time it does. Sure, it can read the clipboard, but there will be a little notification each time.


Almost as if there are now more decent smartphone manufacturers than one can bother to remember, let alone models.

