That example is too simple for me to grasp the point. How would you code a function that iterates over an array to compute its sum? No cheating with a built-in sum function. If you had to code each addition, how would that work? Curious to learn (I could probably google this or ask Claude to write me the code).
Carmack gives updating in a loop as the one exception:
> You should strive to never reassign or update a variable outside of true iterative calculations in loops.
If you want a completely immutable setup for this, you'd likely have to use a recursive function. This pattern is well supported and optimized in immutable languages like the ML family, but is not super practical in a standard imperative language. Something like
def sum(l):
    if not l: return 0
    return l[0] + sum(l[1:])
Of course this is also mostly insensitive to ordering guarantees (the compiler would be fine with the last line being `return l[-1] + sum(l[:-1])`), but immutability can remain useful in cases like this to ensure no concurrent mutation of a given object, for instance.
You don't have to use recursion; that is, you don't need language support for it. Having first-class (named) functions is enough.
For example, you can modify sum so that it doesn't refer to itself; instead it depends on a function that it receives as an argument (and that function will be itself). A minimal sketch of that idea is below.
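Roughly, something like this (the names here are just illustrative): the helper never mentions itself by name, it only calls whatever function it was handed, which happens to be itself.

def sum_step(self, l):
    # `self` is whatever function we were passed; we call it instead of
    # naming ourselves, so nothing beyond first-class functions is needed.
    if not l:
        return 0
    return l[0] + self(self, l[1:])

def my_sum(l):
    return sum_step(sum_step, l)

print(my_sum([1, 2, 3, 4]))  # 10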
While your example of `sum` is a nice, pure function, it'll unfortunately blow up in Python on even moderately sized inputs (we're talking thousands of elements, not millions) due to the lack of tail-call elimination in Python (currently) and the limit on recursion depth. The CPython interpreter as of 3.14 [0] can use tail calls internally in its own C implementation, but tail-call elimination is not yet available in Python proper.
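To make that concrete, here's a rough way to see the limit hit (this assumes CPython's default recursion limit of 1000; exact numbers vary by version and platform):

import sys

def rsum(l):
    if not l:
        return 0
    return l[0] + rsum(l[1:])

print(sys.getrecursionlimit())   # 1000 by default in CPython
try:
    rsum(list(range(5000)))      # a few thousand elements is already too deep
except RecursionError as e:
    print("recursion limit hit:", e)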
Yeah, to actually use tail-recursive patterns in Python (or, at least, CPython), except for known-to-be-sharply-constrained problems, you need a library like `tco` because of the implementation limits. Of course, many common recursive patterns can be cast as map, filter, or reduce operations, and all three are available in Python, as built-ins (the first two) or in the stdlib (functools.reduce).
Updating one or more variables in a loop naturally maps to reduce, with the updated variable being the accumulator (or, when there's more than one, with them being fields of the accumulator object).
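A small sketch of that mapping with functools.reduce (variable names are just illustrative): one updated value is the accumulator itself; several loop variables become fields of a tuple used as the accumulator.

from functools import reduce

nums = [3, 1, 4, 1, 5]

# One loop variable (a running total) -> the accumulator is just that value.
total = reduce(lambda acc, x: acc + x, nums, 0)

# Two loop variables (count and total) -> fields of a tuple accumulator.
count, total2 = reduce(lambda acc, x: (acc[0] + 1, acc[1] + x), nums, (0, 0))

print(total, count, total2)  # 14 5 14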
My definition of AGI is when AI doesn't need humans anymore to create new models (to be specific, models that continue the GPT3 -> GPT4 -> GPT5 trend).
By my definition, once that happens, I don't really see a role for Microsoft to play. So not sure what value their legal deal has.
Interesting. Is there a picture that explains how the layers talk to each other?
Is there a VM system? How does message passing work, what kinds of protections are between tasks?
I have had many ideas for such kernels over the years, but not the patience yet to implement any of them. I wonder if Claude could help me with generating a kernel in less than a day.
I should definitely make that picture. Yes, it has VM. Message passing all runs through the kernel. Task isolation is strictly memory based; whether a task acts on or responds to a message is something the tasks have to negotiate between themselves. You could whitelist, use a token scheme, or put any other authentication method in place, but out of the box there is none: if you know a task's ID, you can send it a message. Obviously there are ownership-of-resource constraints, and the kernel will always ensure that the receiving task knows who the sending task is, so that parties are not stealing each other's descriptors and such.
Instead of just saying: rsync on my system is version 3.2, have you tried copy/pasting rsync --help? In my experience, that would be enough for the AI to figure out what it needs to do and which arguments to use. I don't treat AI like an oracle, I treat it like an eager CS grad. I must give it the right information for the task.
We can only hope the parent comment indeed knows a better source or even what he is talking about. Let's give him the benefit of the doubt and assume that the shortness of his comment derives from a lack of time and not from a lack of knowledge.
Pssh, real developers read volumes 1, 3A, 3B, 3C, and 3D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual Combined Volumes, pages 3125 through 4200
And if you have a steady hand, a magnetized needle and a hard drive can be used for practice exercises.
The RISC-V privileged spec describes their paging implementation (under the Supervisor-level ISA), while the unprivileged spec has a chapter and two appendices describing their memory model (RVWMO), formal axiomatic and operational models included.
Finding the inevitable bugs in the formal model is left as an exercise to the reader (three bugs where the axiomatic/operational models disagree are already known).
If you want a more gentle introduction, the Computer Organization and Design book is pretty nice.
There never was a problem generating documentation from code
That doesn't match my experience at all. Before AI, generating documentation from code meant either extracting some embedded comments for a function, or just listing a function's arguments.
AI reads the implementation, reads the callers and callees, and provides much smarter documentation, with context. Something sorely lacking from previous solutions. Is it sometimes not completely accurate? Probably. But still a giant step forward.
> This analysis would suggest a 2.7× speedup vs. hash tables: 128 bytes vs. 48 bytes of memory traffic per uint64
It's ok to waste bandwidth; it doesn't directly impact performance. The limitation you are actually hitting (which does directly impact performance) is the number of memory accesses you can do (per cycle, for instance) and the latency of each access. With linear access, after some initial read the data is ready almost instantly for the CPU to consume, because the hardware prefetcher can stream it in ahead of time. With scattered data (hash tables), you pay the full latency penalty on essentially every read.
So the bandwidth ratio is not the right factor to look at to estimate performance.
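A very rough illustration of the access-pattern effect in Python (caveat: Python boxes every integer and adds interpreter and hashing overhead, so this only hints at the hardware behavior; a C or numpy version would show it much more cleanly). The sizes and names below are made up for the sketch:

import random
import time

N = 2_000_000
arr = list(range(N))                 # walked in order: prefetch-friendly traffic
table = {i: i for i in range(N)}     # hash table: scattered, dependent reads
keys = list(range(N))
random.shuffle(keys)                 # force a random probe order into the table

t0 = time.perf_counter()
s1 = 0
for x in arr:                        # sequential walk over the list
    s1 += x
t1 = time.perf_counter()

s2 = 0
for k in keys:                       # random probes into the hash table
    s2 += table[k]
t2 = time.perf_counter()

print(f"linear: {t1 - t0:.3f}s  scattered: {t2 - t1:.3f}s")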