griffzhowl's comments | Hacker News

I don't know which it was, but Dr. Jorge Diaz has an excellent video on Lagrangian mechanics as part of a series on quantum mechanics (this video just pertains to the formalism as it applies classically):

https://www.youtube.com/watch?v=QbnkIdw0HJQ


Use of the word "mechanical" to describe formal reasoning predates computers.

Here's the first sentence of Gödel's 1931 On formally undecidable propositions...

"The development of mathematics in the direction of greater exactness has—as is well known—led to large tracts of it becoming formalized, so that proofs can be carried out according to a few mechanical rules."

Leibniz had discussed calculating machines (and even thought about binary arithmetic being the most appropriate implementation), so the general idea probably goes back quite far.

Edit: Oh, I guess by "late 1930s" you're referring to Turing's 1936 paper where he defines Turing machines, rather than actual electronic computers. Still, understanding "formal" as "mechanical" predates it.


Yes, by Gödel's time the notion of "calculability" was already at least intuitively grasped, and it was then that "formal" was understood to mean mechanical. Turing made the connection rigorous.

Leibniz spoke of "automatons" and dreamt of some sort of "thoughtless" reasoning, but I don't know if he had the right building blocks to even think of mechanisation as we could since the 19th century. E.g. here's how Leibniz tries to justify the utility of formal reasoning: "Our thoughts are for the most part what I call ‘blind thoughts’. I mean that they are empty of perception and sensibility, and consist in the wholly unaided use of symbols... We often reason in words, with the object itself virtually absent from our mind."

So he definitely had the right concept - which is why formal logic is so old - but not the right language that most people would intuitively understand today.


Leibniz even invented a calculating machine. I didn't know he'd actually built one.

https://en.wikipedia.org/wiki/Calculus_ratiocinator


Perhaps it has to be that way: the motivation to build a mechanical computer rests on the belief that computation can be mechanised.

It's not a "belief"; that's what computability is. This definition is the whole point of the work by Church and Turing that resulted in the lambda calculus and the Turing machine, respectively.
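
To make the lambda-calculus side of that concrete, here's a small Haskell sketch (my own illustration, not anything from the thread): Church numerals, where a number is encoded as a function and arithmetic reduces to mechanical symbol-plumbing.

    -- Church numerals: numbers encoded as functions in the pure
    -- lambda calculus, so arithmetic is mechanical symbol rewriting.
    type Church a = (a -> a) -> a -> a

    zero :: Church a
    zero _ x = x

    suc :: Church a -> Church a
    suc n f x = f (n f x)

    plus :: Church a -> Church a -> Church a
    plus m n f x = m f (n f x)

    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (plus (suc zero) (suc (suc zero))))  -- prints 3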

Kevin Buzzard has been the main mathematician involved with Lean.

This is a recent talk where he discusses combining it with LLMs (he's somewhat sceptical it'll be revolutionary for producing new mathematics any time soon):

https://www.youtube.com/watch?v=K5w7VS2sxD0
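
For anyone who hasn't seen Lean: a proof is just a term that the kernel type-checks against the statement. A minimal Lean 4 example (my illustration, not from the talk; it leans on the core lemma Nat.add_comm):

    -- A machine-checked proof that addition on the naturals commutes:
    -- the kernel verifies this term has the stated type.
    theorem add_comm' (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b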


Yes, from this very same article:

"Back in 2019, we demonstrated that a quantum computer could solve a problem that would take the fastest classical supercomputer thousands of years."

The actual article has much more measured language, and in the conclusion section gives three criteria for "practical quantum advantage":

https://www.nature.com/articles/s41586-025-09526-6

"(1) The observable can be experimentally measured with the proper accuracy, in our case with an SNR above unity. More formally, the observable is in the bounded-error quantum polynomial-time (BQP) class.

(2) The observable lies beyond the reach of both exact classical simulation and heuristic methods that trade accuracy for efficiency.

[...]

(3) The observable should yield practically relevant information about the quantum system.

[...] we have made progress towards (1) and (2). Moreover, a proof-of-principle for (3) is demonstrated with a dynamic learning problem."

So none of the criteria they define for "practical quantum advantage" are fully met as far as I understand it.

The key word is "practical" - you can get quantum advantage from precisely probing a quantum system with enough coherent qubits that simulating it would be intractable on a classical computer. But that's exactly because a quantum computer is a quantum system; and because of superposition and entanglement, a linear increase in the number of qubits means an exponential increase in computational complexity for a classical simulation. So if you can implement and probe a quantum system of sufficient complexity (in this case ~40 qubits rather than the thousands it would take for Shor's algorithm), that is ipso facto "quantum advantage".
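
To put a number on that exponential blow-up: an exact classical simulation stores 2^n complex amplitudes for n qubits. A back-of-the-envelope Haskell sketch (the 16 bytes per amplitude is my assumption of double-precision complex numbers, not a figure from the paper):

    -- Memory needed to hold the full state vector of n qubits,
    -- at 16 bytes per complex amplitude (two 64-bit floats).
    stateVectorBytes :: Int -> Integer
    stateVectorBytes n = 2 ^ n * 16

    main :: IO ()
    main = mapM_ report [10, 20, 30, 40]
      where
        report n = putStrLn $
          show n ++ " qubits: " ++ show (stateVectorBytes n) ++ " bytes"

That works out to roughly 17.6 TB at 40 qubits, doubling with every added qubit - and that's just storing the state, before applying a single gate.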

It's still an impressive engineering feat, given the difficulty of maintaining coherence in the qubits under a precisely programmable gate structure that operates on them. But as far as I can see (and I've just scanned the paper and had a couple of drinks this evening), what it really means is that they've found a way to reliably implement in hardware a quantum system from which they can accurately extract information in a way that would be intractable to simulate on classical machines.

I might well be missing some subtleties for the aforementioned reasons, and I'm no expert, but it seems like the press release is unsurprisingly in the grayzone between corporate hype and outright deceit (which, as we know, is a large and constantly expanding multi-dimensional grayzone of heretofore unimagined fractal shades of gray).


I've heard that in the Tibetan tradition, where they ardently believe in reincarnation, thinking about the consequences of your actions for the next thousand years isn't unusual.

The German way seems to be a wet sponge followed by a squeegee to wipe off the excess water. Here's a masterclass from Frederic Schuller (and a rigorous advanced course in quantum mechanics):

https://youtu.be/GbqA9Xn_iM0?si=Cy7EQOvPtoRqgmhc&t=1070


One way I've seen it presented is that the early Greek philosophers were grappling with how to reconcile two basic facts: some things stay the same (constancy or regularity), and some things change.

Heraclitus came before Parmenides and said that everything changes. Parmenides said that nothing changes, and then the atomists, most prominently Democritus, synthesised these two points of view by saying that there are atoms which don't change, and all apparent change is explained by the relative motions of the different basic atoms. Plato was influenced by all of these. But I would say the theory of forms accounts for constancy or regularity more than for change, no?

Btw, the central concept of Parmenides' philosophy is always translated as "Being", but I couldn't find the original Greek word. Isn't it "ousia"?


I'm not sure what motivated Parmenides because he was more of a poet than anything - it just happened that his poetry was what we would now recognize as incredibly philosophical. He didn't really argue, he just wrote down what the "goddess" told him. But I think the basic problem is that everyone back then agreed that you can't get "something from nothing," and it sure seems like change requires being to come from non-being. The statue is there now, but before it was cast there wasn't a statue, just a chunk of bronze. If being can't come from non-being, how do you account for the "coming-to-be" of the statue? The Eleatic position as I understand it is that the change is just an illusion. Plato and Aristotle both react against this position and argue that it's silly (I'm very inclined to agree). They then give alternative accounts of what change really is.

I'm not sure about Plato, but the Aristotelian analysis is something like this: every thing that exists has the potential to exist in certain ways and not others, and it's said that the thing is "in potency" to exist in those potential ways. When something could exist in a certain way but right now doesn't, that's called a "privation." And the ways that the thing currently does exist are the "form" of the thing. So a substance changes when it goes from being in potency to being actual, and it does that by losing a privation. Aquinas follows Aristotle in giving the example: "For example, when a statue is made from bronze, the bronze which is in potency to the form of the statue is the matter; the shapeless or undisposed something is the privation; and the shape because of which it is called a statue is the form." Incidentally, Aquinas's short On the Principles of Nature (https://aquinas.cc/la/en/~DePrinNat) is a good overview of this theory, which is spread all over Aristotle (in the Categories, the Physics, and the Metaphysics).

As far as οὐσία is concerned, I think this is the complete Greek for Parmenides's poem: http://philoctetes.free.fr/parmenidesunicode.htm. In the places where that translation uses "being" you get slightly different words like γενέσθαι (to come into a new state of being) or εἶναι (just the infinitive "to be"). And looking at the definition of οὐσία (https://lsj.gr/wiki/%CE%BF%E1%BD%90%CF%83%CE%AF%CE%B1) it looks like most of the uses of that term specifically come well after Parmenides.


Ah, I was only thinking of Plato's theory of forms, not Aristotle's, but it makes sense it would be more about something taking on a form in Aristotle, since he was much more concerned with biology than Plato, whose forms are more timeless.

Thanks for the Parmenides poem. It seems much more straightforward than the various commentaries and analyses I've seen written about it.

VIII.16: ἔστιν ἢ οὐκ ἔστιν· :: It is or it is not

Very nearly "to be or not to be"...


It's more that category theory foregrounds the functions themselves, and their relationships, rather than the elements of the sets on which the functions operate. This higher-level perspective is arguably the more appropriate one when thinking about the structure of programs.

For more detail, see Bartosz Milewski, Category Theory for Programmers:

"Composition is at the very root of category theory — it’s part of the definition of the category itself. And I will argue strongly that composition is the essence of programming. We’ve been composing things forever, long before some great engineer came up with the idea of a subroutine. Some time ago the principles of structured programming revolutionized programming because they made blocks of code composable. Then came object oriented programming, which is all about composing objects. Functional programming is not only about composing functions and algebraic data structures — it makes concurrency composable — something that’s virtually impossible with other programming paradigms."

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...
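
To see the composition point in miniature, here's a toy Haskell sketch (my example, not Milewski's): every stage is an ordinary function, and the program is assembled with (.) alone, with no shared state anywhere.

    import Data.Char (toLower)

    -- A pipeline built purely by composing functions with (.).
    slugify :: String -> String
    slugify = map dash . map toLower
      where
        dash c = if c == ' ' then '-' else c

    main :: IO ()
    main = putStrLn (slugify "Category Theory For Programmers")
    -- prints "category-theory-for-programmers"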


> Category theory is what you get when you take mappings instead of sets as the primitive objects of your universe.

I'm not sure about that, because you still need some concept of set (or collection or class) to define a category: you need a set of objects and of mappings between them (technically that's a "small" category, but defining any larger category would require at least as much set-theoretic complication).

More exactly, whereas in set theory it's the membership relation between sets and their elements that is basic, in category theory it's the mapping between objects.

Nevertheless, the basic concepts of set theory can also be defined within category theory, so in that sense they're inter-translatable. In each case, though, you need some ambient idea of a collection (or class or set) of the basic objects. Tom Leinster has a brilliantly clear and succinct (8 pages) exposition of how this is done: https://arxiv.org/abs/1212.6543
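
That "mappings as basic" reading is visible if you transcribe the definition of a category directly into code. A minimal Haskell sketch (it mirrors the standard Control.Category class from base, with methods renamed to avoid clashing with the Prelude):

    -- A category: only morphisms, identities, and an associative
    -- composition. No mention of elements anywhere.
    class Category cat where
      idm  :: cat a a                        -- identity on each object
      comp :: cat b c -> cat a b -> cat a c  -- composition

    -- Ordinary functions form a category.
    newtype Fn a b = Fn (a -> b)

    instance Category Fn where
      idm = Fn id
      comp (Fn g) (Fn f) = Fn (g . f)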

The thing is, even defining first-order logic requires a (potentially infinite) collection of variables and constant terms; and set theory is formulated within first-order logic, so both set theory and category theory are on the same footing in seemingly requiring a prior conception of some kind of potentially infinite "collection". To be honest, I'm a bit puzzled as to how that works logically.


Defining first-order logic doesn't really require set theory, but it does require some conception of natural numbers. Instead of saying there's an infinite set of variables, you can have a single symbol x and a mark *, and then you can say a variable is any string consisting of x followed by any number of marks. You can do the same thing with constants, relation symbols and function symbols. This does mean there can only be countably many of each type of symbol, but for foundational purposes that's fine, since each proof will only mention finitely many variables.
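
That scheme is concrete enough to execute; a quick Haskell sketch of the enumeration:

    -- The countable supply of variables: x followed by any number of marks.
    variables :: [String]
    variables = [ 'x' : replicate n '*' | n <- [0 ..] ]

    main :: IO ()
    main = mapM_ putStrLn (take 4 variables)  -- x, x*, x**, x***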

Allowing uncountably many symbols can be more convenient when you apply logic in other ways, e.g. when doing model theory, but from a foundational perspective, when you're doing stuff like that, you're not using the "base" logic but rather the formalized version of logic that you can define within the set theory that you defined using the base logic.


Thanks, that sharpens it to a question about natural numbers, or at least some idea of an indefinitely extensible collection of unique elements: it seems the ordering on the numbers isn't required for a collection of variables; we just need them to be distinct.

I don't think you need set theory to define set theory (someone would have noticed that kind of circularity), but there still seems to be some sleight of hand in the usual presentations, with authors often saying, in the definition of first-order logic prior to defining set theory, that "there is an infinite set of variables". Then they define a membership relation, an empty set, and then "construct the natural numbers"... But I guess that's just sloppiness or a different concern in the particular presentation, and the seeming circularity is eliminable.

Instead of saying at the outset that we require natural numbers, maybe it would be enough to give an operation or algorithm that can be repeated indefinitely many times to yield new symbols? This is effectively what you're illustrating with x, x*, x**, etc.

If that's all we need, then it seems perfectly clear, but this kind of operational or algorithmic aspect of the foundations of logic and mathematics isn't usually acknowledged, or at least the usual presentations don't put it this way, so I'm wondering if there's some inadequacy or incompleteness about it.


If you allowed infinite graphs, maybe. But how would you define a functor or a natural transformation in graph theory? It seems like you would need to construct a conceptual system that is just equivalent to category theory.

