Those aren’t syllable divisions, they’re hyphenation points!
From the footnote on page 219 of Word by Word by Kory Stamper (formerly a lexicographer at Merriam-Webster):
> Here is the one thing that our pronunciation editor wishes everyone knew: those dots in the headwords, like at “co·per·nic·i·um,” are not marking syllable breaks, as is evident by comparing the placement of the dots with the placement of the hyphens in the pronunciation. Those dots are called “end-of-line division dots,” and they exist solely to tell beleaguered proof-readers where, if they have to split a word between lines, they can drop a hyphen.
The title of the submission should be changed to “Type in Wall·E”, “Typesetting in Wall·E”, or “Typography in Wall·E”; the word “typeset” is a verb or past participle, not a noun.
How does infinite descent work using a proof assistant? It's been quite a while, so I may be remembering incorrectly, but this is my understanding:
Coq uses an inductive type like `Inductive N := O : N | S : N → N`, and a first-order theory of arithmetic axiomatized this way admits nonstandard models (models with an initial segment isomorphic to the naturals, plus elements not reachable by repeated application of S, which defeats infinite descent). But universal quantification in System F (inherited by the calculus of inductive constructions) corresponds to a fragment of second-order logic, where there is only one model up to isomorphism (the theory is categorical).
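A minimal sketch in Lean 4 (names illustrative; Coq's `Inductive` works analogously): the eliminator generated for an inductive type quantifies over arbitrary predicates, which is where the second-order strength comes from, so no separate induction axiom schema is needed and nonstandard elements cannot arise.

```lean
-- The inductive definition gives the induction principle for free.
inductive N where
  | zero : N
  | succ : N → N

-- Every proof about N must cover exactly the constructors; the
-- eliminator N.rec quantifies over all motives (predicates), which is
-- the second-order flavor mentioned above.
example (P : N → Prop) (h0 : P .zero)
    (hs : ∀ n, P n → P n.succ) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ n ih => exact hs n ih
```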
Not my area at all, but surely you would just add the relevant induction schema as an axiom? Are nonstandard models relevant to proof assistants?
(to be clear - I know what nonstandard models are and how they work, and I've mucked around in coq/lean, just wondering why it matters to a proof assistant)
My line of thinking roughly was: "each type system corresponds to a particular logic and vice versa; in a proof by induction, the different cases correspond to the type constructors, but what does the necessary axiom schema or second-order axiom of induction correspond to (in order to rule out nonstandard elements)? And proof by infinite descent seems to require something beyond ordinary induction, so where does well-foundedness come from?"
I realize these questions may not even be posed sensibly; maybe the answer is as simple as, "the naturals are defined constructively here; of course they are the naturals and therefore are well-founded". As I said, it's been a while, so things are hazy in my mind.
I see, that's interesting. You inspired me to go read up about CIC, it's really cool, thanks! And in hindsight I realise my reply was totally wrong anyway haha.
> Research has generally linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict.
Correct. The good news is that Elements still works otherwise; you just need to add the missing axiom.
But many other "proofs" have been found to be false. The book "Metamath: A Computer Language for Mathematical Proofs" (by Norm Megill and yours truly) is available at: https://us.metamath.org/downloads/metamath.pdf - see section 1.2.2, "Trusting the Mathematician". We list just a few of the many examples of proofs that weren't.
Sure, there can be bugs in programs, but there are ways to counter such bugs that give FAR more confidence than can be afforded to humans. Lean's approach is to have a small kernel, and then review the kernel. Metamath is even more serious: the Metamath approach is to have an extremely small language, and then re-implement it many times (so that a bug is unlikely to be reproduced in all implementations). The most popular Metamath database is "set.mm", which uses classical logic and ZFC. Every change is verified by 5 different verifiers in 5 different programming languages originally developed by 5 different people:
* metamath.exe aka Cmetamath (the original C verifier by Norman Megill)
* checkmm (a C++ verifier by Eric Schmidt)
* smetamath-rs (smm3) (a Rust verifier by Stefan O'Rear)
* mmj2 (a Java verifier by Mel L. O'Cat and Mario Carneiro)
* mmverify.py (a Python verifier by Raph Levien)
Using indices is a (slight) improvement over pointers, since in Rust you will benefit from bounds checking (restricting you to the memory allocated for that tree).
(If you use u32 for the indices, you also save some space.)
It is the kind of low-hanging optimization that I used to think was greatly oversold. And, to be fair, for many programs I would still wager it probably is. If you are chasing a ton of pointers, though, it is definitely worth considering how many more you can hold in memory with smaller sizes.
I think I remember a discussion a while back lamenting that we just tacitly accepted wide pointers. If anyone has a link going over that, I'd be delighted to read it again. I'm 90% sure I did not understand it when I first saw it. :D
The idea is that every game of poker is a walk through a tree. Keeping track of probabilities and outcomes allows for exploration of the correct way to play. Using indices made building the tree easier (though I am still one bug away from it all working).
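A minimal sketch of the index-based approach described above (names and structure are illustrative, not the commenter's actual code): nodes live in a `Vec` and refer to children by `u32` index instead of by pointer, so a bad index panics via bounds checking instead of corrupting unrelated memory.

```rust
// Hypothetical index-based tree: children are u32 indices into `nodes`.
#[derive(Debug)]
struct Node {
    prob: f64,          // probability of reaching this state
    children: Vec<u32>, // indices into Tree::nodes
}

#[derive(Default)]
struct Tree {
    nodes: Vec<Node>,
}

impl Tree {
    /// Append a node and return its index.
    fn add(&mut self, prob: f64) -> u32 {
        self.nodes.push(Node { prob, children: Vec::new() });
        (self.nodes.len() - 1) as u32
    }

    /// Record `child` as a child of `parent`.
    fn link(&mut self, parent: u32, child: u32) {
        self.nodes[parent as usize].children.push(child);
    }

    /// Total probability mass in the subtree rooted at `root`.
    fn subtree_prob(&self, root: u32) -> f64 {
        let node = &self.nodes[root as usize];
        node.prob
            + node.children.iter().map(|&c| self.subtree_prob(c)).sum::<f64>()
    }
}

fn main() {
    let mut t = Tree::default();
    let root = t.add(1.0);
    let a = t.add(0.5);
    let b = t.add(0.5);
    t.link(root, a);
    t.link(root, b);
    println!("{}", t.subtree_prob(root)); // 2
}
```

Since the whole tree is one allocation, dropping the `Tree` frees everything at once, and there is no lifetime bookkeeping between nodes.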
This is beside the point, but the only way an engine can possibly search to depth 20 is by being extremely selective, using heuristics to guess move quality (such as static exchange analysis, the killer move heuristic, history tables, countermove tables, follow-up tables, late move reductions, &c). The discovery was that it's often better to guess move quality dynamically, based on the observed utility of similar moves in similar contexts, rather than statically. But this is not especially "simple", and it still (necessarily!) involves judging up front a few moves as likely-good and the vast majority of moves as unworthy of further examination.
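To make one of the named heuristics concrete, here is a hedged sketch of the killer move heuristic (all names are illustrative, not from any real engine): quiet moves that caused a beta cutoff at a given depth are remembered and tried first at sibling nodes of the same depth.

```rust
// Hypothetical killer-move table: two remembered moves per search depth.
const MAX_DEPTH: usize = 64;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Move(u16); // packed from/to squares, for illustration

struct Killers {
    table: [[Option<Move>; 2]; MAX_DEPTH],
}

impl Killers {
    fn new() -> Self {
        Killers { table: [[None; 2]; MAX_DEPTH] }
    }

    /// Record a move that caused a cutoff at `depth`.
    fn store(&mut self, depth: usize, m: Move) {
        let slot = &mut self.table[depth];
        if slot[0] != Some(m) {
            slot[1] = slot[0]; // demote the older killer
            slot[0] = Some(m);
        }
    }

    /// Ordering score: killers sort ahead of other quiet moves.
    fn score(&self, depth: usize, m: Move) -> u32 {
        match self.table[depth] {
            [Some(k), _] if k == m => 2,
            [_, Some(k)] if k == m => 1,
            _ => 0,
        }
    }
}

fn main() {
    let mut killers = Killers::new();
    killers.store(3, Move(42));
    killers.store(3, Move(7)); // newest killer takes slot 0
    let mut moves = vec![Move(1), Move(42), Move(7)];
    moves.sort_by_key(|&m| std::cmp::Reverse(killers.score(3, m)));
    println!("{:?}", moves); // [Move(7), Move(42), Move(1)]
}
```

This is exactly the "observed utility of similar moves in similar contexts" idea in miniature: the ordering is learned during the search itself rather than computed statically.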
Both type A engines (to use Shannon's terminology) and type B engines are relatively weak. The strong engines are all hybrids.
Per 400 years, there is one leap day every 4 years (100 leap days), except when the year is divisible by 100 (so we overcounted by 4 and there are 100 – 4 = 96 leap days), except when the year is divisible by 400 (so we need to add that day back and arrive at 100 – 4 + 1 = 97). This gives us 97/400 = 0·2425.
The tropical year is about 365·24219 days long, but that's not relevant to timekeeping.
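The 97-per-400 count above can be checked directly from the Gregorian rule (a small self-contained sketch):

```rust
// Gregorian leap year rule: divisible by 4, except centuries,
// except centuries divisible by 400.
fn is_leap(year: u32) -> bool {
    year % 4 == 0 && (year % 100 != 0 || year % 400 == 0)
}

fn main() {
    let leap_days = (1..=400).filter(|&y| is_leap(y)).count();
    println!("{}", leap_days);                        // 97
    println!("{}", 365.0 + leap_days as f64 / 400.0); // 365.2425
}
```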