
But that stores all elements in memory?


Love this algorithm. It feels like magic, and then it feels obvious and basically like binary search.

Similar to the algorithm to parallelize the merge step of merge sort. Split the two sorted sequences into four sequences so that `merge(left[0:leftSplit], right[0:rightSplit])+merge(left[leftSplit:], right[rightSplit:])` is sorted: leftSplit+rightSplit should be half the total length, and every element in the left partition must be <= every element in the right partition.

Seems impossible, and then you think about it and it's just binary search.
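A minimal sketch of that split step in TypeScript (hypothetical names, assuming numeric arrays sorted ascending; ties can go either way):

```typescript
// Find (leftSplit, rightSplit) so that the "low" halves together hold half the
// elements and every low element is <= every high element. Binary search on
// how many elements the low half takes from `left`.
function findSplit(left: number[], right: number[]): [number, number] {
  const half = Math.floor((left.length + right.length) / 2);
  let lo = Math.max(0, half - right.length); // smallest legal leftSplit
  let hi = Math.min(half, left.length);      // largest legal leftSplit
  while (lo < hi) {
    const leftSplit = Math.floor((lo + hi) / 2);
    const rightSplit = half - leftSplit;
    if (left[leftSplit] < right[rightSplit - 1]) {
      lo = leftSplit + 1; // left's first "high" element is too small: take more from left
    } else {
      hi = leftSplit;
    }
  }
  return [lo, half - lo];
}

// merge(left.slice(0, leftSplit), right.slice(0, rightSplit)) followed by
// merge(left.slice(leftSplit), right.slice(rightSplit)) is then sorted,
// and the two merges can run in parallel.
```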


To me the big win is that you don't have to memoize transitively. Occasionally someone asks me for help optimizing some React code, and the dependency array contains some object/callback that has a dependency which has a dependency which wasn't memoized.


The big problem is that

- memoization is sometimes crucial for performance.

- passing objects and functions around is common.

- you cannot compare objects and functions for semantic equality.

If you've written larger React apps you almost certainly had to useCallback at some point so that memoization worked; the compiler fixes that.

Whenever you construct an object or function, the React Compiler memoizes it on its free variables, so pointer equality is sufficient.
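A hedged sketch of the manual version (hypothetical component and `saveDraft` helper): without `useCallback`, `onSave` would be a fresh function object on every render and the memoized child would never hit its cache; the compiler inserts the equivalent caching for you.

```tsx
import { memo, useCallback } from "react";

function saveDraft(id: string) { console.log("saving", id); } // stand-in side effect

// Memoized child: only re-renders when its props change by pointer equality.
const SaveButton = memo(function SaveButton({ onSave }: { onSave: () => void }) {
  return <button onClick={onSave}>Save</button>;
});

function Editor({ draftId }: { draftId: string }) {
  // Cached on its free variable `draftId`: the same function object is reused
  // across renders until `draftId` changes, so <SaveButton> can skip re-rendering.
  const onSave = useCallback(() => saveDraft(draftId), [draftId]);
  return <SaveButton onSave={onSave} />;
}
```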

Though I do think the compiler caches too aggressively, even when there are no free variables and hoisting would be sufficient, or when escape analysis shows memoization is unnecessary for semantics.


Computer-checked proofs are one area where AI could be really useful fairly soon, though it may be closer to neural networks in chess engines than to full LLMs.

You already use tons of solvers because proving everything by hand is mind-numbingly tedious and time-consuming, but tactics/solvers really struggle if you throw too many theorems and lemmas at them. Neural nets as search machines that pattern-match relevant lemmas fit perfectly. Similarly, induction and higher-order unification are full-on code synthesis, where brute-force iteration over all possible syntax trees is really inefficient.

And solvers do a backtracking search anyway, so if the AI returns 95% irrelevant lemmas it doesn't matter; it would still be a dramatic improvement over manually searching for them.

Though I'm not convinced that computer-checked proofs are necessarily good for communication. There are reasons besides page count that proofs for human consumption are high-level and gloss over details.


Not a mathematician, but knowing how proof checkers work I don't expect them to become practical without some significant future advancement; they certainly aren't right now.

The main problem is that in order to work they have to be connected to the formal definition of the mathematical objects you are using in your proof. But nobody thinks that way when writing (or reading!) a proof: you usually have a high-level, informal idea of what you are "morally" doing, and then you fill in formal details as needed.

Computer code (kinda) works because the formal semantics of the language is much closer to your mental model, so much so that in many cases it's possible to give an operational semantics; but generally speaking, math is different in its objectives.


You're commenting on a thread about an interview where an actual Fields medalist explains how proof assistants are actually used to do meaningful mathematical work, today. The issues you raise are all discussed in the interview.


practical = practical for routine mathematical work, which they aren't right now, which Tao says in the article itself.

Never have I ever said or implied, here or anywhere else, that proof checkers are useless and bad and stupid.


> practical = practical for routine mathematical work, which they aren't right now, which Tao says in the article itself.

I see, I guess it would have been nice to be clearer from the start.

> Never have I ever said or implied, here or anywhere else, that proof checkers are useless and bad and stupid.

Did I say you did?


They = LLMs.

As I’ve interpreted it, they mean that LLMs and similar tech isn’t particularly helpful to theorem provers at the moment.


You yourself identified some problems that a modern AI guru would salivate over: formal specifications from natural language, "filling in" formal details, etc.

I'll add another one: NN-based speedups to constraint solvers, which are usually something like branch-and-bound. This is (I've heard) a very active area of research.


The whole article is about Terence Tao predicting it will be practical without a significant breakthrough.

>> I don’t think mathematics will become solved. If there was another major breakthrough in AI, it’s possible, but I would say that in three years you will see notable progress, and it will become more and more manageable to actually use AI.

He thinks you need another major breakthrough to solve mathematics, i.e. to make mathematicians obsolete. But even if a major breakthrough doesn't happen, AI will be useful for mathematicians in three years.

Of course you can still disagree with Terence Tao; being one of the best mathematicians doesn't make him an oracle.

> But nobody thinks that way when writing (or reading!) a proof

He even very specifically addressed this issue with:

>> Let’s say an AI supplies an incomprehensible, ugly proof. Then you can work with it, and you can analyze it. Suppose this proof uses 10 hypotheses to get one conclusion—if I delete one hypothesis, does the proof still work?

He clearly believes an ugly, incomprehensible proof is much better than no proof at all.


Actually, it is possible to do a high-level outline of a proof development and later fill in the details. This is done by starting out with axioms: don't define the basic types, but state that they exist as axioms and also write down their basic properties as axioms. Then later, when you have some development going on, fill in the blanks by turning the axioms into definitions.
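A minimal Lean-style sketch of that workflow (hypothetical names): state the basic type and its properties as axioms, build on top of them, and swap in real definitions later.

```lean
-- Placeholder interface: no definitions yet, just the assumed facts.
axiom Point : Type
axiom dist : Point → Point → Nat
axiom dist_comm : ∀ p q : Point, dist p q = dist q p

-- Development on top of the axioms proceeds as usual.
theorem dist_comm' (p q : Point) : dist q p = dist p q :=
  dist_comm q p

-- Later, `Point`, `dist` and `dist_comm` can be replaced with actual
-- definitions and proofs without touching the development above.
```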


Automated theorem proving is a concept that has been around for a long time already.

Isabelle's sledgehammer strategy combines automatic theorem provers (E, Z3, SPASS, Vampire, ...) and attempts to prove/refute a goal. In theory this seems fine, but in practice you end up with a reconstructed proof that applies 12 seemingly arbitrary lemmas. This sort of proof is unreadable and very fragile. I don't see how AI will magically solve this issue.


What is a "fragile" proof?


It no longer works when some definition or other theorem changes slightly. Most of these proof assistants provide ways to write proofs that adjust slightly and "plug together" the other theorems in the "obvious" way.


So it's a brittle proof


> Computer-checked proofs are one area where AI could be really useful fairly soon, though it may be closer to neural networks in chess engines than to full LLMs.

Yes, if significant progress is made in sample-efficiency. The current neural networks are extremely sample-inefficient. And formal math datasets are much smaller than those of, say, Python code.


Lambdas have two hidden features: variable capture lets you access variables/values from outer scopes, and statements/calls to other functions let you do control flow.

Monads are types that let you build chains of lambdas, so you can mix control flow and variable scoping with some type-specific logic.

Async IO/streams/optionals are monads. It's no accident that these could be programmed as nested callbacks/nested loops/nested if statements. That's the nested chain of Monad->lambda->Monad->...

Monads work well with syntax sugar (think async/await), and some languages like Haskell can abstract over them in neat ways.
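A small TypeScript sketch (hypothetical `fetchUser`/`fetchTeam` stubs), treating Promise as a monad-like type: each `.then` takes a lambda, the lambdas capture earlier results from enclosing scopes, and async/await is the syntax sugar that flattens the same chain.

```typescript
async function fetchUser(id: string): Promise<{ name: string; teamId: string }> {
  return { name: `user-${id}`, teamId: "t1" }; // stand-in for real IO
}
async function fetchTeam(id: string): Promise<{ name: string }> {
  return { name: `team-${id}` };
}

// Explicit Monad -> lambda -> Monad -> ... chain; `user` stays visible
// inside the inner lambda via variable capture.
function describeUser(id: string): Promise<string> {
  return fetchUser(id).then(user =>
    fetchTeam(user.teamId).then(team => `${user.name} (${team.name})`)
  );
}

// The same chain with syntax sugar.
async function describeUserSugared(id: string): Promise<string> {
  const user = await fetchUser(id);
  const team = await fetchTeam(user.teamId);
  return `${user.name} (${team.name})`;
}
```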


ORMs do a lot for you if you require related data from multiple tables:

- Eager loading the data efficiently is annoying; most ORMs can do batched select-in loading now.

- Detecting changes against the DB is annoying if you update more than a single row; it requires change tracking or a diff.

- Saving changes is a topological sort so you don't delete rows while foreign keys to them still exist, and you have to wait for the foreign key target to exist when a new row is inserted.

But just let me tell the ORM a spanning tree of the schema and load that! Why do all ORMs default to cyclic object graphs and lazy loading?


Ideally a query could return multiple CTE tables.

A lot of ORMs default eager loading to select-in loading with multiple queries because it performs best: first A is loaded, and then the B query has a `WHERE B.foreign_key IN (...all A ids loaded in the first step...)`.

Still has overhead, but usually less than duplicating data or using another serialization format like returning hierarchical JSON objects.
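A rough TypeScript sketch of the select-in pattern (hypothetical `Db.query` helper standing in for whatever driver/ORM issues the SQL, not any specific library's API):

```typescript
interface Author { id: number; name: string; books?: Book[] }
interface Book { id: number; authorId: number; title: string }
interface Db { query<T>(sql: string, params?: unknown[]): Promise<T[]> }

async function loadAuthorsWithBooks(db: Db): Promise<Author[]> {
  // Query 1: load the parent rows.
  const authors = await db.query<Author>("SELECT id, name FROM authors");
  if (authors.length === 0) return authors;

  // Query 2: load all related rows at once, keyed by the parent ids
  // (the `WHERE foreign_key IN (...)` step).
  const ids = authors.map(a => a.id);
  const books = await db.query<Book>(
    'SELECT id, author_id AS "authorId", title FROM books WHERE author_id = ANY($1)',
    [ids]
  );

  // Group the children back onto their parents in memory.
  const byAuthor = new Map<number, Book[]>();
  for (const b of books) {
    const list = byAuthor.get(b.authorId);
    if (list) list.push(b); else byAuthor.set(b.authorId, [b]);
  }
  for (const a of authors) a.books = byAuthor.get(a.id) ?? [];
  return authors;
}
```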


Often, the concrete problems we are interested in have some internal structure that makes them easier to solve in practice. Solving Boolean formulas is NP-complete, but we routinely solve problems with millions of variables.

ILP (and SAT) solvers are interesting because they are great at ruling out large areas that cannot contain solutions. It's also easy to translate many problems into ILP or SAT problems.


TikTok did use IP tracking of reporters working on a story to try to find whistleblowers. According to TikTok, that was a misguided employee who was let go shortly after.

Though they are hardly alone in the 'misguided employees' department. eBay had a whole scheme where they flew out a security team to harass and stalk a couple that had criticized eBay in a blog post. https://www.npr.org/2022/09/30/1126078948/live-spiders-and-c...

I think Facebook and Uber had their own location tracking scandals as well.

