The intent isn't to find good hires per se, but to whittle down the list of applicants to a manageable number in a way that doesn't invite discrimination lawsuits.
It's the same reason companies used to reject anyone without a degree. But then everyone got a degree, so it stopped being an effective filter, hence things like algorithm tests showing up to fill the void.
Once you've narrowed the list, you can worry about figuring out who is "good" by giving the remaining candidates additional attention.
I have a suspicion that "good candidate" is being gerrymandered. What counted as "good" in 1990 may have become irrelevant, or even detrimental, by the 2000s. I say that as someone who is actually good at algorithm questions myself. I think GP, like other Google defenders, is parroting pseudo-science.
I agree. But also, if it works to get you jobs there, why wouldn't you defend it? I might be inclined to do so as well; it guarantees me a place even if I lack the soft skills for the role.
I'm not sure that it's just L1 vs L2, since the Wigner formulation of quantum mechanics uses real-valued quasi-probabilities, but ones which can take negative values.
Oh, and also, if you swap out h-bar in Wigner's equations with some wavelength \lambda, you can interpret it in terms of classical wave optics... somehow. I'm not sure.
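For reference (this is just the standard textbook definition, nothing specific to this thread), the position-space Wigner function is

    W(x, p) = (1/(\pi \hbar)) \int \psi^*(x + y) \psi(x - y) e^{2 i p y / \hbar} dy

It's real by construction, but nothing forces it to be non-negative; the first excited state of the harmonic oscillator, for example, has W < 0 near the origin.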
Given that LLMs can make their own calls to the command line, I think it's obsolete ten times over. Much of the learning curve, and therefore most of the downside of CLIs, is now gone. A person can now learn only the most basic facts about operating systems and let the AI handle the rest. Given all of that, I'm not sure where the world of software and IT is heading.
There's a danger though that people won't even learn the basics.
A) LLMs have caused disasters because juniors have no clue about the context of the commands they're applying.
B) With the current enshittification, your comment is the obsolete one, 100 times over. Why? Enjoy your crappy iOS-ified OS with a maze of dependencies (Python 3, for instance), SIP, and updates breaking everything.
C) If anything, the newbies are the doomed ones, as they don't know anything about computers. Solaris SMF commands (or AIX ones) blindly applied to RHEL systems? Why not?
By "the basics", I mean the Unix command line model, with its shells and file descriptors and piping. What matters less now are the arcane details, like which flags to use when running a command.
I don't know if you're aware that there's a formal analogy between matrix operations and regex operations:
Matrix            Regex
------            -----
A + B             A|B
A * B             AB
(1 - A)^{-1}      A*

(The last row works because (1 - A)^{-1} = 1 + A + A^2 + ..., which mirrors A* = ε | A | AA | ...)
To make the analogy between array programming and regex even more precise: I think you might even be able to make a regex engine that uses one boolean matrix for each character. For example, if you use the ASCII character set, you'd use 127 of these boolean matrices. The matrices should encode transitions between NFA states. The set of entry states should be indicated by an additional boolean vector; and the accepting states should be indicated by one more boolean vector. The regex operations would take 1 or 2 NFAs as input, and output a new NFA.
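Here's a minimal sketch of that idea in Python. The NFA below, for the regex a(b|c), is hand-built for illustration; a real engine would construct the matrices from the regex operations:

    def step(vec, mat):
        # Boolean vector-matrix product: new[j] = OR over i of (vec[i] AND mat[i][j]).
        n = len(mat)
        return [any(vec[i] and mat[i][j] for i in range(n)) for j in range(n)]

    def matches(trans, start, accept, text):
        vec = start
        for ch in text:
            if ch not in trans:
                return False  # no state anywhere has a transition on this character
            vec = step(vec, trans[ch])
        return any(v and a for v, a in zip(vec, accept))

    # NFA for a(b|c): state 0 --a--> state 1 --b or c--> state 2.
    F, T = False, True
    trans = {
        "a": [[F, T, F], [F, F, F], [F, F, F]],
        "b": [[F, F, F], [F, F, T], [F, F, F]],
        "c": [[F, F, F], [F, F, T], [F, F, F]],
    }
    start  = [T, F, F]  # entry states
    accept = [F, F, T]  # accepting states

    print(matches(trans, start, accept, "ab"))  # True
    print(matches(trans, start, accept, "ac"))  # True
    print(matches(trans, start, accept, "aa"))  # False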
Didn't know that, but I assume you can share most of the engine's logic anyway. Those kinds of generalisations tend to break down once you get to practical implementations.
I suspect the same regarding the analogy with regex, but I still haven't finished learning an array language. Do you know what you'd use an array language for?
Given the amount of progress in AI coding in the last 3 years, are you seriously confident that AI won't increase programming productivity in the next three?
This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) got Covid. What's a few hundred people? A few weeks later, everyone knew somebody who did.
Okay, so if and when that happens, get excited about it _then_?
Re the Covid metaphor: that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn't be alarmed by them, of course, but "one thing really blew up, therefore all things will blow up" isn't a reasonable thought process.
AI codegen isn't comparable to a highly-infectious disease: it's been a lot more than a few weeks. I don't think your analogy is apt: it reads more like rhetoric to me. (Unless I've missed the point entirely.)
From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.
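Just to make the arithmetic concrete (taking both doubling-time figures at face value, which is of course the contested part):

    # Growth factor after `elapsed` time units, given a doubling time
    # in the same units.
    def growth(doubling_time, elapsed):
        return 2 ** (elapsed / doubling_time)

    print(growth(3, 21))   # COVID, 3-day doubling: ~128x in three weeks
    print(growth(7, 36))   # capabilities, 7-month doubling: ~35x in three years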
I think disagreement in threads like this can often be traced back to a miscommunication about the state today (or historically) versus the trajectory. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago when I last tested them; see this OP, which is pre-Opus 4.5). Capabilities forecasters are saying: given the trend, what will things be like in 2026-2027?
The "COVID-19's doubling time was ≈3 days" figure was the output of an epidemiological model, based on solid and empirically-validated theory, based on hundreds of years of observations of diseases. "AI capabilities' doubling time seems to be about 7 months" is based on meaningless benchmarks, corporate marketing copy, and subjective reports contradicted by observational evidence of the same events. There's no compelling reason to believe that any of this is real, and plenty of reason to believe it's largely fraudulent. (Models from 2, 3, 4 years ago based on the "it's fraud" concept are still showing high predictive power today, whereas the models of the "capabilities forecasters" have been repeatedly adjusted.)
> But there are such experiments. String theory says that the result of such experiment is: Lorentz invariance not violated.
This is not a new prediction. String theory, I hear, makes no new predictions at all. I don't understand why you need to be told this.
To your point, there exist various reformulations of physics theories, like Lagrangian mechanics and Hamiltonian mechanics, which are both reformulations of Newtonian mechanics. But these don't make new predictions. They're just better for calculating or understanding certain things. That's quite different from proposing special relativity for the first time, or thermodynamics for the first time, which do make novel predictions compared to Newton.
i^i isn't anything. Please don't write this. Of the two inputs to the function (w, z) -> w^z = exp(z ln(w)), only z is a complex number, so that bit is OK. The problem is that w is NOT a complex number but a point on a particular Riemann surface, namely the natural domain of the function ln. That particular Riemann surface looks like an endless spiral staircase; the more grown-up term might be "a helix". When you write informally "w = i", that could mean any point with ln(w) = i(pi/2 + 2 pi k) for some integer k: i pi/2, i(pi/2 + 2 pi), i(pi/2 - 2 pi), etc. Incidentally, for z = i, w^z is then always a real number. However, there's an infinite sequence of those numbers that it could equal.
I suppose that by pure convention, "w=e" is understood as denoting a single unique point on the helix. But extending that convention to w=i starts to look like a recipe for confusion.
Their argument is that ln(z), where z is a complex number, is a multi-valued function, so the statement "Explore why i^i is a real number" could be misinterpreted as saying that i^i is a single well-defined real value.
Yes, but it seems strange to claim that i^i isn't anything. That just completely ignores what's interesting, namely that e^{i · i(π/2 + 2πk)} = e^{-(π/2 + 2πk)} is real for all k ∈ Z.
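A quick numerical check of that (my own throwaway sketch, nothing more):

    import cmath, math

    for k in range(-2, 3):
        log_i = 1j * (math.pi / 2 + 2 * math.pi * k)  # one branch of ln(i)
        value = cmath.exp(1j * log_i)                  # i^i = exp(i * ln(i))
        print(k, value)  # a different real number for every branch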
In maths, an expression only ever equals a single number. You can't say i^i = e^(-pi/2) and then also say that i^i = e^(3 pi / 2), because if i^i equals two things, then those two things are also equal to each other, and then we get that e^(-pi/2) = e^(3 pi / 2), which is wrong.
Riemann surfaces are the only way to fix this. And they're not even that hard to understand, but I'm not sure if you do.
Apologies if this is pedantic but "multi-valued function" is not a thing. The function doesn't have multiple values here. Saying "multi-valued function" is not just a way of misleading people about what's really happening, but it's almost the perfect way to stop people from finding out. Do people who say "multi-valued function" know what's really happening? Do you know what a Riemann surface is?
Do you know what a Riemann surface is? Because if you don't, then you don't know what you're talking about, and you should stop getting people confused.