"Learning to get good at something in general" is the real thing these kinds of tests proxy for.
There are a lot of things in life that are a pain in the butt, hard, and stupid, but that you just gotta do.
I went through the leetcode grind a few times. The last time, I ended up taking a job at a startup that I was using as a "practice" interview before the FAANG ones, because it let me relocate out of California. I never would have interviewed there, though, if I hadn't been laser-focused on leaving my previous job for a better opportunity.
If you're the kind of person who grinds leetcode, you're _probably_ also the kind of person who spends the extra time reading all the documentation for redux. Or spends the extra time writing a tricky unit test before shipping something. That being said, there's still _plenty_ of people who fail leetcode phone interviews (e.g. myself) but go the extra mile, and FAANG companies still manage to let some questionable hires through the cracks. But engineer hiring is currently a filtering game, not a sourcing game.
You can either learn the rules of Monopoly and play or sit in the corner during board game night and complain you're not playing checkers. shrug
I think this is the most plausible explanation too, but is this true in practice? I would have thought most talented programmers don't want to waste their time on leetcode, so it would primarily select for people who can't succeed without dedicating time to it.
Have you seen this first-hand and compared leetcode grinders against say, prolific open source developers?
Most people in the world care about money, and practicing leetcode is worth way more money per time spent than basically anything else for a talented programmer, up until you hit the level where you can get into top paying companies.
You can get into well-paying jobs without it, but then you need skills that are much harder to practice, and usually you also need a lot of personal connections, since those jobs often go to friends.
I'm not really convinced by this argument. Consistently improving your overall skills will lead you to outpace the competition significantly more in the long run than min-maxing, unless your industry is extremely bad at differentiating ability.
I suppose there's an argument to be made that the marginal gain from Leetcode, given thousands of hours of practice already, is the better side of the trade-off, but that's a circular argument. You're going to select your programmers based on whether they recently spent 30 hours doing Leetcode, rather than on what they got out of the 10,000 hours of regular coding practice they did beforehand? That doesn't seem like a good selection criterion to me.
That's why I asked whether OP had actually seen the effect he was describing and compared results directly. It's easy to come up with hypothetical benefits.
It is a bit tiring to explain the process to everyone, but here goes:
Companies don't select programmers based on leetcode performance alone; they care a great deal about those 10,000 hours people spent. However, if you fail the leetcode test you are out, no matter what other experience you have. And it so happens that the set of companies paying twice the average at every experience level tends to do leetcode interviews, so whatever level of experience you have, leetcoding pays off. Sometimes your experience will impress low-tier companies but not high-tier ones, so they downlevel you and ultimately your total compensation doesn't move much.
If you haven't seen this process, then you are either ignorant, don't live in an area with high-tier companies, or don't have a lot of experience. Personally, I doubled my income by going through this process, and I've seen many others do so as well.
Now, lots of low-tier companies saw high-tier companies do this and started copying the process for some reason. But it doesn't work if you don't also pay significantly more than most others.
Ok, but companies do select based on Leetcode performance in practice. It's the largest portion of the interview. It's useful as a filter, but it's treated as a benchmark. As evidenced by:
> Personally I did double my income by going through this process and I've seen many others do as well.
That's insane. That's a broken market. If that's happening, it's not a reliable signal.
Ultimately, if the market is rational then your actual ability will matter more than anything else. The only time that isn't the case is if the market is irrational. You can argue the market is currently irrational (although I would say that's not quite what's going on), but placing a long-term bet on it staying that way... Bad idea, IMO.
But I think the actual explanation for why high-tier (read: large) companies use Leetcode is a combination of two things: they're monopolies, so they aren't punished for bad selection criteria (see Google's use of lateral thinking puzzles for years before admitting they were completely useless for predicting job performance), and they need a replicable interview procedure that can be applied consistently by a workforce that fundamentally isn't very good at interviewing, thinks it is, and doesn't care if it gets it wrong. It's the Big Mac of interviewing methods.
> The only time that isn't the case is if the market is irrational.
There are other situations, such as when measuring actual ability is sufficiently difficult or impossible (subject to the constraints you pointed out - via a process that's repeatable by a largely undifferentiated set of interviewers).
In those situations, you have basically no choice but to look for proxy measures.
Now, obviously, proxies can be gamed to various degrees. Leetcode is not actually such a terrible proxy, though. Here are some points in favor:
1) On average, better engineers require less practice time to achieve similar results.
2) There is a certain minimum level of capability required to even "grind it out". That minimum isn't a sufficient floor for being a net-productive engineer, but it gets you a reasonable chunk of the way there; that's why so much of the evaluation criteria for these interviews focuses on communication ability, since that covers the rest of what they care about.
Taken together, you get a process that is reasonably good at eliminating false positives and still manages to tilt the field in favor of more skilled candidates. On the candidate's side, it has the benefits of being generalizable across multiple interviews and also getting easier each time you do it.
While I'm somewhat sympathetic to the claim that the process favors candidates who have more free time, I think it's a largely overstated concern. First, this is true of any interviewing process wherein a candidate can improve their performance through practice. To a first approximation this is all interview processes. Second, it doesn't take _that_ much time. If you put in 100 hours (which is basically "made it a moderately important priority for 2-3 months") and you still can't clear any interviews, there are a few possible explanations.
1) You're failing at the communication side of things. Thankfully this is also something you can practice!
2) You fall into an unfortunate edge case, e.g. extreme performance anxiety, which isn't reflected in your day-to-day work. This sucks! I don't think there's a process that doesn't have unfortunate edge cases; all we can do is try to minimize them.
3) You're not able to learn the material well enough to generalize it to novel interview questions. This is the process working as intended.
I've also heard a reason that goes something like "I can totally solve those problems, just not in 45 minutes". Problem-solving speed is going to be positively correlated with other traits of talented engineers; obviously some otherwise talented engineers will still fall on the wrong side of the curve. Again, this sucks, but please bring me a process that effectively rules out false positives without bringing in some false negatives. If you can manage it, there's a huge market opportunity there.
These are interesting points, but they very much read like post-hoc rationalisations to me. Let me take a different tack and try challenging some of the more fundamental assumptions.
Do you really believe in the quality of Leetcode as an evaluation criterion, or are you trying to justify it? Google used lateral thinking puzzles for something like 8 years before realising they were completely useless (not just unreliable - they had zero predictive power). We have precedent for these companies using selection methods that are bananas, and smaller companies copying them.
>There are other situations, such as when measuring actual ability is sufficiently difficult or impossible
Sure, that's an example of why the market might be irrational. But then I'd ask: do you think it's extremely difficult/impossible to judge other programmers? I mean that seriously. I don't find that difficult. I can usually size up another programmer pretty quickly. If I sniff around a bit and look at some example code, my intuitive read will be much more three-dimensional and reliable than giving them two Leetcodes. You don't find that to be the case?
How much do you spend on hiring one candidate? I'm guessing tens of thousands, if not more. You can't invest a day into trawling that candidate's GitHub? "But not all candidates have a GitHub." Ok, but some do, so that's not the reason.
What I think is actually extremely difficult is a large, bureaucratic organisation developing a replicable, second-hand evaluation system. It's a standardised test, and it has the same problems as most standardised tests.
> you get a process that is reasonably good at eliminating false positives
Right, it's good as a filtering mechanism. But if it's just a filtering mechanism, it shouldn't take up the majority of the interview.
Like, here is one of the assumptions you're making: most good engineers grind Leetcode before a new job. I don't think that's true. I think the number of people who grind Leetcode, even among the best candidates, is extremely small - like <10%. If that's the case, it's an inherently flawed criterion. If nothing else, you're filtering for the candidates that everyone else is filtering for - which means you're filtering not for capable candidates, but for candidates who are overpriced.
Here's another thing that's absent from these discussions: algorithms questions were developed before Leetcode existed. They're not designed for a world where you can game them. Leetcode is a bug in the system, but a lot of your justifications amount to, "it's fine, because the selection criterion is actually designed to select people who game it." Really? You want people to game the criterion? I don't buy it.
I don't consider myself particularly talented, but it's really not that much time to practice. At 10+ years of experience, I spent maybe 40 hours practicing the last couple of times I switched into interviewing mode. That's pretty easy to spread over two weeks to a month to get in shape. I don't consider it an unreasonable or unacceptable burden.
It's not that it's unreasonable, it's that it's a bad criterion because it's so easy. In 40 hours, a college graduate can get to the point where they're outdoing, say, one of the principal architects of AlphaGo? Then that selection criterion sucks. In any other situation we avoid metrics that are this easy to game.
The argument that it selects for industriousness strikes me as a cope. It might, but it's also so easy to game that it obscures actual ability (unless you're just using it as a filter). Is that trade-off really worth it?
> I would have thought most talented programmers don't want to waste their time on leetcode, so it would primarily select for people who can't succeed without dedicating time to it.
I was saying that it's not that much of a waste of time, because it's not that much of a time investment for a good outcome (doing well at interviews).
If you want to change the argument from "it's a waste of time" to "it's trivially easy and easy to game", then why are there people who complain about leetcode and not wanting to do it?
It is both a waste of time and easy to game. The reason it's a waste of time is that it's easy to game, so some of your rivals will be gaming it. Which means you have to do it too, so it just becomes a dumb tax you pay every time you interview for a new job.
That's all from the individual candidate's perspective, of course. From the employer's perspective, only a minority of your interviewees will have prepared their cheat codes. So you're evaluating based on ability to prepare a cheat code, which IMO is absurd. If there weren't a site dedicated to preparing cheat codes (which is how it used to be), then it would be sensible.
Hmm, I think your understanding of the state of things is slightly backwards. Because Google is so large, interviews so many people, has such a strong brand name, and pays so much more than everyone else, their interview process is going to be a very well-known quantity. The questions they used to use were all being gamed, so they kept adjusting, tweaking, and upping the difficulty until they ended up where they are now.
So the point is: how do you interview when you know that everyone will know all the questions ahead of time? Google's solution seems to be to make their questions difficult-leetcode level. Their thinking, then, is: if you're able to grind leetcode to the point that you can pass our interviews, we'll be perfectly happy to hire you. You've "cheated" the system by learning how to program a wide variety of difficult, algorithm-heavy problems. Worst case, your resume is a complete sham, but at least you're able to write the code they're hiring you to write; everything else you're lacking, well, they'll indoctrinate you in the Google way anyway, so they don't care about your past experience.
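For readers who haven't sat these interviews, here's a rough illustration (my own example, not an actual Google question) of the tier of problem in play - the classic "longest substring without repeating characters", usually rated medium, solved with a sliding window:

```python
def longest_unique_substring(s: str) -> int:
    """Length of the longest substring of s with no repeated characters."""
    last_seen = {}  # char -> most recent index where it appeared
    start = 0       # left edge of the current window
    best = 0
    for i, ch in enumerate(s):
        # If ch already occurs inside the current window, slide the
        # window's left edge past its previous occurrence.
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # 3 ("abc")
```

The point of grinding is that after enough of these, the pattern ("maintain a window, track last-seen positions") is recalled rather than invented, which is exactly what a 45-minute time limit rewards.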
That doesn't sound like an effective pipeline to me. That sounds like a really bad pipeline, and a lot of post-hoc justifications to minimise the issues.
I'm hesitant to take Google's use of a tool as an indication that the tool is good, or that the process is refined. They used brain teasers for years before figuring out they had zero predictive power. Ignoring the implications of the scale of that mistake, one thing it indicates is that they aren't able to effectively evaluate different interviewing methods.
If they had no ability to effectively evaluate, how did they conclude that brain teasers had no predictive power?
I see google's process along the same lines as "democracy is the worst form of government except all those other forms that have been tried".
Now, ideally, what you'd want is to be able to perfectly read someone's mind and intentions: to tell instantly their strengths and weaknesses, to tell which of those weaknesses are trivial ones that will get smoothed over a week into the job, and which are long-term issues that will poison your organization. It's possible that individuals with such judgement exist (Paul Graham has attributed YC's success to Jessica's ability to judge founders) and that some work at google. But you also need a process that can interview and evaluate a thousand people every single week. So what works for small companies that can agonize over every hire, or for exec searches, doesn't make as much sense for mass hiring of peons.
So given those constraints, everyone knows your process, everyone wants to apply, you need to interview a thousand people a week and decide to hire a hundred of them, and they really don't all need to be rockstar founder quality, but they all need to be able to produce something, you can start to imagine what type of interview process you might end up with.