I find it… troubling? That a technical interviewer can’t tell for herself whether your code will work. Wouldn’t you ideally want people who actually understand code to be giving the coding questions?
Are you saying that because this interviewer needs to run code to find logic errors, she's somehow not a competent engineer? Because I usually need to run code to find logic errors. Sometimes I use formal verification instead but that's pretty rare. Am I also not a competent engineer?
He's saying that in order to judge someone's ability at something, you need to understand that thing yourself. If you require the candidate to write perfect code using nothing but a whiteboard, you need to be able to read that code using nothing but a whiteboard as well.
If the interviewer cannot do this, how is he going to judge the result? Does he know how to run a compiler? Does he know how to run the code? Does he have the skill to judge the output? Is it really a good policy for a company to discard a possibly excellent candidate who just missed something silly that would normally be caught by your tooling as you type?
If you can't tell the difference between sketched-out kinda sorta pseudocode and potentially workable C++ then you're certainly not competent in C++.
And if you fail to communicate the requirement for one or the other then you're certainly not a competent interviewer.
You also have to ask: what exactly is being tested here? Is it the ability to remember syntax? To remember an algorithm? To improvise an algorithm? To recognise which algorithm is needed?
In my view, if the answer involves a topological sort, the interviewer should know how to solve it and be able to follow the candidate's code and find errors in it. If the interviewer, knowing the answer, cannot find any issues, then surely the code is fine (for code written in an interview).
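For concreteness, I mean something like the rough sketch below (my own C++ take on Kahn's algorithm, written from memory, not anything from the article):

    #include <vector>
    #include <queue>
    #include <utility>

    // Rough sketch of Kahn's algorithm: n nodes numbered 0..n-1, directed edges u -> v.
    // Returns the nodes in a topological order, or an empty vector if there's a cycle.
    std::vector<int> topo_sort(int n, const std::vector<std::pair<int, int>>& edges) {
        std::vector<std::vector<int>> adj(n);
        std::vector<int> indegree(n, 0);
        for (const auto& [u, v] : edges) {
            adj[u].push_back(v);
            ++indegree[v];
        }

        std::queue<int> ready;  // nodes with no unprocessed prerequisites
        for (int i = 0; i < n; ++i)
            if (indegree[i] == 0) ready.push(i);

        std::vector<int> order;
        while (!ready.empty()) {
            int u = ready.front();
            ready.pop();
            order.push_back(u);
            for (int v : adj[u])
                if (--indegree[v] == 0) ready.push(v);
        }

        // Fewer than n nodes emitted means there was a cycle, so no valid ordering exists.
        if (static_cast<int>(order.size()) != n) return {};
        return order;
    }

An interviewer who can solve the problem themselves should be able to walk through something like this by hand and spot, say, a missed indegree update, without needing to run it.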
It's also possible that she hadn't seen the particular algorithm used before, or that she was having an off day or stressing about a meeting immediately after the interview, or that there were errors that she did see and she didn't want to say "yeah there are errors here" because doing so could affect the candidate's confidence in the interviews after hers. I could imagine any of these being true. Or she could just be a bad interviewer.
That is irrelevant. Asking someone to type non-trivial code outside of an IDE and then expecting it to compile and run without issues is lunacy. Even junior programmers know this. The interviewer in this story was either an amateur, an idiot, or on a power trip.
I guess I'm also an idiot then, thanks, how kind of you to say that.
Actually hang on, I'm editing this to be slightly meaner. Your whole take that doing this is a sign that she's either an idiot or on a power trip is a very familiar thing that people say about women in tech and I'm honestly tired of it, because I can see myself doing exactly what she did and I don't like it when people say those things about me. Please don't do that.
You just can't tell someone to type code into Google Docs, expect it to just work, and, worst of all, judge a person's skill on that basis. It takes minimal programming experience to learn this. Hence all the jokes about people being surprised/suspicious when their code runs on the first compile.
If you disagree with this, you could have provided some sort of counterargument. Instead you took this weird "women in tech" angle. Wrong is wrong, the interview here was wrong, and gender did not play a role.
If you can't assess someone on their response then you're not interviewing them, you're just giving an exam by proxy. But that might very well be because the whole recruitment process is thoroughly stupid.
So, we can have an interviewer perform a possibly incorrect manual validation of an interviewee's possibly faulty code. Reading code is harder than writing it, and presumably the applicant has been asked to do something tricky.
Or they can run it, asking the real arbiter of truth whether it works or not.
Of course, "whether it works" is merely one (very important) metric of quality.
The interview is a proxy for working with the person. In this case, it's a proxy for pair programming / code review. A good chunk of what the interviewer ideally looks for when asking a coding question is communication from the interviewee: can they communicate what they are doing and why? Can they explain the intent of the thing they've just written? Do they have a clear picture of it in their head, and can they convey it? When the interviewer spots problems, what does the ensuing discussion look like, and how well does the interviewee collaborate in solving them? If the interviewer is wrong, does the interviewee push back? How? Can they understand you, and can they make themselves understood?
Can you work with this person? Can you collaborate to write code, or will it be a daily struggle?
Whether the code actually builds and runs after the hour is up does not help answer these questions; it is arguably the least interesting part of the whole process. The time limit is artificial; if all the other things align but you didn't happen to get it working in one hour, you'd likely have got it in two. If they don't align, you'd likely never have got it.
Every recruiter says the same thing, but in practice the interviewer is only looking for the right answer, and that is what determines the outcome of the interview. I've never seen anyone helped out by soft skills while still having an incomplete solution.
The problem is that it won't run. It was written in Google Docs without an IDE. Everyone who has ever programmed knows this. You cannot write out a whole algorithm like that without making at least one trivial error.
This is like writing code in Notepad and creating a PR without building or running it even once. What is this testing? There is no real-world scenario where you are expected to work like this, and for good reason.
Yeah. If I were asking a trickier question and I got an interesting new variant of the solutions I'm aware of, I'd absolutely run it if I had time to type it up and everything. Most solutions are either something I've seen before verbatim or obviously wrong, though.
I think it depends on who gets to choose the programming language.
If you're interviewing for specific language skills, then what you say is clearly true.
If it's a general coding skills interview, I invite the candidate to code in whatever language they like. More often than not they choose a language I am sufficiently fluent in to follow along. In rare cases I need to ask them to explain a thing or two. In very rare (but generally quite fun) cases they choose a language I am totally unfamiliar with; this tends to lead to interesting discussions.
In all of those cases the full coding transcript is captured and, if necessary, can later be additionally reviewed by/with someone who knows the language well.
P.S. Also, their thought process and approach to problem solving are just as important as the code, and are largely independent of the choice of programming language.
Funny and illuminating examples of this are the excellent "hexing the technical interview" series (read in any order): https://aphyr.com/tags/interviews