Zach Holman’s 2016 article, Startup Interviewing Is Fucked, is an opinionated piece that questions the effectiveness of startup interviews based on one professional’s experience. About 20 days after Holman’s article, Aline Lerner measured the effectiveness of technical interviews and published a post on interviewing.io sharing her results.
Today, I will review Lerner’s follow-up to Holman’s piece (review-ception), bringing to light errors in methodology that undercut Lerner’s blog post. These include an erroneous assumption and a conflict of interest in Lerner’s interviewer sample.
I would like to start by pointing out that there is no bias in Lerner’s association with her site when it comes to interpreting the data. In fact, it’s rather telling that a business built on coding interviews is criticizing its own product.
Overview of Lerner’s Post
The setup for Lerner’s research is simple. She extracts data from a site she founded. The site is built to conduct technical coding interviews, pairing interviewers from large companies with applicant interviewees. At the end of the interview process, interviewers rate an applicant on a scale from 1 to 4 (integers only).
Now that you know how the data was retrieved, let’s talk about the results. I can’t elaborate on every detail, but the key finding is this: individuals who, on average, interview strongly (a mean score of 3 or more) may still fail up to 22% of their interviews. Lerner ends by criticizing the interview process for being unreliable.
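To get a feel for how a strong candidate can still fail a fifth of the time, here is a minimal Monte Carlo sketch. This is not Lerner’s data or methodology; the true mean, the noise level, and the “fail below 3” cutoff are all my own assumptions for illustration.

```python
import random

random.seed(0)

def simulate_fail_rate(true_mean=3.0, noise_sd=0.75, trials=100_000):
    """Model a candidate whose true skill sits at `true_mean` on the 1-4
    scale, with per-interview noise. Count how often a single interview
    lands below 3 (a 'failed' interview). All parameters are assumptions."""
    fails = 0
    for _ in range(trials):
        # Draw a raw performance, then clamp and round to the 1-4 integer scale.
        raw = random.gauss(true_mean, noise_sd)
        score = min(4, max(1, round(raw)))
        if score < 3:
            fails += 1
    return fails / trials

rate = simulate_fail_rate()
print(f"Estimated single-interview fail rate: {rate:.1%}")
```

With these made-up parameters the fail rate lands in the low twenties of percent, in the same ballpark as the figure Lerner reports; the point is only that ordinary per-interview noise is enough to produce it.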
Criticizing the Blog Post
While Lerner’s criticisms are not invalid (I tend to agree with them), there are some faults in the claims she makes. One fault lies in her assumption about the individuals involved in an interview. Most of her argument is built on the premise that interviewees are consistent. However, no evidence supports this assumption, so the variations in performance which she claims indicate a bad hiring process could very well stem from factors an employer cannot control (here’s a paper highlighting how human performance changes over time, to show you how weird humans are). This means the interview process is not inherently bad, but rather ill-suited to accounting for the variable nature of humans (something which could be fixed, not requiring the replacement she calls for).
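The distinction above can be made concrete with a second small sketch (again entirely hypothetical): even if the grading process is perfectly deterministic, a candidate whose performance varies day to day will still produce a spread of scores. The observed variance then reflects the human, not the interview.

```python
import random

random.seed(1)

def grade(performance):
    # Deterministic rubric: the same performance always maps to the same
    # 1-4 integer score, so the process itself adds zero noise.
    return min(4, max(1, round(performance)))

# One candidate, interviewed on five different days. The mean (3.0) and
# day-to-day spread (0.6) are assumed values for illustration only.
daily_performance = [random.gauss(3.0, 0.6) for _ in range(5)]
scores = [grade(p) for p in daily_performance]
print(scores)
```

Any spread in the printed scores comes solely from the simulated day-to-day variation, which is the point: inconsistent outcomes do not by themselves prove an inconsistent process.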
A second problem with Lerner’s argument is the sample of interviewers who provide her data. Lerner did all the hard work of making sure scores were not inconsistent across interviewers, even creating an algorithm to grade interviewees fairly. However, she fails to take into account that Interviewing.io charges $100 per interview (as of today). Anyone using the site would be looking to consistently have good interviews. This is going into conspiracy-theory territory, but bear with me. If applicants want consistency and keep getting inconsistent results, they will obviously try again and again, paying $100 per interview. Unlike the real world, in which companies actually lose money by giving inconsistent interviews (they need to hire someone to fill vacancies), here the site benefits from them. This means that score variability may actually be independent of the hiring process, leaving technical interviews innocent.
If you’re one for conspiracy theories, then it’s obvious that Lerner’s data is biased. If you don’t believe my assumption is valid, then you should note that Lerner’s argument is also built on no real evidence. That’s not to say her argument is incorrect. In fact, I believe the sheer number of developers opposed to this kind of hiring process should be indicative that it’s problematic.
This is going to be my final blog post for who knows how long. Perhaps I’ll eventually start writing my own blog posts. If you have taken the time to read even one of those six, I want to thank you. This isn’t something I do because I want to, but knowing someone will take the time to look over something that I have done is satisfying.
Until next time