Software Engineering interviews and referrals are broken, but I have an idea

Gui Carvalho
5 min read · Jan 7, 2019

Over the years I’ve worked with dozens of engineers, if not hundreds. Out of this universe, most were competent, many were pretty good, and only a few were truly great. They were 10x engineers.

This latter group is very diverse: every one of them comes from a different background, has a different level of education, excels at different things, and took a different path in life. But that's not surprising.

What’s surprising is that a few of them confided to me over beers or during peacetime in the startup trenches that they felt their options were limited because they sucked at algorithms and interviewing in general.

Every time it happened, I was in disbelief, and couldn't shake the feeling that maybe all they were missing was self-confidence. The best engineers I'd worked with couldn't possibly underperform at the very thing every company screens for. I was wrong.

As the years went by, I kept interviewing people, kept going to the hiring panels that followed, and kept seeing the hires perform in real life. I watched many stellar candidates, capable of solving any algorithm question thrown at them, turn out to be mediocre co-workers. I also watched many borderline candidates, who almost didn’t get an offer, turn out to be great engineers.

Later I noticed it had happened to me as well. After four straight performance reviews with glowing scores, my CTO told me during a 1-on-1 that it hadn't been clear during my interview process that I'd be anything special.

I'm not only talking about algorithms. I've watched people try to come up with and implement different interview types, like system design, product discussions, practical projects, pair programming, and behavioral questions, with mixed results.

Over time I lost most of my confidence in the idea that an engineer can tell how good another engineer is in the space of an hour. Whatever the result is, it still carries an enormous amount of randomness.

The current standard process, where a candidate gets interviewed by somewhere between four and seven engineers, somewhat helps with this problem, but it's still pretty bad. If a candidate goes through five rounds, nails two of them, does only OK in another two, and suffers in the last one, what can you safely conclude? Not much.

On top of all this, there are many factors that add noise but that nobody even considers:

  • How much did the candidate prepare for the interview? For algorithms, you have LeetCode and Cracking the Coding Interview; for system design, you have resources like The System Design Primer, not to mention whole websites dedicated to interview prep, like HiredInTech. People who spend a few months preparing will do much better in the interview process without changing much of their engineering caliber.
  • How well run was the interview? It's not uncommon for the interviewer to have received little training on how to be efficient and effective. And usually, nobody checks how well an interviewer is interviewing.
  • External factors: Were both parties on time? How was the internet connection for that phone round? Even if you account for these, how do you know how big the impact was?
  • Psychological factors: The interviewee was nervous! Is that a sign of a lack of confidence? Maybe they're in financial distress? Maybe they rarely get a shot because they went to a community college and their resume looks unusual. One hour is not enough time to assess someone's technical ability, let alone the psychological factors in play.
  • Positive biases: How much of a bonus did the interviewee get because they looked like the interviewer? Or went to the same school? Or liked the same programming language? Sometimes people naturally like each other, and that does affect their evaluation. Some of it is signal, as in cultural fit, but some of it is just unwarranted bonus points.
  • Negative biases: How much of a penalty did the interviewee get because they didn't come from the same background? Or were of a different gender? Or age? Or race? Or used a programming language the interviewer hates? Even if you are fully aware of your biases, and nobody ever is, that doesn't eliminate their effects.

Now let’s talk about referrals.

In most companies, referrals only work to get people into the pipeline. They get interviewed through the same process. If the interviewee makes it, the referrer gets a few thousand bucks as an incentive. If not, nothing happens, and everyone moves on.

As a result, the referrer has nothing to lose and you can get yourself referred to any company you want by joining Blind and creating a post asking for it. Someone already at the company, eyeing the referral bonus and without anything to lose, will agree to refer you for free.

This system wastes a lot of information about the candidate. If you worked with someone for a few years, you know much more about their work and behavior than a few 1-hour rounds can yield.

If I hunt down my old 10x contacts and try to get them to apply under this system, my success rate will be minimal. And if I do get one of them to apply and they end up rejected, that burns a personal bridge I'm not willing to risk.

I get that companies are trying to prevent people from bringing in friends who don't meet the bar, but you can discourage that with different approaches.

For example, you could create a different kind of referral program, like a vouching system. In this program, if an engineer decides to vouch for someone, the process could be like this:

  • The manager would talk to the engineer vouching for the candidate to understand more about her strengths and weaknesses, her overall personality, and how deep the connection is.
  • If the manager sees potential, she’d call the candidate and talk about the available roles and expectations.
  • If the call goes well, the candidate would be brought on-site so more team members could get to know her, solely to assess cultural fit and personality. Maybe play Settlers of Catan, like Mark Pincus allegedly used to do at Zynga.
  • If the team thinks it’s a good fit, the candidate would be extended an offer.
  • If the candidate joined, the referral bonus would be withheld until the first full performance review cycle was complete. If the new engineer performed great, the engineer who vouched for her would get 2x the bonus she'd receive otherwise. If the performance was lacking, she'd lose the bonus entirely. This would make people vouch only for those they truly know and respect.
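The payout rule in that last step could be sketched as follows. This is a hypothetical illustration: the dollar amount, the three-level review scale, and the "solid but unremarkable" middle case are all assumptions, not part of the proposal above.

```python
def vouch_bonus(base_bonus: float, review: str) -> float:
    """Hypothetical payout for a vouching-based referral bonus.

    Unlike a standard referral bonus paid at signing, this one is
    only decided after the new hire's first full review cycle.
    """
    if review == "great":
        return 2 * base_bonus  # a successful vouch pays double
    if review == "lacking":
        return 0.0             # vouching for a bad fit costs everything
    return base_bonus          # assumed middle case: standard payout

# With an assumed $5,000 standard referral bonus:
# vouch_bonus(5000, "great")   -> 10000.0
# vouch_bonus(5000, "lacking") -> 0.0
```

The asymmetry is the point: because the downside is losing the bonus entirely, the expected value of vouching for someone you barely know is negative, which filters out the Blind-style drive-by referrals described earlier.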

If such a process were in place, I'd be extremely happy to vouch for the 10x engineers I've had the pleasure to work with over the years. And if they knew the process would be bullshit-free, I'd have a much easier time convincing them to join.


Gui Carvalho

Startup guy, engineer, product thinker. Currently living in Florida.