In the beginning things were simple
The practice of engineering is continually evolving, and support processes like interviews have had to adapt alongside it. Things have moved so quickly that an engineering interview from twenty years ago would be nearly unrecognizable today, and vice versa. Gone are the days when candidates would be offered an interview simply because they came from a top school with zero experience. Gone are the boardrooms where candidates would sit across from two or three greying men (sadly accurate) and be asked interesting questions that were totally irrelevant to the job they were about to perform. Across most of the industry this has been replaced with the same proven system: an initial screening call, a technical test, a pairing interview and, if your company is really on the ball, a sales pitch to position it as the best possible option.

The modern interview is really the start of a relationship between a company and a candidate. It is the opportunity for both to sell themselves and to ask the right questions to determine if they are a good fit. In competitive markets (like London) it is important you get this right. As someone who champions our process at OVO, when I got it wrong I felt guilty and wanted to ensure it wouldn’t happen again.
The case study
So where to begin? OVO is growing rapidly, and as part of that growth we decided that some small, well-defined projects or pieces of work could be ‘nearshored’. The expectation (which we are testing) is that in the short term this will help fill the gap between the company’s growth plans and our limited technical resource. The role in question was a front end position on a nearshore team. Recruitment for this is slightly complicated, as we work with a recruiter in the host country to source candidates on our behalf. They handle the initial CV scan and conversation with the candidate. If they feel they have a good match, the CVs are sent forward in our process and reviewed in London. After a review, a shortlist of candidates is given a technical test to assess their skill level. It is identical to the one we use in London, although we are in the process of creating one that better fits the assignment and helps us find the right candidates.
Unsurprisingly, recruitment is also difficult outside of London. It is almost like this border-free EU has something to do with it, and people have been moving to major tech cities for years…
We tend to see a few technical tests returned each week; in this case the test was returned alongside another, so not a huge volume. Depending on the level, a technical test takes about half an hour to review properly (based on our current metrics). This gives us a pretty good feel for the candidate and whether we are going to proceed to the next round. This review was slightly shorter, as the assignment in question was not going to be a pass. As a result I quickly wrapped up my assessment and returned the feedback to the recruiter.
The feedback was as follows:
- Simple and clean interface.
- All files committed in a single commit, then follow up commits for readme, etc.
- No tests.
- This candidate is a No.
On an initial read of the feedback it is clear that it wasn’t a strong technical test, but on reflection (both now and when the recruiter followed up) I gave nothing back about why this was a problem or what could be improved. The aforementioned recruiter got in touch a couple of days later when the candidate requested further feedback, and my reaction went from annoyance to guilt as I realised what I had done.
The result
There was a part of me that doubted my own review; normally if a candidate writes back it is because we have missed something in the technical test. A hidden comment: “//I know I should have tests here but I work two jobs and I did the solution in 25 minutes”. Even worse, sometimes the feedback doesn’t match the candidate’s test (this happens more often than we would like, when a recruiter accidentally sends the wrong git handle), so we say ‘no tests’ when there were clearly tests. In this case I wanted to pull in additional help. Determined that we would hold up our side as a responsible employer in the interview process, I asked another person to review the technical test and send the feedback.
I was slightly relieved when the second review was also a no; as terrible as this sounds, it is human nature to worry that the fault is with you. With this new information I returned a slightly better response to the recruiter and gave the following feedback:
- He is using Redux but primarily saves state locally in the GameMainLogicComponent component, which suggests a pretty fundamental misunderstanding of Redux
- He hasn't written any tests except for the one test in a file that create-react-app inserts automatically; this test is incorrect and fails
- His code is inconsistent and very convoluted
- The main game logic is very brittle and would be difficult to extend
I understand that this isn’t much more useful to a candidate (than the original), but it at least gives a better explanation of where the major problems were in the submission. I also gave the recruiter better guidance on how to position the technical test to candidates and what we were looking for. They claimed our test was unfair because it didn’t specifically ask the candidate to write tests; we think it is totally fair because we expect everyone to test their code. In any case, that is now settled by asking candidates for ‘what they feel to be production code’.
Why feedback matters
So now we can talk about doing this the right way. First we need to start with a very simple premise: candidates deserve feedback. Twenty years ago this would have been a revolutionary idea (thankfully one that was adopted), and there are some very good reasons for it.
Candidates are judging your tech culture when they apply. They are looking for open and honest employment opportunities, and being transparent about why they didn’t qualify for the next round or didn’t get the job is hugely valuable to them. It is an opportunity for them to learn what they have been doing wrong and to change for better outcomes in the future.
In addition, it really improves your employer brand. As much as I hate the term, employer brand is a real thing and it needs to be cultivated. For example, are you attending a conference but not hiring junior engineers? Don’t skip having a conversation with them; explain where the company is (too small?) and why you are not taking on people in their position right now (you don’t feel you can give them the mentoring and growth they need?). Above all, a company needs to be transparent with current and potential employees.
One more thing: it really helps your own recruitment process by keeping all parties engaged with your company. Candidates can expect insightful and useful feedback. Recruiters know this will be coming and can go back to candidates with actual reasoning behind a yes or no decision. The channel feels open for candidates to get in touch and follow up with any questions.
How this should have been done
I scanned a few of the technical test submissions from earlier in the year, when we were looking for a Scala engineer. We used a pretty open technical test where candidates can put in as much or as little effort as they like. It is great for separating someone who can merely solve the problem from someone who can really express their technical level within a limited scope. I think it is more interesting to look at an example of a candidate that was a no, so here is the feedback:
- I really appreciate the effort put into thinking about what we wanted.
- Trying multiple methods is a good engineering trait, e.g. a simple implementation and a shapeless implementation
- The CSV code has been copied and pasted from the blog post. Whilst it includes error handling, it hasn’t been adapted to be useful in this solution, e.g. “Expected list element” would be meaningless to a user.
- The CSV code returns an Either type, which xxxxx has gone to the trouble of removing. Either would have been a good type to use
- When there are errors, they are not printed in a useful, human-friendly way
- Xxxxx repeatedly uses .get (on Either.*Projection and Option). This is generally considered very bad practice in Scala; see the first sketch after this list
- The shapeless solution copied from the blog post, in this submission, ends up being a convoluted way of converting List[String] to AddressBookEntry. It’s not clear here what’s been gained, particularly given that CsvFileReader has not been made generic in any way and returns List[AddressBookEntry]. A solution like this is usually done to make code generic, in which case the CsvFileReader could have been much more generic.
- The use of very specific types for AddressBookEntry’s fields is nice, particularly using value classes to prevent extra allocations
- There’s an unused case class ‘A’ in the Types.scala file — it’s not clear why it’s there. It looks like a mistake.
- The assertion method in the tests is very verbose, using pattern matching to handle failure and success in detail. ScalaTest offers the EitherValues trait so you can match safely with .right.value and .left.value; see the second sketch after this list
- Pattern matching is used extensively in the tests. In general I’d avoid using conditionals in tests at all, as any branching logic really needs tests of its own. Better use of assertions would avoid the conditionals and would also make the code simpler and more readable
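To make the .get point concrete, here is a minimal sketch. This is my own illustrative code, not the candidate’s; AddressBookEntry comes from the feedback above, but parseLine and the error messages are invented for the example. Calling .get on an Either or Option throws away exactly the error information the rest of the feedback asks for:

```scala
final case class AddressBookEntry(name: String, email: String)

object CsvParsing {
  // Parse one CSV line into an entry, keeping the error as data
  // rather than throwing an exception.
  def parseLine(line: String): Either[String, AddressBookEntry] =
    line.split(",").map(_.trim) match {
      case Array(name, email) => Right(AddressBookEntry(name, email))
      case _                  => Left(s"Could not parse line: '$line'")
    }

  // The style the feedback calls out: .get throws NoSuchElementException
  // on a Left/None, silently discarding the error message.
  def riskyParse(line: String): AddressBookEntry =
    parseLine(line).toOption.get

  // A safer alternative: keep the Either and fold it into a
  // human-friendly message at the edge of the program.
  def report(line: String): String =
    parseLine(line).fold(
      err   => s"Error: $err",
      entry => s"Parsed: ${entry.name} <${entry.email}>"
    )
}
```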
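And for the two testing points, a sketch of the same spec written both ways, assuming ScalaTest 3.0-era syntax (where EitherValues provides .right.value and .left.value; newer versions prefer a plain .value). The first test shows the verbose pattern-matching style the feedback discourages, the others the tighter EitherValues style:

```scala
import org.scalatest.{EitherValues, FlatSpec, Matchers}

class CsvParsingSpec extends FlatSpec with Matchers with EitherValues {

  // Verbose style: the pattern match is conditional logic living inside
  // the test, which could itself be wrong and is never tested.
  "parseLine" should "parse a well-formed line (verbose style)" in {
    CsvParsing.parseLine("Jane Doe, jane@example.com") match {
      case Right(entry) => entry.name shouldBe "Jane Doe"
      case Left(err)    => fail(s"Expected a Right but got: $err")
    }
  }

  // EitherValues style: .right.value unwraps a Right and fails the test
  // with a useful message if the result is actually a Left.
  it should "parse a well-formed line (EitherValues style)" in {
    CsvParsing.parseLine("Jane Doe, jane@example.com").right.value.email shouldBe "jane@example.com"
  }

  it should "reject a malformed line" in {
    CsvParsing.parseLine("not a csv line").left.value should include("Could not parse")
  }
}
```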
The feedback list above is much closer to what I should have been giving. It is a bit difficult to take in everything without having the code next to it (it probably wouldn’t be fair to share a candidate’s code), but with the code present the candidate would get a clear and concise review of their submission. It would be clear that we took the time to review the work, and that if they had questions about the review they could reach out for further explanation. The takeaway here is that we can always be evolving alongside our processes; additional effort upfront will often lead to better results in the future.