Async whiteboard coding
Aligning our hiring with our day-to-day work
By Rastislav Vaško
When I applied to join Doist as an Android developer eight years ago, I expected to do some whiteboard coding over a video call. That was, and in many companies still is, a common way to measure a candidate's coding proficiency. It's also something I dislike: knowing that someone is watching (and judging) me while I code is very distracting.
To my surprise, after an initial interview, I was asked to do a take-home test project. The assignment was brief and practical, had a time limit of 10 hours, was compensated, and could be done in private on my own schedule. These principles are all still part of our test projects.
Fun fact: A piece of my test project's code is still part of the Todoist codebase. This is something that always crosses my mind whenever I'm tempted to add a `// TODO(Rasto): Improve later` quick fix!
Remote and async
In retrospect, a take-home test project was an obvious step for a remote and async company.
It may well be that programmers good at whiteboard coding under supervision are also good at coding with an IDE in private. But the converse, in my experience, is not true.
Therefore, by relying on this method, we're discarding a subset of great candidates. Generally, it's used for two reasons:
- Convenience: It’s practical and fast. The interviewer has a dedicated time slot for the interview and evaluation. And then they move on.
- Cheating: It’s hard to cheat when someone’s watching you, waiting for your response. In contrast, when the test project is done async, you have the whole Internet and (technically) more time available.
Both concerns are valid. They are also less important than picking the best candidates from the largest possible pool while creating a pleasant experience for them. Regarding convenience, setting up a clear timeline for the candidate and the right process for all involved interviewers keeps the overhead to a minimum.
As for cheating, using the Internet is perfectly fine. I use it daily when coding, so why would I be bothered that a candidate does? Sticking to the expected time for implementing the test project is more important. It requires special care to choose tasks that don't become significantly easier given more time. This is not a downside, because we don't want to evaluate candidates on how quickly they can type. Additionally, the interview with a developer happens only after the test project phase is successfully passed. One of its main topics is discussing the approaches taken in the test project and asking follow-up questions.
I'm also hopeful that anyone competent enough to become an experienced software engineer realises that cheating at this step isn't sustainable. When a 10-hour assignment takes you 20 hours to complete, how long can you endure working 16-hour days once you're hired?
Growing pains
As the years went by, Doist grew and became a more popular place to apply to. When we started expanding the Android team around 2018, we received far more applications than before. We realised that to distinguish and evaluate the high-quality candidates, we needed a more thorough test project. So we prepared one! It had a clear assignment and mockup, but we intentionally didn't make explicit what, aside from the stated goal, we were expecting and assessing. The available time didn't allow a candidate to optimise for everything. The intention was to find candidates who valued what we value and focused on that.
Long story short, we failed to hire a single candidate for over a year. I'm now painfully aware that we rejected multiple great candidates simply because our expectations weren't explicit and we critically judged any deviation from our model solution. In follow-up communication, it turned out that some candidates had even considered our desired approaches but didn't take them for one reason or another. They simply prioritised something other than we did, and we never made our priorities explicit.
Obviously, this would've been easier to notice during a live interview with two-way communication. But we're not doing those! For some time, I thought about how to modify the assignment to make these expectations clear without making the task too easy.
Day-to-day tasks
We set out to rework the test project with two main goals in mind:
- Clear and explicit objectives.
- Fast and objective evaluation.
The light-bulb moment came when we realised we hadn't gone far enough in reconsidering our implicit assumptions about what a test project should look like. Instead of asking candidates to create a working project from scratch based on an assignment, we now start with an existing project. The assignment consists of several smaller, focused tasks that closely resemble the day-to-day work of our engineers. We're focusing on what we do the most, day in, day out.
One task is a bug investigation based on a user report. Another is a simple feature implementation based on a product request. A third is a review of a pull request. This gives us a much broader overview of the candidate's skills and problem-solving approaches while skipping the trivial work that comes with setting up a project from scratch.

Example task from the test project README
Complementing this, we created an evaluation sheet. Each task is reviewed and assigned one of four levels based on defined criteria. Once all tasks are assessed, a threshold determines whether the project passes and the candidate moves to the next phase. Reviewing a whole test project takes between 1 and 1.5 hours, about twice as fast as before, which also saves our engineers' time.

Evaluation sheet template
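To make the mechanics concrete, here's a minimal sketch of how such a level-plus-threshold evaluation could be modelled in Kotlin. The level names, scores, and passing rule below are illustrative assumptions for this post, not our actual criteria.

```kotlin
// Hypothetical model of a threshold-based evaluation sheet.
// Level names, scores, and the passing rule are illustrative
// assumptions, not the actual rubric.
enum class Level(val score: Int) {
    MISSES_EXPECTATIONS(0),
    APPROACHES_EXPECTATIONS(1),
    MEETS_EXPECTATIONS(2),
    EXCEEDS_EXPECTATIONS(3),
}

data class TaskReview(val task: String, val level: Level)

// Example rule: no task may land at the lowest level, and the
// average score must reach a minimum threshold.
fun passes(reviews: List<TaskReview>, minAverage: Double = 2.0): Boolean {
    if (reviews.isEmpty()) return false
    val noFailures = reviews.none { it.level == Level.MISSES_EXPECTATIONS }
    val average = reviews.map { it.level.score }.average()
    return noFailures && average >= minAverage
}

fun main() {
    val reviews = listOf(
        TaskReview("Bug investigation", Level.MEETS_EXPECTATIONS),
        TaskReview("Feature implementation", Level.EXCEEDS_EXPECTATIONS),
        TaskReview("Pull request review", Level.APPROACHES_EXPECTATIONS),
    )
    println(passes(reviews)) // true: average 2.0, no lowest-level task
}
```

Keeping the passing rule in one small function like this makes the threshold easy to tune as the rubric evolves.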
Room for creativity and feedback
The goal of the evaluation sheet is to assess the test project objectively on its merits while minimising bias. At the same time, programming is a creative endeavour, and the tasks are designed with this aspect in mind. This means candidates regularly come up with unexpected solutions that don't clearly fall into any level bucket. We have to keep this in mind, and the evaluation sheet needs to stay flexible enough to leave room for the reviewer's intuition and judgment.
Additionally, the test project assignment and repository aren't static, and we need to keep them up to date. Partially, this is because the Android platform and libraries keep evolving. We're always looking for feedback from candidates, both successful and rejected. Lastly, we try to spot patterns in the submissions to detect common unintended pitfalls and address them.
Conclusion
This setup started as an experiment on Android in late 2020. Since then, we've successfully used it in multiple hiring committees to hire four Android engineers, and it's being adopted across other teams at Doist. For me, the most pleasant consequence was receiving positive feedback even from rejected candidates, who said they enjoyed the process and the assignments.
In the end, we're assessing candidates on how they perform on tasks that mirror what they'd be doing day in, day out. It feels good when things make sense.