Leetcode 0
Developers have very mixed attitudes toward algorithm interviews: some consider the algorithms section mandatory, others don’t. Yet a lot of companies still include one, often with non-trivial problems that are hard to solve under stress and on the clock, even though for the actual day-to-day work knowledge of clean architecture, design patterns, and deep familiarity with a particular framework matters much more.
Some people simply skip these companies when job-hunting; some get through anyway; some bomb out on fairly easy problems.
My Habr article1 with notes on preparing for the algorithms section drew a dozen angry comments, even though none of their authors had actually been through our interview. We won’t stoop to that level; instead, we’ll try to build up knowledge in this area and learn to apply it not only in interviews but in practice as well.
To share my own preparation experience, I decided to dedicate a series of blog posts to the topic. In this one, we’ll talk about the leetcode2 service itself.
I’m subscribed to the leetcode subreddit, and I often see screenshots of profiles where people try to “grind” their ranking by submitting a huge volume of solutions. Those rankings, by the way, are visible to paid users.
Beyond the number next to your avatar, that kind of “grind” doesn’t really give you anything. Neither does trying to solve problems straight in the on-site editor. First, even though it has a debugger, the editor doesn’t highlight typos and offers no completion for function or method names, so working in it turns into the “just make the code work somehow” style Robert Martin describes in “Clean Code.” Second, after you submit that kind of solution, no understanding sticks with you, just a feeling of “victory” over the problem.
To prepare in an organized, systematic way, I created a private repository for my own solutions. The first iteration ended up with a fairly obvious structure:
task-slug-1/
    go/
        solution.go
        solution_test.go
        go.mod
    python/
        solution.py
        solution_test.py
task-slug-2/
...
I solved problems in several languages and treated the whole repository as one big IDE project. We’ll talk about language choice in one of the next posts.
I run every solution against a single set of unit tests — some of which I take from the problem statement, the rest I come up with myself by sketching out possible edge cases. For many people, an extra benefit of this approach is the TDD practice it provides.
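To give a sense of it, here is a minimal sketch of such a pair in Go for a hypothetical two-sum-style problem (the package and function names are made up for illustration); the first case is the kind you get from a problem statement, the rest are edge cases I would add myself:

// solution.go
package twosum

// twoSum returns the indices of the two numbers that add up to target,
// using a value-to-index map for a single O(n) pass.
func twoSum(nums []int, target int) []int {
    seen := make(map[int]int, len(nums))
    for i, n := range nums {
        if j, ok := seen[target-n]; ok {
            return []int{j, i}
        }
        seen[n] = i
    }
    return nil
}

// solution_test.go
package twosum

import (
    "reflect"
    "testing"
)

func TestTwoSum(t *testing.T) {
    cases := []struct {
        name   string
        nums   []int
        target int
        want   []int
    }{
        {"example from the statement", []int{2, 7, 11, 15}, 9, []int{0, 1}},
        {"duplicate values", []int{3, 3}, 6, []int{0, 1}},
        {"negative numbers", []int{-1, -2, -3, -4, -5}, -8, []int{2, 4}},
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            if got := twoSum(c.nums, c.target); !reflect.DeepEqual(got, c.want) {
                t.Errorf("twoSum(%v, %d) = %v, want %v", c.nums, c.target, got, c.want)
            }
        })
    }
}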
When the project got large enough — at the time of writing I’ve solved over two hundred problems — I decided to restructure it:
...
├── go
│   ├── task-slug-1
│   ├── task-slug-2
├── kotlin
│   ├── task-slug-1
│   ├── task-slug-2
├── mysql
│   ├── task-slug-1
│   ├── task-slug-2
├── python
│   ├── task-slug-1
│   ├── task-slug-2
...
I split tasks by language and started treating each task as its own project. That kept me from getting distracted by a huge tree in the file navigator, and it took the load off the IDE: indexing had started taking a noticeable amount of time, mostly because of the JVM-based languages.
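In Go terms, each task directory simply carries its own tiny module file, roughly like this (the module path below is a placeholder, not my real one), so the IDE only needs to resolve the task you currently have open:

// go/task-slug-1/go.mod
module example.com/leetcode/go/task-slug-1

go 1.21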
Many problems have multiple solutions, so I focus first on the simplest, clearest one: that’s the one that sticks best. On top of that, when I run into an algorithm I haven’t met before, I write its name down in a separate note and read up on it later. For example, I didn’t know there was such a thing as Morris tree traversal3; solving the problem is what led me to it, even though I’d be unlikely to meet it in real-world tasks.
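For the curious, here is a minimal Go sketch of Morris inorder traversal (the node type matches the one leetcode uses for tree problems; the function name is mine). The idea is to visit the tree in O(1) extra space by temporarily re-threading predecessor pointers instead of keeping a stack or recursing:

type TreeNode struct {
    Val   int
    Left  *TreeNode
    Right *TreeNode
}

// morrisInorder returns the inorder sequence without a stack or recursion.
// Predecessors' right pointers are temporarily pointed back at the current
// node (a "thread") and restored once the left subtree has been visited.
func morrisInorder(root *TreeNode) []int {
    var result []int
    cur := root
    for cur != nil {
        if cur.Left == nil {
            // No left subtree: visit the node and move right.
            result = append(result, cur.Val)
            cur = cur.Right
            continue
        }
        // Find the inorder predecessor: the rightmost node of the left subtree.
        pred := cur.Left
        for pred.Right != nil && pred.Right != cur {
            pred = pred.Right
        }
        if pred.Right == nil {
            // First visit: thread the predecessor back to cur and descend left.
            pred.Right = cur
            cur = cur.Left
        } else {
            // Second visit: the left subtree is done, remove the thread,
            // visit cur and move on to the right subtree.
            pred.Right = nil
            result = append(result, cur.Val)
            cur = cur.Right
        }
    }
    return result
}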
Someone will push back: why bother with algorithms at all? Algorithm problems force the brain to develop and think, rather than reach for a code generator producing generic framework solutions. On top of that, this kind of practice affects the quality and expressiveness of your day-to-day code. More often than not it ends up more compact and easier to follow — both for you and for your coworkers.
In the next posts, we’ll get into practical things. See you soon!