This is the first in a series of unsolicited advice posts for my fellow software engineers grinding their way up the technical ladder. You may not agree with all of it, but I still invite you to reflect.
Learn in Public
Contribute to open source projects, join Kaggle competitions, and be active in a programming community. Find ways to work on fun, real-life problems and build a portfolio along the way.
Learning in public does three things simultaneously: it builds your skills on complex, real-world problems instead of tutorial hell, it creates tangible proof of your abilities that goes well beyond a resume, and it puts you in rooms (virtual or otherwise) with people who can change your career trajectory. When you contribute to open source, you’re getting code reviews from maintainers who might work at your dream company. And unlike grinding LeetCode in private, everything you do in public compounds: a GitHub contribution from three years ago can still land in a hiring manager’s search today. The portfolio you build isn’t just code; it’s proof you can collaborate, communicate, and ship. Bonus points: you’re also making the software engineering world a better place for everyone.
I’m very bullish on AI code assistants. It’s actually the second biggest reason I stepped down from being a director and went back to IC life. I’ll write more about that decision another time, but today I want to share how I’m using AI day-to-day—and, maybe more importantly, when it doesn’t quite fit.
My toolkit is pretty simple: VSCode, GitHub Copilot, Claude Sonnet, and occasionally KiloCode.
The hardest part of working with AI so far has been the waiting. When I’m in “agentic” mode—fire off a big prompt and wait—it kills my flow. If I don’t get an answer in five seconds, my brain is already somewhere else, usually for twice as long as the model took to respond. Because of that, I’ve built my routine around minimizing that problem:
“This is a temporary fix. I’ll come back to it later” – famous last words that every developer has uttered at least once.
We’ve all been there. You’re staring down a deadline, your PM is breathing down your neck, and you need to ship something that works. So you write a quick hack, slap a // TODO: refactor this ugly mess comment on top, and move on with your life.
Concurrency is one of Go’s greatest strengths, but managing goroutines effectively at scale can be challenging. When you have millions of tasks to process, spawning a goroutine for each one can quickly overwhelm your system with excessive memory usage and context switching overhead. This is where worker pools come to the rescue.
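To make the idea concrete, here is a minimal worker-pool sketch using nothing but the standard library. The pool size, the task type, and the fake processing step are all placeholders; the point is simply that a fixed number of goroutines drain a shared channel instead of spawning one goroutine per task.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const numWorkers = 8 // a small, fixed pool instead of one goroutine per task
	tasks := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for t := range tasks {
				// Stand-in for real work; replace with whatever each task needs.
				fmt.Printf("worker %d handled task %d\n", id, t)
			}
		}(i)
	}

	// Feed the queue, then close it so the workers know to exit.
	for i := 0; i < 100; i++ {
		tasks <- i
	}
	close(tasks)
	wg.Wait()
}
```

Because the number of goroutines is fixed, memory usage and scheduler pressure stay bounded no matter how many tasks flow through the channel.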
A few years ago, I created bigopool, a lightweight Go library that implements high-performance worker pools with elegant error and result handling. Today, I’m finally getting around to writing about it.
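I’ll dig into bigopool’s actual API in a follow-up. For now, here is a rough sketch of the error-and-result-handling pattern such a pool wraps up, again using only plain channels and a made-up job function rather than anything from the library itself.

```go
package main

import (
	"fmt"
	"sync"
)

// work is a made-up job: it doubles its input and fails on multiples of 7,
// purely so the example produces both results and errors.
func work(n int) (int, error) {
	if n%7 == 0 {
		return 0, fmt.Errorf("task %d failed", n)
	}
	return n * 2, nil
}

func main() {
	const numWorkers = 4
	const numTasks = 100

	tasks := make(chan int)
	results := make(chan int, numTasks) // buffered so workers never block on output
	errs := make(chan error, numTasks)

	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				r, err := work(t)
				if err != nil {
					errs <- err
					continue
				}
				results <- r
			}
		}()
	}

	for i := 1; i <= numTasks; i++ {
		tasks <- i
	}
	close(tasks)
	wg.Wait()
	close(results)
	close(errs)

	fmt.Printf("%d results, %d errors\n", len(results), len(errs))
}
```

The appeal of a library is that it hides this channel plumbing behind a small surface while keeping the same shape: submit work, collect results, collect errors.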