How Tech Leads Actually Review Code
(Turn Them Into Career Growth)
Code reviews are the most visible place to show tech lead thinking.
Think about it. Your manager doesn’t see most of your work. They don’t see you debugging at 11pm. They don’t see you helping a teammate over Slack. They don’t see you making smart architecture decisions in your head.
But your review comments? Those are written down. Visible to multiple people. Easy to share. Easy to reference in promotion discussions.
Most engineers waste this opportunity.
They check if the code compiles, tests pass, no obvious bugs. Type “LGTM” and move on.
Or they get thorough: “Fix this typo.” “Use const instead of let.” “This should be camelCase.” Then approve.
Neither approach shows strategic thinking.
The first says “I’m a rubber stamp.” The second says “I’m a human linter.”
Neither says “This person thinks like a tech lead.”
The Actual Difference
Senior engineer asks:
Does this code work?
Are there bugs?
Does it follow style?
Are there tests?
Tech lead asks:
Does this solve the RIGHT problem?
Will this work at 10x scale?
What happens when this breaks at 2am?
Can someone else maintain this in 6 months?
What technical debt are we creating?
But here’s what separates good tech leads from great ones: Great tech leads don’t just catch problems. They use reviews to help people grow.
Every PR is a chance to share context someone might be missing. To explain the “why” behind a pattern. To point someone toward a concept that’ll make them better.
That’s what we’re really talking about here.
Before You Start: Read the Room
Not every team wants strategic reviews.
Some teams optimize for speed. They want fast approvals, not thoughtful feedback. If you start leaving strategic comments and everyone’s annoyed you’re slowing things down, that’s data.
Watch how senior engineers and tech leads review:
Are they leaving thoughtful comments or quick LGTMs?
How long do PRs typically sit?
Do people engage with detailed feedback or ignore it?
What does your manager value—speed or quality?
If everyone’s doing “LGTM” reviews and PRs merge in 10 minutes, strategic reviews might make you look slow or annoying. Adjust accordingly.
This doesn’t mean give up. It means start smaller. One thoughtful comment instead of five. Pick your battles—only on significant PRs. Match the team’s pace. Or accept this might not be the right environment for you.
If your environment actively discourages thoughtful reviews, that’s important data for whether you can grow here.
The Relationship Factor
Here’s something nobody tells you: Strategic reviews only work if people trust you.
If you’re new to the team or haven’t built relationships, even great review comments can land badly. People think: “Who are they to tell me this?” or “They’re just showing off” or “They don’t understand our context.”
I learned this the hard way early in my career. I joined a new team, saw some code I thought could be better, and wrote what I thought was helpful feedback. Detailed. Thorough. Technically correct.
It did not go well.
The author got defensive. Other team members thought I was being arrogant. My manager had to smooth things over. I wasn’t wrong about the code—but I was wrong about the approach.
Build trust first:
Help people when they’re stuck
Be responsive on their PRs
Ask questions before giving advice
Acknowledge good work in reviews
Admit when you’re wrong
Once people know you’re genuinely trying to help, your strategic comments will be welcomed instead of resented.
If you’re new: Spend a month or two building relationships while practicing strategic thinking internally. Then start sharing your observations.
Four Things Tech Leads Actually Look For
Here are the four areas, the specific triggers to watch for, and how to turn each into a teaching moment.
1. Scale and Performance
Triggers:
See a database query? Think: “N+1 problem?”
See .map() or .filter() on an array? Think: “How big can this array get?”
See a loop inside a loop? Think: “What’s the complexity?”
See “load all” or “fetch all”? Think: “All of what? 100 rows or 100k?”
See caching? Think: “What happens when cache is stale?”
Example comment:
This works well for now. One thing to think about -
we're loading all users into memory. We have 500 today,
but expecting 50k by Q3. Might be worth paginating
early. Check /orders for how we did it there.
With a teaching moment:
Good approach here. Just flagging - this pattern bit us
hard at my last company when data grew. Query worked
fine for months, then started timing out randomly.
Paginating now would save us pain later. Happy to pair
on it if you want.
Or if you have a resource:
This is the N+1 problem - easy to miss, causes
weird issues later. This post explains it well: [link]
Not urgent, but worth fixing while we're here.
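If it helps to picture what that comment is asking for, here’s a minimal sketch of the “load all” trap next to a paginated version. ALL_USERS and fetchUsersPages are made-up stand-ins for whatever data layer you actually have; the point is that callers touch one bounded page at a time instead of the whole table.

```javascript
// Hypothetical data set standing in for a users table.
const ALL_USERS = Array.from({ length: 250 }, (_, i) => ({ id: i, active: i % 2 === 0 }));

// The risky version: pulls every row into memory at once.
// Fine at 500 rows, painful at 50k. Imagine SELECT * FROM users.
function loadAllUsers() {
  return ALL_USERS.slice();
}

// The paginated version: yields one bounded page at a time.
// Imagine LIMIT/OFFSET (or a cursor) under the hood.
function* fetchUsersPages(pageSize = 100) {
  for (let offset = 0; offset < ALL_USERS.length; offset += pageSize) {
    yield ALL_USERS.slice(offset, offset + pageSize);
  }
}

// Same result, but memory stays bounded by the page size.
let activeCount = 0;
for (const page of fetchUsersPages(100)) {
  activeCount += page.filter((u) => u.active).length;
}
console.log(activeCount); // 125
```

The generator is just one way to express it; the review comment only needs to make the bounded-page idea concrete.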
2. Failure Modes
Triggers:
See an external API call? Think: “What if it’s down?”
See a network request? Think: “What if it times out?”
See user input being used? Think: “What if it’s malformed?”
See async operations? Think: “What if they run in wrong order?”
See third-party service? Think: “What if they change their API?”
Example comment:
What happens if the payment service is down? User might
retry and create duplicate orders. Worth thinking through
the error case here.
With a teaching moment:
I learned this the hard way - had a rough incident because of
missing error handling on something just like this.
Try-catch plus an idempotency key would make this solid.
This Stripe post helped me understand the pattern: [link]
Or even simpler:
External calls need a plan for when they fail. What
should the user see if this times out?
I can walk you through the pattern we use if that helps.
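To make the idempotency-key idea concrete, here’s a minimal sketch assuming an in-memory store. createOrder, the key format, and the result shape are all hypothetical; a real system would persist keys with a TTL, but the dedup logic is the same.

```javascript
// Maps idempotency key -> the result of the first successful call.
const processed = new Map();

function createOrder(idempotencyKey, order) {
  // A retry with the same key returns the original result
  // instead of creating a duplicate order.
  if (processed.has(idempotencyKey)) {
    return processed.get(idempotencyKey);
  }
  const result = { orderId: processed.size + 1, ...order };
  processed.set(idempotencyKey, result);
  return result;
}

// The client times out, retries with the same key - no duplicate:
const first = createOrder("key-abc", { amount: 50 });
const retry = createOrder("key-abc", { amount: 50 });
console.log(first.orderId === retry.orderId); // true
```

The key itself usually comes from the client (generated once per user action), which is why a retry after a timeout is safe: the server recognizes it has already done the work.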
3. Maintainability
Triggers:
See clever code? Think: “Can I understand this quickly?”
See magic numbers? Think: “What does 86400 mean?”
See a complex condition? Think: “Can this be named?”
See copy-pasted code? Think: “Should this be extracted?”
See no comments? Think: “Will this be obvious in 6 months?”
Example comment:
This works, but it took me a few reads to follow.
Future us debugging this at 2am will struggle.
What about splitting it up?
const activeUsers = users.filter(u => u.active);
const total = activeUsers.reduce((sum, u) => sum + u.total, 0);
With a teaching moment:
Clever solution. I used to write code like this too.
Breaking it into steps makes each line do one thing.
Easier to spot where things go wrong.
Or:
I can follow this, but it takes effort. Usually that
means it'll be hard to maintain later.
Not blocking - just something to consider. Simpler is
almost always better for code that will live a while.
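A small sketch of what “name the magic number, name the condition” looks like in practice. The user shape and the seven-day threshold are invented for illustration; the point is that the named intermediates read like the review comment you’d otherwise have to write.

```javascript
// Instead of a bare 86400 scattered through the code:
const SECONDS_PER_DAY = 86400;

function isEligibleForReminder(user, nowSeconds) {
  const accountAgeDays = (nowSeconds - user.createdAt) / SECONDS_PER_DAY;
  // Each named condition replaces one clause of a long && chain.
  const isEstablished = accountAgeDays >= 7;
  const hasOptedIn = user.emailOptIn && !user.unsubscribed;
  return isEstablished && hasOptedIn;
}

const now = 30 * SECONDS_PER_DAY;
const user = { createdAt: 10 * SECONDS_PER_DAY, emailOptIn: true, unsubscribed: false };
console.log(isEligibleForReminder(user, now)); // true
```

Future you debugging at 2am can log isEstablished and hasOptedIn separately, which a one-line condition doesn’t allow.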
4. Architecture Consistency
Triggers:
See a new pattern? Think: “Do we already have a pattern for this?”
See error handling? Think: “How do we usually handle errors?”
See state management? Think: “Does this match our approach?”
See a new abstraction? Think: “Is this the right level?”
See a different file structure? Think: “Does this fit our organization?”
Example comment:
This handles errors differently than our other endpoints.
Any reason not to use ErrorHandler? Check /api/orders
for an example.
With a teaching moment:
We usually use ErrorHandler for this. Consistency helps
when someone is debugging at 3am and assumes all our
endpoints work the same way.
If ErrorHandler doesn't fit here, we should probably
update it so everyone benefits.
Or:
We have a pattern for this - our architecture doc
explains the thinking: [link]
Not blocking, but worth keeping things consistent.
Makes life easier for everyone later.
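If your team doesn’t have an ErrorHandler yet, the pattern behind that comment is often just a shared wrapper. This is a hypothetical sketch, not any particular framework’s API: withErrorHandler and the response shape are assumptions, and real code would distinguish error types rather than map everything to 500.

```javascript
// One place decides the shape of every error response,
// so every endpoint fails the same way.
function withErrorHandler(handler) {
  return function wrapped(req) {
    try {
      return { status: 200, body: handler(req) };
    } catch (err) {
      return { status: 500, body: { error: err.message } };
    }
  };
}

// Endpoints just throw; they never hand-roll error responses.
const getOrder = withErrorHandler((req) => {
  if (!req.orderId) throw new Error("missing orderId");
  return { orderId: req.orderId };
});

const ok = getOrder({ orderId: 7 });
const bad = getOrder({});
console.log(ok.status, bad.status); // 200 500
```

The payoff is exactly what the comment says: whoever is debugging at 3am can assume every endpoint reports failures the same way.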
Sharing Resources Without Being Annoying
When you share an article, don’t just drop a link. Give context.
Bad:
You should read this: [link]
Feels like homework. They won’t click it.
Better:
This explains it well: [link]
The part about timeouts is what matters here.
Best:
I struggled with this same thing until I found this: [link]
Section 3 clicked for me. Might help here too.
Now it’s a recommendation from someone who’s been there, not an assignment.
Resources worth bookmarking:
Keep a few go-to links for common issues:
Your team’s past incident reports (nothing teaches like real pain)
Internal architecture docs
One good article on N+1 queries
One good article on error handling patterns
Your team’s style guide
When you see the same problem twice, you’ll have something ready to share.
The Ego Trap
Here’s the uncomfortable truth: It’s very easy to use strategic reviews to show off instead of help.
You know you’re in the ego trap when:
You’re commenting to prove you’re smart, not to help them
You’re nitpicking to show you know more
You’re finding problems that don’t really matter
You’re commenting because you can, not because you should
Your review is more about you than the code
The test: Before posting any comment, ask yourself: “Does this genuinely help the author or prevent a real problem? Or am I just showing off that I know about this?”
If it’s the second one, delete it.
I catch myself doing this sometimes. I’ll write a comment, then realize I’m basically saying “look how much I know about distributed systems.” Delete. Nobody needs that.
Good strategic reviews make the author feel: “Oh, I didn’t think about that—good catch” or “That’s a helpful perspective” or “I learned something.”
Bad strategic reviews make the author feel: “This person is nitpicking” or “They’re trying to look smart” or “They don’t trust me.”
The difference is often tone and intent. Check yourself.
When Strategic Reviews Don’t Work
Let’s be honest about failure modes:
1. Everyone ignores your reviews
You leave thoughtful comments. Nobody responds. PRs merge without discussion.
This doesn’t mean your reviews are bad. It means your environment doesn’t value this kind of feedback, or people are too busy to engage. Document that you tried (for your brag doc). Keep doing good work. But recognize this is data about your environment, not your skills.
2. You slow things down too much
Your reviews take 30 minutes. PRs sit waiting for you. People are frustrated.
Speed up. Not every PR needs strategic review. Do quick approvals on small changes. Save the teaching moments for significant PRs and people you’re actively mentoring.
3. People think you’re being difficult
Your comments get pushback. People argue. You’re getting a reputation as a blocker.
Check the ego trap. Are you helping or showing off? Also check if your tone is friendly or critical. Maybe you need to build more trust first.
4. You’re wrong sometimes
You leave a comment about scale. Author points out you missed something. Your suggestion doesn’t work.
Admit it. “Oh good catch, I didn’t think about that. Thanks for explaining.” Being wrong and gracious about it builds more trust than being right and smug about it.
Actually, being wrong publicly and handling it well is a superpower. People remember that. It makes them more likely to listen next time because they know you’re not just defending your ego.
5. They’re wrong and won’t admit it
You raise a valid concern. They push back. You’re pretty sure you’re right, but they’re getting defensive.
This is tricky. A few options:
Ask more questions. “Help me understand—what happens in this scenario?” Sometimes walking through it together helps them see the issue without you having to say “you’re wrong.”
Bring in another perspective. “I might be missing something. Mind if we get [senior person]’s take?” This isn’t about escalating—it’s about getting more eyes on a tricky problem.
Let it go (with a paper trail). If it’s not critical, approve with a comment like “I still have concerns about X, but let’s see how it plays out.” If it breaks later, you’ve documented your concern without being a blocker. And honestly? Sometimes they’re right and you’re wrong. You won’t always know in the moment.
Escalate if it matters. If this could cause a real outage or security issue, bring it to your manager or tech lead. This should be rare. If you’re escalating every week, something else is broken.
The goal isn’t to win. It’s to make sure important concerns get addressed. Sometimes that means accepting you won’t convince them this time.
What To Do Monday
Step 1: Watch your team for 2-3 days
How do senior engineers review? How long do PRs sit? Do people engage with detailed feedback? This tells you what’s realistic here.
Step 2: Pick one focus area
Scale, failure modes, maintainability, or architecture. Just one. Watch for the specific triggers listed above.
Step 3: Build your resource library
Find 2-3 good articles or docs for your focus area. Internal docs, blog posts, book chapters. Things you can share when you see relevant issues.
Step 4: On your next substantial PR review
Use your normal process first (correctness, tests, bugs). Then spend 10 extra minutes on your focus area. Leave ONE comment that:
Points out a potential issue
Explains why it matters (not just what’s wrong)
Offers a resource or example if you have one
Ends with a question or offer to help
Step 5: Notice the response
Do they engage? Great, keep going. Do they ignore it? That’s data about your environment. Do they push back? Check your tone and intent.
That’s it. One strategic comment per review. Build the habit.
The Long Game
Here’s what happens when you do this consistently:
People start coming to you with architecture questions before they write the code. Because they know you’ll think about scale and failure modes, and they’d rather get that input early.
Junior engineers specifically request you as a reviewer. Because they learn something every time.
Your comments get referenced in other PRs. “We should handle errors like [your name] suggested in that other PR.”
When promotion discussions happen, people remember the time you prevented a production issue through a thoughtful review. They remember that article you shared that helped someone level up. They remember that you helped them grow.
They don’t remember your “LGTM”s.
Code reviews aren’t just about catching bugs. They’re about building your reputation, helping your team grow, and showing that you think like a tech lead.
One comment at a time.