Sema Blog

Codebase Transparency in Practice: A Day-to-Day Look at Metrics-Equipped Development

Posted by Jason McInerney on Jan 16, 2019 11:48:16 AM

Last week I made the case for codebase transparency: why 21st-century dev leaders need real-time insight into the strengths and weaknesses of their codebase, and why new quality metrics are going to be the principal solution.  This week, I want to look at how this actually works in the day-to-day progress of a company. 

Case study 1: the regular meeting

It’s the first Wednesday of the month, and your dev team leads step away from the gentle hum of CPUs and caffeinated frontal lobes to gather in the meeting room, lattes and laptops in hand.  After the typical preliminaries you go around the room for progress reports.  It comes out that Sarah’s team is ahead of schedule, and Kai’s team is a week behind.  You have to decide how to respond, and whether to reallocate resources. 

This raises questions: is Sarah’s team ahead of schedule because they’re efficient and amply staffed, or because they’re working fast and sloppy?  Is Kai’s team struggling and in need of more or better engineers, or is the code high-quality and just coming together slowly?  In most companies, there is no way to resolve these questions beyond the assessments of the engineering leads – which are valuable, but inherently subjective.  The only objective measures you have are project timelines and maybe some functionality tests.  In practice, that means the deadlines dominate: Kai gets another engineer and some encouragement to stay on pace.

But imagine instead that the day before the meeting you glance at the leading indicators of code quality:

Sema Architectural Quality Metrics

Looking at these metrics, you can get answers to the questions above (maybe Kai’s team has just been doing the hard work early).  These metrics will fluctuate during a build, but if one project is nearly finished and the extendibility metric is still in the gutter, you know something is awry and can head off the problem early. 
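To make the idea concrete, here is a minimal sketch of how you might flag a project whose score sits well below its peers.  The metric names, scores, and threshold are invented for illustration – this is not Sema’s actual API, just the shape of the check:

```python
from statistics import mean, stdev

# Hypothetical per-project scores (0-100) for a few leading indicators
# of code quality; names and numbers are illustrative only.
projects = {
    "sarah_team": {"extendibility": 82, "readability": 78, "test_coverage": 76},
    "kai_team":   {"extendibility": 79, "readability": 85, "test_coverage": 81},
    "platform":   {"extendibility": 41, "readability": 80, "test_coverage": 77},
}

def outliers(projects, threshold=1.0):
    """Flag (project, metric, score) triples sitting well below the peer average."""
    flagged = []
    metrics = next(iter(projects.values())).keys()
    for metric in metrics:
        values = [scores[metric] for scores in projects.values()]
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # every project scored the same; nothing to flag
        for name, scores in projects.items():
            if (mu - scores[metric]) / sigma > threshold:
                flagged.append((name, metric, scores[metric]))
    return flagged

print(outliers(projects))  # → [('platform', 'extendibility', 41)]
```

The point isn’t the statistics – any sensible deviation measure works – it’s that a low extendibility score surfaces automatically, before the meeting, instead of being discovered in a post-mortem.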

Meanwhile, the team leads have been checking these metrics too and sharing the results with their teams.  In the meeting you discuss not only where each team is on their timeline, but also where their quality metrics are at – which may be fine and expected for reasons they can explain, or may indicate hacky work.  If you need to temporarily prioritize speed over readability, you can make that decision – but you’ll have it on record, and you can go back and fix things up later if desired.  You won’t have hidden technical debt lurking and waiting to crash a feature at the most inopportune time (as these things tend to do).

Case study 2: the update launch

Now you’re a week away from an update launch, and one feature is still a little glitchy.  Your approach to this will depend a lot on the particulars of the situation, but the process is always tricky and involved.  After all, you can’t just X-ray-vision the codebase and see which lines are causing the problem.

But a good suite of code quality metrics can get you halfway there.  To begin with, you can look straight into the packages and code files and see which have outlier quality metrics.  You can also look at a connections dashboard like this to see what interfaces with other features might be causing problems:

Sema Dependence Graph Details

No amount of data can magic away the glitches, but it will give you solid insights on where to start.  Even more importantly, these dashboards can alert you to potential problems that your product testing hasn’t detected yet.  Because your ability to find weak points in your code isn’t limited to functional testing, you can head off the bugs that users would otherwise be swamping the help lines with during the first two weeks of the launch.
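Under the hood, a connections view like this is just a graph walk: start at the glitchy feature and follow its dependencies outward.  Here’s a minimal sketch – the module names and edges are made up, and a real dashboard would build the graph from actual build or import data:

```python
from collections import deque

# Toy dependency graph; edges point from a module to the modules it calls.
# All module names are hypothetical.
deps = {
    "messaging": ["auth", "notifications"],
    "notifications": ["push_service"],
    "auth": ["user_store"],
    "billing": ["auth"],
    "push_service": [],
    "user_store": [],
}

def reachable(graph, start):
    """Breadth-first walk: every module the glitchy feature touches,
    directly or transitively, in the order we reach them."""
    seen, order = {start}, []
    queue = deque(graph.get(start, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        queue.extend(graph.get(node, []))
    return order

print(reachable(deps, "messaging"))
# → ['auth', 'notifications', 'user_store', 'push_service']
```

Cross-referencing that reachable set with the files showing outlier quality metrics gives you a short, prioritized list of places to look first.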

Once you’ve pinpointed which parts of the codebase need shoring up, you can also easily assess engineer performance to find the best people for the job (as well as who maybe shouldn’t have been on that job in the first place).

Sema Developer File Change Patterns

Individual engineer metrics like this let you see who spends more time writing their own code vs. tweaking their own code vs. tweaking others’ code, and in what languages, and what the resulting quality looks like.  If you’ve been working with a team for a long time, you may have a go-to engineer for Java code quality, or for pinch-hitting on bugs in your messaging feature.  But when you’re new in the company, or when you need to get a read on recently hired engineers, engineer metrics can really accelerate the learning curve.  Team leaders can also track how new engineers are prioritizing different types of work.  This transparency across multiple levels allows a much greater degree of control and optimization across the entire process, from pseudocode to launch.
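A rough sketch of how such change patterns could be tallied from a change log.  Everything here – the engineer names, the files, and the log format itself – is hypothetical; it only illustrates the kind of bookkeeping behind a chart like the one above:

```python
from collections import Counter, defaultdict

# Toy change log: (engineer, file, original author of the file).
# None means the file was created by this change.  All names invented.
changes = [
    ("ana", "pay.java", "ana"),
    ("ana", "msg.java", "raj"),
    ("raj", "msg.java", "raj"),
    ("raj", "pay.java", "ana"),
    ("ana", "ui.java", None),
]

def change_patterns(changes):
    """Tally, per engineer: new files written vs. edits to their own
    files vs. edits to files someone else originally wrote."""
    patterns = defaultdict(Counter)
    for engineer, _path, original_author in changes:
        if original_author is None:
            kind = "new code"
        elif original_author == engineer:
            kind = "own edits"
        else:
            kind = "others' edits"
        patterns[engineer][kind] += 1
    return {e: dict(c) for e, c in patterns.items()}

print(change_patterns(changes))
```

Split the same tallies by language or join them against per-file quality scores and you have the raw material for the engineer profiles described above.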

The future, for now

Here at Sema, we’ve developed a pretty comprehensive set of metrics for code quality and engineer profiles (the screenshots above are excerpted from our demo dashboards).  Ultimately, this is just the beginning of what we hope to do: we’re also working on AI-based automatic code maintenance, including the ability to revise existing code to make it more readable and extendable.  For now, though, metric-driven codebase transparency represents a major step forward in how we develop software.  

This is not to say that metrics are a substitute for skill and judgment.  Airplanes have had electronic flight instrument systems for decades, and we still rely on the judgment of human pilots.  But flight instruments do provide major gains in capability and reliability that no commercial pilot would voluntarily turn down. 

For the same reasons, we believe that metric-driven codebase transparency will become standard practice sooner rather than later.  And in the short term, early adopters will have a definitive advantage.

Read part one of this post here

Topics: metrics, codebase transparency