I used to have a trick when I was job hunting. The interviewers would drill me on all the hot problems in coding and clever new techniques in data science and machine learning. Then, when my turn came, I’d ask targeted questions about how they were employing those techniques in a specific system. As soon as I saw smirks, sideways glances, and the word “well…” forming on someone’s lips, I knew I was in. It was my specialty: bug squashing, refactoring, modernizing, and wiping out tech debt with feel-good newness.
Tech debt was shrouding their dreams.
It’s far too easy to get into some form of technical debt. You write a query to pull some data out of a database. It’s a one-off -- no need to add indexes that might speed it up, no need to even save the query. A few months later, you need to expose that query via an API, so you wrap it in a driver API call and throw it on a server. Next, you have some dynamic input that needs to go in through the API requests. The query works, so you fudge it a bit to properly place the dynamic data. Now would be an excellent time to make the query its own function and add a little more smarts behind it, but you are working on some really critical features.
So you wait.
Then, suddenly, you find that everyone needs to hit that API, the incoming data might change, and the query is starting to slow down as the database grows. Other features depend on it, but a series of choices made over time has put off the optimal implementation.
These are not bugs. You have incurred tech debt, and it has come due.
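As an illustration of how those small choices look in code, here is a minimal Python sketch with hypothetical table, field, and function names, not taken from any real system:

```python
# A minimal sketch of the scenario above (names are hypothetical, invented for
# illustration). The one-off query grows dynamic input spliced straight into
# the SQL string -- the "fudge" -- instead of being refactored into a
# parameterized query function with supporting indexes.

def orders_query(region):
    # Dynamic data placed directly in the string: it works today, but it is
    # unindexed, unparameterized, and injectable -- tech debt accruing quietly.
    return f"SELECT id, total FROM orders WHERE region = '{region}'"

def handle_request(params):
    # The API wrapper simply forwards request input into the query text.
    return orders_query(params.get("region", "all"))
```

Each step was locally reasonable; the debt is the sum of them.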
That’s a trivial example, easy to solve. Now imagine the same scenario with five, fifty, or five hundred developers, all making their own technology choices every day across many codebases. Yes, testing is critical, but not sufficient to detect rising tech debt and diminishing code quality.
Tech debt doesn’t have a set interest rate. It may lie dormant for years, only to surface as a significant performance bottleneck no one anticipated. Your tech debt decisions might be better or worse than someone else’s, and the two choices together could be either catastrophic or completely unrelated. Tiny decisions compound over time in unanticipated ways.
The drain of tech debt is hard to measure, and the longer the debt remains, the more the perception of harmlessness grows. “If it ain’t broke, don’t fix it” is the mantra of many mature enterprise software companies. Lack of visibility prevents executive-level decisions around clearing tech debt. Developers know, directors have a feeling, CTOs might -- or might not -- be aware.
Teams struggle to anticipate the future problems tech debt may cause, and to weigh the benefits of investing in improvements against the risk of waiting. It is hard to assess, and harder to explain.
The only way out is to measure it.
Get out and stay out
Much like the student loans that got you into your developer position, tech debt is nearly impossible to avoid, and getting out of it takes time and effort. However, your code contains clear indicators that tech debt is building, which can help you create a schedule and a budget and stick to them. These same indicators also tell you how well you’re doing at getting out of tech debt.
When you need to assess how something changes over time or in response to another factor, you formulate some metrics that quantify that thing at a point in time, and then keep recording those metrics to watch it change. You see this in company revenue, startup KPIs, weight management, your bank account, and in every dashboard ever created. The same strategy can be applied to code quality.
- Analyzers provide excellent insight into potential issues. The output from static and dynamic analyzers is often used to gate the build process, guide developers on style and standards, and flag issues in the codebase at any given time. It can serve as a metric as well. Combining these results into a metric can be as simple as summed counts and weighted categories, or as complex as probabilistic models that span multiple analyzers and historical data to predict code quality hot spots and the likelihood of tech debt accumulating in particular regions of the code.
- Quality frameworks offer conceptualized measures of quality under which reference implementations produce numbers to compare over time. QMOOD is one example; it explicitly defines a set of key metrics that roll up into six core attributes of quality. These can be tracked over time and compared to quickly see tradeoffs among attributes like understandability and extendibility.
- Architectural code quality tools like PASTA examine dependencies and their cycles, which can be used to measure quality changes in the code’s structure. Diffs of the architectural graph can quickly reveal changing dependencies, growing package complexity, and candidates for merging or splitting.
- Process metrics for the development team are paramount to good cadence and happy developers. When linked to code changes and developer contributions, they also correlate with changes in code quality. Avenues for change can be realized quickly if a broader process metric puts the focus on tech debt.
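To make the first of these concrete, here is a toy sketch of the “summed counts and weighted categories” approach; the severity weights and finding counts are assumptions for illustration, not drawn from any particular analyzer or framework:

```python
# Toy roll-up of analyzer findings into a single per-sprint score (weights and
# counts are illustrative assumptions, not from any real tool).
WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

def quality_score(findings):
    """findings: iterable of (severity, count) pairs from one or more analyzers."""
    return sum(WEIGHTS[severity] * count for severity, count in findings)

# Recording the score each sprint turns analyzer output into a trackable metric.
sprint_12 = quality_score([("critical", 2), ("major", 7), ("minor", 40)])  # 81
sprint_13 = quality_score([("critical", 1), ("major", 9), ("minor", 35)])  # 72
trend = sprint_13 - sprint_12  # negative: weighted findings fell, debt shrinking
```

The absolute number matters less than its direction over successive sprints.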
Gathering metrics from multiple sources every sprint or release can give you clear insight into code quality, and into the trends that lead to insidious tech debt. It can also show the entire organization the value of reducing tech debt and improving code quality.
Contact us at firstname.lastname@example.org to learn more about the Sema Quality Framework.