“they asked me if I could develop some useful metrics for technical debt which could be surveyed relatively easily, ideally automatically”
This is where I would have said “no, that’s not possible”, or at least had a discussion about the risks: the things you simply can’t cover with automated metrics would lead to misdirection, and possibly to negative rather than positive consequences.
They then explore what technical debt is and notice that many things outside of technical debt also have a significant impact you can’t ignore. I’m quite disappointed they never come back to their metrics task. How did they finish it? Did they communicate and discuss all these broader concepts instead of implementing metrics?
There are some metrics you can implement on code: test coverage, complexity by various measures, function body length, and so on. But they only ever cover small aspects of technical debt. Consequently, they can’t be a foundation for (continuously) steering debt-payment efforts toward the most positive effect.
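To be fair, that kind of code-level metric is trivial to script. Here’s a minimal sketch of my own (not from the article), assuming Python 3.8+ and its ast module, reporting function length and a crude branch-count proxy for complexity. It also illustrates the limitation: nothing in it can see design debt, outdated dependencies, or any of the broader context the article drifts into.

```python
# Minimal sketch of an automatable code metric: per-function length and a
# rough branch-count proxy for cyclomatic complexity, read from the AST.
import ast
import sys


def rough_complexity(func):
    """Count branching constructs as a crude stand-in for cyclomatic complexity."""
    branch_types = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)
    return 1 + sum(isinstance(node, branch_types) for node in ast.walk(func))


def report(path):
    """Print length and rough complexity for every function in one source file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            print(f"{path}:{node.lineno} {node.name}: "
                  f"{length} lines, ~complexity {rough_complexity(node)}")


if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        report(source_file)
```

You could dump numbers like these into a dashboard easily enough; my point is that the dashboard would still tell you almost nothing about where the debt actually hurts.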
I know my projects and can make a list of issues, the effort each would take, and its impact, and we can prioritize those. But I find the idea of (automated) metrics entirely inappropriate for observing or steering technical debt.

Did trust signals change? Checking assumptions and the broader (project) context has always been part of my reviews. I don’t think polish ever implied understanding.