Dan Luu once remarked that schools, particularly STEM programs, don't teach debugging. Consider this my attempt.
STEM majors are by their nature problem-solving affairs. That inevitably involves debugging. A typical class for a STEM major is sitting through a lecture, attending TA hours, and then buckling down on some tough-as-nails assignment. It's in that assignment where you learn how to actually do the nuts-and-bolts math of Science™️.
This is possibly the most useful skill I picked up in engineering, and I wasn't even taught it. It's also, in my experience, the number one differentiating factor between boot camp graduates and university graduates.
So I decided to take a stab at teaching debugging. This will probably be the first of a few posts on the matter. The focus is on software engineering, but the general concepts are (probably) applicable to other engineering disciplines.
Big Point Number One: Focus on the Interfaces
Every engineering product is a system of inputs and outputs. When a problem arises, the first place we look is where the outputs don't line up as we'd expect for the given inputs. Finding the boundary where this occurs is the first step in debugging.
The debugging process starts like this: as you walk through your code, you'll notice where your assumptions -- and there are always assumptions you make while building -- went wrong, and correct as needed. The code will probably be organized into functions, objects, or micro services. You might jump into an interpreter and check that a function returns what you'd expect. If you're writing more involved code, you might turn that check into a test. This lets you verify the expected outputs for a given set of inputs quickly, across many functions, to make sure everything behaves as you'd expect.
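For instance, suppose you suspect a small parsing function mishandles some inputs. A minimal sketch of that workflow might look like this (the function and values are made up for illustration; the test is runnable with a runner like pytest):

```python
# Hypothetical function: strips whitespace and a leading "$", returns a float.
def parse_price(text):
    return float(text.strip().lstrip("$"))

# Quick spot check in the interpreter:
#   >>> parse_price("$19.99")
#   19.99
#   >>> parse_price(" $0.50 ")
#   0.5

# The same checks captured as a test, so they can be rerun any time.
def test_parse_price():
    assert parse_price("$19.99") == 19.99
    assert parse_price(" $0.50 ") == 0.5
```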
This may sound like Test-Driven Development (TDD), and you'd be right. In its most minimal form, TDD involves writing inputs and expected outputs as tests before writing a function. Then you write the function that "fixes" those tests. [1] TDD probably works because it centers on debugging inputs and outputs. I don't know if this was an intentional insight.
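As a sketch of that cadence (slugify here is a made-up example, not any particular library): the test comes first and fails because the function doesn't exist yet, then you write just enough code to make it pass.

```python
# Step 1: the test is written first and initially fails.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already Slugged  ") == "already-slugged"

# Step 2: the implementation, written to "fix" the failing test.
def slugify(title):
    return "-".join(title.strip().lower().split())
```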
Once you start to consciously think this way, it has a side effect on your work: you start organizing around discrete blocks that each do one thing for a given set of inputs and outputs. If each function, object, or service does only one thing, debugging gets much easier because you can narrow in on the issue. This is where software engineering fundamentally diverges from "write-once" scripting.
Let's consider a hypothetical scenario. You have a block of code that reads from a database, writes to a file, then computes an average. These really should be three separate functions, not one. The reason to separate them is to make debugging easier later on, most likely with tests. Even if you don't write tests, you can write small test programs or spot check in the interpreter. That may mean running the database query function and inspecting its output before checking how the other functions behave, as in the sketch below. By contrast, if all of this lives in one gigantic function, you'll need to hold a mental model of the whole program in your head while you step through it with a debugger.
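Here's roughly what that separation could look like (the query, table, file format, and names are all hypothetical):

```python
import csv
import sqlite3

def fetch_scores(db_path):
    """Read raw rows from the database. Easy to run on its own and inspect."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT name, score FROM results").fetchall()

def write_report(rows, out_path):
    """Write the rows to a CSV file. Can be checked by opening the file."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "score"])
        writer.writerows(rows)

def average_score(rows):
    """Compute the average score. Trivial to test with hand-written rows."""
    scores = [score for _, score in rows]
    return sum(scores) / len(scores)
```

Each piece can now be exercised on its own: hand average_score a hard-coded list of rows and you never need a real database to debug the math.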
This doesn't just apply to functions but also to program composition. Take a monolithic application. These programs have multiple components, but each is accessed through some function or object. This is the interface, the API, that end users interact with. Debugging will often start by looking at these interfaces: you see what went in and what came back. Relatedly, this is one reason people argue for statically typed programming languages: they save you from many of the errors that come up here. If your output is a different type than you expected, something probably went wrong.
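Even in Python you can get some of this at the interface with type hints and a static checker such as mypy (the function below is just an illustration):

```python
def mean(values: list[float]) -> float:
    # If this accidentally returned a string or None, a checker like mypy
    # would flag the mismatch at the interface before the code ever runs.
    if not values:
        return 0.0
    return sum(values) / len(values)
```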
Scaling up further, micro services have their own boundaries. That may be why people naturally reach for micro services as teams grow: the boundary between where your responsibilities stop and someone else's begin lies at the interface. Defining such interfaces explicitly is already a good practice.
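The same input/output thinking applies at a service boundary. A debugging session often starts by poking the interface directly, something like the sketch below (the URL and response fields are hypothetical):

```python
import requests

# Hypothetical endpoint; the point is to check the boundary directly.
resp = requests.get("https://api.example.com/v1/orders/42", timeout=5)

# First question: did the service respond the way its interface promises?
print(resp.status_code)           # expect 200
payload = resp.json()
print(payload.get("order_id"))    # expect 42
print(payload.get("status"))      # expect something like "shipped"
```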
tl;dr
Focus on the boundaries of inputs and outputs. That's where debugging starts. If you can confirm those are working as expected, you can look at other boundaries instead. Eventually you'll narrow it down to some faulty part that needs to be fixed.
Funny Anecdote
I studied materials science in university, which gives me a different background than most people in software engineering.
In my upper years, my classmates and I would work through assignments in Excel rather than on calculators. The reason was that we could lay out calculations as a series of steps and play with the values, which let us see where our math was going wrong much more quickly. It also made it easier to compare with classmates who may have used slightly different equations.
[1]: There does appear to be a no-true-Scotsman ideal of TDD, so this explanation may not satisfy everyone. In particular, strict TDD practitioners insist on always writing the tests first, one by one, in a test-fix-verify cadence. Nobody really does this in practice, at least in my experience. Sometimes it's writing down the "big ideas" for the tests I want first, then fixing them one by one. Other times it's sketching out the function ahead of time and writing tests after the fact. The latter approach is particularly useful when you don't quite know what you want your code to look like yet. At any rate, my explanation probably won't satisfy purists, but I think it's sufficient here.