When it comes to the modern world of software, one thing above all has become clear to me over the last 20 years: on a long enough timeline, failure is virtually guaranteed. That being the case, how our software anticipates, adjusts for, and recovers from failure is clearly its most important characteristic.
That's not to say that all modern software is garbage, but in the ways that actually matter, that largely appears to be the case. The cold, hard reality is that as requirements and technology have advanced, software developers have struggled to keep up. Despite the rapid rate of advancement on the tooling side, there is still a finite limit to the complexity a single human being can grapple with mentally. The specific limit varies from developer to developer, but no matter how good anybody is, there is no debate: we all have one.
Having spent the last year writing and maintaining a load-test framework, a huge portion of my time has gone into optimizing the performance of my own software. (This software provides live results to onlookers via a web application by collating the results of hundreds of thousands of requests per minute.) Much of that time has been spent multi-threading code that was originally written to run serially, tweaking mundane details like how we calculate percentiles, and adding feature after feature. Through all of this I have come to realize that there is a limit to the amount of complexity I can hold in my head at any one time. Some days I honestly feel like Dr. Frankenstein staring powerlessly at his creation, regardless of whether it happens to be spreading joy or misery at the moment.
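To make the percentile tweak concrete, here is a minimal sketch of the kind of calculation involved. This is not the framework's actual code; the function name and the nearest-rank method are my own assumptions, and a real system handling hundreds of thousands of samples per minute might prefer a streaming estimator over sorting everything.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile over a list of latency samples.

    Hypothetical sketch: sorts the full sample list, then picks the
    value at rank ceil(pct/100 * N) (1-indexed). Simple and exact,
    but O(n log n) per call, which is one reason a production
    framework might switch to an incremental estimator instead.
    """
    if not samples:
        raise ValueError("no samples to summarize")
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Example: ten response times in milliseconds.
latencies = [120, 85, 340, 95, 110, 210, 150, 99, 101, 130]
p95 = percentile(latencies, 95)  # worst-case tail value in this sample
```

Even a detail this mundane has edge cases (empty input, pct of 0 or 100, interpolation vs. nearest rank), which is exactly the sort of thing that quietly eats optimization time.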
Why is this? In all honesty, my creation has grown and been optimized to the point that I can only grapple with a reasonably large but incomplete portion of it in my head at any given time. As more optimizations are made and more features are added, this situation is only going to get worse. The primary way to hedge against that inevitable outcome is to design software in modules, so that you can work on discrete components semi-independently, and then use automated testing (unit and integration tests) to positively and negatively cover the known scenarios, common and edge case alike.
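As a hedged illustration of that modular approach (the function and test cases here are hypothetical, not taken from the framework): if a small piece of the results pipeline, say merging per-endpoint request counts, lives in its own discrete unit, it can be exercised with positive and edge cases without standing up the rest of the system.

```python
import unittest

def merge_counts(a, b):
    """Combine two per-endpoint request-count dicts into one.

    A deliberately tiny, self-contained component: no I/O, no shared
    state, so it can be unit tested in complete isolation.
    """
    merged = dict(a)
    for key, count in b.items():
        merged[key] = merged.get(key, 0) + count
    return merged

class MergeCountsTest(unittest.TestCase):
    def test_disjoint_keys(self):
        # Positive case: counts from different endpoints pass through.
        self.assertEqual(merge_counts({"/login": 2}, {"/home": 3}),
                         {"/login": 2, "/home": 3})

    def test_overlapping_keys(self):
        # Positive case: counts for the same endpoint accumulate.
        self.assertEqual(merge_counts({"/a": 1}, {"/a": 4}), {"/a": 5})

    def test_empty_inputs(self):
        # Edge case: nothing in, nothing out.
        self.assertEqual(merge_counts({}, {}), {})

if __name__ == "__main__":
    unittest.main(exit=False)
```

The payoff is less about any single test and more about the shape it forces: once a component is small enough to test this way, it is also small enough to reason about when the rest of the system has outgrown your head.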
Sadly, I have neither integration nor unit tests. I'd like to have them, but we're a relatively small team (2 devs, 1 QA, 1 performance engineer) expected to demo new features at the end of every sprint, on top of addressing a seemingly endless list of feature requests from internal customers, and that has relegated these desires to the backlog. Is that the right choice? Probably not. But given that combination of factors, it is currently the only feasible approach.
This situation scares me because, here too, failure is virtually guaranteed on a long enough timeline, and our current approach isn't adequately addressing the obvious failure modes, much less the obscure ones. The reality is that when software is born it tends to be small, fast, and focused on solving a single problem. Over time, short of real discipline and pushback on the developers' side of the fence, it will inevitably grow beyond that. One day that growth will mutate the would-be prodigy into a monstrosity.
The question that remains is: how well prepared are you to do battle with your monstrosity? With the proper preparation you can hedge your bets effectively while enabling less knowledgeable personnel to contribute positively to your code base. Without that preparation, you will be at the mercy of a multitude of factors you have no direct control over, and one day they absolutely will overtake you.
This is something we as a team are grappling with right now, and we ultimately hope to come out on top. Until then, I hope that sharing this proves at least as educational for you as it has been therapeutic for me.