Do you still remember the BSoD (Blue Screen of Death)? Back when Windows crashed because software couldn't handle a hardware failure, users cursed software developers for shipping a bad product. But has anything changed since then? Are there new technologies that gracefully handle failures and spare users the disappointment? Or have software developers matured to the point where they never release programs that simply don't work?
We're tightly surrounded by technology these days, and whenever I'm about to buy a new item I wonder whether it will have a microprocessor and software running on it. Recently I upgraded my TV from a CRT to an LED and, guess what - now it's called "smart", which probably means I'm dumb compared to it. Before the TV there was the smartphone and the smart oven, and I see smart cars, smart robots, smart nano-medicine, and smart prosthetic limbs coming along...
And I wasn't so surprised to find out how buggy smart devices are - I spent nearly a week trying to configure my laptop and desktop PC to stream media files to the TV. Along the way my HDD disappeared from the drive list, the TV simply rebooted (its software probably crashed), and many more glitches made me tear my hair out in desperation.

Software is complex, and there's no reason to believe it will get simpler any time soon. In fact, complexity cannot be reduced; it can only change form, or be moved elsewhere by introducing abstractions. Testing can cover only the main scenarios plus a few alternative and error scenarios, but one ordinary user can typically think of a use case no tester has ever dreamed of.

There is a paradigm known as "crash early", which basically states that one should never try to handle an error whose origin is unknown. Sooner or later software encounters an unexpected exception that goes unhandled because no code knows how to handle it, and the program crashes (hence the name: "_crash_ early"). The paradigm is good for debugging but frustrating for the user, which is why some developers instead decide to catch all errors and silently ignore them. Perhaps that's even worse: errors happen for a reason, and a user gets even more upset when clicking a button does nothing at all.
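The contrast between the two styles can be sketched in a few lines of Python. This is a toy illustration, not code from any real product; the function names and the config-parsing scenario are invented for the example.

```python
def parse_setting_crash_early(line):
    """'Crash early' style: validate input up front and fail loudly
    with a clear message instead of limping on with bad state."""
    if "=" not in line:
        # We don't know how to recover from this, so we refuse to guess.
        raise ValueError(f"malformed setting line: {line!r}")
    key, _, value = line.partition("=")
    return key.strip(), value.strip()


def parse_setting_swallow_errors(line):
    """The 'catch everything and silently ignore it' anti-pattern:
    the caller gets None and never learns why nothing happened."""
    try:
        if "=" not in line:
            raise ValueError(f"malformed setting line: {line!r}")
        key, _, value = line.partition("=")
        return key.strip(), value.strip()
    except Exception:
        return None  # the error, and its reason, vanish here


print(parse_setting_crash_early("volume = 7"))    # ('volume', '7')
print(parse_setting_swallow_errors("garbage"))    # None - but why?
```

With the first function a bad input produces an immediate, debuggable traceback; with the second, the program keeps running and the user is left clicking a button that silently does nothing.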
So, what happens when software approaches infinite complexity? I hope we never reach that point, but if this fate is unavoidable, then I suspect software development as a concept or technology will cease to exist. It's hard to imagine what comes next; I foresee a kind of "smart components" capable of decision-making and self-testing, interacting with each other to build an algorithmic system that solves a defined problem. Perhaps next-generation neural networks, or self-aware computer systems?
I definitely know one system of infinite complexity: our universe. Can you imagine a software system that would make this universe run? ;)