I’ve sometimes thought about where we are and where we came from in terms of programming. Programmers have a habit of falling for new and shiny things. So I read this article with some interest, as the authors pose a couple of interesting questions at the end. Namely,
Do we need a device driver layer? Do we need processes? Do we need virtual memory? Do we need a different security model? Do we need a kernel? Do we need libraries? Do we need installable packages? Do we need buffer management? Do we need file descriptors?
My answer, in short, would be a big “yes”… for the time being. Legacy may be a bad word in some circles, but it is simply where we came from. We could not have gotten here without going the way we came. Do we need a device driver layer, virtual memory, kernels, etc.?
Yeah, pretty much. Retooling would be a massively, mind-blowingly expensive operation if we removed those layers. So, in the short term (months to years) and the medium term (years to a decade), I don’t think you’ll see much of that stuff go away.
But that doesn’t mean we shouldn’t look for ways around it. I read an interesting article a while ago arguing that not a single current programming language will be able to run well on the processors of 10-20 years from now, because no language can handle the level of concurrency we will need to deal with. Hopefully I’ll have retired a rich man by then so I don’t have to learn how to do it. But at some point, our systems are going to be run differently.
I think that this is a great opportunity for the cloud. You can actually ramp up new ways of thinking quite easily in the cloud. I believe this to be true (right now, at least) because you can easily provision and de-provision systems that are completely separate from your end users. Someone could build an AMI (as an example) that implements this shiny new way of doing things, and it would not have to be widespread.
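The provision-and-de-provision idea can be sketched as a throwaway-environment pattern. This is a minimal illustration only: `FakeCloud` is a hypothetical stand-in for a real cloud SDK (with a real provider you would swap in actual launch/terminate calls, e.g. against an AMI id), and the instance ids are invented.

```python
from contextlib import contextmanager

class FakeCloud:
    """Hypothetical stand-in for a real cloud client; tracks running instances."""
    def __init__(self):
        self.running = set()
        self._next = 0

    def launch(self, image_id):
        # In a real provider this would boot an instance from the image.
        self._next += 1
        instance_id = f"i-{self._next:04d}"
        self.running.add(instance_id)
        return instance_id

    def terminate(self, instance_id):
        self.running.discard(instance_id)

@contextmanager
def throwaway_instance(cloud, image_id):
    """Provision an instance for an experiment, always de-provision after."""
    instance_id = cloud.launch(image_id)
    try:
        yield instance_id
    finally:
        cloud.terminate(instance_id)

cloud = FakeCloud()
with throwaway_instance(cloud, "ami-experimental") as iid:
    print("experimenting on", iid)
print("still running:", cloud.running)  # empty set: nothing left behind
```

The point is that the experimental system lives and dies entirely on your side; no end user ever sees it.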
Actually, I think that one of the biggest revolutions in the software industry has been the API revolution. What that means is that you can build part of your infrastructure on one technology while leaving the rest on another. You don’t need to commit to a technology. Given that we really don’t know what the world will look like in 20 years, that is probably a good thing.
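The “don’t commit to a technology” point can be shown in a few lines. This is a hypothetical sketch, not anything from the article: application code depends only on a small key-value API, so the in-memory backend below could later be swapped for, say, a Redis- or S3-backed class without touching the caller.

```python
from typing import Protocol

class KeyValueStore(Protocol):
    """The API the application commits to; the technology behind it is free to change."""
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class InMemoryStore:
    """One possible backend; a networked store with the same methods would also work."""
    def __init__(self):
        self._data = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str:
        return self._data[key]

def greet(store: KeyValueStore, user: str) -> str:
    # Application code talks to the API, not to any particular technology.
    store.put("last_user", user)
    return f"hello, {store.get('last_user')}"

print(greet(InMemoryStore(), "ada"))  # hello, ada
```

Replacing the backend means writing one new class against the same two methods; the rest of the infrastructure never knows.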