Microservices for the Cynical Software Engineer

In this blog post I will put on my Software Engineering hat and address some of the common concerns around microservices. If you have any additional concerns I did not cover in this blog post, please do leave a comment.

For software engineers and solution architects, this all sounds very familiar. For years we've been told to modularise our code, encourage encapsulation and separate our concerns. We've seen componentisation, domain-driven design and service-oriented architecture attempt, and in most cases fail, to revolutionise this space before. Yet here we have what appears to be another bastardised offspring singing the same tune. Naturally the technology vendors and press eagerly jump on the bandwagon in an attempt to sell you a vision of the 'oh so glorious' future. So why should you, as a software professional still recovering from embracing the cloud, which left your system straddling two data centres in some sort of Frankenbuild state, be tempted to re-architect yet again?

It’s a hard pill to swallow, but swallow we must: software is not immutable, far from it in fact. An evolving set of requirements and technologies forces us to adapt and flex, for better or for worse. Even the juggernauts of the web who sit on millions of lines of code, such as Google, Twitter and eBay, have iterated through a number of software architectures. If you fail to keep up, you will fail, full stop. This is the premise on which our process models, organisations and culture have evolved. Even if your application has remained stagnant but operational for years, you can bet that either the OS will get ripped out from under your feet, or somebody will slap you in the face with a new standard you fail to comply with.

I should point out at this stage that the adoption of new technologies should follow a pragmatic process of research, experimentation and comparative analysis… or just play with it.

So, what can we learn from the likes of Google, Twitter et al? Well, whatever architecture they started with, be that RoR, Django, etc., and whatever iterations they tried afterwards, they all gravitated towards a polyglot microservice architecture. In doing so they have each carved their own path, developing in-house technologies, languages and methodologies which are now beginning to seep out into the public domain. This has led to an ecosystem of competing, varied and in some senses quite raw technologies, which makes working with microservices today not as glamorous as one might have hoped. It is important to note that a lot of this work was done before the phrase ‘microservice’ was first coined, which really is just a label applied to distributed systems modelled around certain principles.

Why did they all end up heading in this direction? Ultimately, they were all trying to solve the same problem: scale. Scalability not only of their software, but also of their organisations. A microservice architecture encourages small teams to work on small code bases. This makes the software much more comprehensible for programmers, as they only have to understand a highly cohesive subset of the code. Consequently, organisations are no longer as reliant on 'guru' developers who are masters of a particular part of the code base. Breaking this dependency on individual coders allows the organisation to utilise developers much more efficiently, and allows developers the chance to pick up and drop projects with ease.

Ok, so why now? In recent years we have seen the emergence of entirely new workloads, ones that traditional software architecture was never designed to support. For instance, IoT has opened up a whole new market of innovation which has pushed our current architectures to their limits. Many have looked to cloud providers to support these hyper-scale workloads. However, we must realise that an inappropriate software design sat on top of infinite computing resources will still buckle under stress. As cloud platforms mature from IaaS shops to offer more distinct capabilities in the PaaS and SaaS world, we also need to rethink how we should effectively utilise third-party APIs as first-class citizens of our own systems. In a microservice architecture, third-party APIs are a natural extension of your own decomposed and distributed services.
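To make that last point concrete, here is a minimal sketch in Go of treating a third-party API as just another service behind a shared interface. The `WeatherService` abstraction and both endpoint URLs are hypothetical, purely for illustration; the point is that callers cannot tell whether the capability is served in-house or by someone else.

```go
package main

// Sketch only: treating an internal microservice and a third-party API as
// interchangeable implementations of the same capability. URLs are made up.

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// WeatherService could be backed by one of our own microservices or by a
// third-party provider; callers neither know nor care.
type WeatherService interface {
	CurrentTemp(city string) (float64, error)
}

// httpWeather wraps any HTTP endpoint that returns {"temp": <celsius>}.
type httpWeather struct {
	baseURL string
	client  *http.Client
}

func (w httpWeather) CurrentTemp(city string) (float64, error) {
	resp, err := w.client.Get(w.baseURL + "?city=" + city)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var body struct {
		Temp float64 `json:"temp"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}
	return body.Temp, nil
}

func main() {
	// Internal microservice and external SaaS API, consumed identically.
	internal := httpWeather{"http://weather.internal/api/temp", &http.Client{Timeout: 2 * time.Second}}
	thirdParty := httpWeather{"https://api.example-weather.com/v1/temp", &http.Client{Timeout: 2 * time.Second}}

	for _, svc := range []WeatherService{internal, thirdParty} {
		if temp, err := svc.CurrentTemp("London"); err == nil {
			fmt.Println(temp)
		}
	}
}
```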

Agility is another driving force in this space. Decomposing your system into a set of microservices supports isolated service deployments, isolated service failures, and live upgrades. Many microservice implementations follow the container mantra and statically link all dependencies, ensuring each microservice is portable and self-contained. This allows the organisation to respond to problems and exploit opportunities at a much faster rate than previously, when it would have needed to build, test and deploy a new version of the entire application. In this sense microservice architecture relies heavily on having a DevOps culture within your workplace.
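As an illustration of how small and self-contained a single service can be, here is a sketch in Go (the endpoints and domain are made up): the whole thing compiles to a single statically linked binary that can be containerised, deployed and rolled back independently of every other service.

```go
package main

// A minimal, self-contained microservice sketch: one domain slice, no shared
// dependencies, compiled into a single static binary. Endpoints are illustrative.

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Health endpoint so the platform can probe and restart this service in isolation.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	// The one slice of the domain this service owns, e.g. order handling.
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode([]string{"order-1", "order-2"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```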

What’s the catch? Like most things in life, there are trade-offs. By internalising simplicity within each microservice, you externalise complexity to the system as a whole. What I mean by that is that you now have to deal with all the complexities associated with distributed systems. What would have been a local method call now requires a service lookup and a network call. How do you manage the network latency, timeouts, service discovery, etc.? With a distributed system these are issues you now need to deal with. Fortunately, the juggernauts, along with many others, are releasing their prebuilt solutions to handle much of this complexity. Microsoft have recently released a preview of their offering in this space, called Service Fabric. Service Fabric is a platform which handles most of the complexity of working with a distributed system for you. This is a product of particular interest to me and I will be providing some useful blogs about getting started with it shortly.
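To give a feel for that externalised complexity, here is a rough Go sketch of what a former in-process call becomes once it crosses the network. The service URL and the retry policy are arbitrary placeholders; service discovery layers, circuit breakers and platforms such as Service Fabric exist precisely to take this boilerplate off your hands.

```go
package main

// Sketch of a former local call after decomposition: you now own timeouts,
// retries and failure handling. URL and retry numbers are placeholders.

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func getPrice(ctx context.Context, productID string) (string, error) {
	url := "http://pricing-service.local/price/" + productID // would come from service discovery

	var lastErr error
	for attempt := 0; attempt < 3; attempt++ { // naive fixed retry policy
		reqCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
		req, _ := http.NewRequestWithContext(reqCtx, http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			cancel()
			if readErr == nil {
				return string(body), nil
			}
			lastErr = readErr
		} else {
			lastErr = err
			cancel()
		}
		time.Sleep(100 * time.Millisecond) // back off before retrying
	}
	return "", fmt.Errorf("pricing service unavailable: %w", lastErr)
}

func main() {
	price, err := getPrice(context.Background(), "widget-42")
	if err != nil {
		fmt.Println("falling back to cached price:", err)
		return
	}
	fmt.Println("price:", price)
}
```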

How does this change how I write my program? You can still use the standard languages and tools you have been developing applications in for years. However, you may wish to consider some of the languages and frameworks which implement patterns such as the actor model. These, for example Erlang, Go and Akka, abstract away the network call by giving you a proxy client to program against. In many ways this makes it feel like you are writing normal code which will just operate locally, by masking the underlying complexities. There is a caveat however: using them often lulls you into a false sense of security, and you forget that the method you are calling x times a second in that tight loop is actually a network call and could bottleneck your system.
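A rough sketch of that trap, in Go: `Greeter` looks like an ordinary local dependency, but one implementation hides a network hop behind the same method signature (the remote URL is hypothetical). Run in a tight loop, the local-feeling proxy quietly turns into thousands of round trips.

```go
package main

// Sketch of the proxy-client trap: the same interface, one local and one
// remote implementation. The remote URL is hypothetical.

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

type Greeter interface {
	Greet(name string) string
}

// localGreeter answers in-process: nanoseconds per call.
type localGreeter struct{}

func (localGreeter) Greet(name string) string { return "hello " + name }

// remoteGreeter is the kind of proxy an actor framework generates for you:
// identical signature, but every call crosses the network.
type remoteGreeter struct{ url string }

func (g remoteGreeter) Greet(name string) string {
	resp, err := http.Get(g.url + "?name=" + name)
	if err != nil {
		return "hello " + name + " (fallback)"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func benchmark(g Greeter, label string) {
	start := time.Now()
	for i := 0; i < 1000; i++ { // the "tight loop" from the paragraph above
		g.Greet("world")
	}
	fmt.Printf("%s: 1000 calls in %v\n", label, time.Since(start))
}

func main() {
	benchmark(localGreeter{}, "local")
	benchmark(remoteGreeter{url: "http://greeter-service.local/greet"}, "remote proxy")
}
```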

Ok, ok, you've sold me, but isn't this just SOA? Yes and no. SOA was built on the same principles as microservices, and in that sense they share the same foundation. However, SOA came before the Cambrian explosion of REST APIs and relied on poorly defined interfaces and an ESB. Most implementations completely missed the point of SOA and ultimately created god services and tightly coupled components. SOA also never really enforced the encapsulation of state with the code, which led many people to create a set of services which accessed a shared database. This rendered many of the benefits of decomposing the system redundant. So to summarise, SOA had good intentions but was miserably executed; microservices learnt from those mistakes and refined the approach. Whether or not microservices will live up to their expectations will not become clear until the technologies mature and we start to see some more case studies. For now, I would strongly suggest experimenting with the technology and considering how you could break down your monolithic application into loosely coupled, highly cohesive microservices.