
September 25, 2018 / by Ian David Rossi / In microservices, continuous-delivery

To Be Or Not To Be A Microservice - Part 2

EVERYONE is doing microservices these days, along with cloud native everything. But did you know that Thoughtworks, one of the most reputable thought leaders in the software development industry, does not recommend adopting the microservices architecture as of this year? Surprising, huh? Well, why? When the answer comes, you’ll say “Wow, that makes a lot of sense.”

Ready for it?

Most organizations are just not ready. Microservices bring a lot more flexibility, particularly in delivery, but flexibility's constant companion is never far behind: complexity. This is why I like to say that microservices are all or nothing. They have to be done right, and very carefully, in order to succeed.

In “To Be Or Not To Be A Microservice - Part 1”, I raised the question, “What is a microservice?” and answered it by saying that the definition doesn’t really matter as much as the approach taken in delivering your organization’s microservices. If people in your org are asking this question after a microservices project has already begun, something is really wrong. The microservices architecture is a holistic approach to delivering software. It must be spearheaded by qualified and capable individuals at a high level in the organization. The leadership must have a strong opinion about what the approach is and be willing to set and enforce standards around how individual microservices are delivered to the project.

This is where tooling and scalability play a significant role.

The Delivery Process

Since a microservice will become part of a greater whole once it reaches production, it is imperative that each microservice be delivered in a uniform way. While human checks and balances have worked out OK for a long time, automation has been able to provide a far higher level of quality. Therefore, the use of automated tools is essential in a standardized delivery process. Tools like Jenkins, Chef and RunDeck have successfully automated steps that were previously manual and have empowered teams with confidence in reliable, automated processes.

However, there can be a snare in using these tools. The tools are only there to support a process. If there is no process and the tools are still invoked, you can end up with a lot of problems. We’ve all seen the “We need Jenkins” or “We need Chef” scenario play out. If the team (or the team’s leadership) hasn’t yet agreed on what their process will look like, and someone starts writing automation code, the results can be surprising and sad. This is bad enough when you have a single team delivering a handful of services. Now imagine it happening on a microservices project with 100 microservices! Imagine that you have 20 teams delivering all 100 of these microservices, and the teams have not yet agreed on what the delivery process looks like. Take testing and deployment, for example: some teams have tests, others don’t. Some teams deploy to “dev”, “staging” and “prod” environments while others deploy only to “staging” and “prod”.
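To make that concrete, here is a minimal sketch of what capturing an agreed-upon standard in code might look like, so that every team's pipeline is checked against the same rules rather than each team inventing its own. The stage names, environment names, and module are hypothetical, not a prescription:

```python
# delivery_standard.py - a hypothetical, organization-wide delivery standard.
# Every microservice pipeline is validated against the same rules, so no team
# can silently skip tests or invent its own environment names.

REQUIRED_STAGES = ["build", "unit-test", "integration-test", "deploy"]
REQUIRED_ENVIRONMENTS = ["dev", "staging", "prod"]


def validate_pipeline(stages, environments):
    """Return a list of violations for one service's pipeline definition."""
    violations = []
    for stage in REQUIRED_STAGES:
        if stage not in stages:
            violations.append(f"missing required stage: {stage}")
    for env in REQUIRED_ENVIRONMENTS:
        if env not in environments:
            violations.append(f"missing required environment: {env}")
    return violations


if __name__ == "__main__":
    # A team that skipped integration tests and never deploys to "dev".
    print(validate_pipeline(
        stages=["build", "unit-test", "deploy"],
        environments=["staging", "prod"],
    ))
```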

Now imagine that all of these microservices have…

Dependencies

…on each other. How do you test them? When you do test a microservice that has a dependency on other microservices, how do you know which environment to find your dependency in? Some organizations seem to have early wins in a microservices project…until they have to start managing dependencies. This is one of the big reasons why a microservices project calls for a standardized (and governed) delivery process.

If all microservices follow the same rules and standards, and there is some order, then governance, as well as the tooling that enforces it, becomes possible. For example, if all microservices march through the same named environments on their way to production, they can easily discover their dependencies when needed. Common processes and standards are the lifeblood of dependency management.
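As a rough illustration, if every service is reachable at a predictable address in each shared, named environment, resolving a dependency can be as simple as building its URL from the environment you are running in. The URL pattern, environment variable, and service name below are assumptions made only for the sake of the sketch:

```python
# discovery.py - convention-based dependency discovery, sketched in Python.
# Assumes every microservice is reachable at a predictable address in each
# shared, named environment, e.g. https://orders.staging.example.internal.
import os

URL_PATTERN = "https://{service}.{environment}.example.internal"


def dependency_url(service: str) -> str:
    """Resolve a dependency's base URL from the environment this service runs in."""
    environment = os.environ.get("DEPLOY_ENV", "dev")
    return URL_PATTERN.format(service=service, environment=environment)


# A service deployed to "staging" finds its dependencies in "staging" too,
# because every service marches through the same named environments.
print(dependency_url("orders"))
```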

Tooling

While tooling is indispensable for proper delivery of microservices, it is not the sole answer to the automation question. Tooling starts with process and means nothing without it; the two are really one and the same. There are two ways that tooling can be developed:

  1. Write the tooling and the process at the same time
  2. Write the process, then write the tooling

Which one do you think makes more sense? When #1 is taking place, the process is usually being written by one person (the tooling developer), or maybe a few, and there isn’t really mutual understanding or agreement throughout the organization regarding the process that the tooling supports.

I witnessed one organization where there was a whole “DevOps team” that was responsible for writing tooling to deliver microservices. The leadership that supervised the DevOps team didn’t provide any process, or even principles to build a process around. Nor did the leadership give the DevOps team the authority to impose a process on the development teams. What was the outcome? Each development team (or even individual developers) used their own process and asked the DevOps team to write tooling to support it. The end result was that the DevOps team was writing and supporting a huge number of tools, each of which had only one or two users. There was no uniformity, and the DevOps team was unnecessarily large, with most of it made up of “support engineers” putting out fires.

With #2 above, where the process is established before tooling is created, the organization agrees on a process and then writes tools to support it. This provides for an optimal developer experience, one that is uniform across the organization, making it much easier for teams to communicate and collaborate.

Choice of Tooling

Another important aspect of tooling is choice. There are many tools out there, and since the cloud native movement is burgeoning, they become more plentiful as time goes by. Many are quite opinionated. So how do you know what to choose? Some choices are obvious. For example, Kubernetes has become the go-to container orchestration tool. It’s a fairly easy decision to choose it over Mesos or Docker Swarm. But what about a “package manager”, like Helm? There are many tools in the same space. Draft, Gitkube, Ksonnet, Metaparticle, Skaffold and probably even more. How do you choose?

Here’s where I like to raise another question and then give the same answer: it doesn’t really matter, as long as the tool fulfills your technical requirements. Most of these projects have good documentation that will allow any decent engineer to use them efficiently.

But I would like to draw attention to something I believe is more important: creating custom tools. I just do not see enough of this. Many engineers (or engineering managers) are afraid of building something that can’t be supported in the future, which is a legitimate concern. However, I have seen at least two different reputable organizations create custom tools that pretty much amounted to wrappers around other tools. Why? Developer experience. These custom tools shield developers from the large number of tools that end up going into the delivery tooling layer of your organization’s platform. Why should all of your core application developers have to learn something like Helm? And what if you want to dump Helm in the future for something else? Will everyone be retrained? With wrappers, developers interface only with your custom tools. Your operations group (or DevOps group) then has the agility to provide important delivery pipeline functionality to developers through its own layer of custom tooling. These custom tools can, in turn, be delivered like a mature software product themselves, with versioned releases and quality assurance.
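As a sketch of the idea (not anyone's real product), such a wrapper can be as small as a script that translates a developer-friendly command into whatever the operations group has standardized on underneath, Helm in this hypothetical case; the command name, chart layout, and conventions here are all assumptions:

```python
#!/usr/bin/env python3
# deploy - a hypothetical wrapper that hides Helm from application developers.
# Developers run "deploy <service> <environment>"; the wrapper translates that
# into whatever the operations group has standardized on underneath.
import subprocess
import sys


def deploy(service: str, environment: str) -> int:
    # Chart location, namespace naming, and per-environment values files are
    # conventions owned by the ops group; they are assumptions in this sketch.
    chart = f"charts/{service}"
    values_file = f"environments/{environment}/{service}.yaml"
    command = [
        "helm", "upgrade", "--install", service, chart,
        "--namespace", environment,
        "--values", values_file,
    ]
    return subprocess.call(command)


if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: deploy <service> <environment>")
    sys.exit(deploy(sys.argv[1], sys.argv[2]))
```

If Helm is later swapped for something else, only the wrapper's internals change; the command developers run, and the skills they need, stay the same.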

The other argument for custom tooling when it comes to cloud native is that the cloud native community, in general, feels that a good service mesh (which is supposed to manage discovery of dependencies, among other things) has not arrived yet, at least not one that the average organization is willing to adopt. In the absence of a good service mesh, custom tools can go a long way towards gluing your microservices together, whether they are coming together in a production environment to deliver your core product or finding each other to perform some type of testing in a pre-production environment.

Scalability

The principles that underpin scalability have really already been covered, but let’s drive the point home, shall we? Let me just clarify that this isn’t scalability as in load testing and autoscaling. It’s about scaling your microservices project to more microservices and more developers. It’s quite clear that without uniformity, a robust process, and good tooling to support it, a project that uses the microservices architecture just won’t scale well. It is hardly amusing to watch that play out, which is why many who are standing by won’t stick around for very long. Developers will suffer the most: they will be happy to develop their new feature in their own isolated development environment, but when it comes time to deploy into the whole of the microservices project, they will be gritting their teeth.

Conclusion

It bears repeating: microservices are an all-or-nothing endeavor. It must be a deliberate, collaborative effort with governance and enforcement. Even though it was tons of fun, the agile days of five to ten years ago, when ragtag teams would do everything from development to operations, are coming to a close, especially with regard to microservices. There is complexity that must be managed. If you are reading this article and have not yet made the choice between a silo and microservices, choose carefully. It means your life.

Just kidding. :D I think we’re all glad that it’s not that grave. But joking aside, it could mean the difference between success and failure. Take a look around your organization. Do you think the leadership will lead with strong opinions and is ready to govern? Are you the leadership? Can you answer these questions affirmatively?

Microservices or not, I wish you success! I would love to hear your comments about this subject.