
Microservices for Startups: An Interview with Mike Hu of LogDNA

Posted by Jake Lumetta on February 16, 2018

This interview was done for our Microservices for Startups ebook. Be sure to check it out for practical advice on microservices. Thanks to Mike for his time and input!

Mike Hu is the Head of Engineering at LogDNA, a cloud-based log management system that allows engineering and DevOps teams to aggregate all system and application logs into one efficient platform.

For context, how big is your engineering team? Are you using microservices and can you give a general overview of how you’re using them?

We are at 7 engineers. Our infrastructure is segmented by function, with multiple redundancies at every layer structured such that there is never a single point of failure.

Did you start with a monolith and later adopt microservices? If not, did you start directly with microservices? Why?

We started with a microservices model from the beginning, taking lessons from established companies and proven deployment strategies. The product we build requires a high level of availability, and the only way we knew to achieve that was to ensure from the ground up that there was no single point of failure, since any such point would translate directly to downtime.

How did you approach the topic of microservices as a team, and was there discussion on aligning around what a microservice is?

We had the opportunity to begin everything with a very small team of two, and there wasn't really any organizational need for us [to formally align on microservices]. Most of the discussions were impromptu, with a single goal in mind.

How much freedom is there on technology choices? Did you all agree to stick with one stack, or is there flexibility to try new things?

We've experimented with different technologies both in previous pivots as well as during the building of our current product. We've tried to stick to things that we were familiar with unless there was a special need. It's always better to deal with the devil you know than the devil you don't.

How do you determine service boundaries and what lessons have you learned around sizing services?

Most of our service boundaries are split by function; some are business-oriented (like a business process) and some are technical (like one service per code repository). As an example, we have workers fanned out to various servers, each performing a specific function, but all listening to a message queue for work. The intention was to minimize the impact of bad code. From an engineering organization's perspective, as you make changes to an existing code-base it is inevitable that you run into bugs, which arguably is not really in your control; but you can certainly control how much of your infrastructure is impacted if that does happen.
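To illustrate the shape of that worker pattern, here is a minimal Python sketch of single-purpose workers fanned out against a shared queue. The names and the in-process `queue.Queue` are illustrative only; a real deployment like the one described would use an external broker shared across servers.

```python
import queue
import threading

# Illustrative in-process queue; in production this would be an external
# message broker (e.g. Redis or RabbitMQ) shared across servers.
work_queue = queue.Queue()
results = []

def parse_worker(q, out):
    """A worker dedicated to a single function: parsing raw log lines."""
    while True:
        job = q.get()
        if job is None:          # sentinel telling this worker to stop
            q.task_done()
            break
        out.append(job.strip().split())
        q.task_done()

# Fan out several copies of one single-purpose worker; a bug in this
# worker type affects only this slice of the pipeline, not the rest.
workers = [threading.Thread(target=parse_worker, args=(work_queue, results))
           for _ in range(3)]
for w in workers:
    w.start()

work_queue.put("2018-02-16 INFO service started")
for _ in workers:          # one sentinel per worker to shut them all down
    work_queue.put(None)
for w in workers:
    w.join()
```

Because each worker type does exactly one job, a bad deploy of the parser leaves ingestion and storage workers untouched, which is the blast-radius control described above.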

How have microservices impacted your development process or your ops and deployment processes?

I think the biggest impact is in automating the deployment process. There are many ways to achieve automation; we happen to have written our own Slackbot to handle deployments. Deployments can happen very quickly across all of our services, so it's important that services can stop and start in a timely manner; otherwise, all copies of a service that perform the same function could become inaccessible for a short period of time. We tackled this in two ways: first, ensuring our code shuts down gracefully and can spin up within a set amount of time; second, having the deployment automation perform rolling restarts of our services rather than restarting everything all at once.
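The rolling-restart half of that approach can be sketched in a few lines of Python. This is a generic illustration, not LogDNA's Slackbot: the `restart` and `healthy` callables stand in for whatever the deployment tooling actually does.

```python
import time

def rolling_restart(instances, restart, healthy, timeout=5.0):
    """Restart service instances one at a time, so that all copies of the
    same service are never down simultaneously.

    restart(inst) triggers a graceful shutdown and spin-up of one instance;
    healthy(inst) reports whether it has come back.
    """
    for inst in instances:
        restart(inst)
        deadline = time.monotonic() + timeout
        while not healthy(inst):           # wait before touching the next one
            if time.monotonic() > deadline:
                raise RuntimeError(f"{inst} did not recover within {timeout}s")
            time.sleep(0.01)
```

The per-instance health wait is why the graceful-shutdown requirement matters: if an instance cannot stop and start within the window, the rollout stalls instead of silently taking the whole service down.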

How have microservices impacted the way you approach testing?

The biggest gain is being able to run two (or even more) different versions of the code within the same infrastructure, which lets us test new code changes in parallel with existing services. For example, we transitioned some pipelining services by first splitting the message queue across both a test cluster and our live cluster, which enabled us to perform data integrity checks before pushing out to the rest of the infrastructure.
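The queue-splitting idea can be shown with a small Python sketch: every message is teed to both the live pipeline and the candidate pipeline, and their outputs are diffed before cutover. The parser functions here are invented stand-ins, not the actual pipelining services.

```python
def tee(messages, consumers):
    """Fan each message out to every consumer, so the live pipeline and
    the candidate pipeline see identical input."""
    outputs = [[] for _ in consumers]
    for msg in messages:
        for out, consume in zip(outputs, consumers):
            out.append(consume(msg))
    return outputs

def live_parse(line):        # stand-in for current production behavior
    return line.split(",")

def candidate_parse(line):   # stand-in for the new version under test
    return [field.strip() for field in line.split(",")]

live_out, test_out = tee(["a, b", "c,d"], [live_parse, candidate_parse])

# The integrity check: any index where the two pipelines disagree is a
# regression (or an intentional change) to review before cutting over.
mismatches = [i for i, (a, b) in enumerate(zip(live_out, test_out)) if a != b]
```

Only once `mismatches` is empty (or every difference is understood) would traffic be moved fully onto the new version.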

How have microservices impacted security and controlling access to data?

This is probably the biggest headache, in all honesty. We have to build a LOT of firewall rules to limit access between clusters and services. Thankfully, there are deployment frameworks (like SaltStack) that help bake this into the management of the infrastructure. The biggest takeaway for us was to prefer tools that support some kind of auth mechanism over controlling inbound and outbound firewall rules.
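As one minimal sketch of such an auth mechanism, services can sign requests with a shared secret instead of relying solely on network rules. This HMAC example is illustrative only; the secret, names, and scheme are assumptions, not anything LogDNA has described using.

```python
import hashlib
import hmac

# Illustrative shared secret; in practice this would be loaded from a
# secrets store or per-service configuration, never hard-coded.
SHARED_SECRET = b"example-secret"

def sign(body: bytes) -> str:
    """Signature a calling service attaches to an inter-service request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Reject requests with a bad signature, regardless of source host.

    compare_digest is constant-time, so callers can't probe the
    signature byte by byte.
    """
    return hmac.compare_digest(sign(body), signature)
```

The appeal over firewall rules is that the check travels with the request: a service rejects unauthenticated callers no matter which cluster or IP they arrive from.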

Have you run into issues managing data consistency in a microservice architecture?

Not really. We run a centralized MongoDB, which has its own replication mechanisms to deal with data consistency.

Thanks again to Mike for his time and input!
