What Are Microservices and How Do They Work?

Microservices architecture is an approach in which a single application is composed of many loosely coupled and independently deployable smaller services.

What are microservices?

Microservices (or microservices architecture) is a cloud-native architectural approach in which a single application is composed of many loosely coupled and independently deployable smaller components or services. These services typically

  • have their own technology stack, inclusive of the database and data management model;
  • communicate with one another over a combination of REST APIs, event streaming, and message brokers; and
  • are organized by business capability, with the line separating services often referred to as a bounded context.
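To make the "bounded context" idea concrete, here is a minimal, hypothetical sketch in Python. Two services each own their own data store and interact only through a narrow interface (in a real system this would be a REST call or message rather than a method call; all names are illustrative):

```python
# Hypothetical sketch: two services, each owning its own data store
# (a bounded context), communicating only through a narrow interface.

class InventoryService:
    """Owns inventory data; no other service touches this store directly."""
    def __init__(self):
        self._stock = {"widget": 5}  # private data store

    def reserve(self, sku: str, qty: int) -> dict:
        # The "API": other services call this, never the store itself.
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return {"ok": True}
        return {"ok": False, "error": "out of stock"}


class OrderService:
    """Owns orders; depends on InventoryService only via its API."""
    def __init__(self, inventory: InventoryService):
        self._inventory = inventory
        self._orders = []  # separate data store

    def place_order(self, sku: str, qty: int) -> dict:
        reply = self._inventory.reserve(sku, qty)
        if reply["ok"]:
            self._orders.append({"sku": sku, "qty": qty})
            return {"status": "placed"}
        return {"status": "rejected", "reason": reply["error"]}


inventory = InventoryService()
orders = OrderService(inventory)
print(orders.place_order("widget", 3))  # {'status': 'placed'}
print(orders.place_order("widget", 9))  # {'status': 'rejected', 'reason': 'out of stock'}
```

Because each service hides its store behind an interface, either one could swap its database or language without the other noticing.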

While much of the discussion about microservices has revolved around architectural definitions and characteristics, their value may be more easily understood through fairly simple business and organizational benefits:

  • Code can be updated more easily - new features or functionality can be added without touching the entire application.
  • Teams can use different stacks and different programming languages for different components.
  • Components can be scaled independently of one another, reducing the waste and cost of scaling an entire application because a single feature is facing too much load.

Microservices might also be understood by what they are not. The two comparisons most frequently drawn with microservices architecture are monolithic architecture and service-oriented architecture (SOA).

The difference between microservices and monolithic architecture is that microservices compose a single application from many smaller, loosely coupled services, as opposed to the monolithic approach of a large, tightly coupled application.

The differences between microservices and SOA can be a bit less clear. While technical contrasts can be drawn between microservices and SOA, especially around the role of the enterprise service bus (ESB), it’s easier to consider the difference as one of scope. SOA was an enterprise-wide effort to standardize the way all web services in an organization talk to and integrate with each other, whereas microservices architecture is application-specific.

The post "SOA vs. Microservices: What's the Difference?" goes into further detail.

How microservices benefit the organization

Microservices are likely to be at least as popular with executives and project leaders as with developers. This is one of the more unusual characteristics of microservices because architectural enthusiasm is typically reserved for software development teams. The reason for this is that microservices better reflect the way many business leaders want to structure and run their teams and development processes. 

Put another way, microservices are an architectural model that better facilitates a desired operational mode. In a recent IBM survey of over 1,200 developers and IT executives, 87% of microservices users agreed that microservices adoption is worth the expense and effort.

Independently deployable

Perhaps the single most important characteristic of microservices is that because the services are smaller and independently deployable, it no longer requires an act of Congress to change a line of code or add a new feature in the application.

Microservices promise organizations an antidote to the visceral frustrations associated with small changes taking huge amounts of time. It doesn’t require a Ph.D. in computer science to see or understand the value of the approach that better facilitates speed and agility.

But speed isn’t the only value of designing services this way. A common emerging organizational model is to bring together cross-functional teams around a business problem, service, or product. The microservices model fits neatly with this trend because it enables an organization to create small, cross-functional teams around one service or a collection of services and have them operate in an agile fashion.

Microservices' loose coupling also builds a degree of fault isolation and better resilience into applications. And the small size of the services, combined with their clear boundaries and communication patterns, makes it easier for new team members to understand the code base and contribute to it quickly—a clear benefit in terms of both speed and employee morale.



The right tool for the job

In traditional n-tier architecture patterns, an application typically shares a common stack, with a large, relational database supporting the entire application. This approach has several obvious drawbacks—the most significant of which is that every component of the application must share a common stack, data model, and database even if there is a clear, better tool for the job for certain elements. It makes for bad architecture, and it’s frustrating for developers who are constantly aware that a better, more efficient way to build these components is available.

By contrast, in a microservices model, components are deployed independently and communicate over some combination of REST, event streaming, and message brokers—so the stack of every individual service can be optimized for that service. Technology changes all the time, and an application composed of multiple, smaller services is much easier and less expensive to evolve with more desirable technology as it becomes available.

Precise scaling

With microservices, individual services can be individually deployed—but they can be individually scaled, as well. The resulting benefit is obvious: Done correctly, microservices require less infrastructure than monolithic applications because they enable precise scaling of only the components that require it, rather than the entire application.

There are challenges, too

Microservices' significant benefits come with significant challenges. Moving from monolith to microservices means a lot more management complexity - a lot more services, created by a lot more teams, deployed in a lot more places. Problems in one service can cause, or be caused by, problems in other services. Logging data (used for monitoring and problem resolution) is more voluminous and can be inconsistent across services. New versions can cause backward compatibility issues. Applications involve more network connections, which means more opportunities for latency and connectivity issues. A DevOps approach (as you'll read below) can address many of these issues, but DevOps adoption has challenges of its own.

Nevertheless, these challenges aren't stopping non-adopters from adopting microservices - or adopters from deepening their microservices commitments. New IBM survey data reveals that 56% of current non-users are likely or very likely to adopt microservices within the next two years, and 78% of current microservices users will likely increase the time, money, and effort they've invested in microservices (see Figure 1).

Figure 1: Microservices are here to stay. Within the next two years, 56% of non-users are likely to adopt microservices, 78% of users will increase their investment in microservices, and 59% of applications will be created with microservices. (Source: 'Microservices in the enterprise 2021: Real benefits, worth the challenges.')

Microservices both enable, and require, DevOps

Microservices architecture is often described as optimized for DevOps and continuous integration/continuous delivery (CI/CD), and in the context of small services that can be deployed frequently, it’s easy to understand why.

But another way of looking at the relationship between microservices and DevOps is that microservices architectures actually require DevOps to be successful. While monolithic applications have a range of drawbacks that have been discussed earlier in this article, they have the benefit of not being a complex distributed system with multiple moving parts and independent tech stacks. In contrast, given the massive increase in complexity, moving parts, and dependencies that come with microservices, it would be unwise to approach microservices without significant investments in deployment, monitoring, and lifecycle automation.

Key enabling technologies and tools

While just about any modern tool or language can be used in a microservices architecture, there are a handful of core tools that have become essential and borderline definitional to microservices:

Containers, Docker, and Kubernetes

One of the key elements of a microservice is that it’s generally pretty small. (There is no arbitrary amount of code that determines whether something is or isn’t a microservice, but “micro” is right there in the name.)

When Docker ushered in the modern container era in 2013, it also introduced the compute model that would become most closely associated with microservices. Because individual containers don’t have the overhead of their own operating system, they are smaller and lighter weight than traditional virtual machines and can spin up and down more quickly, making them a perfect match for the smaller and lighter weight services found within microservices architectures.

With the proliferation of services and containers, orchestrating and managing large groups of containers quickly became one of the critical challenges. Kubernetes, an open-source container orchestration platform, has emerged as one of the most popular orchestration solutions because it does that job so well.
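For a concrete flavor of what orchestration adds, here is a hypothetical Kubernetes HorizontalPodAutoscaler manifest (the `checkout` Deployment name and thresholds are illustrative) that scales one service independently of the rest of the application—the "precise scaling" discussed earlier:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout            # only this service scales; others are untouched
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Under load, Kubernetes adds replicas of just this one container, while the rest of the application keeps its existing footprint.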

API gateways

Microservices often communicate via API, especially when first establishing state. While it’s true that clients and services can communicate with one another directly, API gateways are often a useful intermediary layer, especially as the number of services in an application grows over time. An API gateway acts as a reverse proxy for clients, routing requests, fanning requests out across multiple services, and providing additional security and authentication.

Multiple technologies can be used to implement API gateways, including API management platforms, but if the microservices architecture is being implemented using containers and Kubernetes, the gateway is typically implemented using Ingress or, more recently, Istio.
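As a sketch of the Kubernetes-native approach, here is a minimal, hypothetical Ingress resource that routes two path prefixes to two backing services (all names and paths are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-gateway    # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-svc       # hypothetical service
            port:
              number: 80
      - path: /inventory
        pathType: Prefix
        backend:
          service:
            name: inventory-svc    # hypothetical service
            port:
              number: 80
```

Clients see a single entry point; the routing to individual services stays an internal concern that can change without breaking callers.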

Messaging and event streaming

While best practice might be to design stateless services, state nonetheless exists, and services need to be aware of it. And while an API call is often an effective way of initially establishing state for a given service, it isn’t a particularly effective way of staying up to date. A constant-polling, “Are we there yet?” approach to keeping services current simply isn’t practical.

Instead, it is necessary to couple state-establishing API calls with messaging or event streaming so that services can broadcast changes in state and other interested parties can listen for those changes and adjust accordingly. This job is likely best suited to a general-purpose message broker, but there are cases where an event streaming platform, such as Apache Kafka, might be a good fit. By combining microservices with event-driven architecture, developers can build distributed, highly scalable, fault-tolerant, and extensible systems that can consume and process very large amounts of events or information in real time.
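The broadcast/listen pattern can be sketched with a tiny in-process event bus in Python (real systems use a networked broker or Kafka topic; every name here is illustrative):

```python
from collections import defaultdict
from typing import Callable

# Hypothetical in-process sketch of the broadcast/listen pattern that a
# message broker or event-streaming topic provides across a network.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        # Broadcast: every interested service sees the state change.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
shipping_view = {}   # shipping service's local copy of order state
billing_log = []     # billing service reacts to the same event independently

bus.subscribe("order.updated", lambda e: shipping_view.update({e["id"]: e["status"]}))
bus.subscribe("order.updated", lambda e: billing_log.append(e["id"]))

# The order service publishes the change once; no one has to poll for it.
bus.publish("order.updated", {"id": "A1", "status": "shipped"})
print(shipping_view)  # {'A1': 'shipped'}
print(billing_log)    # ['A1']
```

The publisher never knows who is listening, which is exactly what keeps the services loosely coupled.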

Serverless

Serverless architectures take some of the core cloud and microservices patterns to their logical conclusion. In the case of serverless, the unit of execution is not just a small service, but a function, which can often be just a few lines of code. The line separating a serverless function from a microservice is a blurry one, but functions are commonly understood to be even smaller than a microservice.

Where serverless architectures and Functions-as-a-Service (FaaS) platforms share an affinity with microservices is that they are both interested in creating smaller units of deployment and scaling precisely with demand.
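To illustrate how small that unit of deployment can be, here is a hypothetical FaaS-style handler in Python: the entire deployable artifact is one function, and the platform supplies the event (the event shape and field names are illustrative, not any particular provider's API):

```python
# Hypothetical FaaS-style handler: the whole deployable unit is one
# function; the platform invokes it once per event and scales per invocation.

def handle(event: dict) -> dict:
    """Thumbnail-request handler: a few lines of code, nothing more."""
    width = int(event.get("width", 100))
    height = int(event.get("height", 100))
    return {"statusCode": 200, "body": {"thumbnail": f"{width}x{height}"}}


print(handle({"width": 64, "height": 64}))  # {'statusCode': 200, 'body': {'thumbnail': '64x64'}}
```

There is no server process to manage here; the function exists only for the duration of each invocation.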

Microservices and cloud services

Microservices are not necessarily exclusively relevant to cloud computing, but there are a few important reasons why they so frequently go together—reasons that go beyond microservices being a popular architectural style for new applications and the cloud being a popular hosting destination for new applications.

Among the primary benefits of a microservices architecture are the utilization and cost benefits associated with deploying and scaling components individually. While these benefits would still be present to some extent with on-premises infrastructure, the combination of small, independently scalable components coupled with on-demand, pay-per-use infrastructure is where real cost optimizations can be found.

Secondly, and perhaps more importantly, another primary benefit of microservices is that each individual component can adopt the stack best suited to its specific job. Stack proliferation can lead to serious complexity and overhead when you manage it yourself, but consuming the supporting stack as cloud services can dramatically minimize management challenges. Put another way, while it’s not impossible to roll your own microservices infrastructure, it’s not advisable, especially when just starting out.
