Reactive Microservices — 4

Kapil Raina
7 min read · Jun 16, 2021

This is an 8-part series on reactive microservices. [1, 2, 3, 4, 5, 6, 7, 8]

Parts 2 and 3 of this series listed reactive microservices patterns. This part continues the list with more patterns.

Isolation

Isolation, or de-coupling, is the cornerstone trait underlying many of the benefits of microservices. The reactive microservices architecture takes the fundamental microservices trait of “small, independent services operating together” to the next level via asynchronous boundaries.

“Isolate All Things !!”

  • Isolation of State: All access to a microservice’s state should go via its API. The state can be shared as a read-only copy using events for optimization, but the management of state is the prerogative of that microservice only. This allows the microservice to evolve internally.
  • Isolation in Space: microservices can be distributed across network locations without impacting how they are invoked. To be elastic, the service must be deployable anywhere. Ref Location Transparency
  • Isolation in Time: Microservices should be non-blocking and should not wait for another microservice in an attempt to create consistency of time between microservices. Distributed microservices have an isolated notion of time. Ref Eventual Consistency, Asynchronous Integration — Event Driven
  • Isolation of Failure: Use patterns like bulkheads and circuit breakers to minimize the failure blast radius by preventing cascading failures. Isolated services can also isolate failures (see the bulkhead sketch after this list). Enables resilience.
  • Isolation of teams: small cross functional teams. Ref Conway’s and Reverse Conway’s law. Enables change velocity and continuous delivery.
  • Isolation of delivery: Continuous Delivery by rolling out changes incrementally for each service in isolation.
  • Isolated Scaling: Isolation makes it easier to scale each service independently and enable those to be monitored and tested separately.
  • Isolated Testing: Components can be tested and verified independently.
  • Isolated Hardware: Using virtualization and container technologies, the isolation goes right down to hardware.
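To make failure isolation concrete, here is a minimal bulkhead sketch in plain Java: one downstream dependency gets its own bounded thread pool so that its slowness or failure cannot exhaust the caller’s shared resources. The callInventoryService method, the pool sizes, and the timeout are all hypothetical illustrations, not a prescription.

```java
import java.util.concurrent.*;

// Minimal bulkhead sketch: each downstream dependency gets its own bounded
// thread pool, so a slow or failing dependency can only exhaust its own pool.
public class BulkheadExample {

    // Hypothetical downstream call; a real service would use an HTTP/gRPC client.
    static String callInventoryService() throws InterruptedException {
        Thread.sleep(100); // simulate latency
        return "inventory-ok";
    }

    public static void main(String[] args) {
        // Bounded pool dedicated to the inventory dependency (the "bulkhead").
        ExecutorService inventoryPool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(20),          // bounded queue: shed load instead of piling up
                new ThreadPoolExecutor.AbortPolicy()); // reject when saturated, i.e. fail fast

        CompletableFuture<String> result = CompletableFuture
                .supplyAsync(() -> {
                    try {
                        return callInventoryService();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new CompletionException(e);
                    }
                }, inventoryPool)
                .orTimeout(500, TimeUnit.MILLISECONDS)      // isolate in time as well
                .exceptionally(ex -> "inventory-fallback"); // degrade instead of cascading

        System.out.println(result.join());
        inventoryPool.shutdown();
    }
}
```

The design choice is simply that the blast radius of the inventory dependency is limited to its own pool and its own timeout; other dependencies and the request-accepting threads stay unaffected.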

Scalability

Scalability is, in essence, resilience to overload: reactive microservices remain responsive under increased demand.

Scalability measures the ability of a service to maintain responsiveness by adding computing instances across which the load is distributed. It makes no assertion about the performance of an individual instance, which may stay the same. Theoretically there is no limit to how far a service can be scaled, so the focus of reactive microservices is to reduce the bottlenecks to scaling and leverage scalability as fully as possible. [*]

There are many scaling techniques and tools that almost all cloud providers have in their toolkit, and any cloud-hosted microservice can leverage auto-scaling of the “elastic” infrastructure. Elasticity is a core reactive microservice tenet and an outcome of scalability. However, scaling distributed services (on either axis [*]) hits a cap, as described by the universal scalability law.

[*] Scale-Up: Parallelism in multi-core CPU systems. Scale-Out: Make use of multiple server nodes.

Scalability comes with two concerns:

Contention

  • Multiple service instances competing for the same limited resource. Contention limits parallelization.
  • Contention forces services to wait for a resource, and this wait grows as the system is scaled further (more instances means more competition).
  • Any blocking processing, such as synchronous IO calls, leads to inefficient thread utilization: threads are not freed up in time to accept more requests. Synchronous blocking operations therefore introduce contention on the thread pool.
  • Contention can also be seen in the form of DB locks and shared mutable state.
  • As per Amdahl’s Law, scaling a system with contention stops providing performance benefits after a certain point (diminishing returns); see the worked example after this list. The law underlines that the parts of a system which cannot be parallelized limit the scalability of the overall system.
  • While contention can never be reduced to zero, reactive system architecture aims to minimize it.
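To make the diminishing-returns point concrete, here is a small sketch that evaluates Amdahl’s Law, speedup(N) = 1 / ((1 − p) + p/N), where p is the parallelizable fraction of the work and N is the number of instances or cores. The value p = 0.95 is illustrative only.

```java
// Amdahl's Law: speedup(N) = 1 / ((1 - p) + p / N), where p is the
// parallelizable fraction of the work and N is the number of instances/cores.
public class AmdahlExample {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.95; // 5% of the work is serial (contended) -- illustrative
        for (int n : new int[]{1, 2, 4, 8, 16, 64, 256}) {
            System.out.printf("N=%-3d speedup=%.2f%n", n, speedup(p, n));
        }
        // Even with unlimited instances the speedup is capped at 1 / (1 - p) = 20x,
        // which is why reducing contention (the serial fraction) matters more than
        // simply adding nodes.
    }
}
```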

Coherence

  • Synchronizing the state of multiple nodes using protocols such as heartbeat, gossip, or crosstalk.
  • The more nodes that need to be synced, the more time it takes to bring the overall system to the same state. The communication needed among nodes grows rapidly with each new node added (pairwise crosstalk grows roughly quadratically). The time taken for this synchronization is called coherency delay, and it increases as nodes are added.

As per Gunther’s Universal Scalability Law, scaling a system with coherence needs eventually leads to negative returns on performance, because the gains of scaling are negated by the time spent on coherence (coherency delay): the cost to coordinate exceeds the scalability benefits. A small worked example follows.
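The sketch below evaluates the Universal Scalability Law, C(N) = N / (1 + α(N − 1) + βN(N − 1)), where α models contention and β models coherence cost. The coefficients are illustrative, chosen only to show the curve bending downward.

```java
// Universal Scalability Law: C(N) = N / (1 + a*(N-1) + b*N*(N-1)),
// where a models contention and b models coherency (crosstalk) cost.
public class UslExample {
    static double capacity(double a, double b, int n) {
        return n / (1 + a * (n - 1) + b * n * (n - 1.0));
    }

    public static void main(String[] args) {
        double a = 0.03, b = 0.0005; // illustrative coefficients
        for (int n : new int[]{1, 4, 16, 32, 64, 128}) {
            System.out.printf("N=%-3d relative capacity=%.1f%n", n, capacity(a, b, n));
        }
        // With b > 0 the curve eventually bends downward: past the peak, adding
        // nodes *reduces* throughput because coherence cost grows with N^2.
    }
}
```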

[Figure: Amdahl’s Law vs. Gunther’s Law. Source: https://blogs.msdn.microsoft.com/ddperf/2009/04/29/parallel-scalability-isnt-childs-play-part-2-amdahls-law-vs-gunthers-law/]

Reactive Microservices do not hide these scalability limitations but try to reduce the effects of coherence and contention using:

  • Autonomy: Use autonomous services that do not depend on other services. Autonomous services can be scaled without contention with other services. Ref Autonomy
  • Message-Driven across microservices: Asynchronous messages between parts of the system do not couple them in time, i.e. services can scale up and down by their own rules. This also enables non-blocking processing within services. Ref Asynchronous Integration — Event Driven
  • Non-Blocking within a Microservice: Asynchronous, non-blocking, concurrent processing within a microservice component enables optimum utilization of CPU cores, so scaling by a certain factor gives potentially equal efficiency (see the sketch after this list). Ref Asynchronous Execution — Non-Blocking Parallelism
  • Location Transparency: Ref Location Transparency
  • Eventual Consistency: Ref Eventual Consistency
  • Long Running Transactions: Ref Transaction Co-ordination — Sagas
  • Isolated Locks: Reduce the scope of locks and the area of contention for cases that need strong consistency; ideally, locks should be avoided. (Isolated Contention)
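As an illustration of the non-blocking point above, the sketch below composes two independent lookups without tying up the caller’s thread, using only java.util.concurrent. The fetchProfile and fetchOrders calls are hypothetical stand-ins for async HTTP/gRPC clients.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Non-blocking composition: two independent lookups run concurrently and the
// combining step is scheduled as a callback instead of a blocked thread.
public class NonBlockingComposition {
    static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Hypothetical lookups; real code would use an async HTTP/gRPC client.
    static CompletableFuture<String> fetchProfile(String userId) {
        return CompletableFuture.supplyAsync(() -> "profile-of-" + userId, POOL);
    }

    static CompletableFuture<String> fetchOrders(String userId) {
        return CompletableFuture.supplyAsync(() -> "orders-of-" + userId, POOL);
    }

    public static void main(String[] args) {
        CompletableFuture<String> page = fetchProfile("42")
                .thenCombine(fetchOrders("42"), (profile, orders) -> profile + " + " + orders);

        // join() is used here only to keep the demo alive; a reactive endpoint
        // would return the future (or a Publisher) instead of blocking.
        System.out.println(page.join());
        POOL.shutdown();
    }
}
```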

Autonomy

The old saying in design patterns is “loose coupling, high cohesion”. Cohesive services are “self-contained” and can thus take decisions independently, without fear of fragility that may impact other services, i.e. they are autonomous. The traits of isolation and autonomy are complementary: autonomous services should be isolated to enable “loose coupling, high cohesion”.

Enabling autonomy lets services act independently, i.e. the behavior of the service is deterministic. The service API contract and protocol is the published behavior of the service, which other parts of the application can rely upon, i.e. there is a guarantee of behavior that autonomous services bring, albeit each service guarantees its own behavior ONLY.

Reactive microservices modelled for autonomy minimize or eliminate dependence on other services and encapsulate all the responsibilities of the service into its own design. All the information needed to resolve a conflict, manage a failure, or complete a transaction is within the service itself, so the need for co-ordination and communication is removed.

For example, a DDD-based aggregate encapsulates the state and behavior of its sub-domain or bounded context. Within a single bounded context, state and behavior are consistent, which is why a single bounded context is an excellent starting candidate for a microservice. An aggregate can take scaling, consistency, availability, and failure decisions independently and publish them via its contract; a minimal sketch follows.
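The sketch below shows the aggregate idea in plain Java; the OrderAggregate name, its invariants, and the OrderConfirmed event are hypothetical illustrations, not the only way to model it.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal aggregate sketch: the aggregate guards its own invariants and emits
// domain events; nothing outside the bounded context mutates its state directly.
public class OrderAggregate {

    public record LineItem(String sku, int quantity) {}
    public record OrderConfirmed(String orderId) {}   // published via the service's contract

    private final String orderId;
    private final List<LineItem> items = new ArrayList<>();
    private boolean confirmed;

    public OrderAggregate(String orderId) { this.orderId = orderId; }

    public void addItem(LineItem item) {
        if (confirmed) throw new IllegalStateException("order already confirmed");
        items.add(item);
    }

    // The decision is taken entirely inside the aggregate: no call-out needed.
    public OrderConfirmed confirm() {
        if (items.isEmpty()) throw new IllegalStateException("cannot confirm an empty order");
        confirmed = true;
        return new OrderConfirmed(orderId);
    }
}
```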

State Ownership

Isolation and autonomy lead to the question of state management. A microservice is focused on a single responsibility and can be scaled independently. That microservice is often stateful and encapsulates state as well as behavior, so isolation and autonomy carry through to the state data as well: each microservice should own its state exclusively; a microservice owns its data. In DDD terms, each bounded context owns its state and may expose interfaces to allow requests for the data, or may publish the state on some important event, but it never shares mutable state with any other bounded context.

It is typical for services to “join” data between two different services to compose a response. Joins used to be easy in a shared database but are not an option with microservices. New patterns are needed to share data between services, both at rest and in motion. It is important to underline that event-based techniques follow eventual consistency of state.

  • Event-Carried State Transfer

Event-driven microservices provide a natural way to share state for other services to maintain a local read-only copy. Changes to the state are published as events so that this local read-only copy is kept in sync (eventually). A minimal sketch follows.
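The sketch assumes a hypothetical CustomerChanged event delivered by some broker (Kafka, AMQP, etc.): the consuming service applies events to a local read-only copy and never writes back to the owning service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Event-carried state transfer sketch: the consuming service maintains a local,
// read-only replica of another service's state by applying its change events.
public class CustomerReplica {

    // Hypothetical event shape; the real payload and broker are implementation details.
    public record CustomerChanged(String customerId, String name, String tier) {}

    private final Map<String, CustomerChanged> localCopy = new ConcurrentHashMap<>();

    // Called by the message listener (Kafka consumer, AMQP handler, ...).
    public void onEvent(CustomerChanged event) {
        localCopy.put(event.customerId(), event); // last-writer-wins, eventually consistent
    }

    // Reads are served locally: no synchronous call to the owning service.
    public CustomerChanged find(String customerId) {
        return localCopy.get(customerId);
    }
}
```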

  • CQRS

This pattern models the command and read sides of the application separately, each optimized for its purpose. The command services publish the result of command fulfilment, and the interested query services transform it into a query model, optimized for the view and search needs of the data. CQRS involves projecting the state into different view models that are optimized for querying; a minimal projection sketch follows. Depending on the implementation, the query side could be part of the aggregate or a separate microservice that can be scaled and deployed independently.
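In the sketch below the OrderPlaced event and the per-customer-total view are hypothetical; the point is only that the read side folds command-side events into a shape built for queries.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// CQRS read-side sketch: events from the command side are projected into a
// denormalized view optimized for querying, kept eventually consistent.
public class OrderSummaryProjection {

    // Hypothetical event published by the command side.
    public record OrderPlaced(String orderId, String customerId, double amount) {}

    // Query model: totals per customer, precomputed for fast reads.
    private final Map<String, Double> totalPerCustomer = new ConcurrentHashMap<>();

    public void on(OrderPlaced event) {
        totalPerCustomer.merge(event.customerId(), event.amount(), Double::sum);
    }

    public double totalFor(String customerId) {
        return totalPerCustomer.getOrDefault(customerId, 0.0);
    }
}
```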

  • Sync Lookup to Service

For simple cases, the service that provides the interface to the SoR (system of record) data can be called directly via a synchronous interface. This impacts the autonomy of the calling service and should be used for simple enough use cases only.

  • Join: Materialized View

Build a materialized view service that listens to two SoR services emitting state-change events. Optimal for high-cardinality (M:N) relationships; a sketch follows.
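The sketch assumes two hypothetical event streams (CustomerChanged and OrderPlaced): the view service keeps local copies of both and serves the joined view from them, with no remote call on the read path.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Materialized-view join sketch: two systems of record emit change events and a
// separate view service keeps local copies, serving the pre-joined result.
public class OrderCustomerView {

    public record CustomerChanged(String customerId, String name) {}
    public record OrderPlaced(String orderId, String customerId, double amount) {}
    public record Row(String orderId, String customerName, double amount) {}

    private final Map<String, String> customerNames = new ConcurrentHashMap<>();
    private final Map<String, OrderPlaced> orders = new ConcurrentHashMap<>();

    public void on(CustomerChanged e) { customerNames.put(e.customerId(), e.name()); }
    public void on(OrderPlaced e)     { orders.put(e.orderId(), e); }

    // The join is answered from locally held copies; no remote call is made.
    public List<Row> rowsForCustomer(String customerId) {
        String name = customerNames.getOrDefault(customerId, "unknown");
        return orders.values().stream()
                .filter(o -> o.customerId().equals(customerId))
                .map(o -> new Row(o.orderId(), name, o.amount()))
                .toList();
    }
}
```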

  • Join: Runtime View

Join in the client service/application at runtime by making synchronous queries to the individual services. Optimal for 1:N cardinality. This impacts the autonomy of the services and should be used for simple enough use cases only; a sketch follows.
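The sketch uses hypothetical in-memory stubs in place of real synchronous clients, purely to show where the coupling lives: the composing service’s latency and availability now depend on both downstream services.

```java
// Runtime-join sketch: the composing service calls both owning services
// synchronously at request time and merges the results. Simple, but it couples
// the caller's availability and latency to both downstream services.
public class RuntimeJoin {

    // Hypothetical blocking clients; real code would be HTTP/gRPC stubs.
    static String customerById(String id)     { return "customer-" + id; }
    static String ordersByCustomer(String id) { return "orders-of-" + id; }

    public static void main(String[] args) {
        // 1:N join composed in the caller: one customer, that customer's orders.
        String customer = customerById("42");
        String orders   = ordersByCustomer("42");
        System.out.println(customer + " | " + orders);
    }
}
```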

State ownership gives services the freedom to represent and maintain state in any form that fits the service. The state could be stored in an RDBMS, in a NoSQL store, or as a series of events in an event log using the Event Sourcing technique. This freedom of storage choice is the so-called polyglot persistence.

Location Transparency

Isolation and decoupling in space mean that services can be scaled up or out, i.e. across cores on the same server CPU or across multiple nodes on a network. Location transparency means:

  • The service address should be stable regardless of where the service is located.
  • Service should be moved as a single unit with same address.
  • The client should be able to send messages to the service whether the service is running or stopped or crashed.
  • The address may be a virtual address (like a VIP or a Service object in Kubernetes) and may hide the underlying runtimes of the workloads that provide the computation for the service. For a stateless service this has the obvious advantage of load balancing and failover to a passive instance; for a stateful service, the virtual address is used for sticky routing (see the sketch after this list).
  • Enable Locality of Reference for stateful APIs.
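A small sketch of what a stable address means to a client, assuming a hypothetical Kubernetes Service DNS name such as orders.default.svc.cluster.local: the caller codes only against the stable name, never against individual pod addresses.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Location-transparency sketch: the client depends only on a stable virtual
// address (here a hypothetical Kubernetes Service DNS name); the instances
// behind it can move, scale, or restart without the client changing.
public class LocationTransparentClient {

    // Stable service address; the concrete instances behind it are invisible here.
    private static final URI ORDERS =
            URI.create("http://orders.default.svc.cluster.local/orders/42");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(ORDERS).GET().build();
        // This call only resolves inside a cluster that serves this name; the point
        // is that the caller never references an instance IP directly.
        System.out.println(client.send(request, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```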

This is an 8-part series on reactive microservices. [1, 2, 3, 4, 5, 6, 7, 8]
