Part 2: How to Enable Microservices with Container Orchestration Tools

Migrating Applications to a Container-Managed Microservices Architecture

This is the second article in a four-part series that discusses the benefits of adopting a microservices architecture (MSA) for new applications. Here we focus on the use of containers for microservices and the orchestration tools to manage the containers.

Orchestration of services across platform domains

Service modeling is done independently of the container platform and of how the service is orchestrated.

Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple microservice containers.

Benefits of container orchestration in a microservices architecture

Container orchestration is all about managing the lifecycle of containers, especially in large, dynamic environments. The orchestration engine is a set of programs that create, run and manage containerized applications.

Software teams use container orchestration to control and automate many tasks, including the following (a minimal Kubernetes client sketch follows the list):

  • Provisioning and deploying containers
  • Managing the redundancy and availability of containers
  • Scaling containers up or down so that application load is spread evenly across the host infrastructure
  • Maintaining a consistent deployment environment across the cloud or on-premises
  • Allocating resources between containers
  • Managing how services running in a container are exposed to the outside world
  • Service discovery, container networking and load balancing
  • Container health monitoring and reporting
  • Managing the container lifecycle
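
The sketch below is a minimal illustration of a few of these tasks (scaling and basic health reporting) using the official Kubernetes Python client. It assumes a reachable cluster configured in your local kubeconfig; the deployment name "orders-api" and the namespace "shop" are hypothetical placeholders.

```python
# Minimal sketch: scale a workload and report basic container health with the
# official Kubernetes Python client. "orders-api" and "shop" are hypothetical.
from kubernetes import client, config

config.load_kube_config()   # reads cluster credentials from the local kubeconfig

apps = client.AppsV1Api()
core = client.CoreV1Api()

# Scale the hypothetical "orders-api" deployment to three replicas.
apps.patch_namespaced_deployment_scale(
    name="orders-api",
    namespace="shop",
    body={"spec": {"replicas": 3}},
)

# Basic health reporting: list the pods in the namespace and print their phase.
for pod in core.list_namespaced_pod(namespace="shop").items:
    print(pod.metadata.name, pod.status.phase)
```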

Microservices architecture shortfalls

There are many challenges companies must be prepared to address as they transition to an MSA. Here are nine:

  1. Microservices are complex.
  2. Microservices are expensive because they require more hardware.
  3. Microservices need to communicate with each other, which results in a high volume of remote calls. This can increase network latency and processing costs beyond what you might expect to pay when using traditional architectures.
  4. Managing microservices is demanding, given the distributed transaction management model and the need to use multiple databases.
  5. Deploying microservices can be complicated.
  6. Microservices can pose security challenges.
  7. Spinning up the environments is costly and complex due to the increased number of resources required.
  8. Large numbers of microservices are challenging to secure.
  9. Message flow increases with the number of microservices, which hinders performance.

Alternatives to Kubernetes for container orchestration

Kubernetes may be the de facto leader in container orchestration, but it’s worth exploring alternatives before taking the plunge. Kubernetes has a steep learning curve, which means it could be costly. (See Figure 1.) Here are two Kubernetes competitors to consider:

  1. Docker Swarm. A hardy bit player, Docker includes a swarm mode for natively managing a cluster of Docker Engines. The Docker CLI can be used to create a swarm, deploy application services to it and manage swarm behavior (a brief Docker SDK sketch follows the feature lists below). The main features of Docker Swarm include:

    • Cluster management integrated with Docker Engine
    • Decentralized design
    • Declarative service model
    • Scaling
    • Desired-state reconciliation
    • Multi-host networking
    • Service discovery
    • Load balancing
    • Secure by default
    • Rolling updates
  2. Apache Mesos and Marathon. Apache Mesos is an open-source cluster manager that allows effective resource sharing between applications, and Marathon is a Mesos framework for container orchestration (a brief Marathon API sketch follows the feature list below). Complex but flexible, Mesos works in the opposite direction to virtualization: where virtualization divides one physical resource into multiple virtual resources, Mesos combines numerous physical resources into a single virtual resource. Mesos is suited to deploying and managing applications in large-scale clustered environments. It brings together the existing resources of the machines or nodes in the cluster into a single pool from which various workloads can draw. Also known as node abstraction, this approach removes the need to allocate specific machines to different workloads. Companies such as Airbnb, MediaCrossing, Xogito, Netflix and Categorize use Mesos to manage their significant data infrastructure. The main features of Mesos include:

    • Web UI to monitor the cluster state
    • Multi-resource scheduling
    • Fault-tolerant replicated master using ZooKeeper
    • Scalability to thousands of nodes
    • Isolation between tasks with Linux containers
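
To make the two options above concrete, here are two minimal sketches in Python. Both are illustrative only: the first assumes the Docker SDK for Python (the "docker" package) and a locally running Docker Engine; the second assumes the "requests" library and a Marathon endpoint at a hypothetical address, with an app definition in Marathon's /v2/apps JSON format. Image names, service names and URLs are placeholders.

```python
# Docker Swarm sketch: initialise swarm mode and deploy a replicated service.
import docker

docker_client = docker.from_env()

# Initialise swarm mode on this engine (it becomes a manager); run once per cluster.
docker_client.swarm.init()

# Deploy a replicated service; Swarm schedules and supervises three replicas.
web = docker_client.services.create(
    image="nginx:alpine",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(web.id)
```

```python
# Marathon sketch: submit a containerised app definition to a Mesos cluster
# through Marathon's REST API. The endpoint below is hypothetical.
import requests

MARATHON_URL = "http://marathon.example.com:8080"

app_definition = {
    "id": "/web",
    "cpus": 0.5,
    "mem": 128,
    "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:alpine", "network": "BRIDGE"},
    },
}

# Marathon matches the request against Mesos resource offers and keeps the
# requested number of instances running.
response = requests.post(f"{MARATHON_URL}/v2/apps", json=app_definition)
response.raise_for_status()
print(response.json())
```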

Figure 1. Strengths and Weaknesses of Three Open-Source Container Cloud Orchestration Solutions


Source: DZone and Kubernetes

Cloud-based container orchestration platforms

There are three leading cloud-based platforms to consider for container orchestration: Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP).

Here is a brief comparison of the three platforms (see Figure 2); a short sketch of connecting to any of these managed clusters follows the list.

  1. Amazon Web Services Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes Kubernetes easier to use. EKS integrates with open-source Kubernetes tooling and AWS tools, including Route 53, AWS Application Load Balancer and Auto Scaling. The team members who manage Amazon EKS are regular contributors to the Kubernetes project.
  2. Azure Kubernetes Service (AKS) is a management solution used to deploy and manage containerized applications. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. AKS helps teams build more extensible applications.
  3. Google Cloud Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications. Like Amazon EKS, GKE is an easy-to-use cloud-based Kubernetes service that provides an efficient and reliable environment for applications.
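
One practical consequence of all three being managed Kubernetes services is that the same client tooling works against each of them. The sketch below assumes the relevant cloud CLI has already written a kubeconfig entry (for example via "aws eks update-kubeconfig", "az aks get-credentials" or "gcloud container clusters get-credentials"); it simply lists the cluster's nodes with the standard Kubernetes Python client.

```python
# Minimal sketch: the same Kubernetes client code works against EKS, AKS or GKE
# once the cloud CLI has written credentials into the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # uses the kubeconfig entry created by the cloud CLI

for node in client.CoreV1Api().list_node().items:
    # Kubelet version and OS image give a quick view of the managed node pool.
    print(node.metadata.name,
          node.status.node_info.kubelet_version,
          node.status.node_info.os_image)
```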

Figure 2. Cloud-Based Orchestration Comparisons


Source: Sumo Logic

Summary

Capgemini Engineering’s top four best practices for optimizing container orchestration

  1. Create a clear plan from development to production. Capgemini Engineering suggests implementing a DevOps practice, which helps build in monitoring, gives all developers access to pre-production environments and runs automated tests in build environments.
  2. Institute robust monitoring and automated issue reporting. Capgemini Engineering suggests using APM/IT monitoring tools with good reporting capabilities, which allow you to automate the creation and distribution of reports (a minimal reporting sketch follows this list).
  3. Ensure data is backed up automatically to enable swift disaster recovery and business continuity. Backup processes copy data and store it on different media or in a separate storage system that allows easy access in a recovery situation. Capgemini Engineering provides the expertise to build the solution.
  4. Create a roadmap for long-term expansion of production capacity. Capgemini Engineering guides its customers in building strategies for their solutions.
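
As a small illustration of practice 2, the sketch below queries a cluster for pods that are not in a healthy phase and writes a short status report. It is a hedged example only: real deployments would feed this data into an APM/IT monitoring tool, and the report file name is a placeholder.

```python
# Minimal sketch: collect pods that are not Running/Succeeded and write a report.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

unhealthy = [
    (p.metadata.namespace, p.metadata.name, p.status.phase)
    for p in core.list_pod_for_all_namespaces().items
    if p.status.phase not in ("Running", "Succeeded")
]

# "cluster-report.txt" is a placeholder; a monitoring tool or mailer would
# normally receive this output instead.
with open("cluster-report.txt", "w") as report:
    report.write(f"Cluster status report {datetime.now(timezone.utc).isoformat()}\n")
    for namespace, name, phase in unhealthy:
        report.write(f"{namespace}/{name}: {phase}\n")
```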



Author

Valérie Sauterey

Amit Goel

Director – Technology

Amit Goel has over 22 years of experience in the architecture, design and development of cloud, big data, analytics, Java/J2EE and portal-based middleware, and in architecting solutions across application domains, including consumer service delivery platforms (content for mobile/PC/IPTV, rating and billing) and enterprise-oriented systems. He has extensive project coordination experience spanning design, development and delivery management.

Contact Us: Capgemini Engineering

MEET OUR EXPERTS

You can work with a company built for now,
or you can work with one engineering the virtual networking software of tomorrow.

GET IN TOUCH