The Internet of Things (IoT) can be defined as a system of interrelated computing devices with unique identifiers and the ability to transfer data over a network without human-to-human or human-to-computer interaction. Small IoT devices – from sensors to beacons to wearables – have their own processing power and create a massive amount of data for processing and analytics. It is often inefficient to send all this data to the cloud for processing. Also, data transfer relies on network availability and can pose security, data protection, and privacy challenges.
IoT analytics is moving from the cloud to the edge because of security, latency, autonomy, and cost. However, distributing and managing loads to several hundred nodes at the edge can be a painful and tedious process. A key requirement to distribute and manage the loads on edge devices is to use a lightweight production-grade solution such as Kubernetes.
Kubernetes is widely used in cloud environments but can also be leveraged to manage workloads on IoT edge devices. Kubernetes addresses many of the complexity challenges that developers face while building and deploying IoT solutions. In recent years, the industry has shifted from traditional virtualization to container-based virtualization because of the numerous advantages of containers.
What is Kubernetes?
Kubernetes is a deployment and orchestration framework for containerized applications. It helps manage containerized applications in a clustered environment. Kubernetes allocates resources to containers and performs replication, scaling, failover, rolling updates, and other managerial tasks necessary to run applications reliably with efficient resource utilization. The figure below shows the basic architecture of Kubernetes.
Overview of Kubernetes
Source: Linux Academy
Containers provide an isolated context in which to host and run a microservice or an application. Containers must be managed for resource and load distribution, scalability, and high availability, and Kubernetes provides a layer over the infrastructure to address these challenges. It uses a master-node architecture and labels as name tags to identify its objects. Most distributed applications built with scalability in mind are composed of microservices, each hosted and run in a container.
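As a sketch of how labels tie Kubernetes objects together, the manifest below labels a pod and exposes it through a Service whose selector matches that label. All names and the image are hypothetical, chosen only for illustration:

```yaml
# A pod carrying the label app: telemetry-agent (illustrative names)
apiVersion: v1
kind: Pod
metadata:
  name: telemetry-agent
  labels:
    app: telemetry-agent
spec:
  containers:
    - name: agent
      image: registry.example.com/telemetry-agent:1.0   # hypothetical image
---
# A Service that finds its backing pods by the same label
apiVersion: v1
kind: Service
metadata:
  name: telemetry-agent
spec:
  selector:
    app: telemetry-agent
  ports:
    - port: 8080
      targetPort: 8080
```

Any pod carrying the `app: telemetry-agent` label becomes a backend for the Service, which is how Kubernetes decouples workload identity from individual containers.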
Challenges in IoT edge
Computation and resource constraints: IoT edge devices have limited CPU and memory, so they need to be used effectively and preserved for the solution’s mission-critical functionalities.
Remote and resource management: As the network grows, manually deploying, managing, and maintaining devices becomes challenging and time-consuming. Some of the key obstacles and challenges faced in IoT edge today include:
- Efficient usage, remote monitoring, and control of device resources, including CPU, memory, networking, and edge-device I/O ports
- Hosting and scaling of any combination of apps and the ability to control CPU cores and co-processing (e.g., GPU) to specific apps
- Automated and remote updates of security patches and the entire software stack with rollback capability to prevent bricking
- Automated connectivity to one or more of the backends (e.g., cloud or on-premises) and easy migration to different backends
- A secure and distributed firewall to securely route data over networks per policy
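Several of the requirements above, notably controlling CPU, memory, and co-processors per app, map onto standard Kubernetes resource requests and limits. The sketch below uses a hypothetical edge workload; the GPU line assumes the node runs the NVIDIA device plugin, so treat it as optional:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vision-inference   # hypothetical edge workload
spec:
  containers:
    - name: inference
      image: registry.example.com/vision-inference:1.0   # hypothetical image
      resources:
        requests:
          cpu: "500m"        # reserve half a CPU core for scheduling
          memory: "256Mi"
        limits:
          cpu: "1"           # hard cap at one core
          memory: "512Mi"
          nvidia.com/gpu: 1  # requires the NVIDIA device plugin on the node
```

Requests guide the scheduler toward nodes with enough spare capacity, while limits keep a single app from starving the device's mission-critical functions.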
Security and trust: Security issues represent the greatest obstacle to the growth of IoT edge. IoT edge devices must be protected from unauthorized access. Discovery, authentication, and trust establishment, as well as anonymity and traceability of devices, are challenging at high scale. An additional security layer is necessary to ensure that different IoT applications on a device execute in isolation from each other.
Reliability and fault tolerance: With the increase of IoT devices in the system, self-managing and self-configuring solutions are required on the edge network. IoT applications must be able to recover from any issues that arise during their lifetime. Resiliency to failures and preventing denial of service attacks are some of the common challenges that exist in IoT edge.
Scalability: Sensors are increasingly controlling everything in IoT. The number of data collection points and the volume of data collected are increasing rapidly. In many applications, such as smart-city and smart-traffic systems, it is not uncommon for thousands of new sensors to be installed in a short period of time while the IoT environment remains operational. This challenge has increased the need to scale the IoT environment and data management. Also, cost and other parameters such as workload monitoring, storage capacity, dynamic resource allocation, and data transfer rate are challenging edge-based services.
Scheduling and load balancing: Edge computing depends heavily on scheduling and load balancing mechanisms to maintain large applications whose data is shared across multiple services. To maximize utilization of computational resources, data, software, and infrastructure must be made available at lower cost in a secure, reliable, and flexible manner, which calls for an efficient scheduling and load balancing mechanism.
How Kubernetes is benefiting edge devices
Scalability: The ability to scale is the key concern for many IoT solutions. The ability to serve more devices and handle terabytes of data in real-time requires an infrastructure that can independently scale horizontally or vertically. As containers are lightweight, they can be created in milliseconds compared to traditional virtual machines. A key benefit of Kubernetes is its ability to scale services easily across network clusters, independently scale the containers, and restart automatically without any impact on services.
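As one illustration of this kind of scaling, a HorizontalPodAutoscaler can grow and shrink a workload automatically based on observed load. The Deployment name below is hypothetical, and the example assumes a metrics source (such as metrics-server) is available in the cluster:

```yaml
# Scale the (hypothetical) ingest Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingest
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingest
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```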
High availability: Edge devices need to be available and reliable for IoT solutions that handle critical business tasks. Each pod is assigned its own IP address, making it easy to balance load between pods and to restart applications when containers fail. Kubernetes can also use storage systems from different providers, such as AWS, Google Cloud, or Microsoft Azure, keeping workloads cloud-independent.
Efficient use of resources: Kubernetes lowers the cost of hosting IoT applications due to its efficient use of resources. Also, it provides an abstraction layer on top of hosted virtual machines. Thus, administrators can focus on deploying services across the optimal number of virtual machines (VMs), which reduces the total cost of running VMs for an IoT application.
Deployment to the IoT edge: A key IoT challenge is deploying software updates to edge devices without interrupting services. Kubernetes can run microservices that progressively roll out changes to services: Kubernetes Deployments roll out pod-version updates with a rolling update strategy, achieving zero service downtime by keeping some instances up and running while the update is performed. Old pods are removed only after new pods of the new deployment version are ready to handle the traffic. It is also possible to scale applications up or down with a simple command.
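The rolling update behavior described above is configured on the Deployment itself. A minimal sketch, with hypothetical names and image, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-gateway   # hypothetical service
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the update
      maxSurge: 1         # at most one extra pod above the replica count
  selector:
    matchLabels:
      app: sensor-gateway
  template:
    metadata:
      labels:
        app: sensor-gateway
    spec:
      containers:
        - name: gateway
          image: registry.example.com/sensor-gateway:2.0   # hypothetical image
          readinessProbe:        # new pods receive traffic only once ready
            httpGet:
              path: /healthz     # assumed health endpoint
              port: 8080
```

The readiness probe is what lets Kubernetes hold traffic away from a new pod until it can actually serve, which is the mechanism behind the zero-downtime claim.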
Enabling DevOps for IoT: IoT solutions must be updated seamlessly, with zero downtime for end users, to meet market demands. Kubernetes helps development teams quickly verify, roll out, and deploy changes to IoT services. Also, Kubernetes is supported on different cloud providers, such as AWS, Google Cloud, and Azure, and even as on-premises software, so it is easy to migrate to any cloud service in the future.
Because of these features, Kubernetes has become one of the most widely used tools for deploying and managing containerized workloads.
Challenges for Kubernetes in IoT edge
Edge gateways deal with various protocols such as Bluetooth, Wi-Fi, 3G, 4G, and 5G and must use compute resources effectively. Due to limited computational resources in edge gateways, it is not straightforward to run Kubernetes directly on edge servers.
Even if the servers allow the installation of all the Kubernetes components at the edge, a setup that works fine on developer machines will quickly run into issues in production. Addressing those issues typically involves:
- Separating masters and nodes from the edge and moving the master to the cloud for better monitoring and control where masters run the control plane and nodes take the workload
- Creating a separate cluster for the database etcd to manage large loads
- Dedicating nodes for ingress and egress traffic for better traffic management
These measures lead to the creation of multiple clusters, making the entire infrastructure very complex to manage.
Need for lightweight Kubernetes
Accommodating the challenges of IoT at the edge requires a compact, optimized version of Kubernetes. Here we profile the benefits and challenges of three Kubernetes variants with edge capabilities.
MicroK8s sits between edge clusters and standard Kubernetes. Its footprint is small enough to run on limited resources, yet it can orchestrate full-blown cloud resource pools. Like Docker images, MicroK8s ships as an immutable package, which improves security and simplifies operations. It helps create self-healing, high-availability clusters that automatically choose the best nodes for the Kubernetes datastore; when a cluster database node is lost, another node is promoted without administrator intervention. MicroK8s is quick to install and easy to upgrade with strong security, making it well suited to micro clouds and edge computing.
K3s is a Cloud Native Computing Foundation (CNCF) certified Kubernetes distribution, meaning the same YAML that operates against a regular Kubernetes cluster also works against a K3s cluster. The minimum RAM requirement to run a K3s cluster is 512 MB, and pods are allowed to run on the master as well as on worker nodes. K3s creates an edge cluster that provides further isolation between the edge and the cloud, which benefits scenarios in which edge pods cannot run outside the edge.
The advantages of standard Kubernetes apply to K3s as well. However, there are some limitations, including:
- K3s supports a single master, meaning that if the master goes down, you lose the ability to manage the cluster.
- The default datastore in K3s is SQLite, which becomes a bottleneck for large clusters with many concurrent operations. Also, K3s has non-redundant components, such as the datastore, that make coordination more difficult if the same pods are used for both edge and cloud nodes.
Like K3s, KubeEdge is a CNCF project. Its primary aim is to extend Kubernetes from the cloud to the edge and to meet the following edge computing challenges:
- Network reliability between cloud and edge
- Resource constraints on edge nodes
- The highly distributed, scale-out nature of edge architectures
KubeEdge builds a lightweight edge runtime on top of Kubernetes. To overcome resource constraints at the edge, the control plane is moved from the edge node to the cloud, and lightweight SQLite is used instead of etcd on the edge. Cloud and edge nodes are therefore loosely coupled: an agent at the edge can autonomously manage services if communication with the cloud fails.
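Because KubeEdge nodes register with the standard Kubernetes API, a workload can be pinned to edge nodes with an ordinary nodeSelector. The node label below is the one KubeEdge conventionally applies to edge nodes; treat it, along with the names and image, as assumptions to verify against your deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-collector   # hypothetical edge workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-collector
  template:
    metadata:
      labels:
        app: edge-collector
    spec:
      nodeSelector:
        # Label KubeEdge applies to edge nodes (assumption: default registration)
        node-role.kubernetes.io/edge: ""
      containers:
        - name: collector
          image: registry.example.com/edge-collector:1.0   # hypothetical image
```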
KubeEdge has integrated with Kubernetes CRI, CSI, and CNI interfaces connecting to runtime, storage, and network resources. Also, KubeEdge is open to integrate with other CNCF projects, such as Envoy, Prometheus, and etcd.
One of the critical challenges that KubeEdge addresses is managing edge nodes that are geographically dispersed. KubeEdge enables “centralized management” of remote edge nodes and the applications running on them, which is a significant remote-management capability.
Managed cloud/edge services for IoT edge
Apart from different variants of Kubernetes, there are many managed cloud solutions available from Amazon, Microsoft, Google, and others that can address IoT edge challenges. Here are some of the services and their benefits.
AWS IoT Greengrass
This Amazon solution provides controls for building IoT devices that connect to the cloud and other devices. It enables the local execution of AWS Lambda code, data caching, messaging, and security. Devices are put together into groups, and each group can communicate over the local network. AWS IoT Greengrass enables fast communication, which translates into a near-real-time response.
Some of the benefits of AWS Greengrass include:
- Build intelligent devices faster: AWS IoT Greengrass supports various programming languages and development and execution environments, such as AWS Lambda functions, Docker containers, native OS processes, and open-source software, to develop, test, and launch your IoT applications.
- Deploy device software at scale: AWS IoT Greengrass supports remote deployment and seamless software updates on millions of devices without needing a firmware update.
- Operate offline: Devices can act locally on the data they generate, respond quickly to local events, and reduce latency.
- Secure device communications: AWS IoT Greengrass authenticates and encrypts device data for both local and cloud communications.
Azure IoT Edge
Azure IoT Edge is a fully managed service built on the Azure IoT Hub. It is used to analyze data on IoT devices rather than in the cloud. By moving most of the workload to the edge, only a few messages need to be sent to the cloud. It helps deploy cloud workloads such as artificial intelligence or business logic to run on IoT edge devices using standard containers. By moving certain workloads to the edge of the network, devices spend less time communicating with the cloud, react more quickly to local changes, and operate reliably even in extended offline periods.
Azure IoT Edge works on Linux or Windows devices that support container engines. Docker-compatible containers from Azure services or Microsoft partners are available to run business logic at the edge. It is easy to manage and deploy workloads from the cloud through Azure IoT Hub with zero-touch device provisioning using the cloud interface.
Some of the benefits of Azure IoT Edge include:
- Offloading AI and analytics workloads to the edge and reducing IoT solution costs
- Simplifying development because IoT edge code is consistent across the cloud and the edge and supports languages such as C, C#, Java, Node.js, and Python
- Responding in near-real-time, because minimizing latency between the data and the decision is critical; rather than processing your data in the cloud, Azure IoT Edge processes it on the device itself
- Operating reliably and securely even when edge devices are offline or have only intermittent connectivity to the cloud; Azure IoT Edge device management automatically syncs the latest state of your devices after they reconnect, ensuring seamless operability
Azure IoT Hub is a Microsoft foundational Platform-as-a-Service product, enabling device connectivity, management, and communication. It supports services such as:
- Device-to-cloud messaging
- Device authentication
- Support for HTTP, MQTT, and AMQP protocols
- Device monitoring and diagnostics
- Cloud-to-device messaging
- Device management
- Device and module twins for storing information about the current and desired properties of devices and their components and modules
- IoT Edge for creating program modules and deploying them across the network nodes
Industries that rely on IoT are focusing on deploying mission-critical services in edge devices to improve solutions’ responsiveness and reduce costs. Kubernetes-based solutions provide a common platform that can be used for deploying IoT services at the edge. It is one way to solve the challenge of scaling applications in unfamiliar environments.
The Kubernetes community is working to advance and innovate the solutions. The continuous advancements make it possible to build cloud-native IoT solutions that are scalable, reliable, and deployable in a distributed environment.
Although IoT edge solutions are packaged differently, they provide similar features with rich functionality that offers high scalability, reliability, fault tolerance, and built-in security for every IoT system layer.
When choosing the optimal Kubernetes solution for your edge application, it is essential to consider all relevant factors such as the nature of applications, cost, hardware compatibility, resource availability, and team skills.