Container Instances
Container instances address the primary drawback of running a container runtime directly on top of infrastructure: the high setup and maintenance overhead involved. Using the methods described in the previous section, the setup process can be streamlined to a certain extent. The maintenance of infrastructure, however, poses an entirely different challenge. When using infrastructure directly, developers must take on the responsibility of routinely updating, patching, and rebooting their compute instances. They must also keep the container runtime patched against the latest CVEs while ensuring that these runtimes are consistently configured. Sizing the infrastructure becomes another challenge in this model because you need to ensure that there is always enough compute capacity for container workloads to scale dynamically while still optimizing for cost. Instance provisioning times can be orders of magnitude longer than container creation and startup times, so always having “just enough headroom” is essential to scaling the container workload seamlessly. Developers also need to ensure that logs and metrics from containers can be collected and pushed to analytics tools. Workload isolation is an equally challenging problem, particularly for multitenant applications and SaaS platforms, where the shared infrastructure must be managed carefully to prevent data leakage and container escape attacks. These challenges often result in solutions that require a significant amount of custom code to ensure that containers are placed on optimally sized instances, that containers have room to scale when needed, that shared resources are well isolated, and that the fleet can be managed and patched efficiently from an operational perspective.
Container instances address these concerns by offering a service that enables you to create one or more containers without managing infrastructure. The experience is similar to compute instance creation in that you specify the CPU, memory, network, and other resource characteristics required for one or more containers and then provide the container images to run. OCI provides the compute, the container runtime, and other resources such as networking and storage; it then uses the supplied metadata to pull the images and create a running container or set of containers. The OCI service takes care of creating and maintaining the underlying infrastructure, managing activities such as OS patching and restarts, container runtime setup, network setup, and storage attachment. This greatly simplifies the workflow for developers while addressing the drawbacks of the traditional approaches. From the developer’s perspective, the workflow is very similar to launching a compute instance, except that instead of providing an operating system image, the developer provides CPU, memory, and other resource constraints, along with the container images that make up the container instance.
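The developer-facing request can be pictured as a small specification: resource constraints plus one or more container images, with no operating-system image. The sketch below models that shape in plain Python; the class and field names (`ContainerSpec`, `ContainerInstanceSpec`, and so on) are illustrative stand-ins, not the actual OCI SDK API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ContainerSpec:
    # Image reference that the service pulls at creation time.
    # Per-container overrides (environment, command) are omitted for brevity.
    image_url: str


@dataclass(frozen=True)
class ContainerInstanceSpec:
    # Resource constraints for the whole instance; note there is no OS
    # image to supply -- the service owns the underlying VM and runtime.
    ocpus: float
    memory_gbs: float
    containers: tuple  # one or more ContainerSpec entries


def make_instance_spec(ocpus, memory_gbs, images):
    """Build a creation request: a resource shape plus the images to run."""
    if not images:
        raise ValueError("a container instance needs at least one image")
    return ContainerInstanceSpec(
        ocpus=ocpus,
        memory_gbs=memory_gbs,
        containers=tuple(ContainerSpec(image_url=i) for i in images),
    )


# A single-container instance with 1 OCPU and 4 GB of memory
# (the registry path is a made-up example):
spec = make_instance_spec(1.0, 4.0, ["myregistry.example.com/web:1.0"])
print(len(spec.containers))
```

In the real service, this specification is what the platform consumes to provision the VM, set up the runtime, pull the images, and start the containers on the developer’s behalf.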
A container instance is a lightweight, container-optimized VM that can run more than one container. This enables developers to start containers much faster than provisioning VMs while retaining hypervisor-level isolation and avoiding the management overhead of traditional VMs. Hypervisor-level isolation also provides a stronger security posture, even in the face of container escape attacks. The containers within a container instance share the instance’s CPU, network, and storage resources. This is somewhat like a pod in Kubernetes, although a container instance is better thought of as a lightweight VM that can run one or more containers; it offers far fewer container orchestration features than platforms such as Kubernetes. Figure 3-4 shows how the container instance provides hypervisor-level isolation and an environment that can run multiple containers sharing the container instance’s resources.
FIGURE 3-4 Container Instance and Hypervisor-Level Isolation
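Because every container draws from the instance’s shared pool, per-container resource limits must collectively fit within the instance totals. A minimal sketch of that check, assuming a simple model in which per-container limits sum against the instance shape (function and parameter names are hypothetical):

```python
def fits_in_instance(instance_ocpus, instance_memory_gbs, container_limits):
    """Check that per-container resource limits, summed, do not exceed the
    container instance's shared CPU and memory pool.

    container_limits: list of (ocpus, memory_gbs) tuples, one per container.
    """
    total_cpu = sum(cpu for cpu, _ in container_limits)
    total_mem = sum(mem for _, mem in container_limits)
    return total_cpu <= instance_ocpus and total_mem <= instance_memory_gbs


# Two containers sharing a 2-OCPU / 8 GB instance:
ok = fits_in_instance(2.0, 8.0, [(1.0, 4.0), (1.0, 4.0)])

# The same instance cannot guarantee 2.5 OCPUs of limits:
too_big = fits_in_instance(2.0, 8.0, [(1.5, 4.0), (1.0, 4.0)])
print(ok, too_big)
```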
Container instances integrate with OCI features such as instance pools and autoscaling for fleet management. This means that you can create a container-based workload fleet that is consistently configured and scales elastically. A container instance also makes it easy to access the logs and metrics of each container it runs and to execute commands within those containers. Like containers themselves, container instances are immutable. Once a container instance is created, changes to resources such as CPU or storage are made by creating a new container instance and discarding the old one. The same applies to updating image tags and changing container configurations, in keeping with standard container lifecycle management practices. This dramatically improves the workflow for developers working with container applications by providing a fully managed platform for infrastructure and container runtimes.
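The immutable update flow can be sketched as create-then-discard: a change such as a new image tag yields a fresh instance specification rather than a mutation of the running one. The names below are illustrative, not the service’s actual API:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class InstanceSpec:
    ocpus: float
    memory_gbs: float
    image: str  # e.g. "registry.example.com/app:1.0" (made-up registry)


def roll_forward(old, **changes):
    """Immutable update: build a new spec with the changes applied.

    The caller then launches a new container instance from the new spec
    and discards the old one; the old spec is never mutated in place.
    """
    return replace(old, **changes)


v1 = InstanceSpec(ocpus=1.0, memory_gbs=4.0,
                  image="registry.example.com/app:1.0")
v2 = roll_forward(v1, image="registry.example.com/app:1.1")
print(v1.image, "->", v2.image)
```

The frozen dataclass mirrors the immutability guarantee: there is no way to change `v1` after creation, which keeps the running fleet’s configuration reproducible from its specification.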
Ideal applications for container instances include data processing jobs such as video encoding or data analytics, build jobs, CRUD applications, event-based actions, and task automation.