A Look at Kubernetes Deployment

Discover the keys to Kubernetes deployment, including getting the basics right, aligning technical skill sets, and prioritizing security.

Kubernetes has become the de facto industry standard for ensuring that container workloads run to specification and can scale, according to Sitaram Iyer, Venafi senior director of cloud native solutions. Kubernetes is now described as the world’s largest orchestration platform for containerized workloads, with 83% of Cloud Native Computing Foundation (CNCF) members already using it in production.

“By automating containerized environments, Kubernetes enables a dynamic, multicloud, multi-open source environment that is also scalable, cost-efficient, and productive — a developers’ paradise,” Iyer said.

But for developers to truly reach and enjoy that paradise, they must deploy Kubernetes correctly.

Kubernetes Deployment: The Basic Steps

Initial introduction: A specific team will typically work on a company’s initial Kubernetes deployment, Iyer said. This team will often create small clusters for experimentation and “tire kicking.” The initial Kubernetes introduction may also provide an opportunity to understand the basics of Docker images and containers provisioned using a cloud provider-managed service, such as Google Kubernetes Engine (GKE) or Amazon Web Services’ Elastic Kubernetes Service (EKS). However, many teams start by deploying Kubernetes on premises using existing virtual environments.
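At this stage, “tire kicking” often amounts to applying a minimal manifest to a managed cluster. The sketch below is illustrative only; the name, image, and replica count are placeholders, and it assumes kubectl is already pointed at a GKE or EKS cluster:

```yaml
# Minimal Deployment: runs two replicas of a container image on the cluster.
# Apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image works here
          ports:
            - containerPort: 80
```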

Exploratory stage: In this step, teams’ use of Kubernetes goes beyond experimentation, and clusters should be hosting “real” workloads, Iyer explained. These workloads could be non-critical components of an application or transitory workers used by the continuous integration (CI) service.
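A “transitory worker” of this kind might take the form of a Kubernetes Job that runs to completion and is then cleaned up. This is a minimal sketch under that assumption; the name, image, and command are placeholders for whatever the CI service actually runs:

```yaml
# A short-lived CI worker: runs once to completion, then the Job object is
# garbage-collected ten minutes after it finishes.
apiVersion: batch/v1
kind: Job
metadata:
  name: ci-worker                  # hypothetical name
spec:
  ttlSecondsAfterFinished: 600     # auto-delete the finished Job
  backoffLimit: 2                  # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: runner
          image: golang:1.22               # placeholder build image
          command: ["go", "version"]       # placeholder for the real CI step
```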

Fundamental stage: Once they’ve gained sufficient confidence in their ability to operate and manage Kubernetes clusters, teams should be comfortable with hosting distributed cloud native applications, according to Iyer.

Repeatable stage: As teams gain confidence and experience in hosting critical applications in production Kubernetes clusters, they will be well underway in migrating legacy applications to Kubernetes, Iyer said. The challenges of migrating legacy applications should be well understood, and some migrations should already be in place.

Optimization stage: In this stage, instead of using Kubernetes primitives directly, teams should now be using custom operator supersets, such as Knative Services and Argo Workflows, according to Iyer. Applications should be developed to take advantage of these operators, using design patterns that lend themselves to the Kubernetes controller-reconciler paradigm.
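Knative’s Service resource illustrates what such an operator superset buys a team: a single manifest that the Knative controllers reconcile into the revisions, routing, and autoscaling a team would otherwise assemble from primitives by hand. A minimal sketch, using the sample image from the Knative documentation:

```yaml
# Knative Service: the operator reconciles this one resource into revisions,
# routes, and scale-to-zero autoscaling on the cluster.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest   # Knative docs sample
          env:
            - name: TARGET
              value: "Kubernetes"
```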

Best Practices in Kubernetes Deployment

Organizations have the best success in their Kubernetes deployments when they align development teams and Kubernetes operators (platform engineering/IT Ops teams) across skill sets, workflows/processes, and tooling, and continuously reassess progress, according to Saad Malik, Spectro Cloud co-founder. “Treat the Kubernetes platform as a product, constantly getting feedback from your internal customers (development teams) and operators (platform engineering/IT Ops team).”

Malik added that while the entire organization might be pleased with the initial Kubernetes deployment, requirements can change rapidly, so companies should think beforehand about the holistic set of tooling they will need and how they will support it, including scaling, backup and restore, updates and upgrades, patching, and so on.

“Start from the premise that something will go wrong during the upgrade process and have clear business continuity and disaster recovery plans — test these often,” Malik said.
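Malik does not name a specific tool, but Velero is one common way to implement, and crucially to rehearse, this kind of backup and restore plan. A hedged sketch, assuming Velero is already installed in the cluster with object storage configured:

```yaml
# Velero Schedule: takes a cluster-wide backup every night at 02:00 and keeps
# each backup for 30 days, so restore drills can run against recent state.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly              # hypothetical name
  namespace: velero
spec:
  schedule: "0 2 * * *"      # cron syntax, cluster time
  template:
    includedNamespaces:
      - "*"
    ttl: 720h0m0s            # retain each backup for 30 days
```

Testing the matching restore path regularly, not just the backup, is what turns this from a checkbox into the disaster recovery plan Malik describes.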

Companies also need to ensure Kubernetes deployment doesn’t compromise data security.

“Each new integration added into a cluster will bring its own set of challenges such as security vulnerabilities, compatibility issues — they are effectively additional layers of the platform that need to be managed,” Malik said.

“Even though Kubernetes clusters and containers are inherently secure, the key to best practice in deployment is ensuring that security and management is a top priority,” Iyer added. “This has become more important as Kubernetes has grown in maturity, as there is now more need for greater control and governance.”

Security and governance cannot be afterthoughts, especially if you are using Kubernetes in production, Malik added. “A true zero trust model should be a prerequisite, as well as a set of monitoring and auditing capabilities to give the visibility that you need for cost (Kubecost), policy enforcement (OPA, Kyverno), RBAC (Fairwinds RBAC manager), etc.”
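To make the policy-enforcement piece concrete, here is a minimal sketch of the kind of Kyverno policy Malik alludes to. It is adapted from Kyverno’s well-known “disallow latest tag” sample; the policy name is arbitrary:

```yaml
# Kyverno ClusterPolicy: rejects any Pod whose container image uses the
# mutable ":latest" tag, a common supply-chain hygiene rule.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # block, rather than merely audit
  rules:
    - name: require-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must use an explicit tag; ':latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```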

Failure to adequately address security is the biggest mistake organizations can make in Kubernetes deployment, Iyer said. “The way that many organizations deploy Kubernetes is significantly increasing risk. Developer teams are evolving from deploying large clusters to multiple smaller ones — each related to a single app, team unit, or business region.”

The issue with this more agile approach is that it creates more complexity and, therefore, more machine identities to manage, because these clusters need to talk to each other securely, Iyer explained. “Within every Kubernetes cluster, every line of code and microservice needs a machine identity for secure communication. By deploying complex, multicluster strategies, a vast number of identities are being created that cannot be manually managed.”

Many companies are turning to solutions like Istio to help manage this influx of machine identities, but this actually compounds risk because Istio only supports self-signed machine identities, which leaves organizations vulnerable, according to Iyer. These certificates are not signed by a publicly trusted or organization-managed Certificate Authority (CA), cannot be revoked, and never expire.
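One widely used way to issue workload identities from an organization-managed CA instead is cert-manager. Iyer does not name it, so treat this as an illustrative assumption; the namespace, names, and issuer below are hypothetical, and the sketch assumes a ClusterIssuer backed by the organization’s CA already exists:

```yaml
# cert-manager Certificate: a short-lived, automatically renewed workload
# identity issued by an organization-managed CA, replacing an unmanaged
# self-signed certificate that could never be revoked or rotated.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payments-mtls        # hypothetical name
  namespace: payments        # hypothetical namespace
spec:
  secretName: payments-mtls-tls
  duration: 2160h            # 90-day lifetime
  renewBefore: 360h          # rotate 15 days before expiry
  dnsNames:
    - payments.payments.svc.cluster.local
  issuerRef:
    name: corporate-ca       # hypothetical ClusterIssuer backed by the org CA
    kind: ClusterIssuer
```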

The Future of Kubernetes Deployment

Automation in the form of cost management tools like Kubecost will increasingly be integrated into Kubernetes platforms to provide real-time, granular cost monitoring and analysis across disparate environments. In addition, centralized management will enable more efficient resource-sharing across teams and infrastructure platforms, predicted Tobi Knaup, D2iQ CEO.

Many organizations are focusing on the next evolution of continuous delivery — “progressive delivery,” involving a gradual rollout of applications potentially to a subset of users and using the learnings to deliver to everyone, according to Iyer. “The idea of progressive delivery is also to provide business leaders with greater control so that they can manage delivery using things like feature flags. This is very beneficial when multi-tenancy is in play, as it allows organizations to manage deployment and scale only for relevant tenants.”
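Argo Rollouts (a separate project from the Argo Workflows mentioned earlier, and not one Iyer names, so take it as an illustrative assumption) is one popular way to express such a gradual rollout declaratively. This sketch shifts 20%, then 50%, of traffic to a new version, pausing at each step so metrics or humans can veto the rollout; names and the image are placeholders:

```yaml
# Argo Rollouts canary: new pods receive 20% of traffic, then 50%, with
# pauses in between, before the rollout completes for everyone.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: checkout             # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: checkout
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {duration: 10m}
        - setWeight: 50
        - pause: {duration: 10m}
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:v2   # placeholder image
```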

Iyer cautioned that progressive delivery comes with challenges: it can make systems more complex, and it requires maintaining multiple versions of a service and managing traffic between them.

Simplifying all the management points at scale and being able to automate will be key to successful future Kubernetes deployments, according to Malik. Being able to quickly accommodate diverse requirements from development teams in an “as-a-Service” manner and to rapidly integrate new tools (such as multiple Kubernetes distributions, service meshes, etc.) will be critical.

Efficiencies in the form of oversubscribing “virtualized Kubernetes clusters” or better-centralized management capabilities will be another dimension of evolution, according to Malik.
