NashTech Blog


Key takeaways:

We need to avoid Kubernetes deployment anti-patterns for several reasons:

  1. Reliability: Antipatterns can lead to unreliable deployments, increasing the likelihood of system failures and downtime.
  2. Security: Antipatterns may introduce security vulnerabilities, compromising the integrity and confidentiality of the system.
  3. Maintainability: Antipatterns can make deployments harder to manage and maintain, resulting in increased complexity and technical debt.
  4. Consistency: Antipatterns may cause inconsistencies between environments, making it difficult to replicate and troubleshoot issues.
  5. Scalability: Antipatterns can hinder scalability efforts by introducing inefficiencies and bottlenecks in the deployment process.
  6. Compliance: Antipatterns may violate regulatory and compliance requirements, exposing the organization to legal and financial risks.

Anti-pattern 4 – Lack of deployment metrics

By “metrics” we actually mean the whole trilogy of:

  • Logging: This allows us to review the events and specifics of individual requests, typically after an incident.
  • Tracing: This enables a detailed exploration of the path a single request took through the system, also typically after an incident.
  • Metrics: These help detect incidents as they happen, or ideally predict them before they occur.

Metrics play a crucial role in Kubernetes compared to traditional virtual machines due to the distributed nature of services within a cluster, particularly when employing microservices. Unlike virtual machines, Kubernetes applications are highly dynamic, constantly coming and going in response to traffic fluctuations. It’s essential to understand how these applications adapt to varying workloads.

While the specific metrics solution you opt for may vary, ensuring comprehensive coverage for all your use cases is paramount. This includes:

  1. Obtaining critical information promptly, rather than relying solely on kubectl commands.
  2. Gaining insights into how traffic enters your cluster and identifying current bottlenecks.
  3. Validating and adjusting resource limits as needed.

However, the most critical use case is ensuring that you can determine whether your deployment has been successful. Merely having a container running does not guarantee that your application is operational or capable of handling requests.

Ideally, metrics should not be something that you look at from time to time. Metrics should be an integral part of your deployment process. Several organizations follow a workflow where the metrics are inspected manually after a deployment takes place, but this process is sub-optimal. Metrics should be handled in an automated way:

 

  1. A deployment takes place
  2. Metrics are examined (and compared to base case)
  3. Either the deployment is marked as finished or it is rolled back
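The steps above can be sketched in code. Everything here is illustrative: the function name, the error-rate metric, and the 1% tolerance are assumptions; in practice the two rates would come from your monitoring system (for example, a Prometheus query) and the threshold from your own SLOs.

```python
def deployment_decision(baseline_error_rate: float,
                        current_error_rate: float,
                        tolerance: float = 0.01) -> str:
    """Compare the post-deployment error rate against the pre-deployment
    baseline and decide the fate of the rollout.

    The error-rate metric and the 1% tolerance are illustrative
    assumptions, not prescriptions.
    """
    if current_error_rate <= baseline_error_rate + tolerance:
        return "finished"   # the new version performs within tolerance
    return "rollback"       # regression detected: revert the deployment


# Example: baseline 2% errors, post-deploy 9% errors -> "rollback"
print(deployment_decision(0.02, 0.09))
```

Progressive-delivery tools such as Argo Rollouts and Flagger implement exactly this deploy-measure-decide loop natively, so you rarely need to hand-roll it.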

It’s important to note that these actions do not involve any manual steps. Incorporating metrics into your deployments is a challenging endeavor. However, it underscores the ultimate objective and the critical role that metrics play in Kubernetes deployments.

Anti-pattern 5 – Combining production and non-production clusters

While Kubernetes is engineered for cluster orchestration, it’s crucial to avoid the pitfall of consolidating all your requirements into a single large cluster. At minimum, you should maintain two clusters: one for production and another for non-production purposes.

To begin with, combining production and non-production environments presents an evident risk of resource depletion. It’s essential to prevent a rogue development version from encroaching on the resources allocated for production services.

Numerous detrimental scenarios can arise, with one of the most common being:

  • A developer creates a feature namespace within a cluster that also hosts production.
  • They proceed to deploy their feature code in the namespace and commence integration testing.
  • During testing, the integration scripts generate dummy data, manipulate the database, or inadvertently interfere with the backend in a harmful manner.
  • Unfortunately, the containers contain production URLs and configurations, so all the integration tests unintentionally affect the live production workloads.

To avoid falling into this trap, it is much simpler to maintain separate, dedicated production and non-production clusters.

Anti-pattern 6 – Deploying without memory and CPU limits

By default, applications deployed to Kubernetes lack specified resource limits. Consequently, an application could potentially monopolize the entire cluster. While this situation could be problematic in a staging cluster, it could prove catastrophic in a production environment, as even minor memory leaks or CPU fluctuations could disrupt other applications.

Therefore, it’s imperative that all applications, irrespective of the target cluster, have defined resource limits.
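As a sketch, a Deployment with both requests and limits set might look like the following (the application name, image, and values are illustrative; tune them to your application's measured usage):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:1.0.0  # placeholder image
        resources:
          requests:                      # what the scheduler reserves
            cpu: "250m"
            memory: "256Mi"
          limits:                        # hard ceiling enforced at runtime
            cpu: "500m"
            memory: "512Mi"
```

Requests influence scheduling, while limits are enforced at runtime: a container that exceeds its memory limit is OOM-killed, whereas CPU usage above the limit is throttled.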

Kubernetes offers the advantage of resource elasticity, allowing your applications to scale dynamically to meet demand. However, if the cluster constantly kills or restarts your application as it starts to handle significant loads (for instance, during a spike in traffic to your e-commerce platform), you lose the benefits of using a cluster.

Conversely, setting resource limits too high wastes resources and reduces cluster efficiency. It’s crucial to strike a balance.

Furthermore, it’s important to analyze the resource usage patterns of your programming languages and underlying platforms. Legacy Java applications, for example, are notorious for encountering issues with memory limits. Understanding these patterns helps optimize resource allocation and application performance.
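For instance, older JVMs sized their heap from the host's total memory rather than the container's limit, leading to surprise OOM kills. On container-aware JVMs (roughly JDK 8u191+ and JDK 10+), you can cap the heap relative to the container's memory limit. A sketch, where the service name and the 75% figure are illustrative choices:

```yaml
# Fragment of a container spec: align the JVM heap with the memory limit.
containers:
- name: legacy-java-app                # hypothetical service name
  image: example.com/legacy-java-app:1.0.0
  env:
  - name: JAVA_TOOL_OPTIONS            # picked up automatically by the JVM
    value: "-XX:MaxRAMPercentage=75.0"
  resources:
    requests:
      memory: "512Mi"
    limits:
      memory: "1Gi"                    # heap capped at ~75% of this limit
```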

Continued in part 3.


Le Cao

I am Engineering Manager at NashTech Vietnam. I have been with the company for over 10 years and during this time, I have gained extensive experience and knowledge in the field of .NET, Frontend and DevOps. My primary responsibilities include managing and overseeing the development, testing, and deployment of software applications to ensure high quality and reliable products are delivered to our clients. I am passionate about exploring new technologies and implementing best practices to improve our development processes and deliverables. I am also dedicated to fostering a culture of collaboration and innovation within our team to achieve our goals.


