NashTech Blog

How to Implement Tenant-Specific Resource Quotas with Capsule


Managing multi-tenancy in Kubernetes comes with the challenge of ensuring that one tenant doesn’t monopolize cluster resources and starve the others. Capsule simplifies multi-tenancy by grouping namespaces under tenants, but to avoid resource contention you still need to define resource quotas for each tenant. In this blog, we’ll explore how to implement tenant-specific resource quotas with Capsule, including real-world examples and best practices.


🚀 Why Resource Quotas Matter in Multi-Tenancy


When multiple tenants share the same Kubernetes cluster, they compete for resources such as CPU, memory, and storage. Without proper resource limits, a single tenant could:

  • Overwhelm the cluster – High resource consumption by one tenant could cause service disruption for others.
  • Cause noisy neighbor issues – One tenant’s resource spikes could impact the performance of others.
  • Lead to scheduling failures – Kubernetes scheduler might struggle to allocate resources if they are overused.

Resource quotas help avoid these issues by:

  • Limiting the total CPU and memory that a tenant can consume.
  • Preventing a single pod or deployment from exhausting resources.
  • Ensuring fair resource distribution among tenants.


🛠️ How Capsule Manages Resource Quotas

Capsule allows defining resource quotas at the tenant level. When a tenant is created using Capsule, you can configure:

  • Namespace Quotas – Maximum number of namespaces a tenant can create.
  • CPU and Memory Quotas – Maximum CPU and memory a tenant can consume across all namespaces.
  • Storage Quotas – Maximum persistent volume claim (PVC) size a tenant can create.
  • Service and Pod Limits – Maximum number of services, pods, and deployments allowed.

Capsule automatically enforces these quotas across all namespaces assigned to a tenant, ensuring consistent resource management.


🔎 Step-by-Step Guide: Setting Up Resource Quotas with Capsule

1. Install Capsule

If Capsule is not installed yet, install it using Helm:

helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install capsule clastix/capsule -n capsule-system --create-namespace

Verify that Capsule is running:

kubectl get pods -n capsule-system

2. Create a Tenant

Let’s create a tenant named team-a with resource quotas:

apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: team-a
spec:
  owners:
    - kind: User
      name: dev1@example.com
  namespaceOptions:
    quota: 3                 # Max number of namespaces the tenant can create
  nodeSelector:
    kubernetes.io/os: linux
  resourceQuotas:
    items:
      - hard:
          requests.cpu: "2"            # Max total CPU requests across all namespaces
          requests.memory: "4Gi"       # Max total memory requests across all namespaces
          limits.cpu: "4"              # Max CPU limits across all namespaces
          limits.memory: "8Gi"         # Max memory limits across all namespaces
          persistentvolumeclaims: "10" # Max number of PVCs allowed
          services: "5"                # Max number of services allowed
          pods: "10"                   # Max number of pods allowed

Apply the tenant:

kubectl apply -f tenant.yaml

What this does:

  • Tenant team-a can create up to 3 namespaces.
  • Total CPU requests across the tenant are limited to 2 cores and memory requests to 4 GiB.
  • Pods and services are capped at 10 and 5 respectively.
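As a quick sanity check on these numbers, you can work out how many identically sized pods fit under the tenant’s CPU request quota. The 500m figure below is a hypothetical per-pod request for illustration, not part of the tenant spec:

```shell
# With requests.cpu capped at 2 cores (2000m) for the whole tenant,
# pods requesting 500m CPU each can run at most 4 at a time.
quota_millicores=2000
per_pod_millicores=500
echo $(( quota_millicores / per_pod_millicores ))
```

Keep in mind the pods quota (10) applies at the same time, so the effective ceiling is whichever limit is hit first.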

3. Create a Namespace for the Tenant

Now that the tenant is created, create a namespace within the tenant:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a-namespace
  labels:
    capsule.clastix.io/tenant: team-a

Apply the namespace:

kubectl apply -f namespace.yaml
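Since the tenant allows up to 3 namespaces, you could also generate all of them in one file. The loop below is a sketch; the names team-a-ns-1 through team-a-ns-3 and the output filename are hypothetical:

```shell
# Emit one Namespace manifest per allowed namespace, separated by "---",
# all labelled for tenant team-a, into a single file.
for i in 1 2 3; do
  cat <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-ns-$i
  labels:
    capsule.clastix.io/tenant: team-a
EOF
done > team-a-namespaces.yaml
```

Apply the file with `kubectl apply -f team-a-namespaces.yaml`; once the tenant’s namespace quota of 3 is reached, a fourth namespace would be rejected.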

4. Apply a Resource Quota in the Namespace

Capsule enforces the aggregate quota across the whole tenant; to cap consumption within a single namespace, you can additionally define a namespace-level ResourceQuota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-resource-quota
  namespace: team-a-namespace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "2Gi"
    limits.cpu: "2"
    limits.memory: "4Gi"
    persistentvolumeclaims: "5"
    services: "3"
    pods: "5"

Apply the resource quota:

kubectl apply -f resource-quota.yaml

What this does:

  • CPU requests are limited to 1 core in this namespace.
  • Memory requests are limited to 2 GB in this namespace.
  • Pod and service counts are restricted within the namespace.
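Before applying a quota like this, it is worth checking that the namespace-level values stay within the tenant-level ones, since a mismatch is a common source of confusing rejections. A minimal sketch of that check:

```shell
# Compare the namespace CPU request quota (1 core) against the
# tenant-wide quota (2 cores); flag it if the namespace asks for more.
tenant_cpu=2
ns_cpu=1
if [ "$ns_cpu" -le "$tenant_cpu" ]; then
  echo "ok: namespace quota fits within tenant quota"
else
  echo "warning: namespace quota exceeds tenant quota"
fi
```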

5. Create a Pod to Test Resource Quotas

Let’s create a pod that attempts to request more than the allowed CPU and memory:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: team-a-namespace
spec:
  containers:
    - name: test-container
      image: nginx
      resources:
        requests:
          cpu: "2"
          memory: "4Gi"
        limits:
          cpu: "3"
          memory: "6Gi"

Apply the pod:

kubectl apply -f test-pod.yaml

This will fail with an error like:

Error from server (Forbidden): pods "test-pod" is forbidden: exceeded quota: team-a-resource-quota

Why?

  • The pod requests exceed the namespace-level resource quota for CPU and memory.
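To see the quota admit a pod instead, shrink the requests so they fit. The manifest below is a sketch with hypothetical sizes (500m CPU / 1Gi memory) chosen to sit inside the namespace quota; fixed-pod.yaml is a made-up filename:

```shell
# Write a pod spec whose requests (500m CPU, 1Gi memory) and limits
# (1 CPU, 2Gi memory) fit within team-a-resource-quota.
cat <<'EOF' > fixed-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: team-a-namespace
spec:
  containers:
    - name: test-container
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
EOF
```

Applying this with `kubectl apply -f fixed-pod.yaml` should succeed, since both the requests and the limits are within the namespace quota.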

6. Monitor Resource Usage

You can check the resource usage of a tenant using:

kubectl describe resourcequota team-a-resource-quota -n team-a-namespace

Example output:

Name:             team-a-resource-quota
Namespace:        team-a-namespace
Resource          Used  Hard
--------          ----  ----
limits.cpu        0     2
limits.memory     0     4Gi
pods              0     5
requests.cpu      0     1
requests.memory   0     2Gi
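If you want a quick percentage view of the Used/Hard pairs above, a small awk one-liner does the arithmetic. The values here are taken from the example output; in practice you would parse them out of `kubectl describe` or query kube-state-metrics:

```shell
# Convert a Used/Hard pair (e.g. pods: 0 used of 5 hard) into a percentage.
used=0
hard=5
awk -v u="$used" -v h="$hard" 'BEGIN { printf "%.0f%% of quota used\n", (u / h) * 100 }'
```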

🚨 Troubleshooting Common Issues

1. Quota Not Enforced

  • Make sure you apply the quota in the correct namespace.
  • Verify that Capsule is managing the namespace using:
kubectl get ns team-a-namespace -o yaml

2. Pod Creation Failing Despite Available Quota

  • Make sure the namespace-level quota does not conflict with the tenant-level quota.
  • Check for namespace-specific limits using:
kubectl describe resourcequota -n team-a-namespace

🏆 Best Practices for Tenant-Specific Resource Quotas

  • Set realistic quotas – Set limits based on actual tenant requirements.
  • Monitor quota usage – Use Prometheus and Grafana to monitor real-time usage.
  • Avoid overprovisioning – Keep quotas conservative to prevent resource exhaustion.
  • Create quota templates – Create standard quota templates for faster tenant onboarding.
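The last point, quota templates, can be as simple as a small script that stamps out a ResourceQuota per tenant. The sketch below is one way to do it; the make_quota function and the team-b values are hypothetical:

```shell
# Generate a namespace-level ResourceQuota manifest for a given tenant,
# parameterised by CPU and memory request limits.
make_quota() {
  tenant="$1"; cpu="$2"; mem="$3"
  cat <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ${tenant}-resource-quota
  namespace: ${tenant}-namespace
spec:
  hard:
    requests.cpu: "${cpu}"
    requests.memory: "${mem}"
EOF
}

make_quota team-b 1 2Gi
```

When onboarding a new tenant, you can pipe the output straight to `kubectl apply -f -`.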


🌍 Real-World Use Case

A fintech company runs multiple tenant-based microservices in a shared Kubernetes cluster:

  • Capsule creates separate tenants for each business unit.
  • Resource quotas ensure that no single business unit consumes excessive CPU or memory.
  • Monitoring tools alert if a tenant reaches 80% of its quota.
  • Automated scaling adjusts quotas based on demand spikes.

This setup ensures fair resource distribution while maintaining high availability.


🎯 Conclusion

Setting tenant-specific resource quotas with Capsule ensures fair resource sharing and prevents cluster exhaustion. By defining quotas at both the tenant and namespace levels, you can effectively balance resource allocation while maintaining isolation and security.

That’s it for now. I hope this article gave you some useful insights on the topic. Please feel free to drop a comment, question or suggestion.


Riya

Riya is a DevOps Engineer with a passion for new technologies. She is a programmer by heart trying to learn something about everything. On a personal front, she loves traveling, listening to music, and binge-watching web series.
