
Imagine running a backend pod on a node that's already handling heavy database traffic. Suddenly, both the app and the database slow down, and your users are unhappy. By default, Kubernetes doesn't know your app's preferences; it schedules purely based on available resources.
But what if you could tell Kubernetes where your pods should or shouldn't go, based on labels, zones, or other pods? That's where Affinity in Kubernetes comes into play.
What is Affinity in Kubernetes?
Affinity in Kubernetes is a set of rules that influence how pods are scheduled based on node labels or the location of other pods. There are two main types:
- Affinity: “Keep these pods close together.”
- Anti-Affinity: “Keep these pods apart.”
You can also define these rules in two ways:
- Required: The pod is scheduled only if the rule is satisfied; otherwise it stays unscheduled (Pending).
- Preferred: The pod should follow the rule, but if it can’t, that’s acceptable.
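Both forms can sit side by side under the same affinity stanza. As an illustrative sketch (the label keys here are examples, not standard Kubernetes labels), a pod can require SSD nodes while merely preferring a particular zone:

spec:
  affinity:
    nodeAffinity:
      # Hard rule: the pod stays Pending unless a matching node exists.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      # Soft rule: the scheduler favors matching nodes but will fall back.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a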
Why Use Affinity in Kubernetes?
- Performance: Place related pods (like frontend and backend) close together to reduce network delays and improve speed.
- High Availability: Spread replicas across different nodes or zones to avoid downtime if one node fails.
- Cost Optimization: Schedule workloads on cheaper nodes (like spot instances) to save money.
- Compliance: Ensure sensitive apps run only in specific regions or zones to meet regulatory requirements.
Node Affinity – Match Pods to Nodes
Node affinity lets you tell Kubernetes to schedule pods only on certain nodes based on node labels. It comes in two forms:
- requiredDuringSchedulingIgnoredDuringExecution: the labels must match during scheduling.
- preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to match, but it's okay if it can't.
Note the "IgnoredDuringExecution" suffix: if a node's labels change after the pod is already running, the pod is not evicted.
For example, if some nodes have SSD storage and others don’t, you can use it to make sure your pod runs only on the SSD nodes.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
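To see the rule in context, the snippet above slots into a complete Pod manifest like this (the pod name and image are illustrative, and a node must first carry the label, e.g. via kubectl label nodes <node-name> disktype=ssd):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  containers:
  - name: app
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd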
Node Anti-Affinity – Avoid Certain Nodes
Node anti-affinity does the opposite: it keeps pods off certain nodes. Kubernetes has no separate nodeAntiAffinity field; you express it with node affinity and a negative operator such as NotIn or DoesNotExist. This can help you avoid nodes with specific roles or hardware, or ensure certain workloads are isolated.
Suppose you have nodes labeled with node-type=reserved and you want to avoid scheduling your pod on those nodes.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type
            operator: NotIn
            values:
            - reserved
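A related pattern, when you only care that the label key is absent rather than excluding a specific value, is the DoesNotExist operator. Note the difference: NotIn still allows nodes labeled node-type=<some-other-value>, while DoesNotExist rejects any node that carries the key at all.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Matches only nodes with no node-type label at all;
          # no values list is given for DoesNotExist.
          - key: node-type
            operator: DoesNotExist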
Example: Prefer pods on nodes in zone-a
Suppose you want your pods to prefer running on nodes labeled with zone=zone-a to be closer to related services, but it’s okay if they run on other nodes when those are busy.
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a
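The weight field ranges from 1 to 100. When several preferences are defined, the scheduler sums the weights of the terms each candidate node satisfies and favors the node with the highest total. A sketch with two weighted preferences (label keys are illustrative):

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80            # being in zone-a matters most
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a
      - weight: 20            # SSD storage is a nice-to-have
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd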
Pod Affinity – Group Pods Together
Pod Affinity lets you tell Kubernetes to place certain pods close to each other, so that pods that work together are co-located.
For example, different parts of your app, like a frontend and a backend, often need to communicate with each other. If they run on the same node, that communication avoids inter-node network hops, which reduces latency and improves overall performance.
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend
        topologyKey: kubernetes.io/hostname
This rule ensures that the pod runs on the same node as another pod with the label app=backend.
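The topologyKey defines what "together" means. With kubernetes.io/hostname it means the same node; switching to the standard topology.kubernetes.io/zone label relaxes the rule to the same zone:

spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend
        # Any node in the same zone as a backend pod qualifies,
        # not necessarily the same node.
        topologyKey: topology.kubernetes.io/zone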
Pod Anti-Affinity – Keep Pods Apart
Pod Anti-Affinity directs Kubernetes to avoid placing specific pods together on the same node or zone. By defining label selectors and topology keys, it controls pod placement to avoid resource contention and reduce risk from node failures.
For example, imagine you have 3 replicas of a frontend app. By using Pod Anti-Affinity, Kubernetes will try to schedule each replica on a different node. This way, if one node fails, your app can still run on the others without interruption.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: frontend
        topologyKey: kubernetes.io/hostname
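Applied to the 3-replica scenario above, a complete (illustrative) Deployment looks like this. Because the rule is required, the cluster needs at least three schedulable nodes, or the extra replicas stay Pending:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx       # illustrative image
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: frontend
            # No two frontend pods may share a node.
            topologyKey: kubernetes.io/hostname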
Real-World Use Cases
- Sending Non-Critical Workloads to Spot Instances: Use Node Affinity to place less important jobs (like batch processing) on cheaper spot nodes, helping reduce cloud costs.
- Run Helper Pods Near Main Pods: Imagine you have a helper pod that collects logs or metrics from your main app pod. In this situation, you can use Pod Affinity to place the helper pod on the same node as the app pod so they work faster together.
- Restricting Sensitive Data Workloads to Specific Zones: Use Node Affinity to ensure that data-sensitive applications only run in specific regions (like zone=europe-west1) to comply with legal and regulatory requirements.
- Spreading App Replicas for High Availability: Use Pod Anti-Affinity to ensure that replicas of your app are scheduled on different nodes. This way, if one node fails, the others can continue running the app without interruption.
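As a sketch of the spot-instance case, assuming the spot nodes carry a label such as node-lifecycle=spot (the exact key is hypothetical and depends on your cloud provider or node-pool setup), a batch job could prefer them like this:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          # Hypothetical label; check what your provider applies to spot nodes.
          - key: node-lifecycle
            operator: In
            values:
            - spot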
Conclusion
Affinity and Anti-Affinity give you smarter control over pod placement in Kubernetes. Whether you want pods placed together for better performance or spread apart for high availability, these rules help you build more reliable and efficient applications.
If you want to try it out yourself, I’ve included practical implementation code in my GitHub repository.
