NashTech Blog

Hands on Kubernetes Gateway API With NGINX Gateway Fabric

In a previous blog post, we examined the Kubernetes Gateway API and its numerous benefits. We discovered how it simplifies managing network traffic within a Kubernetes cluster by directing incoming requests to the appropriate services. The Kubernetes Gateway API lets us set more precise routing rules, improving control and optimization of network traffic. In the following sections, we will build on this and explore how to use the Gateway API with NGINX Gateway Fabric.

Overview of NGINX Gateway Fabric

NGINX Gateway Fabric is an open-source project that utilizes NGINX as the data plane to implement the Gateway API. This project’s objective is to implement the core Gateway APIs – Gateway, GatewayClass, HTTPRoute, TCPRoute, TLSRoute, and UDPRoute – to set up an HTTP or TCP/UDP load balancer, a reverse proxy, or an API gateway for applications operating on Kubernetes.

Set up NGINX Gateway Fabric

In this section, we will walk through installing NGINX Gateway Fabric in a Kubernetes cluster using Helm. The process involves several steps, which we will cover one by one: we will begin by installing NGINX Gateway Fabric and then provide a detailed guide on how to expose it.

Prerequisites

To complete the steps in this guide, you first need to install the following tools for Kubernetes management and development:
  • kubectl: A command-line interface for Kubernetes that allows you to manage and inspect cluster resources, and control containerized applications.
  • minikube: a local Kubernetes distribution focused on making it easy to learn and develop for Kubernetes.
  • Helm: version 3.0 or later, for deploying and managing applications on Kubernetes.

Start the cluster

From a terminal with administrator access (but not logged in as root), run:
minikube start
😄  minikube v1.32.0 on Darwin 14.2.1 (arm64)
✨  Automatically selected the docker driver. Other choices: virtualbox, ssh
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.28.3 preload ...

Installing the Gateway API resources

💡 The Gateway API resources from the standard channel must be installed before deploying NGINX Gateway Fabric. If they are already installed in your cluster, please ensure they are a version supported by NGINX Gateway Fabric. To install the Gateway API resources, run the following:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
You should see output like this:
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
Alternatively, you can check whether the custom resources are defined by using this command:
kubectl get crd

Install NGINX Gateway Fabric

Pull the latest stable release of the NGINX Gateway Fabric chart:
helm pull oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --untar
cd nginx-gateway-fabric
Create the nginx-gateway namespace and set it as the default namespace for all subsequent kubectl commands in the current context:
kubectl create namespace nginx-gateway
kubectl config set-context --current --namespace=nginx-gateway
To install the chart into the nginx-gateway namespace, run the following command.
helm install ngf .
Here, ngf is the release name; it can be changed to any name you want, and is added as a prefix to the Deployment name.

Viewing and Updating the Configuration

To view the current configuration (<release-name>-config):
kubectl get nginxgateways ngf-config -o yaml
To update the configuration:
kubectl edit nginxgateways ngf-config
This will open the configuration in your default editor. You can then update and save the configuration, which is applied automatically to the control plane. To view the status of the configuration:
kubectl describe nginxgateways ngf-config
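As an illustration, the manifest below sketches what an updated configuration might look like, raising the control plane log level to debug. The apiVersion and field names follow the NGINX Gateway Fabric NginxGateway CRD at the time of writing; check the CRD shipped with your chart version before applying.

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGateway
metadata:
  name: ngf-config
  namespace: nginx-gateway
spec:
  logging:
    # Supported levels for the NGINX Gateway Fabric control plane
    # are info (the default), debug, and error.
    level: debug
```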

Configure Port Forwarding

Once NGINX Gateway Fabric has been installed, you can see that the service ngf-nginx-gateway-fabric is created with the default service type LoadBalancer. Look up the public IP of the load balancer, which is reported in the EXTERNAL-IP column in the output of the following command:
kubectl get svc ngf-nginx-gateway-fabric
Here is an example of the output:
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ngf-nginx-gateway-fabric   LoadBalancer   10.103.46.189   <pending>     80:31848/TCP,443:31239/TCP   2m34s
Now we will configure port forwarding from local ports 8080 and 8443 to ports 80 and 443 on the nginx-gateway Pod. This is needed because Minikube runs a single-node Kubernetes cluster in a virtual machine (VM) on your local machine; port forwarding redirects traffic from a local port to the corresponding port inside the Minikube VM, letting you reach Kubernetes services from your local machine and simplifying local testing and debugging. To configure port forwarding, run the following command:
kubectl port-forward deployment/ngf-nginx-gateway-fabric 8080:80 8443:443
To access a LoadBalancer service, use the minikube tunnel command, which creates a routable IP for the service:
minikube tunnel
The expected output is shown below:
  Tunnel successfully started

📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress ngf-nginx-gateway-fabric requires privileged ports to be exposed: [80 443]

🏃  Starting tunnel for service ngf-nginx-gateway-fabric.
We have successfully installed NGINX Gateway Fabric in the cluster. The next step is to route traffic to our applications. In the sections that follow, we will create a shared Gateway, set up routes, and test those routes with curl.

High-level architecture

This diagram shows an example of NGINX Gateway Fabric exposing two web applications within a Kubernetes cluster to clients on the Internet:
The NGINX Gateway Fabric pod consists of two containers:
  1. nginx: the data plane. This system comprises an NGINX master process and several NGINX worker processes. The master process manages the worker processes, which handle client traffic and evenly distribute it to the backend applications.
  2. ngf: the control plane. This is a Kubernetes controller, written using the controller-runtime library. It monitors Kubernetes objects such as services, endpoints, secrets, and Gateway API CRDs. These objects are then translated into NGINX configuration to configure NGINX.

Deploy the first application

To build the coffee application, we will deploy two versions of it: coffee-v1 and coffee-v2. Apply the following YAML to create the necessary Services and Deployments:
apiVersion: v1
kind: Service
metadata:
  name: coffee-service-v1
spec:
  selector:
    app: coffee-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-v1
  template:
    metadata:
      labels:
        app: coffee-v1
    spec:
      containers:
      - name: coffee-v1
        image: docker.io/hahoang84/fake-service:v1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: "NAME"
          value: "COFFEE"
        - name: "VERSION"
          value: "V1"
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-service-v2
spec:
  selector:
    app: coffee-v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-v2
  template:
    metadata:
      labels:
        app: coffee-v2
    spec:
      containers:
      - name: coffee-v2
        image: docker.io/hahoang84/fake-service:v1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: "NAME"
          value: "COFFEE"
        - name: "VERSION"
          value: "V2"
Each Deployment creates a pod with a single coffee container listening on port 8080 and serving the docker.io/hahoang84/fake-service:v1.0.1 image. The NAME and VERSION environment variables determine the response each version returns, which will let us tell the two versions apart when we test the routes.

Create the shared Gateway

Next, create the shared Gateway. This is the central entry point for our Cafe application: it controls how traffic enters the cluster and will be shared by the routes we define later. First, apply the following YAML to create the Gateway:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
This Gateway, named cafe-gateway, defines a single listener on port 80 (HTTP) and uses the nginx GatewayClass that NGINX Gateway Fabric installs. By default, a listener only accepts routes from the Gateway's own namespace; the listener's allowedRoutes field can widen this. You can check the status of the Gateway by using this command:
kubectl describe gateway cafe-gateway
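For reference, listeners can be tuned further. The sketch below widens route attachment to all namespaces via allowedRoutes and adds an HTTPS listener; the Secret name cafe-tls is hypothetical and would need to hold a TLS certificate and key.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All          # accept HTTPRoutes from any namespace (the default is Same)
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: cafe-tls     # hypothetical Secret containing tls.crt and tls.key
```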

Create the routing

Next, set up two different routes. The first, v1, will be used by most service consumers. The second, v2, will be accessed when a request header named version with the value v2 is specified. To create the HTTPRoute, apply the following YAML:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee-route
spec:
  parentRefs:
  - name: cafe-gateway
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: coffee-service-v1
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
      headers:
      - name: version
        value: v2
    backendRefs:
    - name: coffee-service-v2
      port: 80
The HTTPRoute named coffee-route matches all requests for the cafe.example.com hostname with the / path prefix and forwards them to coffee-service-v1 on port 80; requests that also carry a version: v2 header are forwarded to coffee-service-v2 instead. The HTTPRoute is linked to the cafe-gateway Gateway via the parentRefs field. Check the status of the HTTPRoute:
kubectl describe httproute coffee-route

Test the route with curl

Now that the HTTPRoute is in place, let’s use curl to display the response. Use the -i option to additionally show the HTTP response code and headers.
curl -is -H "Host: cafe.example.com" http://localhost:80/
This command should complete successfully:
Server: nginx/1.25.4
Date: Thu, 30 May 2024 14:22:56 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 24
Connection: keep-alive
X-Powered-By: Express
ETag: W/"18-/S5aYysmu93xpyhqZzInpCZ0M2U"

Hello From COFFEE (V1)!
But when we supply the version: v2 header, the gateway routes the request to v2 as expected:
curl -is -H "Host: cafe.example.com" -H "version: v2" http://localhost:80/
The expected response is shown below:
HTTP/1.1 200 OK
Server: nginx/1.27.0
Date: Fri, 14 Jun 2024 09:57:56 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 24
Connection: keep-alive
X-Powered-By: Express
ETag: W/"18-U4qjRk0fi49MfsSJ6en/kMpd8P0"

Hello From COFFEE (V2)!
Requests for hostnames other than "cafe.example.com" should not be routed to the application, since our HTTPRoute only matches the "cafe.example.com" hostname. To verify this, send a request to the hostname "tea.example.com":
curl -is -H "Host: tea.example.com" http://localhost:80/
You should receive a 404 Not Found error:
HTTP/1.1 404 Not Found
Server: nginx/1.25.4
Date: Thu, 30 May 2024 14:25:18 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.25.4</center>
</body>
</html>

Deploy the second application

To build the tea application, we will deploy two versions of it: tea-v1 and tea-v2, exposed through the Services tea-service-v1 and tea-service-v2. To begin, apply the following YAML to create the necessary Services and Deployments:
apiVersion: v1
kind: Service
metadata:
  name: tea-service-v1
spec:
  selector:
    app: tea-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea-v1
  template:
    metadata:
      labels:
        app: tea-v1
    spec:
      containers:
      - name: tea-v1
        image: docker.io/hahoang84/fake-service:v1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: "NAME"
          value: "TEA"
        - name: "VERSION"
          value: "V1"
---
apiVersion: v1
kind: Service
metadata:
  name: tea-service-v2
spec:
  selector:
    app: tea-v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea-v2
  template:
    metadata:
      labels:
        app: tea-v2
    spec:
      containers:
      - name: tea-v2
        image: docker.io/hahoang84/fake-service:v1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: "NAME"
          value: "TEA"
        - name: "VERSION"
          value: "V2"

Deploy the second HTTPRoute

The YAML snippet below illustrates how two Services, tea-service-v1 and tea-service-v2, are listed as backends for a single route rule. This rule will distribute traffic, assigning 90% to tea-service-v1 and 10% to tea-service-v2. As we are utilizing the pre-existing gateway for these applications, the only component we need to create is the HTTPRoute.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tea-route
spec:
  parentRefs:
  - name: cafe-gateway
  hostnames:
  - "tea.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: tea-service-v1
      port: 80
      weight: 90
    - name: tea-service-v2
      port: 80
      weight: 10
The weight parameter represents a proportional distribution of traffic, not a percentage. Therefore, the sum of all weights in a single route rule becomes the denominator for all backends. weight is optional and defaults to 1 if not specified. If a route rule has only one backend, it will receive 100% of the traffic, regardless of the specified weight.
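The split arithmetic can be checked in plain shell: each backend's share is its weight divided by the sum of all weights in the rule.

```shell
# With weights 90 and 10, the denominator is 90 + 10 = 100,
# so v1 receives 90/100 and v2 receives 10/100 of the traffic.
V1_WEIGHT=90
V2_WEIGHT=10
TOTAL=$((V1_WEIGHT + V2_WEIGHT))
echo "v1 share: $((100 * V1_WEIGHT / TOTAL))%"   # v1 share: 90%
echo "v2 share: $((100 * V2_WEIGHT / TOTAL))%"   # v2 share: 10%
```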

Test the second route with curl

Now that the HTTPRoute is in place, let’s use curl to display the response with the -i option to additionally show the HTTP response code and headers.
curl -is -H "Host: tea.example.com" http://localhost:80/
Based on the weights we configured, we expect roughly 90% of the traffic to go to v1 and the remaining 10% to v2. Repeating the request several times should show responses from both versions in approximately that proportion.
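One way to observe the split is to send a batch of requests and tally the responses. The helper below is a small sketch: the canned sample stands in for live responses so the tally logic is self-contained, while the commented-out loop shows how you would feed it real traffic (it assumes the tunnel or port-forward from earlier is still running).

```shell
# Tally identical lines and print counts, most frequent first.
tally() { sort | uniq -c | sort -rn; }

# Live usage (assumes the gateway is reachable on localhost):
#   for _ in $(seq 1 20); do
#     curl -s -H "Host: tea.example.com" http://localhost:80/; echo
#   done | tally
#
# Canned 9:1 sample standing in for live responses:
{
  printf 'Hello From TEA (V1)!\n%.0s' 1 2 3 4 5 6 7 8 9
  printf 'Hello From TEA (V2)!\n'
} | tally
```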

Cleanup

You can now view all of the pods that have been deployed across the cluster (for example, with kubectl get pods -A):
default        coffee-v1-76877b9b86-tcnxl                          minikube
default        coffee-v2-55794d4769-hbpvm                          minikube
default        tea-v1-5c86d46468-2nkw2                             minikube
default        tea-v2-577cd7f869-lrppf                             minikube
nginx-gateway  ngf-nginx-gateway-fabric-65c7cf876c-6c7rq           minikube 
If you’d like to clean up the work you’ve done, simply delete the minikube cluster where you’ve been working.
minikube delete --all
minikube stop

Conclusion

Keep in mind that the Gateway API is intended to replace Ingress as the favored method for managing gateways in Kubernetes. During this transition, both options are accessible, allowing users to select what best suits their requirements. In conclusion, while Ingress is a reliable technology, the Kubernetes Gateway API provides greater flexibility, standardization, and scalability for managing and configuring gateways in Kubernetes clusters.

Ha Hoang
