Configuring Kong Ingress Controller on GKE to Use an HTTP Load Balancer
When using the Kong Ingress Controller on Google Kubernetes Engine (GKE), the default setup often creates a TCP load balancer. However, many use cases require an HTTP(S) load balancer to manage traffic at the application layer. In this guide, we’ll walk through the steps to configure the Kong Ingress Controller to create an HTTP load balancer instead of a TCP one.
Prerequisites
Before proceeding, ensure that:
You have a GKE cluster up and running.
Kong Ingress Controller is installed in your cluster.
Your GKE cluster has HTTP(S) load balancing enabled (this is usually the default).
Step-by-Step Configuration
1. Verify HTTP(S) Load Balancing in GKE
GKE supports HTTP(S) load balancing out of the box, but it’s always a good idea to double-check your cluster settings. HTTP(S) load balancing is necessary for this configuration to work seamlessly.
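One way to confirm the add-on from the command line (assuming gcloud is authenticated; CLUSTER_NAME and ZONE are placeholders for your own values):

```shell
# Show the HTTP load balancing add-on status for the cluster
gcloud container clusters describe CLUSTER_NAME \
  --zone ZONE \
  --format="value(addonsConfig.httpLoadBalancing)"

# If it was disabled, re-enable it
gcloud container clusters update CLUSTER_NAME \
  --zone ZONE \
  --update-addons=HttpLoadBalancing=ENABLED
```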
2. Create an Ingress Resource
Define an Ingress resource to expose your application via HTTP. Use the appropriate annotations to ensure the Kong Ingress Controller processes the resource correctly.
Example Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ingress
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
Replace example.com with your domain name, and replace my-service and 80 with the name and port of your backend service.
If you want GKE’s native ingress controller (the gce ingress class) to create the HTTP(S) load balancer and route traffic to Kong’s proxy Service, define a second Ingress like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: kong5-kong-proxy
            port:
              number: 80
3. Configure the Kong Proxy Service
By default, Kong may create a TCP load balancer because its proxy Service is of type LoadBalancer. To avoid this, change the Service type to NodePort or ClusterIP and let GKE’s HTTP(S) load balancer handle ingress traffic.
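If Kong is already installed with a LoadBalancer Service, one quick way to switch it is a patch; a sketch, assuming the Service name kong5-kong-proxy from the Ingress example above (adjust the name and namespace to your installation):

```shell
# Switch the Kong proxy Service from LoadBalancer to NodePort
kubectl patch service kong5-kong-proxy \
  -p '{"spec": {"type": "NodePort"}}'

# Confirm the new Service type and allocated node ports
kubectl get service kong5-kong-proxy
```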
Example BackendConfig and Service configuration:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-config
spec:
  healthCheck:
    healthyThreshold: 1
    port: 8100
    requestPath: /status
    timeoutSec: 10
    type: HTTP
    unhealthyThreshold: 10
---
apiVersion: v1
kind: Service
metadata:
  name: kong5-kong-proxy
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"backend-config"}}'
spec:
  type: NodePort
  ports:
  - name: kong-proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: kong-proxy-status
    port: 8100
    protocol: TCP
    targetPort: 8100
  selector:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-app
    app.kubernetes.io/name: kong
This configuration exposes Kong’s proxy on a NodePort and points GKE’s health checks at Kong’s status endpoint (/status on port 8100), allowing the HTTP(S) load balancer to route traffic correctly.
4. Route the Load Balancer Directly to Kong Pods (Optional)
GKE decides to create an HTTP(S) load balancer based on the Ingress resource with the gce class (shown in step 2), not on a Service annotation; the cloud.google.com/load-balancer-type annotation only accepts "Internal" and cannot request an HTTP load balancer. To have the HTTP(S) load balancer send traffic straight to Kong’s pods through network endpoint groups (container-native load balancing), annotate the Kong proxy Service:
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
5. Ensure Kong Listens on HTTP/HTTPS
Verify that Kong is configured to handle HTTP and HTTPS traffic. Kong’s proxy listeners are set through its own configuration (for example, the proxy_listen setting, or the proxy values in the Helm chart). To declare which protocols a particular route accepts, create a KongIngress resource with a route section:
Example:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: proxy-config
route:
  protocols:
  - http
  - https
Attach this configuration to the Ingress resource via the konghq.com/override annotation.
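The Kong Ingress Controller picks up a KongIngress through the konghq.com/override annotation; a sketch, assuming the kong-ingress resource from step 2 and the proxy-config KongIngress above:

```shell
# Point the kong-class Ingress at the KongIngress named proxy-config
kubectl annotate ingress kong-ingress konghq.com/override=proxy-config
```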
6. Deploy and Verify
Apply the configurations:
kubectl apply -f kong-ingress.yaml
kubectl apply -f kong-proxy-service.yaml
Wait for the load balancer to be created. You can monitor this in the GCP console under "Network Services" > "Load Balancing."
Test your application by accessing the domain (e.g., http://example.com).
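A quick smoke test with curl (example.com stands in for your domain; a Via header mentioning kong in the response indicates the request passed through the Kong proxy):

```shell
# Fetch the response headers through the new load balancer
curl -i http://example.com/
```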
Additional Tips
HTTPS Support: To enable HTTPS, integrate a certificate manager like Cert-Manager to automatically provision TLS certificates.
Firewall Rules: Ensure that firewall rules allow traffic to the NodePort range if you’re using a NodePort service.
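A sketch of such a rule (the rule name is illustrative; 30000-32767 is GKE’s default NodePort range, and the source ranges are Google’s documented load balancer and health check ranges):

```shell
# Allow the HTTP(S) load balancer and its health checks
# to reach the NodePort range on the cluster nodes.
gcloud compute firewall-rules create allow-lb-to-nodeports \
  --allow tcp:30000-32767 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16
```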
Monitoring and Debugging: Use tools like kubectl logs and GCP’s monitoring features to troubleshoot issues.
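For example, to tail logs from the Kong deployment (the label selector and container names assume a standard Helm-based install, matching the Service selector shown earlier):

```shell
# Recent logs from the Kong proxy containers
kubectl logs -l app.kubernetes.io/name=kong -c proxy --tail=100

# Recent logs from the ingress controller containers
kubectl logs -l app.kubernetes.io/name=kong -c ingress-controller --tail=100
```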