Part 6 of 8 in Kubernetes from Zero to Hero

Kubernetes Ingress vs Gateway API: When to Migrate and How to Do It Without Breaking Everything

Aareez Asif · 10 min read

Ingress Has Served Us Well, But It's Showing Its Age

Here's the thing — the Kubernetes Ingress resource was designed in a simpler time. Single service, single hostname, maybe some path-based routing. It worked. But as teams started needing header-based routing, traffic splitting, cross-namespace references, and TLS passthrough, everyone turned to annotations. And annotations are where good APIs go to die.

Every Ingress controller implemented its own annotation scheme. What works on NGINX Ingress doesn't work on Traefik. What works on Traefik doesn't work on HAProxy. Your "portable" Kubernetes manifests became vendor-locked the moment you added nginx.ingress.kubernetes.io/rewrite-target.

The Gateway API is the official answer to this mess, and after running it in production for over a year, let me tell you why it's worth the migration — and how to do it without an outage.

What Gateway API Actually Changes

The Gateway API isn't an incremental improvement over Ingress. It's a fundamentally different model built around role-oriented design. Understanding this distinction matters before you touch any YAML.

Ingress model: One resource type does everything. The cluster admin and the application developer both edit the same Ingress object.

Gateway API model: Responsibilities are split across multiple resources:

GatewayClass  →  Managed by infrastructure provider (like a StorageClass)
Gateway       →  Managed by cluster operators (ports, TLS, addresses)
HTTPRoute     →  Managed by application developers (routing rules)

This separation is not bureaucratic overhead — it's a security boundary. Your app developers can define their own routing rules without needing permissions to modify the gateway's TLS certificates or listener configuration.
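To make that boundary concrete, here's a sketch of the RBAC side (the Role name and namespace are illustrative): an app team gets full control over HTTPRoutes in its namespace, but only read access to Gateways.

```yaml
# Hypothetical Role for an application team: they can manage their own
# routing rules, but cannot touch the Gateway's listeners or TLS config.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: route-editor
  namespace: production
rules:
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["httproutes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["gateways"]
  verbs: ["get", "list", "watch"]
```

Bind this Role to the team's group and the separation is enforced by the API server itself, not by convention.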

Side-by-Side: The Same Routing in Both APIs

Let's see what a typical setup looks like in both approaches.

Ingress (NGINX)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Note: NGINX canary annotations actually require a second,
    # near-duplicate Ingress with the same host. Shown inline here
    # to illustrate the annotation sprawl.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls-cert
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v2/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-v2
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-v1
            port:
              number: 8080

Gateway API

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: api-tls-cert
        namespace: production
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: "true"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-routes
  namespace: production
spec:
  parentRefs:
  - name: production-gateway
    namespace: gateway-infra
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v2
    backendRefs:
    - name: api-v2
      port: 8080
      weight: 80
    - name: api-v2-canary
      port: 8080
      weight: 20
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: api-v1
      port: 8080

Look at the Gateway API version. Traffic splitting is a first-class field, not an annotation hack. Cross-namespace references are explicit with proper RBAC. TLS configuration lives on the Gateway, not the route. Everything that was shoved into annotations is now a proper, typed, validated API field.

When You Should Migrate

Not every team needs to migrate today. Here's my honest assessment:

Migrate now if:

  • You're using advanced routing features via annotations (canary, header matching, traffic splitting)
  • You manage multiple teams sharing ingress infrastructure
  • You're starting a new cluster or greenfield project
  • You're hitting the limits of Ingress's one-resource-does-everything model

Wait if:

  • You have simple routing needs (host + path to service) and Ingress works fine
  • Your controller doesn't support Gateway API yet
  • You're mid-migration on something else and can't absorb another change

Here's the thing — the Ingress API isn't being removed any time soon. It's stable, it works, and controllers will support it for years. This isn't a "migrate or die" situation. It's a "migrate when the benefits outweigh the effort" decision.

The Migration Strategy That Doesn't Break Things

I've migrated three production clusters from Ingress to Gateway API. Here's the approach that worked every time.

Phase 1: Run Both in Parallel

Install a Gateway API-compatible controller alongside your existing Ingress controller. Most modern controllers (NGINX Gateway Fabric, Envoy Gateway, Cilium) support both APIs simultaneously.

# Install the Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

# Deploy your Gateway controller (example: NGINX Gateway Fabric)
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
  --namespace nginx-gateway \
  --create-namespace \
  --set service.type=LoadBalancer

Verify the GatewayClass is available:

kubectl get gatewayclass
# NAME    CONTROLLER                       ACCEPTED
# nginx   gateway.nginx.org/nginx-gateway  True

Phase 2: Create the Gateway and Mirror One Route

Start with a non-critical service. Create the Gateway and an HTTPRoute, but point it at a separate load balancer IP. Test with direct requests before touching DNS.

# Get the new gateway's external IP
kubectl get gateway production-gateway -n gateway-infra -o jsonpath='{.status.addresses[0].value}'

# Test directly against the new gateway
curl --resolve api.example.com:443:203.0.113.50 https://api.example.com/healthz

Phase 3: Shift DNS with Weighted Records

Use weighted DNS to gradually shift traffic from the old Ingress load balancer to the new Gateway load balancer:

# Week 1: 90% old, 10% new
api.example.com  A  198.51.100.10  weight=90  (old Ingress LB)
api.example.com  A  203.0.113.50   weight=10  (new Gateway LB)

# Week 2: 50/50
# Week 3: 10/90
# Week 4: 0/100 — decommission old Ingress

Phase 4: Migrate Remaining Routes

Once the first service is stable on Gateway API, migrate the rest one at a time. Each migration follows the same pattern: create HTTPRoute, test with direct IP, shift DNS, verify, remove old Ingress.

# Track migration progress
kubectl get ingress --all-namespaces --no-headers | wc -l    # Should decrease
kubectl get httproute --all-namespaces --no-headers | wc -l  # Should increase

Gateway API Features Worth Knowing

Beyond basic routing, Gateway API gives you capabilities that required third-party CRDs or controller-specific hacks with Ingress.

Header-Based Routing

rules:
- matches:
  - headers:
    - name: x-api-version
      value: "beta"
  backendRefs:
  - name: api-beta
    port: 8080

Request Mirroring

rules:
- matches:
  - path:
      type: PathPrefix
      value: /api
  backendRefs:
  - name: api-primary
    port: 8080
  filters:
  - type: RequestMirror
    requestMirror:
      backendRef:
        name: api-shadow
        port: 8080

URL Rewriting

rules:
- matches:
  - path:
      type: PathPrefix
      value: /legacy
  filters:
  - type: URLRewrite
    urlRewrite:
      path:
        type: ReplacePrefixMatch
        replacePrefixMatch: /v2
  backendRefs:
  - name: api-v2
    port: 8080

All of these are part of the standard API — not annotations, not CRDs, not controller-specific extensions. They work the same way regardless of which Gateway API implementation you choose.
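These building blocks also compose within a single rule. As a sketch (the canary service name is hypothetical), one rule can rewrite a legacy prefix and split the rewritten traffic at the same time:

```yaml
rules:
- matches:
  - path:
      type: PathPrefix
      value: /legacy
  filters:
  - type: URLRewrite
    urlRewrite:
      path:
        type: ReplacePrefixMatch
        replacePrefixMatch: /v2
  backendRefs:
  - name: api-v2
    port: 8080
    weight: 90
  - name: api-v2-canary   # hypothetical canary deployment
    port: 8080
    weight: 10
```

With Ingress, this combination would have meant stacking several controller-specific annotations across multiple resources.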

Common Pitfalls I've Hit

Cross-namespace references need ReferenceGrants. If your HTTPRoute in namespace production references a backend in namespace shared-services, you need an explicit ReferenceGrant in the target namespace. Without it, the route silently fails to attach.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-production-routes
  namespace: shared-services
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: production
  to:
  - group: ""
    kind: Service

Gateway listeners have explicit namespace selectors. If your HTTPRoute isn't attaching to the Gateway, check that the route's namespace matches the Gateway's allowedRoutes.namespaces selector. This trips up people used to Ingress, where any namespace can reference any IngressClass.

Status conditions are your debugging friend. Every Gateway API resource reports detailed status conditions. Get in the habit of checking them:

kubectl get httproute api-routes -n production -o yaml | grep -A 20 "status:"

Debugging Gateway API Route Issues

When an HTTPRoute isn't working as expected, the Gateway API's status conditions are your first stop. Unlike Ingress, where misconfigurations often fail silently, every Gateway API resource reports detailed status.

# Check if the HTTPRoute is accepted by the Gateway
kubectl get httproute api-routes -n production -o json | jq '.status.parents[].conditions'

The key conditions to look for:

Condition      Status   Meaning
Accepted       True     Route is attached to the Gateway and configured
Accepted       False    Route failed to attach — check the reason field
ResolvedRefs   True     All backend references are valid
ResolvedRefs   False    A backend service or ReferenceGrant is missing

Common failures and their fixes:

# Problem: Route not attaching — "NotAllowedByListeners"
# The Gateway's allowedRoutes namespace selector doesn't match
kubectl label namespace production gateway-access=true

# Problem: "BackendNotFound" — service doesn't exist
kubectl get svc api-v2 -n production
# If missing, create the service first

# Problem: "RefNotPermitted" — cross-namespace reference without ReferenceGrant
# Create a ReferenceGrant in the target namespace (see earlier section)

For a comprehensive health check across all your routes, use this script:

#!/bin/bash
# scripts/check-gateway-health.sh
echo "=== Gateway Status ==="
kubectl get gateways -A -o custom-columns=\
'NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.gatewayClassName,READY:.status.conditions[?(@.type=="Programmed")].status'

echo ""
echo "=== HTTPRoute Status ==="
kubectl get httproutes -A -o custom-columns=\
'NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNAMES:.spec.hostnames[*],ACCEPTED:.status.parents[*].conditions[?(@.type=="Accepted")].status'

echo ""
echo "=== Unattached Routes ==="
kubectl get httproutes -A -o json | \
  jq -r '.items[] | select(.status.parents[]?.conditions[]? |
    .type == "Accepted" and .status == "False") |
  "\(.metadata.namespace)/\(.metadata.name): \(.status.parents[].conditions[] | select(.type == "Accepted") | .reason)"'

Run this after every migration step to confirm all routes are healthy before moving to the next service.

TLS Certificate Management With Gateway API

One of the biggest operational improvements Gateway API offers is cleaner TLS management. With Ingress, every team managed their own TLS secrets. With Gateway API, TLS terminates at the Gateway, and cert-manager integrates directly.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: gateway-infra
spec:
  secretName: wildcard-example-com-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "*.example.com"
    - "example.com"
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: nginx
  listeners:
    - name: https-wildcard
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-example-com-tls
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
    - name: http-redirect
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same

With this setup, application teams never touch TLS configuration. They create HTTPRoutes that reference the Gateway, and TLS termination happens automatically. No more forgotten certificate renewals in individual Ingress resources. No more teams copying TLS secrets across namespaces.
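The app team's side of the contract then stays this small. A sketch (the app name and hostname are hypothetical) of everything a team needs to expose a service through the shared wildcard listener:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-routes
  namespace: production   # namespace must carry the gateway-access: "true" label
spec:
  parentRefs:
  - name: production-gateway
    namespace: gateway-infra
    sectionName: https-wildcard
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: app            # hypothetical backend Service
      port: 8080
```

No TLS block, no certificate reference, no secret: termination is inherited from the listener.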

The http-redirect listener gives you a place to attach an HTTPRoute that redirects all HTTP traffic to HTTPS:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: gateway-infra
spec:
  parentRefs:
    - name: production-gateway
      sectionName: http-redirect
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301

Every HTTP request hitting port 80 gets a 301 redirect to HTTPS. Defined once, enforced cluster-wide.

Final Thoughts

The Gateway API is the future of Kubernetes networking — that much is clear. But "the future" doesn't mean you need to drop everything and migrate today. If Ingress is working for you and your routing needs are simple, there's no shame in waiting.

When you do migrate, do it gradually. Run both APIs in parallel, shift traffic with weighted DNS, and migrate one service at a time. The worst thing you can do is a big-bang migration on a Friday afternoon.

Let me tell you why I'm genuinely optimistic about Gateway API: it's the first time the Kubernetes networking model has been designed with real-world multi-team operations in mind. The role separation, the explicit cross-namespace security model, the typed API fields — this is what production networking should look like.

Aareez Asif

Senior Kubernetes Architect

10+ years orchestrating containers in production. Battle-tested opinions on everything from pod scheduling to service mesh. I've seen clusters burn and helped rebuild them better.