
Kubernetes Ingress with OCI Network Load Balancer

This cluster uses OCI’s Always Free Network Load Balancer combined with Envoy Gateway for ingress.

In a typical cloud Kubernetes deployment:

```mermaid
flowchart LR
    User((User)) --> LB[Cloud Load Balancer<br/>$$$/month]
    LB --> NP[NodePort Service<br/>:30000-32767]
    NP --> Pod[Application Pod]
```

1. A cloud load balancer receives traffic on ports 80/443
2. The load balancer forwards traffic to NodePort services
3. Kubernetes routes traffic to the appropriate pods

This requires a paid load balancer resource.

This cluster uses the OCI Always Free Network Load Balancer for a stable ingress IP:

```mermaid
flowchart LR
    User((User)) --> DNS[Cloudflare DNS]
    DNS --> NLB[OCI Network<br/>Load Balancer]
    NLB --> Ingress[Ingress Node<br/>hostPort :80/:443]
    Ingress --> Envoy[Envoy Pod]
    Envoy --> Pod[Application Pod]

    style NLB fill:#FFD700
    style Ingress fill:#90EE90
    style DNS fill:#87CEEB
```

1. DNS points to the Network Load Balancer's public IP
2. The NLB forwards TCP traffic to the ingress node
3. Envoy Gateway binds ports 80/443 on the host via hostPort
4. Envoy routes traffic to backend services

The NLB provides a stable IP that doesn’t change if the ingress node is replaced.

ExternalDNS creates A records in Cloudflare pointing your domain at the cluster's public ingress IP:

```
k3s.example.com -> 132.226.43.62
```
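For reference, here is a minimal sketch of ExternalDNS container arguments that would produce such a record (the source, domain filter, and owner ID are assumptions, not this cluster's exact configuration):

```yaml
# Sketch: ExternalDNS publishing Gateway API hostnames to Cloudflare.
# All values below are illustrative assumptions.
args:
- --provider=cloudflare
- --source=gateway-httproute      # collect hostnames from HTTPRoutes
- --domain-filter=k3s.example.com
- --txt-owner-id=k3s-cluster      # hypothetical owner ID for record ownership
```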

The Envoy proxy runs as a Kubernetes pod but binds directly to the host’s network ports using hostPort:

```yaml
containers:
- name: envoy
  ports:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
```

When traffic arrives at the ingress node on port 80 or 443, it goes directly to the Envoy container.

The Envoy proxy runs only on the ingress node, pinned there via a nodeSelector:

```yaml
nodeSelector:
  role: ingress
```

The ingress node is labeled during K3s installation with `--node-label role=ingress`.
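Because Envoy Gateway generates the proxy Deployment itself, the nodeSelector is usually supplied through an EnvoyProxy resource rather than by editing the Deployment directly; a sketch, with the resource name assumed:

```yaml
# Sketch: pinning the generated Envoy Deployment to the ingress node.
# The metadata values are assumptions.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: ingress-proxy-config   # hypothetical name
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          nodeSelector:
            role: ingress
```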

```mermaid
flowchart TB
    subgraph Internet
        User((User))
        DNS[DNS: k3s.example.com<br/>→ 132.226.43.62]
    end

    subgraph Ingress["Ingress Node (132.226.43.62)"]
        HP[hostPort :443]
        Envoy[Envoy Container]
    end

    subgraph Cluster["K3s Cluster Network"]
        GW[Gateway API<br/>public-gateway]
        HR[HTTPRoute]
        SVC[Backend Service<br/>ClusterIP]
        Pod[Application Pod]
    end

    User -->|HTTPS| DNS
    DNS -->|TCP :443| HP
    HP --> Envoy
    Envoy --> GW
    GW --> HR
    HR --> SVC
    SVC --> Pod
```

```mermaid
flowchart TB
    subgraph HostNetwork["hostNetwork: true - Not Used"]
        HN_Pod[Pod]
        HN_Host[Host Network Namespace]
        HN_DNS["resolv.conf - Host DNS"]
        HN_Pod --> HN_Host
        HN_Host --> HN_DNS
        HN_DNS -.->|Cannot resolve| K8S_SVC[kubernetes.default]
    end

    subgraph HostPort["hostPort - Used"]
        HP_Pod[Pod]
        HP_Cluster[Cluster Network Namespace]
        HP_CoreDNS[CoreDNS]
        HP_Ports["Host Ports 80/443"]
        HP_Pod --> HP_Cluster
        HP_Cluster --> HP_CoreDNS
        HP_Pod -.-> HP_Ports
        HP_CoreDNS -->|Resolves| K8S_SVC2[kubernetes.default]
    end
```

Using `hostNetwork: true` would give the pod the host's network namespace. This causes DNS resolution issues: the host's `/etc/resolv.conf` knows nothing about Kubernetes services, so names like kubernetes.default fail to resolve (Kubernetes offers `dnsPolicy: ClusterFirstWithHostNet` as a workaround, but it is not needed here).

Using hostPort keeps the pod in the cluster network namespace while still binding to host ports. The pod can resolve Kubernetes services via CoreDNS.
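A quick way to verify this behavior is a throwaway pod that binds a hostPort yet still resolves cluster DNS; the name, image, and port below are arbitrary:

```yaml
# Sketch: a hostPort pod that should still resolve kubernetes.default via CoreDNS.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.36
    command: ["nslookup", "kubernetes.default"]
    ports:
    - containerPort: 8080
      hostPort: 8080   # binds the node's port 8080
```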

All public applications share a single Gateway resource (public-gateway in the envoy-gateway-system namespace, matching the ReferenceGrant below). This is required because only one Envoy deployment can bind hostPort 80/443 on the ingress node.

The Gateway defines multiple HTTPS listeners, one per hostname:

```yaml
listeners:
- name: http
  port: 80
  protocol: HTTP
  allowedRoutes:
    namespaces:
      from: All
- name: https-docs
  port: 443
  protocol: HTTPS
  hostname: "k3s.sudhanva.me"
  tls:
    certificateRefs:
    - name: docs-tls
- name: https-argocd
  port: 443
  protocol: HTTPS
  hostname: "cd.k3s.sudhanva.me"
  tls:
    certificateRefs:
    - name: argocd-tls
      namespace: argocd
```
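For orientation, those listeners sit inside the Gateway resource itself, roughly as follows (the gatewayClassName is an assumption):

```yaml
# Sketch of the enclosing Gateway; gatewayClassName is assumed.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-gateway
  namespace: envoy-gateway-system
spec:
  gatewayClassName: envoy-gateway   # hypothetical GatewayClass name
  listeners:
  # ... listeners as shown above
```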

All HTTP traffic is redirected to HTTPS with a 301 permanent redirect. Each application has two HTTPRoutes, enumerated after the diagram below:

```mermaid
flowchart LR
    subgraph Request["User Request"]
        HTTP[HTTP :80]
        HTTPS[HTTPS :443]
    end

    subgraph Gateway["public-gateway"]
        L80[http listener]
        L443[https-* listener]
    end

    subgraph Routes["HTTPRoutes"]
        Redirect[docs-redirect<br/>301 → HTTPS]
        Main[docs-route<br/>Backend Service]
    end

    HTTP --> L80
    L80 --> Redirect
    Redirect -->|301 Redirect| HTTPS
    HTTPS --> L443
    L443 --> Main
    Main --> App[Application]
```
1. A route attached to the HTTPS listener that serves traffic
2. A redirect route attached to the HTTP listener

The redirect route issues the 301:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: docs-redirect
spec:
  parentRefs:
  - name: public-gateway
    sectionName: http
  hostnames:
  - "k3s.sudhanva.me"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```
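The companion serving route is not shown above; it might look like this, assuming a backend Service named docs-service on port 80:

```yaml
# Sketch: the HTTPS-side route; the backendRef name and port are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: docs-route
spec:
  parentRefs:
  - name: public-gateway
    sectionName: https-docs
  hostnames:
  - "k3s.sudhanva.me"
  rules:
  - backendRefs:
    - name: docs-service   # hypothetical Service
      port: 80
```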

When the Gateway references a TLS secret in a different namespace, a ReferenceGrant is required:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-to-secrets
  namespace: argocd
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: envoy-gateway-system
  to:
  - group: ""
    kind: Secret
```

This allows the public-gateway in the envoy-gateway-system namespace to access TLS secrets in the argocd namespace. Similar ReferenceGrants are needed for each namespace containing TLS secrets (e.g., default for docs-tls).
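The analogous grant for the default namespace would differ only in where it lives (a sketch; the name is assumed):

```yaml
# Sketch: same grant, placed in the default namespace for docs-tls.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-to-secrets   # assumed name
  namespace: default
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: envoy-gateway-system
  to:
  - group: ""
    kind: Secret
```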

All ingress traffic enters through one node. If this node fails, applications become unreachable from the internet. For production workloads, consider:

  • Multiple ingress nodes with round-robin DNS
  • A cloud load balancer (not free)

OCI security lists must allow inbound traffic on ports 80 and 443 from 0.0.0.0/0. The ingress node’s iptables must also accept this traffic.

Envoy terminates TLS using certificates issued by Cert Manager. Traffic between Envoy and backend services is unencrypted within the cluster network.

| Approach | Cost | Complexity | Availability |
|----------|------|------------|--------------|
| Paid cloud load balancer | ~$12/month | Low | High |
| OCI Free NLB + hostPort (this cluster) | Free | Medium | Single node (expandable) |
| hostPort only | Free | Low | Single node, IP tied to instance |
| NodePort + DNS | Free | Low | Single node |

This cluster uses the free OCI Network Load Balancer with hostPort for a stable IP and Gateway API compatibility.