Access K3s Cluster via SSH Tunnel - kubectl Setup

The K3s control plane runs in a private subnet, so all access goes through the ingress node acting as a jump host:

flowchart LR
    subgraph Local["Your Machine"]
        Terminal[Terminal]
        Kubectl[kubectl]
    end

    subgraph Public["Public Subnet"]
        Ingress[k3s-ingress<br/>Jump Host]
    end

    subgraph Private["Private Subnet"]
        Server[k3s-server<br/>API :6443]
    end

    Terminal -->|SSH| Ingress
    Ingress -->|SSH -J| Server
    Kubectl -->|:16443 tunnel| Ingress
    Ingress -->|:6443| Server

Get IP addresses from Terraform:

cd tf-k3s
terraform output
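
If your Terraform config exposes the addresses as named outputs (the output name below is an assumption; check your own config), you can read one directly:

terraform output -raw ingress_public_ip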

SSH to the ingress node directly:

ssh ubuntu@<ingress-public-ip>

SSH to the server through the jump host:

ssh -J ubuntu@<ingress-public-ip> ubuntu@10.0.2.10

Workers in the private subnet are reached the same way:

ssh -J ubuntu@<ingress-public-ip> ubuntu@<worker-private-ip>
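
To avoid retyping the jump flag, you can add entries to ~/.ssh/config (the host aliases here are illustrative):

Host k3s-ingress
    HostName <ingress-public-ip>
    User ubuntu

Host k3s-server
    HostName 10.0.2.10
    User ubuntu
    ProxyJump k3s-ingress

After that, ssh k3s-server connects in one step.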

The tunnel then works as follows:

sequenceDiagram
    participant Local as Local Machine
    participant Ingress as k3s-ingress
    participant Server as k3s-server

    Note over Local,Ingress: Terminal 1: Start Tunnel
    Local->>Ingress: ssh -N -L 16443:10.0.2.10:6443
    Ingress->>Server: Forward to :6443

    Note over Local: Terminal 2: Use kubectl
    Local->>Local: kubectl --server=localhost:16443
    Local->>Ingress: Request via tunnel :16443
    Ingress->>Server: Forward to :6443
    Server->>Ingress: Response
    Ingress->>Local: Response

Open a terminal and start the tunnel:

ssh -N -L 16443:10.0.2.10:6443 ubuntu@<ingress-public-ip>

Port 16443 avoids conflicts with local Kubernetes clusters.
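
To keep the tunnel running without holding a terminal open, add -f so ssh backgrounds itself after authenticating:

ssh -f -N -L 16443:10.0.2.10:6443 ubuntu@<ingress-public-ip>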

In a second terminal, fetch the kubeconfig from the server and rewrite it to point at the tunnel:

mkdir -p ~/.kube
ssh -J ubuntu@<ingress-public-ip> ubuntu@10.0.2.10 "sudo cat /etc/rancher/k3s/k3s.yaml" | \
sed 's|server: https://127.0.0.1:6443|server: https://127.0.0.1:16443|g' | \
sed 's|name: default|name: k3s-oracle|g' | \
sed 's|cluster: default|cluster: k3s-oracle|g' | \
sed 's|user: default|user: k3s-oracle|g' | \
sed 's|current-context: default|current-context: k3s-oracle|g' \
> ~/.kube/k3s-oracle.yaml
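
The file contains cluster credentials, so tighten its permissions and sanity-check the result:

chmod 600 ~/.kube/k3s-oracle.yaml
kubectl --kubeconfig ~/.kube/k3s-oracle.yaml config get-contexts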

To use this cluster alongside others, merge it into your default kubeconfig:

cp ~/.kube/config ~/.kube/config.backup
KUBECONFIG=~/.kube/config:~/.kube/k3s-oracle.yaml kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
kubectl config use-context k3s-oracle
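
To confirm the merge picked up the new context:

kubectl config get-contexts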

Verify connectivity:

kubectl get nodes

Without merging, point kubectl at the file for a single command:

KUBECONFIG=~/.kube/k3s-oracle.yaml kubectl get nodes

Or export for the session:

export KUBECONFIG=~/.kube/k3s-oracle.yaml
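
A shell alias keeps your default config untouched for other work (the alias name is illustrative):

alias k3s-oracle='KUBECONFIG=~/.kube/k3s-oracle.yaml kubectl'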

If argocd-ingress is configured, the Argo CD UI is available at:

https://cd.<your-domain>

Otherwise, port-forward through the jump host:

ssh -L 8080:localhost:8080 -J ubuntu@<ingress-public-ip> ubuntu@10.0.2.10 \
"sudo kubectl port-forward svc/argocd-server -n argocd 8080:443"

Open https://localhost:8080 in a browser.

Username: admin

Password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
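
If you have the argocd CLI installed, you can log in over the same port-forward; --insecure skips TLS verification for the self-signed certificate:

argocd login localhost:8080 --username admin --insecure \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"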

If SSH complains about a changed host key after the cluster is recreated, remove the stale entry:

ssh-keygen -R <ingress-public-ip>
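
If you connected to the server through the jump host, its key is recorded under the private IP and may need the same cleanup:

ssh-keygen -R 10.0.2.10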

If the connection times out, verify in the OCI Console that the ingress node is running and that the security list rules allow SSH from your IP.

If kubectl reports connection refused on localhost:16443, make sure the SSH tunnel is still running in the other terminal.

If the tunnel drops frequently, use autossh for automatic reconnection:

brew install autossh
autossh -M 0 -N -L 16443:10.0.2.10:6443 ubuntu@<ingress-public-ip>

Or add keep-alive options:

ssh -N -L 16443:10.0.2.10:6443 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 ubuntu@<ingress-public-ip>