# Deploy Applications to K3s with ArgoCD GitOps
This guide explains how to add new applications to the cluster using the GitOps workflow.
## Application Structure

All applications are managed through ArgoCD using the App of Apps pattern:
```
argocd/
├── applications.yaml    # Root app that manages all others
├── apps/                # Your applications
│   └── docs/            # Example: docs app
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── gateway.yaml
│       └── httproute.yaml
└── infrastructure/      # Cluster infrastructure
    ├── argocd/
    ├── cert-manager/
    ├── envoy-gateway/
    ├── external-dns/
    └── external-secrets/
```
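The root app is itself an ordinary Application whose `path` points at the directory holding the other Application manifests. As a rough sketch of the shape it takes; the `name` and `path` below are assumptions for illustration, not values copied from the repo:

```yaml
# Hypothetical root "app of apps"; name and path are assumed, not from the repo
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: applications
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/nsudhanva/k3s-oracle.git
    targetRevision: HEAD
    path: argocd  # assumed: the directory containing applications.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```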
## Adding a New Application

1. **Create application directory**

   ```sh
   mkdir -p argocd/apps/my-app
   ```
2. **Create Kubernetes manifests**

   At minimum, you need:

   - `deployment.yaml` - your application pods
   - `service.yaml` - internal service
   - `httproute.yaml` - external HTTP routing (optional)
3. **Add to `applications.yaml`**

   Add a new Application resource:

   ```yaml
   ---
   apiVersion: argoproj.io/v1alpha1
   kind: Application
   metadata:
     name: my-app
     namespace: argocd
     finalizers:
       - resources-finalizer.argocd.argoproj.io
   spec:
     project: default
     source:
       repoURL: https://github.com/nsudhanva/k3s-oracle.git
       targetRevision: HEAD
       path: argocd/apps/my-app
     destination:
       server: https://kubernetes.default.svc
       namespace: my-app
     syncPolicy:
       automated:
         prune: true
         selfHeal: true
       syncOptions:
         - CreateNamespace=true
   ```
4. **Commit and push**

   ```sh
   git add argocd/apps/my-app argocd/applications.yaml
   git commit -m "feat: Add my-app"
   git push
   ```
5. **Verify in ArgoCD**

   ArgoCD will automatically sync within ~3 minutes, or you can force a sync:

   ```sh
   argocd app sync my-app
   ```
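If you want to block until the new app reports healthy (handy in CI scripts), `argocd app wait` can do that; the timeout value here is an arbitrary choice:

```sh
# Wait until my-app is healthy, for up to 5 minutes
argocd app wait my-app --health --timeout 300
```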
## Example: Web Application with HTTPS

### deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
            limits:
              cpu: 100m
              memory: 128Mi
```
### service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```
### httproute.yaml

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
spec:
  parentRefs:
    - name: public-gateway
      namespace: default
  hostnames:
    - "my-app.k3s.sudhanva.me"
  rules:
    - backendRefs:
        - name: my-app
          port: 80
```
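The route attaches to the `public-gateway` Gateway in the `default` namespace (the `gateway.yaml` shown in the directory tree). For orientation, a Gateway of that shape might look like the sketch below; the `gatewayClassName` and TLS secret name are assumptions, not values from this repo:

```yaml
# Illustrative sketch only: gatewayClassName and certificateRefs are assumed
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-gateway
  namespace: default
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.k3s.sudhanva.me"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-k3s-tls  # assumed cert-manager-issued secret
```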
## Using Private Container Images

If your app uses a private container registry (like GHCR):
```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: my-app
          image: ghcr.io/your-username/my-app:latest
```

The `regcred` secret is created during cluster bootstrap with GitHub credentials.
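If you ever need to recreate that secret in another namespace, the generic `kubectl` invocation for a registry credential looks like this; the username and token placeholders are yours to supply (a standard sketch, not the bootstrap script's exact command):

```sh
# Standard docker-registry secret for GHCR; replace the placeholders
kubectl create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<github-token> \
  --namespace my-app
```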
## Application with Secrets

For applications that need secrets from OCI Vault, use External Secrets Operator:
### externalsecret.yaml

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: my-app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: oci-vault
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
  data:
    - secretKey: API_KEY
      remoteRef:
        key: my-app-api-key
```
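The operator then materializes an ordinary Kubernetes Secret named `my-app-secrets`, which the Deployment can consume the usual way, e.g. as environment variables (a minimal sketch):

```yaml
# Pull every key of the generated Secret into the container's environment
spec:
  template:
    spec:
      containers:
        - name: my-app
          envFrom:
            - secretRef:
                name: my-app-secrets
```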
## Monitoring Deployments

### Check ArgoCD Status
Section titled “Check ArgoCD Status”# Via kubectlssh -J ubuntu@132.226.43.62 ubuntu@10.0.2.10 \ "sudo kubectl get applications -n argocd"
# Via ArgoCD CLIargocd app listargocd app get my-appCheck Pod Status
```sh
ssh -J ubuntu@132.226.43.62 ubuntu@10.0.2.10 \
  "sudo kubectl get pods -n my-app"
```

### View Logs
```sh
ssh -J ubuntu@132.226.43.62 ubuntu@10.0.2.10 \
  "sudo kubectl logs -n my-app -l app=my-app"
```
## Resource Limits

The cluster runs on the OCI Always Free tier with limited resources. Keep resource requests minimal:
| Resource | Recommended Request | Limit |
|---|---|---|
| CPU | 10m - 50m | 100m - 200m |
| Memory | 32Mi - 64Mi | 128Mi - 256Mi |
Total cluster capacity: 4 OCPUs, 24 GB RAM (shared across all workloads).
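To see how much of that capacity is actually in use, `kubectl top` should work out of the box, since k3s bundles metrics-server by default:

```sh
# Current CPU/memory usage per node and per pod
ssh -J ubuntu@132.226.43.62 ubuntu@10.0.2.10 \
  "sudo kubectl top nodes && sudo kubectl top pods -A"
```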