Service Accounts in Kubernetes
Identity and Authentication in a Kubernetes Cluster.
The Kubernetes API provides access to the objects in a cluster. However, any entity must authenticate before it can interact with those objects. The handy kubectl command (or k, as many of us alias it in our terminals) requires a user to be set in the kubeconfig. This user’s credentials, combined with the cluster’s URL and a default namespace, establish a context that tells kubectl how to connect to a cluster. This is the typical mechanism for interacting with a Kubernetes cluster from a terminal. Sometimes, however, a Kubernetes workload (for example, a pod) needs to communicate with the API of the cluster in which it is deployed. The kubeconfig is stored outside the cluster, so a pod cannot authenticate as a kubeconfig user; hence the need for a different mode of authentication. Enter service accounts.
A service account is a distinct identity within a Kubernetes cluster. It is stored in the Kubernetes cluster (in etcd), and as such, it is Kubernetes-managed. Service accounts allow application pods or internal processes in a cluster to authenticate and interact with the Kubernetes API server.
Note: Service Accounts are namespaced. Every Kubernetes namespace gets a default service account upon creation.
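You can see both points with kubectl. A quick sketch, assuming a running cluster; the namespace and pod names below are illustrative:

```shell
# Every namespace gets a "default" service account automatically:
kubectl get serviceaccounts -n my-namespace

# Pods that automount a service account token expose the credentials at a
# well-known path inside the container:
#   /var/run/secrets/kubernetes.io/serviceaccount/{ca.crt,namespace,token}
kubectl exec -n my-namespace my-pod -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount
```

These commands require a live cluster; the mounted token is what a pod presents to the API server to authenticate.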
This article will discuss two points:
Use cases of Service Accounts in Kubernetes.
Example usage of Service Accounts with k8s-wait-for.
Prerequisites:
Basic understanding of containers.
Basic knowledge of Kubernetes terminologies.
Use cases of Service Accounts in Kubernetes
CI/CD pipelines
CI/CD workflows can be integrated with a Kubernetes cluster to spin up ephemeral services for CI jobs. An example is GitLab’s CI/CD integration with Kubernetes. A service account is what grants the CI jobs access to the cluster while restricting them to only the permissions they need.
Monitoring and Logging
Monitoring tools like Prometheus collect metrics from workloads across a cluster. Depending on the monitoring setup, Prometheus scrapes nodes, ingresses, services, and pods, and it discovers those targets by querying the Kubernetes API. A service account is needed to grant Prometheus the required read privileges in the cluster it runs in.
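As a sketch of the kind of grant such a setup typically needs (the ClusterRole below is illustrative, not tied to any particular Prometheus installation), read-only verbs on the discovery targets are usually enough:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```

This ClusterRole would then be bound to Prometheus’s own ServiceAccount with a ClusterRoleBinding, mirroring the Role/RoleBinding pattern shown later in this article.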
Dependent-startup pipeline
A service can be dependent on another service to function (for example, an API that needs a database). To solve this problem, the deployment of the dependent services that the application needs to function should be ordered. A reliable pattern to solve this problem in Kubernetes is to check the deployment status of the dependent services by interacting with the cluster’s API. A service account is used to authenticate against the cluster’s API to get the deployment status of the dependent services.
Example Usage of Service Accounts in a Dependent-Startup Pipeline
It is not uncommon for a deployment to be executed in order. For example, look at the infographic below:
[Infographic: the deployment order of the application’s services]
All the services for this mythical application need to be deployed in this order. A tool that helps with this is k8s-wait-for. It is a simple script that allows waiting for a Kubernetes service, job, or pod to enter the desired state.
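Setting k8s-wait-for’s exact internals aside, the pattern it implements is a simple poll loop against the cluster’s API. Here is a minimal sketch in shell, with get_job_status stubbed out in place of the real kubectl query (the stub and file path are illustrative):

```shell
# get_job_status is a stub standing in for a real query such as:
#   kubectl get job "$1" -o jsonpath='{.status.succeeded}'
get_job_status() {
  cat /tmp/job-status 2>/dev/null || echo 0
}

# Poll until the job reports success or the retry budget runs out.
wait_for_job() {
  job="$1"
  retries="${2:-5}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if [ "$(get_job_status "$job")" = "1" ]; then
      echo "job $job succeeded"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for job $job" >&2
  return 1
}

# Simulate the migration job completing, then wait on it.
echo 1 > /tmp/job-status
wait_for_job go-todo-pre-deploy
```

Run as an init container, a loop like this delays the main containers until the dependency is ready, which is exactly how we will use k8s-wait-for below.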
In this example, we will demonstrate the API deployment dependency of a Todo API by running migrations first before deploying the API. Referenced source code can be found here.
Here’s what the API deployment looks like:
[Diagram: the API deployment flow, with the migration job running before the API starts]
For this implementation, the database migration will be run as a Kubernetes Job. To listen for when to trigger the API deployment after the DB migration job is successful with k8s-wait-for, we’ll use an init container on the API deployment manifest.
An initContainer is a container in a pod that runs to completion before the pod’s application containers start.
Database migration Job manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: go-todo-pre-deploy
  namespace: go-todo # match the Deployment's namespace so the init container can find this Job
  labels:
    app: go-todo
    tier: job
spec:
  template:
    spec:
      containers:
        - name: pre-deploy
          image: docker.io/teniolafatunmbi/go-todo
          command:
            [
              "sh",
              "-c",
              'migrate -source file:///app/internal/database/migrations -database "postgres://$DB_USER:$DB_PASS@$DB_HOST:$DB_PORT/$DB_NAME?sslmode=disable" up',
            ]
          envFrom:
            - configMapRef:
                name: go-todo-config
      restartPolicy: Never
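Assuming the manifest above is saved as migration-job.yaml (the filename is illustrative), the job can be applied and observed directly; both commands require a live cluster:

```shell
kubectl apply -f migration-job.yaml

# Block until the job reports completion, or give up after two minutes:
kubectl wait --for=condition=complete job/go-todo-pre-deploy \
  -n go-todo --timeout=120s
```

This is the same completion signal the init container below waits on; kubectl wait is just a convenient way to observe it from your terminal.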
API Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-todo-deployment
  namespace: go-todo
  labels:
    app: go-todo
    tier: application
spec:
  selector:
    matchLabels:
      app: go-todo
      tier: application
  template:
    metadata:
      name: go-todo-app
      labels:
        app: go-todo
        tier: application
    spec:
      serviceAccountName: go-todo-sa
      initContainers:
        - name: wait-for-go-todo-pre-deploy
          image: ghcr.io/groundnuty/k8s-wait-for:v2.0
          imagePullPolicy: Always
          args:
            - "job"
            - "go-todo-pre-deploy"
      containers:
        - name: go-todo
          image: docker.io/teniolafatunmbi/go-todo
          ports:
            - containerPort: 9000
          envFrom:
            - configMapRef:
                name: go-todo-config
          readinessProbe:
            httpGet:
              path: /todos
              port: 9000
            initialDelaySeconds: 3
            periodSeconds: 3
          livenessProbe:
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 3
            periodSeconds: 3
The initContainers stanza in the go-todo-deployment defines a container that runs the k8s-wait-for script with the arguments job go-todo-pre-deploy. This tells it to wait for the go-todo-pre-deploy job to succeed. Once the job succeeds, the wait-for-go-todo-pre-deploy init container exits successfully, and the API deployment proceeds.
There’s one field to note in this go-todo-deployment manifest: the serviceAccountName field. Remember that k8s-wait-for checks the status of a specified Kubernetes object. To do that, it needs to talk to the Kubernetes API server, and if the pod’s identity lacks the required permissions, the read will fail. You’ll get an error that reads like this:
Error from server (Forbidden): services is forbidden: User "system:serviceaccount:default:default" cannot list resource "<resource-name>" in API group "" in the namespace "<namespace-name>"
This is because every pod is assigned the namespace’s default service account, which has limited privileges: it typically cannot list or modify cluster resources beyond basic API discovery. Whether to grant extra privileges to the default service account or to create a dedicated one is a matter of choice. I use a dedicated service account (go-todo-sa) in this case because I like to be explicit.
Service Account manifests:
Putting a service account to use requires three Kubernetes objects: a ServiceAccount, a Role, and a RoleBinding.
The ServiceAccount defines the name (and other metadata) of the service account.
The Role defines the privileges to grant.
The RoleBinding attaches a defined Role to a ServiceAccount.

Manifests:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-todo-sa
  namespace: go-todo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: go-todo
rules:
  - apiGroups: ["", "batch"]
    resources: ["pods", "jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: go-todo
subjects:
  - kind: ServiceAccount
    name: go-todo-sa
    namespace: go-todo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
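Assuming the manifests above are saved as rbac.yaml (the filename is illustrative), you can apply them and verify the grant by impersonating the service account with kubectl auth can-i; both commands require a live cluster:

```shell
kubectl apply -f rbac.yaml

# Check the exact permission the init container needs. This should answer
# "yes" once the RoleBinding exists:
kubectl auth can-i list jobs -n go-todo \
  --as=system:serviceaccount:go-todo:go-todo-sa
```

Checking with auth can-i before deploying saves a pod rollout just to discover an RBAC typo.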
With this setup (a job to run database migrations, a dedicated service account that can read jobs and pods, and an init container that waits for the migration job to complete), we have an API deployment pipeline that executes in a defined order.
Conclusion
Service accounts are a critical component for authentication in Kubernetes. Understanding how they work is crucial to building and orchestrating Kubernetes workloads and components that interact with the Kubernetes API server securely.