Getting started with PodSecurityPolicy on EKS
As you might already know, security is not something that only a specialized department should be responsible for. Instead, we should all keep security in mind and build secure solutions from the very beginning. Today we're going to talk about some practices in the world of containerized apps and Kubernetes. So let's say we have a set of best practices for our containerized workloads. But how do we enforce them? We'll tell you right away.
And the answer is actually pretty straightforward. Do you want to enforce some security policies? Use the PodSecurityPolicy resource. In reality, Pod Security Policy is just an admission controller that checks whether pods comply with an assigned set of rules.
For instance, we can check that a pod is not running in privileged mode, or we can restrict the use of certain volume types, which comes in handy when you don't want to allow access to the host filesystem.
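For the volume part, a minimal sketch could look like this (the volumes field comes straight from the PodSecurityPolicy spec, but the exact allow-list below is just an illustration; note that hostPath is intentionally left out):
spec:
  # Hypothetical allow-list: only these volume types may be used.
  # hostPath is deliberately missing, so pods cannot mount the node's filesystem.
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim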
So why am I even writing this post? We have a specialized resource, so we just create it and that's it, right? Well, it's not that easy, and that's why I wrote this article. Let's begin.
Key principles
As already mentioned, PodSecurityPolicy is just another Kubernetes resource. The following document contains a really simple policy that prohibits running privileged containers.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
But when you apply this document with kubectl, it just does nothing. That's because the PSP mechanism is deeply integrated with the RBAC system. In order to use this policy, you first need to authorize the respective accounts to use it.
RBAC magic
So this is a basic ClusterRole that authorizes the verb use on the PSP created before.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:example
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    verbs:
      - use
    resourceNames:
      - example
Now we need to bind this ClusterRole to the respective ServiceAccount. Here I need to step in and add some context: why are we talking about ServiceAccounts when the workload is created by real users of flesh and blood? The thing is that we usually don't create pod resources ourselves; we use controllers instead (you can find more details in the official documentation). So the actual pod creation is done by someone else, hence we need to bind the ClusterRole accordingly. To show you the whole picture, here are the ServiceAccount and Deployment resources we want to use for our workload.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: app-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: app-test
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app
      containers:
        - name: app
          image: mycustomnamespace/app:v1.0.0
          ports:
            - containerPort: 80
Now we know that the ServiceAccount app lives in the app-test Namespace, so we can create a suitable ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:example:app:app-test
roleRef:
  kind: ClusterRole
  name: psp:example
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: app
    namespace: app-test
From now on, we should be able to run our workload as long as it does not run in privileged mode. Sweet.
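To double-check that the policy really bites, you can try the opposite: a hypothetical change to the pod template of the Deployment above that asks for privileged mode. The ReplicaSet controller will then fail to create the pods, and the rejection shows up in the ReplicaSet events.
      containers:
        - name: app
          image: mycustomnamespace/app:v1.0.0
          # With only the "example" policy available to this workload,
          # the PSP admission controller rejects this pod spec.
          securityContext:
            privileged: true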
EKS-specific configuration
But let's talk about EKS. This offering has PSP enabled out of the box. Do you have any experience with EKS? If so, then you know that you can create any workload with no restrictions.
However, that does not mean the policy engine is disabled. The reason is that PSP is a pretty comprehensive topic, so EKS ships with a preconfigured permissive policy bound to all authenticated users.
The key elements are the PodSecurityPolicy eks.privileged and the ClusterRoleBinding eks:podsecuritypolicy:authenticated.
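You can take a look at both of them before removing anything:
kubectl get podsecuritypolicy eks.privileged
kubectl describe clusterrolebinding eks:podsecuritypolicy:authenticated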
What I'm saying here is that in order to apply your own rules, you need to delete those resources and start with a clean slate.
kubectl delete podsecuritypolicy eks.privileged
kubectl delete clusterrolebinding eks:podsecuritypolicy:authenticated
But when you do so, please be prepared for the following: no new pods can be created from that point on. You'll most likely get an error similar to the one below, since at least one usable policy is required to start any pod once Pod Security Policy is enabled.
unable to validate against any pod security policy: []
Everything mentioned above is already documented by Amazon; personally, I really like this blog post with a step-by-step guide.
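One common mitigation, sketched here with resource names of my own choosing, is to keep a privileged policy around but bind it only to the service accounts in kube-system, so cluster add-ons such as aws-node and kube-proxy keep running while your restrictive policies apply everywhere else:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged-kube-system
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    verbs:
      - use
    resourceNames:
      - privileged   # a privileged PodSecurityPolicy you (re)create yourself
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp:privileged-kube-system
  # A RoleBinding grants use only for pods created in its namespace.
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: psp:privileged-kube-system
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:serviceaccounts:kube-system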
RBAC pitfalls
When applying ultra-restrictive policies, from time to time you might encounter situations where a pod starts even though it does not comply with the PodSecurityPolicy. In my experience, this is always caused by incorrectly configured RBAC.
For instance, you've allowed all verbs on all resources in the extensions API group, so now your ServiceAccount is allowed to use every PodSecurityPolicy resource and the restrictions don't work as intended.
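A hypothetical example of such a footgun: a wildcard rule like the one below also covers the use verb on every PodSecurityPolicy, so any subject bound to it can validate against the most permissive policy in the cluster.
rules:
  - apiGroups:
      - extensions
    # '*' also matches podsecuritypolicies and the use verb,
    # which silently bypasses your restrictive policies.
    resources:
      - '*'
    verbs:
      - '*'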
Please also note that some resources are available in multiple API groups. For instance, PodSecurityPolicy can be in policy and extensions at the same time! Dangerous stuff 😀
But don't worry, you can troubleshoot such issues with the auth can-i command!
kubectl auth can-i use podsecuritypolicy/example --namespace app-test --as system:serviceaccount:app-test:app
Wrap-up
Security really matters, and you should not skip PodSecurityPolicy just because it's not as straightforward as other Kubernetes resources. Sure, this component can't save the whole world, but it can help you with the basics. Moreover, it's not as complex as you might have feared, as you could see in this post. Give it a shot!
Do you have any specific questions regarding PSP? Do not hesitate to reach out to us, we're always happy to help.
Additional resources
- Policy order
- EKS step-by-step guide for PSP
- EKS documentation for PSP
- Docker best practices, see the section about least privileged user
- Kubernetes security context
- Checking API access with Kubectl