Suppose you need multiple applications to share files seamlessly, without worrying about running out of storage or wrestling with complex configuration. That’s where Amazon Elastic File System (EFS) comes in. EFS is a fully managed, scalable file system that multiple AWS services or containers can access. In this guide, we’ll take a simple yet comprehensive journey through the process of mounting AWS EFS to an Amazon Elastic Kubernetes Service (EKS) cluster. I’ll make sure to keep it straightforward, so you can follow along regardless of your Kubernetes experience.
Why use EFS with EKS?
Before we go into the details, let’s consider why using EFS in a Kubernetes environment is beneficial. Imagine you have multiple applications (pods) that all need to access the same data—like a shared directory of documents. Instead of replicating data for each application, EFS provides a centralized storage solution that can be accessed by all pods, regardless of which node they’re running on.
Here’s what makes EFS a great choice for EKS:
- Shared Storage: Multiple pods across different nodes can access the same files at the same time, making it perfect for workloads that require shared access.
- Scalability: EFS automatically scales up or down as your data needs change, so you never have to worry about manually managing storage limits.
- Durability and Availability: AWS ensures that your data is highly durable and accessible across multiple Availability Zones (AZs), which means your applications stay resilient even if there are hardware failures.
Typical use cases for using EFS with EKS include machine learning workloads, content management systems, or shared file storage for collaborative environments like JupyterHub.
Prerequisites
Before we start, make sure you have the following:
- EKS Cluster: You need a running EKS cluster, and kubectl should be configured to access it.
- EFS File System: An existing EFS file system in the same AWS region as your EKS cluster.
- IAM Roles: Correct IAM roles and policies for your EKS nodes to interact with EFS.
- Amazon EFS CSI Driver: This driver must be installed in your EKS cluster.
How to mount AWS EFS on EKS
Let’s take it step by step, so by the end, you’ll have a working setup where your Kubernetes pods can use EFS for shared, scalable storage.
Create an EFS file system
To begin, navigate to the EFS Management Console:
- Create a New File System: Select the appropriate VPC and subnets—they should be in the same region as your EKS cluster.
- File System ID: Note the File System ID; you’ll use it later.
- Networking: Ensure that your security group allows inbound NFS traffic (TCP port 2049) from the EKS worker nodes. Think of this as permitting EKS to reach your storage safely. (If you prefer the command line, a CLI sketch follows this list.)
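If you’d rather script this step, here is a minimal AWS CLI sketch; the region, subnet, and security-group IDs are placeholders to replace with your own values:
# Create the file system
aws efs create-file-system \
  --region <region> \
  --tags Key=Name,Value=eks-shared-storage

# Create one mount target in each subnet where your worker nodes run
aws efs create-mount-target \
  --file-system-id <file-system-id> \
  --subnet-id <subnet-id> \
  --security-groups <security-group-id>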
Set up IAM role for the EFS CSI driver
The Amazon EFS CSI driver manages the integration between EFS and Kubernetes. For this driver to work, you need to create an IAM role. It’s a bit like giving the CSI driver its own set of keys to interact with EFS securely.
To create the role:
- Log in to the AWS Management Console and navigate to IAM.
- Create a new role and set up a custom trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-provider-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-provider-id>:sub": "system:serviceaccount:kube-system:efs-csi-*"
        }
      }
    }
  ]
}
Make sure to attach the AmazonEFSCSIDriverPolicy to this role. This step ensures that the CSI driver has the necessary permissions to manage EFS volumes.
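If you prefer the CLI, here’s a sketch of the same step; the role name AmazonEKS_EFS_CSI_DriverRole is just an illustrative choice, and trust-policy.json is the trust policy above saved locally:
# Create the role from the trust policy
aws iam create-role \
  --role-name AmazonEKS_EFS_CSI_DriverRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the AWS managed policy for the EFS CSI driver
aws iam attach-role-policy \
  --role-name AmazonEKS_EFS_CSI_DriverRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy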
Install the Amazon EFS CSI driver
You can install the EFS CSI driver using either the EKS Add-ons feature or via Helm charts. I recommend the EKS Add-on method because it’s easier to manage: installation, versioning, and upgrades are handled through the add-on lifecycle rather than by hand.
Attach the IAM role you created to the EFS CSI add-on in your cluster.
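For example, assuming the role name from the previous step, you could install and verify the add-on like this:
# Install the EFS CSI driver as an EKS add-on with the IAM role attached
aws eks create-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-efs-csi-driver \
  --service-account-role-arn arn:aws:iam::<account-id>:role/AmazonEKS_EFS_CSI_DriverRole

# Confirm the driver pods are running
kubectl get pods -n kube-system | grep efs-csi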
(Optional) Create an EFS access point
Access points provide a way to manage and segregate access within an EFS file system. It’s like having different doors to different parts of the same warehouse, each with its own key and permissions.
- Go to the EFS Console and select your file system.
- Create a new Access Point and note its ID for use in upcoming steps.
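You can also create the access point from the CLI. A hedged sketch; the POSIX user and the /app-data path are example values, not requirements:
aws efs create-access-point \
  --file-system-id <file-system-id> \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/app-data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'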
Configure an IAM Policy for worker nodes
To make sure your EKS worker nodes can access EFS, attach an IAM policy to their role. Here’s an example policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeAccessPoints",
        "elasticfilesystem:DescribeFileSystems",
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Resource": "*"
    }
  ]
}
This grants the nodes just enough permission to discover EFS file systems and access points and to mount and write to them.
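One way to attach it, assuming you saved the policy above as node-efs-policy.json (the role and policy names here are placeholders):
aws iam put-role-policy \
  --role-name <node-instance-role> \
  --policy-name EFSAccess \
  --policy-document file://node-efs-policy.json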
Create a storage class for EFS
Next, create a Kubernetes StorageClass to provision Persistent Volumes (PVs) dynamically. Here’s an example YAML file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: <file-system-id>
  directoryPerms: "700"
  basePath: "/dynamic_provisioning"
  ensureUniqueDirectory: "true"
Replace <file-system-id> with your EFS File System ID. The provisioningMode: efs-ap parameter tells the driver to back each dynamically provisioned volume with its own EFS access point.
Apply the file:
kubectl apply -f efs-storage-class.yaml
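You can confirm the class was registered:
kubectl get storageclass efs-sc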
Create a persistent volume claim (PVC)
Now, let’s request some storage by creating a PersistentVolumeClaim (PVC). Because EFS is elastic, the requested size is required by the Kubernetes API but not actually enforced:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-sc
Apply the PVC:
kubectl apply -f efs-pvc.yaml
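Because the StorageClass uses the default Immediate binding mode, the claim should bind right away; verify with:
kubectl get pvc efs-pvc   # STATUS should show Bound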
Use the EFS PVC in a pod
With the PVC created, you can now mount the EFS storage into a pod. Here’s a sample pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: efs-volume
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs-pvc
Apply the configuration:
kubectl apply -f efs-pod.yaml
You can verify the setup by checking if the pod can access the mounted storage:
kubectl exec -it efs-app -- ls /data
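To see the ReadWriteMany access mode at work, you can launch a second pod against the same claim (efs-app-2 is just an illustrative name) and confirm both pods see the same files:
apiVersion: v1
kind: Pod
metadata:
  name: efs-app-2
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: efs-volume
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs-pvc
Write from one pod and read from the other:
kubectl exec efs-app -- sh -c 'echo hello-from-pod-one > /data/shared.txt'
kubectl exec efs-app-2 -- cat /data/shared.txt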
A note on direct EFS mounting
The steps above use dynamic provisioning, where the CSI driver creates a separate directory (backed by an access point) for each volume. If you simply want to mount an existing EFS file system as-is, you can skip the StorageClass and statically provision a PersistentVolume that references the file system directly, then bind it with a claim. This approach simplifies the setup but offers less flexibility compared to dynamic provisioning. Here’s how you can do it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <file-system-id>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-static-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
Replace <file-system-id> with your EFS File System ID. To mount a specific access point rather than the file system root, use volumeHandle: <file-system-id>::<access-point-id>. A pod then mounts efs-static-pvc exactly the way efs-app mounted efs-pvc in the previous step. This method works well for simpler scenarios where direct access to the whole file system is all you need.
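Assuming both manifests are saved together (efs-static.yaml is a placeholder name), apply them and check the binding:
kubectl apply -f efs-static.yaml
kubectl get pv,pvc   # efs-pv and efs-static-pvc should both report Bound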
Final remarks
Mounting EFS to an EKS cluster gives you a powerful, shared storage solution for Kubernetes workloads. By following these steps, you can ensure that your applications have access to scalable, durable, and highly available storage without needing to worry about complex management or capacity issues.
As you can see, EFS acts like a giant, shared repository that all your applications can tap into. Whether you’re working on machine learning projects, collaborative tools, or any workload needing shared data, EFS and EKS together simplify the whole process.
Now that you’ve walked through mounting EFS on EKS, think about what other applications could benefit from this setup. It’s always fascinating to see how managed services can help reduce the time you spend on the nitty-gritty details, letting you focus on building great solutions.