
Running containers in ECS Fargate is great until you need persistent storage. At first, it seems straightforward: mount an EFS volume, and you’re done. But then you hit a roadblock. The container fails to start because the expected directory in EFS doesn’t exist.
What do you do? You could manually create the directory from an EC2 instance, but that’s not scalable. You could try scripting something, but now you’re adding complexity. That’s where I found myself, going down the wrong path before realizing that AWS already had a built-in solution that simplified everything. Let’s walk through what I learned.
The problem with persistent storage in ECS Fargate
When you run a task in ECS Fargate, you start from a TaskDefinition. This includes your container settings, environment variables, and any volumes you want to mount. The idea is simple: attach an EFS volume and mount it inside the container.
But there’s a catch: the task won’t start if the directory you point at inside EFS doesn’t already exist. So if your container expects to write to /data and you map that to /my-task/data on the EFS file system, the task fails with a mount error whenever /my-task/data hasn’t been created yet.
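To make this concrete, here’s a minimal sketch of the kind of TaskDefinition that hits this wall. The names (`data`, `app`, the `FileSystemId` parameter, the placeholder image) are illustrative, not from my actual stack:

```yaml
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-task
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: "256"
    Memory: "512"
    ContainerDefinitions:
      - Name: app
        Image: public.ecr.aws/docker/library/busybox:latest
        MountPoints:
          - SourceVolume: data
            ContainerPath: /data           # where the container writes
    Volumes:
      - Name: data
        EFSVolumeConfiguration:
          FilesystemId: !Ref FileSystemId  # parameter holding the fs-... ID
          RootDirectory: /my-task/data     # must already exist on EFS, or the task fails to start
          TransitEncryption: ENABLED
```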
At first, I thought: fine, I’ll just SSH into an EC2 instance, mount the EFS file system, and create the folder manually. That worked. But then I realized something: what happens when I need to deploy multiple environments dynamically? Manually creating directories every time was not an option.
A Lambda function as a workaround
My next idea was to automate the directory creation using a Lambda function. Here’s how it worked:
- The Lambda function mounts the root of the EFS volume.
- It creates the required directory (/my-task/data).
- The ECS task waits for the directory to exist before starting.
To integrate this, I created a custom resource in AWS CloudFormation that triggered the Lambda function whenever I deployed the stack. The function ran, created the directory, and ensured everything was in place before the container started.
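For reference, the workaround looked roughly like this. It’s a sketch rather than my exact template: the resource names are made up, the IAM role and networking details are elided, and it assumes an access point (RootAccessPoint) exposing the root of the file system, since Lambda can only mount EFS through an access point:

```yaml
MakeDirFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.12
    Handler: index.handler
    Role: !GetAtt MakeDirRole.Arn           # execution role with VPC + EFS client permissions (not shown)
    VpcConfig:                              # the function must run inside the VPC to reach EFS
      SubnetIds: [!Ref PrivateSubnet]
      SecurityGroupIds: [!Ref EfsClientSecurityGroup]
    FileSystemConfigs:
      - Arn: !GetAtt RootAccessPoint.Arn    # access point rooted at "/" on the file system
        LocalMountPath: /mnt/efs
    Code:
      ZipFile: |
        import os
        import cfnresponse

        def handler(event, context):
            try:
                if event['RequestType'] in ('Create', 'Update'):
                    # Create the directory the ECS task expects to mount.
                    os.makedirs('/mnt/efs/my-task/data', exist_ok=True)
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception as exc:
                print(exc)
                cfnresponse.send(event, context, cfnresponse.FAILED, {})

EfsDirectory:
  Type: Custom::EfsDirectory
  Properties:
    ServiceToken: !GetAtt MakeDirFunction.Arn
```

Giving the ECS service a `DependsOn: EfsDirectory` is what enforces the ordering: CloudFormation won’t start the task until the custom resource, and therefore the directory, exists.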
It worked. The container launched successfully, and I automated the setup. But something still felt off. I had just introduced an entirely new AWS service, Lambda, to solve what seemed like a simple storage issue. More moving parts mean more maintenance, more security considerations, and more things that can break.
The simpler solution with EFS Access Points
While working on the Lambda function, I stumbled upon EFS Access Points. I needed one to allow Lambda to mount EFS, but then I realized something: ECS Fargate supports EFS Access Points too.
Here’s why that’s important. Access Points in EFS let you:
✔ Automatically create a directory (with the owner and permissions you choose) the first time it’s used.
✔ Restrict access to specific paths and users.
✔ Set permissions so the container only sees the directory it needs.
Instead of manually creating directories or relying on Lambda, I set up an Access Point for /my-task/data and configured my ECS TaskDefinition to use it. That’s it: no extra code, no custom logic, just a built-in feature that solved the problem cleanly.
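Here’s roughly what that looks like in CloudFormation. Again a sketch: the POSIX IDs and permissions are illustrative, and `FileSystemId` is assumed to be a parameter holding the file system ID:

```yaml
DataAccessPoint:
  Type: AWS::EFS::AccessPoint
  Properties:
    FileSystemId: !Ref FileSystemId
    PosixUser:                      # the identity every mount through this access point acts as
      Uid: "1000"
      Gid: "1000"
    RootDirectory:
      Path: /my-task/data
      CreationInfo:                 # with this set, EFS creates the directory on first use
        OwnerUid: "1000"
        OwnerGid: "1000"
        Permissions: "0755"

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    # ...container definitions and Fargate settings as before...
    Volumes:
      - Name: data
        EFSVolumeConfiguration:
          FilesystemId: !Ref FileSystemId
          TransitEncryption: ENABLED     # required when mounting through an access point
          AuthorizationConfig:
            AccessPointId: !Ref DataAccessPoint
          # no RootDirectory here: the access point supplies it
```

The first task that mounts this volume finds /my-task/data already created and owned by the POSIX user above, and it can’t see anything on the file system outside that path.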
The key takeaway
My first instinct was to write more code. A Lambda function, a CloudFormation custom resource, and extra logic, all to create a folder. But the right answer was much simpler: use the tools AWS already provides.
The lesson? When working with cloud infrastructure, resist the urge to overcomplicate things. The easiest solution is often the best one. If you ever find yourself scripting something that feels like it should be built-in, take a step back because it probably is.