Using container images with AWS Lambda


Container Image Support has just been announced for AWS Lambda and it’s a pretty big deal — I’m very excited because it’s something I’ve wanted for years!

Not Docker

To be clear, what’s been announced is not Lambda running Docker per se: it’s specifically using container images as the packaging mechanism. Think of it as zipping up your OS image, which then gets unzipped in Lambda’s environment when your function executes. And the image is the main feature I (and many others) have wanted, because it lets you bundle all your dependencies, including native binaries and libraries, on any (Linux) OS you like, into a portable package that can easily be tested locally and deployed remotely.

Loads of room

You also get a massive 10GB to do it all in, which overcomes another pain point many have had with trying to get large functions running on Lambda (eg those with ML models) — and is a huge step up from the previous 250MB limit.

Use standard tools

In this post I’ll show you how to use Container Image Support in Lambda with regular docker commands and versatile images like Alpine. We’ll see how you can test your images locally, and deploy them to Lambda.


Let’s say we’re a web publisher and we want to create a service that can convert PDF files to PNGs whenever they’re uploaded, so we can use them as images in our web pages. In this case, we’ve found a PDF-to-PNG converter tool called pdftocairo which does just that, so we want to use it in our Lambda function.

Bring your own OS

In this case I’ve chosen to use Alpine Linux to base the container image on. It’s a popular distribution for container images because it has a very small footprint and a strong track record with security.

Getting our PDF conversion working

First we’ll create a container image we can run locally with docker to check that our tool works. We’ll create a program that will take a file at /tmp/input.pdf and turn it into a PNG file per page in /tmp/output, eg /tmp/output/converted-1.png, /tmp/output/converted-2.png, etc. We’ve chosen /tmp as it’s the only directory under which we can write files in Lambda (something to keep in mind if you’re used to a Docker environment where you can write to any OS path). Once we’ve confirmed this works, we can add the functionality we need to turn it into a Lambda handler and transfer the input/output files to/from S3.

We can use package managers

The package manager on Alpine Linux is called apk (similar to yum on Amazon Linux or apt on Ubuntu), so we use apk add to install the tools we need, and then copy our program over from the build stage of the image.
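A multi-stage Dockerfile along these lines would do it. The stage names, paths, and Alpine tag here are illustrative, not taken from the demo repo; on Alpine, pdftocairo ships in the poppler-utils package:

```dockerfile
# Build stage: compile the Go program with the official Go image
FROM golang:alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /convert .

# Runtime stage: a minimal Alpine image with pdftocairo installed
FROM alpine:3
# pdftocairo is part of the poppler-utils package on Alpine
RUN apk add --no-cache poppler-utils
COPY --from=build /convert /convert
ENTRYPOINT ["/convert"]
```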

Adding a Lambda handler

To turn this into an image that Lambda can execute, we can just modify our Go program to execute a handler function in the same way we would for the Go Lambda runtime.

Testing our Lambda handler end-to-end

So far, we haven’t been able to check the rest of our function’s implementation though: the Lambda handler itself and the S3 upload/download. For this we can use the Lambda Runtime Interface Emulator, and we have a few options for including it:

  1. Include it in the same image we deploy to Lambda
  2. Create a new image for testing
  3. Copy the emulator locally and mount it during docker run
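For option 3, for example, the flow looks roughly like this. The local path, the mapped port, and the pdf-to-png image name are illustrative; the emulator listens on port 8080 inside the container, and the invoke URL path is the one the emulator exposes:

```shell
# Download the emulator once to a local directory
mkdir -p ~/.aws-lambda-rie
curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
  https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie
chmod +x ~/.aws-lambda-rie/aws-lambda-rie

# Mount it into the container and use it as the entrypoint,
# passing our handler binary as the command for it to run
docker run -p 9000:8080 \
  -v ~/.aws-lambda-rie:/aws-lambda \
  --entrypoint /aws-lambda/aws-lambda-rie \
  pdf-to-png /convert

# In another terminal, invoke the function through the emulator
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```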


Container Image Support requires that your Lambda function point to an image URI in an ECR repository. The demo repo includes an infrastructure stack that will set this up for you, but here’s a guide if you want to do it manually:
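Roughly, the manual steps look like this. The account ID, region, repository name, and IAM role are placeholders you’d swap for your own:

```shell
# Create a repository and log Docker in to ECR
aws ecr create-repository --repository-name pdf-to-png
aws ecr get-login-password | docker login --username AWS \
  --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image
docker tag pdf-to-png 123456789012.dkr.ecr.us-east-1.amazonaws.com/pdf-to-png:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/pdf-to-png:latest

# Create the function from the pushed image
aws lambda create-function --function-name pdf-to-png \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/pdf-to-png:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-role
```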

Using our live function

Congratulations! With the function deployed, we can now invoke it to check that it works.


In this post I’ve shown you how container image support in Lambda makes it easy to create complex applications that rely on binary tools.



Michael Hart

Principal Engineer at Cloudflare. Previously: VP Research Engineering at Bustle, AWS Serverless Hero, creator of LambCI.