Simple Rust-based AWS Lambda Functions

Disclaimer: At the time of writing, many of the tools discussed are in preview and not GA, so whilst it is cool to use them in personal projects like the one this blog post is based on, think carefully before using them in production.

Accompanying Repo: https://github.com/Aiden-Ziegelaar/HelloWorldRustLambda

The Toolset

Recently I've been working with Rust deployed on AWS Lambda, and there's a great suite of tools for managing, templating, and working with Lambda functions in Rust. The approach I've used has the following toolset:

  • Cargo Lambda - This is a great tool for cross-compiling your lambda functions for architectures other than the host you're compiling on. It uses Zig's cross-compiler under the hood (though you can swap this out for cross if you prefer). It also provides templating functionality based on the Liquid template language and comes with some very sensible defaults.

  • AWS Lambda Rust Runtime - This is a series of crates that abstracts away a large portion of the lambda-specific implementation requirements for a custom lambda binary. The effect is a development experience very similar to existing lambda development: you write your handler following a template, and the only difference is a small amount of boilerplate that wraps your handler. It also provides event types for a significant number of AWS upstream triggers for Lambda, including a special crate for handling HTTP integrations (there's a short sketch of a non-HTTP handler just after this list).

  • AWS Rust SDK - This provides bindings to the AWS API from Rust. If you've never looked into how AWS maintains SDKs in so many languages, I'd encourage you to check out the Smithy IDL.

  • Terraform - This is going to be our infrastructure as code (IaC) tool. The Serverless Application Model (SAM) from AWS has some great Rust integrations; however, I prefer Terraform as it allows for a much wider range of resources, providers, and configuration.
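
As a quick illustration of those upstream event types, here's a minimal sketch of a non-HTTP handler for an SQS trigger. It isn't part of the demo project, and it assumes the aws_lambda_events and lambda_runtime crates have been added as dependencies:

use aws_lambda_events::event::sqs::SqsEvent;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

// The runtime deserialises the SQS event payload for us.
async fn sqs_handler(event: LambdaEvent<SqsEvent>) -> Result<(), Error> {
    for record in event.payload.records {
        // Each record carries the message body as an Option<String>.
        println!("received message: {:?}", record.body);
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(sqs_handler)).await
}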

Writing Lambda Functions

The first thing to do is to install the prerequisite tooling:
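
Assuming you already have Rust and Terraform set up, Cargo Lambda can be installed straight from crates.io (its documentation also lists other installers, such as Homebrew):

cargo install cargo-lambda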

You'll also need an AWS account and AWS credentials on your machine; I like to use a tool called Granted to manage these credentials.

Let's take a look at a basic "Hello World" lambda function. We can initialise one in an empty repository by using the command:

cargo lambda init

When prompted, enter y, as we'll be making an HTTP-based function.

This will scaffold out a few files for us, including a Cargo.toml, which is conceptually like our package.json from NodeJS or Pipfile from Python.

Now let's take a look at src/main.rs:

use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};

async fn function_handler(event: Request) -> Result<Response<Body>, Error> {
    // Extract some useful information from the request
    let who = event
        .query_string_parameters_ref()
        .and_then(|params| params.first("name"))
        .unwrap_or("world");
    let message = format!("Hello {who}, this is an AWS Lambda HTTP request");

    // Return something that implements IntoResponse.
    // It will be serialized to the right response event automatically by the runtime
    let resp = Response::builder()
        .status(200)
        .header("content-type", "text/html")
        .body(message.into())
        .map_err(Box::new)?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}

This probably looks fairly familiar: we have some imports and a function handler. The main difference is the small amount of boilerplate at the bottom containing the main function. This is the block that initialises the lambda runtime. You're likely used to other lambda runtimes such as Python or NodeJS, in which AWS provides the runtime for you. That's not going to be the case here: since we're deploying a binary, we need to compile the runtime into our program. This main function is also the place to do any cold start initialisation, like spinning up a database connection or retrieving secrets and credentials, which we'll do later.

We can now build our function for deployment using cargo lambda. I like to use the arm64 target as it offers slightly better performance per dollar than x86_64. We're also going to use the --release flag, which enables optimisations and drops debug info, giving us a smaller deployed binary and faster execution.

cargo lambda build --arm64 --release
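
As an aside, if you want to exercise the function locally before deploying anything, cargo lambda also ships a local emulator (see the Cargo Lambda docs for how to send requests to it):

cargo lambda watch

The build command above renames the compiled binary to bootstrap and places it at target/lambda/<package-name>/bootstrap (here, target/lambda/HelloWorldRustLambda/bootstrap), which is the file the Terraform below zips up.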

Deploying

Now that we have a build artefact, we can look at how to deploy this lambda. We can do so with some fairly simple Terraform:

locals {
  app_name    = "my_app"
  lambda_name = "hello_world_lambda"
}

data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "lambda_role" {
    name               = "${local.app_name}-${local.lambda_name}"
    assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

resource "aws_iam_role_policy_attachment" "basic_execution_role_policy_attachment" {
    role        = aws_iam_role.lambda_role.name
    policy_arn  = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

data "archive_file" "lambda_archive" {
  type        = "zip"
  source_file = "${path.module}/target/lambda/HelloWorldRustLambda/bootstrap"
  output_path = "${path.module}/target/archive/HelloWorldRustLambda.zip"
}

resource "aws_lambda_function" "lambda" {
  filename      = data.archive_file.lambda_archive.output_path
  function_name = "${local.app_name}-${local.lambda_name}"
  role          = aws_iam_role.lambda_role.arn

  handler = "bootstrap"

  source_code_hash = data.archive_file.lambda_archive.output_base64sha256

  runtime = "provided.al2023"

  architectures = ["arm64"]

  memory_size = 1024
}

resource "aws_lambda_function_url" "lambda_url" {
  function_name      = aws_lambda_function.lambda.function_name
  authorization_type = "NONE"
}

output "lambda_url" {
  value = aws_lambda_function_url.lambda_url.url
}

Here we're creating a main.tf file that will add our binary to a zip archive, deploy it as a lambda function, and assign it a lambda function URL.

You will likely need to change the source file path for the archive to match your own project name; in my case it was HelloWorldRustLambda, but yours may be different.

It's interesting to note that because our Lambda is a binary, our runtime is the OS-only provided.al2023 runtime. This runtime isn't unique to Rust; you could just as easily write a lambda function in bash and deploy it on the same runtime.

After this, we can simply run terraform init followed by terraform apply to deploy our new lambda function:
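
terraform init
terraform apply

The apply should spit out a URL that looks something like this: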

https://some-random-string.lambda-url.region.on.aws/

We can now test our lambda with curl:

curl https://some-random-string.lambda-url.region.on.aws

This should return:

Hello world, this is an AWS Lambda HTTP request

Or with URL params:

curl "https://some-random-string.lambda-url.region.on.aws?name=Aiden"

This should return:

Hello Aiden, this is an AWS Lambda HTTP request

Now that we have our basic workflow down, we can start to add other AWS services.

Accessing AWS Services

For our purposes, let's return a random value from AWS Secrets Manager along with our hello world message. The first thing we need to do is add the SDK credentials helper and Secrets Manager SDK crates:

cargo add aws-config aws-sdk-secretsmanager

Then we will need to modify our lambda code to initialise some credentials and call Secrets Manager:

use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};

/// This is the main body for the function.
/// Write your code inside it.
/// There are some code examples in the following URLs:
/// - https://github.com/awslabs/aws-lambda-rust-runtime/tree/main/examples
async fn function_handler(event: Request, client: &aws_sdk_secretsmanager::Client) -> Result<Response<Body>, Error> {
    // Extract some useful information from the request
    let who = event
        .query_string_parameters_ref()
        .and_then(|params| params.first("name"))
        .unwrap_or("world");

    // should probably have some error handling here
    let random_string = client.get_random_password().send().await?.random_password;

    let message = format!("Hello {who}, this is an AWS Lambda HTTP request, your random value is: {random_string}", who=who, random_string=random_string.unwrap_or_default());

    // Return something that implements IntoResponse.
    // It will be serialized to the right response event automatically by the runtime
    let resp = Response::builder()
        .status(200)
        .header("content-type", "text/html")
        .body(message.into())
        .map_err(Box::new)?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    let sdk_config = aws_config::from_env().load().await;
    let secrets_manager = aws_sdk_secretsmanager::Client::new(&sdk_config);

    run(service_fn(|event| function_handler(event, &secrets_manager))).await
}
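
The handler above just propagates any SDK error with the ? operator, which will surface as a function error. If you'd rather catch it and return a clean HTTP 500 yourself, a minimal sketch (not from the original post) of the call with explicit error handling looks something like this:

// Sketch: handle the Secrets Manager call explicitly instead of propagating the error.
let random_string = match client.get_random_password().send().await {
    Ok(output) => output.random_password.unwrap_or_default(),
    Err(err) => {
        tracing::error!("GetRandomPassword failed: {}", err);
        return Ok(Response::builder()
            .status(500)
            .header("content-type", "text/plain")
            .body("internal error".into())
            .map_err(Box::new)?);
    }
};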

Finally, we will also need to attach a new policy to our lambda's role to allow the secretsmanager:GetRandomPassword action (this action isn't scoped to a particular secret, which is why the resource is a wildcard):

data "aws_iam_policy_document" "secretsmanager_access_policy_document" {
    statement {
        sid = "AllowGetRandomPassword"

        effect = "Allow"

        actions = [
            "secretsmanager:GetRandomPassword"
        ]

        resources = [
            "*",
        ]
    }
}

resource "aws_iam_policy" "secretsmanager_table_access_policy" {
    name        = "${local.app_name}-GetRandomPassword"
    description = "Allow GetRandomPassword Access Policy for app ${local.app_name}"
    policy      = data.aws_iam_policy_document.secretsmanager_access_policy_document.json
}

resource "aws_iam_role_policy_attachment" "lambda_role_policy_attachment" {
    role        = aws_iam_role.lambda_role.name
    policy_arn  = aws_iam_policy.secretsmanager_table_access_policy.arn
}

Then we can run the same commands to build and deploy as before:

cargo lambda build --arm64 --release

terraform apply

Now we can make the same curl request to test it out:

curl "https://some-random-string.lambda-url.region.on.aws?name=Aiden"

It should return something like:

Hello Aiden, this is an AWS Lambda HTTP request, your random value is: some_random_string

Cleanup

Finally, we can clean up our deployed resources:

terraform destroy

Once the command has completed, you're all done!

Hopefully, this post has demonstrated that creating and deploying Rust Lambda functions is just as easy as it is with any other runtime.