| description |
|---|
| Deploy Rust Lambda function to AWS |
You are helping the user deploy their Rust Lambda function to AWS.
## Your Task
Guide the user through deploying their Lambda function to AWS:
1. **Prerequisites check:**
   - Function is built (`cargo lambda build --release` completed)
   - AWS credentials configured
   - IAM role for Lambda execution exists (or will be created)

2. **Verify AWS credentials:**

   ```sh
   aws sts get-caller-identity
   ```

   If not configured:

   ```sh
   aws configure
   # Or use environment variables:
   # export AWS_ACCESS_KEY_ID=...
   # export AWS_SECRET_ACCESS_KEY=...
   # export AWS_REGION=us-east-1
   ```
3. **Basic deployment:**

   ```sh
   cargo lambda deploy
   ```

   This will:
   - Use the function name from `Cargo.toml` (the binary name)
   - Deploy to the default AWS region
   - Create the function if it doesn't exist
   - Update the function if it does
4. **Deployment with options:**

   Specify function name:

   ```sh
   cargo lambda deploy <function-name>
   ```

   Specify region:

   ```sh
   cargo lambda deploy --region us-west-2
   ```

   Set IAM role:

   ```sh
   cargo lambda deploy --iam-role arn:aws:iam::123456789012:role/lambda-execution-role
   ```

   Configure memory:

   ```sh
   cargo lambda deploy --memory 512
   ```

   - Default: 128 MB
   - Range: 128 MB - 10,240 MB
   - More memory = more CPU (allocated proportionally)
   - Cost increases with memory

   Set timeout:

   ```sh
   cargo lambda deploy --timeout 30
   ```

   - Default: 3 seconds
   - Maximum: 900 seconds (15 minutes)

   Environment variables:

   ```sh
   cargo lambda deploy \
     --env-var RUST_LOG=info \
     --env-var DATABASE_URL=postgres://... \
     --env-var API_KEY=secret
   ```

   Architecture (must match the build):

   ```sh
   # For ARM64 builds
   cargo lambda deploy --arch arm64
   # For x86_64 (default)
   cargo lambda deploy --arch x86_64
   ```

5. **Complete deployment example:**

   ```sh
   cargo lambda deploy my-function \
     --iam-role arn:aws:iam::123456789012:role/lambda-exec \
     --region us-east-1 \
     --memory 512 \
     --timeout 30 \
     --arch arm64 \
     --env-var RUST_LOG=info \
     --env-var API_URL=https://api.example.com
   ```
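As a cross-check of how these flags compose, here is a small Python sketch that assembles the same command line from plain arguments. The `build_deploy_command` helper is hypothetical, for illustration only; it is not part of cargo-lambda:

```python
# Hypothetical helper: assemble a `cargo lambda deploy` invocation
# from plain arguments. Flag names mirror the cargo-lambda CLI above.
def build_deploy_command(name, *, iam_role=None, region=None, memory=None,
                         timeout=None, arch=None, env_vars=None):
    cmd = ["cargo", "lambda", "deploy", name]
    if iam_role:
        cmd += ["--iam-role", iam_role]
    if region:
        cmd += ["--region", region]
    if memory is not None:
        cmd += ["--memory", str(memory)]
    if timeout is not None:
        cmd += ["--timeout", str(timeout)]
    if arch:
        cmd += ["--arch", arch]
    for key, value in (env_vars or {}).items():
        cmd += ["--env-var", f"{key}={value}"]
    return cmd

cmd = build_deploy_command("my-function",
                           iam_role="arn:aws:iam::123456789012:role/lambda-exec",
                           region="us-east-1", memory=512, timeout=30,
                           arch="arm64", env_vars={"RUST_LOG": "info"})
print(" ".join(cmd))
```

This is just a way to see the flag-to-value mapping at a glance; in practice the user runs the command directly.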
## IAM Role Setup

If the user doesn't have an IAM role, guide them:

**Option 1: Let cargo-lambda create it**

```sh
cargo lambda deploy --create-iam-role
```

This creates a basic execution role with CloudWatch Logs permissions.
**Option 2: Create manually with the AWS CLI**

```sh
# Create trust policy
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create role
aws iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document file://trust-policy.json

# Attach basic execution policy
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Get role ARN
aws iam get-role --role-name lambda-execution-role --query 'Role.Arn'
```
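Before passing a role ARN to `--iam-role`, it can be worth sanity-checking its shape locally to catch copy-paste errors early. A minimal Python sketch, with a hypothetical `parse_role_arn` helper:

```python
import re

# Hypothetical helper: extract the account ID and role name from an
# IAM role ARN, raising if the string doesn't have the expected shape.
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::(\d{12}):role/([\w+=,.@/-]+)$")

def parse_role_arn(arn):
    match = ROLE_ARN_RE.match(arn)
    if not match:
        raise ValueError(f"not an IAM role ARN: {arn!r}")
    account_id, role_name = match.groups()
    return account_id, role_name

print(parse_role_arn("arn:aws:iam::123456789012:role/lambda-execution-role"))
# → ('123456789012', 'lambda-execution-role')
```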
**Option 3: Create with additional permissions**

```sh
# For S3 access
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# For DynamoDB access
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess

# For SQS access
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess
```
## Memory Configuration Guide

Help the user choose appropriate memory:
| Memory | vCPU | Use Case | Cost Multiplier |
|---|---|---|---|
| 128 MB | 0.08 | Minimal functions | 1x |
| 512 MB | 0.33 | Standard workloads | 4x |
| 1024 MB | 0.58 | Medium compute | 8x |
| 1769 MB | 1.00 | Full 1 vCPU | 13.8x |
| 3008 MB | 1.77 | Heavy compute | 23.4x |
| 10240 MB | 6.00 | Maximum | 80x |
Guidelines:
- I/O-intensive: 512-1024 MB is usually sufficient
- Compute-intensive: 1024-3008 MB for more CPU
- Test different settings to balance cost against performance
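The memory/cost relationship above can be made concrete with a rough estimate: Lambda bills compute in GB-seconds, so cost scales linearly with both memory and duration. The rate below is illustrative (roughly the x86_64 us-east-1 price at time of writing) and should be checked against current AWS pricing:

```python
# Rough Lambda compute-cost estimate. Cost scales linearly with
# memory * duration (GB-seconds). The rate is illustrative only.
PRICE_PER_GB_SECOND = 0.0000166667  # check current AWS pricing

def estimate_monthly_cost(memory_mb, avg_duration_s, invocations_per_month):
    gb_seconds = (memory_mb / 1024) * avg_duration_s * invocations_per_month
    return gb_seconds * PRICE_PER_GB_SECOND

# 512 MB function averaging 100 ms, one million invocations per month:
print(f"${estimate_monthly_cost(512, 0.1, 1_000_000):.2f}")  # → $0.83
```

Note the caveat from the table above: doubling memory doubles this compute cost, but it also roughly doubles CPU, which may shorten duration enough to offset it, so measure rather than assume.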
## Timeout Configuration Guide
| Timeout | Use Case |
|---|---|
| 3s (default) | Fast API responses, simple operations |
| 10-30s | Database queries, API calls |
| 60-300s | Data processing, file operations |
| 900s (max) | Heavy processing, batch jobs |
Note: a longer timeout means a higher potential cost if the function hangs.
## Deployment Verification

After deployment, verify it works:

1. **Invoke via the AWS CLI:**

   ```sh
   aws lambda invoke \
     --function-name my-function \
     --cli-binary-format raw-in-base64-out \
     --payload '{"key": "value"}' \
     response.json
   cat response.json
   ```

   (`--cli-binary-format raw-in-base64-out` is needed on AWS CLI v2 to pass raw JSON payloads.)

2. **Check logs:**

   ```sh
   aws logs tail /aws/lambda/my-function --follow
   ```

3. **Get function info:**

   ```sh
   aws lambda get-function --function-name my-function
   ```

4. **Invoke with cargo-lambda:**

   ```sh
   cargo lambda invoke --remote --data-ascii '{"test": "data"}'
   ```
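A malformed payload string is a common source of invoke errors, and validating the JSON locally first is cheap. A minimal Python sketch (the `check_payload` helper is hypothetical):

```python
import json

# Hypothetical helper: validate an invoke payload locally before passing
# it to `aws lambda invoke --payload` or `cargo lambda invoke --data-ascii`.
def check_payload(payload: str) -> bool:
    try:
        json.loads(payload)
        return True
    except json.JSONDecodeError as err:
        print(f"invalid JSON payload: {err}")
        return False

check_payload('{"key": "value"}')   # valid
check_payload("{'key': 'value'}")   # invalid: JSON requires double quotes
```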
## Update vs. Create

**First deployment (function doesn't exist):**
- cargo-lambda creates a new function
- Requires an IAM role (or use `--create-iam-role`)

**Subsequent deployments (function exists):**
- cargo-lambda updates the function code
- Can also update configuration (memory, timeout, env vars)
- Maintains existing triggers and permissions
## Advanced Deployment Options

**Deploy from a zip file**

```sh
cargo lambda build --release --output-format zip
cargo lambda deploy --deployment-package target/lambda/my-function.zip
```

**Deploy with layers**

```sh
cargo lambda deploy --layers arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1
```

**Deploy with VPC configuration**

```sh
cargo lambda deploy \
  --subnet-ids subnet-12345 subnet-67890 \
  --security-group-ids sg-12345
```

**Deploy with reserved concurrency**

```sh
cargo lambda deploy --reserved-concurrency 10
```

**Deploy with tags**

```sh
cargo lambda deploy \
  --tags Environment=production,Team=backend
```
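The `--tags` value above is a single comma-separated string of `Key=Value` pairs. As an illustration of that format, a Python sketch with a hypothetical `parse_tags` helper:

```python
# Hypothetical helper: parse a `Key=Value,Key=Value` tag string of the
# kind passed to `cargo lambda deploy --tags` into a dict.
def parse_tags(tag_string):
    tags = {}
    for pair in tag_string.split(","):
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"expected Key=Value, got {pair!r}")
        tags[key] = value
    return tags

print(parse_tags("Environment=production,Team=backend"))
# → {'Environment': 'production', 'Team': 'backend'}
```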
## Deployment via the AWS Console (Alternative)

If the user prefers the console:

1. **Build with zip output:**

   ```sh
   cargo lambda build --release --output-format zip
   ```

2. **Upload via the AWS Console:**
   - Go to the AWS Lambda Console
   - Create a function or open an existing one
   - Upload `target/lambda/<function-name>.zip`
   - Configure runtime: "Custom runtime on Amazon Linux 2023"
   - Set handler to "bootstrap" (custom runtimes ignore the handler value, but this is the convention)
   - Configure memory, timeout, and env vars in the console
## Multi-Function Deployment

For a workspace with multiple functions:

```sh
# Deploy all
cargo lambda deploy --all

# Deploy specific functions
cargo lambda deploy --bin function1
cargo lambda deploy --bin function2
```
## Environment-Specific Deployment

Suggest deployment patterns:

**Development:**

```sh
cargo lambda deploy my-function-dev \
  --memory 256 \
  --timeout 10 \
  --env-var RUST_LOG=debug \
  --env-var ENV=development
```

**Production:**

```sh
cargo lambda deploy my-function \
  --memory 1024 \
  --timeout 30 \
  --arch arm64 \
  --env-var RUST_LOG=info \
  --env-var ENV=production
```
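One way to keep the dev/prod settings above consistent across deployments is a single table of per-environment defaults. A Python sketch, with hypothetical `PROFILES` and `deploy_args` names, mirroring the two examples above:

```python
# Hypothetical per-environment deployment profiles mirroring the
# dev/prod examples above.
PROFILES = {
    "development": {"memory": 256, "timeout": 10,
                    "env": {"RUST_LOG": "debug", "ENV": "development"}},
    "production": {"memory": 1024, "timeout": 30, "arch": "arm64",
                   "env": {"RUST_LOG": "info", "ENV": "production"}},
}

def deploy_args(environment):
    profile = PROFILES[environment]
    args = ["--memory", str(profile["memory"]),
            "--timeout", str(profile["timeout"])]
    if "arch" in profile:
        args += ["--arch", profile["arch"]]
    for key, value in profile["env"].items():
        args += ["--env-var", f"{key}={value}"]
    return args

print(deploy_args("production"))
```

The same effect can be had with shell scripts or a Makefile; the point is one source of truth per environment.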
## Cost Optimization Tips

- **Use ARM64:** about 20% cheaper per GB-second, with comparable performance for most workloads
- **Right-size memory:** test to find the optimal memory/CPU trade-off
- **Optimize timeout:** don't set it higher than needed
- **Monitor invocations:** use CloudWatch to track usage
- **Consider reserved concurrency:** for predictable workloads
## Troubleshooting Deployment

**Issue: "AccessDenied"**

Solution: check AWS credentials and IAM permissions:

```sh
aws sts get-caller-identity
```

**Issue: "Function code too large"**

Solution:
- Zipped package limit: 50 MB (direct upload)
- Unzipped limit: 250 MB
- Optimize binary size (see `/lambda-build`)

**Issue: "InvalidParameterValueException: IAM role not found"**

Solution: create the IAM role first, or use `--create-iam-role`

**Issue: Function deployed but fails at runtime**

Solution:
- Check CloudWatch Logs
- Verify the architecture matches the build (arm64 vs x86_64)
- Test locally first with `cargo lambda watch`
## Post-Deployment

After successful deployment:

1. **Test the function:**

   ```sh
   cargo lambda invoke --remote --data-ascii '{"test": "data"}'
   ```

2. **Monitor logs:**

   ```sh
   aws logs tail /aws/lambda/my-function --follow
   ```

3. **Check metrics** in AWS CloudWatch

4. **Set up CI/CD:** use `/lambda-github-actions` for automated deployment

5. **Configure triggers** (API Gateway, S3, SQS, etc.) via the AWS Console or IaC

Report deployment results including:
- Function ARN
- Region
- Memory/timeout configuration
- Invocation test results