AWS Lambda Managed Instances with Java 25 and AWS SAM – Part 5: Lambda function initial performance measurements
Source: Dev.to
In part 1 of the series, we explained the ideas behind AWS Lambda Managed Instances and introduced our sample application.
In part 2 we described what a Lambda Capacity Provider is and how to create it using AWS SAM.
Part 3 covered how to create Lambda functions and attach them to a capacity provider.
In part 4 we discussed monitoring, currently unsupported features, challenges, and pricing of LMI.
In this article we’ll measure the initial Lambda function performance.
Lambda function initial performance measurements
First, we measure the performance of the GetProductById function. Its implementation can be found here.
The function was initially implemented in a non‑optimal way (we’ll explore optimizations in the next article).
Send a GET request to products/1 through API Gateway (assuming a product with ID 1 already exists) and inspect the function's platform.report entry in CloudWatch Logs.
First invocation (cold start)
```json
{
  "time": "2026-02-24T06:54:10.882Z",
  "type": "platform.report",
  "record": {
    "requestId": "3c16579f-33eb-42b6-aae0-f204872d7ebd",
    "metrics": {
      "durationMs": 1581.56
    },
    "spans": [
      {
        "name": "responseLatency",
        "start": "2026-02-24T06:54:09.302Z",
        "durationMs": 1579.247
      },
      {
        "name": "responseDuration",
        "start": "2026-02-24T06:54:10.881Z",
        "durationMs": 1.222
      }
    ],
    "status": "success"
  }
}
```
The responseLatency is ≈ 1 580 ms.
Latency can vary depending on the EC2 instance type used by the capacity provider. In part 2 we allowed the following EC2 instance types: m7a.large and m6a.large.
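When comparing many invocations, it helps to extract the responseLatency value from each platform.report entry programmatically. Below is a minimal sketch; the ReportParser class and its embedded SAMPLE string are illustrative (not part of the sample application) and use a simple regex rather than a full JSON parser:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the "responseLatency" span duration from a platform.report log line.
public class ReportParser {

    // Abbreviated copy of the first-invocation report shown above.
    public static final String SAMPLE =
        "{\"type\":\"platform.report\",\"record\":{\"spans\":["
        + "{\"name\":\"responseLatency\",\"start\":\"2026-02-24T06:54:09.302Z\",\"durationMs\":1579.247},"
        + "{\"name\":\"responseDuration\",\"durationMs\":1.222}]}}";

    // Matches the durationMs value that belongs to the responseLatency span.
    private static final Pattern LATENCY = Pattern.compile(
        "\"name\"\\s*:\\s*\"responseLatency\"[^}]*\"durationMs\"\\s*:\\s*([0-9.]+)");

    public static double responseLatencyMs(String reportJson) {
        Matcher m = LATENCY.matcher(reportJson);
        if (!m.find()) {
            throw new IllegalArgumentException("no responseLatency span found");
        }
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println(responseLatencyMs(SAMPLE) + " ms");
    }
}
```

The same pattern can be fed lines from a CloudWatch Logs export to tabulate cold- and warm-start latencies side by side.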
Second invocation (warm start)
```json
{
  "time": "2026-02-24T06:57:03.768Z",
  "type": "platform.report",
  "record": {
    "requestId": "371879fb-588f-4c7f-b80a-8586abaf547d",
    "metrics": {
      "durationMs": 41.937
    },
    "spans": [
      {
        "name": "responseLatency",
        "start": "2026-02-24T06:57:03.727Z",
        "durationMs": 40.681
      },
      {
        "name": "responseDuration",
        "start": "2026-02-24T06:57:03.768Z",
        "durationMs": 0.461
      }
    ],
    "status": "success"
  }
}
```
Now the responseLatency is only ≈ 41 ms.
Why is there still latency?
The blog post Introducing AWS Lambda Managed Instances: Serverless simplicity with EC2 flexibility states that LMI pre‑provisions execution environments, eliminating cold‑start overhead such as:
- Lambda extensions (if any)
- Runtime initialization (e.g., JVM startup)
- Function initialization
What LMI cannot eliminate is the language‑specific warm‑up. For Java, this includes:
- Lazy class loading
- Just‑In‑Time (JIT) compilation
These activities improve performance for subsequent invocations but still add a few tens of milliseconds on the first request after a new environment is provisioned.
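One mitigation is to do representative work during function initialization, so class loading and some JIT profiling happen before the first real request arrives. A rough sketch follows; GetProductByIdHandler, Product, and the priming loop are hypothetical stand-ins, not the sample application's actual code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of init-time priming: exercise the code paths a real request would
// hit while the execution environment is still initializing.
public class GetProductByIdHandler {

    private static final AtomicBoolean PRIMED = new AtomicBoolean(false);

    // Hypothetical stand-in for the application's real serialization path.
    record Product(long id, String name) {
        String toJson() {
            return "{\"id\":" + id + ",\"name\":\"" + name + "\"}";
        }
    }

    // Runs once during function initialization, not per request.
    static {
        prime();
    }

    static void prime() {
        // Repeatedly touch the hot path so classes load and the JIT gathers profiles.
        for (int i = 0; i < 1_000; i++) {
            new Product(i, "warmup-" + i).toJson();
        }
        PRIMED.set(true);
    }

    public static boolean isPrimed() {
        return PRIMED.get();
    }
}
```

Because LMI does not bill initialization as part of a request, work done in a static initializer or constructor shifts latency out of the first invocation; how much it helps depends on how closely the priming mirrors real traffic.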
Important notes
| Feature | Availability for LMI |
|---|---|
| Amazon Corretto CRaC | Not supported (CRaC project) |
| SnapStart | Not supported (SnapStart docs) |
| Class Data Sharing (CDS) | Not usable (no access to underlying EC2) |
| AOT cache (Leyden) | Not usable (no access to underlying EC2) |
If the observed latency (≈ 40 ms) is acceptable for your workload, you may choose to keep the current implementation. Otherwise, you can explore JVM‑specific warm‑up techniques (e.g., eager class loading, custom JIT tuning) in future articles.
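One commonly documented JIT-tuning knob for Java on Lambda is restricting HotSpot to the C1 compiler tier via JAVA_TOOL_OPTIONS, trading peak throughput for a faster start. A hedged SAM fragment is shown below; the logical ID, handler, and runtime values are placeholders to adapt to your template:

```yaml
Resources:
  GetProductByIdFunction:            # placeholder logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.GetProductByIdHandler::handleRequest  # placeholder
      Runtime: java21                # adjust to your runtime version
      Environment:
        Variables:
          # Stop tiered JIT compilation at the C1 tier:
          # faster start-up at the cost of lower peak throughput.
          JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```

Whether this is a net win depends on your traffic pattern: for short-lived, latency-sensitive functions it usually helps; for long-running, throughput-heavy ones the default tiered compilation is typically better.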
Lambda Function First Reaches a Particular EC2 Instance
Each time a request first reaches a particular EC2 instance, for example after the Auto Scaling policy automatically starts a new instance, the JVM warm-up described above happens again on that instance. The scaling itself requires no action on your part.
Conclusion
In this article we measured the initial Lambda‑function performance (the Java code was intentionally not optimized) and still observed a noticeable cold start when the request reached the underlying EC2 instance for the first time.
We discovered that a Lambda function using the Lambda Managed Instances compute type cannot eliminate language‑specific warm‑up periods—e.g., the JVM warm‑up.
In the next article we’ll optimize our Lambda function to significantly reduce this cold‑start time.
If you like my content, please follow me on GitHub and give my repositories a star!
Also, check out my website for more technical content and upcoming public‑speaking activities.