For a simple skill, I noticed that the cold start time increased dramatically after adding an analytics API call to my lambda.

So I wrapped the call with a timer and found that this single call accounted for approximately 70% of my startup time at the 128 MB lambda size, even though only 52 MB of memory was actually used.
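The wrapper itself is nothing fancy. A minimal sketch of how such a timing wrapper might look in Node.js (the `initAnalytics` call and its token are placeholders, not the actual VoiceLabs API):

```javascript
// Times a single function call and logs the elapsed milliseconds.
function timeCall(label, fn) {
  const start = process.hrtime();          // high-resolution start time
  const result = fn();                     // the call being measured
  const [sec, nsec] = process.hrtime(start);
  const ms = sec * 1000 + nsec / 1e6;
  console.log(`${label}: ${ms.toFixed(0)} ms`);
  return result;
}

// Usage inside the Lambda handler (placeholder call):
// const analytics = timeCall('Analytics API Call', () =>
//   initAnalytics(APP_TOKEN));
```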

Compare that to a 1024 MB lambda: the call still accounts for 58% of the start time, but the overall duration is about 4.5 seconds faster:

#1 – Small Lambda Size

Memory Size: 128 MB
Max Memory Used: 52 MB
Duration: 5080.36 ms
Billed Duration: 5100 ms
Analytics API Call Duration: 3501 ms (69% of duration)

#2 – Larger Lambda Size

Memory Size: 1024 MB
Max Memory Used: 50 MB
Duration: 594.48 ms
Billed Duration: 600 ms
Analytics API Call Duration: 344 ms (58% of duration)



I ran additional tests where the only thing that changed was the lambda size and found that the larger the lambda size, the faster the API call completed. The duration in the table below covers only the single initialization call of the analytics logger:

Lambda Size (MB)    Analytics API Call Duration (ms)
 128                3501
 256                1562
 320                1081
 384                 901
 448                 782
 512                 719
 576                 619
 640                 554
1024                 344


I was surprised that a single call could affect my startup time so much, and that using a single package would require me to increase the lambda size. I would be interested in understanding what that one line of code is doing to have such consequences.

Even though the underlying cause is currently a mystery, the solution was to increase the lambda size from 128 MB to 512 MB.

Using the Lambda Pricing Page, we learn that we are charged for lambda compute usage for a single execution based on this formula:

Duration (s) * (Lambda Size in MB / 1024) = Total compute (GB-s)

This calculation doesn't include request charges, as the number of calls stays under the free tier of 1 million requests per month.
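Plugging the measured numbers into the formula is a one-liner; a quick sketch (the function name is mine, not from the pricing page):

```javascript
// Total compute (GB-s) = duration (s) * (lambda size in MB / 1024)
function totalComputeGBs(durationSec, memoryMB) {
  return durationSec * (memoryMB / 1024);
}

console.log(totalComputeGBs(3.501, 128).toFixed(3));  // "0.438" (128 MB row)
console.log(totalComputeGBs(0.344, 1024).toFixed(3)); // "0.344" (1024 MB row)
```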

Lambda Size (MB)    API Call Duration (s)    Total Compute (GB-s)
 128                3.501                    0.438
 256                1.562                    0.391
 320                1.081                    0.338
 384                0.901                    0.338
 448                0.782                    0.342
 512                0.719                    0.360
 576                0.619                    0.348
 640                0.554                    0.346
1024                0.344                    0.344
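The comparison can also be automated; a short sketch that recomputes the GB-s column from the measured durations and picks the cheapest size:

```javascript
// Measured analytics call durations (s) per lambda size (MB).
const measurements = [
  [128, 3.501], [256, 1.562], [320, 1.081], [384, 0.901],
  [448, 0.782], [512, 0.719], [576, 0.619], [640, 0.554],
  [1024, 0.344],
];

// Compute GB-s for each size and find the cheapest configuration.
const costs = measurements.map(([mb, sec]) => ({ mb, gbs: sec * (mb / 1024) }));
const cheapest = costs.reduce((a, b) => (b.gbs < a.gbs ? b : a));
console.log(cheapest.mb, cheapest.gbs.toFixed(3)); // 320 0.338
```

By raw GB-s, the 320 MB and 384 MB sizes are nearly tied for cheapest, though I went with 512 MB for the faster cold start.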

These numbers indicate that we are actually spending more (a higher GB-s value) at the 128 MB lambda size than at either 512 MB or 1024 MB. This is, of course, the micro view of a single API call, and the bigger picture matters too. But in this case, increasing the lambda size both decreases the cold start time of the Alexa skill and saves money on AWS charges.

Thanks to Alexa Champion, Terren Peterson, for his excellent article, Performance Tuning Alexa Skills using AWS Lambda. I also appreciate VoiceLabs for being receptive to my feedback and for encouraging other developers to join forces to solve this.

Extra Credit

If you are interested in doing some testing of your own, let me know if you solve the mystery. Here are the packages that my skill is using:

"dependencies": {
   "alexa-sdk": "^1.0.9",
   "aws-sdk": "^2.54.0",
   "lodash": "^4.17.4",
   "node-bitarray": "^0.1.0",
   "voicelabs": "^1.0.1"
}

I have tried moving the require for voicelabs ahead of lodash, and the slow timing is still associated with voicelabs. I still need to try this with a newer version of alexa-sdk, and I wonder whether it matters that voicelabs references alexa-sdk 1.0.7. I am also storing user session data in DynamoDB, in case that is another clue.
