Get rid of servers - let's try AWS Lambda
[email protected] | www.cloudwebops.com
phone: +381 (0) 66 398 398 | skype: cloudwebops
HQ: Vojvodjanskih brigada 28, Novi Sad, SERBIA
AWS Standard Consulting Partner APN ID: 490243
Get rid of servers - let's try AWS Lambda. Version: March 21, 2016. Author: CLOUDWEBOPS. Notice: CLOUDWEBOPS holds no responsibility for misuse of any of the instructions mentioned in this document. Any unauthorized copying of this document, or any part of it, or publishing it without referencing CLOUDWEBOPS as the author, is strictly forbidden.
What is Lambda?
How Lambda works
Execution Context
Create Our First Lambda Function
Monitoring with CloudWatch
What is Lambda?

Lambda - the microservice without servers. Lambda is a compute service that executes code. Instead of provisioning an EC2 instance and configuring it to run a few functions, we can simply upload just the code we want to execute. There are many reasons to consider using Lambda: very easy deployment and administration of functions, no need to select and configure the proper EC2 instances to run our code, and so on. Lambda also helps us achieve scalable, highly available workloads right out of the box.

Security is a shared responsibility, as with all AWS services. AWS does its part to ensure our code runs in its own sandbox, away from all other processes. It's our responsibility to protect the keys we use with the AWS SDK.

Best of all, Lambda helps keep costs down by charging us only for the execution of our workloads, not for idle resources as with EC2. And just like most services, AWS offers a free tier for Lambda that applies to both new and existing accounts. The free tier covers the first million requests and up to 3.2 million seconds of compute time per month. If you think about it, that's actually a lot of processing that can be done before we get charged anything.

It's worth mentioning that Lambda is not designed for all workloads; it suits some better than others. Perhaps the best fit is code that doesn't require human interaction, but that does not mean we can't use it as part of a larger system.
Nowadays, most companies develop applications and deploy them on servers — whether on-premises or in the cloud. This means they figure out how much server, storage, and database power they need ahead of time, and deploy all of the hardware and software necessary to run the application. Let's say we don't want to deal with all of this, and we are looking for a new model that can handle all of the underlying infrastructure for us.
Image 1. Data Processing Architecture with Servers 1
Well, Amazon Web Services Lambda Service offers us a way to do just that today.
1 Image from AWS re:Invent presentation "ARC308 - The Serverless Company Using AWS Lambda"
Image 2. Data Processing Architecture Without Servers 2
When we use Lambda, instead of deploying these massively large applications, we just deploy an application with some single-action triggers, and only pay for the compute power we actually use, priced in 100-millisecond increments of usage. We can have as many triggers as we like, running in tandem or separately, and each triggers its programmed action once the conditions are met.
How Lambda works

Imagine that you have a shopping cart on your web application. In a microservice web app, you might have an add-to-cart operation running on the same machine as the cart-status and display-cart operations. Every time a user adds an item to the cart, the add-to-cart operation is called, which puts a record into the database. The cart-status operation is then called to show that the cart now has an item in it. The display-cart operation is called to show the contents when the cart is clicked. The data flow stays within the EC2 instance or container that hosts the shopping-cart service, in-memory in the simplified example shown here:
Image 3: Data flow within the EC2 instance or container
The three operations are written as Lambda functions, with a somewhat different data flow. We use the API Gateway to route user interactions with the system to the appropriate Lambda functions. That means when a user adds an item to the cart, API Gateway routes that request to the add-to-cart Lambda function, which then writes a record into a DynamoDB table. An event notifies the cart-status function of a change to DynamoDB, and the UI responds by showing that there are
2 Image from AWS re:Invent presentation "ARC308 - The Serverless Company Using AWS Lambda"
items in the cart. The API Gateway routes a request to the display-cart Lambda function once the cart is clicked. We can see the Lambda data flow in the image below:
Image 4: The Lambda data flow
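The add-to-cart step from the flow above might be sketched roughly like this. The table name, attribute names, and the injected DynamoDB client are all assumptions made for illustration; a real function would pass in an AWS SDK DynamoDB client instead of the stub used here:

```javascript
// Sketch of an add-to-cart Lambda function backed by DynamoDB.
// `db` stands in for an AWS SDK DynamoDB client; it is injected so the
// logic can be shown (and exercised) without real AWS calls.
function makeAddToCart(db) {
    return function handler(event, context, callback) {
        var params = {
            TableName: 'Carts',                      // assumed table name
            Item: {
                userId:   { S: event.userId },
                itemId:   { S: event.itemId },
                quantity: { N: String(event.quantity || 1) }
            }
        };
        db.putItem(params, function (err) {
            if (err) return callback(err);           // surface DynamoDB errors
            callback(null, { added: event.itemId }); // acknowledge to the UI
        });
    };
}
// In real use: exports.handler = makeAddToCart(new AWS.DynamoDB());
```

Injecting the client is a design choice that keeps the function logic testable outside of AWS.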
In the microservice app version, to maintain state we scale all three operations based on the overall load across them, and state is transferred in-memory, which creates complications around caching and/or routing sessions to instances. In the Lambda version, each function scales individually and as needed. There's no cluster coordination of any kind, because all state is in DynamoDB, all at the cost of a small increase in latency.

The key benefits of using Lambda, as we saw in this oversimplified example, become very profound as the application scales. And remember that we only pay for actual requests. Instantiating an EC2 instance or a container is pre-provisioning, which demands that we predict load ahead of time. At low scale, this means we can put up a system on Lambda for a trivial amount of money, but the benefits are really noticeable at higher scale.

With microservices on compute instances, with or without containers, we are making choices about how to combine or separate services for scaling. As our app grows, we'll almost certainly discover that we were wrong in some way about how we separated concerns for scaling purposes, or about which metrics to scale on. We'll discover where we were wrong via bad user experience or wasted resources, and we'll end up correcting via refactoring. With Lambda, each function scales independently, which puts the burden of scaling on AWS; we don't need to concern ourselves with it. The cost is in doing things differently and spreading our application across many events and functions.
Lambda terminology:

Push - In the Push model, an event source launches the Lambda function in response to an event. Events published via the Push model do not have a guaranteed order.

Pull - In the Pull model, Lambda retrieves events from another source; Lambda pulls the events in the order they are published.
Function - A Function is the code we want to execute, and here we have two options for setting one up. We can edit the code in the Lambda inline editor, or we can upload a deployment package containing our function and any additional libraries. Using the inline editor, we can view and edit our function from the Lambda console. This is useful for functions that don't require libraries beyond what is included in our execution environment.
Invocation Role - The Invocation Role grants an event source permission to communicate with the Lambda function; the permissions differ slightly based on the model being used. For the Push model, permission must be granted to the event source: an access policy allows the event source to invoke the Lambda function, and the trust policy gives the event source permission to assume the role. For the Pull model, permission must be granted to Lambda to pull from the event source: the access policy grants Lambda permission to pull from the event source, while the trust policy grants Lambda permission to assume the role. This role is set up via IAM.
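As a rough sketch of what these two policy documents look like for the Push model with S3 as the event source (the ARNs, region, and account number are placeholders):

```javascript
// Sketch of the two IAM policy documents behind an invocation role for
// the Push model with S3 as the event source (ARNs are placeholders).
var trustPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: { Service: 's3.amazonaws.com' }, // the event source...
        Action: 'sts:AssumeRole'                    // ...may assume this role
    }]
};

var accessPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: 'lambda:InvokeFunction',            // permission to invoke
        Resource: 'arn:aws:lambda:us-east-1:123456789012:function:myFunction'
    }]
};
```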
Execution Role - The Execution Role defines what resources the function has access to when running. For example, if we are invoking a function via an S3 event, the execution role needs permission to access the S3 bucket in order to read the file. We make this possible via the access policy. When the function is executed, Lambda assumes the execution role; this is permitted via the trust policy, which grants Lambda the right to do so. We set this role up the same way we did the invocation role. When our Lambda function executes, it runs within its own container, i.e., not connected in any way to our VPC. We need to keep this in mind when designing how our workload will function, since it changes the way we access our resources.
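A sketch of an S3-triggered handler, assuming the standard S3 event record shape; the S3 client is injected here so the flow can be shown without real AWS calls, and the getObject call is exactly the operation the execution role must permit (s3:GetObject on the bucket):

```javascript
// Sketch of an S3-triggered Lambda handler. The execution role must
// allow s3:GetObject on the bucket for the getObject call to succeed.
function makeS3Handler(s3) {
    return function handler(event, context, callback) {
        var record = event.Records[0].s3;  // standard S3 event record shape
        var params = { Bucket: record.bucket.name, Key: record.object.key };
        s3.getObject(params, function (err, data) {
            if (err) return callback(err); // e.g. AccessDenied if the role lacks s3:GetObject
            callback(null, { bytes: data.Body.length });
        });
    };
}
// In real use: exports.handler = makeS3Handler(new AWS.S3());
```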
Execution Context
Image 5. Lambda Execution Context
The container runs on top of an EC2 instance with the memory we allocate to the function. Based on the allocated memory, the function receives a share of the CPU; the ratio of memory to CPU is the same as in the general purpose EC2 instance type. The Lambda documentation tells us that 128 megabytes of memory equates to roughly a 6% CPU share. The CPU share determines the amount of compute time allocated to the function, which will, in the end, affect things such as the latency between the invocation and execution of the function. It's very important to understand this trade-off between performance and cost when using Lambda with our workloads.

There is no EC2 affinity, which means that while the first execution might run on one EC2 instance, subsequent executions might run on any number of other EC2 instances. Lambda does not give us access to the underlying EC2 resources, such as the network interface or file system, except for a temp folder. Even if the execution context happens to run on the same EC2 instance as the last run, any files added to the temp folder or objects added to memory are not guaranteed to still be available. This means we need to ensure our functions are stateless.
Create Our First Lambda Function

In order to create our first Lambda function, we select the Lambda service. If we are using it for the first time, we select "Get Started Now".
Image 6. Get Started with Lambda
After that, we select one of the blueprints. Blueprints are sample configurations of event sources and Lambda functions.
Image 3. Select Blueprint
We will select the "Hello world" blueprint. Then we configure our Lambda function: we need to select the "Role" and the resources for the function. The Lambda function code can be edited inline or uploaded as a .zip file. At this point, we also choose the amount of memory we want to allocate to our function. AWS Lambda then allocates CPU power proportional to the memory, using the same ratio as a general purpose Amazon EC2 instance type, such as the M3 type. For example, if we allocate 256 MB to our Lambda function, it will receive twice the CPU share it would get with 128 MB. We can request additional memory in 64 MB increments, up to 1536 MB.
Image 7. Configure function
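The memory limits described above can be captured in a small illustrative helper (the bounds are the ones stated in this document: 128 MB to 1536 MB in 64 MB increments):

```javascript
// Checks whether a memory size is a valid Lambda setting per the
// limits described above: 128-1536 MB, in 64 MB increments.
function isValidMemorySize(mb) {
    return mb >= 128 && mb <= 1536 && mb % 64 === 0;
}
```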
Once we have finished configuring the Lambda function, we click "Next". After that, we review our newly created function and click "Create function".
Image 8. Review lambda function
Once the Lambda function is created, it is visible in the Lambda dashboard.
Image 9. Lambda HelloWorld Function
Here we can see the code of our function, event sources, and API endpoints, as well as monitoring. If we test our Lambda function via the "Test" button, we can see that action in the "Monitoring" section.
Image 10. Test lambda Hello world function
The Lambda function can also be monitored using CloudWatch.
Monitoring with CloudWatch

CloudWatch is a service used for monitoring the operation and performance of an AWS environment. With Lambda, we can use the log stream functionality of CloudWatch to monitor the execution of a function.
Image 11. Monitoring Lambda function using CloudWatch
Lambda includes a set of metrics we can use to monitor the overall statistics of our functions, and with CloudWatch log streams we can get insights into the execution of our functions by using
console.log. Each console.log message is written out to the log stream, which means we can add messages that indicate our current "location" in the code, and we can output the values of variables to see their state at any point during the execution. If there is an error, we can see what happened and the structures behind it.

Each execution of the function writes a minimum of three messages to the log stream. The start of the execution is logged, along with the time it was launched and a unique identifier called the request ID. The end of the execution is logged with the request ID and the time the execution took to complete. Lastly, we get a report message showing the duration of the execution, the memory allocated, and the memory used during the execution of the function. If we don't add anything else and the execution succeeds, this is all we will have in the log stream. If the execution fails, the error is reported in two messages: one indicates the failure of the task with the full stack trace, and the other summarizes the error message.
Image 12. CloudWatch logs for AWS Lambda function
Combining these messages with the error stack trace can help us track down why something went wrong. It's not desirable to have only a stack trace in production, without any additional information that can pinpoint the cause of the problem. Via CloudWatch metrics, Lambda provides us with insights including the request count, the error count, and latency.
Image 13. CloudWatch metrics for AWS Lambda function
The request count tells us how many times the function was executed; the error count tells us how many of those executions did not succeed; and latency gives us insight into the time, in milliseconds, it takes for an execution to start after it has been invoked. Just like with other CloudWatch metrics, we can create alarms, such as sending a notification when the error count exceeds, for example, 10 errors in a 5-minute window.
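Such an alarm could be expressed roughly as follows, sketched as the parameter object one would pass to CloudWatch's PutMetricAlarm API (the alarm name, function name, and SNS topic ARN are placeholders):

```javascript
// Sketch of a CloudWatch alarm: notify an SNS topic when the Errors
// metric of a Lambda function exceeds 10 within a 5-minute window.
var errorAlarm = {
    AlarmName: 'myFunction-errors',            // placeholder name
    Namespace: 'AWS/Lambda',
    MetricName: 'Errors',
    Dimensions: [{ Name: 'FunctionName', Value: 'myFunction' }],
    Statistic: 'Sum',
    Period: 300,                               // 5 minutes, in seconds
    EvaluationPeriods: 1,
    Threshold: 10,
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: ['arn:aws:sns:us-east-1:123456789012:alerts'] // placeholder ARN
};
```

One would pass this object to the CloudWatch client's putMetricAlarm call; the SNS topic then fans the notification out to email, SMS, or other subscribers.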