Serverless App Deployment in AWS
Introduction
The internet is littered with basic examples of how to create serverless applications, but few of them cover even slightly more advanced topics. In this post I will show the steps needed to deploy an application that interfaces with a few key components of the AWS infrastructure. These are:
- S3 - will host the static frontend for our app
- API Gateway - will host the endpoints for the web app's requests; some request and response processing can be performed there
- Lambda - this is where the actual business logic will be executed; in our deployment we will use both JavaScript and Rust code
The application itself is going to be a simple meme generator. The user will be able to upload an image and specify the caption, its position, and its size.
All the code needed to run this example can be obtained from my GitHub repository.
The frontend
An example of the frontend can be seen at the Skiddadle web app. It's a Vue.js application hosted in an S3 bucket. The user uploads the image to which the caption is to be added. The base image is then uploaded to a static directory in the same S3 bucket where the web app is hosted. S3 offers a method to upload files directly to a bucket using a POST request; the POST-based upload must be signed using Signature Version 4.
S3 file upload
Signature Version 4
The process of creating the signature is described in considerable detail on the AWS documentation site, with pseudocode and request and response examples. In our application the policy creation/validation will be performed by a Lambda function. In a traditional application this task would most likely be performed on the server, but since we're serverless... ;)
Policy lambda
Let's start by creating a new Lambda function and call it sign-s3-request:
- We will upload it from a .zip file.
- Runtime: Node.js 8.10
- Handler: index.handler
With the following code in the handler:
// Assumed setup (not shown in the original snippet): crypto-js provides the
// HMAC helpers, and a moment-like date library provides the format() calls.
// The key pair, region, service, and bucket values are placeholders.
const crypto = require('crypto-js');
const moment = require('moment');

const API_KEY = process.env.API_KEY; // AWS access key ID
const API_SEC = process.env.API_SEC; // AWS secret access key
const region = 'eu-central-1';
const serviceName = 's3';
const bucket = 'your-bucket-name';

exports.handler = async event => {
  const d = moment.utc(); // assumed: current UTC timestamp
  // Timestamps used in the policy and in the credential scope.
  const expiration = d.format('YYYY-MM-DDTHH:mm:ss\\Z');
  const isoDate = d.format('YYYYMMDDTHHmmss\\Z'); // ISO 8601 long format
  const simpleDate = d.format('YYYYMMDD');
  const credential = `${API_KEY}/${simpleDate}/${region}/${serviceName}/aws4_request`;
  // The POST policy: every field of the upload form must satisfy these conditions.
  const s3Policy = {
    expiration: `${expiration}`,
    conditions: [
      { bucket: `${bucket}` },
      ['starts-with', '$key', `static/${event.params.querystring.filename}`],
      { acl: 'private' },
      ['starts-with', '$Content-Type', 'image/'],
      { 'x-amz-server-side-encryption': 'AES256' },
      ['starts-with', '$x-amz-meta-tag', ''],
      { 'x-amz-credential': credential },
      { 'x-amz-algorithm': 'AWS4-HMAC-SHA256' },
      { 'x-amz-date': `${isoDate}` },
    ],
  };
  const base64Policy = Buffer.from(JSON.stringify(s3Policy), 'utf-8').toString(
    'base64'
  );
  // Derive the signing key and sign the base64-encoded policy document.
  const signatureKey = getSignatureKey(API_SEC, simpleDate, region, serviceName);
  const signature = crypto
    .HmacSHA256(base64Policy, signatureKey)
    .toString(crypto.enc.Hex);
  // Everything the frontend needs to build the multipart POST form.
  const policy = {};
  policy['signature'] = signature;
  policy['key'] = `static/${event.params.querystring.filename}`;
  policy['acl'] = 'private';
  policy['base64Policy'] = base64Policy;
  policy['x-amz-credential'] = credential;
  policy['x-amz-algorithm'] = 'AWS4-HMAC-SHA256';
  policy['x-amz-date'] = `${isoDate}`;
  policy['x-amz-signature'] = signature;
  const response = {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: {
      policy,
    },
  };
  return response;
};
API_KEY and API_SEC are the AWS access key ID and the secret access key of the user performing the upload. The getSignatureKey function is taken almost verbatim from the AWS site:
// Standard Signature Version 4 key derivation (crypto-js): each step HMACs
// the next component of the credential scope into the signing key.
function getSignatureKey(key, dateStamp, regionName, serviceName) {
  const kDate = crypto.HmacSHA256(dateStamp, 'AWS4' + key);
  const kRegion = crypto.HmacSHA256(regionName, kDate);
  const kService = crypto.HmacSHA256(serviceName, kRegion);
  const kSigning = crypto.HmacSHA256('aws4_request', kService);
  return kSigning;
}
It's important to remember to return headers: { 'Access-Control-Allow-Origin': '*' } in the response so that the browser's CORS checks pass. Obviously it would be better to specify the actual domain instead of the *.
Policy API Gateway
We create the API Gateway endpoint, name the resource sign-s3-request, and add a GET method to it.
Then, in the Integration Request, we need to set up the mapping of the request parameters. The mapping allows the parameters passed within the request to the Lambda function to be customized. There are two ways to do this:
- enabling Use Lambda Proxy integration. This copies all body data, parameters, and query strings into the event object exposed to the Lambda function, along with all of the request metadata. The downside is that it disables the possibility of transforming the Lambda function's response. It is, however, a good option for most simple deployments.
- adding an explicit mapping that depends on the Content-Type header of the request. This is a great option if we need to structure the event in a specific way.
The second option is the one used here. In the Mapping Template section we add the Content-Type for which we want the gateway to apply the transformation. The template mapping is editable and allows plenty of flexibility in transforming the request; more information is available in the AWS mapping docs. The mapping sketched below will let us access the filename field with event.params.querystring.filename.
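The original post showed the template in a screenshot; as a minimal sketch, a template trimmed down from AWS's generic Method Request passthrough example to just the request parameters could look like this (registered for Content-Type application/json):
#set($allParams = $input.params())
{
  "params": {
    #foreach($type in $allParams.keySet())
    #set($params = $allParams.get($type))
    "$type": {
      #foreach($paramName in $params.keySet())
      "$paramName": "$util.escapeJavaScript($params.get($paramName))"#if($foreach.hasNext),#end
      #end
    }#if($foreach.hasNext),#end
    #end
  }
}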
There is no more to be done for this resource, so all we need to do now is deploy the API: Actions -> Deploy API. Assuming the deployment stage is called policy, the resulting API URL will be: https://<random_string>.execute-api.<region>.amazonaws.com/policy/sign-s3-request.
Now a curl request such as:
curl -XGET 'https://6ukgq70no1.execute-api.eu-central-1.amazonaws.com/policy/sign-s3-request?filename=hello.txt' | python -mjson.tool
should yield the following example response:
{
"body": {
"policy": {
"acl": "public-read",
"base64Policy": "eyJleHBpcmF0aW9uIjoiMjAxOS0wMS0yOFQxODo1MzozM1oiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJwb3dlcm9ma2VrIn0sWyJzdGFydHMtd2l0aCIsIiRrZXkiLCJzdGF0aWMvaGVsbG8udHh0Il0seyJhY2wiOiJwdWJsaWMtcmVhZCJ9LFsic3RhcnRzLXdpdGgiLCIkQ29udGVudC1UeXBlIiwiaW1hZ2UvIl0seyJ4LWFtei1zZXJ2ZXItc2lkZS1lbmNyeXB0aW9uIjoiQUVTMjU2In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlGNEpFN09RWUtST1lGUUEvMjAxOTAxMjgvZXUtY2VudHJhbC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTkwMTI4VDE4NTMzM1oifV19",
"key": "static/hello.txt",
"signature": "cc4014e979534e93d1fffb72b2c494f92e786b02b008af12fedd6eb748775a06",
"x-amz-algorithm": "AWS4-HMAC-SHA256",
"x-amz-credential": "AKIAIF4JE7OQYKROYFQA/20190128/eu-central-1/s3/aws4_request",
"x-amz-date": "20190128T185333Z",
"x-amz-signature": "cc4014e979534e93d1fffb72b2c494f92e786b02b008af12fedd6eb748775a06"
}
},
"headers": {
"Access-Control-Allow-Origin": "*"
},
"statusCode": 200
}
Using the above response, we can now construct in our frontend the following POST request, which will upload the provided file to the S3 bucket.
const policy = this.$store.state.policy;
const formData = new FormData();
formData.append('key', policy.key);
formData.append('acl', policy.acl);
formData.append('Content-Type', this.file.type);
formData.append('policy', policy.base64Policy);
formData.append('x-amz-credential', policy['x-amz-credential']);
formData.append('x-amz-algorithm', policy['x-amz-algorithm']);
formData.append('x-amz-date', policy['x-amz-date']);
formData.append('x-amz-signature', policy['x-amz-signature']);
formData.append('x-amz-server-side-encryption', 'AES256');
formData.append('file', this.file);
this.$store.dispatch('uploadToS3', formData)
If the POST request is successful, the specified image will end up in the S3 bucket.
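The uploadToS3 action itself is not shown above; a minimal sketch, assuming axios and an illustrative bucket URL (both are assumptions, not the repository's actual code), might look like this:
import axios from 'axios';

// Hypothetical Vuex action posting the signed multipart form to the bucket.
// The bucket URL is a placeholder; substitute your own bucket and region.
const actions = {
  async uploadToS3(context, formData) {
    // S3 accepts the POST-policy upload at the bucket root URL.
    await axios.post('https://your-bucket-name.s3.eu-central-1.amazonaws.com/', formData, {
      headers: { 'Content-Type': 'multipart/form-data' },
    });
  },
};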
Caption rendering
Now that the image is uploaded into the S3 bucket, we can render the specified caption onto it. The business logic responsible for adding the caption to the image is written in Rust. This sample project is also an opportunity to check out Rust in the cloud environment.
Caption rendering lambda
As before, we will start with the Lambda setup. For the past few months a runtime environment for Rust has been available for Lambda. This allows you to easily run Rust in the Lambda container. The only other ways to run Rust code in Lambda would be to create FFI bindings or to compile the Rust code to WebAssembly. The FFI method unfortunately did not work for me, as there were issues with the Rust code when it was called from JavaScript. The issue with compiling to WebAssembly is that some libraries are simply not compatible with it (this is the case with the version of the imageproc library I was using). This should change once WebAssembly becomes more widely used.
Rust AWS runtime
The Rust code needs very little preparation in order to be compatible with the Lambda container. We just need to import the lambda_runtime crate, specify the function that is going to be executed in the Lambda, and compile the code into a binary.
#[macro_use]
extern crate lambda_runtime as lambda;
// Assumed imports (not shown in the original snippet): the Deserialize derive
// comes from serde_derive, and the error/context types from lambda_runtime.
#[macro_use]
extern crate serde_derive;
extern crate log;
extern crate simple_logger;

use lambda::error::HandlerError;
use std::error::Error;

#[derive(Deserialize, Clone, Debug)]
struct Request {
    ...
}

fn my_handler(req: Request, c: lambda::Context) -> Result<Response, HandlerError> {
    Ok(Response {
        ...
    })
}

fn main() -> Result<(), Box<dyn Error>> {
    simple_logger::init_with_level(log::Level::Info)?;
    lambda!(my_handler);
    Ok(())
}
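The snippet above assumes roughly the following dependencies in Cargo.toml (the version numbers are illustrative, current around the time of writing, and not copied from the repository):
[dependencies]
lambda_runtime = "0.1"
serde = "1.0"
serde_derive = "1.0"
log = "0.4"
simple_logger = "1.0"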
A small complication exists with the request object that is passed to the handler function. Since we are dealing with Rust (the type system is strong in this one), we need to create our own type to represent the parameters passed into the handler. As before, we could use the Use Lambda Proxy integration option. However, the object passed in would be quite complex, and this would force us to write plenty of deserialization rules to turn it from JSON into a Rust struct. Luckily, there is already a library where somebody has rustified all the events for us. As before, though, we will not use the proxy option and will instead create the mapping manually.
Let's prepare the environment to compile the Rust code for the Lambda environment. We need to start by adding the Lambda-compatible cargo target:
rustup target add x86_64-unknown-linux-musl
We also need to install the cross-compiler:
brew install filosottile/musl-cross/musl-cross
Now the cross-compiler binary needs to be symlinked to the name the cargo target expects:
ln -s /usr/local/bin/x86_64-linux-musl-gcc /usr/local/bin/musl-gcc
Some tutorials mention setting up a .cargo/config file with the following content:
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"
This however did not work for me, and I had to create the soft link above. It may still be worth adding the .cargo/config entry, as it might start working in future versions of cargo.
The Cargo.toml file also requires a bit of extra love. The section below needs to be added in order to produce the compiled binary under the right name:
[package]
autobins=false
[[bin]]
name = "bootstrap"
path = "src/main.rs"
The autobins=false entry in the [package] section stops cargo from automatically naming the binary after the package (the default behavior); the explicit [[bin]] section then names the binary bootstrap, which is the name the Lambda custom runtime expects.
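With the target and linker in place, the release binary can be built for the musl target (this command is implied by the paths below rather than given explicitly):
cargo build --release --target x86_64-unknown-linux-musl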
The full Cargo.toml and the full content of the Lambda function are available in the GitHub repository. Once the code successfully compiles, just zip the binary and upload it to the Lambda:
zip -j lambda_rust.zip ./target/x86_64-unknown-linux-musl/release/bootstrap
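If you prefer the CLI to the console for the upload step, the AWS CLI can push the archive directly (the function name here is illustrative):
aws lambda update-function-code --function-name caption-renderer --zip-file fileb://lambda_rust.zip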
Caption API endpoint
We create the API endpoint, add the resource /meme, and add the POST method to it. In the Integration Request, as mentioned in the previous section, we don't use the Lambda Proxy integration. Instead, we create the mapping manually:
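The original post showed this mapping as a screenshot; a minimal template that simply wraps the POST body in a body field (an assumption that matches the Request struct below) could be:
{
  "body": $input.json('$')
}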
With this simple mapping, the resulting request object passed to the Rust Lambda handler function will be as follows:
#[derive(Deserialize, Clone, Debug)]
struct CustomEvent {
image: String,
bucket_address: String,
posx: u32,
posy: u32,
scale: u32,
caption: String,
}
#[derive(Deserialize, Clone, Debug)]
struct Request {
body: CustomEvent,
}
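For illustration, a client request body such as the following (all values are made up) would deserialize cleanly into these structs once the mapping wraps it in the body field:
{
  "image": "static/hello.png",
  "bucket_address": "https://your-bucket-name.s3.eu-central-1.amazonaws.com",
  "posx": 10,
  "posy": 20,
  "scale": 32,
  "caption": "such serverless"
}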
With such a simple object we don't have to write any additional deserializers for serde. The Lambda function will fetch the image from the bucket, render the caption at the specified location, and send the resulting image bytes back to the requester. This is a slightly convoluted way of doing it, as ideally you would upload the resulting meme image back to the bucket. This way was just more fun, since sending binary data from a Lambda function requires some extra work.
So the response object from the lambda looks as follows:
Ok(Response {
status_code: 200,
body: Body{ meme_data: encode(&meme_buf), meme_type: content_type},
headers: headers,
is_base64_encoded: Some(true),
})
#[derive(Deserialize, Serialize, Clone)]
pub struct Body{
meme_data: String,
meme_type: String
}
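The outer Response type is not shown above; a sketch whose field names mirror the Ok(Response { ... }) expression (the exact types are assumptions, not the repository's definitions) could look like this:
use std::collections::HashMap;

// Sketch of the response object returned to API Gateway; types are assumed.
#[derive(Serialize, Clone)]
pub struct Response {
    status_code: u16,
    body: Body,
    headers: HashMap<String, String>,
    is_base64_encoded: Option<bool>,
}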
The headers field contains the Content-Type header, whose value is set dynamically depending on the image type.
The body contains the rendered image bytes encoded as base64, plus the content type. The is_base64_encoded flag is set to true since we are encoding the body content. This then requires a little bit of work on the API Gateway side, in the Method Execution section.
Obviously, the Content-Type could be statically set there to e.g. image/png, but what if the response from the Lambda carries a .jpg file? Therefore the Content-Type response header is dynamically updated by API Gateway to the value we return from the Lambda handler, by accessing the response object. We can tell API Gateway to use different headers from the response depending on the use case; in this case it is integration.response.header.Content-Type.
The next step is to set up the mapping for the different supported MIME types. The mapping is the same for all image types:
{
"meme_data": $input.json('$.body.meme_data'),
"meme_type": $input.json('$.body.meme_type')
}
So the API Gateway will return to the client an object containing two fields: the meme image data and the MIME type of the image. This shows how easy it is to manipulate the response data within the API Gateway and how easily (with a bit of preconfiguration) we can dynamically control the content of the response.
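On the client side, the returned object can be turned straight into an image source. A minimal sketch (the endpoint URL and stage are placeholders, and the field names mirror the mapping above):
// Hypothetical helper calling the /meme endpoint; the URL is illustrative.
async function fetchMeme(params) {
  const res = await fetch('https://<api-id>.execute-api.<region>.amazonaws.com/prod/meme', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params), // { image, bucket_address, posx, posy, scale, caption }
  });
  const { meme_data, meme_type } = await res.json();
  // Render the base64-encoded meme directly via a data URL.
  return `data:${meme_type};base64,${meme_data}`;
}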
Conclusion
Hopefully, with this simple example, I was able to show how easy it is to use the AWS infrastructure to create a serverless application. The multitude of supported runtimes and the flexibility of the API Gateway allow for quick development of applications. The API Gateway offers powerful functionality for validating requests and responses, which allows clients and the Lambda business logic to stay focused on the actual problem solving.