Recently, on a client engagement, I encountered a challenge I had long expected to face one day: end-to-end (E2E) test automation in a CI/CD pipeline for a serverless application. Given the ever-growing adoption of serverless technologies, I knew this problem was inevitable.
This is not to say I am against serverless. Actually, I am all for it. I have utilised AWS Lambdas many a time in my own personal projects as the backend to my React apps. What I was not expecting, however, was to encounter this issue so early in my career, nor to find myself at the forefront of applying existing technology in innovative ways to solve such a fresh problem.
Now I am going to share with you the whirlwind of a ride I went through to arrive at this E2E test automation solution, so you don't have to!
The backend system of the project consisted of multiple AWS Lambda functions making regular calls to AWS DynamoDB, along with heavy authentication against Salesforce. There were also other security layers in place throughout the stack to restrict access to the application.
As we are practising DevOps by actively working to reduce silos and following the 'shift-left' mindset, I scheduled a call with the team to discuss the most practical approach to implementing E2E test automation in our pipelines. During this meeting, several approaches were discussed.
After a thorough discussion, we decided the best option was to run the E2E tests against the deployed backend environment: it should involve less work (we had a tight time constraint) and had the added benefit of verifying that all the individual Lambda functions, DynamoDB tables and the API Gateway had deployed correctly.
Not only was this theoretically possible, it was a popular approach to serverless testing. With this in mind, I set about implementing the solution locally, handling authentication through Salesforce with two-factor authentication in place.
The implementation ended up being a docker-compose file containing the frontend application along with the appropriate testing tool. For this project we chose TestCafe, because it met the requirement for cross-domain navigation.
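As an illustration, a minimal docker-compose file along these lines pairs the frontend with a TestCafe runner. The service names, build context and test path here are assumptions, not the project's actual configuration:

```shell
# Hypothetical sketch of the compose file; service names and paths are assumed.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  frontend:
    build: ./frontend          # the React application under test
    ports:
      - "3000:3000"
  testcafe:
    image: testcafe/testcafe   # official TestCafe image
    depends_on:
      - frontend
    volumes:
      - ./tests:/tests
    # Run the E2E suite headlessly against the frontend service.
    command: chromium:headless /tests/e2e.js
EOF
```

Running `docker-compose up --exit-code-from testcafe` then executes the suite and propagates the test result as the command's exit code.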
With the tests passing locally in Docker, all seemed set for running the E2E tests in the pipeline. However, this is where the other security layers came into play. The deployed backend environment had an added layer of authentication, with both IP whitelisting and verified Microsoft accounts. This explained why everything worked perfectly locally but not in the pipeline.
After this experience, it was apparent that the serverless backend architecture had to run locally in the pipeline in order to perform the E2E tests.
This brought about the technical challenge of running the Lambdas locally, as I really did not want to build a complex container system that might behave nothing like the deployed application. This is where the AWS Serverless Application Model (SAM) came in.
We were already using SAM templates to deploy the whole backend system. Not only do these make it easier to deploy multiple Lambdas and their supporting resources, SAM also provides a CLI tool with commands for running all the Lambdas defined in your template locally.
After some upskilling on the SAM CLI, I was able to run the Lambdas behind a locally configured API Gateway using the `sam local start-api` command. One limitation of this tool is that it does not create the DynamoDB tables.
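Wrapped in a helper script, the invocation looks something like this. The port is an assumption; `sam local start-api` reads the same `template.yaml` used for deployment:

```shell
# Hypothetical helper script; the port number is an assumption.
cat > run-local-api.sh <<'EOF'
#!/bin/sh
set -e
sam build                        # package each Lambda defined in template.yaml
# Spin up a local API Gateway; each request runs its Lambda in a Docker container.
exec sam local start-api --port 3001
EOF
chmod +x run-local-api.sh
```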
However, now that I was familiar with the SAM CLI, I was able to quickly upskill on running DynamoDB with the AWS CLI. I then looked into containerising the DynamoDB server and found that AWS provides a DynamoDB local image (`amazon/dynamodb-local`) which I could add to the docker-compose file.
To set up the DynamoDB tables in that server, I created a container with the AWS CLI installed. This container runs a script to create all the tables and insert any required items or data.
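The table-setup script amounts to a handful of AWS CLI calls pointed at the local endpoint. In this sketch the table name, key schema, seed item and service hostname are all assumptions:

```shell
# Hypothetical create-tables.sh; table names, keys and endpoint are assumed.
cat > create-tables.sh <<'EOF'
#!/bin/sh
set -e
# `dynamodb` is the DynamoDB local service name from the docker-compose file.
ENDPOINT="http://dynamodb:8000"
aws dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --endpoint-url "$ENDPOINT"
# Seed any items the tests rely on.
aws dynamodb put-item \
  --table-name Users \
  --item '{"id": {"S": "test-user"}}' \
  --endpoint-url "$ENDPOINT"
EOF
chmod +x create-tables.sh
```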
So now the test architecture entailed the following: the frontend application and TestCafe running in docker-compose, a DynamoDB local container, a container with the AWS CLI to create and seed the tables, and the SAM CLI serving the Lambdas through a local API Gateway.
At this point I started falling down the rabbit hole.
I was so close.
There was only the SAM CLI tool left to containerise and then it should be able to be run in the pipeline.
Or so I thought.
There have been a few blog posts published about containerising the SAM CLI in order to use it in pipelines. When I say a few, I mean two in English and one in Japanese that I could find through a Google search. The difficulty with running the SAM CLI in a container is that the tool itself uses Docker to create containers for the Lambda functions. Every time I called the local API Gateway running in the container, I got an error back saying the Lambda had failed to reply with a valid JSON response.
At my wits' end, I took a break. In fact, I didn't think about the problem at all over Christmas 2020. Instead, I let my brain cool off and made sure an abundance of alcohol was enjoyed, plans having been cancelled.
On my return from a two-week break, I found a new option in the AWS SAM CLI documentation: the ability to keep the containers the tool creates 'warm', meaning it leaves them running after start-up. This was great, as I was finally able to get inside the containers to debug.
What I found was an issue with copying files over from the host machine when running the SAM CLI in Docker. I had tried two possible approaches, but neither worked.
The first ran Docker in Docker, with the SAM CLI creating a child container. This networked correctly, but the child container never received the code it needed. The second had the SAM CLI container create sibling containers instead. While I managed to get these sibling containers the required files, the response would hit a network issue, as it had left the SAM CLI container's network.
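For reference, the sibling-container variant works by mounting the host's Docker socket into the SAM CLI container, so the Lambda containers it spawns land on the host daemon beside it rather than inside it. The image name here is a placeholder, not a real image:

```shell
# Hypothetical sibling-container invocation; the image name is a placeholder.
cat > run-sam-in-docker.sh <<'EOF'
#!/bin/sh
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/work -w /work \
  my-sam-cli-image \
  sam local start-api --host 0.0.0.0 --port 3001
# The spawned Lambda containers are siblings on the host daemon, which is
# where the cross-network response problem described above appears.
EOF
chmod +x run-sam-in-docker.sh
```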
This discovery was relieving and crippling at the same time. I was thrilled to have found the root of the problem, but dismayed because it looked unsolvable.
I thought to myself, “well if I can’t containerise it, I wonder if I could run the SAM CLI tool in the pipeline with Docker-compose running in parallel? After all, not everything has to be containerised, especially if the container itself needs to use Docker.”
Some research suggested that the documentation did not mention whether the AWS SAM CLI was installed on Azure Pipelines agents. I did, however, find documentation confirming that the AWS CLI was available. So I took the plunge and created my own personal Azure account to check whether the SAM CLI was there, along with all the other requirements I needed.
The script I ran in the pipeline simply checked for the SAM CLI and the other required tools.
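A minimal version of such a check might look like this; the exact list of tools is an assumption:

```shell
# Hypothetical tooling check for the pipeline agent; the tool list is assumed.
report=""
for tool in sam aws docker node; do
  if command -v "$tool" >/dev/null 2>&1; then
    # Record the first line of each tool's version output.
    report="${report}${tool}: $("$tool" --version 2>&1 | head -n 1)\n"
  else
    report="${report}${tool}: NOT FOUND\n"
  fi
done
printf "%b" "$report"
```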
I was thoroughly thrilled to find out that the SAM CLI is in fact installed on Azure Pipelines agents! Armed with this knowledge, I set about writing a script to run the two in parallel, and was finally able to complete the E2E testing framework running in an Azure Pipeline.
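The parallel arrangement can be sketched as a single pipeline step: start the SAM local API in the background, wait for it to answer, then let docker-compose drive the tests. The ports, service name, probe path and timeout here are all assumptions:

```shell
# Hypothetical pipeline step; ports, service names and timeout are assumed.
cat > run-e2e.sh <<'EOF'
#!/bin/sh
sam local start-api --port 3001 &   # serverless backend, in the background
SAM_PID=$!
# Wait (up to ~30s) for the local API Gateway to start answering.
for i in $(seq 1 30); do
  curl -fs http://127.0.0.1:3001/ >/dev/null 2>&1 && break
  sleep 1
done
# Frontend + TestCafe; the testcafe service's exit code becomes the result.
STATUS=0
docker-compose up --exit-code-from testcafe || STATUS=$?
kill "$SAM_PID" 2>/dev/null || true
exit "$STATUS"
EOF
chmod +x run-e2e.sh
```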
My key takeaway from this whole experience is to work more closely with the pipeline team (if your pipeline is provided externally). Find out whether all the technologies you intend to use are already available in the pipeline, as documentation might be outdated or lacking. It might also save you a headache!
I will be making the source code for this implementation, along with an example application and a README of the required setup steps, available on my GitHub here: https://github.com/TheTreeofGrace/aws-sam-cil-docker
Grace Tree is a Delivery Consultant at ECS, specialising in test automation and DevOps. She started her digital transformation career through the ECS Academy in 2019 and went on to succeed on multiple projects for BP via ECS. Grace has also received internal recognition from ECS for her technical prowess, being awarded the Change Markers Award in 2020.