[vc_row][vc_column width=”1/1″][vc_column_text]While highly technical in places, this article covers some solutions ECS Digital has provided for a client to improve testing strategies and reduce costs. Although not for everyone, we hope that sharing our technical expertise in this area can benefit the community at large.
One of our clients has been going through a period of change in which we are re-platforming their whole tech stack. As part of this process, we felt it was the right time to address an underlying issue we had experienced with the old tech stack.
That problem is test data.
When I speak of test data, this applies not only to our unit and integration tests, but also to our functional UI tests.
We had two fundamental problems we wanted to solve:
It was the responsibility of each developer to test their component with whatever unstructured data they saw fit. If a developer creates a component that expects data of shape A and writes a test using data of shape A, the test will pass. If, however, the real data passed to the component later changes to shape B, we have no idea whether the component still works until quite late in the development process, which introduces a long feedback loop.
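To make the drift concrete, here is a hypothetical TypeScript illustration (the component, types, and data are invented for this example, not taken from the client's codebase): a test pinned to a stale hand-rolled fixture keeps passing while real data has moved on.

```typescript
// The component was written against "shape A": author is a plain string.
interface ArticleA {
  title: string;
  author: string;
}

function renderHeadline(article: ArticleA): string {
  return `${article.title} by ${article.author}`;
}

// The unit test supplies its own fixture in shape A, so the test passes:
const fixtureA: ArticleA = { title: "GraphQL fixtures", author: "ECS Digital" };
console.log(renderHeadline(fixtureA)); // "GraphQL fixtures by ECS Digital"

// Meanwhile the real API has drifted to "shape B": author is now an object.
// The test still passes against its stale fixture, and the mismatch only
// surfaces much later, when the component meets real data:
const realData = { title: "GraphQL fixtures", author: { name: "ECS Digital" } };
console.log(renderHeadline(realData as unknown as ArticleA)); // "GraphQL fixtures by [object Object]"
```

Nothing in the test suite flags the break; only generating fixtures from the real schema closes this gap.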
Our functional UI tests ran against a long-lived full stack. There was a known data set that we could reset to, all stored as JSON in its own repository, completely separated from the rest of the stack and its tests. To update the fixture data on the full stack you would need to understand which test cases were already using the JSON, then manually change the JSON, create a pull request, get it reviewed and merged, and then run the mechanism to reset the data on the long-lived stack.
At the start of the project this fixture data was very useful. It allowed our functional UI tests to be robust and repeatable. As a result, when all our tests passed we had high confidence our site was releasable.
Unfortunately, over time, as software naturally adapts, our fixture data became harder to update and maintain. Some parts were updated inconsistently, and we had no clarity on which tests were tied to which fixture data and therefore shouldn’t be updated. Eventually our fixture data became unmaintainable, or updating it would break other tests.
We spent a lot of time thinking about how to solve both of these problems, and after several approaches we finally achieved something we felt was clean and maintainable.
Like a lot of the industry we are migrating to a GraphQL back end.
This opened up an interesting opportunity, as GraphQL uses types and fields to define a query language for your API. You can only ever query fields that exist on their corresponding types; otherwise GraphQL will complain.
GraphQL also supports schema introspection, which provides a mechanism to pull down the schema from any GraphQL server that has it enabled. This is useful for seeing which queries your API will support.
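As a small illustration, introspection is just a regular query against the reserved `__schema` field. The minimal query below lists type and field names; the full query used by tooling also pulls in arguments, descriptions, and more.

```graphql
{
  __schema {
    types {
      name
      fields {
        name
      }
    }
  }
}
```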
Another tool, GraphQL Code Generator, can take a GraphQL schema as input and output all the type definitions of your schema as a TypeScript file, along with any type descriptions present on the schema (shown below).
https://github.com/dotansimha/graphql-code-generator[/vc_column_text][vc_single_image image=”2900″][vc_column_text]Problem One Solved
Now that we had the capability to translate our production GraphQL server types into TypeScript definitions, we were confident we could build a fixture generator package matching our production GraphQL server. A key part of building this package was providing a consistent API for all clients of the fixture generator package. We also ensured that unit and integration tests were baked in whenever the logic for building fixture data started to become complicated.
Once the generator package was in place, the workflow was as follows: any time a client of the fixture generator package runs, schema introspection and type-file generation happen first. The whole process takes around a second, and once it completes the fixture generator TypeScript package builds. If the schema has changed and the fixtures no longer adhere to the types, the build fails and you are alerted straight away.
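For readers unfamiliar with GraphQL Code Generator, the introspect-and-generate step is typically driven by a small config file. The sketch below is illustrative; the schema URL and output path are assumptions, not the client's actual setup.

```yaml
# codegen.yml - pull the schema via introspection and emit TypeScript types
schema: http://localhost:4000/graphql
generates:
  src/generated/graphql-types.ts:
    plugins:
      - typescript
```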
This provides a huge benefit: our tests now ask for the data they require, and the complexity of managing test data is no longer the responsibility of the tests. We also know the data will remain correct as per the production schema over time. Finally, if the types do change, we only need to fix them in one place for all our tests to be updated.
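As a hypothetical sketch of this "fix it in one place" property (the names here are illustrative, not the real package's API), a builder-style fixture generator over the generated types might look like this:

```typescript
// A type that would normally come from the generated GraphQL type file.
interface Article {
  id: string;
  title: string;
  relatedArticles: Article[];
}

// Builder with sensible defaults plus per-test overrides. If the generated
// Article type changes (e.g. a field is renamed), this is the single place
// that fails to compile, so every test is updated at once.
function buildArticle(overrides: Partial<Article> = {}): Article {
  return {
    id: "article-1",
    title: "Default title",
    relatedArticles: [],
    ...overrides,
  };
}

// A test simply asks for the data it needs:
const article = buildArticle({
  title: "Testing with GraphQL fixtures",
  relatedArticles: [buildArticle({ id: "article-2" })],
});
```

The override pattern keeps tests declarative: each test states only the fields it actually cares about.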
You can see an example of how you would use the fixture generator below.[/vc_column_text][vc_single_image image=”2901″][vc_column_text]Problem Two Solved
The fixture generator brought us closer to solving problem two, but we still had no way to run our functional UI tests and pass our generated fixture data to our front end. The front end was still querying the long-lived GraphQL environment.
Apollo GraphQL provides some powerful stubbing tools: you can pass in your GraphQL schema, along with overrides for type definitions in a resolver map. Once you have defined what data to return when a type is queried, you can start a local GraphQL server.
Once it was running, we could point our front end at our local stubbed GraphQL server.
The final piece was to have our tests once again define the data they required and then spin up the local GraphQL server.
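The idea behind the stub server can be sketched in plain TypeScript. This is an illustration of the concept only, not Apollo's actual API: a resolver map declares what each query field returns, and a minimal "server" dispatches to it.

```typescript
// A resolver map: field name -> function producing the stubbed data.
type ResolverMap = Record<string, () => unknown>;

// Minimal stand-in for a stubbed GraphQL server: it only knows how to
// resolve the fields its resolver map defines, and complains otherwise.
function createStubServer(resolvers: ResolverMap) {
  return {
    query(field: string): unknown {
      const resolver = resolvers[field];
      if (!resolver) {
        throw new Error(`No stub defined for field "${field}"`);
      }
      return resolver();
    },
  };
}

// A test overrides just the fields it cares about:
const server = createStubServer({
  article: () => ({ id: "article-1", title: "Stubbed title" }),
});

const stubbed = server.query("article") as { id: string; title: string };
```

In the real setup, Apollo's tooling does the schema-aware version of this: defaults come from the schema types, and the test supplies only the overrides.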
We have also been using Cypress as our new functional UI test tool.
Cypress as a tool is groundbreaking and is revolutionising UI testing. It runs in the same run loop as your application in the browser and provides new features for UI testing such as playback mode. I’d really recommend taking a look if you haven’t already.
In our tests we run a Cypress task to start up our short-lived mock GraphQL server, providing the fixture data we want GraphQL to run with straight from the test.[/vc_column_text][vc_single_image image=”2902″][vc_column_text]Once again, this means our tests explicitly ask for data. Previously, if we wanted this test to work with 4 related articles, we would have had to edit a separate repository, try to understand whether the data we wanted to edit was already being used by other tests, create a pull request, get it approved and merged, and then run the reset-data mechanism.
Now it’s as simple as updating the variable inside the test; it is clear what data the test needs to run, and the previous feedback loop is practically removed.
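To show the shape of the wiring, here is a hypothetical sketch of the task plumbing. The task name `startMockGraphQL`, the port, and the fixture fields are all illustrative, not the real plugin code; only `cy.task` itself is standard Cypress API.

```typescript
// Plugin side (registered via setupNodeEvents in the Cypress config):
// the task receives fixture data straight from the test and would boot
// the short-lived mock GraphQL server configured with it. Here we just
// return the configuration the server would be started with.
type Fixtures = Record<string, unknown>;

function startMockGraphQL(fixtures: Fixtures): { port: number; fixtures: Fixtures } {
  return { port: 4000, fixtures };
}

// Test side - the test declares the data it needs and hands it to the task:
//   cy.task("startMockGraphQL", { relatedArticles: 4 });
const config = startMockGraphQL({ relatedArticles: 4 });
```

Changing the test's data needs is now a one-line change to the object passed into the task.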
If you want to understand at a deeper level how this works, it’s all open source.
Take a look at the repositories below:
Or alternatively, find out how ECS Digital can help improve your test strategy by contacting us.[/vc_column_text][/vc_column][/vc_row]