In Part 1, we saw how you could create a Contact Flow that collects data from your customers with a simple question and then inserts it into the session attributes. In Part 2, we showed how to extract that data and deliver it to an Elasticsearch cluster. In this part we focus on turning that data into insightful visualisations, graphs and dashboards using Kibana.
Kibana is usually used as part of the ELK stack (Elasticsearch, Logstash and Kibana), and together they make a great solution for quickly visualising your data. We’ll take you through a few visualisations in this post.
First of all, if you are following this three-part series then you’ll be at the point where you have a fresh installation and have started to deliver data, but still need to configure Kibana. The first thing to do is go to the Management section on the left menu, then click Index Patterns. Index patterns let you use the `*` wildcard to build a search pattern that matches your indexes. Let’s use our recently acquired CloudWatch indexes as an example, and imagine that we have data from the last few days (the script we built automatically rotates index names based on date; the following will make this clearer).
Now, if we specify a search pattern, we’ll get different results depending on how we write it:

| Index search string | Matches |
| --- | --- |
| `cwl-*` | all indexes starting with `cwl-` |
| `*.11.*` | all indexes containing `.11.` |
| `*.01` | all indexes ending in `.01` |
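The date rotation the delivery script applies can be sketched like this (a minimal illustration; the function name is ours, but the `prefix-YYYY.MM.DD` shape matches the index names above):

```python
from datetime import datetime, timezone

def index_name(prefix="cwl", when=None):
    """Build a date-rotated index name such as cwl-2018.11.01."""
    when = when or datetime.now(timezone.utc)
    return f"{prefix}-{when:%Y.%m.%d}"
```

Because each day gets its own index, a pattern like `cwl-*` sweeps them all up, while `cwl-2018.11.*` would narrow the search to one month.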
This is important because, as you get more advanced at extracting insights, you will have different data streams. For example, you could send the Contact Trace Records, which would probably all be prefixed with `ctr` so you can tell them apart. Dashboards can then display data from multiple indexes, so you can create rich insights across multiple data sources. For the purposes of our demo, configure the index pattern as `cwl-*` so it matches all the relevant indexes for our CloudWatch Logs.
Once you have this, press Next Step. You will then be asked which field the index uses as its Time Filter field name. Use the drop-down to choose `Timestamp`, then press Create Index Pattern.
You can now navigate to the Discover tab on the left menu to look at some data. There is a date filter at the top right of the screen; when you select it, you should see a pop-down similar to the one below. Choose a timeframe that contains data (so whenever you placed test calls from the demo).
Now that you have selected a filter range, you should see some log entries that have been indexed into Kibana. In the central column you can select particular fields to filter the data or change what is displayed on the right. The first thing we are going to do is limit the view to the one piece of data we are really interested in, which gives us a clearer view. We’re looking for the `Answer` that we recorded earlier. This is contained in a field called `Parameters.Key`; if you click it once, the menu changes as follows:
Click the `+` here to filter for records whose `Parameters.Key` contains the `Answer` field. The result is that you only see those records in the right-hand pane. If you have more than one, expand one of them to see the results. You’ll see the entire record, with output similar to the following:
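Under the hood, that `+` filter is just an Elasticsearch bool query. Here is a sketch of an equivalent query body (the field name comes from the log format above; the function is ours for illustration):

```python
def answer_filter_query():
    # Match only records whose Parameters.Key field contains "Answer",
    # mirroring the "+" filter applied in Kibana's Discover view.
    return {
        "query": {
            "bool": {
                "filter": [
                    {"match_phrase": {"Parameters.Key": "Answer"}}
                ]
            }
        }
    }
```

You could POST this body to the `cwl-*/_search` endpoint to get the same records outside of Kibana.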
Ok, spicy cheese isn’t really what I want, but you can now see where we are. We need to create some fake data, and because we don’t know what you are using this for, we’ll use a made-up set of responses based on things you might say to a supermarket. I’m going to make 20 calls (you can do the same here, or you could be testing this with live customers and can therefore create variance in the responses).
NOTE: I will be the only caller in this scenario, so the data is obviously biased, but it will prove a point about how to visualise the data.
| Number of calls | Utterance |
| --- | --- |
| 8 | I want a refund |
| 5 | I’m not happy with the service I’m getting |
| 2 | I bought food that has gone rotten |
| 2 | I want to buy some speciality cheese |
| 1 | Can I place an order please |
| 1 | How many reward points do I have |
| 1 | Is my turkey in store yet |
It’s worth making these calls before following the next part of the blog, so that you have some data to work with and can immediately see the results of any changes you make. Once you have made some calls, navigate to the Visualize tab on the left-hand menu and choose Create a Visualization. You are presented with a set of visualisations to choose from; select Tag Cloud for this particular visualisation. You will then need to select the right index. If you are following along there will only be one; if you have multiple, make sure you select the one that begins with `cwl-`.
You will now have a blank visualisation that you need to populate with data. Look in the Buckets section and click the Tags option, which changes the box to Select an aggregation. Choose Terms, and then in the Field box select `Parameters.Value.keyword`, which contains the text from the answer. You can have auto-refresh enabled in your dashboard, or you can click the small triangle play button above the filter box you are editing to populate the Tag Cloud with data. Make sure to also set the Size of the sample above 5, or you won’t see many results.
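The Tag Cloud’s Tags bucket corresponds to a terms aggregation. A sketch of the request body Kibana sends (field name from the step above; the helper is our own illustration):

```python
def tag_cloud_agg(size=20):
    # Bucket the answers by the un-analysed keyword sub-field, as the
    # Terms aggregation configured in the Tag Cloud does.
    return {
        "size": 0,  # hits are not needed, only the aggregation buckets
        "aggs": {
            "answers": {
                "terms": {
                    "field": "Parameters.Value.keyword",
                    "size": size,
                }
            }
        },
    }
```

The `size` argument here plays the same role as the Size setting in the UI: raise it above the default or infrequent answers will be dropped.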
At this point you should see something very similar to the visual below, unless you used your own utterances. You can now save the visualisation; give it a name of your choice, as we’ll use it later when we add it to a dashboard.
It’s worth noting here that we are accepting raw text and interpreting it using a Lex bot slot called `QueryString`. You can see there is definitely room for improvement; we do know, however, that there is an ever-progressing set of models behind the Lex platform (think Alexa) which apply to the slots. Ultimately, we’re looking to use this raw data to build models (as well as use AWS products such as Comprehend) to draw further insights, such as key phrases, that could be used to make call-routing decisions. For example:
- If you can detect `rewards points` as a key phrase, you could respond with the answer directly.
- `Is my $product in store yet` could prompt to ask for the customer’s order number, after which they could be told whether it had arrived.
- `refund` can clearly lead to complaints/customer services.
Whilst the transcriptions are not 100% accurate today, you can already draw from this that refunds are the most popular subject, and that people should either be redirected to customer services or assisted with the issues causing them to ask for a refund. The future will bring accelerated improvement of NLP and transcription as the dataset to learn from continues to grow.
Create another visualisation using the same method, but this time choose a Vertical Bar chart. We’re going to look at the number of calls we have seen. There are better ways to detect the number of calls once you have a more mature configuration of your Contact Center; for example, you could use the Contact Trace Records instead of CloudWatch Logs, but that is beyond the scope of this blog series. Configure the visualisation as follows.
You also need to filter out the events you don’t want, so here I’m going to choose only the `Disconnect` ContactFlowModuleType. This records a single event per Contact Flow; as mentioned before, there are better ways to do this once you are consuming the CTR.
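The bar chart amounts to a date histogram over the filtered events. A sketch of an equivalent query body (the field names are assumptions based on the log format from Part 2; the helper is ours):

```python
def calls_over_time_query(interval="1m"):
    # Count Disconnect events per time bucket as a proxy for call
    # volume, matching the Vertical Bar chart configuration.
    return {
        "size": 0,
        "query": {
            "term": {"ContactFlowModuleType.keyword": "Disconnect"}
        },
        "aggs": {
            "calls_over_time": {
                "date_histogram": {
                    "field": "Timestamp",
                    "interval": interval,
                }
            }
        },
    }
```

Because each call produces exactly one `Disconnect` event, the bucket counts approximate calls per interval.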
This should produce a graph that tracks the number of calls over time. As you can see from the example here, we’re only seeing around 15 minutes of data, as this is when I added the demo data. In a real contact centre context this is far more powerful. Don’t forget to save the visualisation…
Finally, you can create a visualisation that shows the answers in a data table, which can be easier to export and work with than a Tag Cloud. Create a new visualisation, select Data Table, then configure the filters as follows.
Again, make sure you increase the size of the sample set, otherwise you won’t see enough results. You’ll end up with a data table that looks similar to the following. Don’t forget to save the visualisation…
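If you do want to export the table’s contents for use elsewhere, the terms-aggregation buckets flatten to CSV easily. A small sketch (the function is ours; the `key`/`doc_count` shape is how Elasticsearch returns terms buckets):

```python
import csv
import io

def buckets_to_csv(buckets):
    # Flatten terms-aggregation buckets into CSV text, much like
    # exporting the Data Table visualisation from Kibana.
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["answer", "count"])
    for bucket in buckets:
        writer.writerow([bucket["key"], bucket["doc_count"]])
    return out.getvalue()
```

Feeding in buckets such as `[{"key": "I want a refund", "doc_count": 8}]` yields a two-column CSV ready for a spreadsheet.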
Now that you have a group of visualisations, you’re ready to make a dashboard. Navigate to Dashboard on the left menu and then choose to create a dashboard. Click on the Add button, either in the main window or at the top right. You’re now presented with your visualisations. You can add them in any order and drag them around, resize them and position them how you see fit. Obviously you can create multiple visualisations and dashboards for many different views. If you are considering using Kibana at scale and for various teams, you can configure SAML authentication and role separation so you can give different teams different access and permissions.
Remember we added a filter to the Number of Calls visualisation; that filter is kept as part of the visualisation. You can also add additional filters to the dashboard itself. Here I have selected the specific timeframe in which the test calls were placed so that we can see exactly what happened.
As you can see, this tool gives you a really simple way to see into the data you are collecting. This is just scratching the surface, and we have plenty more examples of how you can drive insights from actual activity in your Contact Centers. The benefit of these tools is that they are inexpensive to test. We have worked with large enterprises who can’t afford to risk massive changes without understanding the implications and impact. To that end, you can insert Amazon Connect into existing Contact Center flows or, as we have done in the past, seek to gain insights from customers using non-intrusive techniques that allow us to design better interactions.
Ultimately, the customer of your contact center and their experience should be the central focus of any experiment with, or adoption of, these technologies. If you want some more information, please don’t hesitate to contact us.