Testing your bot is an essential step in your onboarding journey: it ensures your bot is ready to graduate and become a fully functioning member of your customer support team.
To make sure you are ready to launch, your testing should include the following phases:
Note - If you want to do this exercise as a group offline, there is a handout attached at the bottom of the page.
Prepare the Test Environment
There are 3 main ways to test your virtual agent. The first and second options give the most realistic results; the third is available as a fallback when the others are not an option.
- Connect to Your CRM and Launch to a Test Environment
- Connect to Your CRM with a Test Email Trigger
- Use the Ultimate Test Bot Button
Connect to Your CRM and Launch to a Test Environment
We recommend connecting to your CRM and embedding the widget onto a sandbox, staging, or testing environment of your website or help center. This provides as close to a 'real' experience as possible, where you can assess your Virtual Agent's impact, and it means testing the Actions and Triggers you have set up in their entirety. To ensure your CRM reporting is not impacted, create a group with a test email so the test tickets are handled separately from the day-to-day ones.
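If it helps to picture the setup, below is a minimal sketch of loading the widget only on a staging or sandbox hostname, so production visitors never see the test bot. The hostnames and script URL are placeholders - use the exact embed snippet provided by your widget or CRM vendor.

```typescript
// Hypothetical sketch: only load the chat widget on test environments.
const TEST_HOSTS = ["staging.example.com", "sandbox.help.example.com"];

if (TEST_HOSTS.includes(window.location.hostname)) {
  const script = document.createElement("script");
  // Placeholder URL - replace with the embed snippet from your widget provider.
  script.src = "https://widget.example-provider.com/loader.js";
  script.async = true;
  document.head.appendChild(script);
}
```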
Connect to Your CRM with a Test Email Trigger
If you don’t have a testing environment, you can still test by connecting to your CRM, but you will need to be creative about where you place the widget, or route test conversations with a rule based on employee email addresses.
You can follow the instructions to set this up in Zendesk Support here.
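Conceptually, such a rule just checks whether the requester's email address belongs to an employee and diverts matching tickets to a test group; in practice you would configure this as a trigger in your CRM, for example by following the Zendesk instructions linked above. A rough sketch of the logic, with placeholder names:

```typescript
// Conceptual sketch only - real routing is configured as a trigger/rule in your CRM.
const EMPLOYEE_DOMAIN = "@yourcompany.com";       // assumption: testers use company emails
const TEST_GROUP = "Virtual Agent Testing";       // separate group so reporting stays clean

function routeTicket(requesterEmail: string): string {
  // Tickets raised by employees during testing go to the test group;
  // everything else follows your normal support routing.
  return requesterEmail.endsWith(EMPLOYEE_DOMAIN) ? TEST_GROUP : "Support";
}
```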
Use the Ultimate Test Bot Button
The test bot lets you quickly review dialogues and the conversation logs to track recognition capabilities and assess dialogue structure. We have an article on how to use the test bot button here.
Prepare your Testers
How do you determine who should be a tester?
Ideally, you don't want the same people who built the flows: they will be biased by knowing how the flows are built and may not spot issues such as typos or formatting in their own work.
Then it comes down to making the testing valuable for the testers themselves - people are more likely to dedicate time to testing if they understand the value. We therefore recommend involving two types of stakeholders: a process-focused group and an experience-focused group. This ensures you get specific feedback on the two main factors each group is best positioned to assess and has a vested interest in. These people can come from anywhere in the business; however, for process, we think operations and training team members are best, while for experience, members of the social and marketing teams can best assess whether the bot aligns with your brand image and tone of voice, and can anticipate what might happen if the Virtual Agent makes a mistake.
One more group from your support team worth including in the validation: people who are relatively new to the team. Why?
They bring a fresh set of eyes. They are not influenced by past events and are likely knee-deep in the documentation, so they can spot discrepancies and help ensure everything is up to date and correct. On top of that, they get to learn your company's tone of voice and how requests are handled in the first line of defense, which will accelerate their onboarding.
What do they need to know?
It is important to brief your testers on the intents the bot understands and on exactly what you are asking them to review. If you ask them to focus on the experience and tone of voice in particular, you may wish to share the persona you built earlier in the onboarding so they can check that the bot's responses match it. Testing can be as robust and detailed as you like, with a team as large or small as you like, but we do have some tried and tested rules we recommend you ask your testers to follow.
Golden Rules
❌ Don’t troll the bot.
E.g. “How to take the best selfie with my trainers”
✅ DO simplify your questions and ask one at a time.
✅ DO ask the same question in different ways.
✅ DO try to get through entire dialogue flows and test their different stages and options (e.g. if the bot fails to recognize your issue right at the beginning, record this, but then do another run and try to get past that point and deeper into the dialogue flow).
✅ DO take screenshots to accompany your recommendations and feedback.
✅ DO take screenshots of any and all scenarios in which the bot fails to understand your query and/or simply delivers a bad user experience.
Collecting and Iterating on Feedback
Collecting feedback is the most important part of the testing process. The more detailed the feedback, the easier the bot builders' lives will be: they can make improvements and quickly add more expressions based on how each of your testers would ask about a topic. This is all with the end goal of getting your virtual agent ready to be set live.
Feedback should be collected with screenshots and should note whether the response was accurate, matched the persona's tone, and met expectations, so you can identify areas for improvement.
You will want to create a shared file for your testers and your builders so that everyone can track the progress of the test, see feedback from others (and upvote it rather than duplicating it), and track whether a resolution has been implemented.
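As a sketch of what each row in that shared file might capture (the fields below are only a suggestion, not a required format):

```typescript
// Illustrative shape for one row in the shared feedback tracker - adapt the
// columns to whatever your team actually wants to track.
interface FeedbackEntry {
  tester: string;              // who ran the test
  intentOrFlow: string;        // which intent or dialogue flow was tested
  questionAsked: string;       // the exact phrasing the tester used
  screenshotLink: string;      // link to the screenshot of the conversation
  responseAccurate: boolean;   // did the bot answer correctly?
  matchedPersonaTone: boolean; // did the reply match the persona and tone of voice?
  metExpectations: boolean;    // did the overall experience meet expectations?
  status: "open" | "duplicate" | "fix implemented"; // resolution tracking
  notes?: string;              // recommendations or extra context
}
```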
In addition to this, your builders can review the conversations in the Conversation Logs to ensure everything happened as intended, such as Actions firing correctly.
Once all feedback has been reviewed and you feel happy with your Virtual Agent's performance, it is ready to be launched and become a full member of your support team.