
We have designed our test systems to interact with the phone network as the user does. The test design flexibility of MultiDSLA enables it to mimic some of the more complex interactions performed by users and the network – interactive voice response (IVR) is a good example.
IVR is essential in numerous applications – call centres, auto-attendants, voicemail and telephone banking for example. IVR tests can be run at specified intervals; in a production environment there may be concern about the resilience of the IVR platform in times of high call volume. Sequencing tests throughout the day to include both quiet and busy times will provide a detailed insight into IVR behaviour.
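If you prefer to drive test starts from a script of your own rather than from the MultiDSLA scheduling features, a sketch along these lines can spread runs across quiet and busy periods. The run_ivr_test function, the hours and the intervals are all assumptions rather than MultiDSLA features; substitute whatever actually starts a test in your setup (for example a Remote Access script, described later).

```python
# Sketch: spread IVR test runs across quiet and busy periods of the day.
# run_ivr_test() is a hypothetical placeholder for whatever starts a
# MultiDSLA test in your setup (e.g. a Remote Access script); it is not
# a real MultiDSLA API call. Hours and intervals are assumptions.
import time
from datetime import datetime, timedelta

QUIET_HOURS = range(0, 7)      # assumed quiet period: midnight to 07:00
BUSY_HOURS = range(9, 18)      # assumed busy period: 09:00 to 18:00

def run_ivr_test(label: str) -> None:
    """Placeholder: start one IVR tasklist run and wait for it to finish."""
    print(f"{datetime.now():%H:%M} running IVR test ({label})")

def next_interval(hour: int) -> timedelta:
    # Test more often when the platform is busy, less often when quiet.
    if hour in BUSY_HOURS:
        return timedelta(minutes=15)
    if hour in QUIET_HOURS:
        return timedelta(hours=1)
    return timedelta(minutes=30)

if __name__ == "__main__":
    while True:
        now = datetime.now()
        label = "busy" if now.hour in BUSY_HOURS else "quiet/normal"
        run_ivr_test(label)
        time.sleep(next_interval(now.hour).total_seconds())
```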
The first thing you need when testing IVR responses is a full set of the prompts in wave file format (*.wav). Don’t worry if these are not available: you can make recordings from the network under test, and the MultiDSLA system can be used to make them (see ‘Recording prompts’ below).
What IVR behaviours do you want to test?
You may need to determine whether...
Figure 1: Typical User/IVR Interaction
Figure 2: Voicemail Interactions
The test sequence in Figure 3 is typical of a building block for IVR testing.
There are many similar blocks, but the principle is always the same. Notice how four prompts have been listed consecutively, mimicking the way the IVR works. Tip: always set the speech level to -99 dBm; we only need to tell the MultiDSLA system which prompt to expect, and we do not want it played out audibly.
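Purely as an illustration of the principle (the structure and field names below are ours, not MultiDSLA’s), the building block in Figure 3 amounts to a sequence of events like this:

```python
# Illustrative only: the structure and field names are ours, not MultiDSLA's.
# The point is that the expected prompts are listed consecutively and given
# a level of -99 dBm so they are used for matching, not played out audibly.
ivr_building_block = [
    {"event": "send_dtmf",     "digits": "1"},    # the caller's menu choice
    {"event": "expect_prompt", "file": "prompt_01.wav", "level_dbm": -99},
    {"event": "expect_prompt", "file": "prompt_02.wav", "level_dbm": -99},
    {"event": "expect_prompt", "file": "prompt_03.wav", "level_dbm": -99},
    {"event": "expect_prompt", "file": "prompt_04.wav", "level_dbm": -99},
]
```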
You may need to test more than one set of prompts, so you need a way of building the blocks into a complete test. There are two ways to do this, either by using a Scenario (look up ‘Scenario’ in MultiDSLA Help) or by using the Remote Access scripting API (see MultiDSLA_RAS.pdf, which you can find from the Start menu by selecting All Programs > Malden Electronics > Quick Start Guides).
Figure 3: Interactions Translated into a Tasklist
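If you take the scripting route, the building blocks can be chained in a simple loop. The sketch below uses a hypothetical run_tasklist helper and invented tasklist names; the real calls to use are those documented in MultiDSLA_RAS.pdf.

```python
# Sketch: chain several IVR building-block tasklists into one complete run.
# run_tasklist() is a hypothetical stand-in for the real Remote Access
# scripting calls documented in MultiDSLA_RAS.pdf, and the tasklist names
# are invented for the example.
IVR_BUILDING_BLOCKS = [
    "ivr_main_menu",
    "ivr_account_menu",
    "ivr_voicemail_login",
    "ivr_voicemail_playback",
]

def run_tasklist(name: str) -> bool:
    """Placeholder: start the named tasklist and report whether it passed."""
    print(f"running tasklist: {name}")
    return True

def run_ivr_suite() -> None:
    for name in IVR_BUILDING_BLOCKS:
        if not run_tasklist(name):
            print(f"tasklist {name} failed; stopping the suite")
            break

if __name__ == "__main__":
    run_ivr_suite()
```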
You can use either PESQ or POLQA to analyse IVR behaviour, but PESQ has two characteristics that make it more suitable for this application. First, PESQ is more sensitive to ‘clipping’, where the beginning or end of a speech utterance is missing. Second, PESQ provides a ‘percentage confidence’ result (essentially how well the time characteristics of the expected and actual prompts match), which is a reliable indicator of whether the IVR system has played the expected prompt. The speech quality score, whether PESQ or POLQA, gives a good indication of the match between the expected and actual prompts, unless you are using recordings of prompts made from the network under test. In that case, neither PESQ nor POLQA will return a reliable score, and the PESQ Confidence indicator becomes invaluable.
Example:
Figure 4: Beginning of prompt missing (front-end clipping)
Figure 4 shows how MultiDSLA detects that the beginning of a prompt is missing. The prompt says something like “Thank you for calling the speech-enabled auto-attendant”, but the initial “th” sound is missing. PESQ reports a Confidence of over 80%*, suggesting that the correct prompt has been heard, but also shows that a 62 ms portion is missing at the front end.
* A few sample measurements will usually be sufficient to determine the target percentage.
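As an illustration of how such results might be post-processed, the sketch below turns a single measurement into a pass/fail/warning decision. The field names and the thresholds (an 80% Confidence target and a 50 ms clipping limit) are our assumptions, not MultiDSLA definitions; calibrate them from a few sample measurements as noted above.

```python
# Sketch: turn one measurement into a pass/fail/warning decision. The result
# fields and thresholds are assumptions to be calibrated against a few sample
# measurements; they are not MultiDSLA definitions.
from dataclasses import dataclass

@dataclass
class PromptResult:
    confidence_pct: float   # PESQ percentage confidence
    front_clip_ms: float    # speech missing at the start of the prompt
    end_clip_ms: float      # speech missing at the end of the prompt

CONFIDENCE_TARGET_PCT = 80.0   # assumed target, derived from sample runs
CLIPPING_LIMIT_MS = 50.0       # assumed acceptable clipping

def check_prompt(result: PromptResult) -> str:
    if result.confidence_pct < CONFIDENCE_TARGET_PCT:
        return "FAIL: wrong or missing prompt"
    if max(result.front_clip_ms, result.end_clip_ms) > CLIPPING_LIMIT_MS:
        return "WARN: expected prompt heard, but clipped"
    return "PASS"

# The Figure 4 case: Confidence above 80%, but 62 ms missing at the front end.
print(check_prompt(PromptResult(confidence_pct=81.0,
                                front_clip_ms=62.0,
                                end_clip_ms=0.0)))
```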
Simple: set an Alert to notify you when exceptions occur. For example, you might choose to receive an Alert on Speech Level, on Confidence (see the previous paragraph) or on Speech Quality Score. Drill down to the graphs to confirm the exceptions and work out what is happening; the audio replay feature is particularly powerful here.
If you want the test system to trigger the prompts, program the necessary steps (DTMF sequences and/or speech files and waits) in place of the Wait event. Once you have a recording, use a third-party wave file editor such as Adobe Audition or Audacity to save the prompts as separate files, or script the split yourself, as sketched below.
Figure 5: Sample Tasklist for recording prompts from network under test
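If you would rather script the split than use a wave editor, a rough silence-based splitter like the one below will do the job for a 16-bit mono PCM recording. The thresholds and the file names are assumptions to be tuned for your material.

```python
# Sketch: split one long recording of IVR prompts into separate .wav files by
# detecting gaps of silence. Assumes a 16-bit mono little-endian PCM recording
# ("ivr_recording.wav" is a hypothetical name); tune the thresholds for your
# material.
import array
import wave

SILENCE_THRESHOLD = 500    # peak sample value treated as silence (assumed)
MIN_SILENT_CHUNKS = 20     # this many silent chunks in a row ends a prompt

def write_prompt(frames: list[bytes], index: int, params) -> None:
    # Save one detected prompt with the same format as the source recording.
    with wave.open(f"prompt_{index:02d}.wav", "wb") as dst:
        dst.setparams(params)
        dst.writeframes(b"".join(frames))

def split_prompts(path: str, chunk_ms: int = 50) -> None:
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(src.getframerate() * chunk_ms / 1000)
        prompt, silent_chunks, index = [], 0, 0
        while True:
            raw = src.readframes(frames_per_chunk)
            if not raw:
                break
            samples = array.array("h", raw)   # 16-bit signed samples
            if max(abs(s) for s in samples) > SILENCE_THRESHOLD:
                prompt.append(raw)
                silent_chunks = 0
            elif prompt:
                silent_chunks += 1
                if silent_chunks >= MIN_SILENT_CHUNKS:
                    index += 1
                    write_prompt(prompt, index, params)
                    prompt, silent_chunks = [], 0
        if prompt:   # write any trailing prompt
            write_prompt(prompt, index + 1, params)

if __name__ == "__main__":
    split_prompts("ivr_recording.wav")
```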
You may need to build a number of tasklists to test the different stages of the IVR transactions. Running these individually is tedious, but they can be combined in a Scenario so that they run in sequence. The example Scenario below shows how four tests have been combined to create a test suite which performs these steps:
Contact Opale Systems or your distributor for more information.