Available in Classic and VPC
To increase user satisfaction with a chatbot, thorough testing is essential. After model training is complete, a testing and validation step checks whether the chatbot works as the creator intended and whether any items require further improvement. NAVER Cloud Platform's Chatbot Builder provides advanced testing tools, so you can test trained conversations directly after setting up the test environment.
To run tests, conversation model training must be completed first, so finish the build. Otherwise, the tests will not run properly.
- Manual test: Run the test by manually entering questions
- Automated test: Upload an Excel file to run the test automatically. After uploading a list of questions in a specified format, you can review all the responses extracted through automated testing at once.
- Quality test: This is an advanced feature of the automated test. You can run tests automatically whenever the beta version changes, ensuring data quality.
Manual test
You can test the learned conversations after directly selecting the test environment, such as specifying the messenger to link and date/time conditions.
To manually test a completed chatbot:
- In the NAVER Cloud Platform console, navigate to Services > AI Services > CLOVA Chatbot > Domain.
- Click [Run builder] for the domain you want to open in the Chatbot Builder.
- In the Chatbot Builder, click the Manual Test menu.
- Set up the test environment.
- Test environment: You can select the beta or production environment in which to perform the test.
- Date/Time: You can perform tests by setting a specific date/time.
- Messenger: You can select the messenger to perform the test.
- Context: You can start tests after setting a specific context. If a test is required from the middle flow of the chatbot service, you can set a specific context and start testing from that point. However, the welcome message is not supported when starting the test after setting the context.
- User variable: You can set the value stored in the user variable to a specific value before starting the test.
- Voice: You can test after setting a specific voice. We recommend testing after aligning your settings with the voice settings for CLOVA AiCall.
- Select the test start method.
- Start with a welcome message: You can start the test with a welcome message.
- Start immediately: You can start the test immediately without a welcome message.
- When the test window appears, enter your question.
- For information on how to run the test, see Run tests.
- During the test, click [Check Settings] to view the test environment settings.
- Click [Initialize] to change the test settings and run the test again.
- Check the test results.
- For information on how to check test results, see Check test results.
When manually testing in the Chatbot Builder while the domain is set to AiCall, the built-in variables *cicRequest.session.callInfo.callee and *cicRequest.session.callInfo.caller are empty, which makes testing difficult. However, you can assign these values in the test settings so that even manual tests can exercise quickstarts that use callee and caller information.
Run tests
Enter the test query directly into the test window to run the test.
- When building an FAQ chatbot service, you must test whether various user utterances not registered in the training set are matched to appropriate conversations that align with their intent. This is a test of the chatbot service's intent analysis performance.
- When building a goal-oriented chatbot service designed to perform specific tasks, we recommend running the quickstart flow test in addition to the intent analysis performance test. Test whether the chatbot runs the defined quickstart properly across various user scenarios, and check for any missing parts in the quickstart flow.
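The intent analysis performance test above can be scored systematically. The sketch below is not part of the Chatbot Builder: it assumes you have collected utterances that are not in the training set, each paired with the conversation it should match, and a hypothetical `match_conversation` function standing in for the chatbot's matcher.

```python
# Illustrative sketch (not a Chatbot Builder feature): score a set of
# unregistered user utterances against the conversations they should match.
def score_intent_test(cases, match_conversation):
    """cases: list of (utterance, expected_conversation_name) pairs.
    match_conversation: hypothetical stand-in for the chatbot's matcher."""
    failures = []
    for utterance, expected in cases:
        actual = match_conversation(utterance)
        if actual != expected:
            failures.append((utterance, expected, actual))
    total = len(cases)
    accuracy = (total - len(failures)) / total if total else 0.0
    return accuracy, failures

# Toy matcher for demonstration only; a real test would query the chatbot.
def toy_matcher(utterance):
    return "Refunds" if "refund" in utterance.lower() else "Greeting"

cases = [
    ("How do I get my money back?", "Refunds"),   # paraphrase not in training set
    ("Can I cancel and get a refund?", "Refunds"),
    ("hi there", "Greeting"),
]
accuracy, failures = score_intent_test(cases, toy_matcher)
```

Reviewing the `failures` list shows which paraphrases the chatbot misroutes, which is exactly the gap this test is meant to reveal.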

Check test results
On the right tab of the test interface, you can view the analyzed information, response results, and log information. Use this information to verify that the chatbot functions as the creator designed. By clicking the speech bubble for a turn in the test interface on the left, you can revisit the information for that turn rather than only the last turn.
Analyzed Information tab
You can view the information analyzed during a specific turn.

- Entity: You can view information of an entity analyzed from the user's utterance.
- Slot: If a task has started, you can check which entity has been filled in a specific slot.
- Context: You can view the contexts added during a specific turn and the remaining counts of contexts that were decremented.
- User variable: You can view the information of user variables updated during a specific turn.
- Voice settings: If you have configured voice settings, you can view them by clicking ‘More’ below the speech bubble.
Response result
You can view the information of the response/message provided by the system. When a match is made for a specific conversation, you can view the reason for the match. You can also view the reason for the match when a failure message or no-response message is returned.

- Exact matching: This refers to cases where general question data identical to the entered test query is registered in the conversation and matched.
- Regular expression matching: This refers to cases where the pattern of the entered test query matches a regular expression question registered in the conversation.
- Intent classifier matching: This refers to cases where the intent analyzed by the intent classifier for the entered test query matches an intent question registered in the conversation.
- Model matching: This refers to cases where the conversational model analyzes the entered test query and matches it to the conversation it deems most similar.
- Move to another conversation: This refers to cases where the conversation was moved using the conversation move function of the follow-up action.
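The match reasons above can be pictured as a chain of lookups. The sketch below is purely illustrative: the resolution order shown (exact, then regular expression, then intent classifier, then model) is an assumption for demonstration, not documented behavior, and all function and conversation names are hypothetical.

```python
import re

# Illustrative sketch of how the match reasons above could arise.
# The precedence order is an assumption, not documented behavior.
def explain_match(query, exact_questions, regex_questions, classify_intent,
                  intent_questions, model_match):
    if query in exact_questions:                       # identical question registered
        return "Exact matching", exact_questions[query]
    for pattern, conversation in regex_questions:      # registered regex question
        if re.search(pattern, query):
            return "Regular expression matching", conversation
    intent = classify_intent(query)                    # intent classifier result
    if intent in intent_questions:
        return "Intent classifier matching", intent_questions[intent]
    return "Model matching", model_match(query)        # similarity-based fallback

# Toy data for demonstration; all names are hypothetical.
reason, conversation = explain_match(
    "opening hours?",
    exact_questions={"opening hours?": "Business hours"},
    regex_questions=[(r"refund|money back", "Refunds")],
    classify_intent=lambda q: "greeting" if "hello" in q else "unknown",
    intent_questions={"greeting": "Greeting"},
    model_match=lambda q: "Fallback conversation",
)
```

Here the query is registered verbatim as a general question, so the reported reason is exact matching and the later stages are never consulted.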
Log Check
You can check the log. If you encounter an issue and contact customer support, provide the raw logs for that turn to receive a more accurate response.
Automated test
Create the questions requiring testing in an Excel file and upload it. You can then download the results as an Excel file.
To run automated tests:
- In the NAVER Cloud Platform console, navigate to Services > AI Services > CLOVA Chatbot > Domain.
- Click [Run builder] for the domain you want to open in the Chatbot Builder.
- In the Chatbot Builder, click Automated Test > [Batch Test] tab.
- Open the test settings and configure the date/time conditions and messenger conditions.
- Click the [Load] button to register the test file.
- You can upload only files in xls and xlsx formats.
- If you do not have any test files, click [Download Test Template] to download the template and create a test file. Simply enter the anticipated questions in Column A of the template file.
- When the test message popup appears, click [OK].
- Click [Manage Task] to view the task results.
- Click the Excel file link in the task details section of the task list.
- You can view the results for the uploaded test file.
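Before filling Column A of the downloaded template, it can help to clean the question list. The sketch below is not part of the Chatbot Builder; the cleanup rules (strip whitespace, drop blank rows, drop duplicates) are assumptions about what makes a useful batch test file, not documented requirements, and the cleaned rows still need to be entered into an xls/xlsx template for upload.

```python
# Illustrative pre-upload cleanup for the Column A question list.
# The rules here are assumptions, not documented upload requirements.
def prepare_column_a(questions):
    cleaned, seen = [], set()
    for q in questions:
        q = q.strip()
        if not q:          # drop blank rows
            continue
        if q in seen:      # drop duplicate questions
            continue
        seen.add(q)
        cleaned.append(q)
    return cleaned

rows = prepare_column_a(["  What are your hours?", "", "Refund policy?",
                         "What are your hours?"])
```

Each entry in `rows` then becomes one Column A cell of the test template.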
Quality test
You can test the quality of chatbots by version via the quality test. Quality assessments are automatically conducted whenever the latest beta version changes. Register the key questions requiring management within the domain and the expected matching conversation names as quality assessment data. Once the assessment is complete, you can view the pass rate and download detailed results from the Manage Tasks page.
To run a quality test:
- In the NAVER Cloud Platform console, navigate to Services > AI Services > CLOVA Chatbot > Domain.
- Click [Run builder] for the domain you want to open in the Chatbot Builder.
- In the Chatbot Builder, click Automated Test > [Quality Test] tab.
- Click [Change Settings] in the quality test settings.
- In the quality test settings, set the quality test service to ON.
- Click [Upload] in the quality test file area.
- Upload the quality test file to the quality test file upload window and click [OK].
- You can upload only files in xls and xlsx formats.
- If you do not have any test files, click [Download Template for Quality Test] to download the template and create a test file. (Enter the query to test in Column A and the conversation name that should match as the correct answer in Column B.)
- Select the date/time conditions and messenger for testing.
- To disable the date/time setting conditions, select "No selection."
- Click [Save] in the quality test settings.
- When the popup appears, click [OK].
- Once the chatbot build is complete, quality assessment is performed automatically.
- The pass rate is displayed in the quality test window, and you can view the completed test files in the task results.
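The pass rate reported above can be understood as simple arithmetic over the uploaded rows. The sketch below is an illustration, assuming a row passes when the conversation the chatbot matched equals the expected conversation name from Column B; the data and function name are hypothetical.

```python
# Illustrative sketch of the quality test pass rate: each result pairs
# the expected conversation name (Column B) with the one actually matched.
# Assumption: a row passes only on an exact name match.
def pass_rate(results):
    """results: list of (expected_conversation, matched_conversation)."""
    if not results:
        return 0.0
    passes = sum(1 for expected, matched in results if expected == matched)
    return passes / len(results)

rate = pass_rate([
    ("Business hours", "Business hours"),  # pass
    ("Refunds", "Refunds"),                # pass
    ("Refunds", "Greeting"),               # fail: wrong conversation matched
    ("Delivery", "Delivery"),              # pass
])
```

A falling pass rate after a beta change points to rows where key questions no longer match their expected conversations.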