View code


Available in Classic and VPC

This guide explains how to view the code for tasks performed in the Playground menu and how to check information for existing test apps.

View code

To check the code for tasks performed in playground, follow these steps:

  1. From the NAVER Cloud Platform console, click i_menu > Services > AI Services > CLOVA Studio, in that order.
  2. From the My Products menu, click the [Go to CLOVA Studio] button.
  3. Click the Playground menu on the left side of the screen.
  4. After performing or loading a task, click the [View code] button.
  5. Check the API request information in the view code window.
    • For more information on the API, see CLOVA Studio API guides.
    • Two code types are provided: curl and Python.
    • You can click [Copy] to copy the API information to the clipboard.
    • The code structure displayed may vary depending on the model in use.
    • You can set whether to use an AI filter and check the AI filter guides.
    • For the most accurate code information, save the task first, and then click the [View code] button.
Note
  • To review API information for previously created test apps, click API key > [Deprecated] tab on the left side of the screen.
  • You can apply for a service app without creating a test app first. For more details, see Apply for service app.
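The [View code] window shows a ready-to-send request for the current task. The sketch below illustrates, in Python, the general shape of such a request; the endpoint path, model name, header names, and parameter names here are assumptions for illustration only — always copy the actual values from the [View code] window and the CLOVA Studio API guides.

```python
import json

# Hypothetical host and model name -- copy the real values from [View code].
API_HOST = "https://clovastudio.stream.ntruss.com"
MODEL = "HCX-003"


def build_chat_request(api_key: str, user_message: str):
    """Assemble the URL, headers, and body for a chat completion request.

    The header and parameter names below mirror what a playground export
    might look like; they are illustrative, not authoritative.
    """
    url = f"{API_HOST}/testapp/v1/chat-completions/{MODEL}"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "messages": [{"role": "user", "content": user_message}],
        # Sampling parameters correspond to the playground sliders.
        "topP": 0.8,
        "temperature": 0.5,
        "maxTokens": 256,
    }
    return url, headers, body


url, headers, body = build_chat_request("YOUR_API_KEY", "Hello")
print(json.dumps(body, indent=2))
```

Because the code structure may vary depending on the model in use, treat this only as a reading aid for the exported code, not as a drop-in client.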

AI Filter

To help comply with the NAVER AI code of ethics, you can enable the AI filter feature. The AI filter detects inappropriate results, such as profanity, in the service app's output and notifies you.

Currently, AI filter provides analysis results for inappropriate language, such as profanity, and assigns a label between 0 and 2 based on the risk level detected in the content. The descriptions for each label are as follows:

Label | Description
0     | High possibility of inappropriate expressions, such as profanity, in the text
1     | Possibility of inappropriate expressions, such as profanity, in the text
2     | Low possibility of inappropriate expressions, such as profanity, in the text

Based on the AI filter's analysis results, you must prepare appropriate countermeasures to reduce content risk. For example, if the AI filter returns 0 for an output, it is safer to inform the end user that the output cannot be returned and to suggest a new input.

However, because the AI filter is a model focused on detecting risks, false positives may occur, and detection may be difficult under continuously changing environmental factors (laws, new words, contextual meaning, shifts in the meaning of words and sentences due to social change, arbitrary meanings assigned to specific words, etc.). It therefore cannot be a perfect safeguard. If you are concerned about inappropriate output such as profanity, pay attention from the prompt design stage onward, in addition to preparing countermeasures based on the AI filter.
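One way to implement the countermeasure described above is a small dispatch on the filter label. The function and messages below are illustrative sketches following the label table, not part of the CLOVA Studio API.

```python
def handle_output(generated_text: str, filter_label: int) -> str:
    """Decide how to surface a model output based on its AI filter label.

    Label semantics follow the table above: 0 = high risk, 1 = possible
    risk, 2 = low risk. The fallback message text is hypothetical.
    """
    if filter_label == 0:
        # High risk: withhold the output and suggest a new input.
        return "The response could not be returned. Please try a different input."
    if filter_label == 1:
        # Possible risk: you might log the case for human review before showing it.
        return generated_text
    # Low risk (label 2): show the output as-is.
    return generated_text


print(handle_output("Sample model output", 0))
```

Since false positives are possible, a label-0 path that logs the blocked output for later review is often more useful than silently discarding it.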

Caution

The service limits of the AI filter are as follows:

  • The AI filter limits the requested text to a maximum of 500 characters. Texts longer than 500 characters cannot be analyzed normally.
  • If the requested text contains too many unusual formats, emojis, or special characters, it may not be analyzed properly.
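Given the 500-character limit above, it can help to check text length before relying on a filter result. A minimal sketch, assuming only the limit stated in this caution:

```python
MAX_FILTER_CHARS = 500  # limit stated in the caution above


def can_filter(text: str) -> bool:
    """Return True if the text fits within the AI filter's 500-character limit."""
    return len(text) <= MAX_FILTER_CHARS


print(can_filter("a" * 501))  # → False
```

Texts over the limit could be truncated or split by the caller before filtering, but how to segment them is a design choice this guide does not prescribe.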