CLOVA Studio AI Code of Ethics

    Available in Classic and VPC

    The CLOVA Studio AI Code of Ethics was written to help users understand, from an AI ethics perspective, the various situations they may encounter while using CLOVA Studio, and to prevent and avoid problems arising from those situations.

    All members of NAVER develop and use services in compliance with the NAVER AI Code of Ethics announced by NAVER. To this end, NAVER pursues dedicated efforts and policies, such as developing and applying the AI Filter function. Please note that users of CLOVA Studio must also abide by the same principles and policies. Below, we explain the provisions and purpose of the NAVER AI Code of Ethics, how to put them into practice, and the obligations that NAVER and its users must fulfill when using CLOVA Studio.

    Chapter 1 Understanding NAVER AI Code of Ethics for Using CLOVA Studio

    NAVER AI Code of Ethics

    NAVER will make cutting-edge AI technology a daily tool that anyone can use easily and conveniently. We will open up various opportunities and possibilities by never ceasing to challenge ourselves to present new experiences of connection to users. To this end, all members of NAVER will abide by the following ethical principles in the development and use of AI.

    The preamble presents NAVER's view of AI through the phrase "a daily tool that anyone can use easily and conveniently". It also reflects NAVER's corporate philosophy of connection, challenge, and diversity, and its last sentence states that NAVER members will comply with the AI ethical standards.

    1. AI development for people

    NAVER develops AI as a daily tool for people. NAVER will prioritize human-centered values in the development and use of AI. NAVER has been developing technology to add convenience to users' daily lives and improving AI so that it can serve as a daily tool. NAVER recognizes that AI can make our lives easier but, like everything else in the world, cannot be perfect. NAVER will continue to study and improve AI so that it can become a daily tool for people.

    The first clause declares that NAVER will prioritize human-centered values in the development and use of AI. In other words, it indicates that NAVER develops and uses AI for people. The use of AI can make our lives easier, but it is not perfect, just like anything else in this world. The clause therefore also proposes a direction of continuously evaluating and improving AI so that it becomes an indispensable tool for everyday life.

    2. Respect for diversity

    NAVER will develop and use AI to prevent unfair discrimination against everyone, including users, by considering the value of diversity. NAVER has implemented technologies and services that enhance the meaning of connection through diversity. In the process, we have opened up various opportunities and possibilities to users and have worked to prevent unfair discrimination that lacks reasonable grounds. NAVER will prevent unfair discrimination in AI services and provide experiences and opportunities where diverse values coexist.

    The second clause presents the value of respect for diversity. Diversity is one of the values that makes connection more meaningful, and NAVER, as a technology platform, is committed to it. NAVER included the value of diversity in its AI Code of Ethics, believing that it is deeply meaningful to foster a society with more distinctive creators and entrepreneurs and to connect greater diversity through them.

    3. Combination of reasonable explanation and convenience

    NAVER will do its best to help anyone use AI conveniently, and to give reasonable explanations to users when AI is involved in their daily lives. NAVER will strive to realize this concretely, considering that the ways and levels of reasonable explanation about AI may vary. NAVER's AI is not a technology for technology's sake, but will be a tool for anyone to use easily, even without technical knowledge. NAVER pursues the convenience of its services, and if there is a user request or need, NAVER will explain AI services at the user's level so that they can easily understand.

    The third clause specifies accountability for AI services as a way of realizing transparency, one of the items presented in domestic and foreign AI principles. However, if an explanation is excessive in scope or too difficult for users to understand, it risks harming the convenience of the service itself. Accordingly, the clause specifies that, while pursuing service convenience, NAVER will explain its AI services at the user's level, upon user request or need, so that they can be easily understood.

    4. Service design considering safety

    NAVER will pay attention to safety and design AI services that do not harm humans throughout the entire service process. To prevent situations in which AI, a daily tool for people, threatens people's lives or bodies, NAVER will design services with safety in mind throughout the entire process, conduct tests, and continue to review safety after deployment.

    The fourth clause, based on human-centered values, presents human life and bodily safety as concrete standards of safety. Specifically, it states that human safety is the top priority in designing AI services and that safety is continuously reviewed during and after design, testing, and deployment.

    5. Privacy protection and information security

    In the process of developing and using AI, NAVER will strive to protect users' privacy beyond its legal responsibilities and obligations for personal information protection. We will also apply design that considers information security throughout the entire AI service process, including the development stage. NAVER goes beyond fulfilling its legal responsibilities and obligations in using personal information and actively protects personal privacy. By considering information security throughout the entire service process, we aim to fundamentally prevent situations that would make users worry about the security of their information. For AI services, we will strive to ensure that users can freely use them to add convenience to their lives without worrying about their privacy or information security.

    The fifth clause states that NAVER will strive to protect privacy in AI services beyond its legal responsibilities and obligations for personal information protection, consistent with the personal information protection principles of NAVER's Privacy Center. Just as NAVER currently applies the Privacy by Design principle to its services, the clause emphasizes that Privacy by Design is applied to AI services from the initial design stage. Privacy by Design means designing services with personal information protection built in.

    Chapter 2 Practicing the NAVER AI Code of Ethics with CLOVA Studio

    NAVER operates a review and issuance process for service apps and provides an AI Filter function so that users can practice the NAVER AI Code of Ethics while using CLOVA Studio.

    The service app review and issuance process is a procedure that checks compliance with the NAVER AI Code of Ethics in order to prevent potential risks from service apps built with CLOVA Studio.

    The AI Filter function detects inappropriate results, such as profanity, output by a service app through CLOVA Studio and informs the user.
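    As an illustration only, the sketch below shows how a client application might inspect a filter annotation attached to a generation result and surface a warning. The response shape (`aiFilter`, `groupName`, `score`) and the score convention are assumptions for this sketch, not the documented CLOVA Studio response format; consult the CLOVA Studio API reference for the actual fields.

```python
# Hypothetical sketch only: the field names ("aiFilter", "groupName", "score")
# and the score convention (lower score = higher risk) are assumptions for
# illustration, not the documented CLOVA Studio response format.

def flag_inappropriate(response: dict, threshold: int = 1) -> list[str]:
    """Return the filter categories whose score is at or below the threshold."""
    flagged = []
    for entry in response.get("aiFilter", []):
        if int(entry.get("score", 2)) <= threshold:
            flagged.append(entry["groupName"])
    return flagged

# Simulated generation result carrying a filter annotation.
sample = {
    "text": "...",
    "aiFilter": [
        {"groupName": "curse", "score": "0"},
        {"groupName": "unsafeContents", "score": "2"},
    ],
}

flagged = flag_inappropriate(sample)
if flagged:
    print("Warning: output flagged by AI Filter:", flagged)
```

    A service app could use a check like this to withhold or mask a flagged result before showing it to end users and to report it to NAVER, as Chapter 3 requires.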

    When using CLOVA Studio, users can prevent their AI services from harming human-centered values or outputting expressions that undermine diversity through their own action plans as well as the means provided by NAVER. They must also strive to ensure that outputs do not threaten human life and bodily safety or infringe on personal information and privacy.

    Although NAVER and its users are aware that the outputs of AI services cannot be completely predicted in advance, NAVER will continue to improve its AI ethics practices. Please note in advance that not all details of this improvement work may be communicated to users.

    Chapter 3 Obligations of NAVER in providing CLOVA Studio and obligations of users

    1. Although the output of AI services cannot be completely controlled in advance, NAVER and its users must strive to reduce potential risks by complying with the NAVER AI Code of Ethics and policies. Therefore, NAVER fulfills the following obligations to users to help them understand, from an AI ethics perspective, the various situations they may encounter while using CLOVA Studio, and to help prevent problems.
      • We provide an explanation of the NAVER AI Code of Ethics and policies that apply to CLOVA Studio.
      • When reviewing and approving CLOVA Studio service apps, we suggest improvements to comply with NAVER AI ethical standards and policies.
      • The CLOVA Studio AI Ethics Guide specifies the obligations that NAVER and its users must abide by.
      • When you use CLOVA Studio, we provide technical tools, such as the AI Filter function, for implementing the NAVER AI Code of Ethics and policies.
      • If we receive inquiries about AI ethics from users, we will communicate and take reasonable measures to improve the relevant areas.
    2. Users are responsible for the following items when using CLOVA Studio.
      • Users are prohibited from maliciously using CLOVA Studio. Users must also not pose any reputational risk to NAVER or CLOVA Studio through malicious use. Malicious use typically means intentionally generating results that violate the NAVER AI Code of Ethics and the CLOVA Studio AI Ethics Guide, and also includes problems caused by users violating the obligations described in Chapter 3.
      • Users are obliged to use AI Filter when using CLOVA Studio. However, users must notify NAVER immediately upon discovering "inappropriate results determined to be problematic or potentially problematic" that occur despite the use of AI Filter. Through this, users are obliged to actively cooperate so that NAVER can improve the relevant areas.
      • Users may disclose information, such as output results from CLOVA Studio, to third parties or the public only through their own service and within the scope of use agreed with NAVER under the conditions of service app review and issuance approval. In all other cases, information about CLOVA Studio may not be disclosed to third parties or external parties without NAVER's prior written consent; in case of violation, use of the CLOVA Studio service may be suspended. Users are obligated to actively cooperate with NAVER to resolve and communicate about issues related to publicly disclosed results.
