Setting up access control with Ranger plugin

This feature is available in a VPC environment.

You can implement security rules for your big data ecosystem using Apache Ranger. Ranger lets you define and enforce security policies consistently across all Hadoop components.

Activate Ranger plugin

  1. Access the Ambari UI, click Services > Ranger > Configs > Ranger Plugin, and then change the settings.

    • The plugins that can be enabled depend on the services installed in the cluster. (Default: plugin enabled)
    • Change a plugin's status to ON to manage its security policies with Ranger. In this example, enable the HDFS, Hive, and YARN plugins and save the changes.
  2. The HDFS, Hive, and YARN services require a restart. Click the [Restart] button at the top right to apply the changes. A way to confirm that the plugins are enabled is sketched after this list.

chadoop-3-8-02_en.png

  3. Click the [OK] button when the Dependent Configurations prompt is displayed.
    chadoop-3-8-04_en.png
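
After the services restart, you can optionally confirm that the plugin setting was applied by reading the relevant config type through the Ambari REST API. This is a minimal sketch, not part of the Cloud Hadoop procedure: it assumes the Ambari server is reachable on port 8080 and that the cluster uses the standard ranger-hdfs-plugin-properties config type; replace <AMBARI-DOMAIN>, <CLUSTER-NAME>, <password>, and <TAG> with your own values.

```
# List the desired config tags for the cluster and note the tag of
# ranger-hdfs-plugin-properties (assumption: Ambari admin credentials, default port 8080)
curl -s -u admin:<password> \
  "http://<AMBARI-DOMAIN>:8080/api/v1/clusters/<CLUSTER-NAME>?fields=Clusters/desired_configs"

# Read that config version; ranger-hdfs-plugin-enabled should be Yes
# once the plugin has been turned ON and the services restarted
curl -s -u admin:<password> \
  "http://<AMBARI-DOMAIN>:8080/api/v1/clusters/<CLUSTER-NAME>/configurations?type=ranger-hdfs-plugin-properties&tag=<TAG>"
```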

Ranger Admin UI

  1. Access SSL VPN.

  2. From the cluster list in the Cloud Hadoop console, select the cluster whose Ranger admin UI you want to access, and then click [View by application] > Ranger.

    chadoop-3-8-03_en.png

    You can also access the Ranger admin UI directly through the URL below.

     ```
     https://<PUBLIC-DOMAIN>:6182/
     ```
    

    chadoop-3-8-05_en.png

    Note

    In Cloud Hadoop version 1.3 or earlier, the ID and password of the Ranger UI account are admin and admin, respectively.
    In Cloud Hadoop version 1.4 or later, the ID is admin and the password is the {password entered by the user}.

  3. Once you access the Ranger admin UI, check which services have policies applied.

    • You can see policies for HDFS, Hive, and YARN, the services whose plugins were enabled in the activation step above.

    chadoop-3-8-06_en.png

  4. Click the HDFS plugin, and view the rules of the default policy on the List of Policies page.

    chadoop-3-8-07_en.png

    • You can see on the details page that the hdfs and ambari-qa users have read, write, and execute permissions for all paths. The same services and policies can also be inspected through the Ranger REST API, as sketched after this list.

      chadoop-3-8-08_en.png
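
The service and policy views in the UI can also be read through the Ranger admin REST API. The sketch below is a read-only example under these assumptions: the same https://<PUBLIC-DOMAIN>:6182 endpoint and admin credentials described above, a self-signed certificate (hence -k), and a placeholder <HDFS-SERVICE-NAME> that you replace with the HDFS service name shown in the UI.

```
# List the services (repositories) registered by the enabled plugins
curl -s -k -u admin:<password> \
  "https://<PUBLIC-DOMAIN>:6182/service/public/v2/api/service"

# List the policies of the HDFS service, including the default policy
# that grants hdfs and ambari-qa full access
curl -s -k -u admin:<password> \
  "https://<PUBLIC-DOMAIN>:6182/service/public/v2/api/service/<HDFS-SERVICE-NAME>/policy"
```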

Create Ranger policy

  1. Click the HDFS plugin from the initial screen of the Ranger admin UI, and then click the [Add New Policy] button on the List of Policies page.
    chadoop-3-8-09_en.png

  2. Set the new rules as shown below, and then click the [Add] button. An equivalent call to the Ranger REST API is sketched after the following note.

    • You can enter a different path under Resource Path. In this example, the HDFS home directory of the admin account is used. The [Recursive] toggle must be enabled to apply the permissions to all files and subdirectories under the specified directory.
    • Select ncloud in the Select User field so that ncloud, the SSH connection account, can access this path.
    • In this example, all read, write, and execute permissions have been granted.

    chadoop-3-8-10_en.png

Note

Cloud Hadoop creates an HDFS home directory by default for the admin account (e.g., suewoon) configured during installation.
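
If you prefer to script policy creation instead of using the UI, the same policy can be created through the Ranger admin REST API. This is a minimal sketch under the following assumptions: the endpoint and credentials described earlier, a self-signed certificate (-k), the placeholder <HDFS-SERVICE-NAME> for the HDFS service name shown in the UI, and the example path /user/suewoon with a hypothetical policy name ncloud-home-access, all of which you would replace with your own values.

```
# Create an HDFS policy that grants the ncloud user read/write/execute
# on /user/suewoon and everything beneath it (isRecursive: true)
curl -s -k -u admin:<password> \
  -H "Content-Type: application/json" \
  -X POST "https://<PUBLIC-DOMAIN>:6182/service/public/v2/api/policy" \
  -d '{
        "service": "<HDFS-SERVICE-NAME>",
        "name": "ncloud-home-access",
        "isEnabled": true,
        "resources": {
          "path": { "values": ["/user/suewoon"], "isRecursive": true }
        },
        "policyItems": [
          {
            "users": ["ncloud"],
            "accesses": [
              { "type": "read",    "isAllowed": true },
              { "type": "write",   "isAllowed": true },
              { "type": "execute", "isAllowed": true }
            ]
          }
        ]
      }'
```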

  3. Use the ls command to check that the rule you created is applied correctly, as shown in the sketch below.

    • You can connect to any node in the cluster.
    • Connect to the node via SSH as the ncloud account, and create a new directory under /user/suewoon with the mkdir command. Then check that the directory was created successfully with the ls command.

    chadoop-3-8-11_en.png
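
A minimal example of this check from a cluster node, assuming you are logged in as the ncloud account and using the example path /user/suewoon with a hypothetical directory name ranger_test:

```
# Create a test directory under the path covered by the new policy;
# this should succeed once the plugin has synced the policy
hdfs dfs -mkdir /user/suewoon/ranger_test

# List the directory to confirm that ranger_test was created
hdfs dfs -ls /user/suewoon
```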

