The latest service changes have not yet been reflected in this content. We will update the content as soon as possible. Please refer to the Korean version for information on the latest updates.
Available in VPC
You can implement security rules for the big data ecosystem with Apache Ranger. The Ranger project enables security policies to be defined and enforced in a uniform way across all Hadoop applications.
Enabling the Ranger plugin
The plugins you can enable depend on the services installed in your cluster. (Plugins are disabled by default; in version 2.3 or higher, they are enabled when Kerberos authentication is configured.)
To manage security policies with Ranger, you must change the plugin status to ON. You can enable plugins for HDFS, Hive, YARN, and other services.
- After accessing the Ambari UI, go to Services > Ranger > [CONFIGS] > RANGER PLUGIN, set HDFS Ranger Plugin to the ON state, and save the configuration. Enabling the plugin here is required to manage permissions with Ranger.

- When the HDFS Ranger Plugin is enabled, the Enable Ranger for HDFS checkbox is automatically selected in Services > HDFS > [CONFIGS] > ADVANCED > Advanced ranger-hdfs-plugin-properties.

- Set the value of dfs.permissions.enabled in the Advanced hdfs-site item to true. This setting is required for access events to be recorded in the Ranger audit log.
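For reference, this corresponds to the following property in hdfs-site.xml. This is a sketch of the resulting configuration; in practice, change the value through the Ambari UI rather than editing the file directly:

```xml
<!-- hdfs-site.xml: enable HDFS permission checking so that access
     decisions are enforced and can be recorded in the Ranger audit log -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
</property>
```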

- The HDFS, HIVE, YARN, and PRESTO services require a restart. Click [ACTIONS] > Restart All in the top right corner, then click [CONFIRM RESTART ALL] in the pop-up window to apply the changed settings.

- On the left sidebar, click [...] > Restart All Required to restart all components that require a restart.
- When the Dependent Configurations prompt appears, click the [OK] button.

Ranger Admin UI
- Access SSL VPN.
- For SSL VPN information, see Access a cluster node with SSH
- Select a Cloud Hadoop cluster to access the Ranger Admin UI and click [View by Application] > Ranger Web UI.
- For more information about accessing the Ranger UI, see View by application
Alternatively, check the domain address on the cluster details page in the console and access the Ranger Admin UI directly with the URL below:
https://{Domain address}:6182/

- The Ranger UI account (ID/Password) for Cloud Hadoop version 1.3 is set to admin/admin.
- The Ranger UI account (ID/Password) for Cloud Hadoop version 1.4 or higher is set to admin/{password entered by the user}.
- Access the Ranger Admin UI to see which services the policies are applied to.
- When you access the Ranger Admin UI, you can see that HDFS, HIVE, and YARN policies already exist because the plugins were enabled in the previous steps.

- Click plugins in HDFS and check the List of Policies screen to see the rules in the policy created by default.

- Click Action > and verify that the Select User has Read, Write, and Execute permissions for all paths.

Create a Ranger Policy
- On the first screen of the Ranger Admin UI, click the [{Cluster name}_hadoop] policy, which is shown by default in the HDFS plugin.

- Select the [Add New Policy] button and add a policy as shown below.
- You can enter a different path for Resource Path. Here, we used the HDFS home directory of the administrator account. Set the [Recursive] button status to active to apply the permissions to all files and subdirectories under the specified directory.
- Select sshuser for Select User so that the SSH connection account, sshuser, can access this path.
- Here, we granted full Read, Write, and Execute permissions.


Cloud Hadoop creates an HDFS home directory by default for the cluster administrator account (such as df-test17) that you set up during installation.
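A policy like the one above can also be expressed as the JSON body accepted by the Ranger Admin REST API (`POST /service/public/v2/api/policy`). This is a sketch under assumptions: the service name, policy name, path, and user below are placeholders you would replace with your own values:

```json
{
  "service": "{Cluster name}_hadoop",
  "name": "sshuser-home-policy",
  "resources": {
    "path": { "values": ["/user/{Cluster account name}"], "isRecursive": true }
  },
  "policyItems": [
    {
      "users": ["sshuser"],
      "accesses": [
        { "type": "read",    "isAllowed": true },
        { "type": "write",   "isAllowed": true },
        { "type": "execute", "isAllowed": true }
      ]
    }
  ]
}
```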
- Verify that the created rule is applied with the ls command.
- You can access any node in the cluster.
- Connect to the node over SSH and use the mkdir command as the sshuser account to create a new directory in the /user/{Cluster account name} directory, as shown below.
- Test that the directory was created correctly with the ls command.
$ hadoop fs -mkdir /user/{Cluster account name}/tmp
$ hadoop fs -ls /user/{Cluster account name}

- You can view access attempt logs through the Ranger Audit UI.

- Cloud Hadoop version 1.8 additionally provides the Presto Ranger Plugin.

Trino access permissions management
Trino access control must be configured in order at the catalog, schema, and table levels. To control permissions at the schema level, you must first establish access control at the catalog level, so create a catalog-level policy before creating a separate schema-level policy.
To control permissions at the catalog level, follow these steps:

- Select Ranger Web UI > Trino > Add New Policy.
- Enter hive in catalog.
- Enter a new user to grant permissions to in Allow Conditions > Select User.
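The catalog-level policy from the steps above can likewise be sketched as a Ranger REST API JSON body. This is a hedged example: the service name is a placeholder, the user is illustrative, and the access type shown (select) is an assumption; use the access types your Trino Ranger service definition actually offers:

```json
{
  "service": "{Cluster name}_trino",
  "name": "hive-catalog-policy",
  "resources": {
    "catalog": { "values": ["hive"] }
  },
  "policyItems": [
    {
      "users": ["sshuser"],
      "accesses": [ { "type": "select", "isAllowed": true } ]
    }
  ]
}
```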