Available in VPC
You can manage a Cloud Hadoop cluster's information and status using the web UIs provided by NAVER Cloud Platform's Cloud Hadoop, such as Ambari and Hue.
Some web UIs, such as the HDFS NameNode UI, can only be accessed through SSH tunneling. For more information, see Access Web UI using tunneling.
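As a rough sketch, an SSH tunneling command for a UI that is not directly exposed can be composed as follows. The key path, user name, domain, and ports below are hypothetical placeholders; use the authentication key and domain from your own cluster details.

```shell
# Hypothetical sketch: compose an SSH tunnel command that forwards a local
# port to the HDFS NameNode UI port on the cluster. All values are placeholders.
KEY="$HOME/.ssh/cluster-key.pem"     # authentication key (.pem) for the cluster
DOMAIN="edge.example-cluster.com"    # domain shown in the cluster details
LOCAL_PORT=8081                      # any free port on the local device
REMOTE_PORT=8421                     # e.g., the HDFS NameNode UI port
echo "ssh -i $KEY -L $LOCAL_PORT:localhost:$REMOTE_PORT sshuser@$DOMAIN"
# Run the printed command, then open http://localhost:8081 in a local browser.
```

See the Access Web UI using tunneling guide for the exact procedure for your environment.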
Preliminary tasks
To access a cluster node, the following preparations need to be made in advance:
| Item | Description | Guide |
|---|---|---|
| Set SSL VPN | Secure access from the outside to the network configured within NAVER Cloud Platform. | Set SSL VPN |
| Set ACG | Add the fixed IP of the device that will access the cluster to the cluster's ACG settings, along with the allowed port for the target page. | Set firewall (ACG) |
| Authentication key | Private key (.pem) required for accessing the cluster. | Manage authentication key for direct cluster access |
| Domain | Domain required for access to the cluster node. You can view it in the domain item of the cluster details. | Check cluster details |
1. Set SSL VPN
SSL VPN must be set to ensure secure access from outside to the network configured within NAVER Cloud Platform.
For more information about setting up SSL VPN, see Set SSL VPN.
2. Set ACG rules
To change the ACG rules of a cluster:
- In the VPC environment on the NAVER Cloud Platform console, navigate to Services > Compute > Server > ACG.
- Select the ACG of the cluster you want to access and click [ACG settings].
- Enter the following 4 items and add the ACG rule:
  - Protocol: TCP
  - Access source: IP of the local device that communicates with SSH
  - Allowed port: 8443 for Ambari, 8421 for HDFS NameNode
  - Note (optional)

Application web UI list by cluster version and type
The following is the list of application web UIs provided for each version and type of Cloud Hadoop cluster.
- Cloud Hadoop cluster versions 1.3 and 1.4
| Cluster version | Cluster type | Cluster add-on | Application Web | Application Web Reference Site |
|---|---|---|---|---|
| 1.3, 1.4 | Core Hadoop | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
| | Core Hadoop | Provided by default | Hue Admin | https://gethue.com/ |
| | Core Hadoop | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
| | Core Hadoop | Provided by default | Ranger | https://ranger.apache.org/ |
| | HBase | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
| | HBase | Provided by default | Ranger | https://ranger.apache.org/ |
| | Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
| | Spark | Provided by default | Hue Admin | https://gethue.com/ |
| | Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
| | Spark | Provided by default | Ranger | https://ranger.apache.org/ |
| | Presto | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
| | Presto | Provided by default | Hue Admin | https://gethue.com/ |
| | Presto | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
| | Presto | Provided by default | Presto Coordinator | https://prestodb.io/ |
| | Presto | Provided by default | Ranger | https://ranger.apache.org/ |
- Cloud Hadoop cluster versions 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, and 2.3
| Cluster version | Cluster type | Cluster add-on | Application Web | Application Web Reference Site |
|---|---|---|---|---|
| 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3 | Core Hadoop with Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Hue Admin | https://gethue.com/ |
| | Core Hadoop with Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Ranger | https://ranger.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Namenode | https://hadoop.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Yarn Timeline Server | https://hadoop.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Yarn Resource Manager | https://hadoop.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Tez | https://tez.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Oozie | https://oozie.apache.org/ |
| | Core Hadoop with Spark | Provided by default | Spark History Server | https://spark.apache.org/ |
| | Core Hadoop with Spark | Presto | Presto Coordinator | https://prestodb.io/ |
| | Core Hadoop with Spark | HBase | HBase Master | https://hbase.apache.org/ |
| | Core Hadoop with Spark | Impala | Impala Server, Impala Statestore, Impala Catalog | https://impala.apache.org/ |
| | Core Hadoop with Spark | Kudu | Kudu Master | https://kudu.apache.org/ |
| | Core Hadoop with Spark | Trino | Trino Coordinator | https://trino.io/ |
| | Core Hadoop with Spark | NiFi | NiFi | https://nifi.apache.org/ |
The File Browser upload feature of the NameNode UI is not available in Cloud Hadoop 1.5 and higher. Use Hue to upload files.
Access Web UI
You can access each web UI using DNS.
From Cloud Hadoop 2.1 and higher, you must complete the NCloud account login authentication to access the web UI due to SSO integration.
To access the available web UIs:
- In the VPC environment on the NAVER Cloud Platform console, navigate to Services > Big Data & Analytics > Cloud Hadoop.
- From the cluster list, select the cluster whose web UI you want to access, and then click [View by application].
- Click the web UI link in the note field of the popup window's web UI list.
- Go to the NCloud login authentication page and access the web UI by logging in with your NCloud account.

Due to a reinforced SSL security policy, an error may occur when accessing a URL with a self-signed certificate in macOS Catalina or Chrome. Click an empty space on the error screen and type thisisunsafe on the keyboard to proceed.

The following warning message is displayed if you are using a Mozilla Firefox browser. Click [Advanced], and then click [Accept the risk and continue].


After executing Kerberize, the Kerberos authentication is added when accessing the web UI page. For more information about Kerberize, see the Secure Hadoop configuration (optional) guide.
Access through private IP
With SSL VPN connected, you can access each node using its private IP. The Quick Links in each Ambari UI menu use the FQDN rather than the host's private IP; replace the FQDN in the URL with the private IP to access it.
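As a hypothetical example of this FQDN-to-private-IP replacement, the host part of a Quick Link URL can be swapped out as shown below. The URL, port, and IP are placeholders, not real cluster values.

```shell
# Hypothetical example: rewrite an Ambari Quick Link URL so that the FQDN
# is replaced by the node's private IP. Both values below are placeholders.
QUICK_LINK="http://m-001-example-hadoop.cloudhadoop.com:50070/dfshealth.html"
PRIVATE_IP="172.16.0.6"   # the node's private IP, reachable over SSL VPN
# Substitute the host portion (between "//" and ":") with the private IP.
REWRITTEN=$(echo "$QUICK_LINK" | sed -E "s#//[^:/]+#//$PRIVATE_IP#")
echo "$REWRITTEN"   # http://172.16.0.6:50070/dfshealth.html
```

You can then open the rewritten URL in a browser on the SSL VPN-connected device.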
Change password
Ambari
To set and change the Ambari UI access password, click [Manage cluster] > Initialize cluster admin password on the Cloud Hadoop console.
For more information, see Initialize cluster admin password.

Hue
To change the password for accessing the Hue UI:
- Run PuTTY and access the edge node through SSH. For more information, see the Access cluster node through SSH guide.
- Run the following commands to change the password:

```
$ pwd
/usr/hdp/3.1.0.0-78/hue/build/env/bin    # Cloud Hadoop 1.x
/usr/nch/3.1.0.0-78/hue/build/env/bin    # Cloud Hadoop 2.x
$ sudo -s
$ cd /usr/nch/3.1.0.0-78/hue/build/env/bin
$ echo "from django.contrib.auth.models import User; u = User.objects.get(username='existing user name'); u.set_password('new password'); u.save()" | ./hue shell
```

Or, change the password using the hue changepassword command as follows:

```
$ cd /usr/nch/3.1.0.0-78/hue/build/env/bin
$ sudo ./hue changepassword 'existing user name'
Changing password for user 'existing user name'
Password:
Password (again):
Password changed successfully for user 'existing user name'
```
Zeppelin Notebook
In Cloud Hadoop, Zeppelin Notebooks are managed by Ambari.
Therefore, it is recommended to access the Ambari UI and perform tasks rather than directly accessing the cluster to change files and run scripts.
To change your Zeppelin Notebook password:
- After accessing the Ambari UI, log in with the cluster admin account ID and password.
- Click Zeppelin Notebook from the left-side bar.
- Click the [CONFIGS] tab at the top.
- Click the Advanced-zeppelin-shiro-ini item, and edit the password.

- Click [Save] at the bottom right corner.
- Click [Actions] > Restart All at the upper right, and click [Confirm Restart All] in the popup window.
After following the steps above, access Zeppelin Notebook to confirm that the password change has been applied.
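For reference, the Advanced zeppelin-shiro-ini item follows Apache Shiro's ini format, where each line in the [users] section maps a user name to a password and role. The excerpt below is a hypothetical sketch with placeholder values; depending on your configuration, the password may be stored hashed rather than in plain text.

```ini
# Hypothetical excerpt of zeppelin-shiro-ini; edit the password value
# after "=" (before the comma) to change a user's password.
[users]
admin = newP@ssw0rd1!, admin
```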