Access web UI by application


Available in VPC

You can check and manage a Cloud Hadoop cluster's information and status through the web UIs (such as Ambari and Hue) provided by NAVER Cloud Platform's Cloud Hadoop.

Note

Some web UIs, such as the HDFS NameNode UI, can only be accessed through SSH tunneling. For more information, see Access Web UI using tunneling.

Preliminary task

In order to access a cluster node, the following preparations need to be made in advance:

| Item | Description | Guide |
| --- | --- | --- |
| Set SSL VPN | Secure access from the outside to the network configured within NAVER Cloud Platform<br>• Add the VPN range to the subnet's route table<br>• Download, install, and run the SSL VPN client | Set SSL VPN |
| Set ACG | Add the source IP and allowed ports to the cluster's ACG<br>• Source: your fixed IP (click [myIp] to fill it in)<br>• Allowed ports: 8443 for Ambari, 8421 for the HDFS NameNode | Firewall settings (ACG) |
| Authentication key | Private key (.pem) required for direct access to the cluster | Manage authentication key |
| Domain | Domain required to access the cluster node; you can view it in the Domain item of the cluster details | Check cluster details |

1. Set SSL VPN

SSL VPN must be set to ensure secure access from outside to the network configured within NAVER Cloud Platform.
For more information on SSL VPN settings, see Set SSL VPN.

2. Set ACG rules

To change the ACG rules of a cluster:

  1. In the VPC environment of the NAVER Cloud Platform console, navigate to Services > Compute > Server > ACG, in that order.
  2. Select the ACG of the cluster you want to access and click the [ACG settings] button.
  3. Enter the following four items and add the ACG rule:
    • Protocol: TCP
    • Access source: IP of the local device used for SSH communication.
    • Allowed port: 8443 for Ambari, and 8421 for HDFS NameNode.
    • Note (optional)
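Once the SSL VPN connection and the ACG rule are in place, you can sanity-check that a port is reachable from your local device. The following is a rough sketch using bash's /dev/tcp device (the host below is a placeholder; substitute your cluster's domain):

```shell
# Prints "open" or "closed" for a host/port pair.
# Uses bash's /dev/tcp virtual device; requires the coreutils `timeout` command.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Replace 127.0.0.1 with your cluster domain; 8443 is the Ambari port from the ACG rule above
check_port "127.0.0.1" 8443
```

If the result is `closed`, re-check the SSL VPN connection and the ACG's allowed ports before trying the web UI.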
Note
  • Application web UI list by cluster version and type
    The following lists the application web UIs provided for each version and type of Cloud Hadoop cluster.

  • Cloud Hadoop cluster versions 1.3 and 1.4.

| Cluster version | Cluster type | Cluster add-on | Application Web | Reference site |
| --- | --- | --- | --- | --- |
| 1.3, 1.4 | Core Hadoop | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
|  | Core Hadoop | Provided by default | Hue Admin | https://gethue.com/ |
|  | Core Hadoop | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
|  | Core Hadoop | Provided by default | Ranger | https://ranger.apache.org/ |
|  | HBase | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
|  | HBase | Provided by default | Ranger | https://ranger.apache.org/ |
|  | Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
|  | Spark | Provided by default | Hue Admin | https://gethue.com/ |
|  | Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
|  | Spark | Provided by default | Ranger | https://ranger.apache.org/ |
|  | Presto | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
|  | Presto | Provided by default | Hue Admin | https://gethue.com/ |
|  | Presto | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
|  | Presto | Provided by default | Presto Coordinator | https://prestodb.io/ |
|  | Presto | Provided by default | Ranger | https://ranger.apache.org/ |
  • Cloud Hadoop cluster versions 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, and 2.3
| Cluster version | Cluster type | Cluster add-on | Application Web | Reference site |
| --- | --- | --- | --- | --- |
| 1.5 to 2.3 | Core Hadoop with Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Hue Admin | https://gethue.com/ |
|  | Core Hadoop with Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Ranger | https://ranger.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Namenode | https://hadoop.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Yarn Timeline Server | https://hadoop.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Yarn Resource Manager | https://hadoop.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Tez | https://tez.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Oozie | https://oozie.apache.org/ |
|  | Core Hadoop with Spark | Provided by default | Spark History Server | https://spark.apache.org/ |
|  | Core Hadoop with Spark | Presto | Presto Coordinator | https://prestodb.io/ |
|  | Core Hadoop with Spark | HBase | HBase Master | https://hbase.apache.org/ |
|  | Core Hadoop with Spark | Impala | Impala Server, Impala Statestore, Impala Catalog | https://impala.apache.org/ |
|  | Core Hadoop with Spark | Kudu | Kudu Master | https://kudu.apache.org/ |
|  | Core Hadoop with Spark | Trino | Trino Coordinator | https://trino.io/ |
|  | Core Hadoop with Spark | NiFi | NiFi | https://nifi.apache.org/ |
Caution

You cannot use the NameNode UI's File Browser upload feature in Cloud Hadoop 1.5 and higher. Use Hue to upload files instead.

Access Web UI

You can access each web UI using DNS.

Note

From Cloud Hadoop 2.1 and higher, you must complete the NCloud account login authentication to access the web UI due to SSO integration.

To access the available web UIs:

  1. In the VPC environment of the NAVER Cloud Platform console, navigate to Services > Big Data & Analytics > Cloud Hadoop, in that order.
  2. From the cluster list, select the cluster whose web UI you want to access, and then click the [View by application] button.
  3. In the popup window's web UI list, click the web UI link in the note field.
  4. Go to the NCloud login authentication page and access the web UI with NCloud account login.
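The links in the popup all point at the cluster domain. As a rough sketch, the Ambari and NameNode endpoints can be composed from that domain and the ports opened in the ACG; the domain below is a hypothetical placeholder, so use the Domain value from your own cluster details:

```shell
# Hypothetical cluster domain; replace with the Domain item from the cluster details
DOMAIN="pub-example.hadoop.ntruss.com"

# Ports from the ACG rules: 8443 for Ambari, 8421 for the HDFS NameNode UI
AMBARI_URL="https://${DOMAIN}:8443"
NAMENODE_URL="https://${DOMAIN}:8421"

echo "$AMBARI_URL"
echo "$NAMENODE_URL"
```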
Caution

Due to a reinforced SSL security policy, an error may occur when accessing a URL with a self-signed certificate if you are using macOS Catalina or Chrome. Click an empty area of the error screen and type thisisunsafe on the keyboard to proceed.


The following warning message is displayed if you are using a Mozilla Firefox browser. Click the [Advanced] button, and then click the [Accept the risk and continue] button.


Note

After executing Kerberize, the Kerberos authentication is added when accessing the web UI page. For more information on Kerberize, see the Secure Hadoop configuration (optional) guide.

Access through Private IP

With the SSL VPN connected, you can also access each node using its private IP. Note that the Quick Links in the Ambari UI menus use the host's FQDN rather than its private IP, so replace the FQDN in the link with the private IP before accessing it.
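For example, rewriting a Quick Links URL to use the private IP is a simple string substitution; the FQDN and IP below are hypothetical placeholders:

```shell
# Quick Links URL as shown in Ambari (hypothetical FQDN)
QUICKLINK="http://m-001-example-hd.cloudhadoop.com:8088/ui2"
# The node's private IP as shown in the console (hypothetical)
PRIVATE_IP="10.0.1.5"

# Swap the FQDN for the private IP; scheme, port, and path stay the same
URL="${QUICKLINK/m-001-example-hd.cloudhadoop.com/$PRIVATE_IP}"
echo "$URL"
```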

Change password

Ambari

To set and change the Ambari UI access password, navigate to [Manage cluster] > Initialize cluster admin password on the Cloud Hadoop console.
For more information, see Initialize cluster admin password.

Hue

Hue does not sync with LDAP automatically. In a Secure Hadoop environment, when you create Hue for the first time, you must sync LDAP accounts through the Hue UI.

To sync LDAP accounts:

  1. Change the login type from LDAP to Local.
  2. Click [Manage Users].
  3. Click [Sync LDAP users/groups] to start synchronization.

Change password

To change the password for accessing the HUE UI:

  1. Run PuTTY and access the edge node through SSH. For more information, see the Access cluster node through SSH guide.
  2. Change a password by running the appropriate commands depending on the account type (LOCAL vs. LDAP):
  • LOCAL account

    $ cd /usr/hdp/3.1.0.0-78/hue/build/env/bin  # Cloud Hadoop 1.x
    $ cd /usr/nch/3.1.0.0-78/hue/build/env/bin  # Cloud Hadoop 2.x
    
    $ sudo -s   # switch to root
    
    # Reset the password through the Hue (Django) shell
    $ echo "from django.contrib.auth.models import User;
    u = User.objects.get(username='existing user name');
    u.set_password('new password');
    u.save()
    " | ./hue shell
    

    Or, change the password using the hue changepassword command as follows:

    $ cd /usr/hdp/3.1.0.0-78/hue/build/env/bin  # Cloud Hadoop 1.x
    $ cd /usr/nch/3.1.0.0-78/hue/build/env/bin  # Cloud Hadoop 2.x
    
    $ sudo ./hue changepassword 'existing user name'
    Changing password for user 'existing user name'
    Password:
    Password (again):
    Password changed successfully for user 'existing user name'      
    
  • LDAP account

    • Use the ldapmodify command to change the password:
      # Hash the new password and write it to a modify LDIF
      ldif=/tmp/modify-account.ldif
      HASH_USER_PW=$(slappasswd -h "{SSHA}" -s "new_password")
      
      cat <<EOF | tee $ldif
      dn: uid=account name,ou=users,dc=USER,dc=GUIDE
      changetype: modify
      replace: userPassword
      userPassword: ${HASH_USER_PW}
      EOF
      
      # Apply the change, binding as the LDAP root account
      ldapmodify \
        -H ldap://localhost:389 \
        -D "cn=root,dc=USER,dc=GUIDE" \
        -w "Kerberos_password" \
        -f "$ldif"
      

Zeppelin Notebook

In Cloud Hadoop, Zeppelin Notebooks are managed by Ambari.
Therefore, it is recommended to access the Ambari UI and perform tasks rather than directly accessing the cluster to change files and run scripts.

To change your Zeppelin Notebook password:

  1. After accessing the Ambari UI, log in with the cluster admin account ID and password.
  2. Click Zeppelin Notebook from the left-side bar.
  3. Click the [CONFIGS] tab at the top.
  4. Click the Advanced zeppelin-shiro-ini item, and edit the password.
  5. Click the [Save] button at the bottom right corner.
  6. Click [Actions] > Restart All at the upper right, and click [Confirm Restart All] in the popup window.

If you follow the above sequence and access Zeppelin Notebook, you will see that the password change has been applied.
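For reference, the password edited in step 4 lives in the [users] block of the shiro configuration. A minimal sketch of that fragment, where the user name, password, and role are placeholders:

```ini
[users]
# format: user name = password, role
admin = new_password, admin
```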