UI access and password settings by service

    Available in VPC

    You can submit management tasks or applications through NAVER Cloud Platform's Cloud Hadoop console or web UI (Ambari, Hue, etc.).

    Note

    Some web UIs, such as the HDFS NameNode UI, can only be accessed via SSH tunneling. For more details, see Web UI connection using tunneling.
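    For reference, a tunneling connection generally takes the form below. This is a minimal sketch with placeholder values for the authentication key, account, edge node domain, and target node; see Web UI connection using tunneling for the exact host, account, and port to use for your cluster.

      # Minimal SSH tunneling sketch; all bracketed values are placeholders.
      # Forwards local port 8421 to the HDFS NameNode UI port on a master node,
      # so the UI is reachable at localhost:8421 in a browser while the tunnel is open.
      $ ssh -i ~/.ssh/<authentication-key>.pem \
            -L 8421:<master-node-private-ip>:8421 \
            <user>@<edge-node-domain>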

    Preparations

    In order to access a cluster node, the following preparations need to be made in advance:

    Item: Set SSL VPN
    Description: Secure access from the outside to the network configured within NAVER Cloud Platform
      • Add VPN range to the subnet's route table
      • Download, install, and run the SSL VPN client
      • Set SSL VPN
    Guide: SSL VPN user guide (VPC)

    Item: Set ACG
    Description: Add the allowed port of the access page and the fixed IP of the device needed for cluster access to the cluster's ACG
      • Access source: fixed IP of the user (click the [myIp] button to enter it)
      • Allowed port: 8443 for Ambari, 8421 for HDFS NameNode
    Guide: Firewall settings (ACG)

    Item: Authentication key
    Description: Private key (.pem) required for access to the cluster
    Guide: Managing authentication key for direct cluster connection

    Item: Domain
    Description: Domain required for access to the cluster node; the domain can be found in the cluster's detailed information
    Guide: Check cluster details

    1. Set SSL VPN

    SSL VPN must be set up so that secure access can be established from the outside to the network within NAVER Cloud Platform.
    For detailed information on SSL VPN settings, see Set SSL VPN.

    2. Set ACG rules

    The following describes how to change the ACG rules of a cluster.

    1. From the NAVER Cloud Platform console, click Services > Compute > Server > ACG, in order.
    2. Select the ACG of the cluster you want to access, and click the [ACG settings] button.
      cloudhadoop-server-acg1_ko
    3. Enter the four items below and add the ACG rule.
      • Protocol: TCP
      • Access source: IP of the local equipment used for SSH communication
      • Allowed port: 8443 for Ambari, and 8421 for HDFS NameNode
      • Note (optional)
        cloudhadoop-server-acg2_ko
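    After adding the rules, you can optionally confirm from the access source that the opened ports accept connections. The check below uses a generic tool; the domain is a placeholder for the domain shown in the cluster's detailed information.

      # <cluster-domain> is a placeholder; use the domain from the cluster details.
      $ nc -zv <cluster-domain> 8443    # Ambari web UI port
      $ nc -zv <cluster-domain> 8421    # HDFS NameNode web UI port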
    Note
    • Application web UI list by cluster version and type
      The following is the list of application web UIs provided for each version and type of Cloud Hadoop clusters.

    • Cloud Hadoop cluster version 1.3, 1.4

    Cluster type | Cluster add-on | Application web | Application web reference site
    Core Hadoop | Provided by default | Ambari Web Console | https://ambari.apache.org/
    Core Hadoop | Provided by default | Hue Admin | https://gethue.com/
    Core Hadoop | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/
    Core Hadoop | Provided by default | Ranger | https://ranger.apache.org
    HBase | Provided by default | Ambari Web Console | https://ambari.apache.org/
    HBase | Provided by default | Ranger | https://ranger.apache.org
    Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/
    Spark | Provided by default | Hue Admin | https://gethue.com/
    Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/
    Spark | Provided by default | Ranger | https://ranger.apache.org
    Presto | Provided by default | Ambari Web Console | https://ambari.apache.org/
    Presto | Provided by default | Hue Admin | https://gethue.com/
    Presto | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/
    Presto | Provided by default | Presto Coordinator | https://prestodb.io/
    Presto | Provided by default | Ranger | https://ranger.apache.org
    • Cloud Hadoop cluster version 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1
    Cluster type | Cluster add-on | Application web | Application web reference site
    Core Hadoop with Spark | Provided by default | Ambari Web Console | https://ambari.apache.org/
    Core Hadoop with Spark | Provided by default | Hue Admin | https://gethue.com/
    Core Hadoop with Spark | Provided by default | Zeppelin Notebook | https://zeppelin.apache.org/
    Core Hadoop with Spark | Provided by default | Ranger | https://ranger.apache.org/
    Core Hadoop with Spark | Provided by default | Namenode | https://hadoop.apache.org/
    Core Hadoop with Spark | Provided by default | Yarn Timeline Server | https://hadoop.apache.org/
    Core Hadoop with Spark | Provided by default | Yarn Resource Manager | https://hadoop.apache.org/
    Core Hadoop with Spark | Provided by default | Tez | https://tez.apache.org/
    Core Hadoop with Spark | Provided by default | Oozie | https://oozie.apache.org/
    Core Hadoop with Spark | Provided by default | Spark History Server | https://spark.apache.org/
    Core Hadoop with Spark | Presto | Presto Coordinator | https://prestodb.io/
    Core Hadoop with Spark | HBase | HBase Master | https://hbase.apache.org/
    Core Hadoop with Spark | Impala | Impala Server, Impala Statestore, Impala Catalog | https://impala.apache.org/
    Core Hadoop with Spark | Kudu | Kudu Master | https://kudu.apache.org/
    Core Hadoop with Spark | Trino | Trino Coordinator | https://trino.io/
    Core Hadoop with Spark | NiFi | NiFi | https://nifi.apache.org/
    Caution

    The File Browser upload feature provided by the NameNode UI cannot be used in Cloud Hadoop 1.5 and higher. Please use Hue for uploads.

    Web UI access

    You can access the web UI using DNS.

    Note

    From Cloud Hadoop 2.1 and higher, due to SSO integration, you must complete Ncloud account login authentication to access the web UIs.

    The following describes how to access the available web UIs:

    1. From NAVER Cloud Platform console, click the Services > Big Data & Analytics > Cloud Hadoop menus sequentially.
    2. From the cluster list, select the cluster whose web UI you want to access, and then click the [View by application] button.
    3. In the pop-up window's web UI list, click the web UI link in the remarks field.
    4. When the Ncloud login authentication page appears, log in with your Ncloud account and consent to the use of personal information to gain access.
      chadoop-sso-1-1.png
    Caution

    Due to the reinforced SSL security policy, an error may occur when accessing a URL with a self-signed certificate if you are using macOS Catalina or Chrome. Click an empty area of the error screen and type thisisunsafe on the keyboard to proceed.

    chadoop-3-3-06-vpc_en.png

    The following warning message will be displayed if you are using a Mozilla Firefox browser. Click the [Advanced] button, and then click the [Accept the risk and continue] button.

    chadoop-3-3-07-1-vpc_en.png

    chadoop-3-3-07-2-vpc_en.png

    Note

    After executing Kerberize, Kerberos authentication is added when accessing the web UI page. For detailed information about Kerberize, see Secure Hadoop configuration (optional) guide.

    Access via private IP

    With the SSL VPN connected, you can also access each node using its private IP. The Quick Links in each Ambari UI menu use the host's FQDN rather than its private IP, so replace the FQDN in the link with the node's private IP before accessing it.
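    For example, assuming a placeholder FQDN, private IP, and port, the replacement looks like the sketch below. Alternatively, you can map the FQDN to the private IP in your local hosts file so the Quick Links work unchanged.

      # Quick Links URL as generated by Ambari (placeholder host name and port):
      #   http://m-001-example-hd.cluster.local:8088/
      # Same URL rewritten with the node's private IP (placeholder IP):
      #   http://10.10.1.6:8088/
      # Or map the FQDN to the private IP locally instead of editing the URL:
      $ echo "10.10.1.6  m-001-example-hd.cluster.local" | sudo tee -a /etc/hosts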

    Change password

    Ambari

    To set or change the Ambari UI access password, click the [Manage cluster] > Initialize cluster admin password menu in the Cloud Hadoop console.
    For more details, see Initialize cluster admin password.
    chadoop-3-3-04_en.png

    Hue

    The following describes how to change the password for accessing the Hue UI:

    1. Run PuTTY and access the edge node via SSH (see the SSH cluster node access guide).

    2. Move to the directory where Hue is installed, and then run the command below to change the password. The installation path differs by Cloud Hadoop version.

      $ cd /usr/hdp/3.1.0.0-78/hue/build/env/bin   # Cloud Hadoop 1.x
      $ cd /usr/nch/3.1.0.0-78/hue/build/env/bin   # Cloud Hadoop 2.x

      # Change the password through the Hue shell.
      $ echo "from django.contrib.auth.models import User;
      u = User.objects.get(username='existing user name');
      u.set_password('new password');
      u.save()
      " | ./hue shell


      Or, use the hue changepassword command as below to change the password.

      $ sudo ./hue changepassword 'existing user name'
      Changing password for user 'existing user name'
      Password:
      Password (again):
      Password changed successfully for user 'existing user name'
      

    Zeppelin Notebook

    In Cloud Hadoop, Zeppelin Notebooks are managed by Ambari.
    Therefore, it is recommended to access the Ambari UI and perform tasks rather than directly accessing the cluster to change files and run scripts.

    Here's how to change your Zeppelin Notebook password.

    1. After accessing the Ambari UI, log in with the cluster admin account ID and password.
    2. Click Zeppelin Notebook from the left-side bar.
    3. Click the [CONFIGS] tab at the top.
    4. Click the Advanced zeppelin-shiro-ini item and edit the password.
      chadoop-3-3-zeppelin_ko
    5. Click the [Save] button at the bottom right corner.
    6. Click [Actions] > Restart All button at the upper right, and click [Confirm restart all].

    If you follow the above sequence and access Zeppelin Notebook, you will see that the password change has been applied.
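    For reference, the Advanced zeppelin-shiro-ini content follows Apache Shiro's INI format, in which local accounts are listed in the [users] section as user = password, role entries. The account name, password, and role below are examples only; edit the entry for the account you actually use.

      # Excerpt of a zeppelin-shiro-ini [users] section (example values only).
      # Format: <username> = <password>, <role1>, <role2>, ...
      [users]
      admin = NewPassword123!, admin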

