Using HUE

    Available in Classic

    HUE is a component available in the Core Hadoop, Spark, and Presto cluster types.
    This guide describes the HUE features in Cloud Hadoop clusters and how to use the Hive editor and the HUE browsers.

    HUE components

    HUE (Hadoop User Experience) is a web-based user interface used with Apache Hadoop clusters.
    HUE integrates with other Hadoop ecosystem components so that you can run Hive and Spark jobs from one place.

    Cloud Hadoop Cluster's HUE supports the following components:

    • Browser

      • Document: Shows workflows, queries, and script files saved in HUE.
      • File: Shows files saved in HDFS.
      • S3: Shows files stored in Object Storage buckets.
      • Table: Shows tables saved in Hive Warehouse.
      • Job: Shows the status and logs of Oozie jobs that have been run.
    • Editor

      • Hive: Runs Hive queries.
      • Scala, PySpark: Runs interactive statements like spark-shell.
      • Spark Submit Jar, Spark: Submits .jar and .py files as a Spark job.
      • Java: Executes .jar files via an Oozie workflow.
      • Distcp: Runs Distcp jobs via an Oozie workflow.
      • Shell: Executes .sh files via an Oozie workflow.
      • MapReduce: Runs MapReduce applications via an Oozie workflow.
    • Scheduler

      • Workflow: Creates an Oozie workflow.
      • Reservation: Schedules the created workflows.

    HUE access

    By default, HUE is installed in the Core Hadoop and Spark cluster types, and it can be accessed in the following two ways.

    Connect via the console's web UI list

    You can access the HUE web UI through View by application on the Cloud Hadoop console. Please refer to View by application for more information.

    Connect via domain

    You can access the HUE web UI via domain as follows.

    1. Please connect to the NAVER Cloud Platform console.
    2. Click Classic from the Platform menu to switch to the Classic environment.
    3. Click Services > Big Data & Analytics > Cloud Hadoop menus, in that order.
    4. Click the cluster item to view, and then check the domain address in Public domain in the displayed details page.
    5. Enter the domain address and port number in the web browser's address field as follows to open the HUE webpage.
      http://{domain address}:8000
      
    6. Once the login page is displayed in the browser, enter the admin account and password set upon cluster creation to log in.
      • Initializing the cluster admin account in the console doesn't reset the HUE password. The HUE password must be changed on the HUE webpage itself.
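The address from the steps above can be assembled and checked from a terminal before opening a browser. A minimal sketch, assuming a hypothetical domain address (replace it with the Public domain value from your cluster's details page):

```shell
# Hypothetical domain address; replace with the Public domain value
# shown on your cluster's details page.
DOMAIN="example-cluster.hadoop.com"

# HUE listens on port 8000, as shown in the step above.
HUE_URL="http://${DOMAIN}:8000"
echo "${HUE_URL}"

# Optionally confirm that HUE responds before opening the browser:
# curl -sS -o /dev/null -w "%{http_code}\n" "${HUE_URL}"
```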

    Execute Hive query

    Here's how to run a Hive query.

    1. Click the icon next to the [Query] button, and then click Editor > Hive to launch the Hive editor.
    2. Select a database to execute the query from the list in the editor window.
    3. Enter the query in the query editor window, and then click the [Run] button.
      • The query results are displayed in the Results tab.
      • You can check the list of queries executed in the Query history tab.
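The editor accepts standard HiveQL. The snippet below prints a small example query of the kind you might paste into the editor window; the table and column names are hypothetical:

```shell
# Sample HiveQL for the Hue editor. The table "employees" and column
# "dept" are hypothetical; replace them with objects in your database.
QUERY='SELECT dept, COUNT(*) AS cnt
FROM employees
GROUP BY dept
ORDER BY cnt DESC
LIMIT 10;'
echo "${QUERY}"
```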

    View browser

    Click the menu icon on the left of the top menu bar, and then click the browser you want in the browser area.

    • File browser

      • View HDFS files
      • Default HDFS directory address: hdfs://user/{account name}
      • You can navigate to another directory by clicking the account name or the root slash (/) in the path
      • [Create new]: Creates a new file or directory
      • [Upload]: Uploads a file to the current directory
    • S3 browser

      • View all buckets accessible with the user's API access key
      • Default S3 directory address: s3a://{bucket name}
      • You can navigate by clicking the root slash (/)
      • [Create new]: Creates a new file or directory
      • [Upload]: Uploads a file to the current directory
    • Table browser

      • View databases and tables created in Hive
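The default addresses used by the File and S3 browsers above follow a simple pattern. A small sketch with hypothetical account and bucket names:

```shell
# Hypothetical names; substitute your own account and bucket.
ACCOUNT="example"
BUCKET="my-bucket"

# Default location opened by the File browser (HDFS home directory).
HDFS_HOME="hdfs://user/${ACCOUNT}"
echo "${HDFS_HOME}"

# Default location opened by the S3 browser (Object Storage bucket root).
S3_ROOT="s3a://${BUCKET}"
echo "${S3_ROOT}"
```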
    Note

    HUE's Scala, PySpark, Spark Submit Jar, and Spark editors are only available when you select Spark as the cluster type when creating the cluster. If you want to use them in another cluster type, download and install Scala from the Scala homepage.

