Available in Classic
Hue is a component available on the Core Hadoop, Spark, and Presto cluster types.
This guide explains the features of Hue on a Cloud Hadoop cluster and how to use the Hive editor and the browsers in Hue.
Hue components
Hue (Hadoop User Experience) is a web-based user interface that works with Apache Hadoop clusters.
Bundled with other Hadoop ecosystem components, Hue can be used to run Hive queries and Spark jobs.
Hue on Cloud Hadoop clusters supports the following components:
- Browser
  - Document: View workflows, queries, and script files saved in Hue.
  - File: View files saved in HDFS.
  - S3: View files saved in the object storage bucket.
  - Table: View tables saved in the Hive warehouse.
  - Job: View the status and logs of executed Oozie jobs.
- Editor
  - Hive: Run Hive queries.
  - Scala, PySpark: Run interactive commands, like spark-shell.
  - Spark Submit Jar, Spark: Submit jar and .py files as Spark jobs.
  - Java: Run jar files through Oozie workflows.
  - Distcp: Run DistCp tasks through Oozie workflows.
  - Shell: Run .sh files through Oozie workflows.
  - MapReduce: Run MapReduce applications through Oozie workflows.
- Scheduler
  - Workflow: Create Oozie workflows.
  - Schedule: Run created workflows on a schedule.
Access Hue
Hue is installed on the Core Hadoop, Spark, and Presto cluster types and can be accessed in the following two ways:
Connect via the console's web UI list
You can access the Hue web UI through View by application on the Cloud Hadoop console. For more information, see View by application.
Connect using a domain
To access the Hue Web UI using a domain:
- From the NAVER Cloud Platform console, navigate to Services > Big Data & Analytics > Cloud Hadoop.
- Click the cluster you want to access, and check the public domain address in the details pane that appears.

- In your web browser's address bar, enter the public domain address and port number as follows to access the Hue web page:
  http://{Public domain}:8000
- Once the login page is displayed, log in with the admin account and password set when the cluster was created.
- Resetting the cluster administrator account from the console will not reset Hue account information. You must change your password on the Hue web page.
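The address format above can be expressed as a small helper. This is a minimal sketch; the domain below is a hypothetical placeholder, and Hue listens on port 8000 as described above:

```python
# Build the Hue web UI address from a cluster's public domain.
# The domain used in the example is a hypothetical placeholder.
def hue_url(public_domain: str, port: int = 8000) -> str:
    return f"http://{public_domain}:{port}"

print(hue_url("cluster-example.hadoop.ntruss.com"))
# -> http://cluster-example.hadoop.ntruss.com:8000
```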
Run Hive query
To run a Hive query:
- Click [Query], then click Editor > Hive to launch the Hive editor.
- In the editor, select the database you want to run the query against from the list.
- Enter the query in the Query Editor window, then click [Run].
- You can view the results of the query you ran in the Results tab.
- You can check the list of queries you have run in the Query history tab.
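The steps above come down to choosing a database and submitting a statement. As a minimal sketch with hypothetical database and table names, the snippet below shows a sample statement and the equivalent form that qualifies the table with the database name instead of selecting the database first:

```python
# A sample statement you might enter in the Hive editor.
# The database and table names here are hypothetical.
database = "default"
query = "SELECT col, COUNT(*) AS cnt FROM sample_table GROUP BY col LIMIT 10"

# Equivalent form that qualifies the table instead of selecting a database first:
qualified = query.replace("FROM sample_table", f"FROM {database}.sample_table")
print(qualified)
# -> SELECT col, COUNT(*) AS cnt FROM default.sample_table GROUP BY col LIMIT 10
```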

View browser
Click the menu icon on the left side of the top menu bar, then select the desired browser from the Browser section.
- File browser
- View HDFS files.
- Base directory address of HDFS: hdfs://user/accountname
- You can navigate to a directory by clicking the account name or the root slash (/) in the path.
- [Create new]: Create a new file or directory.
- [Upload]: Upload files to the current directory.
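The path navigation described above (clicking a path segment jumps to that directory) can be sketched as follows; the account name used is a hypothetical placeholder:

```python
# Turn an HDFS path into the breadcrumb targets the file browser exposes:
# clicking a segment navigates to the directory ending at that segment.
def breadcrumbs(path: str) -> list[str]:
    parts = [p for p in path.split("/") if p]
    return ["/"] + ["/" + "/".join(parts[: i + 1]) for i in range(len(parts))]

print(breadcrumbs("/user/example-account"))
# -> ['/', '/user', '/user/example-account']
```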

- S3 browser
- View all buckets that can be accessed with the user's API access key ID.
- Base directory address of S3: s3a://bucketname
- You can navigate by clicking the root slash (/) in the path.
- [Create new]: Create a new file or directory.
- [Upload]: Upload files to the current directory.
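Objects shown in the S3 browser are addressed with s3a:// URIs of the form above. As a small sketch, such an address splits into a bucket and an object key (the bucket and key below are hypothetical):

```python
# Split an s3a:// address into its bucket name and object key.
def parse_s3a(uri: str) -> tuple[str, str]:
    assert uri.startswith("s3a://"), "expected an s3a:// address"
    bucket, _, key = uri[len("s3a://"):].partition("/")
    return bucket, key

print(parse_s3a("s3a://example-bucket/data/sample.csv"))
# -> ('example-bucket', 'data/sample.csv')
```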

- Table browser
- View databases and tables created in Hive.

Hue's Scala, PySpark, Spark Submit Jar, and Spark editors are available only when Spark is selected as the cluster type at cluster creation. To use these editors on a different cluster type, download and install Scala from the Scala website.