Using HUE
Available in Classic
HUE is a component available in the Core Hadoop, Spark, and Presto cluster types.
This guide describes the HUE features in Cloud Hadoop clusters and how to use the Hive editor and the browsers in HUE.
HUE components
HUE (Hadoop User Experience) is a web-based user interface for Apache Hadoop clusters.
HUE is bundled with other Hadoop ecosystem components so that you can run Hive and Spark jobs from one place.
Cloud Hadoop Cluster's HUE supports the following components:
Browser
- Document: Shows workflows, queries, and script files saved in HUE.
- File: Shows files saved in HDFS.
- S3: Shows files stored in Object Storage buckets.
- Table: Shows tables saved in the Hive warehouse.
- Job: Shows the status and logs of Oozie jobs that have been run.
Editor
- Hive: Runs Hive queries.
- Scala, PySpark: Runs interactive statements, as in spark-shell.
- Spark Submit Jar, Spark: Submits .jar and .py files as a Spark job.
- Java: Executes .jar files via an Oozie workflow.
- Distcp: Runs Distcp jobs via an Oozie workflow.
- Shell: Executes .sh files via an Oozie workflow.
- MapReduce: Runs MapReduce applications via an Oozie workflow.
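For reference, the interactive statements that the Scala editor runs are the same kind you would type into spark-shell. A minimal sketch, assuming a cluster node with Spark on the PATH; the statement itself is illustrative and the guard keeps the snippet harmless where Spark is not installed:

```shell
# Illustrative Scala statement of the kind the Scala editor runs interactively.
STATEMENT='sc.parallelize(1 to 100).sum'

# Only attempt to run it if spark-shell is actually installed on this machine.
if command -v spark-shell >/dev/null 2>&1; then
  echo "$STATEMENT" | spark-shell
fi
```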
Scheduler
- Workflow: Creates an Oozie workflow.
- Reservation: Schedules the created workflows.
HUE access
By default, HUE is installed in the Core Hadoop and Spark cluster types, and it can be accessed in the following two ways.
Connect via the console's web UI list
You can access the HUE web UI through View by application on the Cloud Hadoop console. Please refer to View by application for more information.
Connect via domain
You can access the HUE web UI via domain as follows.
- Please connect to the NAVER Cloud Platform console.
- Click Classic from the Platform menu to switch to the Classic environment.
- Click Services > Big Data & Analytics > Cloud Hadoop menus, in that order.
- Click the cluster item to view, and then check the domain address in Public domain in the displayed details page.
- Enter the domain address and port number in the web browser's address field as follows to open the HUE webpage.
http://{domain address}:8000
- Once the login page is displayed in the browser, enter the admin account and password set upon cluster creation to log in.
- Resetting the cluster admin account in the console doesn't reset the HUE password. The HUE password must be changed on the HUE webpage.
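The address entered in the steps above can be sketched as follows; the domain below is a placeholder, not a real cluster domain:

```shell
# Compose the HUE web UI address from the cluster's public domain.
# "example-cluster.example.com" is a placeholder; use the domain shown in
# the cluster's details page instead.
HUE_DOMAIN="example-cluster.example.com"
HUE_URL="http://${HUE_DOMAIN}:8000"
echo "$HUE_URL"
```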
Execute Hive query
Here's how to run a Hive query.
- Click the [Query] button, and then click Editor > Hive to open the Hive editor.
- Select a database to execute the query from the list in the editor window.
- Enter the query in the query editor window, and then click the [Run] button.
- The query results are displayed in the Results tab.
- You can check the list of queries executed in the Query history tab.
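A query of the kind entered in the steps above can also be run outside HUE with beeline. A minimal sketch, assuming a default HiveServer2 address and hypothetical table and column names:

```shell
# Example HiveQL of the kind you would paste into the Hive editor.
# "employees" and "department" are hypothetical names for illustration.
QUERY='SELECT department, COUNT(*) AS headcount FROM employees GROUP BY department'

# If beeline is available on this node, run the same query from the terminal.
# The JDBC URL assumes HiveServer2 on the default port 10000.
if command -v beeline >/dev/null 2>&1; then
  beeline -u "jdbc:hive2://localhost:10000/default" -e "$QUERY"
fi
```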
View browser
Click the menu icon on the left of the top menu bar, and then click the browser you want in the Browser area.
File browser
- View HDFS files
- The default directory address for HDFS:
hdfs://user/account name
- You can navigate to another directory by clicking a path segment in front of the account name or the root slash (/)
- [Create new]: Create a new file or directory
- [Upload]: Upload file to the current directory
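The file browser actions above have command-line equivalents on a cluster node; a sketch, assuming a placeholder account name and a local file to upload:

```shell
# HDFS default user directory; "example-user" is a placeholder account name.
HDFS_DIR="/user/example-user"

# Only run the HDFS commands where the Hadoop client is installed.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -ls "$HDFS_DIR"                 # what the file browser lists
  hdfs dfs -mkdir -p "$HDFS_DIR/reports"   # [Create new] directory
  hdfs dfs -put ./local.csv "$HDFS_DIR/"   # [Upload] a local file
fi
```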
S3 browser
- View all buckets that can be authenticated with the user's API access key
- S3's default directory address:
s3a://bucket name
- You can navigate by clicking the root slash
- [Create new]: Create a new file or directory
- [Upload]: Upload file to the current directory
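The same listing can be done from a terminal through Hadoop's S3A connector, which is what the s3a:// address above uses. A sketch with a placeholder bucket name, assuming the S3A credentials (fs.s3a.access.key / fs.s3a.secret.key) are already configured on the cluster:

```shell
# "example-bucket" is a placeholder Object Storage bucket name.
S3_DIR="s3a://example-bucket"

# List the bucket contents where the Hadoop client is installed.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -ls "$S3_DIR/"
fi
```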
Table browser
- View databases and tables created in Hive
HUE's Scala, PySpark, Spark Submit Jar, and Spark editors are only available when you select Spark as the cluster type when creating the cluster. If you want to use them in another cluster type, download and install Scala from the Scala homepage.