Using Hive


Available in VPC

This guide describes a method to save data in an Object Storage bucket and run a simple Hive query through Hue and Beeline.

Architecture example

Save data that needs to be kept in an Object Storage bucket, and use the Cloud Hadoop cluster only when necessary.

chadoop-4-5-007

When a Hive query is run, it is processed in the following steps:

  1. The Hive client submits a query to the Hive server in the Cloud Hadoop cluster.
  2. The server processes the query and requests metadata from the Hive metastore DB (MySQL) installed on the master server.
  3. The server loads the data stored in the Object Storage bucket.
  4. The Hive server returns the result to the client.

Create Hive table

Example: upload a sample data file to an Object Storage bucket of NAVER Cloud Platform, and create a Hive external table so that the data can be used in Hive.

Note

To use data in an Object Storage bucket, the following settings are required in hive-site.xml.

fs.s3a.access.key=<API-ACCESS-KEY>
fs.s3a.connection.ssl.enabled=false
fs.s3a.endpoint=http://kr.objectstorage.ncloud.com
fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
fs.s3a.secret.key=<API-SECRET-KEY>
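In hive-site.xml these settings take the standard Hadoop property form; a sketch (the placeholder values stand in for your NAVER Cloud Platform API keys):

```xml
<!-- S3A connector settings for Object Storage (sketch; replace the
     placeholder key values with your own API keys). -->
<property>
  <name>fs.s3a.access.key</name>
  <value>API-ACCESS-KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>API-SECRET-KEY</value>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>http://kr.objectstorage.ncloud.com</value>
</property>
<property>
  <name>fs.s3a.connection.ssl.enabled</name>
  <value>false</value>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
```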

To create a Hive table:

  1. Download the sample data. Unzip it and upload the AllstarFull.csv file to your Object Storage bucket.

    • Configure the bucket path by referring to the example below.
      s3a://deepdrive-hue/input/lahman2012/allstarfull/AllstarFull.csv
      
    • Since Hive reads data files by folder (the table location), saving one data set per folder is recommended.
    Note

    The provided sample data is a portion of Lahman's Baseball Database Version 2012, and all copyrights of the data belong to Sean Lahman.

    chadoop-4-5-001_ko
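If you prefer the command line, the upload in step 1 can also be done from a cluster node where the fs.s3a.* settings above are configured; a sketch using the example bucket path:

```shell
# Create the folder and upload the sample file through the S3A connector.
hadoop fs -mkdir -p s3a://deepdrive-hue/input/lahman2012/allstarfull
hadoop fs -put AllstarFull.csv s3a://deepdrive-hue/input/lahman2012/allstarfull/
```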

  2. Create Hive External Table with the following syntax in Hue Hive Editor.

    • location: specify the bucket path where the data set file is saved.
    DROP TABLE IF EXISTS allstarfull;
    
    CREATE EXTERNAL TABLE IF NOT EXISTS `allstarfull` (
            `playerID` VARCHAR(20),
            `yearID` INT,
            `gameNum` INT,
            `gameID` VARCHAR(30),
            `teamID` VARCHAR(4),
            `lgID` VARCHAR(4),
            `GP` INT,
            `startingPos` INT
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LOCATION 's3a://deepdrive-hue/input/lahman2012/allstarfull'; -- Object Storage bucket path where the AllstarFull.csv file is stored
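To confirm that the table was created and points at the bucket, you can run a quick check from a cluster node (the [HIVE-SERVER2-SERVER] placeholder is the same master node host name used in the Beeline section below):

```shell
# Print the table definition, including its LOCATION, without opening a session.
beeline -u "jdbc:hive2://[HIVE-SERVER2-SERVER]:10000" -n hive \
  -e "DESCRIBE FORMATTED allstarfull;"
```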
    

Run Hive query

You can run a Hive query using various tools provided by Cloud Hadoop. This guide describes how to run a Hive query with the following tools:

  • Hive interpreter of Hue
  • Beeline, a SQLLine-based CLI, among Hive clients

Hive interpreter of Hue

To run Hive query using Hive interpreter of Hue:

  1. Log in to Hue and run HiveQL query in Hive interpreter.

    SELECT * FROM allstarfull LIMIT 10;
    
  2. Check on the result page that the file uploaded to Object Storage is connected to the Hive table.
    chadoop-4-5-002_ko

Beeline

To run Hive query using Beeline:

  1. Directly access the host with the Hive client using SSH and start a Beeline session with the following command:

    • Enter the master node (m-00x) host name into [HIVE-SERVER2-SERVER].
    beeline -u "jdbc:hive2://[HIVE-SERVER2-SERVER]:10000" -n hive
    
    Note

    For how to access the host, see Access a cluster node with SSH.

  2. When the Beeline prompt is displayed, run a HiveQL query.

    • The following query does not print the result but saves it in Object Storage.
    beeline> INSERT OVERWRITE DIRECTORY 's3a://bucketname/output/' ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    SELECT playerid, sum(gp) from allstarfull group by playerid;
    
    Note

    Before running the query, make sure the specified location (directory) exists in the Object Storage bucket.

  3. Run the following command to check if the result was saved successfully.

    hadoop fs -cat s3a://bucketname/output/000000_0
    

    chadoop-4-5-003_ko
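If the query ran with several reducers, the output prefix will contain multiple part files (000000_0, 000001_0, and so on); they can be combined into a single local file, as a sketch:

```shell
# Merge every part file under the output prefix into one local CSV.
hadoop fs -getmerge s3a://bucketname/output/ result.csv
```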

  4. The result is as follows:

    aaronha01,24
    aasedo01,1
    abreubo01,2
    adamsac01,0
    adcocjo01,2
    ageeto01,2
    aguilri01,3
    aguirha01,1
    alexado01,0
    alfoned01,1
    allendi01,6
    allenjo02,1
    alleyge01,1
    allisbo01,2
    alomaro01,12
    ...
    

Activate LLAP in Hue

To activate LLAP in the editor list of Hue:

Note

You can select the cluster created in Cloud Hadoop console and access Ambari Web UI through [View by Application]. For more information, see Ambari UI.

  1. Access Ambari and activate Hive LLAP in [Hive] > [CONFIGS] > [SETTINGS]. Add settings, press [SAVE] to save, and press [RESTART] to restart.

chadoop-4-5-008_ko

  2. Activate the Hue Hive LLAP Module in [Hue] > [CONFIGS] > [Hue Service Module]. Add settings, press [SAVE] to save, and press [RESTART] to restart.

chadoop-4-5-009_ko

  3. You can see the activated LLAP editor in the editor list of Hue.

chadoop-4-5-010_ko