Analyze Hive warehouse data with Presto (Trino)


The latest service changes have not yet been reflected in this content. We will update the content as soon as possible. Please refer to the Korean version for information on the latest updates.

Available in VPC

This guide explains how to use Presto's Hive Connector to analyze data stored in a Hive data warehouse.

Note
  • Through Cloud Hadoop 1.9, this component was provided under the name Presto; starting with Cloud Hadoop 2.0, it is provided under the name Trino.

Architecture

Presto is mainly used for interactive queries, while Hive is used for batch jobs. You can schedule batch jobs with applications such as Oozie and Airflow.

Presto not only provides access to a variety of data sources through connectors, but also lets you query multiple data sources in a single query. When you use the Hive Connector, Presto uses only the Hive metastore and the data stored in the Hive warehouse. It does not use HiveQL or Hive's query execution engine (MapReduce).
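Because Presto addresses every table as catalog.schema.table, a single statement can combine sources. The sketch below is illustrative only: the mysql catalog and its table names are assumptions for demonstration, not part of this guide; only the hive catalog is configured here.

```python
def qualify(catalog: str, schema: str, table: str) -> str:
    """Build the fully qualified name Presto uses to locate a table."""
    return f"{catalog}.{schema}.{table}"

# Hypothetical cross-catalog join: the "mysql" catalog and its table
# are assumptions for illustration purposes only.
query = (
    f"SELECT h.playerid, m.fullname "
    f"FROM {qualify('hive', 'default', 'allstarfull')} h "
    f"JOIN {qualify('mysql', 'stats', 'players')} m "
    f"ON h.playerid = m.playerid"
)
print(query)
```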

chadoop-4-8-001_ko

Configure Presto connector

You can change the Presto connector configuration settings on the Ambari UI page.
For more information about accessing and using the Ambari UI, see the Ambari UI guide.

To change the Presto connector configuration, follow these steps:

  1. After you access the Ambari UI, click Presto > [CONFIGS] > Advanced trino.connectors.properties.
  2. On the configuration settings page, enter the connector in connectors.to.add if you want to add a connector or in connectors.to.delete if you want to delete it, then click the [Save] button.
    chadoop-4-8-002_ko
  • Presto requires a config file {connector-name}.properties under /etc/presto/catalog to use each connector object. Therefore, to integrate multiple Hive clusters with Presto, you need to configure the config file for each cluster.

To create connector-name-1.properties and connector-name-2.properties files, configure connectors.to.add as follows:

{"connector-name-1": ["key1=value1",
                      "key2=value2",
                      "key3=value3"],
 "connector-name-2": ["key1=value1"]
}
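Each entry in the mapping above becomes one properties file under /etc/presto/catalog. A minimal Python sketch of that translation, assuming only the JSON shape shown (the actual Ambari-side implementation is not part of this guide):

```python
import json

def render_properties(connectors_to_add: str) -> dict:
    """Return {file name: file body} for each connector in the JSON mapping."""
    mapping = json.loads(connectors_to_add)
    return {
        f"{name}.properties": "\n".join(props) + "\n"
        for name, props in mapping.items()
    }

example = """{"connector-name-1": ["key1=value1", "key2=value2", "key3=value3"],
              "connector-name-2": ["key1=value1"]}"""
for filename, body in render_properties(example).items():
    print(f"--- /etc/presto/catalog/{filename} ---")
    print(body)
```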

In this guide's example, you need to add the Hive Connector, so enter the following in connectors.to.add:

  • Enter the Private IP of the master node in <METASTORE-HOST-IP>.
{"hive": ["connector.name=hive-hadoop2",
          "hive.metastore.uri=thrift://<METASTORE-HOST-IP>:9083",
          "hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml",
          "hive.s3.use-instance-credentials=false",
          "hive.s3.aws-access-key=<API-ACCESS-KEY>",
          "hive.s3.aws-secret-key=<API-SECRET-KEY>",
          "hive.s3.endpoint=https://kr.object.ncloudstorage.com"]
}
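If you prefer to assemble the connectors.to.add value programmatically, for example to substitute the metastore IP, a small sketch is shown below. The placeholder arguments mirror the example above and must be replaced with your own values.

```python
import json

def hive_connector_entry(metastore_ip: str, access_key: str, secret_key: str) -> str:
    """Build the connectors.to.add JSON for this guide's Hive connector."""
    props = [
        "connector.name=hive-hadoop2",
        f"hive.metastore.uri=thrift://{metastore_ip}:9083",
        "hive.config.resources=/etc/hadoop/conf/core-site.xml,"
        "/etc/hadoop/conf/hdfs-site.xml",
        "hive.s3.use-instance-credentials=false",
        f"hive.s3.aws-access-key={access_key}",
        f"hive.s3.aws-secret-key={secret_key}",
        "hive.s3.endpoint=https://kr.object.ncloudstorage.com",
    ]
    return json.dumps({"hive": props})

# Placeholder arguments; use your master node's private IP and API keys.
entry = hive_connector_entry("<METASTORE-HOST-IP>", "<API-ACCESS-KEY>", "<API-SECRET-KEY>")
print(entry)
```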
  3. Click [ACTIONS] > Restart All. Click the [CONFIRM RESTART ALL] button in the popup window to restart the service and apply the new configuration.
    chadoop-4-8-002-1_ko
Note

Hadoop configuration files (/etc/hadoop/conf/core-site.xml and /etc/hadoop/conf/hdfs-site.xml) must exist on the node running Presto.

Note

For more information on Presto configuration, see the Presto documentation.

Run Hive table queries

In this guide, queries are run against the allstarfull table created in the Using Hive guide.

To run the Hive table queries, follow these steps:

  1. After you access the node where the Presto CLI is installed, start the CLI with the following command:
    • Enter the Private IP of the edge node in <COORDINATOR-HOST-IP>.
/home1/cdp/usr/nch/3.1.0.0-78/trino/bin/trino-cli --server <COORDINATOR-HOST-IP>:8285 --catalog hive --schema default
  2. Run a query on the Hive database tables and check the result:
presto:default> describe allstarfull;

chadoop-4-8-003_ko

presto:default> SELECT playerid, sum(gp) from allstarfull group by playerid;

chadoop-4-8-004_ko
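The GROUP BY query above sums games played (gp) per player. What it computes can be sketched in plain Python over hypothetical sample rows; the player IDs and values below are made up for illustration, and real results come from the allstarfull table.

```python
from collections import defaultdict

# Hypothetical (playerid, gp) rows standing in for the allstarfull table.
rows = [
    ("aaronha01", 1),
    ("aaronha01", 1),
    ("musiast01", 1),
]

# Equivalent of: SELECT playerid, sum(gp) FROM allstarfull GROUP BY playerid
totals = defaultdict(int)
for playerid, gp in rows:
    totals[playerid] += gp

print(dict(totals))  # → {'aaronha01': 2, 'musiast01': 1}
```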