Analyzing Hive Warehouse data using Presto(Trino)
Available in VPC
This guide describes how to analyze data stored in a Hive data warehouse using Presto's Hive Connector.
- Through Cloud Hadoop 1.9, this engine was provided under the name Presto; from Cloud Hadoop 2.0, it is provided under the name Trino.
Architecture
Presto is mainly used for interactive queries, while Hive is used for batch jobs. You can schedule batch jobs with applications such as Oozie or Airflow.
Presto not only provides access to a variety of data sources through connectors, but also lets you query multiple data sources in a single query. When you use the Hive Connector, Presto uses only the Hive metastore metadata and the data stored in the Hive warehouse; it does not use HiveQL or Hive's query execution engine (MapReduce).
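Because each connector appears as a catalog in Presto, a single query can combine tables from different sources. A hedged sketch of such a query (the mysql catalog and its players.profiles table are hypothetical and not part of this guide's setup; hive.default.allstarfull matches the example table used later in this guide):

```sql
-- Join a Hive table with a table from another (hypothetical) catalog
SELECT h.playerid, m.full_name
FROM hive.default.allstarfull AS h
JOIN mysql.players.profiles AS m
  ON h.playerid = m.playerid;
```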
Configure Presto connector
You can change configuration settings related to the Presto connector on the Ambari UI page.
For more information about accessing and using Ambari UI, see the Ambari UI guide.
The following describes how to change the Presto connector configuration.
- After you access the Ambari UI, click Presto > [CONFIGS] > Advanced connectors.properties.
- On the configuration settings page, enter the connector in connectors.to.add if you want to add a connector, or in connectors.to.delete if you want to delete one, and then click the [Save] button.
- Presto requires a config file {connector-name}.properties under /etc/presto/catalog to use each connector object. Therefore, to integrate multiple Hive clusters with Presto, you need to configure a config file for each cluster.
To create connector-name-1.properties and connector-name-2.properties files, configure connectors.to.add as follows:
{
  "connector-name-1": ["key1=value1", "key2=value2", "key3=value3"],
  "connector-name-2": ["key1=value1"]
}
In this guide's example, you need to add the Hive Connector, so enter the following in connectors.to.add:
- Enter the Private IP of the master node in <METASTORE-HOST-IP>.
{
  "hive": [
    "connector.name=hive-hadoop2",
    "hive.metastore.uri=thrift://<METASTORE-HOST-IP>:9083",
    "hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml",
    "hive.s3.use-instance-credentials=false",
    "hive.s3.aws-access-key=<API-ACCESS-KEY>",
    "hive.s3.aws-secret-key=<API-SECRET-KEY>",
    "hive.s3.endpoint=https://kr.object.ncloudstorage.com"
  ]
}
- Click [ACTIONS] > Restart All, then click the [CONFIRM RESTART ALL] button in the popup window to restart the service and apply the new configuration.
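The value entered in connectors.to.add must be valid JSON that maps each connector name to a list of key=value strings. A minimal Python sketch of that shape, using the placeholder names from this guide, which can be handy for validating the JSON before pasting it into Ambari:

```python
import json

# Placeholder connector names and properties from this guide;
# replace them with your real connector settings.
connectors = {
    "connector-name-1": ["key1=value1", "key2=value2", "key3=value3"],
    "connector-name-2": ["key1=value1"],
}

# connectors.to.add expects this mapping serialized as a JSON string.
connectors_to_add = json.dumps(connectors)
print(connectors_to_add)

# Round-trip to confirm the string is valid JSON.
assert json.loads(connectors_to_add) == connectors
```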
Hadoop configuration files (/etc/hadoop/conf/core-site.xml and /etc/hadoop/conf/hdfs-site.xml) must exist on the node running Presto.
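Given the mapping described above, saving the hive entry should produce a catalog file such as /etc/presto/catalog/hive.properties with one key=value pair per line. A sketch of the expected contents (Ambari generates the actual file):

```properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://<METASTORE-HOST-IP>:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
hive.s3.use-instance-credentials=false
hive.s3.aws-access-key=<API-ACCESS-KEY>
hive.s3.aws-secret-key=<API-SECRET-KEY>
hive.s3.endpoint=https://kr.object.ncloudstorage.com
```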
For more information on Presto configuration, see Presto Documentation.
Run Hive table queries
In this guide, queries are run on the allstarfull table created in the Using Hive guide.
The following describes how to run the Hive table queries.
- After you access the node with the Presto CLI components installed, run the CLI with the following command.
- Enter the Private IP of the edge node in <COORDINATOR-HOST-IP>.
/usr/lib/presto/bin/presto-cli --server <COORDINATOR-HOST-IP>:8285 --catalog hive --schema default
- Execute queries on tables in the Hive database and check the results as follows:
presto:default> describe allstarfull;
presto:default> SELECT playerid, sum(gp) from allstarfull group by playerid;
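The aggregate above can also be given an alias and sorted using standard SQL that Presto supports, for example:

```sql
presto:default> SELECT playerid, sum(gp) AS total_gp FROM allstarfull GROUP BY playerid ORDER BY total_gp DESC LIMIT 10;
```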