Data Flow scenarios
Available in VPC
NAVER Cloud Platform Data Flow lets you conveniently configure and schedule data pipelines. You can learn the details in Getting started with Data Flow and Using Data Flow, but we recommend reading this Data Flow scenario from beginning to end first. Once you have seen these usage scenarios and how Data Flow works, you can make better use of it. The entire process and its step-by-step descriptions are as follows:
1. Set usage permissions
2. Pre-set the environment
3. Data Flow subscription
4. Create job
5. Create trigger
6. Create workflow
7. Job monitoring
1. Set usage permissions
If multiple users need to manage and share Data Flow, you can set permissions for each user. Through NAVER Cloud Platform's Sub Account, each user can be given administrator or user permissions, and roles can be defined for each permission. Setting usage permissions is not mandatory, so you can configure or remove them at any time while using Data Flow.
Sub Account is a service provided free of charge upon subscription. For more information on Sub Account and its pricing plans, see Services > Management & Governance > Sub Account in the NAVER Cloud Platform portal.
You can refer to the following user guide:
2. Pre-set the environment
To use Data Flow, it must be integrated with data sources, which are used to extract source data and store converted data.
Data Flow supports integration with NAVER Cloud Platform's Data Catalog and Object Storage, and these two services serve as the source data node and target data node.
Therefore, before using Data Flow, you must subscribe to Data Catalog and Object Storage.
If you have not subscribed to Data Catalog, you will be guided through the Data Catalog subscription first when subscribing to Data Flow. You can refer to the following user guide for the Data Catalog subscription:
If you have not subscribed to Object Storage, you will be guided through the Object Storage subscription first when subscribing to Data Catalog. You can refer to the following user guide for the Object Storage subscription:
3. Data Flow subscription
Request a subscription to Data Flow. You can refer to the following user guide:
4. Create job
Create a job, which is a building block of a data management workflow. A job is a file that defines which source data to read, which conversions to apply, and where to save the result. You can refer to the following user guide:
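Conceptually, a job ties together a source, a list of conversion steps, and a target. The sketch below illustrates that shape only; the class and field names (`JobSpec`, `source`, `transforms`, `target`) and the URI-style strings are hypothetical and do not reflect Data Flow's actual job file format, which you define in the console.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and URI formats are hypothetical,
# not Data Flow's actual job definition format.
@dataclass
class JobSpec:
    name: str
    source: str        # where to read source data from (e.g. a Data Catalog table)
    transforms: list   # ordered conversion steps to apply
    target: str        # where to store the converted data (e.g. an Object Storage path)

job = JobSpec(
    name="daily-sales-clean",
    source="catalog://sales_raw",
    transforms=["drop_nulls", "rename_columns"],
    target="objectstorage://my-bucket/sales_clean/",
)
print(job.source, "->", job.target)
```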
5. Create trigger
Create a trigger, which is a building block of a data management workflow. A trigger is a file that reserves a schedule for running jobs. You can refer to the following user guide:
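The idea of a trigger can be sketched as a job name paired with a schedule that determines the next run time. This is a minimal illustration of the concept; the `Trigger` class and its hourly-interval schedule are assumptions for the sketch, not Data Flow's actual trigger format or scheduling options.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: a trigger reserves a schedule for a job.
# The structure below is illustrative, not Data Flow's trigger format.
class Trigger:
    def __init__(self, job_name: str, interval_hours: int):
        self.job_name = job_name
        self.interval = timedelta(hours=interval_hours)

    def next_run(self, last_run: datetime) -> datetime:
        # The next reserved execution is one interval after the last run.
        return last_run + self.interval

trigger = Trigger("daily-sales-clean", interval_hours=24)
nxt = trigger.next_run(datetime(2024, 1, 1, 2, 0))
print(nxt)  # 2024-01-02 02:00:00
```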
6. Create workflow
Configure a data management workflow. Combine the previously created job and trigger to build a data pipeline. You can refer to the following user guide:
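Combining a trigger with jobs into a pipeline can be sketched as follows. The `Workflow` class, the trigger reference by name, and the sequential execution model are all assumptions made for illustration; Data Flow workflows are configured in the console and may differ.

```python
# Hypothetical sketch: a workflow links a trigger to an ordered list of jobs,
# which together form the data pipeline. Not Data Flow's actual model.
class Workflow:
    def __init__(self, trigger_name: str, jobs: list):
        self.trigger_name = trigger_name
        self.jobs = jobs  # jobs run in order when the trigger fires

    def run(self) -> list:
        # Simulate running each job in pipeline order.
        return [f"ran {job}" for job in self.jobs]

wf = Workflow("daily-0200", ["extract-sales", "clean-sales", "load-sales"])
print(wf.run())
```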
7. Job monitoring
Each time a job or a workflow created through the process above is executed, an execution history is recorded. To view statistics such as job success rate, job duration, and number of jobs, use the Data Flow dashboard. You can refer to the following user guide:
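To make the dashboard statistics concrete, the snippet below computes the three figures the text mentions (success rate, average job duration, number of jobs) from a made-up execution history. The record fields (`status`, `seconds`) and values are invented for illustration; they are not the actual shape of Data Flow's execution history.

```python
# Made-up execution history for illustration only; the field names are
# hypothetical, not Data Flow's actual history format.
history = [
    {"job": "clean-sales", "status": "SUCCESS", "seconds": 42},
    {"job": "clean-sales", "status": "FAILED",  "seconds": 5},
    {"job": "clean-sales", "status": "SUCCESS", "seconds": 40},
]

total = len(history)                                            # number of jobs
successes = sum(1 for run in history if run["status"] == "SUCCESS")
success_rate = successes / total                                # job success rate
avg_duration = sum(run["seconds"] for run in history) / total   # average duration

print(total, round(success_rate, 2), round(avg_duration, 1))  # 3 0.67 29.0
```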