Using Data Forest CLI
Available in VPC
Data Forest provides a Command Line Interface (CLI). With the Data Forest CLI, users can run the deep learning solutions and programs they want, with GPUs assigned dynamically. This guide explains how to use the Data Forest CLI based on a usage scenario.
Preparations
You must create a Data Forest account and notebook, and configure the cluster environment.
- Connect to the console in NAVER Cloud Platform.
- Click the [Create Account] button from Services > Big Data & Analytics > Data Forest > Accounts menu.
- Create an account that will submit a Data Forest app.
- Click the Services > Big Data & Analytics > Data Forest > Notebooks > [Create notebook] button and create a notebook.
- Configure the environment so that access to a Data Forest cluster is enabled.
- Refer to Getting Data Forest started to learn how to configure the environment.
Using Data Forest CLI
Step 1. Check notebook information
The following describes how to check the notebook details.
From the NAVER Cloud Platform console, click the Services > Big Data & Analytics > Data Forest menu, in that order.
Click the Notebooks menu on the left. You can see the list of the notebooks you've created.
Name | Description |
---|---|
Notebook name | Name of the created notebook |
Account | Account that owns the notebook |
Notebook type | Type of the created notebook; only the JupyterLab type is provided as of now |
Condition | Notebook node's condition |
Server specifications | Server specifications of the notebook node |
VPC | VPC in which the notebook is created |
Subnet | Subnet applied to the notebook node |
Creation time | Date and time when the notebook was created |

Click the [button] at the end of the notebook list to see the notebook details.
Item | Description |
---|---|
Account name | Account that owns the notebook |
Notebook type | Type of the created notebook; only the JupyterLab type is provided as of now |
Notebook name | Name of the created notebook |
Notebook ID | Unique notebook ID |
Server specifications | Server specifications of the notebook node |
VPC | VPC in which the notebook is created |
Subnet | Subnet applied to the notebook node |
ACG | ACG applied to the notebook node |
Additional storage | Additional storage information |
Domain | Domain assigned to the public IP |
Authentication key name | Name of the authentication key applied to the notebook |
SSH access account | OS account name for directly accessing the notebook node via SSH |
Set user | Information on the user settings applied to the notebook |
Bucket | Object Storage bucket information |
Access notebook web page
The following describes how to access a notebook's web page:
- Before proceeding, ensure that JupyterLab's port 80 is added to the ACG of the notebook.
- From the Notebooks menu, click Go to domain from the notebook's details screen.
- If the notebook nodes are created within a Public Subnet, web pages can be accessed directly using the public IP without requiring additional tunneling settings.
- Once the login screen of the JupyterLab web page appears, enter your password to log in.
- The password is the one set in the Access Password field on the user settings screen during the notebook creation process.
- If you forgot your password or need to change your password, click the user settings [details/reset] button on the details screen of the notebook to reset your password.
Step 2. Download Data Forest CLI
Access the terminal from the notebook.
Download the Data Forest CLI executable file.
$ wget http://dist.kr.df.naverncp.com/repos/df/notebook/static/dfctl
$ chmod +x dfctl
Step 3. Proceed with authentication and access approval
- Request a login with the notebook ID, which you can find on the notebook details page.
$ ./dfctl login -i {notebook ID}
- Check the authentication request information.
Item | Description |
---|---|
deviceCode | Temporary code for the CLI currently being approved |
userCode | Code to identify the requesting user |
verificationUri | Temporary URL for proceeding with authentication |
- Access the verificationUri, check the deviceCode printed by the CLI, and enter the userCode.
- Proceed with the NCP SSO authentication on that page.
- The login step may be skipped if recent access history exists in the browser cookies.
- Click [Approve authentication] on the device access approval screen.
- Once access is approved, you can see "access allowed" printed in the CLI.
Data Forest CLI
The commands available in Data Forest CLI are as follows:
Usage:
$ ./dfctl [Command] [SubCommand]...
Commands:
hdfs
Data Forest CLI: hdfs
chmod
Usage:
$ ./dfctl hdfs chmod [FLAGS] FILE
Flags:
-R, --recursive
: applies to the directory and all of its subdirectories.
Output:
- Changes the access permissions of a file or directory.
<example>
[forest@2e11777d04cf ~][edge-df_beta]$ ./dfctl hdfs chmod 777 /user/test-df/dst_file
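The octal mode argument (such as `777` above) appears to follow the standard POSIX permission notation. As a sketch, the snippet below demonstrates how an octal mode maps to owner/group/other permissions using the local `chmod` and GNU `stat` (not `dfctl` itself); the assumption is that `dfctl hdfs chmod` interprets the mode the same way.

```shell
# Illustration of octal permission modes with the standard POSIX chmod
# (NOT dfctl). 640 = owner: rw-, group: r--, others: ---.
tmpfile=$(mktemp)
chmod 640 "$tmpfile"
mode=$(stat -c '%a' "$tmpfile")   # GNU stat: print the octal mode
echo "$mode"
rm -f "$tmpfile"
```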
chown
Usage:
$ ./dfctl hdfs chown [FLAGS] FILE
Flags:
-r, --recursive
: applies to the directory and all of its subdirectories.
Output:
- Changes the owner of a file or directory.
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs chown -r forest /user/test-df
get
Usage:
$ ./dfctl hdfs get [REMOTE_FILE] [LOCAL_FILE]
Flags:
- N/A
Output:
- Copies a file from the remote (HDFS) to the local file system.
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs get /test/df /test/local
ls
Usage:
$ ./dfctl hdfs ls [FLAGS] [FILE]
Flags:
-a, --all
: shows hidden files and directories.
-H, --human-readable
: prints file sizes in human-readable units (K, M, G).
-l, --long
: prints detailed information.
-R, --recursive
: applies to the directory and all of its subdirectories.
Output:
- Lists the files in the given directory.
<example>
[forest@2e11777d04cf ~][edge-df_beta]$ ./dfctl hdfs ls
default user dir: /user/test-df
dst_file
src_file
test
mkdir
Usage:
$ ./dfctl hdfs mkdir [FLAGS] DIR
Flags:
-p, --parents
: creates parent directories as needed.
Output:
- Creates a new directory.
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs mkdir /user/test-df/test
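The `-p, --parents` flag appears to mirror the behavior of the standard `mkdir -p`. The sketch below demonstrates that behavior with the local `mkdir` (not `dfctl`): intermediate directories are created in a single call, under the assumption that `dfctl hdfs mkdir -p` works the same way.

```shell
# Illustration of -p/--parents with the standard POSIX mkdir (NOT dfctl).
workdir=$(mktemp -d)
mkdir -p "$workdir/a/b/c"   # creates a, a/b, and a/b/c in one call
created=$([ -d "$workdir/a/b/c" ] && echo yes || echo no)
echo "$created"
rm -rf "$workdir"
```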
mv
Usage:
$ ./dfctl hdfs mv [FLAGS] [SRC_FILE] [DST_FILE]
Flags:
-n, --no-clobber
: does not overwrite an existing file at the destination.
-T, --no-target-directory
: treats the destination as a normal file rather than a directory.
Output:
- Moves a file or directory to another location, or renames it.
<example>
[forest@2e11777d04cf ~][edge-df_beta]$ ./dfctl hdfs mv /user/test-df/src_file /user/test-df/dst_file2
[forest@2e11777d04cf ~][edge-df_beta]$ ./dfctl hdfs ls /user/test-df
dst_file2
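The `-n, --no-clobber` flag appears to match the semantics of GNU `mv -n`. The sketch below demonstrates those semantics with the local `mv` (not `dfctl`): when the destination already exists, it is left untouched. The assumption is that `dfctl hdfs mv -n` behaves the same way.

```shell
# Illustration of -n/--no-clobber with GNU mv (NOT dfctl).
workdir=$(mktemp -d)
echo "old" > "$workdir/dst"
echo "new" > "$workdir/src"
mv -n "$workdir/src" "$workdir/dst"   # dst exists, so it is not overwritten
kept=$(cat "$workdir/dst")
echo "$kept"
rm -rf "$workdir"
```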
put
Usage:
$ ./dfctl hdfs put [LOCAL_FILE] [REMOTE_FILE]
Flags:
- N/A
Output:
- Copies a file from the local file system to the remote (HDFS).
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs put /test/local /test/df
rm
Usage:
$ ./dfctl hdfs rm [FLAGS] FILE
Flags:
-R, --recursive
: applies to the directory and all of its subdirectories.
-f, --force
: does not report an error if the file does not exist.
Output:
- Deletes the file.
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs rm /user/test-df/a
touch
Usage:
$ ./dfctl hdfs touch [FLAGS] FILE
Flags:
-c, --no-create
: updates the file's timestamp without creating the file if it does not exist.
Output:
- Creates an empty file, or updates the timestamp of an existing file.
<example>
[forest@2e11000d04cf ~][edge-df_beta]$ ./dfctl hdfs touch /user/test-df/a
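The `-c, --no-create` flag appears to follow the standard `touch -c` semantics. The sketch below demonstrates the difference with the local `touch` (not `dfctl`): with `-c`, a missing file is not created; without it, an empty file is. The assumption is that `dfctl hdfs touch -c` works the same way.

```shell
# Illustration of -c/--no-create with the standard POSIX touch (NOT dfctl).
workdir=$(mktemp -d)
touch -c "$workdir/absent"    # file does not exist; nothing is created
touch "$workdir/present"      # without -c, an empty file is created
missing=$([ -e "$workdir/absent" ] && echo yes || echo no)
made=$([ -e "$workdir/present" ] && echo yes || echo no)
echo "$missing $made"
rm -rf "$workdir"
```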