Integration with the Cloud Data Streaming Service using Logstash

    Available in VPC

    This guide describes how to use Logstash to send data from the Cloud Data Streaming Service to the Search Engine Service and view it there.

    Preparations

    Before following this guide, you need to complete the following tasks:

    • Create VPC and server
    • Create the Cloud Data Streaming Service cluster
    • Create the Search Engine Service cluster

    The example shows how to run Logstash on the server and then send Kafka data to the Search Engine Service.

    Set network

    This is an example of a network setup.

    STEP 1. Set ACG

    The following describes how to set ACG so that access to port 9092 of the Cloud Data Streaming Service broker node is allowed.

    1. From the NAVER Cloud Platform console, click the Services > Compute > Server > ACG menus, in that order.
    2. Select 'cdss-b-xxxxx' from the ACG list, and then click the [Set ACG] button.
    3. Enter the ACG rule, and then click the [Add] button.
      • Protocol: TCP
      • Access source: IP of the server on which Logstash will run
      • Allowed port: 9092
    4. Click the [Apply] button.

    The following describes how to set ACG so that access to port 9200 of the Search Engine Service manager node is allowed.

    1. From the NAVER Cloud Platform console, click the Services > Compute > Server > ACG menus, in that order.
    2. Select 'searchengine-m-xxxxx' from the ACG list, and then click the [Set ACG] button.
    3. Enter the ACG rule, and then click the [Add] button.
      • Protocol: TCP
      • Access source: IP of the server on which Logstash will run
      • Allowed port: 9200
    4. Click the [Apply] button.
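Once both ACG rules are applied, you can optionally verify connectivity from the server that will run Logstash before installing anything. This is a quick sketch assuming the `nc` utility is available; the IPs below are illustrative placeholders and must be replaced with your actual broker node and manager node IPs:

```shell
# Replace the IPs below with your broker node and manager node IPs.
nc -zv -w 3 172.16.19.6 9092    # Cloud Data Streaming Service broker node (Kafka)
nc -zv -w 3 172.16.19.10 9200   # Search Engine Service manager node
```

If the ACG rules are set correctly, each command reports that the connection succeeded.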

    Install Logstash

    This is an example of installing Logstash on a server. Installation steps are provided for both the Elasticsearch and OpenSearch versions; follow the steps matching the version you are using.

    STEP 1. Install Java

    1. Enter the following command to install Java.
    yum install java-devel -y
    

    STEP 2. Install Logstash

    The following describes how to install Logstash.

    1. Enter the following command to download Logstash to the /root path.
    # When using Elasticsearch version (Install OSS version)
    wget https://artifacts.elastic.co/downloads/logstash/logstash-oss-7.7.0.rpm
    
    # When using OpenSearch version
    wget https://artifacts.opensearch.org/logstash/logstash-oss-with-opensearch-output-plugin-7.16.3-linux-x64.tar.gz
    
    2. Install the downloaded file by entering the following command:
    # When using Elasticsearch version
    rpm -ivh logstash-oss-7.7.0.rpm
    
    # When using OpenSearch version
    tar -zxvf logstash-oss-with-opensearch-output-plugin-7.16.3-linux-x64.tar.gz
    
    3. Enter the following command to modify the logstash.conf file before starting Logstash.
    • When using the Elasticsearch version
    mv /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash.conf
    vi /etc/logstash/conf.d/logstash.conf
    
    • Elasticsearch version logstash.conf
    input {
     kafka {
      bootstrap_servers => "${bootstrap_servers}"
      topics => "cdss_topic"
     }
    }
    
    output {
      elasticsearch {
        hosts => ["http://${ses manager node1 ip}:9200", "http://${ses manager node2 ip}:9200"]
        index => "cdss-%{+YYYY.MM.dd}"
      }
    }
    
    • When using the OpenSearch version
    # When installing in the /root/ path, the {installation path} is /root/logstash-7.16.3. 
    mv {Installation path}/config/logstash-sample.conf {Installation path}/config/logstash.conf
    vi {Installation path}/config/logstash.conf
    
    • OpenSearch version logstash.conf
    input {
     kafka {
      bootstrap_servers => "${bootstrap_servers}"
      topics => "cdss_topic"
     }
    }
    
    output {
      opensearch {
        hosts => ["https://${ses manager node1 ip}:9200", "https://${ses manager node2 ip}:9200"]
        index => "cdss-%{+YYYY.MM.dd}"
        user => "${userID}"
        password => "${password}"
        ssl_certificate_verification => false
      }
    }
    
    • logstash.conf parameter descriptions
      • ${bootstrap_servers}: the Cloud Data Streaming Service broker nodes' IP:Kafka port pairs. e.g., 172.16.19.6:9092,172.16.19.7:9092,172.16.19.8:9092
      • ${ses manager node1 ip}: the Search Engine Service manager node's IP address.
      • ${ses manager node2 ip}: the second Search Engine Service manager node's IP address (omit if the manager node is not redundant).
      • ${userID}: for OpenSearch, the ID you entered when creating the cluster.
      • ${password}: for OpenSearch, the password you entered when creating the cluster.
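The index pattern cdss-%{+YYYY.MM.dd} in the output block means Logstash creates one index per day, named from the event's @timestamp. The index name produced for today can be previewed with `date` (a local illustration only; Logstash itself does the formatting):

```shell
# Same date pattern as cdss-%{+YYYY.MM.dd} in logstash.conf (@timestamp is UTC)
date -u +"cdss-%Y.%m.%d"
```

For example, an event indexed on 2022-08-08 lands in the index cdss-2022.08.08, which is the index queried later in this guide.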
    
    

    Run Logstash

    An example of running Logstash is as follows:

    # When using Elasticsearch version
    systemctl start logstash
    
    # When using OpenSearch version
    # Use nohup to run in the background.
    # Specify the path for logstash.conf using the -f option
    nohup {installation path}/bin/logstash -f {installation path}/config/logstash.conf &
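To check that Logstash actually started, you can inspect the service status or the background process. This is a sketch assuming the default rpm service name for the Elasticsearch version, and that nohup was run from the current directory for the OpenSearch version:

```shell
# Elasticsearch version: check the systemd service
systemctl status logstash --no-pager

# OpenSearch version: check the background process and its output
ps -ef | grep -v grep | grep logstash
tail -n 20 nohup.out
```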
    

    Build CDSS integration environment

    The following example installs the Kafka binary on the server and produces test messages to the cdss_topic topic:

    • Install Java
    yum install java-devel -y
    
    • Install the Kafka binary
    wget https://archive.apache.org/dist/kafka/2.4.0/kafka_2.12-2.4.0.tgz
    tar -zxvf kafka_2.12-2.4.0.tgz 
    
    • Produce
    ./kafka_2.12-2.4.0/bin/kafka-console-producer.sh --broker-list ${broker list} --topic cdss_topic
    >this is my first message
    >this is my second message
    
    # ${broker list} - Enter the broker nodes' IP:Kafka port pairs. e.g., 172.16.19.6:9092,172.16.19.7:9092,172.16.19.8:9092
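To confirm the messages reached the topic independently of Logstash, the same Kafka binary also includes a console consumer. This is an optional verification step, not part of the original pipeline; the same ${broker list} placeholder applies:

```shell
# Read the topic from the beginning to see the produced test messages
./kafka_2.12-2.4.0/bin/kafka-console-consumer.sh --bootstrap-server ${broker list} --topic cdss_topic --from-beginning
```

If production succeeded, both test messages are printed.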
    

    View Cloud Data Streaming Service data

    The following is an example of viewing the Cloud Data Streaming Service data in the Search Engine Service:

    GET cdss-2022.08.08/_search
    
    {
      "took" : 3,
      "timed_out" : false,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      },
      "hits" : {
        "total" : {
          "value" : 2,
          "relation" : "eq"
        },
        "max_score" : 1.0,
        "hits" : [
          {
            "_index" : "cdss-2022.08.08",
            "_type" : "_doc",
            "_id" : "VtmKe4IBicE7MyrTaKJ5",
            "_score" : 1.0,
            "_source" : {
              "@version" : "1",
              "@timestamp" : "2022-08-08T03:40:44.335Z",
              "message" : "this is my first message"
            }
          },
          {
            "_index" : "cdss-2022.08.08",
            "_type" : "_doc",
            "_id" : "V9mKe4IBicE7MyrTg6IW",
            "_score" : 1.0,
            "_source" : {
              "@version" : "1",
              "@timestamp" : "2022-08-08T03:40:51.248Z",
              "message" : "this is my second message"
            }
          }
        ]
      }
    }
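The GET request above is written for a dev tools console; the same query can also be issued from the Logstash server with curl. This is a sketch using the placeholders defined earlier; the -u and -k options apply only to the OpenSearch version, which uses authentication and a self-signed certificate:

```shell
# Elasticsearch version
curl "http://${ses manager node1 ip}:9200/cdss-2022.08.08/_search?pretty"

# OpenSearch version (authenticated, self-signed certificate)
curl -k -u "${userID}:${password}" "https://${ses manager node1 ip}:9200/cdss-2022.08.08/_search?pretty"
```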
    

    Use Kafka SSL

    If the Cloud Data Streaming Service uses SSL, you can set it up by adding a certificate.

    • INPUT

      input {
        kafka {
          bootstrap_servers => "${BrokerNode-HostName}"
          topics => "test"
          ssl_truststore_location => "/etc/logstash/conf.d/kafka.client.truststore.jks"
          ssl_truststore_password => "${password}"
          security_protocol => "SSL"
        }  
      }
      
      • ${BrokerNode-HostName}: you can check this under Cluster list > Broker node information > View details > TLS. e.g., "networktest12-d-251:9093,networktest12-d-252:9093,networktest12-d-253:9093"
      • ${password}: the separate certificate password
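If the SSL connection fails, a common cause is a wrong truststore path or password. keytool, which ships with the Java package installed earlier, can list the certificates in the truststore as a quick diagnostic (the path and ${password} below follow the INPUT example above):

```shell
# List the certificates stored in the Kafka client truststore
keytool -list -keystore /etc/logstash/conf.d/kafka.client.truststore.jks -storepass ${password}
```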
