Dashboard


Available in Classic

In the Dashboard menu, you can view the detailed status (Health, Status) of the clusters created with the Search Engine Service through each cluster's detailed dashboard, along with information such as their indices, documents, and storage usage. Depending on the Cluster Health, you can use Kibana to identify the cause of a cluster's problems.

Dashboard page

The basic elements of the Search Engine Service Dashboard menu are described below:

① Menu name: Name of the menu currently being viewed
② Basic features: Create a cluster (refer to Create cluster), view Search Engine Service details, and refresh the page
③ List: List of the clusters you own
  • [Shortcut] button: Click to access Kibana installed in the cluster (refer to Access Kibana)
  • Detailed cluster dashboard

    Click a cluster in the cluster list to open that cluster's detailed dashboard. The fields of the detailed dashboard are described below.

    ① List of nodes: List of nodes in the selected cluster
    ② Cluster information
      • Cluster Health: cluster status (refer to Cluster status below for a description of each status)
      • Number of Indices: number of indices in the cluster
      • Number of Shards: number of shards in the cluster
      • Number of Document: number of documents in the cluster
      • Number of Segments: number of segments in the cluster
      • Disk Used: disk usage
    ③ Index information
      • Health: index health, displayed as a color (green, yellow, red)
      • Status: index status (whether the index is open or closed)
      • Index: index name
      • Primary: number of primary shards for the index
      • Replica: number of replica shards for the index
      • Document: number of documents in the index
      • Deleted Document: number of deleted documents in the index
      • Store size: data size of the index
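
    The cluster-level and index-level figures shown in ② and ③ can also be queried directly from the Kibana Console. A minimal sketch, assuming the cluster exposes the standard _cluster/stats and _cat/indices APIs used elsewhere in this guide (the column list after h= is optional and selects only the fields shown in the dashboard):

      GET _cluster/stats
      GET _cat/indices?v&h=health,status,index,pri,rep,docs.count,docs.deleted,store.size
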
    Cluster status

    The Search Engine Service expresses Cluster Health with four statuses: Running, Warning, Error, and Unknown. For each status, see the description below and how to resolve the related problem.

    Note

    Use Kibana's Console to identify and resolve the cause of cluster problems. For more information on using Kibana, see Use Kibana.
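
    For example, you can check the overall cluster health from the Console with the standard _cluster/health API (a minimal example; the status field in the response is green, yellow, or red, which broadly correspond to the Running, Warning, and Error statuses described below):

      GET _cluster/health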

    Running

    The cluster is functioning normally without any problems.

    Warning

    The Primary Shard is correctly allocated to the data node, but the Replica Shard is not allocated correctly.

    • In most cases, the Replica Shard is assigned to the data node after a certain period, and the cluster status changes to Running.
    • If the status of the cluster does not change to Running after some time, check the following:
      • You can run the following commands to find the indices in the yellow status, the unassigned shards, and the reason why they are unassigned (an example of checking a specific shard is shown after this list).
        GET _cat/indices
        GET _cat/shards
        GET _cluster/allocation/explain
        
      • If the disk usage of all data nodes is 85% or above, then shards can't be allocated.
        • You can check the disk usage of a node by running the following command:
          GET _cat/allocation?v
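
      To check why a specific shard remains unassigned, the allocation explain API also accepts a request body. A minimal sketch (my-index, the shard number, and the primary flag are placeholders; use the values reported by GET _cat/shards):

        GET _cluster/allocation/explain
        {
          "index": "my-index",
          "shard": 0,
          "primary": false
        }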
          

    Error

    This happens when neither the Primary Shard nor the Replica Shard is allocated to a data node.

    • You can find the indices in the red status and the reason for the unallocated shards by running the following commands:

      GET _cat/indices
      GET _cluster/allocation/explain
      
    • You can try to restore the indices in the red status by running the following command:

      • Restoration may fail.
      • If the restoration fails, you must delete the index before the cluster can return to its normal operating status (an example of deleting an index is shown below).
      POST _cluster/reroute?retry_failed=true
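
      If you decide to delete an index that cannot be restored, you can delete it from the Console. A minimal example (my-index is a placeholder for an index reported in the red status; deleting an index permanently removes its data):

        DELETE my-index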
      

    Unknown

    This status occurs when the search engine process has terminated or there is a temporary network error.

    • This is usually due to Out Of Memory (OOM).
    • If you can access Kibana, run the following command in the Console to check whether the manager node and data nodes are listed normally (a variant with column headers is shown after this list).
      GET _cat/nodes
      
    • If you cannot access Kibana, restart the cluster (refer to Restart cluster).
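
    Adding the v parameter to the command above prints column headers, and selecting the node.role, master, and heap.percent columns makes it easier to confirm that the manager and data nodes are present and to spot memory pressure. A minimal variant using standard _cat/nodes columns:

      GET _cat/nodes?v&h=name,node.role,master,heap.percent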