Elastic Search

Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. Official clients are available in Java, .NET (C#), PHP, Python, Apache Groovy, Ruby and many other languages.[2] According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine followed by Apache Solr, also based on Lucene.

Elasticsearch is developed alongside a data-collection and log-parsing engine called Logstash, and an analytics and visualisation platform called Kibana. The three products are designed for use as an integrated solution, referred to as the "Elastic Stack" (formerly the "ELK stack").

Elasticsearch can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy. "Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have zero or more replicas. Each node hosts one or more shards, and acts as a coordinator to delegate operations to the correct shard(s). Rebalancing and routing are done automatically".[2] Related data is often stored in the same index, which consists of one or more primary shards, and zero or more replica shards. Once an index has been created, the number of primary shards cannot be changed.

Elasticsearch uses Lucene and tries to make all its features available through the JSON and Java APIs. It supports faceting and percolation, which can be useful for notifying clients when new documents match registered queries.

Another feature is called "gateway" and handles the long-term persistence of the index;[6] for example, an index can be recovered from the gateway in the event of a server crash. Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL datastore, but it lacks distributed transactions.

See also: Apache Lucene, Kibana, NoSQL

ElasticSearch Quick Notes

Installation

Elasticsearch installation: https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.1-linux-x86_64.tar.gz
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.1-linux-x86_64.tar.gz.sha512
# shasum -a 512 -c elasticsearch-7.15.1-linux-x86_64.tar.gz.sha512
# tar -xzf elasticsearch-7.15.1-linux-x86_64.tar.gz
# cd elasticsearch-7.15.1/

Enable automatic creation of system indices by adding the following line to config/elasticsearch.yml:

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

You may also consider setting the value to *, which allows automatic creation of all indices.
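
The same setting can also be applied to a running cluster through the cluster settings API instead of elasticsearch.yml; a minimal sketch, assuming Elasticsearch is already reachable on localhost:9200:

# curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "action.auto_create_index": ".monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*"
  }
}'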

Running Elasticsearch from the command line

# ./bin/elasticsearch

Test elasticsearch installation

# curl -X GET "localhost:9200/?pretty"
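
If the node is up, the response is a short JSON document along the lines of the following (node name, cluster name and build details will differ per installation):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.15.1",
    ...
  },
  "tagline" : "You Know, for Search"
}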

Running as a daemon

# ./bin/elasticsearch -d -p pid
# pkill -F pid

https://www.elastic.co/blog/running-elasticsearch-on-aws

Kibana Quick Notes

Installation

https://www.elastic.co/guide/en/kibana/current/targz.html

# curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.15.1-linux-x86_64.tar.gz
# curl https://artifacts.elastic.co/downloads/kibana/kibana-7.15.1-linux-x86_64.tar.gz.sha512 | shasum -a 512 -c -
# tar -xzf kibana-7.15.1-linux-x86_64.tar.gz
# cd kibana-7.15.1-linux-x86_64/
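
By default Kibana expects Elasticsearch on http://localhost:9200 and serves its UI on port 5601. If either differs in your setup, the relevant settings live in config/kibana.yml; a minimal sketch (the host values here are assumptions, not requirements):

server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]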

Start Kibana

# ./bin/kibana

https://www.elastic.co/guide/en/kibana/current/start-stop.html

NGINX Quick Notes

Installation

resources:

https://blog.ruanbekker.com/blog/2017/09/16/nginx-reverse-proxy-for-elasticsearch-and-kibana-5-on-aws/
https://burnhamforensics.com/2019/02/06/how-to-install-and-configure-nginx-for-kibana/
https://www.cyberciti.biz/faq/amazon-linux-ami-install-linux-nginx-mysql-php-lemp/
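
The links above walk through the NGINX installation itself; on Amazon Linux 2, for example, it can be as simple as the following (the extras topic name assumes that distribution, adjust for yours):

# sudo amazon-linux-extras install nginx1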

nginx settings

To add server-based basic authentication, create a user entry in an htpasswd file:

# echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

It will prompt for a password; note it down, as you will need it to log in through the proxy.

Create a file, e.g. kibana.conf, under /etc/nginx/conf.d/ with the following configuration:

server {
    listen 80;

    server_name {{IP-or-DNS}}; # e.g. 217.138.24.32

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a file, e.g. elasticsearch.conf, under /etc/nginx/conf.d/ with the following configuration:

server {
    listen 81;

    server_name {{IP-or-DNS}}; # e.g. 217.138.24.32

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:9200;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
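
Before starting or reloading NGINX, the new configuration can be checked for syntax errors:

# sudo nginx -t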

nginx start/stop

# sudo service nginx start
# sudo service nginx status
# sudo service nginx stop


functionbeat - AWS CloudWatch integration
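
The deploy command below refers to a function that has to be defined in functionbeat.yml first. A minimal sketch of such a definition, where the deploy bucket, log group and output host are placeholders/assumptions for this example:

functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy-bucket"
functionbeat.provider.aws.functions:
  - name: cloudwatch-method-name
    enabled: true
    type: cloudwatch_logs
    description: "ships a CloudWatch log group to Elasticsearch"
    triggers:
      - log_group_name: /aws/lambda/my-function
output.elasticsearch:
  hosts: ["localhost:9200"]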

# ./functionbeat setup -e
# ./functionbeat -v -e -d "*" deploy cloudwatch-method-name

metricbeat - AWS Service Metrics monitoring
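
The AWS module needs to be enabled and given credentials before the service is started; a minimal sketch, assuming a package install where module configs live under /etc/metricbeat/modules.d/ (the metricset and credential placeholders are examples, see the doc link below):

# sudo metricbeat modules enable aws

then edit /etc/metricbeat/modules.d/aws.yml along these lines:

- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '${AWS_ACCESS_KEY_ID}'
  secret_access_key: '${AWS_SECRET_ACCESS_KEY}'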

doc : https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-aws.html

# sudo systemctl start metricbeat
# sudo systemctl enable metricbeat
# sudo systemctl status metricbeat