Tyler Voll

Writer - System Admin


Today I took a dive into installing my own instance of the Elastic Stack on a Digital Ocean droplet, so that I can better test out how to utilize the stack.
While this guide will go through the steps of running Elasticsearch on a CentOS droplet, it will not go into the many uses and features of the Kibana interface.
For more information on Kibana and how to utilize the visual front-end of the Elastic Stack, be sure to check out one of my previous articles, located here.


While there are guides out there for the Elastic Stack, I found that the majority of the ones relating to CentOS were severely outdated, pulling resources from ELK stack releases that were a few years old, even though some of those articles were published within the last year. What this guide will deliver is an easy-to-understand, step-by-step process for deploying an Elastic Stack instance that not only works, but is running the most recent version of the stack (as of the time of writing). Although this guide uses Digital Ocean as the host of our VPS, that doesn't mean you can't follow it for a normal install on your own virtual machine or in another cloud-hosted environment. This walk-through is for spinning up a quick, public-facing install of the Elastic Stack, intended primarily for testing. If you'd like to take the Elastic Stack into a production installation, scroll to the end of this article for more information.

 


First we have some prerequisites:
Creating a CentOS 7.5 Droplet through Digital Ocean.
First, select the Create button to get a dropdown of the type of droplet you'd like to create.

Next, select the distribution you want on the droplet. For this tutorial, we will be using their newest version of CentOS.



It's recommended to use a droplet with at least 4 GB of RAM. After you have scrolled down and selected the hardware of your droplet along with its location, you are ready to create it.

 


After your droplet is created, you will be sent an email with the information you need to SSH into your new DO droplet.

Once you've gained access to your droplet, the next prerequisite is installing Java.
Before installing Java, we will update the fresh CentOS install.

yum update
yum install java-1.8.0-openjdk-devel
java -version
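
If everything installed correctly, java -version should print something roughly like the following (the exact build number will differ on your droplet; this is just an illustration of the OpenJDK 8 output format):

openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)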

Installing the Elastic Stack
First, we need to make sure all of the repositories are added on our CentOS server for each component of the stack.
The components we will be installing are Elasticsearch, Logstash, and Kibana.

sudo vi /etc/yum.repos.d/elasticsearch.repo
Enter in:
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and exit.
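
If you'd like to confirm that the repo was picked up, you can list the enabled repositories and look for the new entry (a quick sanity check, nothing more):

yum repolist | grep -i elasticsearch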


Next, we will need to add the GPG key in order to properly install from the repo.

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
 
Once it has been added, we can install Elasticsearch.
sudo yum install elasticsearch

Next, let's edit our elasticsearch.yml in order to bind Elasticsearch to our Droplet's IP.

sudo vi /etc/elasticsearch/elasticsearch.yml

Find the commented-out network.host line, uncomment it, and set it to your droplet's IP:

network.host: your-ip-entered-here

Save and exit.
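
For reference, after the edit the relevant line in /etc/elasticsearch/elasticsearch.yml would look something like this (159.65.97.168 is just a placeholder; use your own droplet's IP):

network.host: 159.65.97.168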


Once installed, we can start up the service and enable it to start with CentOS.
systemctl start elasticsearch
systemctl enable elasticsearch
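
If you want to confirm that the service came up cleanly before testing it, a quick status check and a look at the recent journal entries can help (these are standard systemd commands, not Elastic-specific tooling):

systemctl status elasticsearch
journalctl -u elasticsearch -n 50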

We can now run a curl command against Elasticsearch to see if it is working. Since we bound Elasticsearch to the droplet's IP, use that address rather than localhost.
curl -X GET http://your-droplet-ip:9200
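
A healthy node replies with a JSON blob roughly like the following (the node name, cluster UUID, and exact version number will differ on your install):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "...",
  "version" : {
    "number" : "6.x.x",
    ...
  },
  "tagline" : "You Know, for Search"
}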

Then we can go ahead and install Logstash.
sudo yum install logstash

Don't worry about running Logstash yet; we will come back to that component later for further customization with Beats.

Next, installing Kibana:

sudo yum install kibana


Now, since we are using a Digital Ocean droplet, we will want to customize our Kibana install
so that it listens on the droplet's IP address and is accessible from outside the droplet (in order to view Kibana in a browser).

sudo vi /etc/kibana/kibana.yml

Uncomment these lines:
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save and exit.


Modify the server.host & elasticsearch.url fields so that instead of localhost, they use your Droplet's IP address.
ex. server.host: "159.65.97.168" & elasticsearch.url: "http://159.65.97.168:9200"
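
Put together, the relevant section of /etc/kibana/kibana.yml ends up looking something like this (again, the IP is just a placeholder for your droplet's address):

server.port: 5601
server.host: "159.65.97.168"
elasticsearch.url: "http://159.65.97.168:9200"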

After these changes have been completed, you can start up the Kibana service and enable it to boot with the OS.
systemctl start kibana
systemctl enable kibana

Once the Kibana service has started up and is running successfully, you can interact with Kibana by entering in your droplet's IP:5601 into your browser.
http://IP-Address:5601/
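
If the page doesn't load, you can check from the droplet itself whether Kibana is actually listening on its port (a quick sanity check using standard tools; the IP below is a placeholder):

ss -tlnp | grep 5601
curl -I http://159.65.97.168:5601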

If you accidentally started up the Kibana service before making the proper adjustments to your kibana.yml, make those changes and then restart the Kibana service (systemctl restart kibana)
for the changes to take effect. Also, make sure that the elasticsearch.url is pointing to the right location.

Having any issues with Kibana? A good way to find out what is happening is by checking the logs.
By default, the logs aren't stored anywhere outside of standard output; to change that, just edit your kibana.yml.

sudo vi /etc/kibana/kibana.yml
Uncomment logging.dest: stdout

Modify it to point to a place where you'd like to store your kibana log.
ex: logging.dest: /var/log/kibana.log

Be sure to give it the correct permissions so that Kibana can properly write to the log.
chown kibana:kibana /var/log/kibana.log
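
If /var/log/kibana.log doesn't exist yet, create it first (touch /var/log/kibana.log) before running the chown above. Either way, restart Kibana afterwards so it picks up the new logging destination:

systemctl restart kibana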

With that all said and done, you can properly look through the output that Kibana delivers!
tail -n 10 -f /var/log/kibana.log







Congratulations! This should be all you need to do on a Digital Ocean droplet to get a simple instance of the Elastic Stack running.
If you are interested in deploying on a production environment, consider deploying your Elastic Stack instance across multiple nodes, as explained in further detail here.

Stay tuned, because the more I delve into the Elastic Stack, the more I'll post! My next article on the Elastic Stack will probably look at customization and parsing data through Logstash, and further working with Filebeat, in order to really flesh out the advantages of using the Elastic Stack. I just wanted to say thank you to those who enjoy reading my articles. If you're interested in following along through Digital Ocean, feel free to help us out here by trying out our promo code. You'll get free Digital Ocean credit to start you out with, which can definitely be nifty if you'd like to use it to host your Elastic Stack instance!
