How To Install ELK On Ubuntu 22.04 [Updated]

Prerequisites - Let’s Get Our Ubuntu Ready To Rock & Roll

Make sure you’re running all of these commands with administrative privileges!

  • Install Nginx and Allow HTTP
sudo apt install nginx
sudo systemctl enable nginx
sudo ufw allow 'Nginx Full'
  • Install A Java Dev Environment For Behind-The-Scenes Backend Support
sudo apt install default-jre
sudo apt install default-jdk

  • Use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the flags -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the curl command to the gpg --dearmor command, which converts the key into a format that APT can use to verify downloaded packages.
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
  • This command will give our Ubuntu package manager APT the ability to read from the Elastic source:
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

  • Let’s Update APT

sudo apt update
  • Let’s boogie down and download the E of ELK (Elasticsearch)
sudo apt install elasticsearch
  • Let’s Configure Elasticsearch
sudo nano /etc/elasticsearch/elasticsearch.yml
  • If you’re spreading your Elastic Stack across multiple systems, you can specify an IP address here to control where this node is exposed. I’m doing an all-in-one deployment, so I’ll stick with localhost; that’s the default anyway, so we could even skip this if we wanted.
    # ---------------------------------- Network -----------------------------------
    #
    # By default Elasticsearch is only accessible on localhost. Set a different
    # address here to expose this node on the network:
    #
    network.host: localhost
    #
    
  • If all is well, we should be able to start Elasticsearch now
sudo systemctl start elasticsearch
  • Let’s be good SysAdmins (system administrators) and make sure it starts on boot in case the system gets shut down.
sudo systemctl enable elasticsearch
  • To validate our install is Gucci, we can curl localhost on the Elasticsearch port (9200); a sample response is just below
curl -X GET "localhost:9200"
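  • If Elasticsearch is up, you’ll get back a JSON blob shaped like this trimmed example (the name, uuid, and version values here are illustrative; yours will differ):
{
  "name" : "your-hostname",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AbC123...",
  "version" : {
    "number" : "7.17.0",
    ...
  },
  "tagline" : "You Know, for Search"
}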
  • Alright, let’s copacabana and install Kibana, the K in ELK
sudo apt install kibana
  • Be a good SysAdmin and enable that on startup too (Spoiler Alert: we’re gonna do this for all of the components of our Elastic Stack)
sudo systemctl enable kibana

  • Let’s start our Kibana up

sudo systemctl start kibana
  • Let’s create an administrative user for Kibana access (swap kibanaadmin for whatever username you like)
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
  • Next we’re gonna create an Nginx server block, as we’re gonna serve Kibana from a domain
sudo nano /etc/nginx/sites-available/your_domain
  • Paste this text into the file, changing info like your_domain, and defaults like the proxy_pass if you want to be more secure.
server {
    listen 80;

    server_name your_domain;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
  • Save and close the file, then we’ll enable the server block
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain
  • Make sure you didn’t pull a goofus and leave any errors in that file
sudo nginx -t
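  • If the file is error-free, nginx -t prints something like:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful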
  • Reload Nginx as we’re all set to use Nginx with ELK
sudo systemctl reload nginx
  • Let’s check out our GUI now, courtesy of Nginx, to make sure Kibana is all set. Go to the browser and navigate to http://your_domain/status (or use localhost:5601). If you get a login pop-up, log in with the admin creds (the administrative credentials you made earlier, kibanaadmin etc.) and you should see the Kibana status page. For now let’s close back out of the browser, but it’s awesome that the Kibana dashboard is ready to party! If you’d rather stay in the terminal, there’s a quick curl check below.
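  • For a terminal-side sanity check, you can hit Kibana’s status API directly (this assumes the default localhost:5601 setup from above; it spits back a JSON blob of status info):
curl -s http://localhost:5601/api/status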

  • Let’s Install The Last Part Of ELK, The L: Logstash

sudo apt install logstash
  • In our instance we’re not setting up multiple systems to help Logstash ingest and mario-pipe all of our data over to Elasticsearch for processing. But if you are setting up multiple machines to lessen the workload, you can drop config files in /etc/logstash/conf.d that take input from those systems and point the output at the destination (the IP of Elasticsearch). Think input and output, like a mario pipe; there’s a hypothetical sketch just below.
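  • A minimal sketch of that multi-machine idea (not part of this all-in-one install): it assumes Beats shippers on other boxes point at this machine on port 5044, and 10.0.0.5 is a made-up IP for a remote Elasticsearch node:
input {
  beats {
    host => "0.0.0.0"   # listen on all interfaces so remote Beats can reach us
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.5:9200"]   # hypothetical remote Elasticsearch node
  }
}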

  • Next, we’re gonna set up some ‘beats’ (yes, it’s the industry term, and we’re here for it) that will feed our Elastic Stack data for us to look at and monitor.

  • We’re gonna start with Filebeat so we can ship log data in a nice way. Let’s create the Logstash input conf (configuration) file for that boi now.

sudo nano /etc/logstash/conf.d/02-beats-input.conf
  • Paste this in. 5044 is the default port Beats uses to talk to Logstash, so if you stuck with the defaults you’re all set.
input {
  beats {
    port => 5044
  }
}
  • Now we’re gonna get Elasticsearch to process the data and throw it up to Kibana when we want it. Let’s create that conf file my friends :)
sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

  • Paste the below and peep that we’re specifying Elasticsearch at port 9200, the same port we curled earlier ;)

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
  • Let’s validate that our moustaching around with Logstash looks good and works.
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
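  • You might see some JVM/OpenJDK warnings first, which are safe to ignore; if the config checks out, the important line looks like:
Config Validation Result: OK. Exiting Logstash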
  • Let’s get Logstash logging and started up
sudo systemctl start logstash
sudo systemctl enable logstash
  • Now we can start to get data into our Elastic Stack through the different beats! We’ll keep going with filebeat.
sudo apt install filebeat
  • Let’s config that bad boi ;P
sudo nano /etc/filebeat/filebeat.yml
  • Scroll down and make sure your config has the Elasticsearch deetz commented out (with the # in front). This is because we’re using Logstash to ship all our data, so we don’t need the Elasticsearch deetz in this config. The data flows through Logstash, then to Elasticsearch for processing, then we view it through Kibana; hence the ELK stack.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  • Now uncomment the Logstash deetz (remove the # like below)

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  • Filebeat has a heck of a ton of modules we can enable depending on the type of logs we want shipped to our Elastic Stack. You can peep em right from the terminal, as shown below.
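  • This’ll list every available module and whether it’s enabled or disabled:
sudo filebeat modules list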

  • We’ll get started with the system module so we can see local logs.

sudo filebeat modules enable system

  • Now we need to set up the Filebeat mario pipe to munch on the log data before sending it through Logstash to Elasticsearch

sudo filebeat setup --pipelines --modules system
  • Let’s load the index template for Filebeat, giving it an index so we can use it in Kibana.
sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
  • Now let’s disable the Logstash output and enable the Elasticsearch output for this setup step. We only need it enabled for a hot sec, because Filebeat connects to Elasticsearch to check version information.
sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
  • Wait a lil bit as all of our ELK puzzle pieces come together and we’re almost ready to party

  • Now all we gotta do is get our data flowing by starting filebeat

sudo systemctl start filebeat
sudo systemctl enable filebeat
  • Let’s validate our data is flowing and we’re all done. Now the real fun can begin.
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
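  • You’re hoping for a hits total above zero, in a response shaped like this trimmed example (the numbers here are illustrative):
{
  "took" : 5,
  "timed_out" : false,
  "hits" : {
    "total" : {
      "value" : 4000,
      "relation" : "gte"
    },
    ...
  }
}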



