How to view and analyze your logs on a web page

Recently I started having some problems with my DD-WRT router - connection drops and occasional reboots. So, to deal with the problem, I wanted to collect some data from the router in the form of logs. Luckily for me, DD-WRT has a syslogd service which can send logs to a syslog server on another machine over a TCP (or UDP?) connection. Since I know close to nothing in that area, I went to the next best trusted source - Google!

Some of the search results that I've seen referred to something called Logstash, Elasticsearch, and Kibana. To make a long story short, these 3 components make a great centralized log repository with awesome visual data representation and search capabilities.

Got your attention?

Below is a quick guide on how to get up and running with this stack. This guide is going to set up all 3 components (with the appropriate dependencies) on an Ubuntu 14.04 server. Also, for this to be most effective, the pieces should be installed on separate computers: based on my understanding, Elasticsearch and Kibana should be installed on one computer, while Logstash should be installed on the other computers to gather their logs. Since I wanted to analyze my DD-WRT logs (as well as the logs of my current server), I've installed all 3 pieces on a single machine. With that being said, here's the guide.

First things first, I'm assuming that you have:

  • Admin rights on your server
  • A server running Ubuntu 14.04 Desktop / Server
  • A LAMP stack installed

Now with that out of the way, let's get started!

Component Installation

Update the system

Before we begin, update your system by running:

sudo apt-get update
sudo apt-get upgrade

Once that completes, install Java 7 by running:

sudo apt-get install openjdk-7-jre

Installing Logstash

Download the latest version of Logstash from their website (for this tutorial, we are going to use version 1.4.2):

cd ~
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz

Extract the newly downloaded Logstash archive to /opt/logstash:

tar xzvf logstash-1.4.2.tar.gz
sudo mv logstash-1.4.2 /opt/logstash

Now, add a 'logstash' user for running the service

sudo adduser --system --disabled-login --no-create-home --group logstash
sudo usermod -a -G adm logstash

Next, we need to download an init script that will allow logstash to be started up as a service at system boot:

# download init script
sudo wget -O /etc/init.d/logstash https://raw.githubusercontent.com/elasticsearch/logstash/master/pkg/logstash.sysv
# make it executable
sudo chmod +x /etc/init.d/logstash
# make it auto-startable
sudo update-rc.d logstash defaults

Set up all the folders required for the Logstash service to run:

# create logstash config dir
sudo mkdir -p /etc/logstash/conf.d
# create logstash logs dir
sudo mkdir /var/log/logstash
sudo chown -R logstash: /var/log/logstash
# make home folder
sudo mkdir /var/lib/logstash
sudo chown -R logstash: /var/lib/logstash

Verify that logstash is working by running the following command:

/opt/logstash/bin/logstash agent -e "input {stdin { } } output { stdout { codec => rubydebug } }"

After entering the command above, type something in the terminal, e.g. "Hello, Logstash!", and press Control+D. Your output should look something like this:

/opt/logstash/bin/logstash agent -e "input {stdin { } } output { stdout { codec => rubydebug } }"
Hello, Logstash!
{
       "message" => "Hello, Logstash!",
      "@version" => "1",
    "@timestamp" => "2014-07-28T01:27:27.231Z",
          "host" => "elk"
}

If all went well, let's create a sample configuration to parse the syslog files. Simply paste this config into /etc/logstash/conf.d/syslog.conf

input {
    file {
        path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
        type => "syslog"
    }
}
 
output {
    elasticsearch {
        bind_host => "127.0.0.1"
        cluster => "elasticsearch"
        host => "127.0.0.1"
    }
}
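As written, the config above ships each log line to Elasticsearch as a single unparsed message field. Optionally, a filter block can split syslog lines into structured fields before they are indexed. Here's a minimal sketch using the stock SYSLOGLINE grok pattern that ships with Logstash, placed between the input and output blocks (adjust the patterns to match your actual log format):

```
filter {
    if [type] == "syslog" {
        # split the raw line into logsource, program, pid, message, etc.
        grok {
            match => [ "message", "%{SYSLOGLINE}" ]
        }
        # use the timestamp from the log line instead of the ingest time
        date {
            match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}
```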

Now, let's start the service:

sudo service logstash start
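Since my original goal was collecting DD-WRT logs, the router also needs a way in: point the router's syslogd at this server's IP, and add a network listener to the input block of /etc/logstash/conf.d/syslog.conf. A sketch (DD-WRT's syslogd typically sends over UDP to port 514; note that listening on ports below 1024 requires Logstash to run as root, so you may prefer a high port like 5514 if your router lets you change the destination port):

```
input {
    udp {
        port => 514        # standard syslog port; needs root, or use e.g. 5514
        type => "syslog"
    }
}
```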

Installing Elasticsearch

Next, download the Debian package for Elasticsearch. According to the documentation for Logstash 1.4.2, we need Elasticsearch version 1.1.1:

cd ~
# download debian package
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb.sha1.txt
# verify its integrity
sha1sum -c elasticsearch-1.1.1.deb.sha1.txt
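If you haven't used sha1sum -c before: it re-hashes the file named in the manifest and compares the result against the recorded checksum. Here's a tiny self-contained illustration using a dummy file (the names below are made up and unrelated to the real download):

```shell
# create a dummy file and a checksum manifest for it
echo "pretend this is a .deb" > example.deb
sha1sum example.deb > example.deb.sha1.txt
# -c re-hashes the file and compares it against the manifest
sha1sum -c example.deb.sha1.txt
# expected output: example.deb: OK
```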

EDIT: Thanks to reader newmember: to get Elasticsearch 1.4.2 to work with Logstash, you'll need to add the line http.cors.enabled: true to /etc/elasticsearch/elasticsearch.yml
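That edit would look like the following in /etc/elasticsearch/elasticsearch.yml (restart the elasticsearch service afterwards). Depending on your Elasticsearch version, you may also need to set http.cors.allow-origin, since CORS can be enabled yet allow no origins by default:

```
# allow Kibana's browser-side requests to reach Elasticsearch
http.cors.enabled: true
# possibly also required, depending on your Elasticsearch version:
# http.cors.allow-origin: "*"
```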

Now, let's install it by running:

sudo dpkg -i elasticsearch-1.1.1.deb

and enable the autostart service by running:

sudo update-rc.d elasticsearch defaults 95 10

Now you are ready to start the service

sudo service elasticsearch start

Installing Kibana

First, let's download the latest source code by running:

cd ~
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz

Once we've downloaded it, let's unpack!

tar xzvf kibana-3.1.0.tar.gz

After unpacking, let's modify the configuration file by running:

editor kibana-3.1.0/config.js

You'll want to replace the line that reads:

elasticsearch: "http://"+window.location.hostname+":9200",

with:

elasticsearch: "/elasticsearch/",

Next, let's move it to Apache's default document root folder:

sudo mv kibana-3.1.0 /var/www/kibana3

Configure Apache

First, let's create the SSL key and certificate files by running:

sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
cd /etc/pki/tls
sudo openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private/kibana.key -out certs/kibana.crt
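To sanity-check what that command produces, you can inspect a certificate with openssl x509. Here's a self-contained sketch that generates a throwaway cert with the same flags in a temp directory and prints its subject and validity window:

```shell
# generate a throwaway self-signed cert/key pair in a temp dir
tmp=$(mktemp -d)
openssl req -x509 -batch -nodes -newkey rsa:2048 \
    -keyout "$tmp/kibana.key" -out "$tmp/kibana.crt" 2>/dev/null
# print the subject and validity window of the certificate
openssl x509 -in "$tmp/kibana.crt" -noout -subject -dates
```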

Now, create an Apache configuration so that you can access your Kibana instance. In this example, we are going to create a new file: /etc/apache2/sites-available/kibana.conf. Here's a template to get you started:

Alias /kibana3 /var/www/kibana3
<VirtualHost *:443>
    SSLEngine On
    SSLCertificateFile /etc/pki/tls/certs/kibana.crt
    SSLCertificateKeyFile /etc/pki/tls/private/kibana.key
 
    DocumentRoot /var/www/kibana3
    <Directory /var/www/kibana3>
        Allow from all
        Options -Multiviews
    </Directory>
 
    LogLevel debug
    ErrorLog /var/log/apache2/kibana3_error.log
    CustomLog /var/log/apache2/kibana3_access.log combined
 
    # Set global proxy timeouts
    <Proxy http://127.0.0.1:9200>
        ProxySet connectiontimeout=5 timeout=90
    </Proxy>
 
    # Proxy for _aliases and .*/_search
    <LocationMatch "^/(_nodes|_aliases|.*/_aliases|_search|.*/_search|_mapping|.*/_mapping)$">
        ProxyPassMatch http://127.0.0.1:9200/$1
        ProxyPassReverse http://127.0.0.1:9200/$1
    </LocationMatch>
 
    # Proxy for kibana-int/{dashboard,temp} stuff (if you don't want auth on /, then you will want these to be protected)
    <LocationMatch "^/(kibana-int/dashboard/|kibana-int/temp)(.*)$">
        ProxyPassMatch http://127.0.0.1:9200/$1$2
        ProxyPassReverse http://127.0.0.1:9200/$1$2
    </LocationMatch>
 
    # Point to the elasticsearch
    <Location /elasticsearch/>
        ProxyPass http://localhost:9200/
        ProxyPassReverse /
    </Location>
</VirtualHost>

To see the full template, please visit its source on GitHub
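One part of the full template worth calling out: the comment above about protecting the kibana-int dashboard-save paths. Here's a sketch of doing that with HTTP Basic auth, assuming you first create a password file (the file path and LocationMatch pattern below are illustrative; adapt them to the full template):

```
# create the password file once with:
#   sudo htpasswd -c /etc/apache2/conf.d/kibana-htpasswd yourusername
<LocationMatch "^/(kibana-int/dashboard/.*/(_create|_update)|kibana-int/temp.*)$">
    AuthType Basic
    AuthName "Kibana dashboards"
    AuthUserFile /etc/apache2/conf.d/kibana-htpasswd
    Require valid-user
</LocationMatch>
```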

Finally, enable your configuration:

# enable site configuration
sudo a2ensite kibana
# enable mod_proxy, mod_proxy_http (required for the ProxyPass directives), and mod_ssl
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod ssl
# restart apache to apply changes
sudo service apache2 restart

and verify that Kibana loads up by navigating to: https://<your server ip>/kibana3

That's it! Now you should have a fully functioning system that will collect your syslogs and allow you to browse and search through them on your new web interface.
