ELK Stack
It stands for Elasticsearch, Logstash, and Kibana, all working together to give you a free centralized logging solution. I’ve spent several hours searching multiple sites and getting multiple answers, some of which applied to out-of-date software versions, all to get a centralized system log server on the cheap. So this is my guide showing how I set up an ELK stack on Ubuntu 14.04 LTS Server to monitor pfSense 2.2.x firewalls, Windows, Mac OS X, and Linux systems with the current release of all the software (as of this writing, March 2016). I’ll try to credit the sources as much as possible as well, yet as I experienced, some may not work in the future. Here is the lineup of what I’m working with:
- Elasticsearch 2.2.0
- Logstash 2.2.2
- Kibana 4.4
- Java 8 Update 74
- Ubuntu Server 14.04 LTS
- Nginx 1.4.6
And this is some of the additional software and topics I’ll be going over:
- Self signed certificates
- Curator 3.4.x (for cleaning up old logs)
- Marvel 2.2.0
- Logging pfSense 2.2.x
- Logging Windows, Mac OS X, and Linux Systems
Outline
- Setting up the log server
- Client Side Setup
- Extra Logstash Configs
- pfSense Config File (11-pfsense.conf) [Updated: 6/10/2016]
- pfSense Pattern File (pfsense.grok)
- Linux Pattern File (linux.grok)
- Troubleshooting
Setting up the log server^ Back to Top
Install Ubuntu
Find a machine that you want to use as your log server that has network access. Whether you use a VM (like I did) or a physical machine, be sure it has as much space as you think you may need.
I would recommend a 500 GB drive. I reserved a 1 TB dynamic drive on my VM just to be safe, since expanding the drive afterwards is a bit of a pain and may break other things. I am monitoring three pfSense firewalls and two Windows servers for now, so that space should easily give me a month of logs.
Download and install Ubuntu 14.04 LTS from
http://www.ubuntu.com/download/server
You should be able to keep all the default settings. Don’t worry about installing the SSH server at the end; I’ll go over that as well. It’s up to you whether you want to do the auto security updates as well. I opted to do them.
Update Ubuntu once you have it installed:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Setup SSH Server (optional)^ Back to Top
Install the program:
sudo apt-get install openssh-server
Copy the default config file, then make it unwritable:
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.factory-defaults
sudo chmod a-w /etc/ssh/sshd_config.factory-defaults
Edit the config file:
sudo nano /etc/ssh/sshd_config
Add this as the last line in that file, placing the name of the Ubuntu Server user that should have SSH access in place of youruser. For additional users, separate the names with spaces:
AllowUsers youruser
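For example, if you wanted to allow two accounts (here youruser and seconduser are just placeholder usernames), the line would look like this:
AllowUsers youruser seconduser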
Restart SSH
sudo restart ssh
More details can be found on the Ubuntu SSH Documentation site.
Install Java 8^ Back to Top
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer
Accept both agreements when prompted. You can verify your version of Java with java -version
Install Elasticsearch (Single Node)^ Back to Top
This is a basic install that will make it easier to upgrade later.
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update && sudo apt-get install elasticsearch
Configure
Open elasticsearch.yml for editing:
sudo nano /etc/elasticsearch/elasticsearch.yml
Uncomment (remove the # symbol) the cluster.name, node.name, and network.host lines in your elasticsearch.yml file and set them as follows:
cluster.name: ClusterName
node.name: node-1
network.host: localhost
You can use anything in place of ClusterName, just be sure it’s the same in all of the config files. You may also use anything in place of node-1, as long as it’s different in each config file, since it acts as a unique identifier.
Restart Elasticsearch service:
sudo service elasticsearch restart
Have Elasticsearch run at boot:
sudo update-rc.d elasticsearch defaults 95 10
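To sanity-check the install, you can query Elasticsearch locally with curl (assuming you kept the default port of 9200); it should return a short block of JSON containing your cluster name and the version number:
curl http://localhost:9200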
Install Kibana^ Back to Top
Download Kibana:
echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install kibana
Configure
Open kibana.yml for editing:
sudo nano /opt/kibana/config/kibana.yml
Uncomment the server.host and elasticsearch.url lines in your kibana.yml file and set them as follows:
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Now enable and run Kibana
sudo update-rc.d kibana defaults 95 10
sudo service kibana start
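If you want to confirm Kibana came up, you can request the page headers from the server itself. This assumes the default Kibana port of 5601; the exact response may vary by version, but you should get something back rather than a connection refused error:
curl -I http://localhost:5601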
Create Self-signed Certs^ Back to Top
For this you may either create your own certificates and just follow the naming and directory scheme, or simply follow what I did.
Create directories for the certificates and keys.
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
Create self-signed certificates to use with Nginx and Logstash, being sure to replace yoursite.com:
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt -subj /CN=yoursite.com
For yoursite.com, use your FQDN (Fully Qualified Domain Name) if you have one for the server; sub-domains will work too. Otherwise it will error out if you try to use an IP. If you would like to make the certificates using an IP, you can find those instructions at Mitchell Anicas’ site. Also, be sure to copy the logstash-forwarder.crt file somewhere you can easily get to, as you will need it in the “Sending Logs from Clients” section.
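If you’d like to double-check what was generated, openssl can print the subject and validity dates of the new certificate (the path below matches the one used in the command above):
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates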
Install Nginx^ Back to Top
This will set up a reverse proxy so that you can access Kibana externally.
Install Nginx and Apache2-utils:
sudo apt-get install nginx apache2-utils
Set up a user for web access. I use kibadmin in this example:
sudo htpasswd -c /etc/nginx/htpasswd.users kibadmin
NOTE:
The -c switch creates the file if it doesn’t exist, or overwrites it if it does. So be sure to leave out -c when adding additional users, or else they will become the only users in the file.
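For example, adding a second web user (seconduser is just a placeholder name) would look like this; note the missing -c:
sudo htpasswd /etc/nginx/htpasswd.users seconduser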
Open and edit the Nginx configuration file:
sudo nano /etc/nginx/sites-available/default
This is what your default file should look like:
server {
listen 443 ssl;
server_name yoursite.com;
ssl_certificate /etc/pki/tls/certs/logstash-forwarder.crt;
ssl_certificate_key /etc/pki/tls/private/logstash-forwarder.key;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
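Before restarting, it doesn’t hurt to let Nginx validate the configuration and certificate paths; it should report that the syntax is ok and the test is successful:
sudo nginx -t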
Restart Nginx:
sudo service nginx restart
You should now be able to access your Kibana site at https://[serverIP]. For example, if you have this at 192.168.1.10, you will go to https://192.168.1.10, where it will prompt you for the Kibana username and password you set earlier. You can find additional info on the Nginx Wiki.
Install Logstash^ Back to Top
echo "deb http://packages.elastic.co/logstash/2.2/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install logstash
Make Files Editable Remotely (Optional)^ Back to Top
To make things easier for myself, I made it so that I can connect to the server (the SSH step needs to be done) with FileZilla and edit the files with Notepad++. To do this, you’ll need to change the folder permissions and create a group.
First, add your user to a new group; I will call it adminusers.
sudo groupadd adminusers
sudo usermod -a -G adminusers youruser
Now add your group to the config folders for Logstash:
sudo chgrp -R adminusers /etc/logstash/conf.d
sudo chgrp -R adminusers /opt/logstash/patterns
And finally, change the permissions in the folder:
sudo chmod -R 775 /etc/logstash/conf.d
sudo chmod -R 775 /opt/logstash/patterns
Configure
Create and open the config files for configuring Logstash:
02-beats-input.conf^ Back to Top
Create and open 02-beats-input.conf:
sudo nano /etc/logstash/conf.d/02-beats-input.conf
This is what my 02-beats-input.conf file looks like:
input {
  beats {
    port => 5044
  }
  udp {
    type => "pfLogs"
    port => 5140
  }
}
NOTE:
Ports can be changed here, as long as you’re consistent in your configurations on the log-forwarder side. I’m using UDP only for pfSense, so if you don’t have the same setup, you may exclude that section.
15-tagging.conf (Optional)^ Back to Top
Here is where I create a file I use for adding tags to different log sources, allowing me to put them in different indexes. This is completely optional, yet handy when it comes to cleaning up logs: it lets me delete firewall logs that are older than 30 days and Linux server logs that are older than a year, where otherwise I would have to delete all or none.
Create and open 15-tagging.conf:
sudo nano /etc/logstash/conf.d/15-tagging.conf
This is what my 15-tagging.conf file looks like:
filter {
  if [host] in ["server-ap1", "server-dc1"] {
    mutate {
      replace => { "type" => "linuxLog" }
      add_tag => ["linux-log", "VLAN1"]
    }
  }
  if [host] in "Smiths-Macbook.local" {
    mutate {
      replace => { "type" => "macLog" }
      add_tag => ["linux-log", "VLAN2", "mac"]
    }
  }
  if "linux-log" in [tags] {
    grok {
      patterns_dir => [ "./patterns" ]
      match => { "message" => "%{LINUXINFO}" }
    }
  }
}
You can find more ways to isolate and tag data in the Logstash Documentation’s Event Dependent Configuration. Here I refer to %{LINUXINFO}, which can be found in my Linux Pattern File, where it tells Logstash how to format the output. I tell it the pattern file is located in ./patterns, which expands out to /opt/logstash/patterns.
30-elasticsearch-output.conf
Create and open 30-elasticsearch-output.conf:
sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
This is what my 30-elasticsearch-output.conf file looks like:
output {
  if [type] == "pfLogs" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-pflogs-%{+YYYY.MM.dd}"
    }
  } else if [type] == "wineventlog" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-winlogs-%{+YYYY.MM.dd}"
    }
  } else if [type] == "linuxLog" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-linuxlogs-%{+YYYY.MM.dd}"
    }
  } else if [type] == "macLog" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-maclogs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-other-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}
TIP: You can create multiple index types based on different fields, yet you cannot have capital letters in an index name.
Test Logstash:
sudo service logstash configtest
Restart Logstash:
sudo service logstash restart
Have Logstash run at boot:
sudo update-rc.d logstash defaults 95 10
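Once Logstash is up, you can check that it is actually listening on the ports from 02-beats-input.conf (5044 for Beats over TCP and 5140 for the pfSense syslog input over UDP); adjust the port numbers if you changed them:
sudo netstat -plnt | grep 5044
sudo netstat -plnu | grep 5140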
At this point we can connect to Kibana (just be sure to use https:// if you followed my configuration), yet there is no data being sent to it yet, so we should hold off on that for now.
Install Curator^ Back to Top
Curator is the program that will help you automatically clean up old logs.
You will first need to install pip:
sudo apt-get install python-pip
sudo apt-get update && sudo apt-get install python-elasticsearch-curator
Now throw it into a cron job so that it happens automatically:
sudo crontab -e
Choose nano to edit it with. Then enter the following line at the bottom, then save and close:
30 0 * * * /usr/local/bin/curator delete indices --older-than 30 --time-unit days --timestring \%Y.\%m.\%d --prefix logstash-pflogs
What I’ve done with this line is make a cron job that runs every day at 12:30 AM and deletes any indices older than 30 days whose names start with logstash-pflogs. If you want to see which indices you have, run the following line:
curl 'localhost:9200/_cat/indices?v'
Then you can test whether your cron job will work by running the following with your own pattern. This uses show instead of delete so that you can test the pattern; it will show the indices matching your pattern that are from the last 5 days.
sudo /usr/local/bin/curator show indices --newer-than 5 --time-unit days --timestring \%Y.\%m.\%d --prefix logstash-pflogs
Client Side Setup
Now that we have the ELK stack set up, it’s time to start sending it some logs so that we can see what it can do.
Windows Client^ Back to Top
To get Windows to send logs to this, you will need to install Winlogbeat. It can be downloaded from:
https://www.elastic.co/downloads/beats/winlogbeat
You can look at their Documentation as well, but here’s a walkthrough of what I did. First you will need to download and extract the folder to C:\Program Files\ (same on 32-bit and 64-bit), then rename it to Winlogbeat. Now open PowerShell as an administrator and navigate to C:\Program Files\Winlogbeat:
cd 'c:\Program Files\Winlogbeat'
Then run the program:
PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-winlogbeat.ps1
Configure
Now, minimize PowerShell for the moment and navigate to C:\Program Files\Winlogbeat. Use a text editor to edit winlogbeat.yml. You’ll first want to make sure you will receive the logs you want. This section shows that I’ll be receiving the Application, Security, and System logs:
event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
    ignore_older: 72h
  - name: System
    ignore_older: 72h
The ignore_older: 72h setting means what it sounds like: it will only send me the logs from the past 3 days. If you want all the previous logs, you can comment that out. Additional configuration options can be found in the Winlogbeat Documentation.
Then comment out the Elasticsearch section like this:
output:
### Elasticsearch as output
#elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]
Then navigate down to the Logstash section, then uncomment and make these changes:
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["yoursite.com:5044"]
The yoursite.com:5044 entry is directly related to the beats section in your 02-beats-input.conf file, so make sure the ports match. Save and close the file after you’re done making changes.
Now let’s go back to PowerShell and run the following command to test the config you just saved:
.\winlogbeat.exe -c .\winlogbeat.yml -configtest -e
From there, they want you to load the index template into Elasticsearch. Yet their method didn’t work for me, since we don’t have a connection to the Elasticsearch server over the network (only Kibana). So what I did instead was upload the winlogbeat.template.json file into my home directory on the Syslog Server using an FTP client like FileZilla, then run the following command on the Syslog Server:
curl -XPOST 'http://localhost:9200/_template/winlogbeat?pretty' -d @/home/youruser/winlogbeat.template.json
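You can confirm the template actually made it into Elasticsearch by asking for it back; if the upload worked, this should print the template JSON:
curl 'http://localhost:9200/_template/winlogbeat?pretty'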
Now that the template is loaded into Elasticsearch, we can start the service on the client Windows machine. You can either manually go to Services on your computer and click start, or run the following command in PowerShell on the Windows computer:
Start-Service winlogbeat
You should now be sending logs to your Syslog Server.
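Back on the Syslog Server, after a minute or two you should see a logstash-winlogs index for the current day appear (the index name comes from the 30-elasticsearch-output.conf file above):
curl 'localhost:9200/_cat/indices?v' | grep winlogs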
To Uninstall
Open PowerShell as an administrator and either manually stop the winlogbeat service or run the following:
Stop-Service winlogbeat
Now navigate to C:\Program Files\Winlogbeat and then run the uninstaller:
cd 'c:\Program Files\Winlogbeat'
PowerShell.exe -ExecutionPolicy UnRestricted -File .\uninstall-service-winlogbeat.ps1
Linux Client^ Back to Top
On a Linux client, you will need to install Filebeat. To do so, go onto the client machine and run the following:
curl https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
sudo apt-get update && sudo apt-get install filebeat
Now have it run at boot:
sudo update-rc.d filebeat defaults 95 10
NOTE:
This is assuming your client machine is running Ubuntu 14.04 LTS. Additional information for other distributions can be found in the Filebeat Documentation.
Configure
You will now need to edit filebeat.yml. You can use the remote edit method here, or run the following line:
sudo nano /etc/filebeat/filebeat.yml
The configuration will be very similar to the Windows method, where you comment out the Elasticsearch output and uncomment and configure the Logstash section. Below is the default log path. You can add additional paths here as long as you keep the same format. When making your own, be sure to use spaces and not tabs, or else it will error out.
paths:
  - /var/log/*.log
Then again, comment out the Elasticsearch section, then navigate down to the Logstash section to uncomment and make these changes:
output:
### Elasticsearch as output
#elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]
...
...
...
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["yoursite.com:5044"]
Again, make sure the yoursite.com:5044 entry matches the beats section in your 02-beats-input.conf file.
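Before relying on it, you can have Filebeat check the config and then restart the service. I’m assuming the -configtest switch works the same way here as it does for Winlogbeat above, and that the deb package put the binary in /usr/bin:
sudo /usr/bin/filebeat -e -configtest -c /etc/filebeat/filebeat.yml
sudo service filebeat restart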
To Uninstall
If you need to remove the logger from the machine, you can run the following line:
sudo apt-get remove --purge filebeat
Mac Client^ Back to Top
There is a Filebeat version for Mac, where setup is similar to Linux. Open up a Terminal window and first navigate to your Downloads folder, replacing youruser with the name of your local user:
cd /Users/youruser/Downloads/
Then download and extract the program:
curl -L -O https://download.elastic.co/beats/filebeat/filebeat-1.1.2-darwin.tgz
tar xzvf filebeat-1.1.2-darwin.tgz
Move the newly extracted folder to your Applications folder and rename it to Filebeat:
mv /Users/youruser/Downloads/filebeat-1.1.2-darwin /Applications/Filebeat
Configure
Now edit the filebeat.yml configuration file:
sudo nano /Applications/Filebeat/filebeat.yml
In the top section, put the locations you would like to log. By default, it will grab all the logs, yet I changed it to only grab a few. This is what my top section looks like; each line is indented using spaces, not tabs:
paths:
  - /var/log/system.log
  - /var/log/install.log
  - /var/log/accountpolicy.log
Then, just like in the other two, we’ll comment out the Elasticsearch section and navigate down to the Logstash section to uncomment and make these changes:
output:
### Elasticsearch as output
#elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]
...
...
...
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["yoursite.com:5044"]
Again, make sure the yoursite.com:5044 entry matches the beats section in your 02-beats-input.conf file. Now save and close the file.
Next, we’ll need to create a file called FilebeatDaemon.plist to allow us to run the program at startup.
sudo nano /Library/LaunchDaemons/FilebeatDaemon.plist
The file should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.elastic.filebeat</string>
    <key>ProgramArguments</key>
    <array>
      <string>/Applications/Filebeat/filebeat</string>
      <string>-e</string>
      <string>-c</string>
      <string>/Applications/Filebeat/filebeat.yml</string>
      <string>-d</string>
      <string>"publish"</string>
    </array>
    <key>KeepAlive</key>
    <true/>
  </dict>
</plist>
You’ll now be able to load and run the program with the following command:
sudo launchctl load /Library/LaunchDaemons/FilebeatDaemon.plist
You can verify that it’s running by either of the following commands:
ps -ef | grep filebeat
top
To Uninstall
Unload the FilebeatDaemon.plist file:
sudo launchctl unload /Library/LaunchDaemons/FilebeatDaemon.plist
Then remove the following file and folder either manually or by command line:
/Library/LaunchDaemons/FilebeatDaemon.plist
/Applications/Filebeat/
Extra Logstash Configs^ Back to Top
The names of the files don’t matter too much, but Logstash does process them in alphabetical order. So here are some other configs I have created, based on others’ work.
pfSense 2.2.x Config File^ Back to Top
These steps I modified from Elijah Paul’s Site.
First we will need a config file to filter out the pfSense 2.2.x Logs. Create and open 11-pfsense.conf:
sudo nano /etc/logstash/conf.d/11-pfsense.conf
This is what my 11-pfsense.conf file looks like:
UPDATED 06/10/2016: Added a new “match” line under the “if [prog]… grok {” section.
filter {
  if [host] =~ /192\.168\.0\.1/ {
    mutate {
      add_tag => ["PFSense", "Ready"]
      add_field => { "log_src" => "pfSenseVLAN1" }
    }
  }
  if [host] =~ /192\.168\.1\.1/ {
    mutate {
      add_tag => ["PFSense", "Ready"]
      add_field => { "log_src" => "pfSenseVLAN2" }
    }
  }
  if [host] =~ /192\.168\.2\.1/ {
    mutate {
      add_tag => ["PFSense", "Ready"]
      add_field => { "log_src" => "pfSenseVLAN3" }
    }
  }
  if [host] =~ /192\.168\.3\.1/ {
    mutate {
      add_tag => ["PFSense", "Ready"]
      add_field => { "log_src" => "pfSenseVLAN4" }
    }
  }
  if "PFSense" in [tags] {
    grok {
      add_tag => [ "firewall" ]
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
      overwrite => [ "message" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
  }
  if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    grok {
      patterns_dir => [ "./patterns" ]
      match => [ "message", "%{LOG_DATA}%{IP_SPECIFIC_DATA}%{IP_DATA}%{PROTOCOL_DATA}" ]
      match => [ "message", "%{LOG_DATA}%{IP_SPECIFIC_DATA}%{IP_DATA}" ]
    }
    mutate {
      lowercase => [ 'proto' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
    }
  }
  if [iface] == "em0" {
    mutate {
      replace => { "iface" => "WAN" }
    }
  } else if [iface] == "em1" {
    mutate {
      replace => { "iface" => "LAN" }
    }
  }
}
NOTE:
Be sure to replace the IPs, like /192\.168\.0\.100/, in the same format. So if your IP was 192.168.10.105, you would record it as /192\.168\.10\.105/ in this file. Also, you can change the field name log_src to whatever you would like, as well as the name you want to give each pfSense firewall (I named one of mine pfSenseVLAN1). Just be aware that if you use dashes, like “pfSense-VLAN1”, it may separate search results for that field into pfSense and VLAN1. Also, at the end of this file your interfaces (like my em0 and em1) may vary, so you can replace those with the correct names if necessary, as well as rename them where I have WAN and LAN.
pfSense 2.2 Pattern File^ Back to Top
Now we’ll need a pattern file to accompany this. Create a patterns directory for Logstash:
sudo mkdir /opt/logstash/patterns
Create and open pfsense2-2.grok:
sudo nano /opt/logstash/patterns/pfsense2-2.grok
This is what my pfsense2-2.grok file looks like. I edited it from the original (from elijahpaul on GitHub), because the logs were not matching to pick up the tcp_flags and IPv6 data:
# GROK match pattern for logstash.conf filter: %{LOG_DATA}%{IP_SPECIFIC_DATA}%{IP_DATA}%{PROTOCOL_DATA}
# GROK Custom Patterns (add to patterns directory and reference in GROK filter for pfSense events):
# GROK Patterns for pfSense 2.2 Logging Format
#
# Created 27 Jan 2015 by J. Pisano (Handles TCP, UDP, and ICMP log entries)
# Edited 14 Feb 2015 by E. Paul
# Edited 15 March 2016 by StealthShark
#
# Usage: Use with following GROK match pattern
#
# %{LOG_DATA}%{IP_SPECIFIC_DATA}%{IP_DATA}%{PROTOCOL_DATA}
LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
IP_SPECIFIC_DATA (%{IPv4_SPECIFIC_DATA}|%{IPv4_SPECIFIC_DATA_v2}|%{IPv6_SPECIFIC_DATA})
IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
IPv4_SPECIFIC_DATA_v2 (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{BASE16NUM:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
PROTOCOL_DATA (%{TCP_DATA}|%{UDP_DATA}|%{ICMP_DATA}|%{IPV6_IN_IPV4_DATA})
TCP_DATA (%{TCP_HEADER}%{TCP_FOOTER})
TCP_HEADER (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),
TCP_FOOTER (%{TCP_FOOTER1}|%{TCP_FOOTER2}|%{TCP_FOOTER3}|%{TCP_FOOTER4})
TCP_FOOTER1 (%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
TCP_FOOTER2 (%{INT:sequence_number}),,(%{INT:tcp_window}),,(%{GREEDYDATA:tcp_options})
TCP_FOOTER3 ,(%{INT:ack_number}),(%{INT:tcp_window}),,
TCP_FOOTER4 (%{HOSTPORT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),,
UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
ICMP_DATA (%{ICMP_TYPE}%{ICMP_RESPONSE})
IPV6_IN_IPV4_DATA (%{MONGO_WORDDASH:tcp_flags}),
ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
ICMP_RESPONSE (%{ICMP_ECHO_REQ_REPLY}|%{ICMP_UNREACHPORT}|%{ICMP_UNREACHPROTO}|%{ICMP_UNREACHABLE}|%{ICMP_NEED_FLAG}|%{ICMP_TSTAMP}|%{ICMP_TSTAMP_REPLY})
ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
ICMP_UNREACHABLE (%{GREEDYDATA:icmp_unreachable})
ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})
NOTE:
Be sure there is a new line (carriage return) after each entry, otherwise it may not recognize the definitions.
After every config change, you will want to restart Logstash:
sudo service logstash restart
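A quick way to tell whether the new pattern is actually matching is to watch the Logstash output for grok failures; if the pattern file is formatted correctly, pfSense entries should stop showing up tagged with _grokparsefailure:
sudo tail -f /var/log/logstash/logstash.stdout | grep -i grokparsefailure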
Linux Pattern File^ Back to Top
Create a patterns directory for Logstash if you don’t already have one:
sudo mkdir /opt/logstash/patterns
Create and open linux.grok:
sudo nano /opt/logstash/patterns/linux.grok
This is what my linux.grok file looks like:
# GROK Patterns for Linux Logging Format
#
# Created 15 March 2016 by StealthShark
#
# Usage: Use with following GROK match pattern
#
# %{LINUXINFO}
LINUXINFO %{SYSLOGTIMESTAMP:date} %{HOSTNAME:system} %{SYSLOG5424PRINTASCII:log_source} %{GREEDYDATA:message_info}
Restart Logstash:
sudo service logstash restart
Troubleshooting^ Back to Top
Logstash
Starting/Stopping/Restarting
sudo service logstash start
sudo service logstash stop
sudo service logstash restart
Test Config File
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/13-windows.conf
View info sent to Logstash live
sudo tail -f /var/log/logstash/logstash.stdout
You can also filter it to look for a specific keyword; for example, here I looked for any line containing the word error:
sudo tail -f /var/log/logstash/logstash.stdout | grep -i error
View all indices
curl 'localhost:9200/_cat/indices?v'
Delete indices
curl -XDELETE 'localhost:9200/logstash-*'
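The wildcard above wipes everything, so if you only need to remove one bad day you can target a single index instead (the index name here is just an example from my naming scheme; check the indices list first):
curl -XDELETE 'localhost:9200/logstash-pflogs-2016.03.01'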
Elasticsearch^ Back to Top
Marvel
One product that is handy for looking at what’s going on with your Elasticsearch nodes/indices/clusters is Marvel. It gives you statistics on your data cluster. It ties into Elasticsearch and can be viewed through your Kibana site. You can find installation instructions in the Marvel Documentation. Best of all, it’s free for basic use; you just need to register to get a basic license. Here’s a run-through of the install.
On your Syslog Server, navigate to your Elasticsearch folder and install the license and plugin.
sudo /usr/share/elasticsearch/bin/plugin install license
sudo /usr/share/elasticsearch/bin/plugin install marvel-agent
Now we’ll install the Marvel app in Kibana:
sudo /opt/kibana/bin/kibana plugin --install elasticsearch/marvel/latest
Restart Elasticsearch and Kibana
sudo service elasticsearch restart
sudo service kibana restart
Now when you open up Kibana, you’ll see an icon to the right of the other tabs that will allow you to switch between Marvel and Kibana. If you have registered for a basic license, I would recommend using FTP to upload it to your home directory. Then, once you navigate to your home directory on the Syslog Server, run the following command:
curl -XPUT 'http://localhost:9200/_license?acknowledge=true' -d @name-of-your-license.json
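To confirm the license was accepted, you should be able to read it back from the license API that the plugin provides; the output includes the license type and expiry date:
curl 'http://localhost:9200/_license?pretty'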
Credits/Resources^ Back to Top
- Most of my install commands are based on code from Mitchell Anicas’ site
- Although it’s not always specific, you can find a lot of documentation for Elasticsearch Products at https://www.elastic.co/products
Hi.
When using the remote syslog setting of pfSense, it’s not like the transferred logs are encrypted in any way, is it?
Encryption is not my expertise, which is why I did not include it in the tutorial, but I believe you are correct that they are not encrypted. Yet the Beats service can be encrypted by modifying your 02-beats-input.conf on the server and the winlogbeat.yml/filebeat.yml file on the client to add a certificate as they show in the documentation (https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html#configuring-lsf).
There is a 3rd-party package for pfSense that may be able to encrypt, yet the service to do that appears to be a paid service. I’ve found one instance (https://discuss.elastic.co/t/filebeat-on-freebsd-pfsense/38278/27) where people tried to install Filebeat on pfSense directly, and there may be more out there as well. If you get it to work, you should be able to add the certificates and have encrypted pfSense logging.
Hi again.
Late reply, I know. Wasn’t notified of you replying to my comment. The encryption part is a mess, but that aside, I’m trying to use your filter and grok with pfSense, both 2.2 and 2.3.
Something along these lines is what the incoming log looks like with pfSense 2.2:
http://i.imgur.com/2W5WdmI.png
with 2.3
http://i.imgur.com/bjsnqRJ.png
Since your grok and filter are made for 2.2, it seems to work better… 2.3 doesn’t seem to be parsed at all. Any idea? Both also get the “_grokparsefailure” thing in tags.
I’ve just upgraded to 2.3.x recently, since the release of 2.3.1, so I wasn’t able to test it out just yet. The problem I have run into before with grok files is that they are VERY picky about spaces and new lines. So open up your file and make sure it looks exactly like mine. When I copied it over originally, it put it all on a single line and it failed. To make editing easier, I connected with FileZilla and used Notepad++ to edit it.
Yours may also be a unique log format I haven’t run into yet. If you paste that string (you can just make up numbers for security, but use the exact same format), I can give you a better idea of where it went wrong. A good resource for building your own grok patterns is here:
http://grokconstructor.appspot.com/do/construction
Just paste your string into the top line and hit “Go!”. It will allow you to choose the type. You can use my file as a comparison to step through the logic. Yet my first guess is still a formatting issue in the grok file.
I think I’m seeing the same thing as anonymite.
Have you updated any of your files since upgrading to 2.3.1?
I’m currently running 2.3.1-RELEASE-p1 and it seems to be working fine for me. Yet looking closer at my postings, it seems I did leave out a line in a config file after posting an update to the grok file. I have updated it on the site. At the bottom of the 11-pfsense.conf file there is a section for “if [prog]”; if you look under the “grok {” section, you will see the additional “match” line. Let me know if that works.
Hi!
I don’t understand what is wrong: pfSense input data is OK, but the Windows Event Log data is not.
“sudo tcpdump -vv -n dst port 5140” returns
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:08:47.009187 IP (tos 0x0, ttl 64, id 16176, offset 0, flags [none], proto UDP (17), length 175)
10.0.0.1.514 > 10.0.10.20.5140: [udp sum ok] SYSLOG, length: 147
Facility local0 (16), Severity info (6)
Msg: Jun 9 21:03:57 filterlog: 7,16777216,,1000000105,em0,match,block,in,6,0x00,0x43d5b,1,UDP,17,60,fe80::6a5b:35ff:febf:43cc,ff02::1:2,546,547,60
0x0000: 3c31 3334 3e4a 756e 2020 3920 3231 3a30
0x0010: 333a 3537 2066 696c 7465 726c 6f67 3a20
0x0020: 372c 3136 3737 3732 3136 2c2c 3130 3030
0x0030: 3030 3031 3035 2c65 6d30 2c6d 6174 6368
0x0040: 2c62 6c6f 636b 2c69 6e2c 362c 3078 3030
0x0050: 2c30 7834 3364 3562 2c31 2c55 4450 2c31
0x0060: 372c 3630 2c66 6538 303a 3a36 6135 623a
0x0070: 3335 6666 3a66 6562 663a 3433 6363 2c66
0x0080: 6630 323a 3a31 3a32 2c35 3436 2c35 3437
0x0090: 2c36 30
OK! But “sudo tcpdump -vv -n dst port 5044” returns
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:10:57.744405 IP (tos 0x0, ttl 128, id 14608, offset 0, flags [DF], proto TCP (6), length 52)
10.0.10.6.52536 > 10.0.10.20.5044: Flags [S], cksum 0x5157 (correct), seq 2511101702, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:10:57.744903 IP (tos 0x0, ttl 128, id 14609, offset 0, flags [DF], proto TCP (6), length 40)
Only “logstash-pflogs-2016.06.09” was created. “logstash-winlogs-2016.06.09” was not.
Do you know what is wrong? Thank you!
I don’t think there is anything wrong; that’s similar to the input I get when I run that command. It looks like you’re reading the TCP input, yet I believe it has its own Beats protocol. Does it look fine when you read it through the Kibana interface? If you want to see everything your Logstash is receiving, run the following command:
sudo tail -f /var/log/logstash/logstash.stdout
Or you can narrow it down to a line with a keyword like “message”:
sudo tail -f /var/log/logstash/logstash.stdout | grep --line-buffered -i message
It will make it a lot easier if you stop the pfSense firewall(s) from sending logs to it. The easiest way to do that and save the settings is to just uncheck the boxes like “Everything” in your pfSense log settings, while keeping the “Send log messages…” box checked. Let me know if that works.
Thank you again for your support!
I think I figured out my issue:
INFO Error publishing events (retrying): EOF [winlogbeat.exe]
I updated to Logstash v2.3.2 and this problem was solved!
a few minor things I noticed.. a few typos..
— search for Elastisearch and Elastcisearch
— for the Linux patterns.. you mention “Create and open linux.grok” but then list the example of your file as “This is what my pfsense2-2.grok file looks like”
Now for my issues:
I’ve installed ELK on my NAS using a Docker container so from your steps, I just added the files
02-beats-input.conf
11-pfsense.conf
15-tagging.conf
30-elasticsearch-output.conf
/opt/logstash/patterns/pfsense2-2.grok
/opt/logstash/patterns/linux.grok
— in 02-beats-input.conf I’ve included an http section to retrieve my SmartThings logs.
I see both SmartThings AND pfSense logs in /var/log/logstash/logstash.stdout
The pfSense logs have the Geo location so everything seems to be working.
However, a Kibana discover only shows the SmartThings events.
Any ideas?
thanks,
tom
I think I figured out my issue… the pfSense logs are coming in at the wrong time.
Here’s the pfSense timestamp in the logstash.stdout from an alert at 3:56 PM – EST
“@timestamp” => “2016-06-11T15:56:44.000Z”,
The SmartThings logs have the correct times.
The timezone is correct on my ELK machine
Sat Jun 11 15:57:49 EDT 2016
Okay, glad you were able to figure it out. Also, thanks for the heads up on the typos. I’ve corrected them.