Post by ZF on Sept 11, 2015 3:27:56 GMT -5
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list
sudo apt-get update
sudo apt-get -y install elasticsearch=1.4.4
sudo vi /etc/elasticsearch/elasticsearch.yml
Uncomment the network.host line and set it to localhost:
network.host: localhost
sudo service elasticsearch restart
//Autostart elasticsearch on boot
sudo update-rc.d elasticsearch defaults 95 10
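//To verify that Elasticsearch came up, query it on its default HTTP port (9200); it should answer with a small JSON document that includes the version number:
curl -X GET 'http://localhost:9200'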
cd /opt
sudo wget http://download.elasticsearch.org/kibana/kibana/kibana-4.0.1-linux-x64.tar.gz
sudo tar -xvzf kibana-*.tar.gz
sudo vi kibana-4*/config/kibana.yml
In the Kibana configuration file, find the line that specifies host, and replace the IP address ("0.0.0.0" by default) with "localhost":
host: "localhost"
sudo ln -s /opt/kibana-4.0.1-linux-x64 /opt/kibana
cd /etc/init.d && sudo wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start
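//To confirm Kibana is listening, request its default port (5601) directly; any HTTP response (even a redirect) means the service is up:
curl -I http://localhost:5601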
sudo apt-get install nginx apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaking
Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
//Back up the default server block before replacing it:
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.backup
sudo vi /etc/nginx/sites-available/default
//Delete the file's contents, and paste the following code block into the file. Be sure to update the server_name to match your server's name:
server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601.
Nginx will also use the htpasswd.users file that we created earlier and require basic authentication.
Now restart Nginx to put our changes into effect:
sudo service nginx restart
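//A quick check of the proxy and basic auth (your_password is a placeholder for the password you set for kibanaking): the first request should come back 401 Unauthorized, the second 200 OK:
curl -I http://localhost/
curl -I -u kibanaking:your_password http://localhost/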
echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
sudo apt-get update
sudo apt-get install logstash
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
//Back up openssl.cnf before editing it:
sudo cp /etc/ssl/openssl.cnf /etc/ssl/openssl.cnf.backup
sudo gedit /etc/ssl/openssl.cnf
Find the [ v3_ca ] section in the file, and add this line under it (substituting in the Logstash Server's private IP address):
openssl.cnf excerpt (updated)
subjectAltName = IP: logstash_server_private_ip
cd /etc/pki/tls/
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
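//To confirm the certificate carries the subjectAltName you added, inspect it; the grep is just one way to locate the extension in the output:
openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A 1 'Subject Alternative Name'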
//This specifies a lumberjack input that will listen on tcp port 5000, and it will use the SSL certificate and private key that we created earlier.
sudo gedit /etc/logstash/conf.d/01-lumberjack-input.conf
Insert the following input configuration:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and quit.
//This filter looks for logs that are labeled as "syslog" type (by a Logstash Forwarder), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
sudo gedit /etc/logstash/conf.d/10-syslog.conf
Insert the following syslog filter configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Save and quit.
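To see what the grok pattern does, consider a raw syslog line like the following (an illustrative sample, not output from a real server):
Sep 11 15:04:01 webserver sshd[1234]: Failed password for invalid user admin
The filter would break it into fields roughly like syslog_timestamp = "Sep 11 15:04:01", syslog_hostname = "webserver", syslog_program = "sshd", syslog_pid = "1234", and syslog_message = "Failed password for invalid user admin", plus the received_at and received_from fields added above.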
//This output configures Logstash to store the logs in Elasticsearch.
//With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).
sudo gedit /etc/logstash/conf.d/30-lumberjack-output.conf
Insert the following output configuration:
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Save and exit.
If you want to add filters for other applications that use the Logstash Forwarder input, be sure to name the files so they sort between the input and the output configuration (i.e. between 01- and 30-).
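//Before restarting, you can sanity-check the combined configuration. This assumes the Debian package installed Logstash under /opt/logstash; the --configtest flag is available in Logstash 1.x:
sudo /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/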
sudo service logstash restart
//Set Up Logstash Forwarder (Add Client Servers)
Do these steps on each Ubuntu or Debian server whose logs you want to send to your Logstash Server.
For instructions on installing Logstash Forwarder on Red Hat-based Linux distributions (e.g. RHEL, CentOS, etc.),
refer to the Build and Package Logstash Forwarder section of the CentOS variation of this tutorial.
Copy SSL Certificate and Logstash Forwarder Package
On the Logstash Server, copy the SSL certificate to the Client Server (substitute in the client server's address and your own login):
scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_private_address:/tmp
After providing your login's credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the Logstash server.
Install Logstash Forwarder Package
On the Client Server, create the Logstash Forwarder source list:
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list
The Logstash Forwarder repository uses the same GPG key as Elasticsearch, which can be installed with this command:
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
Then install the Logstash Forwarder package:
sudo apt-get update
sudo apt-get install logstash-forwarder
Note: If you are using a 32-bit release of Ubuntu, and are getting an "Unable to locate package logstash-forwarder" error, you will need to install Logstash Forwarder manually.
Now copy the Logstash server's SSL certificate into the appropriate location (/etc/pki/tls/certs):
sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
Configure Logstash Forwarder
On the Client Server, create and edit the Logstash Forwarder configuration file, which is in JSON format:
sudo vi /etc/logstash-forwarder.conf
Under the network section, add the following lines into the file, substituting in your Logstash Server's private address for logstash_server_private_address:
logstash-forwarder.conf excerpt 1 of 2
"servers": [ "logstash_server_private_address:5000" ],
"timeout": 15,
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
Under the files section (between the square brackets), add the following lines:
logstash-forwarder.conf excerpt 2 of 2
{
  "paths": [
    "/var/log/syslog",
    "/var/log/auth.log"
  ],
  "fields": { "type": "syslog" }
}
Save and quit. This configures Logstash Forwarder to connect to your Logstash Server on port 5000 (the port that we specified an input for earlier), and uses the SSL certificate that we created earlier. The paths section specifies which log files to send (here we specify syslog and auth.log), and the fields section specifies that these logs are of type "syslog" (which is the type that our filter is looking for).
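For reference, the two excerpts assembled into the complete /etc/logstash-forwarder.conf should look roughly like this (a sketch; substitute your Logstash Server's private address as before):
{
  "network": {
    "servers": [ "logstash_server_private_address:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}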
Note that this is where you would add more files/types to configure Logstash Forwarder to send other log files to Logstash on port 5000.
Now restart Logstash Forwarder to put our changes into place:
sudo service logstash-forwarder restart
Now Logstash Forwarder is sending syslog and auth.log to your Logstash Server! Repeat this section for all of the other servers that you wish to gather logs for.
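On the Logstash Server, you can confirm that events are arriving by listing the Elasticsearch indices; a logstash-YYYY.MM.DD index should show up shortly after the forwarder connects:
curl 'http://localhost:9200/_cat/indices?v'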
//Installing RapidMiner to model data
Download RapidMiner from SourceForge:
sourceforge.net/projects/rapidminer/files/?source=navbar
cd /opt/rapidminer/
java -jar lib/rapidminer.jar
A pop-up leading to the RapidMiner Marketplace should appear.
Download the Anomaly Detection Extension
//Installing PostgreSQL
Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository:
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
Import the repository signing key, and update the package lists:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.4
sudo apt-get install postgresql-server-dev-9.4
sudo apt-get install python-pip python-dev
sudo pip install simplejson xmltodict psycopg2 python-evtx python-registry
(hashlib is part of the Python standard library, so it does not need to be installed with pip.)
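//To confirm the PostgreSQL server is running, query its version as the postgres superuser:
sudo -u postgres psql -c 'SELECT version();'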
SOURCE: www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04