Thursday 26 November 2015

Setting up a collectd-based monitoring system (server with multiple agents) and sending system metrics to Graphite

There are many tutorials on setting up collectd as an agent on a machine. However, I have not found many tutorials that describe how to set up a centralized collectd server with multiple collectd agents. This setup aggregates system metrics from multiple clients and sends them to Graphite.
This is my attempt to note down the steps that I used for setting up a centralized collectd metric collection system.

Download the latest version of collectd.tar.gz (collectd-5.5 or later). Collectd, by default, contains many plugins, e.g. cpu, load, disk, graphite (write_graphite), redis, mysql etc. As a result, it is possible to capture almost all the system metrics.

Basic installation of collectd involves the usual ./configure and make install steps, for the collectd server as well as the collectd agent:

# tar -xvf collectd-version.tar.gz
# cd collectd-version
# ./configure --disable-turbostat # if SL < 6.6 or CentOS < 6.6
# make
# make install

Copy the collectd init script to /etc/init.d:
# cp ./contrib/redhat/init.d-collectd /etc/init.d/collectd
# chmod +x /etc/init.d/collectd
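If you want collectd to start at boot, you can register the init script with chkconfig (a small addition on my part; this assumes the Red Hat init script copied above carries the usual chkconfig header):

# chkconfig --add collectd
# chkconfig collectd on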

Make soft links for the binaries in /opt/collectd/sbin:
# ln -s /opt/collectd/sbin/collectdmon /usr/bin/collectdmon
# ln -s /opt/collectd/sbin/collectd /usr/bin/collectd

If you are sending collectd metrics to Graphite, please make sure that Graphite is installed and running prior to compiling collectd. In this use case, it is presumed that Graphite and the collectd server are on the same machine.

Setting up collectd server

Once collectd is installed as described above, modify the /opt/collectd/etc/collectd.conf file to contain the following:

Hostname "hostname"
FQDNLookup true
BaseDir "/opt/collectd/var/lib/collectd"
PIDFile "/opt/collectd/var/run/collectd.pid"
PluginDir "/opt/collectd/lib/collectd"
TypesDB "/opt/collectd/share/collectd/types.db"
Interval 10

LoadPlugin logfile
<Plugin logfile>
  LogLevel info
  File "/var/log/collectd.log"
  Timestamp true
  PrintSeverity true
</Plugin>

LoadPlugin network
<Plugin network>
  Listen "*" "25826"
</Plugin>

LoadPlugin interface
<Plugin interface>
  Interface "eth0"
</Plugin>

LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "graphing">
    Host "localhost"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    Prefix "collectd."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>

Make adjustments for your network as needed.

Run collectd using:
# service collectd start

Some useful information needed while setting up:

Default directories for collectd:

Plugins: /opt/collectd/lib/collectd
Binaries: /opt/collectd/sbin/collectd
Configuration file: /opt/collectd/etc/collectd.conf

Run collectd like this:
/opt/collectd/sbin/collectd -C /opt/collectd/etc/collectd.conf

Test Collectd configuration:
#collectd -t

Test Collectd plugin configuration:
#collectd -T

Check netstat output:
# netstat -naptul | grep "25826"

Setting up Collectd agent

Now, we are going to install the collectd agent on the client machine and then tell it to send the metrics to the collectd server (not Graphite). The collectd clients do not need the "write_graphite" plugin and can use the older collectd rpms that are available in the CentOS/SL repositories. So, on each client, run:

# yum install collectd collectd-utils

Modify /etc/collectd.conf config file as per your requirement:

Hostname "hostname"
FQDNLookup true
BaseDir "/var/lib/collectd"
PIDFile "/var/run/collectd.pid"
PluginDir "/usr/lib/collectd"
TypesDB "/usr/share/collectd/types.db"
Interval 10
#Timeout 5
ReadThreads 5

LoadPlugin logfile
<Plugin logfile>
  LogLevel info
  File "/var/log/collectd.log"
  Timestamp true
  PrintSeverity true
</Plugin>

LoadPlugin network
<Plugin network>
Server "collectd-server.domain.com" "25826"
</Plugin>

LoadPlugin cpu
LoadPlugin load
LoadPlugin disk
LoadPlugin memory
LoadPlugin processes

Include "/etc/collectd/filters.conf"
Include "/etc/collectd/thresholds.conf"

Be sure to configure the network plugin with your collectd server information.

With this configuration, client metrics are sent to the collectd server on port 25826, which then forwards them to Graphite. If you want to spice up the web front-end, you can use Grafana to show the trend of system metrics.
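To quickly verify that client metrics are actually reaching the collectd server, you can watch for traffic on the listen port (the network plugin uses UDP by default; the interface name here is an assumption):

# tcpdump -c 5 -ni eth0 udp port 25826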

Enabling python plugin

If you wish to enable python and iptables plugin support, please do the following:

# yum install python-devel
# yum install iptables-devel

Now, re-compile the collectd source package for these modules:
# cd collectd-version
# ./configure --enable-python --enable-iptables
# make
# make install

This process is the same for any additional plugins that you may wish to add, e.g. mysql, postgres etc.

Please check the Modules section of the configure output carefully to confirm plugin support while configuring the source package.
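For reference, a minimal sketch of the collectd.conf snippet for loading a custom python module once python support is compiled in (the module name "my_metrics" and the module path are hypothetical):

<LoadPlugin python>
  Globals true
</LoadPlugin>

<Plugin python>
  ModulePath "/opt/collectd_plugins"
  Import "my_metrics"
</Plugin>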


Some useful links that I encountered during setup:
  • https://collectd.org/wiki/index.php
  • https://collectd.org/wiki/index.php/Match:Hashed/Config
  • http://blog.matthewdfuller.com/2014/06/sending-collectd-metrics-to-graphite.html
  • https://keptenkurk.wordpress.com/2015/08/28/using-collectd-with-multiple-networked-instances/
  • http://giovannitorres.me/enabling-almost-all-collectd-plugins-on-centos-6.html




Monday 19 October 2015

Tune your CentOS 6.x system using tuned

tuned, a system performance tuning tool, comes with a set of system tuning profiles for different scenarios (listed below). Each profile implements different tunables for different system resources such as CPU, network, and ATA disks.

tuned normally runs as a daemon and allows dynamic modification of system settings depending on usage.

Basically you do:
# yum install tuned

#tuned-adm list

Available profiles:
- laptop-ac-powersave
- server-powersave
- laptop-battery-powersave
- desktop-powersave
- virtual-host
- virtual-guest
- enterprise-storage
- throughput-performance
- latency-performance
- spindown-disk
- default

# tuned-adm profile latency-performance

# to turn off:
#tuned-adm off
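# to check the currently active profile (works with the tuned version shipped in CentOS 6, as far as I know):
#tuned-adm active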


So, by running tuned with a suitable profile, your system stays tuned for its workload.

Friday 16 October 2015

Installation of flash player on Mozilla firefox in Ubuntu

Before you begin, first check whether Flash is already installed in your system. Visit below official Adobe flash tester page.

Test Your Flash Plugin (https://www.adobe.com/software/flash/about)

On this page, if you see a flash animation and a box mentioning “Version Information” for flash, then it is enabled in Mozilla on your system.

Alternatively, you can also visit about:plugins in Mozilla and check for flash plugin entry.

Now, let's update it, as many vulnerabilities have been discovered in Adobe Flash in the past year. Offline installation is also useful if you are on an internal network. Here are the steps for manual installation:

1) Download the tar.gz archive from https://get.adobe.com/flashplayer
2) open it with the Archive Manager.
3) Unpack and copy libflashplayer.so to the plugins directory of Firefox, e.g. /home/user/.mozilla/plugins.
If the plugins directory does not exist, create it and copy libflashplayer.so into it.
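For reference, the same steps from a terminal (a rough sketch - the exact tarball name varies with the Flash version you downloaded):

$ tar -xzf flash_player_version_linux.tar.gz libflashplayer.so
$ mkdir -p ~/.mozilla/plugins
$ cp libflashplayer.so ~/.mozilla/plugins/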

If "libflashplayer.so" exists in "/usr/lib/adobe-flashplugin/" directory, copy the latest "libflashplayer.so" file to this directory also as it is usually listed first in the path and is picked up by the browser.

Now restart Firefox. That's it!

Wednesday 14 October 2015

Installing Bro with PF_ring on CentOS 6.x

Bro is an amazing network traffic analysis system. Unfortunately, it is not as popular in information security as Snort and frankly, I don't know why!!!

I wanted to install Bro with PF_RING to load balance the traffic on a 10G link. Although the Bro manual details the steps, there are some missing links that took some of my time during installation. So, here are my notes:

Enable/Install EPEL repository
========================
#wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6.8.noarch.rpm
#rpm -ivh epel-release-6.8.noarch.rpm

Upgrade cmake
==============
The cmake rpm available as a part of the CentOS 6/Scientific Linux 6 repository is old - cmake-2.6.4-5.el6.x86_64.

Bro requires cmake version 2.8.1 or later - e.g. cmake-2.8.11.2-1.el6.x86_64.
This rpm is available as a part of the EPEL repository.

Remove existing cmake (ver- 2.6.4)
#yum remove cmake

Install cmake-2.8
#yum install cmake28

Now, make some symbolic links:
#ln -s /usr/bin/cmake28 /usr/bin/cmake
#ln -s /usr/bin/ccmake28 /usr/bin/ccmake
#ln -s /usr/bin/cpack28 /usr/bin/cpack
#ln -s /usr/bin/ctest28 /usr/bin/ctest
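Verify that the symlinked binary reports the expected version:

#cmake --version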

Note:

Don't blindly install cmake28 version from EPEL repository like:
#yum install cmake28

This rpm is just a wrapper and has a dependency on cmake26.

So, you should install the cmake28-2.8.11.2-1.el6.x86_64 package and not cmake28-2.8.12.2-2.el6.x86_64.


ipsumdump installation
======================
#wget http://www.read.seas.harvard.edu/~kohler/ipsumdump/ipsumdump-1.85.tar.gz
#tar -zxvf ipsumdump-1.85.tar.gz
#cd ipsumdump-1.85
#./configure
#make && make install

Install Bro IDS dependency packages from the Linux repository
===================================
#yum install kernel-devel kernel-headers -y
#yum install make autoconf automake gcc gcc-c++ flex bison libpcap libpcap-devel -y
#yum install openssl openssl-devel python-devel swig zlib zlib-devel -y
#yum install openssl-libs bind-libs -y
#yum install gawk -y
#yum install pcre-devel -y
#yum install libtool -y 
#yum install numactl numactl-devel -y
#yum install gperftools-libs gperftools-devel -y
#yum install GeoIP GeoIP-devel -y
#yum install jemalloc jemalloc-devel -y
#yum install curl -y
#yum install libcurl-devel -y

Set LD flags for python 2.7.10 compilation:

#export LDFLAGS=-L/usr/local/lib
#export CFLAGS=-I/usr/local/include
#export CPPFLAGS=-I/usr/local/include
#export LD_LIBRARY_PATH=/usr/local/lib

Python-2.7.10 installation
==========================
CentOS comes with python 2.6 by default. Bro requires at least python 2.7 for the Broccoli component.
Please do not try to remove the existing python version, as that would remove many python-dependent packages - e.g. yum requires the python 2.6 that comes by default with the SL/CentOS distribution.

So, install python 2.7.x in addition to the existing python 2.6.6.

#wget http://www.python.org/ftp/python/2.7.10/Python-2.7.10.tgz
#tar -zxvf Python-2.7.10.tgz
#cd Python-2.7.10
#./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
#make
#make altinstall
#ln -s /usr/local/bin/python2.7 /usr/bin/python2.7

Add python to the system path:
#export PATH=$PATH:/usr/local/bin

If you face any compilation issues, please follow some good blog links that list python 2.7.10 installation instructions:

  • http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/
  • https://github.com/h2oai/h2o-2/wiki/Installing-python-2.7-on-centos-6.3.-Follow-this-sequence-exactly-for-centos-machine-only

Now, it is time to install the python package manager - pip - so that you can install python packages.
Download the file get-pip.py from https://bootstrap.pypa.io/get-pip.py:

# wget https://bootstrap.pypa.io/get-pip.py
#python2.7 get-pip.py

If you have a local PyPI repository, then:

#python2.7 get-pip.py --trusted-host=pypi-local-domain-hostname -i http://local-pypi-repo-url

Now pip will be installed under /usr/local/bin/pip2.7 

Create a symbolic link:
#ln -s /usr/local/bin/pip2.7 /usr/bin/pip2.7

In addition to this, you may be required to install (copy) the sqlite3 python bindings into python 2.7.

It is presumed that python 2.6 has been installed as a part of the default installation.

#cp /usr/lib64/python2.6/lib-dynload/_sqlite3.so /usr/local/lib/python2.7/sqlite3/
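You can quickly confirm that the copied bindings load under python 2.7:

#python2.7 -c "import sqlite3; print(sqlite3.sqlite_version)"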

Now, install pysubnettree python package:
#pip2.7 install pysubnettree 

On local PyPI:
#pip2.7 install pysubnettree --trusted-host=pypi-local-domain-hostname -i http://local-pypi-repo-url


Download, install and configure PF_RING
=========================================
Download pf_ring source from http://www.ntop.org/get-started/download/#PF_RING

Now, compile/install various libraries required for PF_RING:

#cd /usr/src
#tar -zxvf PF_RING-6.0.3.tar.gz
#cd PF_RING-6.0.3/userland/lib
#./configure --prefix=/opt/pfring
#make
#make install

#cd ../libpcap
#./configure --prefix=/opt/pfring
#make
#make install

#cd ../tcpdump-4.1.1
#./configure --prefix=/opt/pfring
#make
#make install

#cd ../../kernel

(During the kernel 'make' step, compile as a normal user rather than as root.)
#make
#make install

Note - Please make sure that your kernel-devel, kernel-headers and kernel rpms have the same major/minor versions. If not, you will encounter an error in the make step.
e.g.
# rpm -qa |grep -i kernel
kernel-headers-2.6.32-431.1.2.el6.x86_64
kernel-devel-2.6.32-431.1.2.el6.x86_64
kernel-2.6.32-431.1.2.el6.x86_64

Find out the kernel version and try to install the corresponding kernel-devel rpm from the CentOS/RHEL repository. Do not try to install kernel-devel blindly, as there may be a version mismatch between the kernel-devel and kernel rpms. If not taken care of, it will give you installation headaches!!

Load the pf_ring module:

#modprobe pf_ring enable_tx_capture=0 min_num_slots=32768

or

#insmod ./pf_ring.ko enable_tx_capture=0 transparent_mode=0 min_num_slots=32768
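To make the module load with these options at every boot, a sketch using the stock RHEL/CentOS 6 mechanisms (modprobe.d options file plus a sysconfig modules script; adjust the option values to taste):

#echo "options pf_ring enable_tx_capture=0 min_num_slots=32768" > /etc/modprobe.d/pf_ring.conf
#cat > /etc/sysconfig/modules/pf_ring.modules <<'EOF'
#!/bin/sh
/sbin/modprobe pf_ring
EOF
#chmod +x /etc/sysconfig/modules/pf_ring.modules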


Download, install and configure Bro
====================================
Download Bro from bro site - http://www.bro.org/download/index.html
#cd bro-2.4.1
#./configure --with-pcap=/opt/pfring --enable-debug --enable-perftools --enable-jemalloc
#make && make install
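To confirm that Bro was linked against the PF_RING-aware libpcap (assuming the default source-install prefix /usr/local/bro), check which libpcap the binary resolves to - it should point into /opt/pfring:

#ldd /usr/local/bro/bin/bro | grep -i pcap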

To check status of PF_ring
=========================
# modinfo pf_ring
# cat /proc/net/pf_ring/info
# lsmod |grep -i pf_ring

If you wish to blacklist the pf_ring module:
#echo "blacklist pf_ring" >> /etc/modprobe.d/blacklist.conf

Once this is done, please follow the Bro cluster setup instructions given at:
https://www.bro.org/sphinx/configuration/index.html

Some interesting links for Bro PF_ring installation

  •     http://ossectools.blogspot.in/2012/10/multi-node-bro-cluster-setup-howto.html
  •     https://thecomputersecurityblog.wordpress.com/2015/03/17/install-bro-on-centos-7-x6-x/
  •     http://mailman.icsi.berkeley.edu/pipermail/bro/2013-November/006269.html
  •     http://sickbits.net/configuring-a-network-monitoring-system-sensor-w-pf_ring-on-ubuntu-server-1-04-part-1-interface-configuration/
  •     https://sathisharthars.wordpress.com/2014/05/07/installing-and-configuring-bro-nids-in-centos-6/
  •     https://github.com/h2oai/h2o-2/wiki/Installing-python-2.7-on-centos-6.3.-Follow-this-sequence-exactly-for-centos-machine-only

Monday 14 September 2015

InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail.

While retrieving C&C servers for palevo bots, I encountered an error -

 "InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail....."

psj@psj-desktop:~/Developement/palevo$ python requests_palevo_domains.py
/usr/local/lib/python2.6/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
Traceback (most recent call last):
  File "requests_proxy_usage.py", line 9, in <module>
    r=requests.get('https://palevotracker.abuse.ch/blocklists.php?download=domainblocklist',proxies=proxy_dict)
  File "/usr/local/lib/python2.6/dist-packages/requests/api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/requests/sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.6/dist-packages/requests/sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/requests/adapters.py", line 431, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: [Errno 8] _ssl.c:480: EOF occurred in violation of protocol

There are multiple ways to overcome this issue:

1) Upgrade to python 2.7.9  as suggested in urllib3 documentation - https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning

2) By default, the python standard library’s ssl module is used. Unfortunately, it has several limitations which are addressed by PyOpenSSL:
  • (Python 2.x) SNI support.
  • (Python 2.x-3.2) Disabling compression to mitigate CRIME attack.
To use the Python OpenSSL bindings instead, you’ll need to install the required packages:
 
$ pip install --upgrade pyopenssl ndg-httpsclient pyasn1
 
or 
 
$ pip install requests[security] 

Thursday 10 September 2015

Keeping track of programs generating TCP/UDP traffic on Windows

While doing a routine security investigation, there was a requirement to track the program generating some TCP traffic. I made use of Sysinternals' TCPView to find the offending program.

Here are some other options, if you are interested.

1) TCPView - It is a Windows program that will show you detailed listings of all TCP and UDP endpoints on your system, including the local and remote addresses and state of TCP connections.

Download link - https://technet.microsoft.com/en-us/library/bb897437.aspx

2) Microsoft Network Monitor is another tool.

Download link - https://www.microsoft.com/en-us/download/details.aspx?id=4865

If you are interested in a commercial version, NetBalancer is a good utility.

Download link -
  1. Netbalancer - https://netbalancer.com/download
  2. Command line version - https://netbalancer.com/docs#command_line__nbcmd_exe_

If you have installed the Sysinternals utilities on your PC, there is a utility named Tcpvcon.
Tcpvcon usage is similar to that of the built-in Windows netstat utility.

Usage: tcpvcon [-a] [-c] [-n] [process name or PID]
-a     Show all endpoints (default is to show established TCP connections).
-c     Print output as CSV.
-n     Don't resolve addresses.

If you wish to stick to the old netstat utility, here is the way to find out the program ID making TCP/UDP connections (run it from an elevated prompt, since -b requires administrator rights):

c:\> netstat -nab

Tuesday 21 July 2015

Securing Windows/Linux machines using OVAL

Keeping Windows/Linux machines safe and secure is always a daunting task. The problem is further compounded if you have to deal with multiple Windows/Linux machines with a wide variety of versions (right from Windows XP-SP3 to Windows 8, and Linux variants like CentOS, Scientific Linux, RedHat etc). End-users are careless most of the time and it is the duty of system administrators to remind them that some of the settings are not OK from a security point of view. Sometimes, users tweak Windows settings for operational reasons. So, it's necessary to do timely system checks and take corrective actions. Since the manual process is always time consuming and error prone, I was looking for a free vulnerability compliance solution. Of course, there are a number of powerful commercial tools for enforcing Windows policy checks from McAfee, TripWire etc. and if you can afford them, go for them!!

Because of budget constraints, I decided to stick to a free tool - OVAL. Though the tool does not have a polished interface unlike its commercial counterparts, it does a decent job of finding the security state of a system. One big advantage is that Ovaldi is cross-platform and can run on both Windows as well as Linux. Running OVAL scans allowed me to automate the scans of many Windows and Linux systems, and it is possible to achieve consistency and accuracy across different machines.

For detailed information about OVAL, please visit - http://oval.mitre.org

OVAL is a language that describes checks to be made. These checks are usually conditional, i.e. whether a particular audit setting exists or not, or whether a particular component is installed or not. Further, they can be grouped with operators like AND, OR and NOT.

If you wish to download the complete OVAL database, please visit - http://oval.mitre.org/rep-data/index.html
The latest OVAL definitions are here - http://oval.mitre.org/repository/data/updates/latest

There are two open source OVAL interpreters available -
Ovaldi - http://sourceforge.net/projects/ovaldi/
open-scap - http://www.open-scap.org/page/Main_Page

Since open-scap is available only for Linux and its variants, and I wanted to investigate the security state of a Windows machine, I decided to try ovaldi.

Ovaldi installation

The download page for Ovaldi is the SourceForge link above. Note that it will take you to the latest version available at the time of writing, i.e. 5.10.1. So, if there is a newer version, make use of the latest. Don't forget to change all the references from version 5.10.1 to your version in the text that follows.

Choose the EXE version for Windows that suits your environment. In my case, it was the 32-bit version, but if you have a 64-bit version of Windows, download that one instead.

Unzip the file using 7-zip or WinZip and install the files to a directory - say, C:\Program Files\OVAL


Now, Ovaldi is installed!! Also, add the directory containing ovaldi.exe to the Windows PATH environment variable so that you do not have to type the full path again and again.

If you encounter the error - MSVCR100.dll is missing - i.e.

"The program can't start because MSVCR100.dll is missing from your computer. Try reinstalling the program to fix this problem."

fix it by downloading the file from the following URLs:

MSVCR100.dll = Visual C++ 2010 Runtime

32Bit: Microsoft Visual C++ 2010 SP1 Redistributable Package (x86)
http://www.microsoft.com/de-de/download/details.aspx?id=8328

64Bit: Microsoft Visual C++ 2010 SP1 Redistributable Package (x64)
http://www.microsoft.com/en-us/download/details.aspx?id=13523


Now, it is time to log-in to Windows machine as administrator and run ovaldi interpreter.


Download definition files

Now we have the interpreter and we need definitions for the interpreter to run. Please go to the page http://oval.mitre.org/rep-data/index.html. On the page, you will see the section "Downloads by Version and Namespace". You need to select the class to download based on the version of the OVAL interpreter you have. The following classes are available:

  •     compliance - checks that the installation is compliant with recommended security practices.
  •     inventory - checks that produce results of what is installed.
  •     miscellaneous - misc category
  •     patch - patching status
  •     vulnerability - tests that verify whether a vulnerability is present on the machine.

When you click on one of those classes, you are presented with a new page that gives you a list of available definitions grouped by different criteria. For example, by clicking on the vulnerability class (probably the largest one) you can select the download by platform, family or all.

For the purpose of testing OVAL on Windows 7, I downloaded the file microsoft.windows.7.xml through platform/vulnerabilities, and renamed it to microsoft.windows.7.vulnerability.xml so that I do not get confused at a later stage about what these tests contain!! Similarly, it is possible to download the equivalent files from the compliance and inventory classes, and you can name them microsoft.windows.compliance.xml and microsoft.windows.inventory.xml, respectively.

Running Ovaldi

To run, please enter following command, say:

c:\program files\oval\ovaldi-5.10.1\ovaldi -m -a "c:\program files\oval\ovaldi-5.10.1\xml" -o microsoft.windows.7.vulnerability.xml -r 20150721-result.xml -x 20150721-result.html -d 20150721-system-characteristics.xml

The above command will check for vulnerabilities present on the system. Of course, only the vulnerabilities defined in the database (microsoft.windows.7.vulnerability.xml) will be checked.

An explanation for the other options is given below:
  •     Option -m: don't check the md5 sum of the OVAL definitions file (in this case microsoft.windows.7.vulnerability.xml).
  •     Option -a specifies where all the auxiliary files necessary for the interpreter are. For example, the default style sheet file is there; the XML definitions and tests are also there. The default value of this option assumes that you are running ovaldi in its base directory (i.e. where it is installed), so it has to be specified in order for everything to work.
  •     Option -o specifies the OVAL definition file to use.
  •     Option -r specifies the XML result file. The default value is results.xml, and in the case of multiple runs the default file would be overwritten. Using this option prevents that from happening.
  •     Option -x specifies the HTML result file. This file is generated from the XML result file by applying a style sheet (XSL) file. The default file is used if none is specified on the command line.
  •     Option -d specifies the file in which the system characteristics will be saved, i.e. installed options, existing files, etc. gathered during the interpreter run of the OVAL definition file.

Once the ovaldi program has finished, there will be three new files in the directory. When you open the results file (20150721-result.html if you used the command given above), you'll see four sections named OVAL Results Generator Information, System Information, OVAL System Characteristics Generator Information and OVAL Definition Results.


Some links of interest related to openscap and ovaldi are given below:
  1. https://www.csiac.org/sites/default/files/vulnerability_assessment.pdf
  2. http://sgros.blogspot.in/2011/10/installing-and-testing-ovaldi-on.html
  3. http://www.vulnerabilityassessment.co.uk/ovaldi.htm

Presentations -
  1. http://nvd.nist.gov/scap/docs/conference%20presentations/workshops/OVAL%20Tutorial%201%20-%20Overview.pdf
  2. http://oval.mitre.org/community/docs/Developer_Days_2013_OVAL_Session_Minutes.pdf
  3. http://www.energy.gov/sites/prod/files/cioprod/documents/SCAP_in_Action_-_Demo_of_SCAP_Capabilities.pdf
  4. http://blog-shawndwells.rhcloud.com/wp-content/uploads/2012/07/2013-03-25-SCAP-Workshop-Coursebook.pdf
Other interesting projects related to openscap and ovaldi:
  1. MITRE course - http://benchmarkdevelopment.mitre.org/course/confirmation.html

  2. Centralized SCAP - http://blog.siphos.be/2013/09/creating-a-poor-man-central-scap-system/
     
  3. https://github.com/cyberxml/cyberxml-django





Wednesday 17 June 2015

Python packaging - Pip trusted host issues

For quite some time, I had not updated pip. Yesterday, when I updated pip using our local PyPI server to its latest version - 7.0.3 for python 2.7 - my simple attempt to install python packages failed, e.g. when trying to install the fuzzy module that deals with string similarities.

root@psj-desktop:~# pip install fuzzy -i http://osrepo.xxx.in/pypi/simple
Collecting fuzzy
  The repository located at osrepo.xxx.in is not a trusted or secure host and is being ignored. If this repository is available via HTTPS it is recommended to use HTTPS instead, otherwise you may silence this warning and allow it anyways with '--trusted-host osrepo.xxx.in'.
  Could not find a version that satisfies the requirement fuzzy (from versions: )
No matching distribution found for fuzzy

root@psj-desktop:~# pip install --pre fuzzy -i http://osrepo.xxx.in/pypi/simple
Collecting fuzzy
  The repository located at osrepo.xxx.in is not a trusted or secure host and is being ignored. If this repository is available via HTTPS it is recommended to use HTTPS instead, otherwise you may silence this warning and allow it anyways with '--trusted-host osrepo.xxx.in'.
  Could not find a version that satisfies the requirement fuzzy (from versions: )
No matching distribution found for fuzzy

Things became clear after reading the pip developer documentation - https://media.readthedocs.org/pdf/pip/develop/pip.pdf
Another place you should look is the changelog in the github repository - https://github.com/pypa/pip/

pip.conf can be created in a number of places:

On Unix the default configuration file is $HOME/.config/pip/pip.conf, or you can create it globally as /etc/pip.conf.
On Windows the configuration file is %APPDATA%\pip\pip.ini.

Sample /etc/pip.conf
---------------------------------
[global]
index-url = http://osrepo.xxx.in/pypi/simple
trusted-host = osrepo.xxx.in
disable-pip-version-check = true
allow-all-external=true
timeout = 120
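With this file in place, the original command works without any extra flags:

# pip install fuzzy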


Monday 15 June 2015

Installation of Autopsy third party modules

Autopsy Forensic Browser is a graphical interface to The Sleuth Kit and other digital investigation tools. Using both of them, you can analyze Windows and Linux disks and file systems (NTFS, FAT, UFS1/2, Ext2/3, etc.). I was going through all the features of Autopsy on my desktop to gain first-hand experience.

A number of Autopsy modules are available here - http://wiki.sleuthkit.org/index.php?title=Autopsy_3rd_Party_Modules

For my reference, the procedure for installing an Autopsy module is given below:
  • Navigate to the latest .nbm module file - e.g. https://github.com/williballenthin/Autopsy-WindowsRegistryIngestModule/tree/master/precompiled
  • Click on the .nbm file so that the View Raw text appears.
  • Right-click on the View Raw text and select Save Link As... to save the raw .nbm file.
  • Start Autopsy and close the Welcome screen.
  • From the menu, select Tools | Plugins.
  • Open Downloaded tab and click the Add Plugins button.
  • From the Add Plugins window, navigate to the downloaded .nbm module file and open it.
  • Click Install and follow the wizard.

Tuesday 12 May 2015

Elasticsearch error - Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]

After adding the new elasticsearch node, initially, I was struggling with the error:

WARN: org.elasticsearch.discovery: [logstash-id.xxx.in-25379-6424] waited for 30s and no initial state was set by the discovery

and I corrected the situation by adding iptables rules.

Thereafter, things appeared to be smooth, but I could not find the new elasticsearch index using the Elasticsearch head plugin even after a few minutes. So, I started searching through the debug logs and spotted an error:

log4j, [2015-05-12T11:00:02.003] DEBUG: org.elasticsearch.discovery.zen: [logstash-id.xxx.in-26328-4264] filtered ping responses: (filter_client[true], filter_data[false]) {none}
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
    at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
log4j, [2015-05-12T11:00:06.507] DEBUG: org.elasticsearch.discovery.zen: [logstash-id.xxx.in-26328-4264] filtered ping responses: (filter_client[true], filter_data[false]) {none}
^CInterrupt received. Shutting down the pipeline. {:level=>:warn, :file=>"logstash/agent.rb", :line=>"119"}

Since I was using the latest versions of logstash and elasticsearch, I was a bit puzzled, as all the google solutions (references) pointed to old versions of them:

[root@id admin]# /opt/logstash/bin/logstash --version
logstash 1.4.2-modified

[root@es2 ~]# rpm -qa|grep elastic -i
elasticsearch-1.4.4-1.noarch

Finally, after reading the good documentation of logstash, I decided to add the 'protocol' option and voila! - it worked!!

output {
        #stdout { codec => rubydebug }
        if [type] == "netflow" {
                elasticsearch {
                        cluster => "elk-cluster"
                        index => "netflow-%{+YYYY.MM.dd}"
                        host => "10.4.0.47"
                        protocol => "http"
                        workers => 2
                }
        }
}
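
Once logstash restarts cleanly, you can confirm that the new index shows up in elasticsearch (host and port taken from the config above; the _cat API is available in elasticsearch 1.x):

# curl 'http://10.4.0.47:9200/_cat/indices?v'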


Elasticsearch warning - WARN: org.elasticsearch.discovery: [...] waited for 30s and no initial state was set by the discovery

I added a new elasticsearch node and got puzzled by the error:

log4j, [2015-05-12T10:42:18.996]  WARN: org.elasticsearch.discovery: [logstash-id.xxx.in-25379-6424] waited for 30s and no initial state was set by the discovery
^C

Finally, it turned out that iptables was blocking access to the elasticsearch host. So, after adding the firewall rules, things worked flawlessly.

Allow elasticsearch in iptables(firewall) rules:

#Allow elasticsearch access
iptables -A INPUT -s 10.4.0.45 -i eth0 -p tcp --dport 9200 -j ACCEPT
iptables -A OUTPUT -d 10.4.0.45 -o eth0 -p tcp --dport 9200 -j ACCEPT
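To persist these rules across reboots (assuming the stock iptables service on CentOS):

# service iptables save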

Elasticsearch curator - delete commands

If you wish to delete some old elasticsearch indices, curator is the de-facto standard. The program provides many useful options and I appreciate its versatility. Although curator has a wiki and an elasticsearch page, things were not clear to me in one go. So, here are my notes on deleting elasticsearch indices:

[admin@mgr ~]$ curator --version
curator, version 3.0.0

[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.46 delete --help
Usage: curator delete [OPTIONS] COMMAND [ARGS]...

  Delete indices or snapshots

Options:
  --disk-space FLOAT  Delete indices beyond DISK_SPACE gigabytes.
  --reverse BOOLEAN   Only valid with --disk-space. Affects sort order of the
                      indices.  True means reverse-alphabetical (if dates are
                      involved, older is deleted first).  [default: True]
  --help              Show this message and exit.

Commands:
  indices    Index selection.
  snapshots  Snapshot selection.


[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.46 delete indices --help
Usage: curator delete indices [OPTIONS]

  Get a list of indices to act on from the provided arguments, then perform
  the command [alias, allocation, bloom, close, delete, etc.] on the
  resulting list.

Options:
  --newer-than INTEGER            Include only indices newer than n time_units
  --older-than INTEGER            Include only indices older than n time_units
  --prefix TEXT                   Include only indices beginning with prefix.
  --suffix TEXT                   Include only indices ending with suffix.
  --time-unit [hours|days|weeks|months]
                                  Unit of time to reckon by
  --timestring TEXT               Python strftime string to match your index
                                  definition, e.g. 2014.07.15 would be
                                  %Y.%m.%d
  --regex TEXT                    Provide your own regex, e.g
                                  '^prefix-.*-suffix$'
  --exclude TEXT                  Exclude matching indices. Can be invoked
                                  multiple times.
  --index TEXT                    Include the provided index in the list. Can
                                  be invoked multiple times.
  --all-indices                   Do not filter indices.  Act on all indices.
  --help                          Show this message and exit.

If you wish to use the dry-run feature in curator:

[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.46 --dry-run --debug delete indices --time-unit days --timestring "%Y.%m.%d" --older-than 7 --prefix logstash- --all-indices

You should always specify a logfile and loglevel (CRITICAL = 50, ERROR = 40, WARNING = 30, INFO = 20, DEBUG = 10, NOTSET = 0) to capture the curator program output for ease of debugging, and specify a sufficient timeout period. The timeout period becomes important when you are taking snapshots of indices. (Note that the % character in the timestring only needs to be escaped in crontab entries, not in an interactive shell - see the 19 April 2015 post below.) So, always give a sufficient timeout period:

[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.47 --timeout 300 --logfile curator_log.txt --loglevel 10 --debug delete indices --time-unit days --timestring "%Y.%m.%d" --older-than 7 --prefix logstash- --all-indices


[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.46 --timeout 300 --logfile curator_log.txt --loglevel 10 --debug delete indices --time-unit days --timestring "%Y.%m.%d" --older-than 7 --prefix logstash-apache --all-indices

If you wish to delete indices on the basis of hard disk space, use the following command:

[admin@mgr ~]$ /usr/bin/curator --host 10.4.0.46 --timeout 300 --logfile ttt.txt --loglevel 10 --debug delete --disk-space 40 indices --all-indices




Thursday 30 April 2015

Logstash - require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar

After installing the logstash 1.4.2 rpm on CentOS, I got stuck on this error:

[root@psj admin]# tail -f /var/log/logstash/logstash.err
  require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
  require at org/jruby/RubyKernel.java:1085
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/ffi.rb:1
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:1
  require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
  require at org/jruby/RubyKernel.java:1085
     LibC at /opt/logstash/lib/logstash/util/prctl.rb:4
   (root) at /opt/logstash/lib/logstash/util/prctl.rb:3
     main at /opt/logstash/lib/logstash/runner.rb:79
   (root) at /opt/logstash/lib/logstash/runner.rb:215
^C

[root@psj admin]# service logstash status
logstash is not running


After some time, I realized that it might be a path issue, or an issue with not getting the proper Java environment variables.
So, to get rid of this, please check your environment for the Java paths:

[root@psj ELK]# cat /etc/environment
JRE_HOME=/usr/java/jre1.8.0_40/
JAVA_HOME=/usr/java/jdk1.8.0_31
JDK_HOME=/usr/java/jdk1.8.0_31/
[root@psj ELK]# source /etc/environment

Also, add logstash and java to the path:

[root@psj ELK]# export PATH=$PATH:/usr/java/default/bin
[root@psj ELK]# export PATH=/opt/logstash/bin:$PATH

[root@psj ELK]# service logstash status
logstash is running
If this is not your case, perhaps you can look up the bug details reported on github:
https://github.com/elastic/logstash/issues/1289

Logstash - LoadError: Could not load FFI Provider: (NotImplementedError) FFI not available: null

If you get the error "LoadError: Could not load FFI Provider" while running the logstash (ver 1.4.2) daemon on CentOS, like:


[root@psj ELK]# cat /var/log/logstash/logstash.err
LoadError: Could not load FFI Provider: (NotImplementedError) FFI not available: null
 See http://jira.codehaus.org/browse/JRUBY-4583
  require at org/jruby/RubyKernel.java:1085
  require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/ffi/ffi.rb:69
  require at org/jruby/RubyKernel.java:1085
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:1
  require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
  require at org/jruby/RubyKernel.java:1085
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/ffi.rb:1
   (root) at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:1
  require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
  require at org/jruby/RubyKernel.java:1085
     LibC at /opt/logstash/lib/logstash/util/prctl.rb:4
   (root) at /opt/logstash/lib/logstash/util/prctl.rb:3
     main at /opt/logstash/lib/logstash/runner.rb:79
   (root) at /opt/logstash/lib/logstash/runner.rb:215


Please do the following:

Modify "LS_JAVA_OPTS" option in /etc/sysconfig/logstash file  as given below:

#vim /etc/sysconfig/logstash
...
#LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"
#LS_JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=$HOME"
LS_JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/logstash"
...

"Djava.io.tmpdir" can be set to any directory of your preference instead of "/var/lib/logstash".

[root@psj ELK]# service logstash status
logstash is running

Wednesday 29 April 2015

DDR Memory timings

I wanted to upgrade the system memory in my PC from 4GB to 16GB and got puzzled by the terminology used by memory manufacturers - e.g. DDR3-1333, PC3-12800.

DDR memory specifications follow the DDR-xxx/PC-yyyy classification.

After searching on Google for a few seconds, I realized that both notations refer to the memory timings only.

The first number, xxx, indicates the maximum clock speed that the memory chips support - e.g. DDR-400 memory works at 400 MHz, DDR2-800 can work up to 800 MHz, and DDR3-1333 can work up to 1,333 MHz. Note that this is not the real clock speed of the memory; the real clock of DDR, DDR2, and DDR3 memories is usually half of the labeled clock speed.

The second number indicates the maximum theoretical transfer rate that the memory reaches, in MB/s - e.g. DDR-400 memory transfers data at 3,200 MB/s and is labeled PC-3200. DDR2-800 memory transfers data at 6,400 MB/s and is labeled PC2-6400. Whereas DDR3-1333 memory can transfer data at 10,664 MB/s and is labeled PC3-10600 or PC3-10666. The number “2” or “3” after “DDR” or “PC” indicates that we are talking about DDR2 or DDR3 memory, not DDR.


Maximum memory transfer rate (MB/second) = clock speed (in MHz) * 8.

 i.e.
  • 1066 MHz = PC3-8500 8500 MB/s 
  • 1333 MHz = PC3-10600 10600 MB/s
  • 1600 MHz = PC3-12800 12800 MB/s  
 The following site was useful for finding the memory timings information:
http://www.hardwaresecrets.com/article/understanding-ram-timings/26

Sunday 19 April 2015

Issue in Elasticsearch Curator cron job

I learned the lesson that cron treats the '%' character as a special character.

I had written an "elasticsearch-curator" cron job for closing elasticsearch indexes older than 2 days, and the indexes were not getting closed for some reason. However, I was able to execute the same command in a bash shell without any issues.

0 8 * * * /usr/bin/curator --host 10.1.0.46 close indices --time-unit days --timestring "%Y.%m.%d" --older-than 2 > /dev/null 2>&1

A google search pointed me to this link:
http://www.ducea.com/2008/11/12/using-the-character-in-crontab-entries/

and then I realized that I had to escape the % character in my cron job!!

0 8 * * * /usr/bin/curator --host 10.1.0.46 close indices --time-unit days --timestring "\%Y.\%m.\%d" --older-than 2 > /dev/null 2>&1

Similarly, if you wish to delete older indices, you can use:

$ /usr/bin/curator --host 10.44.0.46 delete indices --older-than 10 --time-unit days --timestring '%Y.%m.%d' --prefix netflow- 

Friday 10 April 2015

Installation of OpenAppID pre-processor for Snort IDS

I have heard many good things about the OpenAppID pre-processor for snort and wanted to include it in my existing snort installation before upgrading. This pre-processor allows you to detect applications running on your network and can be a great aid in identifying suspicious applications or any application not conforming to your company policy.

There is a good installation note for OpenAppID from the snort team but, I felt, there are some missing links. So, here is the sequence to be followed for enabling OpenAppID in your snort IDS installation on CentOS or an equivalent linux distribution:

Make sure that the following rpms are present on the system. If not, install them using yum.

# yum install ethtool make zlib zlib-devel gcc gcc-c++ libtool.x86_64 pcre-devel libpcap libpcap-devel flex bison tcpdump autoconf unzip python-setuptools python-devel lua lua-devel


Download snort and its associated libraries from the snort site.

All the downloaded packages are saved under the /home/admin/installs directory.

Now, let us compile and install them one-by-one.

# cd /home/admin/installs
# tar xzvf libdnet-1.12.tar.gz
# cd libdnet-1.12/
# ./configure
# make
# make install
# cd ..
# tar xzvf LuaJIT-2.0.3.tar.gz
# cd LuaJIT-2.0.3/
# make
# make install
# cd ..
# tar -xzvf daq-2.0.4.tar.gz
# cd daq-2.0.4/
# ./configure
# make
# make install
# ldconfig
# cd ..

Now, compile snort with openAppID pre-processor.

# tar -xvf snort-2.9.7.2.tar.gz
# cd snort-2.9.7.2
# ./configure --enable-sourcefire --enable-open-appid
# make
# make install
# which snort
/usr/local/bin/snort
# /usr/local/bin/snort --version

,,_ -*> Snort! <*-
o" )~ Version 2.9.7.2 GRE (Build 177)

Now, configure snort configuration files and create some directories:

# mkdir /etc/snort # For configuration
# mkdir /var/log/snort # For log data
# mkdir /usr/local/lib/snort_dynamicrules # For dynamic rules
# mkdir /etc/snort/rules # For normal text rules
# touch /etc/snort/white_list.rules # For white lists
# touch /etc/snort/black_list.rules # For black lists

A set of configuration files is included in the snort tarball. These files need to be copied into the /etc/snort/ directory.

# cd /home/admin/installs/snort-2.9.7.2
# cp etc/* /etc/snort/

This process will copy the files - file_magic.conf, snort.conf, unicode.map, classification.config, gen-msg.map, reference.config, threshold.conf - to /etc/snort.

Now, extract the snort registered rules (snortrules-snapshot-2.9.7.2) and copy them to /etc/snort:

# cd /home/admin/installs
# mkdir -p snort_rules
# mv snortrules-snapshot-2.9.7.2.tar.gz snort_rules
# cd snort_rules
# tar -zxvf snortrules-snapshot-2.9.7.2.tar.gz
# cp -r preproc_rules /etc/snort
# cp -r rules /etc/snort
# cp -r so_rules /etc/snort

The next step is to configure the snort configuration file - /etc/snort/snort.conf. The following changes are required:

# vim /etc/snort/snort.conf

var RULE_PATH /etc/snort/rules
var SO_RULE_PATH /etc/snort/so_rules
var PREPROC_RULE_PATH /etc/snort/preproc_rules
var WHITE_LIST_PATH /etc/snort
var BLACK_LIST_PATH /etc/snort




# comment path to dynamic rules libraries
#dynamicdetection directory /usr/local/lib/snort_dynamicrules

The next step is to add the configuration for the OpenAppID preprocessor to the snort.conf file. Find the lines for the reputation preprocessor. Just after the reputation preprocessor and before Step 6, we will add another preprocessor setting.

preprocessor appid: app_stats_filename appstats-u2.log, \
   app_stats_period 60, \
   app_detector_dir /usr/local/snort

This will turn on the OpenAppID preprocessor. The first line names the file to which application statistics will be logged, the second one indicates the time period used to sample this data, and the third one specifies the directory which contains the odp directory extracted from the OpenAppID Detector package.

Now, let us configure output section in snort.conf.

Again open snort.conf file and look into Step 6 to find the lines explaining the unified2 output type.

In that section add the following line:
output unified2: filename snort_openappid.log, limit 128, appid_event_types

Now fire up Snort instance:

# snort -c /etc/snort/snort.conf -i eth#

Where eth# is whichever interface you will be monitoring with (e.g. eth0).
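Once snort is running, the application statistics land in the app_stats_filename configured above (snort appends a timestamp suffix to it). The snort source tree ships a small reader, u2openappid, under tools/ that can dump these records; the log path and <timestamp> below are illustrative:

# u2openappid /var/log/snort/appstats-u2.log.<timestamp>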


Possible Errors

1) daq_static library not found:

# ./configure: line 15736: daq-modules-config: command not found
checking for daq_load_modules in -ldaq_static... no

   ERROR!  daq_static library not found, go get it from
   http://www.snort.org/.

This happens if the daq_static library is not in the path. So, add "/usr/local/bin" to the PATH variable:

[root@psj snort-2.9.7.2]# which daq_static
/usr/bin/which: no daq_static in (/sbin:/bin:/usr/sbin:/usr/bin)
[root@psj snort-2.9.7.2]# which daq-modules-config
/usr/bin/which: no daq-modules-config in (/sbin:/bin:/usr/sbin:/usr/bin)
[root@psj snort-2.9.7.2]# export PATH=$PATH:/usr/local/bin


2) LuaJIT library not found:


[root@psj snort-2.9.7.2]# ./configure --enable-sourcefire --enable-open-appid
checking for a BSD-compatible install... /usr/bin/install -c

checking pkg-config is at least version 0.9.0... yes
checking for luajit... no

   ERROR!  LuaJIT library not found. For better performance, go get it from
   http://www.luajit.org/.
configure: error: "Fatal!"

To correct this, install the latest version of LuaJIT from http://luajit.org/download.html

3) libluajit-5.1.so.2: cannot open shared object file: No such file or directory

  Loading dynamic preprocessor library /usr/local/lib/snort_dynamicpreprocessor//libsf_reputation_preproc.so... done
  Loading dynamic preprocessor library /usr/local/lib/snort_dynamicpreprocessor//libsf_dce2_preproc.so... done
  Loading dynamic preprocessor library /usr/local/lib/snort_dynamicpreprocessor//libsf_modbus_preproc.so... done
  Loading dynamic preprocessor library /usr/local/lib/snort_dynamicpreprocessor//libsf_appid_preproc.so... ERROR: Failed to load /usr/local/lib/snort_dynamicpreprocessor//libsf_appid_preproc.so: libluajit-5.1.so.2: cannot open shared object file: No such file or directory
Fatal Error, Quitting..

To correct it, do the following:

[root@psj snort-2.9.7.2]# ldd /usr/local/lib/snort_dynamicpreprocessor/libsf_appid_preproc.so
    linux-gate.so.1 =>  (0x007d8000)
    libluajit-5.1.so.2 => not found
    libdnet.1 => /usr/local/lib/libdnet.1 (0x00c79000)
    libpcre.so.0 => /lib/libpcre.so.0 (0x00a7c000)
    libnsl.so.1 => /lib/libnsl.so.1 (0x00d6f000)
    libm.so.6 => /lib/libm.so.6 (0x00667000)
    libcrypto.so.10 => /usr/lib/libcrypto.so.10 (0x007d9000)
    libdl.so.2 => /lib/libdl.so.2 (0x00eef000)
    libsfbpf.so.0 => /usr/local/lib/libsfbpf.so.0 (0x005c5000)
    libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x00114000)
    libz.so.1 => /lib/libz.so.1 (0x00e07000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x006e5000)
    libc.so.6 => /lib/libc.so.6 (0x00aac000)
    /lib/ld-linux.so.2 (0x002b8000)

# find /usr/ -name libluajit-5.1.so.2 # Check where the required .so is
/usr/local/lib/libluajit-5.1.so.2

[root@psj snort-2.9.7.2]# ls -l /usr/local/lib/libluajit-5.1.so.2
lrwxrwxrwx 1 root root 22 Apr 10 12:31 /usr/local/lib/libluajit-5.1.so.2 -> libluajit-5.1.so.2.0.3

# ldconfig

Again, try to run the snort instance:

# snort -c /etc/snort/snort.conf -T


If you wish to write your own OpenAppID plugins or extend/tailor the functionality, technical details are available in the OpenAppID documentation on snort.org.

The following articles were very useful while installing and configuring OpenAppID:

1) http://blog.snort.org/2014/03/firing-up-openappid.html
2) http://phucnw.blogspot.in/search?q=snort 
3) http://puremonkey2010.blogspot.in/2014/10/snort-customized-appid-lua-script-as.html?m=1
4) https://www.bilgiguvenligi.gov.tr/saldiri-tespit-sistemleri/snort-openappid-ile-uygulama-farkindaligi.html

The following videos nicely explain the concepts behind OpenAppID:
1) http://www.irongeek.com/i.php?page=videos/derbycon4/t402-snort-openappid-how-to-build-an-open-source-next-generation-firewall-adam-hogan
2) http://blog.snort.org/2014/06/openappid-training-videos-how-to-create.html

Presentation links:

1) https://www.snort.org/documents/openappid-detection-webinar

2) http://www.centralohioissa.org/wp-content/uploads/2014/07/OpenAppID-ISSA_Rafeeq-Rehman.pdf
3) https://www.snort.org/documents/55

Tuesday 24 March 2015

Comparing RPM versions

I wanted to compare the rpms installed on my system with the Scientific Linux/CentOS repository rpms and update any existing rpm if there is a large version difference. This is not usually needed, as yum will automatically update packages to their latest version! But I was interested in finding out how a 'newer version' is determined when 'rpm -U' or 'yum localupdate' is executed.

After some google searches, I came across the rpmdev-vercmp tool. This python-based tool is part of the rpmdevtools rpm package. It requires that you know the epoch, version and release information for each rpm you need to compare.

You can find out this information for any package using:

#rpm -qa --queryformat "'%{NAME}' '%{EPOCH}:%{VERSION}' '%{RELEASE}' '%{ARCH}'\n" |grep package_name

If you wish to know all the query tags, use the following command:
#rpm --querytags

The extracted information needs to be compared with the latest rpms from the linux repositories, and the rpmdev-vercmp tool comes in handy here. Otherwise, you have to make string comparisons yourself and there is a chance that you might miss a use-case in your code!! So, I decided to use the rpmdev-vercmp utility without hesitation.

$ rpmdev-vercmp --help

rpmdev-vercmp <epoch1> <ver1> <release1> <epoch2> <ver2> <release2>
rpmdev-vercmp <EVR1> <EVR2>
rpmdev-vercmp # with no arguments, prompt

Exit status is 0 if the EVR's are equal, 11 if EVR1 is newer, and 12 if EVR2
is newer.  Other exit statuses indicate problems.

$ rpmdev-vercmp audit-2.2-2.el6.i686 audit-2.2-2.el6.i686
These are equal
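The exit codes make it easy to script comparisons. A small sketch using made-up versions (per the help text above, exit status 12 means the second EVR is newer):

$ rpmdev-vercmp 0 2.2 2.el6 0 2.3 1.el6 > /dev/null; echo $?
12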



Useful links:
  1. http://www.faqssys.info/bash-script-to-verify-that-an-rpm-is-at-least-at-a-given-version/
  2. http://utcc.utoronto.ca/~cks/space/blog/linux/RPMShellVersionComparison

Tuesday 3 March 2015

Installation of BRO IDS on CentOS

I have been using the snort IDS for a long time and it generates a lot of useful alerts for malicious activities on my PC. Further, I have heard good things about the BRO IDS and wanted to give it a try. Bro offers a network analysis framework that is different from a typical IDS like snort.

Here are the steps for installation on CentOS 6.5 or higher linux machines:

1) # Install runtime dependencies.
# yum -y install libpcap openssl-libs bind-libs zlib bash python libcurl gawk GeoIP gperftools-libs

2) # Install the build dependencies.
# yum -y install libpcap-devel openssl-devel bind-devel zlib-devel cmake git perl libcurl-devel GeoIP-devel python-devel gperftools-devel swig

You may also require these libraries, so install them in advance, especially if you are compiling Bro from the source tar.gz:

#yum -y install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel
#yum -y install python-devel swig zlib-devel
# yum install jemalloc
# yum install jemalloc-devel
# yum install curl
# yum install libcurl-devel
# yum install GeoIP
# yum install GeoIP-devel
# yum install gperftools
# yum install ruby


3) Install  EPEL repository on the machine.
4) Download and install rpm from Bro site- https://www.bro.org/download/index.html

Of course, if you wish, you can compile the Bro IDS from the source!!

By default, all Bro IDS related files are installed in /opt/bro.

5) Modify the default path:
# export PATH=/opt/bro/bin:$PATH

You can also add PATH=/opt/bro/bin:$PATH to your ~/.profile file in your home directory to make the change permanent.

6) For basic configuration steps, please follow the documentation on the project page:

Using your favorite editor, please modify the following 3 files:
$PREFIX refers to the base of the Bro installation directory.
  •     $PREFIX/etc/node.cfg -> Configure the network interface to monitor (i.e. interface=eth0)

[admin@ids]$  cd /opt/bro
[admin@ids bro]$ cat etc/node.cfg
# Example BroControl node configuration.
#
# This example has a standalone node ready to go except for possibly changing
# the sniffing interface.

# This is a complete standalone configuration.  Most likely you will
# only need to change the interface.
[bro]
type=standalone
host=localhost
interface=eth0

  •     $PREFIX/etc/networks.cfg -> Configure the local networks (i.e. 10.0.0.0/8 Private IP space )
[admin@ids bro]$ cat etc/networks.cfg
# List of local networks in CIDR notation, optionally followed by a
# descriptive tag.
# For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes.

10.0.0.0/8          Private IP space
192.168.0.0/16      Private IP space
  •     $PREFIX/etc/broctl.cfg -> Change the MailTo address and the log rotation
The broctl.cfg file is where the recipient address for all emails sent out by Bro and BroControl, the log rotation interval, and other features can be configured.

When you run bro for the first time, a warning may be reported. Please ignore.
# broctl
warning: cannot read '/var/bro/spool/broctl.dat' (this is ok on first run)

Welcome to BroControl 1.2

Type "help" for help.

[BroControl] >



 [root@ids bro]# broctl

Welcome to BroControl 1.3

Type "help" for help.

[BroControl] > help

BroControl Version 1.3

  capstats [<nodes>] [<secs>]      - Report interface statistics with capstats
  check [<nodes>]                  - Check configuration before installing it
  cleanup [--all] [<nodes>]        - Delete working dirs (flush state) on nodes
  config                           - Print broctl configuration
  cron [--no-watch]                - Perform jobs intended to run from cron
  cron enable|disable|?            - Enable/disable "cron" jobs
  df [<nodes>]                     - Print nodes' current disk usage
  diag [<nodes>]                   - Output diagnostics for nodes
  exec <shell cmd>                 - Execute shell command on all hosts
  exit                             - Exit shell
  install                          - Update broctl installation/configuration
  netstats [<nodes>]               - Print nodes' current packet counters
  nodes                            - Print node configuration
  peerstatus [<nodes>]             - Print status of nodes' remote connections
  print <id> [<nodes>]             - Print values of script variable at nodes
  process <trace> [<op>] [-- <sc>] - Run Bro (with options and scripts) on trace
  quit                             - Exit shell
  restart [--clean] [<nodes>]      - Stop and then restart processing
  scripts [-c] [<nodes>]           - List the Bro scripts the nodes will load
  start [<nodes>]                  - Start processing
  status [<nodes>]                 - Summarize node status
  stop [<nodes>]                   - Stop processing
  top [<nodes>]                    - Show Bro processes ala top
  update [<nodes>]                 - Update configuration of nodes on the fly
 
Commands provided by plugins:

  ps.bro [<nodes>]                 - Show Bro processes on nodes' systems


[BroControl] > cron enable
cron enabled
[BroControl] > install
creating policy directories ... done.
installing site policies ... done.
generating standalone-layout.bro ... done.
generating local-networks.bro ... done.
generating broctl-config.bro ... done.
updating nodes ... done.
[BroControl] >

[BroControl] > status
Name         Type       Host          Status    Pid    Peers  Started
bro          standalone localhost     stopped  
[BroControl] > start
starting bro ...
[BroControl] > status
Name         Type       Host          Status    Pid    Peers  Started
bro          standalone localhost     running   32206  0      04 Mar 12:21:47
[BroControl] >
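
For the "cron" jobs enabled above to actually run, broctl cron itself must be invoked periodically from the system crontab. The BroControl documentation suggests an entry along these lines (every five minutes is the usual choice; adjust the prefix to your installation):

# crontab -e
*/5 * * * * /opt/bro/bin/broctl cron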

 That's all!! Check the docs for more information.
 

Saturday 28 February 2015

Error - Could not open configuration file /etc/httpd/conf.d/nfsen.conf

During the installation of nfsen - a netflow monitoring system - on my machine, I stumbled across this error after configuring nfsen:

Could not open configuration file /etc/httpd/conf.d/nfsen.conf

I checked all the permissions and they were correct. After some time, I realized that it was an SELinux issue!

[root@localhost html]# vim /etc/httpd/conf.d/nfsen.conf
[root@localhost html]# service httpd start
Starting httpd: httpd: Syntax error on line 221 of /etc/httpd/conf/httpd.conf: Could not open configuration file /etc/httpd/conf.d/nfsen.conf: Permission denied
                                                           [FAILED]
[root@localhost html]# ls -l /etc/httpd/conf/httpd.conf
-rw-r--r--. 1 apache apache 34418 Feb 28 21:52 /etc/httpd/conf/httpd.conf
[root@localhost html]# httpd -k start
[root@localhost html]# httpd -k stop
[root@localhost html]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 24
Policy from config file:        targeted

[root@localhost html]# chcon -t httpd_config_t /etc/httpd/conf.d/nfsen.conf
[root@localhost html]# service httpd start
Starting httpd:                                            [  OK  ]
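
Note that chcon changes the label only until the next file system relabel. To make the fix permanent, one option (assuming the policycoreutils-python package, which provides semanage, is installed) is:

# semanage fcontext -a -t httpd_config_t '/etc/httpd/conf.d/nfsen.conf'
# restorecon -v /etc/httpd/conf.d/nfsen.conf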

Friday 13 February 2015

ImportError: No module named version

I had installed the python beaver package and wanted to install elasticsearch-curator. That's when I encountered this error:

File "/usr/lib64/python2.6/distutils/dist.py", line 975, in run_commands

        self.run_command(cmd)

      File "/usr/lib64/python2.6/distutils/dist.py", line 995, in run_command

        cmd_obj.run()

      File "<string>", line 12, in replacement_run

      File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2310, in load

        return self.resolve()

      File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2316, in resolve

        module = __import__(self.module_name, fromlist=['__name__'], level=0)

    ImportError: No module named version

It turned out that the python-daemon package was the culprit!

To get rid of the error, do the following:
    # pip uninstall python-daemon
    # pip install python-daemon
  
    If you wish, you may also install a specific version, say
    # pip install python-daemon==1.6.1
    # pip install beaver --upgrade
    # pip install elasticsearch-curator
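
To confirm that the reinstall repaired the broken package, a quick sanity check (the version printed will depend on what pip pulled in):

# pip show python-daemon | grep -i version
# python -c "import daemon; print(daemon.__file__)"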


Thursday 12 February 2015

Elasticsearch - Exception in thread "main" java.lang.UnsupportedClassVersionError: org/elasticsearch/bootstrap/Elasticsearch : Unsupported major.minor version 51.0

While installing elasticsearch on my Scientific Linux 6.5, I encountered the following error:
[root@meg ELK]# Exception in thread "main" java.lang.UnsupportedClassVersionError: org/elasticsearch/bootstrap/Elasticsearch : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
Could not find the main class: org.elasticsearch.bootstrap.Elasticsearch. Program will exit.
^C

This error is generated because the JDK on the system is either missing or too old: class file version 51.0 corresponds to Java 7, so Elasticsearch needs at least JDK 7 to run.
I downloaded the latest version of the JDK from the Oracle site - http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

The steps to configure the latest java on the system are described below:

[root@meg ELK]# rpm -e elasticsearch
warning: /etc/elasticsearch/elasticsearch.yml saved as /etc/elasticsearch/elasticsearch.yml.rpmsave
[root@meg ELK]# rpm -ivh elasticsearch-
elasticsearch-1.4.0.noarch.rpm  elasticsearch-head-master.zip
[root@meg ELK]# rpm -ivh jdk-
jdk-7u71-linux-x64.rpm  jdk-8u31-linux-x64.rpm
[root@meg ELK]# rpm -ivh jdk-8u31-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk1.8.0_31            ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        jfxrt.jar...
[root@meg ELK]# ja
jar      java     javac    javadoc  javaws
[root@meg ELK]# java -version
java version "1.6.0_33"
OpenJDK Runtime Environment (IcedTea6 1.13.5) (rhel-1.13.5.0.el6_6-x86_64)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)
[root@meg ELK]# ls -l /usr/share/java
java/       java-1.3.1/ java-1.4.0/ java-1.4.1/ java-1.4.2/ java-1.5.0/ java-1.6.0/ java-1.7.0/ javadoc/    java-ext/   java-utils/ javazi/
[root@meg ELK]# ls -l /usr/java/jdk1.7.0_71/
bin/                                jre/                                README.html                         THIRDPARTYLICENSEREADME.txt
COPYRIGHT                           lib/                                release
db/                                 LICENSE                             src.zip
include/                            man/                                THIRDPARTYLICENSEREADME-JAVAFX.txt
[root@meg ELK]# alternatives --install /usr/bin/java java /usr/java/jdk1.7.0_71/bin/java 2
[root@meg ELK]# alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
   2           /usr/java/jdk1.7.0_71/bin/java

Enter to keep the current selection[+], or type selection number: 2
[root@meg ELK]# java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
[root@meg ELK]# rpm -ivh elasticsearch-
elasticsearch-1.4.0.noarch.rpm  elasticsearch-head-master.zip
[root@meg ELK]# rpm -ivh elasticsearch-1.4.0.noarch.rpm
Preparing...                ########################################### [100%]
   1:elasticsearch          ########################################### [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch to start automatically using chkconfig
 sudo /sbin/chkconfig --add elasticsearch
### You can start elasticsearch by executing
 sudo service elasticsearch start

[root@meg ELK]# vim /etc/elasticsearch/elasticsearch.yml
[root@meg ELK]# service elasticsearch start
Starting elasticsearch:                                    [  OK  ]
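
As an aside, javap can confirm which Java release a class was compiled for; "major version: 51" corresponds to Java 7 and 52 to Java 8. The jar path below assumes the elasticsearch RPM's default layout:

# javap -verbose -cp /usr/share/elasticsearch/lib/elasticsearch-1.4.0.jar org.elasticsearch.bootstrap.Elasticsearch | grep "major version"
  major version: 51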

Friday 6 February 2015

Find out which libraries being used by a program in linux

To find out the libraries used by a piece of software, use ldd. ldd will tell you what shared libraries are loaded by a particular binary. For example:

[psj@localhost ~]$ ldd /usr/bin/python
    linux-gate.so.1 =>  (0x00221000)
    libpython2.6.so.1.0 => /usr/lib/libpython2.6.so.1.0 (0x00222000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00ca4000)
    libdl.so.2 => /lib/libdl.so.2 (0x00cc1000)
    libutil.so.1 => /lib/libutil.so.1 (0x0723a000)
    libm.so.6 => /lib/libm.so.6 (0x00cd3000)
    libc.so.6 => /lib/libc.so.6 (0x00b0b000)
    /lib/ld-linux.so.2 (0x00ae9000)

To list the libraries used by programs that are currently executing, lsof is your best friend.

[psj@localhost ~]$ lsof -n -P +c 0 |grep udev
gnome-settings- 2227            psj  mem       REG      253,0    60160      69022 /lib/libudev.so.0.5.1
gnome-settings- 2227            psj  mem       REG      253,0    27456      69023 /usr/lib/libgudev-1.0.so.0.0.1
gnome-panel     2246            psj  mem       REG      253,0    60160      69022 /lib/libudev.so.0.5.1
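
A variation on the same idea: to see every distinct program currently mapping a given library (libssl here is just an example target):

$ lsof -n -P +c 0 | awk '/libssl/ {print $1}' | sort -u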


Thanks to a great tip from Johannes B. Ullrich at SANS ISC.

More details are available here - https://isc.sans.edu/forums/diary/What+is+using+this+library/19275/

Thursday 5 February 2015

Fabric installation error - pkg_resources.DistributionNotFound: paramiko>=1.10

I wanted to install the fabric package for automated deployments. I was confident that I had installed all the dependent packages:

# yum install python-devel
# yum install gmp-devel

I had also downloaded the latest version of gmplib and compiled it:

# tar -xvjf gmp-6.0.0a.tar.bz2
# cd gmp-6.0.0
# ./configure
# make
# make install

"gmplib" that comes with default CentOS installation is old and fabric reports time attack related vulnerabilities when running. More details - https://community.webfaction.com/questions/12199/libgmp-time-attack-vulnerability

# pip install meld3
# pip install ecdsa
# pip install pycrypto
# pip install paramiko
# pip install fabric

But it was not going to be a smooth ride, and I encountered an error:

Traceback (most recent call last):
  File "/usr/bin/fab", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2655, in <module>
    working_set.require(__requires__)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 648, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: paramiko>=1.10

I made many attempts at installing, re-installing, and upgrading all the packages, but the error would not go away. Frustrated, I checked my setuptools version:
psj@psj-desktop:~$ easy_install --version
setuptools 0.6

Thereafter, I updated setuptools using pip:
# pip install setuptools --upgrade

You can also use:
# easy_install -U setuptools

All the python packages were then re-installed using:
# pip install ecdsa --ignore-installed
# pip install pycrypto --ignore-installed
# pip install paramiko --ignore-installed
# pip install fabric --ignore-installed
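
Before running fab again, you can verify that a new enough paramiko is now visible to python:

# python -c "import paramiko; print(paramiko.__version__)"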

The problem related to "paramiko" disappeared!!