Traditional endpoint client protection focused on blacklisting. This was a pretty effective approach way back in the day, but in today's ever-mutating world it is not very manageable or useful. The replacement for the blacklist is the whitelist. Well-managed whitelisting can be very effective, but managing it well is …well… difficult. Immunity approaches endpoint detection differently: what if, instead of focusing on bad or good, we look at attack behaviours, attack patterns, and attack chains? That's what we are going to do over the next couple of posts.
Initially, this post started as a simple question: what would it take to get El Jefe data into Splunk? The purpose is NOT to replicate what is already being done in El Jefe. El Jefe does what it does, and it does it well. Instead, we will use the data from El Jefe to provide dashboards, reports, and alerts generated by Splunk.
First off, a warning: I am by no means a developer. Way back in the 90's I could spin some pretty mean Pascal and dBase IV programs, but after working in desktop and server operations for many years, I have developed into what I call a CopyUnderstandProductionizegrammer.
A CUPgrammer typically has a problem that needs to get fixed in a hurry, searches for an existing solution to that problem or a similar one, Copies the code, and then, if unfamiliar with the code, tests it until it is Understood and known to fix the problem without introducing new ones. This is often done with copious amounts of print statements. Finally, the CUPgrammer implements the newly adapted code into Production.
With access to some amazing minds at Immunity, one day I might not be a CUPgrammer and actually learn proper development practices, but for now, I followed my modus operandi and went to work.
There are many ways for Splunk to consume data. There isn't an API for El Jefe that we can point Splunk to (yet), and while I could grab the data out of the SQL database, that would mean rework whenever the schema changes, with version upgrades, and so on. In my mind, the best and most CUPgrammer-friendly option was to push the data to Splunk through the Splunk API.
I asked our Digital Executive Protection program team what the best place was to get all of the El Jefe data from a system. It turns out all of the data is collected in a Python dictionary just before being posted to the XML service.
Since El Jefe is open source, on my El Jefe server, I opened the file
<location_of_El Jefe>/webapp/xmlserver/ElJefeXMLServer.py, after making a backup, of course.
In the import section add:
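The original snippet is not reproduced here, so here is a minimal sketch of the imports the code below needs. El Jefe runs on Python 2, so the Python 2 stdlib module names are assumed:

    import base64   # encode the Basic Auth header for Splunk
    import httplib  # HTTPS POST to Splunk's management port
    import json     # serialize the El Jefe event dictionary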
Then scroll down and locate the class SecureXMLRPCRequestHandler section. Just above that line, enter the following:
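Again, the exact code is not shown above, so treat this as a hedged sketch: the host, port, credentials, the function name send_to_splunk, and the choice of Splunk's receivers/simple endpoint are all assumptions to adjust for your environment.

    # Assumed Splunk connection details -- change for your environment.
    SPLUNK_HOST = "your-splunk-server"
    SPLUNK_PORT = 8089  # Splunk's default management port
    SPLUNK_USER = "eljefe"
    SPLUNK_PASS = "changeme"

    def send_to_splunk(event):
        """POST a single El Jefe event to Splunk's simple receiver."""
        auth = base64.b64encode("%s:%s" % (SPLUNK_USER, SPLUNK_PASS))
        headers = {"Authorization": "Basic %s" % auth}
        conn = httplib.HTTPSConnection(SPLUNK_HOST, SPLUNK_PORT)
        try:
            conn.request("POST",
                         "/services/receivers/simple?source=eljefe&sourcetype=eljefe",
                         event, headers)
            conn.getresponse().read()
        finally:
            conn.close()

The receivers/simple endpoint indexes whatever body it receives, tagging it with the source and sourcetype from the query string, which is what the search later in this post relies on.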
Then, in the class SecureXMLRPCRequestHandler section, right after the line rpc = ElJefeRPC(), add the following.
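As a sketch, assuming the dictionary El Jefe has just assembled is in scope (called data here; use whatever name the handler actually uses), the hook is one JSON dump and one call:

    # Forward a JSON copy of the event to Splunk; never let a Splunk
    # hiccup break El Jefe's own processing.
    try:
        send_to_splunk(json.dumps(data))
    except Exception:
        pass

Wrapping the call in a bare try/except is deliberate: if Splunk is down, El Jefe should carry on as if nothing happened.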
Now, before you start El Jefe back up, make sure the username and password used in the code have been created in Splunk, and add the following to splunk/etc/system/local/props.conf under the stanza for your eljefe sourcetype (this strips everything before the first { so Splunk sees pure JSON):

    SEDCMD-StripHeaders=s/^[^{]+//

Restart Splunk and the El Jefe XML service, log in to Splunk, and do a quick search for sourcetype=eljefe.
With the data in, let’s build a quick situation dashboard.
In the not-too-distant future, Immunity will likely release an app on Splunk Apps that includes a few interesting dashboards and reports, but for now, either create a new app or add the dashboard to an existing one. This dashboard will initially highlight four things (rough example searches for each follow the list):
- Number of events per system
- Binaries over time
- Unique binaries
- Rare processes
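As a rough sketch only: the field names below (hostname, binary_name, process_name) are placeholders for whatever your El Jefe JSON extraction actually produces, so adjust them to match your data.

- Number of events per system: sourcetype=eljefe | stats count by hostname
- Binaries over time: sourcetype=eljefe | timechart count by binary_name
- Unique binaries: sourcetype=eljefe | stats dc(binary_name) as unique_binaries
- Rare processes: sourcetype=eljefe | rare process_name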
Go to the Splunk console, open Dashboards, create a new dashboard called ElJefe_OverSight, click Edit Source, delete the existing lines, and paste in the XML from the link below.
NOTE: In our environment, the {}. prefix that precedes the JSON field names is removed on import to Splunk. If you did not do that in your props.conf, make sure you add {}. before each of the field names in the dashboard XML.
Save the changes, go back to the El Jefe app (or wherever you created the dashboard), and click on it.
You should now see something similar to this.