Thursday, September 17, 2015

Mobile applications instrumentation and reverse engineering, no-jailbreak style

With the advent of instrumentation frameworks such as Frida (1), mobile application assessment methodology has become increasingly sophisticated. Modern Enterprise environments almost always include a mobile device component, and when performing a security assessment for the Enterprise, having the ability to rapidly introspect mobile applications is hugely beneficial to an assessment team.

Most Enterprise mobile applications are essentially web-service-specific browser implementations, in the sense that the application interacts heavily with an often obscured back-end web API.

Because developers implicitly assume that only they will be peeking under the covers of their application, they often miss the subtle things that can be done with their API or with the data they send to the mobile device.

So as a penetration tester, you can ask yourself these questions:

What data is available to the user of the application that could be used in a sensitive way? Is there geolocation data for other users? Is there information sent to the mobile device that is not displayed, but is highly interesting and can be correlated with other data to reveal something sensitive?

How much of the application's security is client-side and not enforced by the server? This used to be common in web applications, and it is now a huge problem in mobile applications. Remember all the old bugs where you could set a price to -1 dollars and get something for free...
What rate limits are set on authentication attempts, if any?

Are there any input injection vulnerabilities in the back-end servers? Can I send weird data to another mobile device that confuses it?

These are all questions that you would ask in any web application assessment, but now you want to ask them in the mobile world. And of course, these questions cannot be answered by automated static analysis, so exposing this to a human is a key feature of any toolset.

Having the ability to take an existing application and instrument it to interact with its back-end service in controlled ways hugely increases the ability to determine API semantics and attack surface scope without having to jump through a lot of code analysis hoops. It also allows you to quickly change the behavior of an existing application in an effort to make the back-end service perform actions it was never intended to perform, or return data that was not intended to be visible to the end user.

Normally, this kind of analysis on mobile OSes can be frustrating, as one would usually rely on the availability of a public jailbreak in order to jailbreak the device and then bypass the restrictions imposed by the mobile OS itself on outside application instrumentation. This is especially true for iOS. But being too low level is a huge problem! You want to interact with the application the way the developers do - using the objects and functions they created!

As such, our consulting team has a recurring need to fully instrument and alter the runtime behavior of a given mobile application on iOS, but without having to rely on jailbreaks. This spawned the development of BLACKBUCK: a jailbreak-agnostic iOS application instrumentation framework which we use to perform our mobile assessments (as you know if you are a customer :).


BLACKBUCK builds on top of a variety of existing analysis frameworks. It currently links frida-gum, capstone and a ctypes bridging framework that allows us to interact with Objective-C directly from Python. BLACKBUCK is essentially an iOS injectable dylib that provides a runtime Python-based bridge into the iOS application's runtime internals including its Objective-C objects and methods.

BLACKBUCK currently only supports iOS. For BLACKBUCK delivery on non-jailbroken iOS we use a second Immunity tool which we called JOEY.


The way you generally modify an iOS mobile application without relying on a jailbreak is to first obtain a valid certificate from Apple, then modify the target mobile app's Mach-O binary so that it loads a custom dynamic library, re-sign and re-package everything, and finally re-install the app on the device.

This is the general modus operandi for non-AppStore apps, such as apps that are given to you by e.g. the Enterprise customer you are performing an assessment for, and which lack the usual AppStore encryption layer. For AppStore apps, you would first dump the decrypted application from memory and then proceed just as you would with a non-AppStore app.

JOEY automates the non-AppStore scenario. Since there are already tools available to perform the memory-dump-and-rebuild step for AppStore apps, JOEY currently does not include such functionality; it is, however, on the docket for a future release.

JOEY is written in PyQt, and as such JOEY's front-end can run on anything that can run Python and the Qt framework. The JOEY back-end has to run on Mac OS X, which is where the actual code signing occurs.

The way JOEY works is very simple: you pass it the original IPA package and the dylib you want to inject, and provide a destination path for the re-signed package. JOEY will then build the repackaged application, including your custom dylib, for you.


BLACKBUCK is written in Python and provides an API to interact with all the frameworks we rely on, which means we can directly access a lot of the features that are normally internal to e.g. Frida.

As mentioned previously, we also have the ability to interact with Objective-C code directly from Python. This Python layer allows you to implement an Objective-C class entirely in Python, hook Objective-C methods with Python methods, and so on and so forth.
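Since BLACKBUCK is an internal tool, its exact API is not public, but conceptually, hooking an Objective-C method from Python looks something like the following sketch (every blackbuck.* name here is a hypothetical illustration):

# Hypothetical sketch: all blackbuck.* names are illustrative, not the real API.
from blackbuck import runtime

# Resolve an Objective-C class inside the hijacked application's runtime.
NSURLRequest = runtime.objc_class('NSURLRequest')

def log_request(url):
    # Runs before the original implementation, with the method's arguments.
    print 'Outgoing request: %s' % url.absoluteString()

# Hook +[NSURLRequest requestWithURL:] with a Python pre-hook.
runtime.hook_method(NSURLRequest, 'requestWithURL:', pre=log_request)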

Currently you can interact with BLACKBUCK either by uploading Python modules to have them executed or imported, or by accessing the BLACKBUCK web interface.

BLACKBUCK web interface

The BLACKBUCK web interface is very useful and allows us to interact with and control our hijacked application via BLACKBUCK. You could also use the alternate interaction method (file upload) to upload a Python module and then import it into the runtime session to start inspecting and influencing the application.

So now that you have a basic idea of what BLACKBUCK is, let's have a look at a demo:

1. Frida -

Friday, June 12, 2015

Look for DUQU2 across all time and space!

If you are running El Jefe then you can just use the script below to test for any possible Duqu2 infections that have occurred across your network for all time (assuming they didn't recompile specifically for you, which is very possible).

Any user of El Jefe can run this script by putting it inside the eljefe/webapp/scripts folder. Of course, if you get a hit, you can examine the machines that were infected much more closely in the GUI itself.

Happy "Hunting" :)

---CUT HERE---

import sys
import os

if "." not in sys.path: sys.path.append(".")
if "../" not in sys.path: sys.path.append("../")
if "../../" not in sys.path: sys.path.append("../../")
os.environ["DJANGO_SETTINGS_MODULE"] = "webapp.settings"

from home.models import binaries

evil_md5 = [
    # ... list of known Duqu 2.0 MD5 hashes ...
]

# MD5s of every binary El Jefe has recorded across the network
binaries_hashes = set([b.binary_md5 for b in binaries.objects.all()])
# De-duplicate the IOC list
filtered_hashes = set(evil_md5)
print 'Found %d binaries' % len(binaries_hashes)
print 'Testing against %d Duqu MD5 hashes' % len(filtered_hashes)

for md5_hash in filtered_hashes:
    if md5_hash in binaries_hashes:
        print 'Found hash %s' % md5_hash

Friday, March 6, 2015

CANVAS - Psexec & Kerberos credentials

With the recent release of MS14-068, it became quite clear that a PSEXEC (Sysinternals) module would be a fine addition to CANVAS. In CANVAS 6.99 we wrote and released a quite simple yet rather effective one, with a couple of fine features.

1. PSEXEC and its basics

First of all, let us recall how the original Sysinternals tool works. A remote command (be it interactive or not) is executed following this procedure:

  • The PSEXEC client extracts a Windows service binary (PSEXESVC.exe) out of its own executable and uses the current user token (or alternate credentials) to store it in one of the writable SMB shares of the target, usually ADMIN$.
  • The client then uses the DCE/RPC API associated with the SVCCTL named pipe to create a Windows service associated with PSEXESVC.exe. This obviously requires ADMIN privileges on the remote host.
  • The client remotely starts the service.
  • Once started, this service creates a control pipe (\PSEXESVC) on the target.
  • The client remotely opens this pipe using the SMB API. It reads and writes on it following an internal protocol.
    • An internal cookie is used to ensure that both the client and the server are using the same version of the protocol (this is mandatory in case a Windows service from a previous run was left on the target).
    • The service learns the various options requested by the client, such as the command to execute, whether the command should be run with lower privileges, etc.
  • The service creates 3 named pipes and starts a new process whose STDIN, STDOUT and STDERR file descriptors may be redirected to the 3 aforementioned pipes.
  • The client connects to the 3 pipes using the SMB API and initiates a read polling on the STDOUT/STDERR pipes. If necessary (CMD.exe), it will also write to the STDIN pipe.
  • When the new process terminates, the service may be stopped and destroyed.
  • The service binary may then be removed from the share.
That said, people usually either use PSEXEC to:
  • Execute a command (RCE)
  • Start CMD.exe (interactive shell)
For obvious reasons, in a security context such as a penetration test they rarely use the Desktop interaction feature.
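Our module lives inside CANVAS, so its code is not public, but the service-creation half of the protocol described above is easy to illustrate with the open-source impacket library. The following is a rough sketch of the SVCCTL dance, not the CANVAS implementation; the target, credentials and service binary are placeholders:

# Rough illustration of the PSEXEC service-creation steps using impacket;
# 'target', credentials and the service binary are placeholders.
from impacket.smbconnection import SMBConnection
from impacket.dcerpc.v5 import transport, scmr

conn = SMBConnection('target', 'target')
conn.login('administrator', 'password', 'IMMU2')

# Store the service binary in the writable ADMIN$ share.
fh = open('PSEXESVC.exe', 'rb')
conn.putFile('ADMIN$', 'PSEXESVC.exe', fh.read)
fh.close()

# Bind to the SVCCTL named pipe over DCE/RPC.
rpctransport = transport.SMBTransport('target', filename=r'\svcctl', smb_connection=conn)
dce = rpctransport.get_dce_rpc()
dce.connect()
dce.bind(scmr.MSRPC_UUID_SCMR)

# Create and start the service (requires ADMIN privileges on the target).
scm = scmr.hROpenSCManagerW(dce)['lpScHandle']
svc = scmr.hRCreateServiceW(dce, scm, 'PSEXESVC\x00', 'PSEXESVC\x00',
                            lpBinaryPathName='%SystemRoot%\\PSEXESVC.exe\x00')['lpServiceHandle']
scmr.hRStartServiceW(dce, svc)
scmr.hRCloseServiceHandle(dce, svc)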

2. Our PSEXEC module

2.1. Implementation of the RCE/interactive shell features

We chose to keep the original PSEXEC named-pipes polling model to implement the RCE functionality in our module. This was not strictly mandatory, as other ways to do this exist. MSF, for example, relies on redirecting the standard output to a file which is later downloaded using SMB. We decided to use named pipes in order to anticipate future needs (and to avoid leaving files on disk in the event of a disconnection, which is slightly better OPSEC). On the other hand, we decided to use a traditional MOSDEF connect-back to mimic the interactive shell. This is exactly what was implemented in our MS14-068 exploit.

Both methods have their pros and cons:
  • The RCE only requires a client connection to TCP port 445. However, the pipe management is not 100% perfect and side effects may sometimes be observed with the current version.
  • The MOSDEF shell is quite powerful (it includes both the RCE and interactive shell abilities) but may not be possible if egress filtering is in place. While you might eventually be able to disable the Windows firewall of your target using the RCE, there is nothing you can do against external equipment.
Note: The choice between the two is made by filling in the "cmd" parameter. If "mosdef" is specified, then the MOSDEF connect-back is used; otherwise, whatever command is specified will be executed (if possible).

2.2. The authentication in SMB

This module provides two ways to authenticate the user:
  • Using NTLMSSP (login + password)
  • Using Kerberos (login + password + domain OR login + domain + credential file)

The NTLMSSP authentication may use either the plaintext password or the NTLM hash of the password. The latter is the so-called pass-the-hash technique, as shown below:

Computing the NTLM hash

Using the NTLM hash in the CLI

The Kerberos authentication is used in two cases:

  • Implicitly, as a fallback method if NTLMSSP fails (this can happen when domain policies are enforced), but only if the user has also specified the domain FQDN (which is not mandatory with NTLMSSP).
  • Explicitly, if we have Kerberos credentials (potentially retrieved using post-intrusion tools) and we intend to use them. We decided that these credentials would always be stored in a UNIX ccache file, because we had already done the work of writing a ccache API for MS14-068.
To illustrate the situation, let's generate artificial ADMIN credentials (basically the TGT of the domain administrator) using kinit in the IMMU4.COM domain:
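With the standard MIT Kerberos client tools this is a single command (the principal here is just an example):

$ kinit administrator@IMMU4.COM

By default the resulting TGT ends up in a ccache file such as /tmp/krb5cc_1000, which is exactly the kind of file the module consumes.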

Artificially generating a Kerberos TGT
We can then fill the appropriate field in CANVAS to specify these credentials:

Using a Kerberos credential file to compromise the target
And we finally get our node popping:

Target is owned!

2.3. Upload and execution of a specific binary

In a few cases, you may want to run your own binary on the target. Our module will take care of both the upload and the execution of this binary; it is basically treated as a specific kind of RCE.

Several options may be specified either on the command line or using the GUI:
  • The command and argument to execute (cmd)
  • The local path of the binary to upload (local_upl)
  • The remote directory in which the binary should be stored once uploaded (remote_path_upl)
For example, suppose the user is running this command:

$ python exploits/psexec/ -t -p 445 -Ocmd:"mybinary mybinaryargument" -Ouser:administrator -Odomain:IMMU2.COM -Okrb5_ccache:/tmp/krb5cc_1000 -Olocal_upl:/tmp/BABAR -Oremote_path_upl:"C:\\"

Then /tmp/BABAR will be uploaded to C:\\ as mybinary.exe (the .exe suffix being automatically added) and executed from this directory.

3. Stealing credentials using "kerberos_ticket_export"

We wrote a command module that allows us to detect the presence of Kerberos credentials on both Windows and Unix targets. This module is used to list the credentials associated with the compromised account under which the MOSDEF callback is running:
  • If the node is a Linux/BSD system, all the Kerberos tools will be using the standard MIT libraries, which store credentials in well-known locations (for example /tmp/krb5cc_%{uid} on Ubuntu/FreeBSD); see the sketch after this list.
  • If the node is Windows, we use LsaCallAuthenticationPackage() in a MOSDEF-C payload.
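On the Unix side, the detection logic is conceptually very simple. A minimal sketch (not the actual module code) looks like this:

import glob
import os
import pwd

# MIT Kerberos stores credential caches as /tmp/krb5cc_<uid> by default;
# exact locations vary across distributions.
for path in glob.glob('/tmp/krb5cc_*'):
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    print 'Found ccache %s (owner: %s, %d bytes)' % (path, owner, st.st_size)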
Here is a little demo:

We also wrote a similar module to actually extract the tickets and convert them (if necessary) to ccache files. This is where PSEXEC comes into play. Both modules are quite similar:
  • Exporting credentials on Unix systems basically means copying a file that we can immediately reuse. The ccache file may include a TGT but also one or several TGSs.
  • On Windows systems, we are (currently) limited to TGT extraction. Basically, we keep using LsaCallAuthenticationPackage() but with different parameters. This allows us to extract the TGT and the associated session key. In some circumstances, and for security reasons that we may explain in another blog post, the session key cannot be retrieved, in which case the TGT is useless. Once the TGT, the session key and the other parameters are extracted, we use our ccache API to build and store the tickets locally (in a session directory).
The following screenshot illustrates the ticket extraction:

Ticket is extracted and saved (oops small typo in the module code :P)

And the subsequent use of PSEXEC to own a new node:

The previous relative path is specified (although cut by the GUI here)

4. Last words

The Kerberos ticket modules are currently at an early stage of development. There is lots of room for improvement, including full 64-bit support, corner cases, and privileged-account related tricks. As for the PSEXEC module itself, we may add a couple of features such as a "runas" option.

Thursday, November 13, 2014

El Jefe 2.2 - The curious case of a 3G Modem ( Tracking USB devices and malware)

We are glad to announce a new and exciting release of El Jefe!

If you are in the business of protecting networks, you certainly spend enormous amounts of time grepping through pages of network log files trying to track down the origins of any threat that hits your network. Maybe you have a "SIEM" that helps you by storing this information for a window of time, and letting you search it.

If you are lucky, you can find your Russian malware inside the inbox of your director of finance's secretary, in the form of a beautiful Word 0day. But there are days when the odds are not with you, and there is no explanation - no way to find out how your network got infected. Let's add a little more complexity to our mental game and make the threat a sophisticated implant, something like INNUENDO.

It's time to think USB! And that is exactly what this new release of El Jefe wants to give you: a clear and visual way to track down every USB device connected to every computer on your network.

A good way to understand your USB devices and their relationship is to open the USB Relationship Map on your El Jefe server:

USB Relationship Map
The first thing you notice is that the WIN-4C072EVNM9N and Anibal-PC workstations share three devices (two mass storage devices and one Huawei mobile device).

Double-clicking on a device will give us interesting information about the type of device and who has plugged it in.

This Huawei 3G Modem has been used on two workstations. Why are we even using Huawei modems?!?

You can also obtain a list of every USB device plugged and unplugged from any given endpoint or all endpoints. This will give us a good idea of the HUAWEI mystery we want to highlight in this post.
This list of USB devices on the endpoints has beautiful Christmas colors.
It seems that the HUAWEI Mobile and the HUAWEI SD Storage are connected at the same time, so this probably means they are the exact same device.

The Events view always provides us with a good picture of how this correlates with process creation, and as expected it doesn't let us down. Seconds after the 3G modem was connected, Autorun.exe was executed on the machine. This is not pretty at all.

Event view
Binary inspection gives us a thumbs down; this is getting worse.

The Thai antivirus BKav knows something that neither the Russian nor the Estonian AVs know: WHAT IS THIS .exe HIDING?
BKav identifies the binary as a KeyLogger

With one click in El Jefe we can now use Cuckoo to analyze the binary further, and even search our entire enterprise to see where else it has been installed. But we leave that as an exercise for the reader.

We hope you enjoy the El Jefe 2.2 release!

Monday, September 22, 2014

El Jefe and Splunk Part 1

Immunity focuses on the offensive side of security, even with a defensive product like El Jefe.

Traditional endpoint client protection focused on blacklisting. This was pretty effective way back in the day, but in today's ever-mutating world it is not very manageable or useful. The replacement for blacklisting is whitelisting. Well-managed whitelisting can be very effective, but managing it well is... well... difficult. Immunity's approach to endpoint detection is different: what if, instead of focusing on bad or good, we look at attack behaviours, attack patterns and attack chains? That's what we are going to do over the next couple of posts.

Initially, this post started as a simple question: what would it take to get El Jefe data into Splunk? The purpose is NOT to replicate what is already being done in El Jefe. El Jefe does what it does, and it does it great. Instead, we will use the data from El Jefe to provide dashboards, reports and alerts generated by Splunk.

First off, a warning: I am by no means a developer. Way back in the '90s I could spin some pretty mean Pascal and dBase IV programs, but after working in desktop and server operations for many years I have developed into what I call a CopyUnderstandProductionizegrammer.

A CUPgrammer typically has a problem that needs to be fixed in a hurry, searches for an existing solution to the problem or a similar one, Copies the code and then, if unfamiliar with the code, tests it until it is Understood and known to fix the problem without introducing new ones. This is often done with copious amounts of print statements. Finally, the CUPgrammer implements the new adapted code into Production.

With access to some amazing minds at Immunity, one day I might not be a CUPgrammer and actually learn proper development practices, but for now, I followed my modus operandi and went to work.

There are many ways for Splunk to consume data. There isn't an API for El Jefe that we can point Splunk to (yet), and while I could grab the data out of the SQL database, that would mean changes if the schema changes, with version upgrades, etc. In my mind, the best and most CUPgrammer option was to push the data to Splunk through the Splunk API.

I asked our Digital Executive Protection program team what the best place was to get all of the El Jefe data from a system. It turns out all of the data is collected in a Python dictionary just before it is posted to the XML service.

Since El Jefe is open source, on my El Jefe server, I opened the file
<location_of_El Jefe>/webapp/xmlserver/, after making a backup, of course.

In the import section add:
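Something along these lines, using only Python 2's standard library (a sketch):

import json
import base64
import urllib2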

Then scroll down and locate the class SecureXMLRPCRequestHandler section. Just above that line, enter the following:
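For example, a hypothetical helper that POSTs each event to Splunk's simple receiver endpoint (host, port, credentials and names are placeholders for your environment):

# Hypothetical helper; host, port and credentials are placeholders.
SPLUNK_URL = 'https://splunk.example.com:8089/services/receivers/simple?sourcetype=eljefe'
SPLUNK_AUTH = 'Basic ' + base64.b64encode('splunkuser:splunkpassword')

def send_to_splunk(event):
    # Best effort: never let a Splunk outage break the El Jefe XML service.
    try:
        req = urllib2.Request(SPLUNK_URL, json.dumps(event))
        req.add_header('Authorization', SPLUNK_AUTH)
        urllib2.urlopen(req)
    except Exception:
        pass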

Then, in the class SecureXMLRPCRequestHandler section, right after the line rpc = ElJefeRPC(), add the following.
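Something like this, where event_data stands in for whatever name El Jefe gives the dictionary it is about to post (hypothetical):

send_to_splunk(event_data)  # event_data: the dict collected before the XML post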
Now, before you start up El Jefe, make sure the username and password entered in Splunk have been created, and add the following to splunk/etc/system/local/props.conf.
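A minimal stanza looks like this (assuming the eljefe sourcetype used above):

[eljefe]
KV_MODE = json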

Restart Splunk and the El Jefe XML service, log in to Splunk, and do a quick search for sourcetype=eljefe.

You should get some results. Click on the "all fields" button in Splunk and notice that Splunk auto-extracted all the fields.

By default, Splunk will prefix each field with {}. . If you want to remove that prefix, add the following between the sourcetype and KV_MODE lines in props.conf.
With the data in, let’s build a quick situation dashboard.

In the not-too-distant future, Immunity will likely release an app on Splunk Apps that includes a few interesting dashboards and reports, but for now, either create an app or add the dashboard to an existing app. This dashboard will highlight four things initially.
  1. Number of events per system 
  2. Binaries over time 
  3. Unique binaries 
  4. Rare processes 
The dashboard will include a selectable time and the option to input a system name.

Go to Splunk console, dashboards, create a new dashboard called ElJefe_OverSight, click edit source, delete the existing lines and paste in the XML from the link below.
NOTE: In our environment, the {}. that precedes the JSON fields is removed on import to Splunk. If you did not do that in your props.conf, make sure you add a {}. before each of the field names.

Save the changes, go back to the El Jefe app or wherever you created the dashboard and click on it.

You should now see something similar to this.

Now that we have the data coming in, in our next post, we will go back to El Jefe and get a little offensive with it.