BloodHound Basics

Over the past months and years at $dayjob, I have done a lot of work with BloodHound to remove attack paths and reduce the attack surface of our Active Directory environment. During this time I have found a number of ways to leverage BloodHound to perform what is effectively an audit of Active Directory, by identifying key attack paths and quantifying issues within large enterprise environments.

Initially, I found the more advanced query language (Cypher) to be quite complex, but it is very powerful and just happens to use a slightly different structure to other languages such as SQL.

To start, I’ll generate a BloodHound dataset using the DBCreator script provided by SpecterOps. Following that, I will cover what Cypher is and explain some of its features in a later post. Finally, I will share some queries which can help to audit your environment.

For this guide, I won’t cover what BloodHound is or the very basics of the program. There are existing guides which do a great job of this, and the documentation is very thorough.

Environment Prep

Let’s use the DBCreator script, but we will use byt3bl33d3r’s PR to fix some of the issues in the original version. This section does get a bit techy, so skip over the install section if you just want to learn about BloodHound!

I had a lot of issues getting this to work, so I simplified the usage of pickle in the MainMenu class by changing the assignment of first_names and last_names to something simpler:

# Load the bundled first/last name lists directly
first_pickle = open("data/first.pkl",'rb')
last_pickle = open("data/last.pkl",'rb')

self.first_names = pickle.load(first_pickle)
self.last_names = pickle.load(last_pickle)

cmd.Cmd.__init__(self)

I also found the environment variables didn’t work, so I opted to clear them. Finally, the group nesting function uses a hardcoded value (dept = group[0:-19]) based on the length of the default ‘TESTLAB.LOCAL’ domain name. I changed this to use len(self.domain) + 6 so that it returns the correct value and works as expected:

dept = group[0:-(len(self.domain)+6)]
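To illustrate the arithmetic: DBCreator’s group names follow a DEPARTMENT#####@DOMAIN pattern, so recovering the department means stripping the five-digit suffix, the @ separator and the domain name – the last len(self.domain) + 6 characters. For the default TESTLAB.LOCAL domain that is 13 + 6 = 19, which lines up with the original hardcoded slice. A minimal standalone sketch of the idea (the group name is just an illustrative example from my dataset):

domain = "HTTP418INFOSEC.COM"
group = "OPERATIONS00039@" + domain

# Drop the five-digit suffix, the "@" and the domain name from the end
dept = group[0:-(len(domain) + 6)]
print(dept)  # OPERATIONS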

With that change, the final group nesting logic works as expected.

I will make a PR for this if I get around to it one day!

And let’s load up BloodHound to verify it worked correctly:

And we get a pretty neat graph out of it when we run one of the pre-built queries – but more on this later on!

Basic Analysis

When we have our data loaded into BloodHound, we are presented with a view which shows all of the Domain Admins in the data we gathered. In my example, there are a lot of Domain Admins, so the graph is quite large!

We can click on any of these users to load details on that specific user. For example, we can see that BDELUNG00508@HTTP418INFOSEC.COM is the account for Mr Brendan Delung.

We can use this to show some basic information on the user, such as their name (Brendan Delung) and when they last logged in (Sat 19th November 2022).

If we scroll down a bit to the Group Membership section, we can see the First Degree Group Membership entry. This complex name is another way of saying the groups which this user is a member of. From when we first loaded up BloodHound, we know that Brendan is a member of the Domain Admins group (i.e. DOMAIN ADMINS@HTTP418INFOSEC.COM). From the screenshot below, we can see that Brendan is a member of 8 groups (including the Domain Admins).

If we click on this row, BloodHound will run a query in the background and render the groups which Brendan is a member of in the graph view:
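Under the hood, this is a simple query over single MemberOf edges. As a hedged sketch (the exact query BloodHound runs may differ slightly), the Cypher equivalent looks something like:

MATCH p = (u:User {name:"BDELUNG00508@HTTP418INFOSEC.COM"})-[:MemberOf]->(g:Group) RETURN p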

Another option to represent the groups which a user is a member of is the Unrolled Group Membership, which is below the First Degree Group Membership feature we just used.

This takes the output from above, and then checks if any of these groups are within other groups and so on. Again, by clicking on the row we can see the graph which it creates, showing a further 2 groups which BDELUNG00508 is part of:

As we can see, the first ‘column’ of yellow nodes shows the groups we could see before (Starting with OPERATIONS00039), but now we can see that the OPERATIONS0122 group is a member of another group (OPERATIONS00826), which itself is within another group (OPERATIONS01589)!
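In Cypher terms, the unrolled view simply swaps the single MemberOf hop from the previous sketch for a variable-length one – again an approximation rather than the exact pre-built query:

MATCH p = (u:User {name:"BDELUNG00508@HTTP418INFOSEC.COM"})-[:MemberOf*1..]->(g:Group) RETURN p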

This shows the power of BloodHound, as running queries like this gets very complex with large environments. Whilst the output here is a little boring to us as an attacker, it becomes far more interesting if one of these unrolled groups has access which was not expected, such as local admin to a server.

Pathfinding

BloodHound allows us to find paths between AD objects easily, using the ‘Pathfinding’ option in the UI.

If we click on this icon, we can now enter a ‘Start Node’ and ‘Target Node’ – in other words, where are we and where do we want to get to.

In the context of a red team, the Start Node could be a user who has been phished, and the End Node could be the Domain Admins group (Or whatever we want to ultimately compromise), which would show attack paths to obtain domain admin rights.

We can also fill this detail in by right clicking on a node and then selecting either ‘Set as Starting Node’ or ‘Set as Ending Node’.

To show this, we will use DBERENDT00668 as our starting point.

As we type in a group, BloodHound will autofill suggestions:

After some thinking, BloodHound will show us an ‘attack path’ – the steps we would need to take as an attacker to become a Domain Admin user.

To explain the above attack path: the DBERENDT00668 user is a member of the IT00928 group. Members of this group can then RDP onto the COMP01364 server. This server then has a session for MSCHIVELY01554, who is a Domain Admin user.
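Behind the scenes, pathfinding is a shortest-path search between the two selected nodes. A hedged approximation of the query (the real one restricts the search to a specific set of edge types such as MemberOf, CanRDP and HasSession):

MATCH p = shortestPath((u:User {name:"DBERENDT00668@HTTP418INFOSEC.COM"})-[*1..]->(g:Group {name:"DOMAIN ADMINS@HTTP418INFOSEC.COM"})) RETURN p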

If we wanted to learn more about any of these permissions, we can right click on the ‘edge’ (The line between the coloured nodes) and then click on ‘Help’.

This will then give a short overview on how it could be exploited:

High Value Groups

BloodHound has the concept of ‘High Value Groups’, which represent the traditionally highly powerful groups within Active Directory such as Domain Admins, Enterprise Admins and so on. In short, if any of these AD objects are compromised by an attacker, it is very bad news! In the graph view, these objects have a small diamond on the top right of their icon.
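In the underlying data these objects are flagged with a highvalue property, so they are easy to list via a raw query – a minimal sketch, assuming the standard BloodHound schema:

MATCH (g:Group {highvalue:true}) RETURN g.name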

Owned Users

Another core concept is marking users as ‘owned’, which can be done by right clicking on a user and clicking on ‘Mark User as Owned’. This does two things:

  1. Marks the user object with a little skull symbol to show they are owned
  2. Allows us to filter on ‘owned’ users in our queries

BloodHound has a number of pre-built queries which search from owned users – for example the Shortest Paths to Domain Admins from Owned Principals query, which will search from every owned user to find the shortest route to becoming Domain Admin.
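Marking an object as owned simply sets an owned property on the node, which queries can then filter on. A hedged approximation of that pre-built query:

MATCH p = shortestPath((n {owned:true})-[*1..]->(g:Group {name:"DOMAIN ADMINS@HTTP418INFOSEC.COM"})) WHERE n <> g RETURN p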

I have found this feature to be very useful when combined with other datasets. For example, if a password spraying or cracking exercise is performed, then any weak accounts could be marked as ‘owned’. We can then use BloodHound to highlight the issues posed by these accounts in a really visual way – showing just how ‘close’ a weak account might be to becoming a domain admin!

Moving Laterally

Another key use case for BloodHound is for attackers who have just landed in an environment and are looking to move laterally. If we assume that we have compromised the DBERENDT00668@HTTP418INFOSEC.COM user, it would be time-consuming to map out our access purely through LDAP or PowerShell queries.

If we load up the user, we can see that they have a lot of interesting outbound access. In the screenshot below we will focus on the Execution Rights section of BloodHound, which shows the permissions that our user has. For example, First Degree RDP Privileges will show the servers where our user has been explicitly granted access via RDP.

The Group Delegated RDP Privileges will show servers where our user is in a group (or nested groups) which has been granted access to a resource via RDP. More information on how this could be abused can be found on the BloodHound wiki.

If we click on the Group Delegated RDP Privileges entry above, BloodHound will again render this into a graph for us – showing that 6 different groups grant this user RDP access to servers.
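As a hedged sketch, the group-delegated variant walks the nested group memberships before following the CanRDP edge to a computer:

MATCH p = (u:User {name:"DBERENDT00668@HTTP418INFOSEC.COM"})-[:MemberOf*1..]->(g:Group)-[:CanRDP]->(c:Computer) RETURN p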

Custom Queries

Finally, at the bottom of the graph view is the ‘Raw Query’ tab, which allows us to run our own custom queries written in Cypher (the query language of the underlying Neo4j database) – which I will cover in my post on the more advanced usage of BloodHound. This allows us to run far more complex queries and quantify a lot of the data in AD rapidly.
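As a small taster of that quantification, here is a hedged example which counts how many enabled users end up in Domain Admins through any level of group nesting:

MATCH (u:User {enabled:true})-[:MemberOf*1..]->(g:Group {name:"DOMAIN ADMINS@HTTP418INFOSEC.COM"}) RETURN count(DISTINCT u)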

Diagrams: Timelines

Aren’t timelines great?!

Anyone who has spoken with me will know that I am a huge fan of diagrams as a way of breaking down complex topics into easier-to-understand concepts. Over the past few months and years at $dayjob, I have produced a number of diagrams during the reporting phase of our internal testing. In this post, I’m going to focus on one particular type of diagram – the mighty timeline. Bear in mind that these timelines work well from both an offensive and defensive standpoint!

There are several key benefits to producing these timelines, in no particular order:

  1. It ensures that you fully understand the order in which activity occurred. For example:
    • Do we know exactly where we obtained credentials X?
    • Where did they attempt to use them?
    • Do we know all of the hosts which a threat actor moved laterally onto?
  2. If you make this timeline during the live testing window, it serves as an excellent tool for deconfliction between red team activity and the blue team.
  3. These timelines can be given as part of an executive overview without having to dig into vast levels of detail, and can help to visualise the stages of an attack.

For context, my role at $dayjob is more of a white-team role, where I organise and co-ordinate red team testing – so I often have to do debriefs on the output of red team testing!

More Timelines!

For this post, I will be using the GDPR report into the British Airways hack. As a public document, we can use it to produce a potential timeline of the incident by reading between the lines of the content included in the report. I will emphasise that I have no idea about the detail behind the BA hack; all the detail in this post comes from the ICO ruling. Any detail which is actually true is purely by coincidence (a.k.a. Please don’t sue me).

Throughout this, I will be using Draw.io, which is a free alternative to Visio. If you have the option, I would recommend Visio as it does have some nice extra features.

Typically, I produce 4 diagrams for any complex red team engagement:

  • Low-Level Diagram
  • Medium-Level Diagram
  • High-Level Diagram
  • Defensive Improvement Opportunities

Low-Level Diagrams are typically shared with our trusted agents, or the defensive teams following any testing. High-level diagrams & ‘Defensive Improvement Opportunities’ diagrams are used extensively for stakeholder debriefs or in PowerPoint presentations. Medium-level diagrams tend not to be used all that much in my experience, but still have their place!

Low Level Diagrams

For these low level diagrams, I feel that there is basically no limit to how detailed they can be. From my experience, these are what I use on a day to day basis through the live testing period. These are eventually shared with the blue teams in order for them to learn from and to correlate any alerts and detections – so the more detail the better!

They also serve as an excellent tool for deconfliction, so I would recommend producing them as you go if at all possible. The process of making these diagrams often reveals any areas which you don’t fully understand, so you can clarify your knowledge with the red team lead before the reporting stage begins in anger.

An example could be the following, focusing on the initial days of the attack on BA:

There are a few features of this diagram which I have found useful, going from top to bottom:

  • Adding a few brief summaries of key points can help to draw attention to key events. I tend to keep these key milestones the same across all of the diagrams I produce, from the low-level to the high-level diagrams. This helps to highlight the key moments in what can sometimes be quite complex diagrams!
  • A rough summary of the kill-chain steps can help to indicate what was happening at the given time.
  • Again, splitting the background colours up into individual days will help to distinguish each day’s activities. I find this also helps to indicate how much activity occurred. For example, the 22nd was clearly a very busy day for Magecart in the diagram above.
  • Highlighting SOC tickets or other defensive actions can help to tell the story of any near misses, and show potential areas for improvement. Again, these are totally fictitious SOC tickets (Please don’t sue me)

High Level Diagrams

For these diagrams, I normally try to be quite succinct in the comments made on the diagram. As you’ll see below, there are far fewer boxes, and they contain less technical information. This is intentional, as this sort of high-level diagram is more likely to be viewed by less technical stakeholders, who (unfortunately) won’t care about the leet techniques used to gain initial access. This also has the added advantage that these diagrams will tend to fit onto a PowerPoint slide without having to be split up.

In order to maintain consistency, I have again included the key milestones and kill chain elements, but am using a scale of weeks instead of days.

Another option which can be handy is to create a diagram which includes every day (or week) in the time period you are covering. For example, in the diagram above I deliberately didn’t include the 2nd, 9th or 16th of July as there was no activity. In the diagram below I have included them, which can help highlight the length of dwell time between periods of activity.

From the below diagram, we can clearly see that there were two significant periods with no major activity – which highlights the length of time from initial access to detection.

In prior tests, I have also taken this time to point out what could have happened during the red team exercise. For example, above I point out that ransomware could have been deployed – an action which would have likely been very damaging for BA’s estate (And arguably financially crippling compared to the GDPR fine they ended up with).

Whilst this is more relevant for tests where I operate as a white team lead, it can be useful to apply some business context to the findings. For example, adding a box to clarify potential business impacts which could have occurred:

  • “With access as user X, testers could have compromised system ABC but decided on other targets”
  • “System ABC was compromised, but testers used an alternate route to achieve Objective X”

Defensive Improvement Opportunities

The final ‘style’ of timeline diagram which I will cover is the ‘improvement opportunities‘ diagram. This has been a secret weapon of complex internal debriefs, where suggestions from a red team report can be mapped onto a timeline, showing the potential solutions to the key findings of a red team exercise.

Following the debrief of a test, the next logical question on most people’s minds is “How do we fix this?”. Therefore, having a diagram which contains recommendations mapped to the testing timeline is a good starting point for this conversation. If a particular event from testing draws a lot of attention, then it can help to direct the discussion. For example, below, if the initial access was especially concerning to the audience, then there are 4 potential options which could be discussed to aid remediation (REC01, REC02, REC03, REC04).

Below is a snippet of what could have been produced for the initial stages of the BA report (With several fictitious additions from myself to make it a little more interesting).

The naming of this diagram is admittedly very woolly, but that is very much deliberate (And not me running out of ideas for new timelines). This slide will typically lead to a lot of ‘discussion’, so it is best to ensure the language used remains neutral and simply presents the facts without any opinion. For example, instead of including a statement like ‘The entire Citrix team effectively have Domain Admin permissions’, I would instead simply suggest an AD permissions review to ensure I don’t see my P45 any sooner than I need to.

As you can see, splitting the findings into procedural and technical ‘improvement opportunities’ can help to distinguish between two common groups for these opportunities. Depending on the test, these groups might change – or you might find that splitting them into groups isn’t that much use!

I would also advise giving a unique reference to each suggestion (e.g. REC01, REC02…), as it makes it far easier to discuss the suggestions and ensure that everyone is referring to the same item on the diagram! Additionally, it means that any actions or notes can be taken accurately – there’s nothing worse than having to second-guess your notes after a debrief!

Timeline Guidelines

As you may have noticed from the diagrams above, I try to be quite consistent with how I lay them out and design them. I find that this consistency makes them easier to read, and means the audience can focus on the content rather than unusual layout or styling choices!

One of the most important things I find is to ensure the direction of the arrows makes logical sense when laying out a timeline. For example, in the high-level view below, I have highlighted the two ‘rows’ of events I created to show the two separate strands of activity:

  1. Red – The potential ransomware angle
  2. Blue – The eventual PII theft angle

This also makes it clear that both of these chains of events stem from the same initial point (Citrix compromise & break-out).

To show why I use these distinct ‘rows’, I have made another diagram below which only uses a single ‘row’ for the events.

This shows how the narrative behind a red team exercise (or incident) can quickly become confusing when everything is squeezed into a single row, as it is hard to distinguish how each action was achieved. For example:

  • Did we get Domain Admin privileges from a database server?
  • Could ransomware have been deployed estate-wide from a database?

I try to use the same font across the entire diagram. In areas where I want to draw attention, I typically make the font bold and increase it by a size or two. If I need to add even more detail to an event, I typically just add it as a comment on the shape when I am using Visio – so that I can refer back to it later on. For example, below I make the title of the boxes bold, but keep the text the same size.

In terms of the colours to choose, I would be careful when using shades of red and green, as they have connotations of red being bad and green being good – which you might not want! I have found the following colours to work well for specific events:

Colour | Usage
Dark Pink (Totally not red 😉) | Key attacker actions/objectives
Orange | Add context to attacker activity
Blue | Defender actions
Green | Key defensive actions

Below is a snippet of the high-level view showing these colours:

Additionally, when it comes to debriefing tests, any red on a report can appear inflammatory to the teams which have to remediate the issue. For example, this would likely not make you many friends in a debrief:

Accessibility

Across all of these timelines, accessibility should be considered throughout. Generally we want to make them as readable as possible, so ensuring we use a good font size is an obvious place to start, although this can be challenging on complex low-level diagrams which cover a wider timespan.

Additionally, colour-blindness does affect a lot of people, so I would avoid making red and green an integral part of the diagram. I tend to use colours to enhance key parts of the diagram, but a colourblind reader should also be able to glean the same information. To take this further, I have previously added a dashed border or patterned background to signify events of interest – as both colourblind and non-colourblind readers will be drawn to those items. This can start to look messy if you aren’t careful, so use it at your own risk!

Finally, try to make the diagrams as uncluttered as possible. Diagrams with large amounts of intricate detail and arrows are very difficult to understand (and boring), and they can often be simplified, especially when included in a PowerPoint slide! Whilst this is of less concern for the low-level diagrams which contain all of the technical information, I try to minimise the number of events included in a high-level diagram wherever possible.

Summary

Hopefully this post was of use. I truly think that well-designed diagrams and timelines are a real timesaver when it comes to reporting and debriefing a red team test. With some good thought and planning up-front, they can avoid a lot of confusion and generally lead to better outcomes when doing a debrief.

Attacking Password Managers: LastPass

This is the second post in a series I have done looking at password managers. My first post was on KeePass and covered some techniques which can be used against local password managers.

For this post, LastPass will be used as an example of a cloud-based password manager. In my opinion, these managers often have better UI/UX, so are easier for non-techy people to use.

LastPass has unfortunately had a *pretty rough* history, with several high-profile breaches in 2015, August 2022 and November 2022. Despite this, I would consider it as one of the most popular password managers – so it is worth using for this post!

There are several methods which I will cover here, most of which will apply to any cloud/browser-based password manager:

  1. Dumping Saved Credentials
  2. Dump The LastPass Vault From Memory
  3. Session Theft – Browser Pivoting
  4. Session Theft – Cookies via DPAPI
  5. Remote Debugging – Accessing The LastPass Vault
  6. Remote Debugging – Accessing Cookies Via The Debug WebSocket

I was only able to successfully access the vault through the ‘Remote Debugging – Accessing The LastPass Vault’ technique. With the other techniques I was able to get to a position where I believe I had the correct cookies or access, but I was unable to successfully load the vault. I suspect this is due to LastPass performing host or session checking – but it could well also be due to me needing to get better at Pass-The-Cookie attacks!

Dumping Saved Credentials

To dump any saved Chrome passwords, we can use SharpChrome with the logins /password:PASSWORD parameters. This does require us to get the plaintext password of the user, but there are many ways of obtaining this 😉

To dump other Chromium based browsers such as Brave or Edge, we can use the /browser flag. For example, we would use the following command to interact with an Edge browser: SharpChrome.exe logins /password:PASSWORD /browser:edge

One thing to bear in mind with this approach is that LastPass will detect new sign-in locations, such as different browsers, IP addresses and so on – so we cannot simply dump creds for LastPass and log in directly to it!

Dump The LastPass Vault From Memory

For LastPass to work as an extension, it loads the vault into memory – but there are some weaknesses in the way in which it does this. Notably a BlackHat talk from 2015 covers this in technical detail if you are interested.

Luckily, TrustedSec have automated a lot of this with their lastpass BOF, which is covered in an excellent 15-minute demo on YouTube. The BOF is part of their CS-Remote-OPs-BOF repository.

To start with, I will ensure that the LastPass extension is logged in, but I will not have LastPass.com open in Chrome.

I will then use Cobalt Strike to list the processes and pass the PIDs for Chrome processes into the lastpass BOF.

It will begin to parse memory and dump out any relevant strings, for example the master password:

As well as credentials from within the vault:

I did find that the BOF would only pull credentials from the vault after I had specifically searched for them in the extension, though it did appear as if LastPass have altered the way in which the extension works since the BOF was written, as the parsing was a little off when I used it. Nonetheless, this is a great BOF and serves as a very useful starting point for parsing Chrome’s memory.

Session Theft – Browser Pivoting

If our target has authenticated into LastPass, we can use Cobalt Strike’s Browser Pivot feature to leverage this already-established connection, but the target would have to be using Internet Explorer 😔.

We can use FoxyProxy to route our traffic through the HTTP proxy which CS has established.

After adding an exception and relaxing rules on HSTS and the network.stricttransportsecurity.preloadlist option, this did partially load LastPass, but I had issues with it successfully loading the vault.

Session Theft – Cookies via DPAPI

I then turned to dumping values via DPAPI, which was a bit of a struggle. I believe that Chrome has altered the way in which it stores cookies, which prevents some of the ‘older’ tooling from working automagically. To counter this, I went to the Chrome/User Data folder on my test lab and searched for ‘cookie’, revealing the new location!

It turns out that the Cookie DB is within %LOCALAPPDATA%\Google\Chrome\User Data\Default\Network, which isn’t where SharpChrome looks – most other blog posts mention looking in the older \Default\Cookies location. We can use this file path in SharpChrome to dump the cookies using the following command:

SharpChrome.exe cookies /target:"C:\Users\vagrant\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies" /password:PASSWORD

Looking at the documentation, we need to provide this with the /statekey:X parameter. We can obtain these values by running SharpChrome.exe statekeys.

We will use the value ***EC73E9 for the /statekey parameter, and will also use the /url parameter to only include cookies for lastpass. This leaves us with the final command:

execute-assembly /home/user/SharpChrome.exe cookies /target:"C:\Users\user\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies" /password: /statekey:YADDAYADDA_EC73E9 /url:lastpass /format:json

The /format:json flag allows us to export directly into the Cookie-Editor extension.

Remote Debugging – Accessing The LastPass Vault

At this point I started looking into the remote-debugging-port technique which has been covered a lot recently. I then stumbled upon MDSec’s post which covered the technique below in far greater (and better) detail – go check out their post!

I found that using a redirector and the CS SOCKS proxy was very slow in my lab, even when using an interactive beacon, so it is probably too intensive for real-world usage unless you absolutely need it! Ultimately I used Chisel to connect directly from the target to my team server; in the real world you would want to ensure this is far more locked down!

To start, I spawned Chrome with chrome.exe --remote-debugging-port=10999. As covered elsewhere, there is a --headless argument which won’t create a GUI window, but I had serious issues getting it to work!

Following a lot of errors, I realised we need to launch Chrome with the victim’s own user profile. I didn’t have success specifying a user profile directory explicitly, but did have success just using --profile-directory="Default". I believe that this profile directory value can change depending on how the target’s environment is configured – so this might differ!
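Putting those two flags together, the spawn command I ended up with looked something like this (the debug port is arbitrary, and as noted the profile directory may differ per target):

chrome.exe --remote-debugging-port=10999 --profile-directory="Default"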

Using Chromium, I applied the flags described within the MDSec blog post to route the browser through the Chisel SOCKS proxy:

--proxy-server="socks5://127.0.0.1:1080" --proxy-bypass-list="<-loopback>"

This did appear to work on Chromium in Ubuntu 22.04 – but it would not successfully load the LastPass Vault for me. Instead I used a Windows VM with Chrome to more closely mimic the target user’s environment.

To get the remote debugging URL, we need to visit http://localhost:10999/json/list and get the devtoolsFrontendUrl value.

Then we can use the following magic URL from MDSec to spawn a new instance of the LastPass vault extension:

localhost:10999/json/new?chrome-extension://hdokiejnpimakedhajhdlcegeplioahd/vault.html

We then visit /json/list to see that it has spawned a new instance.

We can then visit this in the attacker browser which we started earlier.

The demo above will replicate any actions you take onto the targeted user’s desktop. This has obvious, significant drawbacks, so it would require some *heavy* social engineering to work if you pursue this method without using headless mode or another technique!

Ultimately, I couldn’t get this to work in headless mode through Firefox and Chromium on an Ubuntu machine. I suspect that this is a combination of:

  • LastPass is likely doing some sort of host checking, which fails when using a different host to the target (Windows/Chrome vs Ubuntu/Chromium&Firefox)
  • Using headless mode introduced other errors
  • Lack of ability

Remote Debugging – Accessing Cookies Via The Debug WebSocket

Accessing the vault via a GUI felt like a pretty long-winded and noisy way of getting into LastPass. I did some hunting on Google and found a post by SpecterOps on dumping cookies via the debug port which we were using in the previous example.

This post mentions the CookieNapper tool by GreyCatSec, which uses WebSockets exposed by the debug port to simply query for all of the cookies. This has several advantages:

  • No need to use SharpChromium or load extra binaries
  • Invisible to the user
  • Should have less latency as the attack would require less usage of proxies

Using the SOCKS proxy we set up with Chisel, we can use the tool to dump cookies:

From here, we can use the EditThisCookie extension to import the cookies. Again I had issues getting access to the vault, but the cookies did appear to be successfully dumped.

Summary

In summary, I learned a lot digging into these 2 password managers – in no particular order:

  • KeePass has a number of pretty significant design flaws, even in the latest version
  • The backdooring of KeePass via the trigger system is pretty neat
  • Scanning memory is a really handy technique for password managers and programs such as these…
  • I need to improve at pass-the-cookie attacks
    • Or just don’t target LastPass I guess ¯\_(ツ)_/¯
  • Password managers are still a very good solution, and are hugely more secure than other solutions (Cough passwords.xlsx)

For what it’s worth, I think LastPass (or, more broadly, cloud-based password managers) are the best solution for most users. They tend to be much more usable than local applications such as KeePass, which should in theory encourage end users to actually use them, making it more likely that their credentials are secure. For more technical or computer-savvy users, KeePass is ultimately more secure from an architectural point of view, in my opinion.

That being said, cloud-based password managers are a huge target for attackers and you do end up putting a lot of trust into the provider, rather than an endpoint which is under your control. With LastPass now having suffered multiple breaches, this is an area which attackers will continue to target!

Further Reading