RedTeamNotes: Combining Notes & Graphs!

Intro

RedTeamNotes started as a mini project to try to build a better note-taking application than what was already available. The big issue for me with apps such as Obsidian or OneNote was that whilst they have great note-taking capability, they struggle to show how different notes relate to each other – unless you follow every link and manually piece it together.

Whilst doing CRTO, I often found myself in a position where I was trying to achieve an objective but struggling to remember all of the various ways of getting there. For example, to move laterally, I would likely remember that I could:

  • Dump LSASS to obtain AES256 keys
  • Obtain the plaintext password and perform overpass-the-hash
  • Use Rubeus to monitor for TGTs using the /monitor command

But would I remember that I could also use the following?

  • NTLM Relaying, if I have control of a device with unconstrained delegation
  • ADCS to obtain a certificate, then leverage THEFT5
  • And several other options..

Dun Dun Dun

So I decided to build my own note taking application – RedTeamNotes!

I had a few aims for this tool:

  • It should be possible to reuse the notes in other applications.
    • i.e. Use JSON
  • Try to avoid dependencies on too many tools just to build the application
    • I don’t want to have to download Node and 100 dependencies if I want to play around in HackTheBox
  • The relationships between notes should be very clear

With a few known limitations:

  • The tool won't handle editing the notes or relationships
  • The notes will be intentionally quite brief and will mainly signpost other resources

After many changes, I ended up choosing a few great JavaScript libraries to help me out. The graphing UI is handled by Drawflow. Positioning the various nodes turned out to be one of the hardest parts of the project: it is very easy to know what the correct graph looks like, but in my experience it is very hard to actually produce that layout in an automated way! Luckily I found Dagre’s GraphLib, which I believe uses the ‘Dagre’ algorithm to sort the nodes, but I might well be wrong about that!

Aside from these two libraries, the rest was blood, sweat, tears and swearing at CSS selectors.

On the right, we can view information on our selected technique, which currently supports:

  • Description
  • OPSEC considerations
  • Links
  • Code examples
  • Defensive guidance

This data would be represented with the following JSON:

"constrained_delegation" : {
    "name": "Constrained Delegation",
    "description": "Constrained delegation is a feature of AD which allows a service to act on behalf of another user to specific other services. If we can compromise a service with constrained delegation enabled, we can potentially steal cached TGTs",
    "opsec" : ["Make sure the msdspn value (If using Rubeus) is a valid SPN which we can delegate to, only specific SPNs will be allowed."],
    "code_examples": [
        {
            "description" : "Find machines with uncontrained delegation enabled via BloodHound",
            "code" : "MATCH (c:Computer), (t:Computer), p=((c)-[:AllowedToDelegate]->(t)) RETURN p"
        },
        {
            "description" : "Find machines with constrained delegation enabled via LDAP",
            "code" : "(&(objectCategory=computer)(msds-allowedtodelegateto=*))"
        },
        {
            "description" : "Find SPNs we can delegate to via PowerView",
            "code" : "Get-DomainComputer -Identity PC_NAME | Select-Object -Expandproperty msds-allowedtodelegateto"
        }
    ],
    "links" : ["https://www.guidepointsecurity.com/blog/delegating-like-a-boss-abusing-kerberos-delegation-in-active-directory/"]
}

Using our earlier example of pass-the-ticket, we can see how this is represented below. Notice the 6 lines leading into the left of ‘Pass The Ticket’, showing 6 techniques which could get us to that position.

I decided to make a node for some of the ‘tactics’ (as MITRE ATT&CK would refer to them), which helps Dagre to better position the nodes. This has the added benefit of acting as a pseudo-checklist when I am stuck on a machine.

For example, below are a set of the privilege escalation techniques I have currently added into the tool:

The tool can also handle quite a lot of nodes, which is impressive considering that the relationships are quite complex, there is no caching, and the layout algorithm is not super-efficient!

Searching

The tool has a search bar which queries the title and description of every node on the current page, using FuzzySort. For example, let’s look for techniques related to ‘tgt’:

We can then click on the top item “Dump TGT’s” and be taken to the relevant node.

We can also switch between multiple datasets. I currently have 3 datasets within my notes:

With code samples, we can click on the clipboard emoji, and the example text will be copied to the clipboard.

Get-DomainGPO | Get-DomainObjectAcl -ResolveGUIDs | ? { $_.ActiveDirectoryRights -match "WriteProperty|WriteDacl|WriteOwner" -and $_.SecurityIdentifier -match "S-1-5-21-SID_GOES_HERE-[\d]{4,10}" } | select ObjectDN, ActiveDirectoryRights, SecurityIdentifier | fl

Summary

Hopefully this serves as some inspiration as to what can be done to make note-taking a bit more user-friendly and usable. I’m hoping to develop this idea into a few other directions in the coming weeks and months, as I think this style of program could be useful for a few other applications!

Digging Into Mimikatz’s lsadump And sekurlsa

Overview

Initially, my aim with this post was to dig into Mimikatz in greater detail. I had used its more common functions during CRTO and OSCP, but had never explored its more exotic features in any depth. Mimikatz is an enormous tool, so I focused on the lsadump and sekurlsa functions, as they are commonly used for dumping credentials.

I also wanted to focus on providing detail on how this can be detected and monitored, as Mimikatz leverages a number of legitimate features of Windows, which can make it difficult to prevent. All of the work in this article has been performed by a number of excellent researchers – I have simply pulled it together into this article!

For an introduction to Mimikatz and credential providers, I found this blog to be a useful starting point. Initially, I will cover a few new features I uncovered which grabbed my attention. Then I will go through each of the functions, covering what they do, how they can be leveraged and how to defend against them. Finally, I will cover some recommendations to defend against password reuse, and to prevent Mimikatz from gathering credentials.

Interesting Stuff

DCShadow

The lsadump::dcshadow module allows an attacker with Domain Admin permissions to stealthily modify attributes on any user of their choice. This means an account could have a single property changed to backdoor it, without being as noisy as an attack such as DCSync.

Decrypt ‘Isolated’ Credentials

Using sekurlsa::bootkey, we can decrypt blobs which are protected by an isolated LSA process (i.e. behind Credential Guard). With access to the physical memory of a machine, it is possible for Mimikatz to recover the necessary information to decrypt these protected blobs.

Inter-Realm Tickets

With Domain Admin privileges, we can dump the forest trust keys with Mimikatz which we can use to create a forged trust ticket. This will allow us to leverage an alternative high-level persistence mechanism to golden tickets without using the krbtgt account. (Link)

mimilib.dll

This little file can be used to register a custom SSP (Security Support Provider). If this is added to the registry, it will then log any credentials used to access the machine in plaintext, even if Credential Guard is enabled. This default method is well signatured, but I had not come across custom SSPs before. (Link 1, Link 2)

Pass The PRT

“A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10 or newer, Windows Server 2016 and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) specially issued to Microsoft first party token brokers to enable single sign-on (SSO) across the applications used on those devices” – Microsoft

Using the sekurlsa::cloudap module, we can dump the PRT for a machine. With some DPAPI magic, we can then generate cookies for an account for up to 14 days. This allows us to interact with Azure as the compromised user, without needing access to the compromised device. (Link)

The Hacker Tools

This is a site I have seen several people mention, but I had never used it much until now. It is a really handy reference for the various functions which Mimikatz supports. Here are links to the lsadump and sekurlsa pages – thanks Charlie Bromberg!

lsadump::

backupkeys

Overview

The lsadump::backupkeys function allows the Domain Controller backup DPAPI keys to be dumped.

Attacking

This requires Domain Admin privilege and can only be performed against a DC.

Exploiting this means we can export the backup DPAPI keys. This can be done using Mimikatz, or using DSInternals (the Get-BootKey, Get-ADDBackupKey and Save-DPAPIBlob cmdlets, in that order).

We can then decrypt DPAPI secrets for all users, including decrypting these secrets off the target machine.

Prevention/Detection

This attack is hard to defend against, as an attacker must already have Domain Admin level permissions to leverage it. Therefore, limiting access to Tier 0/DCs is likely the best route.

cache

Overview

lsadump::cache will load cached domain credentials from the registry. These credentials are cached in case the device is unable to connect to a DC, in which case the cached values are used to authenticate the user.

These cached credentials can be found at registry keys HKEY_LOCAL_MACHINE\SECURITY\Cache\NL$1 through to NL$10.

Attacking

This attack requires elevated privileges.

The recovered hashes can be cracked with hashcat, if the hash is transformed into the format “$DCC2$10240#USERNAME#HASH” and mode 2100 is used.
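
As a rough sketch of that workflow (the iteration count, username and hash below are placeholder values I have made up, not output from a real system), the lsadump::cache output can be reshaped and fed to hashcat mode 2100:

# Hypothetical values copied out of lsadump::cache output
$iterations = 10240
$user = 'jsmith'
$mscachev2 = 'f3c1...'   # placeholder MsCacheV2 / DCC2 hash

# Build the hashcat-friendly line and write it out
"`$DCC2`$$iterations#$user#$mscachev2" | Out-File -Encoding ascii dcc2.txt

# Crack with hashcat (mode 2100 = Domain Cached Credentials 2):
#   hashcat -m 2100 dcc2.txt wordlist.txt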

Prevention/Detection

The caching of domain credentials can be restricted via Group Policy. Under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options, set the value for ‘Interactive logon: Number of previous logons to cache (in case domain controller is not available)‘ to 0.

Alternatively, the CachedLogonsCount value under the registry key HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon can be set to 0. By default, Windows 10 and Server 2016 cache the 10 most recent passwords (i.e. CachedLogonsCount is set to 10).
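
As a hedged example, the same setting can be inspected and tightened from PowerShell:

# CachedLogonsCount is stored as a string (REG_SZ) under the Winlogon key
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'

# Current value (defaults to 10 on most builds)
Get-ItemProperty -Path $winlogon -Name CachedLogonsCount

# Set to 0 so no domain credentials are cached locally
Set-ItemProperty -Path $winlogon -Name CachedLogonsCount -Value '0'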

changentlm

Overview

lsadump::changentlm allows the password of a user to be changed with an NTLM hash or a plaintext password.

Attacking

This requires access to the user’s current NTLM hash.

In particular, this technique can be very handy if we have a hash but need to authenticate to a platform which requires a cleartext password (such as SharePoint/OWA). It allows us to set a known password for an account, then revert it back to the original hash (if we have obtained the hash via DCSync or other means).

Despite these benefits, there are a couple of key drawbacks: it can only be used once per day, and it will not update the password on client apps (such as Outlook on the user’s phone), so could tip them off.

Prevention/Detection

This activity generates event ID 4723, which has the odd characteristic of the subject and target being different users. This should be unusual, as the target should be the only person changing their own password. If an admin performs a password reset, it generates event ID 4724 instead.
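
A hedged example of hunting for that characteristic with PowerShell (the field names are pulled from the event XML and may need tweaking for your environment):

# Find recent 4723 (password change) events where the subject is not the target account
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4723; StartTime = (Get-Date).AddDays(-1) } |
    ForEach-Object {
        $xml = [xml]$_.ToXml()
        [pscustomobject]@{
            Time    = $_.TimeCreated
            Subject = ($xml.Event.EventData.Data | Where-Object Name -eq 'SubjectUserName').'#text'
            Target  = ($xml.Event.EventData.Data | Where-Object Name -eq 'TargetUserName').'#text'
        }
    } |
    Where-Object { $_.Subject -ne $_.Target }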

The new password will get added to the password history, which can help correlate events. Additionally, enabling Azure Conditional Access policies to force MFA or restrict logons to trusted network segments can provide another layer of security, although this can be bypassed by the Pass-The-PRT attack (via sekurlsa::cloudap)

dcshadow

Overview

Using lsadump::dcshadow, an attacker can modify any attribute on any user. With some forethought, an attacker could backdoor accounts and gain persistence more stealthily than with other, better-known ways of leveraging Domain Admin permissions.

Attacking

This requires Domain Admin privilege.

As mentioned above, we can modify any attribute on any user. There is also a very good technical deep dive on this technique.

Prevention/Detection

There are a few ways of detecting this activity. First, we can investigate Event ID 4929 (Detailed Directory Service Replication) from non-DC hosts. Additionally, EID 4662 can be monitored for the rapid creation and deletion of a DC object.

Moving away from Event IDs, we can monitor for computers which have an RPC service with a GUID of E3514235-4B06-11D1-AB04-00C04FC2DCD2 exposed, as this should be DC-specific. This GUID also features in a characteristic SPN which can be fingerprinted.
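
A hedged sketch of hunting for that SPN with the ActiveDirectory module (the filtering logic here is my own assumption and should be validated before relying on it):

# Look for the DRS RPC interface GUID inside SPNs registered on non-DC computer accounts
Import-Module ActiveDirectory

$dcNames = (Get-ADDomainController -Filter *).Name

Get-ADComputer -Filter * -Properties servicePrincipalName |
    Where-Object {
        ($_.servicePrincipalName -match 'E3514235-4B06-11D1-AB04-00C04FC2DCD2') -and
        ($dcNames -notcontains $_.Name)
    } |
    Select-Object Name, servicePrincipalName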

dcsync

Overview

One of the classic Active Directory attacks, lsadump::dcsync allows us to perform a DCSync attack. By leveraging the MS-DRSR (Directory Replication Service Remote) protocol, this attack will effectively turn the compromised device into another DC, allowing it to replicate credential material from the legitimate DCs.

Attacking

As mentioned above, we can perform the DCSync attack. This requires both the Replicating Directory Changes and Replicating Directory Changes All ACEs. Typically this is limited to DA, EA and DC groups, but it can be mistakenly granted to other users or groups in AD.

DCSync can allow us to compromise the KRBTGT account, allowing golden ticket attacks to be performed, as well as other attacks.

Prevention/Detection

The easiest way of preventing this attack is to improve the security of your AD environment using an attack path auditing solution such as BloodHound. BloodHound has support for the two required permissions (GetChanges and GetChangesAll), as well as a pre-built query to identify users with these permissions.
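
If you would rather check from PowerShell, a rough PowerView-based sketch of the same idea might look like the following (this assumes PowerView is already loaded; it is not part of BloodHound itself):

# Enumerate principals holding replication (DCSync-capable) rights on the domain object
$domainDN = (Get-Domain).GetDirectoryEntry().distinguishedName

Get-DomainObjectAcl -Identity "$domainDN" -ResolveGUIDs |
    Where-Object { $_.ObjectAceType -match 'DS-Replication-Get-Changes' } |
    ForEach-Object {
        [pscustomobject]@{
            Principal = ConvertFrom-SID $_.SecurityIdentifier
            Right     = $_.ObjectAceType
        }
    }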

Should it not be possible to remediate these permissions in your environment, then you can monitor the network for MS-DRSR traffic coming from non-DC computers, but it is ultimately a far worse solution than preventing the attack in the first place.

lsa

Overview

lsadump::lsa will interact with the LSA server (lsass.exe) in order to dump credentials for a specific user.

Attacking

This function gives us two options for interacting with the LSA server: we can either patch it with /patch and return NTLM hashes (although this is not recommended), or we can inject into the lsass.exe process with /inject. We can specify an individual user with the /name or /id parameters.

Prevention/Detection

Follow the defensive recommendations at the end of this post, in particular the guidance on hardening LSASS.

mbc

Overview

I couldn’t find any blogs relating to this, but I believe this function will reveal the MachineBoundCertificate for a given device. This certificate can be generated by Credential Guard.

Attacking

N/A – No publicly known attacks (yet…)

Prevention/Detection

I couldn’t find any articles relating to preventing this behaviour, but there is a good article on confirming that values within msDS-KeyCredentialLink are legitimate. Credential Guard (and by extension MachineBoundCertificates) can populate this field.

netsync

Overview

lsadump::netsync leverages the Microsoft Netlogon Remote Protocol (MS-NRPC) to allow an attacker to request an NTLM hash for a machine account. This is similar to DCSync, but only for machine accounts. TrustedSec have done an excellent guide to this technique.

Attacking

In order to perform this attack, we need to obtain the NTLM hash of the DC machine account.

After leveraging netsync, we can then create silver tickets as the machine account of our choice.

Prevention/Detection

TrustedSec’s blog post has some good detection measures.

The patch for ZeroLogon (CVE-2020-1472) looks to prevent this attack, unless specific settings are disabled to allow ‘dangerous’ activity.

packages

Overview

This function shows the various credential packages available to Mimikatz.

Attacking

N/A – This just shows information about Mimikatz

Prevention/Detection

N/A – This just shows information about Mimikatz

postzerologon

Overview

Following ZeroLogon exploitation, this will change the machine account to a known value.

Attacking

The targeted machine account will have its password set to ‘Waza1234/Waza1234/Waza1234/’.

Prevention/Detection

Follow Microsoft’s guidance on patching via KB4557222

rpdata

Overview

N/A – Unknown function

Attacking

N/A – Unknown function

Prevention/Detection

N/A – Unknown function

sam

Overview

Running lsadump::sam will dump the hashes stored within the local SAM registry hive.

Attacking

This requires elevated privileges to the machine and is covered under T1003.002.

It can be performed remotely by dumping the SAM and SYSTEM hives using tooling such as reg save and running Mimikatz off the target.

Prevention/Detection

Using strong, unique passwords helps to limit the impact of this style of attack, as credentials recovered from a compromised computer cannot then be re-used elsewhere.

We can detect this attack by enabling ‘Audit Object Access‘. This can be done via Local Security Policy under the following path: Security Settings\Local Policies\Audit Policy\Audit object access. This can also be done via group policy at the following path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Audit Policy\Audit object access

Following this, you then need to enable audit policies for the relevant registry keys, such as the SAM and SECURITY hives. This will generate EID 4656 events which can then be detected. This can be extremely noisy though, so will require further tuning.
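
A hedged example of reviewing those events once auditing is in place (the string match is a rough filter and will need tuning):

# Surface recent 4656 (handle requested) events that reference the SAM or SECURITY hives
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4656; StartTime = (Get-Date).AddHours(-24) } |
    Where-Object { $_.Message -match 'REGISTRY\\MACHINE\\(SAM|SECURITY)' } |
    Select-Object TimeCreated, MachineName, Message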

secrets

Overview

Running lsadump::secrets will dump the hashes stored within the local SECURITY registry hive.

Attacking

This requires elevated privileges to the machine and is covered under T1003.004.

It can be performed remotely by dumping the SECURITY and SYSTEM hives using tooling such as reg save and running Mimikatz off the target.

Prevention/Detection

Using strong, unique passwords helps to limit the impact of this style of attack, as credentials recovered from a compromised computer cannot then be re-used elsewhere.

We can detect this attack by enabling ‘Audit Object Access‘. This can be done via Local Security Policy under the following path: Security Settings\Local Policies\Audit Policy\Audit object access.

This can also be done via group policy at the following path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Audit Policy\Audit object access

Following this, you then need to enable audit policies for the relevant registry keys, such as the SAM and SECURITY hives. This will generate event ID 4656 events which can then be detected. This can be extremely noisy though, so will require further tuning.

setntlm

Overview

Leveraging lsadump::setntlm will allow us to change the password of an account via the command line, should we have sufficient privileges.

Attacking

This function requires us to have ‘privileged access’ over the account, for example if we have compromised an account with the ForceChangePassword permission over another account. A benefit of this method is that it doesn’t require us to know the current or previous password.

Prevention/Detection

This technique will generate event ID 4738. The new password will get added to the password history, which can help correlate events. Additionally, enabling Azure Conditional Access policies to force MFA or restrict logons to trusted network segments can provide another layer of security, although this can be bypassed by the Pass-The-PRT attack (via sekurlsa::cloudap)

trust

Overview

We can use lsadump::trust to patch LSASS in order to obtain the forest trust keys, which can allow us to forge inter-realm tickets.

Attacking

We can forge an inter-realm ticket, allowing DA in Domain A to move laterally into Domain B, should they share a bi-directional trust (Link 1 and Link 2). Additionally, there is an interesting blog by XPN looking at cracking these keys.

Prevention/Detection

Follow the guidance at the end of the post on identifying attack paths to critical assets such as Domain Controllers and Domain Admins.

zerologon

Overview

lsadump::zerologon performs the ZeroLogon attack.

Attacking

This will set the password of the computer account on the targeted domain controller to a blank string.

Prevention/Detection

Follow Microsoft’s guidance on patching via KB4557222

sekurlsa::

backupkeys

Overview

sekurlsa::backupkeys shows the GUIDs of the DPAPI backup keys on a DC. This is very similar to the lsadump::backupkeys command, which additionally gathers the value of the keys and their location on disk.

Attacking

This will reveal the GUID values of the backup DPAPI keys, which can then be exported and leveraged to decrypt secrets on any domain-joined machine, as covered in the attacking section of lsadump::backupkeys.

Prevention/Detection

N/A – See lsadump::backupkeys.

bootkey

Overview

I was unable to find a comprehensive post on sekurlsa::bootkey, so had to do quite a bit of digging to uncover it. Based on a tweet by Benjamin Delpy, we can use this function with the /raw parameter to decrypt ‘LSA Isolated’ (aka Credential Guard-protected) credentials, if we are able to obtain the ‘SecureKey.pdb’ file. From some comments on that post, it appears this might be possible against VMWare hosts.

Attacking

Benjamin Delpy has produced a video which demonstrates how to perform this attack. I am planning on digging into this technique at a later date, but for now I will use the screenshots from Benjamin Delpy’s video. Below we can see the output of sekurlsa::logonpasswords before sekurlsa::bootkey is run:

A minidump of LSASS is then taken and loaded into Mimikatz. The bootkey command is then run passing in the .vmem file of a virtual machine. It appears as if this technique will work if we have access to the physical memory of a machine.

Following this, we can now obtain the cleartext password which is ‘protected’ by Credential Guard.

Prevention/Detection

Follow the defensive recommendations at the end of this post, in particular the guidance on hardening LSASS.

Protecting credentials with Credential Guard is still a worthwhile step to take; this technique doesn’t fundamentally change that guidance!

cloudap

Overview

sekurlsa::cloudap will reveal the Primary Refresh Token (PRT) for an Azure-joined or hybrid Azure-joined device. This PRT will allow us to generate PRT cookies, which in turn allow us to impersonate the user to Azure from an attacker-controlled device.

Attacking

In short, this allows us to perform the Pass-The-PRT Attack. Joe Stocker summarises this attack really well: “This allows the attacker to sign in as the user, even if their device is not Intune compliant or Hybrid Azure AD joined”.

To perform this attack, we will need local admin access to the machine in question, but we will not need to compromise anything such as the TPM, as the information we need (the PRT and derived key) is found in LSASS.

Prevention/Detection

Follow the defensive recommendations at the end of this post, in particular the guidance on hardening LSASS.

credman

Overview

sekurlsa::credman will interact with Credential Manager (credman), to uncover credentials.

Attacking

We can enumerate stored credentials with vaultcmd, although in practice this didn’t reveal all the entries on my system.
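
For reference, a hedged sketch of that enumeration from a PowerShell prompt (the vault names below are the Windows defaults, and cmdkey is an extra built-in which gives a similar view):

# List credentials known to Credential Manager via the built-in tools
cmdkey /list
vaultcmd /listcreds:"Windows Credentials" /all
vaultcmd /listcreds:"Web Credentials" /all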

Additionally, we can back up the CredMan files to export them off a target system, but this does require GUI access.

Prevention/Detection

The best way of preventing this is to stop Credential Manager from caching passwords in the first place. I was unable to find a method of being more granular than a straight allow or deny of Credential Manager.

Backing up all the credentials from Credential Manager will create Event ID 5376.

dpapi

Overview

sekurlsa::dpapi will dump DPAPI user master keys from LSASS. The SHA1 hash of the user’s password is cached by LSASS following the initial logon, and this SHA1 value is used by DPAPI to decrypt the encrypted blobs.

Attacking

We can use this function to reveal the masterkeys for each user. These keys are then used to decrypt DPAPI-protected credentials, which can be performed with the dpapi:: module in Mimikatz.

Prevention/Detection

In his post, Harmj0y states that it is very difficult to prevent this attack.

Follow the defensive recommendations at the end of this post to harden LSASS.

dpapisystem

Overview

sekurlsa::dpapisystem will obtain the DPAPI system master keys from the SYSTEM and SECURITY registry hives.

Attacking

We can use this function to reveal the System master key. This key is then used to decrypt DPAPI-protected credentials, which can be performed with the dpapi:: module in Mimikatz.

Prevention/Detection

In his post, Harmj0y states that it is very difficult to prevent this attack.

Follow the defensive recommendations at the end of this post to harden LSASS.

ekeys

Overview

sekurlsa::ekeys will dump the Kerberos encryption keys (i.e. all key types) from LSASS. This is very similar to the triage command in Rubeus, but with key differences under the hood: Rubeus leverages the LsaCallAuthenticationPackage API call, whereas Mimikatz parses LSASS memory.

Both have different pros and cons, but the sekurlsa::ekeys command is likely to be the less OPSEC-safe option due to interacting with LSASS.

Attacking

We can perform many attacks with a valid hash of a user, in particular pass-the-hash or overpass-the-hash. We can use Rubeus to obtain a valid TGT from a hash with asktgt and the /rc4 or /aes256 flags. Using the sekurlsa::ekeys command, we will be able to find the AES256 key for the user, which is more typical for an environment and so might be more OPSEC-safe.

Prevention/Detection

If the hashes are used via overpass-the-hash, using the RC4 hash instead of an AES256 key produces a detectable downgrade in encryption strength.

Follow the defensive recommendations at the end of this post to harden LSASS.

kerberos

Overview

I believe sekurlsa::kerberos lists credentials cached by the Kerberos SSP following a successful authentication to the network. This appears to be separate from the Kerberos tickets which are also cached and obtainable through Rubeus’s triage command, or sekurlsa::tickets within Mimikatz.

Attacking

We can perform many attacks with a valid hash of a user, in particular pass-the-hash or overpass-the-hash.

Prevention/Detection

Follow the defensive recommendations at the end of this post to harden LSASS.

krbtgt

Overview

sekurlsa::krbtgt obtains the password hash of the krbtgt account from a domain controller.

Attacking

Compromising the KRBTGT account allows golden ticket attacks to be performed.

Prevention/Detection

Follow the guidance at the end of the post on identifying attack paths to critical assets such as Domain Controllers and Domain Admins.

livessp

Overview

sekurlsa::livessp gathers data from the LiveSSP provider, which was introduced in Windows 8 to support signing in with a Live account.

Attacking

We can perform many attacks with a valid hash of a user, in particular pass-the-hash or overpass-the-hash.

Prevention/Detection

From GentilKiwi’s post, which refers to this post, we can likely remove LiveSSP from the authorised credential providers without too much impact.

KB2871997 appears to introduce the capability to prevent caching LiveSSP credentials for members of the Protected Users group.

logonpasswords

Overview

sekurlsa::logonpasswords is the classic mode of running Mimikatz; it returns credentials from all supported credential providers and SSPs.

Attacking

N/A – The various SSPs and credential providers are covered in this guide.

Prevention/Detection

Follow the defensive recommendations at the end of this post to harden LSASS.

minidump

Overview

sekurlsa::minidump instructs Mimikatz to parse a memory dump of LSASS and not to interact with lsass.exe on the machine. This has the advantage of not having to interact with LSASS multiple times if you are going to perform multiple queries.

Attacking

There are various ways of obtaining a memory dump of a process in Windows, though they all require admin privileges.

Common methods include using ProcDump.exe, Task Manager or rundll32 with comsvcs.dll.
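
A couple of hedged examples of those methods (run from an elevated prompt; paths are placeholders):

# ProcDump (Sysinternals) - full memory dump of lsass.exe
procdump.exe -accepteula -ma lsass.exe C:\Temp\lsass.dmp

# Find the lsass.exe PID for the rundll32 method
Get-Process -Name lsass | Select-Object Id

# rundll32 + comsvcs.dll (typically run from a cmd.exe prompt; replace <PID> with the PID found above):
#   rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump <PID> C:\Temp\lsass.dmp full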

Prevention/Detection

N/A – This is Mimikatz functionality and can be performed off the host.

msv

Overview

sekurlsa::msv shows credentials from the MSV1_0 credential package, which typically handles “Interactive logons, batch logons, and service logons”.

Attacking

This is another credential provider which Mimikatz can parse. There are some interesting blogs which dig into this provider in greater detail.

Prevention/Detection

From Benjamin Delpy’s post, we need to retain MSV1_0 in order for the computer to work properly, so we will have to harden LSASS instead to prevent these secrets from being stolen. 

process

Overview

Following the usage of sekurlsa::minidump, sekurlsa::process will switch back to interacting with lsass.exe on the machine, rather than leveraging the data from the dumped file.

Attacking

N/A – This is Mimikatz functionality

Prevention/Detection

N/A – This is Mimikatz functionality

pth

Overview

sekurlsa::pth allows us to perform pass-the-hash attacks in Mimikatz, as well as spawning a process as a given user.

Attacking

This will allow us to spawn a process of our choice as a given user if we have the hash of their password.

There are also some specific criteria around which params are needed, as covered here.

It might be easiest to spawn a bogus program using this technique, then use steal_token in Cobalt Strike to leverage the access token of this process created via PTH.

Prevention/Detection

Generally this produces EID 4624 with LogonType 9, but this blog covers some more detection settings which can help.

ssp

Overview

sekurlsa::ssp will gather data from SSPs (Security Support Providers). Applications can implement their own third-party SSPs to handle authentication if SSO isn’t an option. Benjamin Delpy’s blog mentions that applications with third-party accounts might legitimately leverage this functionality.

Attacking

As an attacker, we can leverage this in three ways. We can simply use sekurlsa::ssp to read credentials gathered by pre-existing SSPs. Using misc::memssp, we can register a new SSP which will log passwords in plaintext, but it will not persist over a reboot and requires injecting into LSASS, which is obviously not ideal. Finally, using mimilib.dll, we can create a more persistent method for leveraging SSPs, but it does require dropping a DLL to disk.

Prevention/Detection

Look for the existence of log files such as C:\Windows\System32\kiwissp.log or C:\Windows\System32\mimilsa.log, which will show the credentials potentially gathered by an attacker. Although if this is seen, then it is likely too late. 

Follow the defensive recommendations at the end of this post to harden LSASS, although remember that this is a Credential Guard bypass, therefore PPL will have to be enabled to prevent this attack from being successful.

tickets

Overview

sekurlsa::tickets lists all the Kerberos tickets from the system.

Attacking

Functionally, I believe this is similar to Rubeus’s triage command. Both require elevated access to view tickets from other users.

By default, tickets are valid for 10 hours and can be renewed for up to 7 days. This can be leveraged using pass-the-ticket, which is covered by many other blogs.

Prevention/Detection

Follow the defensive recommendations at the end of this post to harden LSASS.

There is the potential to detect this by comparing the users who are logged on against the Kerberos tickets in memory. This seems quite complex and might be hard to implement at scale, even if it is effective.

Defender for Identity is able to detect suspected pass-the-ticket activity through external ID 2018.

trust

Overview

We can use sekurlsa::trust to obtain the forest trust keys, which can allow us to forge inter-realm tickets.

Attacking

We can forge an inter-realm ticket, allowing DA in Domain A to move laterally into Domain B, should they share a bi-directional trust (Link 1 and Link 2). Additionally, there is an interesting blog by XPN looking at cracking these keys

Prevention/Detection

Follow the guidance at the end of the post on identifying attack paths to critical assets such as Domain Controllers and Domain Admins.

tspkg

Overview

The sekurlsa::tspkg function will list credentials from the ‘Terminal Services’ authentication provider, more commonly known as TsPkg. This is closely related to TSSSP (the Terminal Services SSP), which leverages TsPkg to manage passwords.

Attacking

Credentials are only cached by TsPkg when a series of non-default settings are enabled. These relate to enabling a semi-SSO mode in RDP, where NLA is used. We need these settings to be enabled, and an authorised server (or a wildcard) to be set in the Computer Configuration\Administrative Templates\System\Credentials Delegation field of a GPO.

Prevention/Detection

The infamous KB2871997 introduces the capability to prevent plaintext caching of this SSP, but it is not enabled by default!

wdigest

Overview

sekurlsa::wdigest lists credentials from the Digest SSP, better known as wdigest.dll or WDigest.

Attacking

Hashes recovered from WDigest can be leveraged using standard pass-the-hash (T1550.002) or overpass-the-hash methodology.

Prevention/Detection

KB2871997 introduces the capability to prevent plaintext caching of WDigest credentials. Be warned, as it is installed but not enabled by default on Win 8.1 and 2012 R2!

To enable this, install the patch for Windows 7/Windows 8/Server 2008 R2/Server 2012. Then ensure the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\WDigest key has the UseLogonCredential value set to 0.
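
A hedged example of setting and verifying that value with PowerShell:

# Disable WDigest plaintext credential caching and confirm the value
$wdigest = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest'

Set-ItemProperty -Path $wdigest -Name UseLogonCredential -Value 0 -Type DWord
Get-ItemProperty -Path $wdigest -Name UseLogonCredential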

If this is set to 1, then it will not protect the machine at all (i.e. WDigest will cache credentials in plaintext). If the UseLogonCredential value doesn’t exist, then older OSs will again cache in plaintext.

This is a very complex patch, so refer to the KB2871997 article or an article by StealthBits which breaks down these considerations.

Defensive Advice

The sections below cover some defensive guidance to help protect a device against the various Mimikatz functions covered above.

Hardening LSASS

This section will cover some recommendations for enhancing the security of LSASS. Due to it being commonly targeted by attackers, it makes sense to secure this as best as possible.

Enable PPL (Protected Process Light)

Enabling PPL (M1025) effectively means that even administrative users are unable to interact with LSASS via tools such as Mimikatz, as PPL will prevent an attacker from successfully invoking the OpenProcess API call against LSASS, even if they have SeDebugPrivilege.

With this enabled, it will force an attacker to resort to other methods to bypass PPL, which open up more opportunities to detect them. Common examples of bypassing PPL typically rely on loading a custom driver, such as Mimikatz’ own mimidrv driver.

To enable PPL on an individual computer, open the HKLM\SYSTEM\CurrentControlSet\Control\Lsa key, add the DWORD value “RunAsPPL“ set to 1, and restart. This can also be done via group policy.
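
As a hedged single-host example (a reboot is needed before this takes effect):

# Enable LSASS Protected Process Light via the RunAsPPL registry value
$lsaKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa'

New-ItemProperty -Path $lsaKey -Name RunAsPPL -PropertyType DWord -Value 1 -Force
# Restart-Computer   # uncomment once you are ready to reboot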

Enable Credential Guard

We can enable Credential Guard (M1043) to run ‘LSASS’ within its own heavily-restricted virtual environment; this isolated ‘version’ of LSASS is known as LSAIso. LSASS still runs on the computer, but talks to LSAIso in order to handle and monitor authentication requests. This limits the ability of an attacker to interact with the credentials within LSASS, as they are now isolated.

We can enable Credential Guard via group policy, Intune or on the local machine.
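
A hedged way of checking whether Credential Guard is actually configured and running on a given host:

# Query the Device Guard WMI class; a 1 in SecurityServicesRunning indicates Credential Guard
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object SecurityServicesConfigured, SecurityServicesRunning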

As covered in sekurlsa::ssp, a well known bypass of this is to register a custom SSP, which will then capture any subsequent credentials in plaintext.

Enable Attack Surface Reduction (ASR) Rules

Should you be enrolled in a compatible plan, enabling ASR rules can help to monitor and prevent malicious access to LSASS. This includes attacks such as Pass-The-PRT (sekurlsa::cloudap), which was able to still work even with Credential Guard enabled.

The following PowerShell command will enable the rule in blocking mode:

Add-MpPreference -AttackSurfaceReductionRules_Ids 9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2 -AttackSurfaceReductionRules_Actions Enabled

To enable it in audit mode, use -AttackSurfaceReductionRules_Actions AuditMode instead. With ASR enabled, this will generate event IDs 1121 (block mode) and 1122 (audit mode) within the Windows Defender/Operational log.
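
A hedged example of pulling those events back out for review:

# Review recent ASR block (1121) and audit (1122) events
Get-WinEvent -LogName 'Microsoft-Windows-Windows Defender/Operational' -MaxEvents 500 |
    Where-Object { $_.Id -in 1121, 1122 } |
    Select-Object TimeCreated, Id, Message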

Restricting Local Admin Rights

Preventing users (and attackers) from holding administrative privileges in the first place adds another layer of defence. Whilst there are a number of ways of escalating privileges, forcing an attacker to do so introduces more opportunities to catch them.

Reducing Credential Reuse

The age-old advice of not re-using credentials still holds today. A predictable or weak password scheme is trivially exploitable by an attacker and can lead to widespread access across the environment. Using strong, unique passwords helps to prevent this.

Assess AD Attack Paths

Using a tool such as BloodHound, we can assess an Active Directory environment to identify ‘attack paths’ which an attacker might leverage. The BloodHound Enterprise site has a good introduction to this topic, but all of the concepts can be performed using the free version.

By removing attack paths within your environment, you can effectively prevent an attacker from directly escalating their privilege within Active Directory. If an attacker is able to gain permissions such as Domain Administrator, they are able to perform a number of highly powerful attacks and will be extremely difficult to fully remove from the environment.

Attack paths can also be created when privileged accounts log into other computers. Therefore limiting the computers which powerful accounts such as the Domain Admin can log into will help to reduce the chance of an attacker compromising the account.

OffSecOps: Using Jenkins For Red Team Tooling

Origin

The inspiration for this post came from the excellent talk by Harmj0y at SO-CON 2020. I have been meaning to dig into using Jenkins to automate the building of red team tooling for a while now, but having recently completed the RTO exam, I felt it was time to have a play!

The Gist referenced by Harmjoy can be found here.

Aims

Before starting this mini project, my aim was to build a reasonably simple CI pipeline to:

  1. Get the latest version of Rubeus
  2. Perform some obfuscation
  3. Compile it.
  4. Have a less detectable Rubeus executable

I also want to be able to take this code and re-use it on various other projects/repos as I wish, so modularity is a key aim here.

There are a fair number of similarities between this post and the OffensivePipeline project, but I wanted to expand my knowledge of Jenkins rather than using C#, which I am already pretty comfortable with. I also feel Jenkins is likely to offer more flexibility in the future as I expand this project further.

An important caveat before we begin: I realise this guide only touches on some very basic obfuscation. The resulting binary will still be easily detectable, but this guide should highlight some of the basics!

Initial Jenkins Configuration

There are plenty of guides out there for installing Jenkins, so I won’t labour the point here. A blog post from XenoSCR helped me at the start to install and configure Jenkins, as well as to set up a basic pipeline.

As stated earlier I wanted to be able to compile the projects within Jenkins, so naturally MSBuild was going to be the main candidate to do this. As usual, StackOverflow contains a guide on how to setup MSBuild in Jenkins, which I will cover below.

First, let’s go to Manage Jenkins -> Plugin Manager. I didn’t have an MSBuild entry in my Global Tool Configuration page, so I had to go and install it from the Plugin Manager.

When this has downloaded, we will add our configuration by going to Manage Jenkins -> Global Tool Configuration and scrolling down to the MSBuild section.

Click on ‘Add MSBuild’ and fill in the details for the path to MSBuild. Ensure you use the path to MSBuild for your installation of Visual Studio; I originally set it to the path for v4.0.30319 but had a lot of issues with it failing to compile the project correctly.

Jenkins Pipeline

As described by Will in his talk, we will use ‘Pipelines’ to perform this compilation. From the Jenkins Dashboard we can click on New Item, and then select ‘Pipeline’.

For a basic project, let’s use the code sample below. This will download Rubeus from GitHub and then show the contents of the folder.

pipeline { 
    agent any
    
    environment { 
        PROJECT_NAME = "Rubeus"
    }
    
    stages {
    	stage('Checkout') {
    	    steps {
                git """https://github.com/GhostPack/${env.PROJECT_NAME}.git"""
    	    }
    	}
            
        stage('Echo') {
            steps {
                bat """dir C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}"""
            }
        }
    }
}

Click on Save, and then on Build Now. We can then click on ‘Console Output’ to show us what Jenkins is doing. This should reveal the information below, which will show us the root directory of our Rubeus directory – proving we can clone the repository via code!

Now that we have proved we can run a job and it will execute code, let’s try to automate a bit more of this. We are currently pulling the repo and running dir; let’s try to actually compile this code into an executable.

Compiling Rubeus

Thanks to us configuring MSBuild earlier, we can now refer to it from within our pipeline – no need to mess around with remembering the path all of the time!

First off, MSBuild has a *fairly* complex command line structure, so I first got this working on the command line before porting it across to Jenkins. This isn’t helped by Rubeus using .NET v4.0, which is no longer officially supported by Microsoft, so I was unable to find a legit download of the binary. Due to this, I used .NET v4.8, which isn’t the best option for us in terms of compatibility, but we can always change that down the road!

After a lot of trial and error with MSBuild and the various command line options, my final command was:

"C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\MSBuild.exe" /p:Configuration=Release "/p:Platform=Any CPU" /maxcpucount:2 /nodeReuse:false /p:TargetFrameworkMoniker=".NETFramework,Version=v4.8" Rubeus.sln

We now get a success message from MSBuild!

Let’s now change our Jenkinsfile so that it will compile the tool using MSBuild. We will wrap that earlier MSBuild command in a new stage to help keep our project nice and modular:

stage('Compile') {
    steps {
        bat "\"${tool 'MSBuild_VS2022'}\\MSBuild.exe\" /p:Configuration=${env.CONFIG} \"/p:Platform=${env.PLATFORM}\" /maxcpucount:%NUMBER_OF_PROCESSORS% /nodeReuse:false /p:TargetFrameworkMoniker=\".NETFramework,Version=v4.8\" ${env.PROJECT_FILE_PATH}" 
    }
}

I also added in a temporary stage to print out the contents of the Rubeus\bin\Release folder. This helped me test that it had actually compiled the executable and saved me a few clicks as I debugged the pipeline.

With these additions, our jenkinsfile now looks like the code below. You can see I have added some more environment variables, which will help me to reuse this code for other repositories.

pipeline { 
    agent any
    
    environment { 
        PROJECT_NAME = "Rubeus"
        PROJECT_FILE_PATH = "Rubeus.sln"
        CONFIG = 'Release' 
        PLATFORM = 'Any CPU' 
    }
    
    stages {
    	stage('Checkout') {
    	    steps {
                git """https://github.com/GhostPack/${env.PROJECT_NAME}.git"""
    	    }
    	}
            
        stage('Echo') {
            steps {
                bat """dir C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}"""
            }
        }
        
        stage('Compile') {
            steps {
                bat "\"${tool 'MSBuild_VS2022'}\\MSBuild.exe\" /p:Configuration=${env.CONFIG} \"/p:Platform=${env.PLATFORM}\" /maxcpucount:%NUMBER_OF_PROCESSORS% /nodeReuse:false /p:TargetFrameworkMoniker=\".NETFramework,Version=v4.8\" ${env.PROJECT_FILE_PATH}" 
            }
        }
        
        stage('Echo Post Compilation') {
            steps {
                bat """dir C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}\\bin\\${CONFIG}"""
            }
        }
    }
}

We can assess our progress by uploading our binary to VirusTotal. Whilst I wouldn’t do this on a live test, it is handy for assessing how well this pipeline works. We will test this again at the end of this post, but for now our binary is detected by 41 vendors – not surprising, as it is totally unobfuscated.

Jenkins Shared Libraries

We now have a good base to build from, as we can pull the latest version of Rubeus and compile it at the click of a button! Our aim now is to remove some well-known strings from the Rubeus executable. This will be the first baby step towards obfuscating our executable file.

To do this, we will use Shared Libraries to bundle up samples of code which we will reuse, covering things such as changing the default GUIDs, removing comments and so on. Conceptually, this is very similar to using functions when programming.

First off, we will create a shared library by creating a folder structure as shown below. I based my library on this blog post.

- obfuscation-lib/
  --> vars/
      --> someFunction.groovy

The code for my someFunction.groovy file was:

def call(String name = 'User') {
    echo "Welcome, ${name}."
}

Annoyingly, we can’t point Jenkins at a local path for a Shared Library, as it expects to load it from Git. There is a nice hacky workaround where we can load a local Git repository using the file:// protocol handler, as described here.

To prep my library for this, I created a new git repository and committed my code to it. You have to remember to commit your code after every change to your library!

We will then go to Manage Jenkins -> Configure System -> Global Pipeline Libraries, and add our library in.

We can choose a name here, I will go for obfuscation-lib, and then we set the project repository to point at the location of our newly created git repository.

Back in our pipeline’s Jenkinsfile, we now have to import this library using the name we just set (at the top of the above photo). We import it with the following code:

@Library('LIBRARY_NAME')_

Don’t forget the underscore after the bracket, else it won’t work!

To summarise, our shiny new library gives us the very basic pipeline below. Whilst you don’t have to pass variables into the function call, I wanted to ensure it would work.

@Library('obfuscation-lib')_

pipeline { 
    agent any
    
    environment { 
        SOME_VAR = "SOME_VALUE"
    }
    
    stages{
        stage('Library Test') {
            steps{
                someFunction "${SOME_VAR}"
            }
        }
    }
}

As shown below, it will print out our variables.

Comment Obfuscation

Putting this all together, let’s use our obfuscation-lib library to obfuscate something useful within our target repository. To do this, we will build a pretty basic string-replacement function. We will use this to replace any phrases which are known to set off EDR/AV alerts. A basic example would be replacing any mention of ‘mimikatz’.

Firstly, let’s get the path to the Jenkins workspace. Ideally we will do this without having to manually specify it for each function call. Luckily we can use ${WORKSPACE} within our shared library to get this path. We can now update our library and it will print the directory out.

def call(String name = 'User') {
    echo "Welcome, ${name}. Workspace is ${WORKSPACE}"
}

Commit our changes and re-run the pipeline, and we get the following:

From here, we will use some code from this post to create a simple find and replace tool.

//Heavily adapted from http://www.ensode.net/roller/dheffelfinger/entry/groovy_script_to_find_and
def call(String extension = '.cs', String findText = '', String replaceText = '') {
    //Navigate to the current workspace
    def currentDir = new File("${WORKSPACE}");

    def backupFile;
    def fileText;

    currentDir.eachFileRecurse({
        file ->
            if (file.name.endsWith(extension)) {
                fileText = file.text;
                //Keep a backup of the file before modifying it
                backupFile = new File(file.path + ".bak");
                backupFile.write(fileText);
                fileText = fileText.replaceAll(findText, replaceText)
                file.write(fileText);
            }
    })
}

We will now add another stage into the pipeline, called “Obfuscate“. We will attempt to obfuscate the version number to demonstrate that our function works. I will also modify the “Echo Post Compilation” stage to instead run Rubeus, so we can check whether the version number changed or not. This leaves us with the two new stages below.

stage('Obfuscate') {
    steps {
        replaceAll(".cs", "v2.0.2", "NO_SIGNATURES_PLZ")
    }
}

stage('Execute Post Compilation') {
    steps {
        bat """C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}\\bin\\${CONFIG}\\Rubeus.exe"""
    }
}

After running this, it failed with an error relating to “expected to call java.io.File.eachFileRecurse but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call“. This is explained here, but basically we need to add @NonCPS to the top of our custom function.

We now end up with the following jenkinsfile:

@Library('obfuscation-lib')_

pipeline { 
    agent any
    
    environment { 
        PROJECT_NAME = "Rubeus"
        PROJECT_FILE_PATH = "Rubeus.sln"
        CONFIG = 'Release' 
        PLATFORM = 'Any CPU' 
    }
    
    stages {
    	stage('Checkout') {
    	    steps {
                git """https://github.com/GhostPack/${env.PROJECT_NAME}.git"""
    	    }
    	}
            
        stage('Echo') {
            steps {
                bat """dir C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}"""
            }
        }
        
        stage('Obfuscate') {
            steps {
                replaceAll(".cs", "v2.0.2", "NO_SIGNATURES_PLZ")
            }
        }
        
        stage('Compile') {
            steps {
                bat "\"${tool 'MSBuild_VS2022'}\\MSBuild.exe\" /p:Configuration=${env.CONFIG} \"/p:Platform=${env.PLATFORM}\" /maxcpucount:%NUMBER_OF_PROCESSORS% /nodeReuse:false /p:TargetFrameworkMoniker=\".NETFramework,Version=v4.8\" ${env.PROJECT_FILE_PATH}" 
            }
        }
        
        stage('Execute Post Compilation') {
            steps {
                bat """C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}\\bin\\${CONFIG}\\Rubeus.exe"""
            }
        }
    }
}

And then our custom function:

//Heavily adapted from http://www.ensode.net/roller/dheffelfinger/entry/groovy_script_to_find_and
@NonCPS
def call(String extension = '.cs', String findText = '', String replaceText = '') {
    //Navigate to the current workspace
    def currentDir = new File("${WORKSPACE}");
    def fileText;

    currentDir.eachFileRecurse({
        file ->
            if (file.name.endsWith(extension)) {
                fileText = file.text;
                fileText = fileText.replaceAll(findText, replaceText)
                file.write(fileText);
            }
    })
}

Now when we run our pipeline, we can see that we have modified the version number which Rubeus puts out to the console:

Extending Our Custom Function (Again)

We can go further and create a basic function which applies some common OPSEC considerations for a C# project. There are a lot of different checks which could be built in here, but we will focus on two main ones just to prove the point:

  1. Change the GUID of the binary
  2. Remove assembly information

Changing the GUID

As we can see from the AssemblyInfo.cs file, Rubeus uses a GUID of 658c8b7f-3664-4a95-9572-a3e5871dfc06.

This will tip off any analyst that we are using Rubeus, as we can see from Googling the GUID:

We will first use a regex to replace this. I found https://www.freeformatter.com/java-regex-tester.html a great resource when developing these Java regexes, and it saves re-running the pipeline over and over again! To save you from having to write the Java regex yourself, below is my code:

@NonCPS
def call() {
    //Replace the default GUID & assembly info
    sanitiseAssemblyInfo();
}

@NonCPS
def sanitiseAssemblyInfo(){
    def assemblyInfoFile = new File("${WORKSPACE}\\${PROJECT_NAME}\\Properties\\AssemblyInfo.cs");
    def assemblyInfoText = assemblyInfoFile.text;

    //Replace the default GUID (e.g. "[assembly: Guid("658c8b7f-3664-4a95-9572-a3e5871dfc06")]")
    def newGUID = "[assembly: Guid(\"${UUID.randomUUID().toString()}\")]"
    assemblyInfoText = assemblyInfoText.replaceAll(/\[assembly:\sGuid.*/, newGUID)

    //Write the change back to disk
    assemblyInfoFile.write(assemblyInfoText);
}

After committing our changes and running the pipeline, we can see that the AssemblyInfo.cs file has been modified, and we have a new GUID.

We can then extend our function to clear all the assembly values, leaving only a version number. This follows a very similar pattern to the function above:

@NonCPS
def call() {
    //Replace the default GUID & assembly info
    sanitiseAssemblyInfo();
}

@NonCPS
def sanitiseAssemblyInfo(){
    def assemblyInfoFile = new File("${WORKSPACE}\\${PROJECT_NAME}\\Properties\\AssemblyInfo.cs");
    def assemblyInfoText = assemblyInfoFile.text;

    //Replace the default GUID (e.g. "[assembly: Guid("658c8b7f-3664-4a95-9572-a3e5871dfc06")]")
    def newGUID = "[assembly: Guid(\"${UUID.randomUUID().toString()}\")]"
    assemblyInfoText = assemblyInfoText.replaceAll(/\[assembly:\sGuid.*/, newGUID)

    //Replace any entry beginning with "[assembly: Assembly", removing the value within the brackets.
    //I.e. [assembly: AssemblyTitle("Rubeus")] ==> [assembly: AssemblyTitle("")]
    //See https://stackoverflow.com/a/38296697 for more info
    assemblyInfoText = assemblyInfoText.replaceAll(/(\[assembly:\sAssembly.*\(\").*/, '$1\")]')

    //Finally, we will set the AssemblyVersion value to be 1.0.0.0 just to make it look a bit more legit
    assemblyInfoText = assemblyInfoText.replaceAll(/\[assembly:\sAssemblyVersion.*/, "[assembly: AssemblyVersion(\"1.0.0.0\")]")

    //And write it all to the file :)
    assemblyInfoFile.write(assemblyInfoText);
}

Now if we view the AssemblyInfo.cs file, we can see that the assembly information has been stripped out successfully.

Putting this all together, we have our final jenkinsfile:

@Library('obfuscation-lib')_

pipeline { 
    agent any
    
    environment { 
        PROJECT_NAME = "Rubeus"
        PROJECT_FILE_PATH = "Rubeus.sln"
        CONFIG = 'Release' 
        PLATFORM = 'Any CPU' 
    }
    
    stages {
        stage('Checkout') {
    	    steps {
                git """https://github.com/GhostPack/${env.PROJECT_NAME}.git"""
    	    }
    	}

        stage('Obfuscate') {
            steps {
                replaceAll(".cs", "v2.0.2", "NO_SIGNATURES_PLZ")
                cSharpBasicOpsec()
            }
        }
        
        stage('Compile') {
            steps {
                bat "\"${tool 'MSBuild_VS2022'}\\MSBuild.exe\" /p:Configuration=${env.CONFIG} \"/p:Platform=${env.PLATFORM}\" /maxcpucount:%NUMBER_OF_PROCESSORS% /nodeReuse:false /p:TargetFrameworkMoniker=\".NETFramework,Version=v4.8\" ${env.PROJECT_FILE_PATH}" 
            }
        }
        
        stage('Execute Post Compilation') {
            steps {
                bat """C:\\ProgramData\\Jenkins\\.jenkins\\workspace\\MSBuildTest\\${env.PROJECT_NAME}\\bin\\${CONFIG}\\Rubeus.exe"""
            }
        }
    }
}

In addition to these functions, I then added another find and replace to remove the default help text for Rubeus. After compiling this project, we can now see that only 32 vendors detect the code – meaning we have defeated 9 of them!

What Next?

From this point, there are a lot of different directions in which you could take this project. Harmj0y touches on a few within his talk, but some of the easier items I have implemented are:

Changing The Namespace

The Rubeus namespace is very well known, so changing this was one of my first priorities.

This is easily visible within the project:

Removal Of ‘Bad’ Functions

Using our obfuscation-lib library, I created a new function to remove any functions which are known to be easily detectable. As mentioned before, I used this to remove the default help text functions “ShowLogo” and “ShowUsage” in Rubeus.

For now, I have opted to just replace the function with a single new line, though this could be replaced with C# code, should we need to preserve functionality.

Implementing Automatic AMSI Checking

By using the ThreatCheck project by RastaMouse, we can have our pipeline automatically check the compiled binary against AMSI signatures. For now we will just run the check and manually review the output, but for production use we would likely implement this as a test – so that any code detected by AMSI is not compiled for use.

Slack Integration

Instead of having to review the output of our builds manually, we can use a plugin and Slack WebHooks to get the data sent straight to us!

Summary

These are just the first baby steps into using Jenkins for OffSecOps, but hopefully it shows the potential of a system such as this.

Some of the next steps I have planned include implementing more GitHub projects, as well as running multiple pipelines to automatically build my red teaming toolset.