GoodHound came about because I had a need to perform a repeatable assessment of attack paths using Bloodhound.
I found that, when used in a defensive way, BloodHound was so good at identifying attack paths in a domain that I was faced with several thousand to process with each review, and had no way to deduplicate the findings I had already logged in previous reviews.
I wanted a way to programmatically find attack paths and display them in a prioritised order, starting with the number of users exposed to each path. This meant I could find the key points in the network and advise the remediation teams on the actions that would mitigate the attack paths usable by the most users.
I also wanted to be able to extract some summary management information and log it over time, which could help to demonstrate improvements to management using charts.
Finally, I wanted a way to plumb the paths found back into BloodHound so they could be presented in the familiar and easy-to-read attack path graph that BloodHound has always done so well.
This is a working project, and my first ever attempt at a real tool. I’m grateful for any feedback you may have, whether that’s bugs, issues, feature requests or general usage questions. Just log an issue and I’ll do my best to accommodate.
Quick Start
To get up and running quickly with default options:
Pre-requisites
- Ensure you already have Bloodhound and neo4j set up – https://bloodhound.readthedocs.io/en/latest/#install
- Ensure you have python installed
- Upload your SharpHound output into Bloodhound
Install GoodHound
pip install goodhound
Run with basic options
goodhound -p "neo4jpassword"
Installation
Pre-requisites
- Python and pip already installed.
- This has been tested with Python versions 3.9 and 3.10. Earlier versions may also work. Feel free to try, and log an issue letting me know whether or not it worked.
- Both neo4j and bloodhound will need to be already installed. The docs at https://bloodhound.readthedocs.io/en/latest/#install explain this well.
- If running Bloodhound with Sharphound version 4.1 you will need to add a parameter when running GoodHound to patch a minor bug in Bloodhound 4.1 (see bug report). The parameter is detailed in the Bloodhound 4.1 section below.
Using Pipenv (recommended)
If you don’t want to make any changes to your installed python libraries you can use pipenv:
pipenv install goodhound
Then call pipenv run to run GoodHound inside of the virtual environment just created
pipenv run goodhound -h
Using Pip
Use pip to install directly from the PyPi library:
pip install goodhound
This will create a ‘goodhound’ entrypoint that you can call from the CLI:
goodhound -h
Clone from GitHub
To run the raw code from GitHub
git clone https://github.com/idnahacks/GoodHound.git
cd GoodHound
pip install -r requirements.txt
python -m goodhound -h
Bloodhound 4.1
With the latest release of Bloodhound 4.1 there is a minor bug where nodes that do not have the highvalue attribute set to true do not end up with the attribute at all.
This causes an issue for GoodHound because it uses this attribute to identify paths from non-highvalue nodes to highvalue nodes.
The Patch
When running GoodHound on a set of data that has been gathered using SharpHound 4.1, add the parameter --patch41:
goodhound -p "neo4jpassword" --patch41
This goes through the neo4j database and assigns the highvalue attribute as false anywhere that it isn't already set to true.
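As a rough illustration only (not necessarily the exact query GoodHound runs), the patch is equivalent to something along these lines in cypher:
match (n) where n.highvalue is null set n.highvalue=false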
Parameters
Below is an explanation of all of the available parameters that can be used with GoodHound.
Database settings
-s can be used to point GoodHound to a server other than the default localhost installation (bolt://localhost:7687)
-u can be used to set the neo4j username
-p can be used to set the neo4j password
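For example, to connect to a remote neo4j instance (the hostname and credentials below are placeholders):
goodhound -s bolt://bloodhound-server:7687 -u neo4j -p "neo4jpassword"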
Output formats
-o can be used to select from:
- stdout - displays the output on screen
- csv - saves a comma separated values file for use with reporting or MI
- md or markdown - displays a markdown formatted output
-d an optional directory path for the csv output option
By default the output is csv and the files are created in the current working directory.
-q suppresses all output
-v enables verbose output
--debug enables debug output
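For example, to display the results on screen, or to write the csv files to a chosen directory (the directory path below is just an example):
goodhound -p "neo4jpassword" -o stdout
goodhound -p "neo4jpassword" -o csv -d /path/to/reports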
Number of results
-r can be used to select the number of results to show. By default the top 5 busiest paths are displayed.
-sort can be used to sort by:
- number of users with the path (descending)
- hop count (ascending)
- risk score (descending)
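For example, to show the top 10 busiest paths instead of the default 5:
goodhound -p "neo4jpassword" -r 10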
Schema
-sch selects a file containing cypher queries to set a custom schema to alter the default Bloodhound schema.
This can be useful if you want to set the ‘highvalue’ label on AD objects that are not covered as standard, helping to provide internal context.
For example, if you want to add the highvalue label to 'dbserver01' because it contains all of your customer records, the schema file to load could contain the following cypher query:
match (c:Computer {name:'DBSERVER01@YOURDOMAIN.LOCAL'}) set c.highvalue=TRUE
The schema can contain multiple queries, each on a separate line.
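As an illustration, a schema file marking both a computer and a group as high value (the object names here are made up) could look like this:
match (c:Computer {name:'DBSERVER01@YOURDOMAIN.LOCAL'}) set c.highvalue=TRUE
match (g:Group {name:'FINANCE-ADMINS@YOURDOMAIN.LOCAL'}) set g.highvalue=TRUE
It could then be loaded with something like the following (the filename is just an example):
goodhound -p "neo4jpassword" -sch customschema.txt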
SQLite Database
By default GoodHound stores all attack paths in a SQLite database called goodhound.db stored in the current working directory. This gives the opportunity to query attack paths over time.
--db-skip will skip logging anything to a local database
--sql-path can be used to point GoodHound to the location of the database file. If a directory is provided, a database named goodhound.db will be created in that directory. If an existing db file is provided, this db will be updated with any new findings.
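For example (the paths below are placeholders):
goodhound -p "neo4jpassword" --sql-path /path/to/goodhound.db
goodhound -p "neo4jpassword" --db-skip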
Bloodhound 4.1 Patch
When running GoodHound on a set of data that has been gathered using SharpHound 4.1, add the parameter --patch41 (see the Bloodhound 4.1 section above for details):
goodhound -p "neo4jpassword" --patch41
Output
Default output is to generate an HTML report and 3 csv files as follows:
Summary Report
The Summary report contains some high level information regarding the number of paths found, the number of enabled non-admin users that are exposed to an attack path, and the number of paths that have been seen before based on the entries in the GoodHound local database.
The end goal is to reduce the number of exposed users by taking a two-pronged approach.
Busiest paths will highlight attack paths that are exposed to the greatest number of users.
Weakest links will highlight links that might help to close down the number of paths available.
Busiest Paths Report
The output shows a total number of unique users that have a path to a HighValue target.
It then breaks this down to individual paths, ordered by the risk score.
Each path is then displayed showing the starting group, the number of non-admin users within that path, the number of hops, the risk score, a text version of the path and also a Cypher query. This cypher query can be directly copied into the Raw Query bar in Bloodhound for a visual representation of the attack path.
Weakest Links Report
The weakest links report is a way to potentially find links of attack paths that repeatedly show up in the dataset. For each weak link shown, the report will also tell you how many of the total attack paths it was seen in.
NOTE: In order to use the Bloodhound query that is created with the weakest link report you will need the APOC library neo4j plugin installed. To do this copy the APOC jar file from the $NEO4J_HOME/labs directory to the $NEO4J_HOME/plugins directory and restart Neo4j.
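On a Linux install the copy step might look something like this (the exact jar name varies by APOC version, and how you restart neo4j depends on your installation):
cp $NEO4J_HOME/labs/apoc-*.jar $NEO4J_HOME/plugins/
sudo systemctl restart neo4j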
Risk Score
The Risk Score is a mechanism to help prioritise remediation. It is calculated based on the Exploit Cost and the number of non-admin users exposed to that attack path. The more users that are exposed, and the lower the exploit cost, the higher the risk score.
It is not intended to be a risk assessment in and of itself, and the intention is not to assign severities such as Critical, High, Medium etc to certain scores.
The score is calculated using the following formula:
Risk Score = (Max Exploit Cost Possible - Exploit Cost) / Max Exploit Cost Possible * % of enabled non-admin users with the path
Max Exploit Cost Possible is 3 * the maximum number of hops seen across all attack paths. 3 is chosen because it is the highest score any single hop in an attack path can have.
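As a worked example (numbers invented for illustration): if the longest path in the dataset is 5 hops then Max Exploit Cost Possible is 3 * 5 = 15, and a path with an Exploit Cost of 6 that is exposed to 40% of enabled non-admin users would score (15 - 6) / 15 * 40 = 24.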
Exploit Cost
Exploit Cost is an estimation of how noisy or complex a particular attack path might be. (Kudos to the ACLPWN project for this idea.)
For example, if an attacker has compromised userA and userA is a member of groupB then that step in the attack path doesn’t require any further exploitation or real opsec considerations.
Conversely, if an attacker has compromised a user's workstation which also has an admin user session on it, then to exploit this the attacker would (possibly) need to elevate permissions on the workstation and run something like Mimikatz to extract credentials from memory. This would require OPSEC considerations around monitoring of LSASS processes and also potentially require endpoint protection bypasses. All of this makes the exploitation that little bit more difficult.
These scores have been assigned based upon my personal best judgement. They are not set in stone and discussions around the scoring are welcome and will only help to improve this.
The scores assigned to each exploit are:
Relationship | Target Node Type | OPSEC Considerations | Possible Protections to Bypass | Possible Privesc Required | Cost |
---|---|---|---|---|---|
Memberof | Group | No | No | No | 0 |
HasSession | Any | Yes | Yes | Yes | 3 |
CanRDP | Any | No | No | No | 0 |
Contains | Any | No | No | No | 0 |
GPLink | Any | No | No | No | 0 |
AdminTo | Any | Yes | No | No | 1 |
ForceChangePassword | Any | Yes | No | No | 1 |
AllowedToDelegate | Any | Yes | No | No | 1 |
AllowedToAct | Any | Yes | No | No | 1 |
AddAllowedToAct | Any | Yes | No | No | 1 |
ReadLapsPassword | Any | Yes | No | No | 1 |
ReadGMSAPassword | Any | Yes | No | No | 1 |
HasSidHistory | Any | Yes | No | No | 1 |
CanPSRemote | Any | Yes | No | No | 1 |
ExecuteDcom | Any | Yes | No | No | 1 |
SqlAdmin | Any | Yes | No | No | 1 |
AllExtendedRights | Group/User/Computer | Yes | No | No | 1 |
AddMember | Group | Yes | No | No | 1 |
AddSelf | Group | Yes | No | No | 1 |
GenericAll | Group/User/Computer | Yes | No | No | 1 |
WriteDACL | Group/User/Computer | Yes | No | No | 1 |
WriteOwner | Group/User/Computer | Yes | No | No | 1 |
Owns | Group/User/Computer | Yes | No | No | 1 |
GenericWrite | Group/User/Computer | Yes | No | No | 1 |
AllExtendedRights | Domain | Yes | Yes | No | 2 |
GenericAll | Domain | Yes | Yes | No | 2 |
WriteDACL | Domain | Yes | Yes | No | 2 |
WriteOwner | Domain | Yes | Yes | No | 2 |
Owns | Domain | Yes | Yes | No | 2 |
GenericAll | GPO/OU | Yes | No | No | 1 |
WriteDACL | GPO/OU | Yes | No | No | 1 |
WriteOwner | GPO/OU | Yes | No | No | 1 |
Owns | GPO/OU | Yes | No | No | 1 |
WriteSPN | User | Yes | No | No | 1 |
AddKeyCredentialLink | Any | Yes | Yes | No | 2 |
SQLite Database
By default GoodHound will insert all of the attack paths that it finds into a local SQLite database located in a db directory inside the current working directory.
This database can then be queried separately using the SQLite tools and the queries below.
In order to query the database you’ll need the SQLite binaries available from https://www.sqlite.org/download.html
Example GoodHound SQLite queries
Connect to DB
sqlite3.exe db\goodhound.db
Get paths not seen in over 90 days
select * from paths where date(last_seen, 'unixepoch') < date('now', '-90 days');
See the number of paths containing a particular section of a path, useful for looking at the nodes brought up in the Weakest Links report
select count(*) from paths where fullpath like '%ReadLAPSPassword -> SERVER%.DOMAIN.LOCAL%';
See Bloodhound queries for paths with a given starting group and scan time
select query from paths where groupname = 'GROUP1@DOMAIN.LOCAL' and datetime(last_seen, 'unixepoch') = '2021-10-28 05:15:22';
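As another illustration, assuming the same paths table and columns used in the queries above, list the starting groups for paths seen in the last 30 days:
select distinct groupname from paths where date(last_seen, 'unixepoch') >= date('now', '-30 days');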
Close DB connection
.quit