Security Scorecards is a tool that provides security health metrics for open source projects.
Motivation
A short motivational video clip to inspire us: https://youtu.be/rDMMYT3vkTk “You passed! All D’s … and an A!”
Goals
- Automate analysis and trust decisions on the security posture of open source projects.
- Use this data to proactively improve the security posture of the critical projects the world depends on.
Scorecard Checks
The following checks are all run against the target project by default:
Name | Description |
---|---|
Active | Did the project get any commits in the last 90 days? |
Automatic-Dependency-Update | Does the project use tools to automatically update its dependencies? |
Binary-Artifacts | Is the project free of checked-in binaries? |
Branch-Protection | Does the project use Branch Protection? |
CI-Tests | Does the project run tests in CI, e.g. GitHub Actions, Prow? |
CII-Best-Practices | Does the project have a CII Best Practices Badge? |
Code-Review | Does the project require code review before code is merged? |
Contributors | Does the project have contributors from at least two different organizations? |
Fuzzing | Does the project use fuzzing tools, e.g. OSS-Fuzz? |
Frozen-Deps | Does the project declare and freeze dependencies? |
Packaging | Does the project build and publish official packages from CI/CD, e.g. GitHub Publishing? |
Pull-Requests | Does the project use Pull Requests for all code changes? |
SAST | Does the project use static code analysis tools, e.g. CodeQL, SonarCloud? |
Security-Policy | Does the project contain a security policy? |
Signed-Releases | Does the project cryptographically sign releases? |
Signed-Tags | Does the project cryptographically sign release tags? |
Token-Permissions | Does the project declare GitHub workflow tokens as read only? |
Vulnerabilities | Does the project have unfixed vulnerabilities? Uses the OSV service. |
To see detailed information about each check and remediation steps, check out the checks documentation page.
Check Documentation
This page contains information on how each check works and provides remediation steps to fix failures. All of these checks are currently “best guesses” and operate on a set of heuristics.
They are all subject to change, and have room for improvement! If you have ideas for things to add, or new ways to detect things, please contribute!
Active
A project which is not active may not be patched, may not have its dependencies patched, and may not be actively tested or used. This check tries to determine whether the project is still actively maintained. It currently works by looking for commits within the last 90 days, and succeeds if it finds at least 2.
Remediation steps
- There is NO remediation work needed here. This is just to indicate your project activity and maintenance commitment.
Automatic-Dependency-Update
This check tries to determine if a project has its dependencies automatically updated. The check looks for dependabot or renovatebot. It only verifies that the tool is enabled; it does not ensure that the tool actually runs or that its pull requests are merged.
Remediation steps
- Sign up for automatic dependency updates with dependabot or renovatebot and place the config file in the location recommended by the tool (see the sketch below).
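As a rough sketch only (the ecosystem and schedule below are assumptions; adjust them for your project), a minimal Dependabot configuration lives at `.github/dependabot.yml`:

```
mkdir -p .github
cat > .github/dependabot.yml <<'EOF'
version: 2
updates:
  - package-ecosystem: "gomod"   # e.g. "npm", "pip", "docker", ...
    directory: "/"
    schedule:
      interval: "weekly"
EOF
```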
Binary-Artifacts
This check tries to determine if a project has binary artifacts checked into its source repository. These binaries could be compromised artifacts; building from source is recommended.
Remediation steps
- Remove the binary artifacts from the repository.
Branch-Protection
Branch protection allows defining rules that enforce certain workflows for branches, such as requiring review or passing certain status checks. This check works only when the token has Admin access to the repository. It determines whether the default and release branches are protected. More specifically, it verifies the following settings: Allow Force Pushes (disabled), Allow Deletions (disabled), Enforce Admins (enabled), Require Linear History (enabled), Required Status Checks (enabled, with at least one non-empty context), Required Pull Request Reviews (>= 1), Dismiss Stale Reviews (enabled), and Require Code Owner Reviews (enabled).
Remediation steps
- Enable branch protection settings in your source hosting provider to avoid force pushes or deletion of your important branches.
- For GitHub, check out the steps here (see also the API sketch below).
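For illustration only (not the official remediation steps), here is a hedged sketch of enabling roughly equivalent protection on a `main` branch through the GitHub REST API using the `gh` CLI; `OWNER/REPO` and the `"test"` status-check context are placeholders, and admin access is required:

```
gh api --method PUT repos/OWNER/REPO/branches/main/protection --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["test"] },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "require_code_owner_reviews": true,
    "required_approving_review_count": 1
  },
  "restrictions": null,
  "required_linear_history": true,
  "allow_force_pushes": false,
  "allow_deletions": false
}
EOF
```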
CI-Tests
This check tries to determine if the project runs tests before pull requests are merged. It works by looking for a set of well-known CI-system names in GitHub CheckRuns and Statuses among the recent commits (~30). A CI system is considered well-known if its name contains any of the following: appveyor, buildkite, circleci, e2e, github-actions, jenkins, mergeable, test, travis-ci. The check succeeds if at least 75% of successful pull requests have at least one successful check associated with them.
Remediation steps
- Check-in scripts that run all the tests in your repository.
- Integrate those scripts with a CI/CD platform that runs them on every pull request (e.g. GitHub Actions, Prow, etc.), as in the sketch below.
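As a minimal sketch (assuming a Go project on GitHub Actions; the file name and commands are illustrative), a workflow that runs the tests on every pull request could look like:

```
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [pull_request]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '1.16'
      - run: go test ./...
EOF
```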
CII-Best-Practices
This check tries to determine if the project has a CII Best Practices Badge. It uses the Git repo URL and the CII API. The check does not consider whether the repo has attained the silver or gold levels; the passing level is sufficient.
Remediation steps
- Sign up for the CII Best Practices program.
Code-Review
This check tries to determine if a project requires code review before pull requests are merged. First it checks whether Branch-Protection is enabled on the default branch with at least 1 required reviewer. If that fails, it checks whether the recent (~30) commits have a GitHub-approved review or whether the merger is different from the committer (an implicit review). The check succeeds if at least 75% of commits have a review as described above. If that fails, it performs the same check but looks for reviews by Prow (the “lgtm” or “approved” labels). If that also fails, it does the same but looks for Gerrit-specific commit messages (“Reviewed-on” and “Reviewed-by”).
Remediation steps
- Follow security best practices by performing strict code reviews for every new pull request.
- Make “code reviews” mandatory in your repository configuration. E.g. GitHub.
- Enforce the rule for administrators / code owners as well. E.g. GitHub
Contributors
This check tries to determine if the project has contributors from multiple companies. It works by looking at the authors of recent commits and checking the Company field on their GitHub user profiles. A contributor must have at least 5 commits among the last 30 commits. The check succeeds if the contributors span at least 2 different companies.
Remediation steps
- There is NO remediation work needed here. This check simply provides insight into which organization(s) have contributed to the project, so that trust decisions can be made based on that. You can, however, ask your contributors to join their respective GitHub organizations.
Frozen-Deps
This check tries to determine if a project has declared and pinned its dependencies. It works by (1) looking for the following files in the root directory: go.mod, go.sum (Go), package-lock.json, npm-shrinkwrap.json (JavaScript), requirements.txt, pipfile.lock (Python), gemfile.lock (Ruby), cargo.lock (Rust), yarn.lock (package manager), composer.lock (PHP), vendor/, third_party/, third-party/; and (2) looking for unpinned dependencies in Dockerfiles, shell scripts, and GitHub workflows. The check succeeds if one of the files in (1) is present AND all the dependencies in (2) are pinned.
Remediation steps
- Declare all your dependencies with specific versions in your package format file (e.g. `package.json` for npm, `requirements.txt` for Python). For C/C++, check in the code from a trusted source and add a `README` on the specific version used (and the archive SHA hashes).
- If the package manager supports lock files (e.g. `package-lock.json` for npm), make sure to check these in the source code as well. These files maintain signatures for the entire dependency tree and save you from future exploitation in case the package is compromised.
- For Dockerfiles and GitHub workflows, pin dependencies by hash. See the example gitcache-docker.yaml and Dockerfile examples, and the sketch after this list.
- To help update your dependencies after pinning them, use tools such as GitHub’s dependabot or renovate bot.
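For illustration, pinning by hash means using an immutable digest or commit SHA instead of a mutable tag: in a Dockerfile, `FROM golang:1.16.5@sha256:<digest>` instead of `FROM golang:1.16.5`; in a GitHub workflow, `uses: actions/checkout@<full-commit-sha>` instead of `uses: actions/checkout@v2` (the digest and SHA here are placeholders). One way to look up an image digest locally:

```
# Pull the tag once, then read back its content digest to pin FROM lines to it.
docker pull golang:1.16.5
docker inspect --format='{{index .RepoDigests 0}}' golang:1.16.5
```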
Fuzzing
This check tries to determine if the project uses a fuzzing system. It currently works by checking whether the repo name appears in the OSS-Fuzz project list.
Remediation steps
- Integrate the project with OSS-Fuzz by following the instructions here.
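As a very rough sketch only (the exact layout and fields are defined by the OSS-Fuzz documentation; the project name, URLs, and contact below are placeholders), integration means adding a project directory with a `project.yaml`, a `Dockerfile`, and a `build.sh` to the OSS-Fuzz repository:

```
# Hypothetical directory inside a checkout of the OSS-Fuzz repo.
mkdir -p projects/your-project
cat > projects/your-project/project.yaml <<'EOF'
homepage: "https://github.com/your-org/your-project"
main_repo: "https://github.com/your-org/your-project"
language: go
primary_contact: "maintainer@example.com"
EOF
```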
Pull-Requests
This check tries to determine if the project requires pull requests for all changes to the default branch. It works by looking at recent commits (first page, ~30) and using the GitHub API to search for associated pull requests. The check discards commits by usernames containing ‘bot’ or ‘gardener’. A commit containing the string `Reviewed-on` is considered to have been reviewed through Gerrit, and the check does not look for a corresponding PR for it.
Remediation steps
- Always open a pull request for any change you intend to make, big or small.
- Make “pull requests” mandatory in your repository configuration. E.g. GitHub
- Enforce the rule for administrators / code owners as well. E.g. GitHub
SAST
This check tries to determine if the project uses static code analysis systems. It currently works by looking for well-known results in GitHub pull requests. More specifically, the check first looks for the GitHub apps github-code-scanning (CodeQL) and sonarcloud in the recent (~30) merged PRs. If more than 75% of commits contain at least one successful check by any of those apps, the check succeeds. If that fails, the check instead looks for use of “github/codeql-action” in a GitHub workflow.
Remediation steps
- Run CodeQL checks in your CI/CD by following the instructions here.
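As a minimal sketch (the file name, trigger, and language are assumptions for illustration), a CodeQL scanning workflow based on github/codeql-action looks roughly like:

```
cat > .github/workflows/codeql-analysis.yml <<'EOF'
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1
        with:
          languages: go
      - uses: github/codeql-action/analyze@v1
EOF
```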
Security-Policy
This check tries to determine if a project has published a security policy. It works by looking for a file named `SECURITY.md` (case-insensitive) in a few well-known directories.
Remediation steps
- Place a security policy file `SECURITY.md` in the root directory of your repository. This makes it easily discoverable by a vulnerability reporter.
- The file should contain information on what constitutes a vulnerability and a way to report it securely (e.g. an issue tracker with private issue support, or an encrypted email with a published public key). See the sketch after this list.
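A minimal sketch of such a file (the contact address is a placeholder; adapt the wording and reporting channel to your project):

```
cat > SECURITY.md <<'EOF'
# Security Policy

## Reporting a Vulnerability

Please do not report security issues through public GitHub issues.
Instead, email security@example.com (placeholder) with a description
of the issue and steps to reproduce; we will acknowledge your report
and follow up with a fix timeline.
EOF
```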
Signed-Releases
This check tries to determine if the project cryptographically signs release artifacts. It works by looking for the following signature filenames in the last 5 GitHub releases: *.minisign (https://github.com/jedisct1/minisign), *.asc (PGP), and *.sign. The check does not verify the signatures.
Remediation steps
- Publish the release.
- Generate a signing key.
- Download the release as an archive locally.
- Sign the release archive with this key (should output a signature file).
- Attach the signature file next to the release archive.
- For GitHub, check out the steps here.
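As a sketch of the signing steps (the archive name is a placeholder), using GPG to produce a detached `.asc` signature for a release archive:

```
# Sign the downloaded release archive; this writes scorecard-v1.0.0.tar.gz.asc
gpg --armor --detach-sign scorecard-v1.0.0.tar.gz
# Verify locally before attaching the .asc file next to the release asset
gpg --verify scorecard-v1.0.0.tar.gz.asc scorecard-v1.0.0.tar.gz
```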
Signed-Tags
This check looks for cryptographically signed tags among the last 5 tags. The check does not verify the signatures itself; it relies on GitHub’s verification.
Remediation steps
- Generate a new signing key.
- Add your key to your source hosting provider.
- Configure your key and email in git.
- Sign the tag with this key and then publish it.
- For GitHub, check out the steps here.
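A sketch of the git commands involved (the key ID, email, tag name, and remote are placeholders):

```
# Tell git which signing key to use
git config user.signingkey <key-id>
git config user.email "you@example.com"
# Create a signed tag, then publish it
git tag -s v1.0.0 -m "Release v1.0.0"
git push origin v1.0.0
# Verify the signature locally
git tag -v v1.0.0
```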
Token-Permissions
This check tries to determine if a project’s GitHub workflows follow the principle of least privilege, i.e. whether the GitHub tokens are set read-only by default. For each workflow YAML file, the check looks for the `permissions` keyword. If it is set globally as read-only for the entire file, the check succeeds; otherwise it fails. The check cannot detect whether the repository-level “read-only” GitHub permission setting is enabled, as there is no API available for it.
Remediation steps
- Set permissions to `read-all` or `contents: read` as described in GitHub’s documentation (see the sketch below).
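As a minimal sketch (the workflow name, trigger, and job below are illustrative assumptions), a top-level read-only `permissions` block applies to every job in the file:

```
cat > .github/workflows/ci.yml <<'EOF'
name: CI
on: [pull_request]
# Read-only GITHUB_TOKEN for all jobs in this workflow; individual jobs
# can widen this with a job-level permissions block if they need to write.
permissions: read-all
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: go test ./...
EOF
```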
Vulnerabilities
This check determines if there are open, unfixed vulnerabilities in the project, using the OSV service.
Remediation steps
- Fix the vulnerabilities. The details of each vulnerability can be found on https://osv.dev.
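For illustration only (this is not how the check itself runs), the OSV API can be queried directly; the commit hash is a placeholder:

```
# Ask OSV for known vulnerabilities affecting a specific commit
curl -s -d '{"commit": "<commit-sha>"}' "https://api.osv.dev/v1/query"
```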
Usage
Using repository URL
The program can run using just one argument, the URL of the repo:
$ go build
$ ./scorecard --repo=github.com/kubernetes/kubernetes
Starting [Signed-Tags]
Starting [Automatic-Dependency-Update]
Starting [Frozen-Deps]
Starting [Fuzzing]
Starting [Pull-Requests]
Starting [Branch-Protection]
Starting [Code-Review]
Starting [SAST]
Starting [Contributors]
Starting [Signed-Releases]
Starting [Packaging]
Starting [Token-Permissions]
Starting [Security-Policy]
Starting [Active]
Starting [Binary-Artifacts]
Starting [CI-Tests]
Starting [CII-Best-Practices]
Finished [Contributors]
Finished [Signed-Releases]
Finished [Active]
Finished [Binary-Artifacts]
Finished [CI-Tests]
Finished [CII-Best-Practices]
Finished [Packaging]
Finished [Token-Permissions]
Finished [Security-Policy]
Finished [Automatic-Dependency-Update]
Finished [Frozen-Deps]
Finished [Fuzzing]
Finished [Pull-Requests]
Finished [Signed-Tags]
Finished [Branch-Protection]
Finished [Code-Review]
Finished [SAST]
RESULTS
Repo: github.com/kubernetes/kubernetes
Active: Pass 10
Automatic-Dependency-Update: Fail 3
Binary-Artifacts: Pass 10
Branch-Protection: Fail 0
CI-Tests: Pass 10
CII-Best-Practices: Pass 10
Code-Review: Pass 10
Contributors: Pass 10
Frozen-Deps: Fail 10
Fuzzing: Pass 10
Packaging: Fail 0
Pull-Requests: Pass 10
SAST: Fail 10
Security-Policy: Fail 5
Signed-Releases: Fail 10
Signed-Tags: Fail 10
Token-Permissions: Pass 10
For more details on why a check fails, use the `--show-details` option:
./scorecard --repo=github.com/kubernetes/kubernetes --checks Frozen-Deps --show-details
Starting [Frozen-Deps]
Finished [Frozen-Deps]
RESULTS
Repo: github.com/kubernetes/kubernetes
Frozen-Deps: Fail 10
…
!! frozen-deps/docker - cluster/addons/fluentd-elasticsearch/es-image/Dockerfile has non-pinned dependency 'golang:1.16.5'
…
!! frozen-deps/fetch-execute - cluster/gce/util.sh is fetching and executing non-pinned program 'curl https://sdk.cloud.google.com | bash'
…
!! frozen-deps/fetch-execute - hack/jenkins/benchmark-dockerized.sh is fetching an non-pinned dependency 'GO111MODULE=on go install github.com/cespare/prettybench'
…
Using A Package Manager
scorecard has an option to provide a `--npm` / `--pypi` / `--rubygems` package name, and it will run the checks on the corresponding GitHub source code.
For example:
./scorecard --npm=angular
Starting [Active]
Starting [Branch-Protection]
Starting [CI-Tests]
Starting [CII-Best-Practices]
Starting [Code-Review]
Starting [Contributors]
Starting [Frozen-Deps]
Starting [Fuzzing]
Starting [Packaging]
Starting [Pull-Requests]
Starting [SAST]
Starting [Security-Policy]
Starting [Signed-Releases]
Starting [Signed-Tags]
Finished [Signed-Releases]
Finished [Fuzzing]
Finished [CII-Best-Practices]
Finished [Security-Policy]
Finished [CI-Tests]
Finished [Packaging]
Finished [SAST]
Finished [Code-Review]
Finished [Branch-Protection]
Finished [Frozen-Deps]
Finished [Signed-Tags]
Finished [Active]
Finished [Pull-Requests]
Finished [Contributors]
RESULTS
Active: Fail 10
Branch-Protection: Fail 0
CI-Tests: Pass 10
CII-Best-Practices: Fail 10
Code-Review: Pass 10
Contributors: Pass 10
Frozen-Deps: Fail 0
Fuzzing: Fail 10
Packaging: Fail 0
Pull-Requests: Fail 9
SAST: Fail 10
Security-Policy: Pass 10
Signed-Releases: Fail 0
Signed-Tags: Fail 10
Running specific checks
To run only particular check(s), add the `--checks` argument with a list of check names. For example, `--checks=CI-Tests,Code-Review`.
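For instance, to run only the CI-Tests and Code-Review checks against the repository from the earlier example:

```
./scorecard --repo=github.com/kubernetes/kubernetes --checks=CI-Tests,Code-Review
```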
Before running Scorecard, you need to either:
- create a GitHub access token and set it in the environment variable `GITHUB_AUTH_TOKEN`. This helps avoid GitHub’s API rate limits for unauthenticated requests.
# For POSIX platforms, e.g. Linux, Mac:
export GITHUB_AUTH_TOKEN=
# For Windows:
set GITHUB_AUTH_TOKEN=
Multiple `GITHUB_AUTH_TOKEN` values can be provided, separated by commas, to be used in round-robin fashion.
- create a GitHub App installation for higher rate-limit quotas. If you have an installed GitHub App and key file, you can use the following three environment variables, following the commands shown above for your platform.
GITHUB_APP_KEY_PATH=
GITHUB_APP_INSTALLATION_ID=
GITHUB_APP_ID=
These can be obtained from the GitHub developer settings page.
Understanding Scorecard Results
Each check returns a Pass / Fail decision, as well as a confidence score between 0 and 10. A confidence of 0 should indicate the check was unable to achieve any real signal, and the result should be ignored. A confidence of 10 indicates the check is completely sure of the result.
There are currently three output formats: `default`, `json`, and `csv`. Others may be added in the future. These may be specified with the `--format` flag.
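For example, to get machine-readable output for the same repository used above:

```
./scorecard --repo=github.com/kubernetes/kubernetes --format=json
```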
If you’re only interested in seeing a list of projects with their Scorecard check results, we publish these results in a BigQuery public dataset.
This data is available in the public BigQuery dataset `openssf:scorecardcron.scorecard`. The latest results are available in the BigQuery view `openssf:scorecardcron.scorecard_latest`.
You can extract the latest results to Google Cloud Storage in JSON format using the `bq` tool:
# Get the latest PARTITION_ID
bq query --nouse_legacy_sql 'SELECT partition_id FROM
openssf.scorecardcron.INFORMATION_SCHEMA.PARTITIONS ORDER BY partition_id DESC
LIMIT 1'
# Extract to GCS
bq extract --destination_format=NEWLINE_DELIMITED_JSON 'openssf:scorecardcron.scorecard$' gs://bucket-name/filename.json
The list of projects that are checked is available in the `cron/data/projects.csv` file in this repository. If you would like us to track more, please feel free to send a Pull Request with others.
NOTE: Currently, these lists are derived from projects hosted on GitHub ONLY. We plan to expand them in the near future to account for projects hosted on other source control systems.
Adding A Scorecard Check
If you’d like to add a check, make sure it is something that meets the following criteria and then create a new GitHub Issue:
- The scorecard must only be composed of automatable, objective data. For example, a project having 10 contributors doesn’t necessarily mean it’s more secure than a project with, say, 50 contributors. But having two maintainers might be preferable to only having one: the larger bus factor and the ability to provide code reviews are objectively better.
- The scorecard criteria can be as specific as possible and are not limited to general recommendations. For example, for Go, we can recommend/require specific linters and analyzers to be run on the codebase.
- The scorecard can be populated for any open source project without any work or interaction from maintainers.
- Maintainers must be provided with a mechanism to correct any automated scorecard findings they feel were made in error, provide “hints” for anything we can’t detect automatically, and even dispute the applicability of a given scorecard finding for that repository.
- Any criteria in the scorecard must be actionable. It should be possible, with help, for any project to “check all the boxes”.
- Any solution to compile a scorecard should be usable by the greater open source community to monitor upstream security.