secureCodeBox is a Kubernetes-based, modularized toolchain for continuous security scans of your software project. Its goal is to orchestrate and easily automate a bunch of security-testing tools out of the box.
Purpose of this Project
The typical way to ensure application security is to hire a security specialist (aka penetration tester) at some point in your project to check the application for security bugs and vulnerabilities. Usually, this check is done at a later stage of the project and has two major drawbacks:
- Nowadays, a lot of projects do continuous delivery, which means the developers deploy new versions multiple times each day. The penetration tester is only able to check a single snapshot, but some further commits could introduce new security issues. To ensure ongoing application security, the penetration tester should also continuously test the application. Unfortunately, such an approach is rarely financially feasible.
- Due to the typically time-boxed analysis, the penetration tester has to focus on trivial security issues (low-hanging fruit) and therefore will probably not address the serious, non-obvious ones.
With the secureCodeBox we provide a toolchain for continuous scanning of applications to find the low-hanging-fruit issues early in the development process, freeing the penetration testers to concentrate on the major security issues.
The purpose of secureCodeBox is not to replace penetration testers or make them obsolete. We strongly recommend having experienced penetration testers run extensive tests on all your applications.
Important note: The secureCodeBox is not a simple one-button-click solution! You must have a deep understanding of security and of how to configure the scanners. Furthermore, you need to be able to understand and interpret the scan results.
There is a German article, "Security DevOps – Angreifern (immer) einen Schritt voraus", in the software engineering journal OBJEKTSpektrum.
Architecture Overview
Upgrading
Upgraded Kubebuilder Version to v3
The CRDs now use apiextensions.k8s.io/v1 instead of apiextensions.k8s.io/v1beta1, which requires Kubernetes 1.16 or higher. The Operator now uses the new Kubebuilder v3 command-line flags for enabling leader election and setting the metrics port. If you are using the official secureCodeBox Helm Charts for your deployment, this has been updated automatically.
If you are using a custom deployment, you have to change the --enable-leader-election flag to --leader-elect and --metrics-addr to --metrics-bind-address. For more context see: https://book.kubebuilder.io/migration/v2vsv3.html#tldr-of-the-new-gov3-plugin
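For custom deployments, the flag change boils down to editing the operator container's args. A minimal sketch (the container name and image reference are illustrative, not taken from the official manifests):

```yaml
# Before (Kubebuilder v2 flags):
#   args:
#     - --enable-leader-election
#     - --metrics-addr=:8080
# After (Kubebuilder v3 flags):
containers:
  - name: manager
    image: securecodebox/operator   # illustrative image reference
    args:
      - --leader-elect
      - --metrics-bind-address=:8080
```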
Restructured the secureCodeBox HelmCharts to introduce more consistency in HelmChart Values
The secureCodeBox HelmCharts for hooks and scanners follow a new structure for all HelmChart Values:
secureCodeBox Version 2 example:

```yaml
image:
  # image.repository -- Container image to run the scan
  repository: owasp/zap2docker-stable
  # image.tag -- defaults to the charts appVersion
  tag: null
parserImage:
  # parserImage.repository -- Parser image repository
  repository: docker.io/securecodebox/parser-zap
  # parserImage.tag -- Parser image tag
  # @default -- defaults to the charts version
  tag: null
parseJob:
  # parseJob.ttlSecondsAfterFinished -- seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/
  ttlSecondsAfterFinished: null
scannerJob:
  # scannerJob.ttlSecondsAfterFinished -- seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/
  ttlSecondsAfterFinished: null
  # scannerJob.backoffLimit -- There are situations where you want to fail a scan job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy)
  # @default -- 3
  backoffLimit: 3
```
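In Version 3 these settings are grouped under top-level parser and scanner keys. A sketch of the equivalent values under that assumption; verify the exact key names against the values.yaml of the chart version you deploy:

```yaml
# Assumed v3-style grouping of the v2 values shown above.
parser:
  image:
    repository: docker.io/securecodebox/parser-zap
    tag: null
  ttlSecondsAfterFinished: null
scanner:
  image:
    repository: owasp/zap2docker-stable
    tag: null
  ttlSecondsAfterFinished: null
  backoffLimit: 3
```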
Added scanner.nameAppend to chart values
Using {{ .Release.Name }} in the nmap HelmChart name for scanTypes causes issues when using this chart as a dependency of another chart. All scanner HelmCharts already used a fixed name for the scanType they introduce, with one exception: the nmap scanner HelmChart.
The nmap exception was originally introduced to make it possible to configure your own nmap-privileged scanType, which is capable of running operating system scans that require higher privileges: https://www.securecodebox.io/docs/scanners/nmap#operating-system-scans
This idea of extending the name of a scanType is now, in Version 3, generally available for all HelmCharts.
The solution was to add a new HelmChart Value scanner.nameAppend for appending a suffix to the already defined scanType name. For example, setting scanner.nameAppend: -privileged for the ZAP scanner will create zap-baseline-scan-privileged, zap-api-scan-privileged, and zap-full-scan-privileged as new scanTypes instead of zap-baseline-scan, zap-api-scan, and zap-full-scan.
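A minimal sketch of how this could look in a values file for the ZAP chart (the file and release names are illustrative):

```yaml
# values.yaml for a privileged ZAP release
scanner:
  nameAppend: "-privileged"  # suffix appended to the chart's scanType names
```

The same value can be set on the command line instead, e.g. via helm's --set="scanner.nameAppend=-privileged", which should yield the suffixed scanTypes listed above.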
Renamed demo-apps to demo-targets
The provided vulnerable demos are renamed from demo-apps to demo-targets; this includes the namespace and the folder of the HelmCharts.
Renamed the hook declarative-subsequent-scans to cascading-scans
The hook responsible for cascading scans is renamed from declarative-subsequent-scans to cascading-scans.
Fixed Name Consistency In Docker Images / Repositories
For the Docker images of scanners and parsers we already had the naming convention of prefixing these images with scanner- or parser-.
Hook images, however, were named inconsistently (some prefixed with hook-, some unprefixed). To introduce more consistency we renamed all hook images and prefixed them with hook-, like we did with the parser and scanner images.
Please be aware of this if you are referencing any of our hook images in your own HelmCharts or custom implementations.
Renamed lurcher to lurker
In the 3.0 release, we corrected the misspelling in lurcher. To remove the remains after upgrading, delete the old service accounts and roles from the namespaces where you have executed scans in the past:

Find the relevant namespaces:

```bash
kubectl get serviceaccounts --all-namespaces | grep lurcher
```

Delete the role, role binding, and service account for each affected namespace:

```bash
kubectl --namespace <namespace> delete serviceaccount lurcher
kubectl --namespace <namespace> delete rolebinding lurcher
kubectl --namespace <namespace> delete role lurcher
```
Removed Hook Teams Webhook
We implemented a more general notification hook that can notify different systems, such as MS Teams, Slack, and email, in a more flexible way with custom message templates. With this new hook in place it is no longer necessary to maintain the pre-existing MS Teams hook, and therefore we removed it.