Hacking Tools

Eclipse : The AI-Driven Sensitive Information Detection Tool

Eclipse was developed as part of Nebula Pro, the first AI-powered penetration testing application, and was designed to address the growing concerns surrounding sensitive data management.

Unlike traditional methods, Eclipse is not limited to identifying explicitly defined sensitive information; it delves deeper, detecting any sentences that may hint at or contain sensitive information.

Sensitive Information Detection: Eclipse can process documents to identify not only explicit sensitive information but also sentences that suggest the presence of such data elsewhere.

This makes it an invaluable tool for preliminary reviews when you need to quickly identify potential sensitive content in your documents.

Privacy Preservation: With concerns about data privacy in the context of Large Language Models (LLMs), Eclipse offers a potential solution.

Before you send your data to APIs hosting LLMs, Eclipse can screen your documents to ensure no sensitive information is inadvertently exposed.
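To illustrate the pre-screening idea, here is a minimal sketch of a gate that inspects a document before it is sent to an external API. This is not Eclipse's actual implementation (Eclipse is AI-driven and goes beyond fixed patterns); the patterns and function names below are purely illustrative.

```python
import re

# Illustrative patterns only - a real screener would detect far more,
# including sentences that merely hint at sensitive data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def prescreen(text):
    """Return a mapping of pattern name -> matches found in the text."""
    hits = {name: rx.findall(text) for name, rx in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

doc = "Contact alice@example.com; SSN 123-45-6789 on file."
findings = prescreen(doc)
if findings:
    print("Blocked before sending to the LLM API:", findings)
```

The key design point is that the screen runs locally, so nothing leaves the machine until the document comes back clean.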

Appropriate Use Cases For Eclipse:

Preliminary Data Screening: Eclipse is ideal for initial screenings where speed is essential. It helps users quickly identify potential sensitive information in large volumes of text.

Data Privacy Checks: Before sharing documents or data with external parties or services, Eclipse can serve as a first line of defense, alerting you to the presence of sensitive information.

Limitations:

Eclipse is designed for rapid assessments and may not catch every instance of sensitive information. Therefore:

  • Eclipse should not be used as the sole tool for tasks requiring exhaustive checks, such as legal document review, where missing sensitive information could have significant consequences.
  • Consider using Eclipse alongside thorough manual reviews and other security measures, especially in situations where the complete removal of sensitive information is crucial.

Compatibility

Eclipse has been extensively tested and optimized for Linux platforms. As of now, its functionality on Windows or macOS is not guaranteed, and it may not operate as expected.

System Dependencies

  • Storage: A minimum of 20GB of free storage is required.
  • RAM: A minimum of 16GB of RAM is required.
  • Graphics Processing Unit (GPU): While not mandatory, having at least 8GB of GPU memory is recommended for optimal performance.

PyPI-Based Distribution Requirements

  • Python 3 (3.10 or later)
  • PyTorch (A machine learning library for Python)
  • Transformers library by Hugging Face (Provides state-of-the-art machine learning techniques for natural language processing tasks)
  • Requests library (Allows you to send HTTP requests using Python)
  • Termcolor library (Enables colored printing in the terminal)
  • Prompt Toolkit (Library for building powerful interactive command lines in Python)

To install the above dependencies:

pip install torch transformers requests termcolor prompt_toolkit
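After installing, you can confirm the packages above are importable with a short check. This snippet is a convenience sketch, not part of Eclipse; note that Prompt Toolkit imports as prompt_toolkit.

```python
import importlib.util

def missing_deps(names):
    """Return the subset of module names that cannot be imported."""
    return [m for m in names if importlib.util.find_spec(m) is None]

# The PyPI packages listed above, by import name.
required = ["torch", "transformers", "requests", "termcolor", "prompt_toolkit"]
print("Missing:", missing_deps(required))
```

An empty list means every dependency resolved; otherwise, re-run pip install for the names printed.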


Varshini

Varshini is a cyber security expert specializing in threat analysis, vulnerability assessment, and research, and is passionate about staying ahead of emerging threats and technologies.
