Get Rich Quick: AI For Gambling

Don’t hate the game.

This repository provides an AI-driven tool for generating creative NFL prop bets from real-time NFL news headlines. It uses RSS feeds, natural language processing (NLP), sentiment analysis, and multiple LLMs to anticipate the psychophysiological effect today’s sports headlines are likely to have on the preparedness of individual NFL players, and of teams collectively, and ultimately suggests prop bets on those players’ estimated performances, with accompanying odds and betting recommendations.

Features

  • Headline Aggregation: Gathers NFL news headlines from a list of top RSS feeds.
  • Player Identification: Uses NLP (spaCy) to extract player names from headlines.
  • Sentiment Analysis: Analyzes sentiment in each headline related to the player to calculate an overall sentiment score.
  • AI-Generated Prop Bets: Creates unique prop bets using OpenAI’s API based on each headline.
  • Database Storage: Stores player data, headlines, sentiment scores, and generated prop bets in an SQLite database.

Installation

  1. Clone the repository:
   git clone https://github.com/peteralcock/getrichquick.git
   cd getrichquick
  2. Install required dependencies:
    Ensure you have Python 3.x installed, then run:
   pip install feedparser spacy textblob openai
   python -m spacy download en_core_web_sm
  3. Set up your OpenAI API key:
    Replace sk-proj-3c39_W2Vsa... with your own OpenAI API key in the client initialization.
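
Rather than hardcoding the key, you can read it from an environment variable at the point where the client is initialized. A minimal sketch, assuming the official openai client and an OPENAI_API_KEY variable (adjust to match however main.py actually sets up its client):

import os
from openai import OpenAI

# Assumes the key is exported as OPENAI_API_KEY in your shell; adapt this to
# the actual client initialization in main.py.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])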

Usage

  1. Run the script:
    Execute the script to fetch headlines, process player names, analyze sentiment, and generate prop bets.
   python main.py
  2. Check the output:
  • Player profiles: Displays each player’s headlines and overall sentiment score.
  • Generated prop bets: Prop bets generated by the AI are stored in the SQLite database (nfl_players.db), under the prop_bets table.

Database Structure

  • players Table: Contains unique player IDs and aggregated sentiment scores.
  • headlines Table: Stores headlines associated with each player.
  • prop_bets Table: Contains AI-generated prop bets linked to each player and headline.
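
If you want to poke around the database directly, those tables imply a schema roughly like the sketch below; the column names are assumptions rather than a copy of the code, so check nfl_players.db itself for the authoritative layout:

import sqlite3

# Hypothetical schema matching the three tables described above.
conn = sqlite3.connect("nfl_players.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS players (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE,
    sentiment_score REAL          -- aggregated sentiment across headlines
);
CREATE TABLE IF NOT EXISTS headlines (
    id INTEGER PRIMARY KEY,
    player_id INTEGER REFERENCES players(id),
    headline TEXT,
    sentiment REAL                -- per-headline polarity
);
CREATE TABLE IF NOT EXISTS prop_bets (
    id INTEGER PRIMARY KEY,
    player_id INTEGER REFERENCES players(id),
    headline_id INTEGER REFERENCES headlines(id),
    bet TEXT                      -- AI-generated prop bet text
);
""")
conn.close()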

Example Workflow

  1. Fetch Headlines: Retrieves NFL headlines from specified RSS feeds.
  2. Identify Players: Extracts player names using spaCy’s Named Entity Recognition.
  3. Sentiment Analysis: Computes a sentiment score for each player based on their headlines.
  4. Generate Prop Bets: Uses OpenAI API to generate three creative prop bets for each headline related to a player.
  5. Save Results: Stores data in an SQLite database.
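
To make steps 1–3 concrete, here is a minimal sketch using the libraries from the installation section (feedparser, spaCy, TextBlob); the feed URL is just a placeholder, and the real script may structure things differently:

import feedparser
import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_sm")
feed = feedparser.parse("https://www.espn.com/espn/rss/nfl/news")  # placeholder feed URL

player_sentiment = {}
for entry in feed.entries:
    headline = entry.title
    polarity = TextBlob(headline).sentiment.polarity      # -1.0 (negative) .. 1.0 (positive)
    for ent in nlp(headline).ents:
        if ent.label_ == "PERSON":                         # treat PERSON entities as player names
            player_sentiment.setdefault(ent.text, []).append(polarity)

# Average per-headline polarity into an overall score per player (step 3)
scores = {name: sum(vals) / len(vals) for name, vals in player_sentiment.items()}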

Warning

This tool is intended for entertainment purposes only and should not be used as a primary betting guide. Gambling carries financial risks, and prop bets generated are speculative.

Welcome To Rushmore: AI-Powered Online Learning Platform For Educators

I’m thrilled to introduce Rushmore, an AI-powered SaaS e-learning platform I developed to revolutionize online course creation. Designed for educators, creators, and entrepreneurs, Rushmore simplifies the process of generating and selling lesson plans using artificial intelligence.


🎓 Welcome to Rushmore

Inspired by the spirit of learning and innovation, Rushmore allows you to teach yourself and others whatever you desire. By leveraging AI, you can instantly create comprehensive lesson plans and share your knowledge with the world.


🚀 Key Features

  • Modern Landing Page: Showcases features, pricing, and testimonials in a sleek, SaaS-style design.
  • User Authentication: Register or log in using email, Google, or Facebook accounts.
  • Course Creation:
    • Input a course title and optional sub-topics.
    • Select the number of topics to generate.
    • Choose between Image & Theory or Video & Theory course types.
  • AI-Generated Content: Automatically generates a structured list of topics and sub-topics based on your input.
  • Interactive Learning: Includes an AI chatbot for real-time Q&A during courses.
  • Export Options: Download entire courses as PDFs.
  • Course Certificates: Earn and download completion certificates, also delivered via email.
  • Subscription Management:
    • Offers Free, Monthly, and Yearly plans.
    • Supports payments through PayPal, Stripe, Paystack, Flutterwave, and Razorpay.
    • Manage subscriptions directly from your profile.
  • Responsive Design: Optimized for all devices and screen sizes.

🛠️ Admin Panel Features

  • Dashboard: Monitor users, courses, revenue, and more.
  • User Management: View and manage all registered users.
  • Course Oversight: Access and manage all user-created courses.
  • Subscription Insights: Track paid users and subscription details.
  • Content Management: Edit pages like Terms, Privacy, Cancellation, Refund, and Billing & Subscription.

⚙️ Getting Started

To set up Rushmore locally:

  1. Clone the repository:
   git clone https://github.com/peteralcock/Rushmore.git
  2. Navigate to the project directory:
   cd Rushmore
  3. Install dependencies:
   npm install
  4. Configure environment variables:
    • Create a .env file with necessary credentials for MongoDB, authentication providers, and payment gateways.
  5. Start the application:
   npm start

Rushmore is more than just a tool; it’s a platform to empower educators and creators to share knowledge effortlessly. By automating course creation, it allows you to focus on delivering value to your audience.

Feel free to explore, contribute, or provide feedback on the GitHub repository. Let’s make learning accessible and engaging for everyone.

Happy teaching!

RoadShow: AI-Powered Under-priced Antique Analysis

As a developer passionate about blending technology with real-world applications, I embarked on a project to streamline the process of evaluating antique listings on Craigslist. The result is RoadShow, an application that automates the collection and analysis of antique listings, providing insights into their value, authenticity, and historical context.

🎯 The Challenge

Navigating through countless Craigslist listings to find genuine antiques can be time-consuming and often requires expertise to assess an item’s worth and authenticity. I aimed to create a tool that not only aggregates these listings but also provides meaningful analysis to assist collectors, resellers, and enthusiasts in making informed decisions.

🧰 The Solution

RoadShow is designed to:

  • Scrape Listings: Utilizes Puppeteer to collect antique listings from Craigslist NYC, ensuring efficient data retrieval with rate limiting and error handling.

  • Store Data: Saves listing details, images, and analysis results in a structured SQLite database for easy access and management.

  • Analyze with AI: Integrates OpenAI’s API to evaluate each listing, providing:

    • Estimated fair market value

    • Price assessment (underpriced, overpriced, or fair)

    • Authenticity evaluation

    • Tips for determining authenticity

    • Historical context and additional insights

🧠 How It Works

  1. Data Collection: The application uses Puppeteer to navigate Craigslist’s NYC antiques section, extracting relevant information from each listing.

  2. Data Storage: Extracted data, including images, are stored in a SQLite database, facilitating efficient data management and retrieval.

  3. AI Analysis: Each listing is analyzed using OpenAI’s API, generating comprehensive insights that are appended to the database records.
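
RoadShow itself is built on Puppeteer and Node, but the analysis step is easy to sketch in a few lines of Python; the prompt wording, model choice, and the listings table/column names below are hypothetical, not taken from the repository:

import json
import sqlite3
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def analyze_listing(title: str, description: str, price: float) -> dict:
    """Ask the model for the fields RoadShow stores: value, price verdict, authenticity notes."""
    prompt = (
        f"Antique listing: {title}\nAsking price: ${price}\n{description}\n\n"
        "Return JSON with: fair_market_value, price_assessment (underpriced/overpriced/fair), "
        "authenticity_evaluation, authenticity_tips, historical_context."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                              # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def save_analysis(conn: sqlite3.Connection, listing_id: int, analysis: dict) -> None:
    # Append the analysis to an existing listing row (table/column names are hypothetical)
    conn.execute("UPDATE listings SET analysis = ? WHERE id = ?",
                 (json.dumps(analysis), listing_id))
    conn.commit()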

🚀 Getting Started

To explore or contribute to RoadShow, visit the GitHub repository: https://github.com/peteralcock/RoadShow

Let’s turn vibes into ventures

I started coding when I was 10 years old, before GitHub was a thing. It was Visual Basic 4.0 that hooked me into the world of native desktop software development. Growing up in the age of AOL exposed me to the world of “Punters & Progz”, hobbyist applications that manipulated the Windows API to do things like scrape chatrooms for screen names, send mass messages with phishing lures, or flood people with IMs until the overflow crashed the recipient’s client and “punted” them offline. The desire to create similar fire led me to learn Winsock so I could write invisible client/server trojan horses that buried themselves in the boot registry, all to prank my friends.

25 years later, I now have the power of LLMs to do all the grunt-work it used to take to pull these stunts. I feel like Mickey Mouse in Fantasia after he discovers the wizard’s magic wand. I can make the brooms do all my chores. Except in my scenario, I already know how to avoid flooding the castle.

Suddenly, I feel like I don’t have enough hands. There are no limits anymore. I can make literally anything in under 2 weeks. So now I don’t even know where to dedicate my time. There are so many possibilities that I’ve decided I’m going to listen to the crowd: I’m going to make an app every week based on the comments I get on this post and on my blog, where I’ll be documenting this experiment.

Leave your brilliant app idea as a comment, and once a week I’ll pick the most interesting one and prototype it for you, with a follow-up business plan on how we could monetize it. Consider this to be my speed dating marathon as I look for a new business partner. If you’re willing to do the marketing and operations, I’m willing to do the engineering, and together we can rule the galaxy.

Remember kids, real power is the power to get people to follow you, and an idea is worth nothing until executed. So leave your idea in the comments and let’s build something real. Something weird. Something nobody’s done before.
Because the future doesn’t belong to the smartest or the richest—it belongs to the fastest. And I’m moving at LLM-speed.

Whether it’s a dumb meme generator that goes viral, a fintech tool that saves people money, or a social app that connects people in a way they didn’t even know they needed—if your idea hits me right, I’ll bring it to life, right here, in public.

⚡ Drop your app idea in the comments.
🚀 I’ll build it in a week.
💸 We’ll map the business model together.

And if it takes off? You’re the cofounder.
Let’s turn vibes into ventures.

Gekko: AI Hedge Fund Simulation

The intersection of distinct human investment philosophies and the analytical power of AI presents a fascinating area for exploration. What happens when different, sometimes conflicting, legendary investment strategies are implemented by AI agents within the same simulated market environment? Gekko is a project designed to explore exactly that.

The Concept: Simulating Investment Minds

The core idea behind Gekko is to build a platform where AI agents, each modeled after the distinct philosophy of a well-known investor (like Warren Buffett, Cathie Wood, Ben Graham, etc.), can analyze market data and generate trading signals. It serves as a digital sandbox for experimenting with how these AI-driven strategies might interact and perform within a simulated hedge fund structure.

This isn’t about creating a live trading bot. Gekko is intended purely as an experimental and educational tool to observe how LLMs can interpret financial data and mimic strategic decision-making based on predefined investment personas.

Gekko’s Features

To achieve this simulation, Gekko incorporates several components:

  • Diverse AI Agents: A variety of agents represent different investment styles – value, growth, innovation, contrarian, technical analysis, sentiment analysis, and more.
  • Multi-Agent System: Agents process data and generate signals; these are then synthesized by portfolio and risk management layers to make simulated portfolio adjustments.
  • LLM Integration: Agents utilize Large Language Models (supporting OpenAI, Groq, and local Ollama instances) for data interpretation and reasoning.
  • Simulation & Backtesting: The platform includes a simulated trading engine and backtesting features to run strategies against historical data.
  • Dashboard Interface: A ReactJS frontend provides visualization of the simulated portfolio, trades, and agent outputs.
  • Reasoning Output: An option exists (--show-reasoning) to inspect the logic behind agent decisions, aiding in understanding the simulation.

Technical Aspects and Challenges

Developing Gekko involved several technical considerations. Key tasks included:

  • Translating nuanced investment philosophies into effective prompts for the AI agents.
  • Integrating various data sources (pricing, news, financials) via APIs for agent use.
  • Building a backend (using FastAPI) and simulation engine capable of handling the workflow.
  • Enabling flexible deployment through Docker.

Seeing how the different agent types process the same information and arrive at varied conclusions based on their programmed personas is one of the core outcomes of the simulation.
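
As a rough illustration of that divergence, the sketch below prompts two persona-driven agents on the same market summary and parses their answers into signals; the persona text, model name, and signal format are assumptions for illustration, not Gekko’s actual prompts or schema:

from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # Gekko also supports Groq and local Ollama backends

@dataclass
class Signal:
    agent: str
    ticker: str
    action: str      # "buy", "sell", or "hold"
    rationale: str

PERSONAS = {
    "value": "You are a deep-value investor in the Ben Graham tradition. Insist on a margin of safety.",
    "growth": "You are a growth/innovation investor. Favor disruptive potential over current earnings.",
}

def get_signal(agent: str, ticker: str, market_summary: str) -> Signal:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption
        messages=[
            {"role": "system", "content": PERSONAS[agent]},
            {"role": "user", "content": f"Given this data on {ticker}, reply with buy/sell/hold "
                                        f"and one sentence of reasoning:\n{market_summary}"},
        ],
    )
    text = resp.choices[0].message.content.strip()
    action = next((w for w in ("buy", "sell", "hold") if w in text.lower()), "hold")
    return Signal(agent, ticker, action, text)

# A portfolio/risk layer would then weigh the conflicting signals from each persona.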

Future Directions

As an experimental platform, Gekko could potentially evolve. Further development might involve refining agent interactions, incorporating more sophisticated market data, or enhancing the simulation’s realism. It serves as a base for exploring AI applications in strategy analysis.

Explore and Experiment (Responsibly!)

Gekko is available for those interested in experimenting with AI in the context of financial strategy simulation.

ZEPSEC: An All-in-One Platform for Cybersecurity Management

Staying ahead of threats is crucial. ZEPSEC emerges as a comprehensive solution designed to manage vulnerabilities, track threats, and plan incident responses effectively. Let’s delve into what ZEPSEC offers and how it can help organizations secure their digital assets on a cadence of minutes rather than months. (Six-month audit cycles are an absolutely ridiculous concept these days.)

What is ZEPSEC?

ZEPSEC is a cybersecurity platform that provides a suite of tools for vulnerability management, Indicators of Compromise (IoC) database, and threat tracking. It is described as an all-in-one intelligent threat detection, vulnerability assessment, and incident response planning tool. The platform aims to help organizations stay ahead of cyber threats and secure their digital assets.

ZEPSEC offers several key features:

  1. Live Vulnerability Tracking: Continuously monitors and identifies vulnerabilities in real-time, allowing organizations to address them promptly.
  2. Real-Time Risk Notifications: Alerts users to potential risks as they arise, ensuring that security teams can respond quickly to emerging threats.
  3. Incident Response Planner: Assists in planning and executing responses to security incidents, helping to minimize damage and recover swiftly.
  4. Multi-Organization Ready: Designed to support multiple organizations, making it suitable for managed service providers or large enterprises with multiple divisions.
  5. AI-Powered Virtual CISO: Leverages artificial intelligence to provide expert guidance and recommendations, acting as a virtual Chief Information Security Officer.

Open-Source and Subscription Options

ZEPSEC offers an open-source version in Russian, targeted at experienced security professionals. For those who prefer English, there is an AI-assisted version available through a paid subscription. This dual approach allows both budget-conscious users and those needing language support to benefit from the platform.

For organizations looking to enhance their cybersecurity posture, ZEPSEC offers a comprehensive set of tools to manage vulnerabilities, track threats, and plan incident responses. With both open-source and subscription-based options, it caters to a wide range of users, from seasoned security experts to those seeking AI-driven guidance.

Sources:

  • ZEPSEC GitHub Repository
Aditude: 360 Digital Ad Management

Wanna run your own digital ad server? Well NOW YOU CAN!
Just clone my repository, and you can make BILLIONS!! (Maybe.)

Aditude is an open-source platform designed to simplify the creation, management, and delivery of digital advertisements. Unlike cloud-based ad management solutions that often come with subscription fees and dependency on third-party providers, Aditude is self-hosted, giving users full control over their data and infrastructure. It’s tailored for media companies, publishers, and website networks that want to sell and manage banner ads efficiently across multiple sites.

The project, as described on its GitHub repository, supports a range of ad formats, including GIF, JPG, PNG, HTML5, and external scripts like Google AdSense. It also offers features like responsive banner creation, payment integration, and multi-language support, making it a versatile tool for diverse use cases.

Key Features of Aditude

Let’s break down some of the standout features that make Aditude a compelling choice for ad management:

1. Simplified Banner Creation with Shortcodes

Aditude uses shortcodes to streamline banner creation. These placeholders are replaced during ad delivery, allowing users to create banners without diving deep into complex coding. For example, a shortcode might define a banner’s position or creative, making it easy to update campaigns dynamically. This feature is particularly useful for non-technical users who need to manage ads efficiently.
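
Aditude is written in PHP, but the idea behind shortcode substitution fits in a few lines; the shortcode names and template below are hypothetical, not Aditude’s actual syntax:

import re

# Hypothetical shortcode values for one ad position; Aditude resolves these
# from its database at delivery time.
SHORTCODES = {
    "position": "header-728x90",
    "creative_url": "https://cdn.example.com/banner.png",
    "click_url": "https://example.com/landing",
}

TEMPLATE = '<a href="[click_url]"><img src="[creative_url]" data-position="[position]"></a>'

def render(template: str, values: dict) -> str:
    """Replace [shortcode] placeholders with their configured values."""
    return re.sub(r"\[(\w+)\]", lambda m: values.get(m.group(1), m.group(0)), template)

print(render(TEMPLATE, SHORTCODES))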

2. Support for Multiple Ad Formats

The platform supports a variety of ad formats, including:

  • Static banners: GIF, JPG, PNG.
  • HTML5 banners: Uploaded as ZIP files containing HTML, CSS, and JavaScript for interactive ads.
  • External scripts: Integration with platforms like Google AdSense for seamless third-party ad delivery.

This flexibility ensures that Aditude can handle both traditional and modern ad formats, catering to different advertiser needs.

3. Payment Integration

Aditude supports selling ads via PayPal and Coinbase, enabling crypto payments alongside traditional methods. Once a payment gateway is configured, advertisers can sign in, create campaigns, set budgets, and purchase ad slots. The system automatically calculates views or time duration based on the budget and can publish banners post-payment, streamlining the ad-buying process.

4. Responsive Banner Templates

Creating responsive banners that look great on all devices can be challenging, but Aditude simplifies this with built-in templates. Users can generate banners without writing code, making it accessible for those with limited design or development experience.

5. Multi-Language Support

For global publishers, Aditude’s multi-language support is a significant advantage. The platform can be configured to serve ads in different languages, ensuring a seamless experience for diverse audiences. Detailed instructions for setting this up are provided in the project’s configuration documentation.

6. Automated Banner Rotation

Aditude automatically rotates banners in designated positions, ensuring fair exposure for all active campaigns. This feature is crucial for media companies managing multiple advertisers across a network of websites.

7. Customizable User Experience

The platform allows customization of the login page and supports external login methods, giving administrators flexibility to align the system with their brand. Additionally, users can define sellable ad positions, tailoring the platform to their specific inventory.

8. Easy Setup and Scalability

Setting up Aditude is straightforward. After unzipping the files and placing them in the correct directory (with appropriate permissions like CHMOD 755 or 777), users navigate to the installation URL, input database credentials, and finalize the setup. Upon first login, Aditude creates example banners, positions, campaigns, and clients to help users get started quickly.

The platform is designed to scale, making it suitable for media companies operating networks of websites. It allows webmasters to configure ad positions and advertisers to purchase ads across the network, with robust tracking for payments and performance.

Why Choose Aditude?

Aditude stands out for several reasons:

  • Self-Hosted Control: By hosting Aditude on your own servers, you retain full control over your data and avoid reliance on third-party providers. This is a significant advantage for privacy-conscious organizations or those with specific compliance requirements.
  • Cost-Effective: As an open-source solution, Aditude eliminates recurring subscription costs, making it an attractive option for small to medium-sized publishers.
  • Flexibility: With support for multiple ad formats, payment gateways, and languages, Aditude adapts to a wide range of use cases, from single-site publishers to large media networks.
  • Ease of Use: Features like shortcodes, templates, and automated rotation make ad management accessible to users with varying levels of technical expertise.

Potential Use Cases

Aditude is particularly well-suited for:

  • Media Companies: Managing ad inventory across a network of websites, with centralized payment tracking and position configuration.
  • Independent Publishers: Running ads on a single site with full control over creatives and monetization.
  • Ad Networks: Facilitating ad sales for multiple clients, with support for diverse ad formats and payment methods.

Getting Started with Aditude

To explore Aditude, head to its GitHub repository at peteralcock/Aditude. The setup process is well-documented, and the repository includes code snippets, such as PHP functions for handling ad delivery and JavaScript for dynamic ad loading.

Here’s a quick overview of the setup steps:

  1. Ensure your server meets the necessary requirements (e.g., PHP, database support).
  2. Download and unzip the Aditude files from the repository.
  3. Place the files in your server’s directory and set appropriate file permissions.
  4. Navigate to the installation URL (e.g., http://yourdomain.com/yourfolder).
  5. Enter your database credentials and complete the setup.
  6. Log in to explore example campaigns and start customizing your ad management workflow.

Source: https://github.com/peteralcock/Aditude

Any questions? Just hit me up!

Detector Gadget: FIGHT CRIME WITH DATA FORENSICS

(No relation to that Inspector guy…) Detector Gadget is an eDiscovery and digital forensics analysis tool that leverages bulk_extractor to identify and extract features from digital evidence. Built with a containerized architecture, it provides a web interface for submitting, processing, and visualizing forensic analysis data.

As legal proceedings increasingly rely on digital evidence, organizations require sophisticated tools that can dissect massive data stores quickly and accurately. Enter Detector Gadget (no relation to that famous Inspector), an eDiscovery and digital forensics solution built to streamline the discovery process. By leveraging the powerful capabilities of bulk_extractor, Detector Gadget automates feature identification and extraction from digital evidence, reducing the time and complexity associated with traditional forensic methods.

Challenges in eDiscovery and Digital Forensics
The sheer volume and diversity of modern digital data create major hurdles for investigators, legal professionals, and security teams. Conventional forensic tools often struggle to handle large-scale analyses, making quick identification of relevant data a painstaking task. Additionally, it can be difficult to coordinate among multiple stakeholders—attorneys, forensic analysts, and IT teams—when the technology stack is fragmented.

Detector Gadget was developed to tackle these challenges head-on. By focusing on containerized deployment, a unified web interface, and automated data pipelines, it offers a holistic solution for eDiscovery and forensic analysis.

Containerized Architecture for Scalability and Reliability
One of Detector Gadget’s core design decisions is its containerized architecture. Each aspect of the platform—data ingestion, processing, and reporting—runs within its own container. This modular approach provides several benefits:

• Scalability: You can easily spin up or down additional containers to handle fluctuating workloads, ensuring the system meets the demands of any size or type of investigation.
• Portability: Containerization makes Detector Gadget simple to deploy in various environments, whether in on-premise servers or on cloud-based infrastructure such as AWS or Azure.
• Security and Isolation: By running processes in isolated containers, any vulnerabilities or misconfigurations are less likely to affect the overall system.

Bulk_extractor at the Core
Detector Gadget’s forensic engine is powered by bulk_extractor, a widely used command-line tool known for its ability to detect and extract multiple types of digital artifacts. Whether it’s credit card numbers, email headers, or other sensitive data hidden within disk images, bulk_extractor systematically scans and indexes vital information. This eliminates the guesswork in searching for specific data types and helps investigators home in on the exact evidence relevant to an inquiry.

The Web Interface: Streamlined Submission and Visualization
A standout feature of Detector Gadget is its intuitive web interface. Rather than grappling with command-line operations, investigators and eDiscovery professionals can:

• Submit Evidence: Upload disk images, file snapshots, or directory contents via a simple drag-and-drop interface.
• Configure Analysis: Select from various scanning options and data filters, customizing the bulk_extractor engine to focus on particular file types, geographical metadata, or communications records.
• Monitor Progress: Watch in real time as the system processes large data sets, providing rough time estimates and resource utilization metrics.
• View Results: Detector Gadget’s dashboards show interactive charts and graphs that visualize extracted features, from keyword hits to identified email addresses or financial details—making it easier to pinpoint patterns in the data.

Secure Collaboration and Audit Trails
In eDiscovery or digital forensics, it’s vital to maintain an irrefutable chain of custody and accurate tracking of user actions. Detector Gadget implements robust user authentication and role-based access controls, ensuring that only authorized personnel can perform specific tasks. Every action—from uploading evidence to exporting reports—is logged for audit trail purposes, satisfying compliance standards and safeguarding the integrity of the investigation.

Deployment and Integration
Detector Gadget also supports smooth integration with existing legal document management systems, case management platforms, and investigative workflows. The containerized design, combined with RESTful APIs, allows organizations to connect Detector Gadget’s findings with other collaboration and storage solutions. Whether you’re archiving analysis reports or triggering a deeper look into suspicious artifacts, the flexible architecture supports a wide range of custom integrations.

PROTOTYPING TIME: 30m(ish)

Features

  • Digital Forensics Analysis: Extract emails, credit card numbers, URLs, and more from digital evidence
  • Web-based Interface: Simple dashboard for job submission and results visualization
  • Data Visualization: Interactive charts and graphs for analysis results
  • Asynchronous Processing: Background job processing with Celery
  • Containerized Architecture: Kali Linux container for bulk_extractor and Python container for the web application
  • Report Generation: Automated generation and delivery of analysis reports
  • RESTful API: JSON API endpoints for programmatic access

Architecture

Detector Gadget consists of several containerized services:

  • Web Application (Flask): Handles user authentication, job submission, and results display
  • Background Worker (Celery): Processes analysis jobs asynchronously
  • Bulk Extractor (Kali Linux): Performs the actual forensic analysis
  • Database (PostgreSQL): Stores users, jobs, and extracted features
  • Message Broker (Redis): Facilitates communication between web app and workers
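
Roughly, the web tier hands work to the worker over Redis. The sketch below shows one way that hand-off could look; the route, task signature, and broker URL are assumptions, and the real wiring lives in app.py and celery_init.py:

from celery import Celery
from flask import Flask, jsonify, request

app = Flask(__name__)
celery = Celery("detector_gadget", broker="redis://redis:6379/0")  # Redis service from docker-compose

@celery.task
def process_job(job_id: int, file_path_or_url: str):
    # Hands the evidence to the bulk_extractor container and stores extracted
    # features in PostgreSQL (see the process_job excerpt later in this post).
    ...

@app.route("/jobs", methods=["POST"])      # hypothetical endpoint name
def submit_job():
    payload = request.get_json()
    result = process_job.delay(payload["job_id"], payload["source"])
    return jsonify({"task_id": result.id}), 202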

Getting Started

Prerequisites

  • Docker and Docker Compose
  • Git

Installation

  1. Clone the repository
   git clone https://github.com/yourusername/detector-gadget.git
   cd detector-gadget
  2. Start the services
   docker-compose up -d
  3. Initialize the database
   curl http://localhost:5000/init_db
  4. Access the application
    Open your browser and navigate to http://localhost:5000. Default admin credentials:
  • Username: admin
  • Password: admin

Usage

Submitting a Job

  1. Log in to the application
  2. Navigate to “Submit Job”
  3. Upload a file or provide a URL to analyze
  4. Specify an output destination (email or S3 URL)
  5. Submit the job

Viewing Results

  1. Navigate to “Dashboard” to see all jobs
  2. Click on a job to view detailed results
  3. Explore the visualizations and extracted features

Development

Running Tests

# Install test dependencies
gem install rspec httparty rack-test

# Run tests against a running application
rake test

# Or run tests in Docker
rake docker_test

Project Structure

detector-gadget/
├── Dockerfile.kali             # Kali Linux with bulk_extractor
├── Dockerfile.python           # Python application
├── README.md                   # This file
├── Rakefile                    # Test tasks
├── app.py                      # Main Flask application
├── celery_init.py              # Celery initialization
├── docker-compose.yml          # Service orchestration
├── entrypoint.sh               # Container entrypoint
├── requirements.txt            # Python dependencies
├── spec/                       # RSpec tests
│   ├── app_spec.rb             # API tests
│   ├── fixtures/               # Test fixtures
│   └── spec_helper.rb          # Test configuration
├── templates/                  # HTML templates
│   ├── dashboard.html          # Dashboard view
│   ├── job_details.html        # Job details view
│   ├── login.html              # Login form
│   ├── register.html           # Registration form
│   └── submit_job.html         # Job submission form
└── utils.py                    # Utility functions and tasks

Customization

Adding New Feature Extractors

Modify the process_job function in utils.py to add new extraction capabilities:

def process_job(job_id, file_path_or_url):
    # ...existing code...

    # Add custom bulk_extractor parameters: each scanner is enabled with "-e <name>"
    client.containers.run(
        'bulk_extractor_image',
        command='-o /output -e email -e url -e ccn -e your_new_scanner /input/file',
        volumes={
            file_path: {'bind': '/input/file', 'mode': 'ro'},
            output_dir: {'bind': '/output', 'mode': 'rw'}
        },
        remove=True
    )

    # ...existing code...

Configuring Email Delivery

Set these environment variables in docker-compose.yml:

environment:
  - SMTP_HOST=smtp.your-provider.com
  - SMTP_PORT=587
  - SMTP_USER=your-username
  - SMTP_PASS=your-password
  - SMTP_FROM=noreply@your-domain.com

Production Deployment

For production environments:

  1. Update secrets:
  • Generate a strong SECRET_KEY
  • Change default database credentials
  • Use environment variables for sensitive information
  2. Configure TLS/SSL:
  • Set up a reverse proxy (Nginx, Traefik)
  • Configure SSL certificates
  3. Backups:
  • Set up regular database backups
  4. Monitoring:
  • Implement monitoring for application health

Security Considerations

  • All user-supplied files are processed in isolated containers
  • Passwords are securely hashed with Werkzeug’s password hashing
  • Protected routes require authentication
  • Input validation is performed on all user inputs
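
For reference, the Werkzeug helpers mentioned above behave roughly like this (a minimal illustration, not an excerpt from app.py):

from werkzeug.security import check_password_hash, generate_password_hash

# On registration: store only the salted hash, never the plaintext password.
stored_hash = generate_password_hash("correct horse battery staple")

# On login: compare the submitted password against the stored hash.
assert check_password_hash(stored_hash, "correct horse battery staple")
assert not check_password_hash(stored_hash, "wrong password")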

Future Development

  • User roles and permissions
  • Advanced search capabilities
  • PDF report generation
  • Timeline visualization
  • Case management
  • Additional forensic tools

Troubleshooting

Common Issues

Bulk Extractor container fails to start

# Check container logs
docker logs detector-gadget_bulk_extractor_1

# Rebuild the container
docker-compose build --no-cache bulk_extractor

Database connection issues

# Ensure PostgreSQL is running
docker-compose ps db

# Check connection parameters
docker-compose exec web env | grep DATABASE_URL
