
AI NGINX Log Analysis

Overview

This project is an Express.js-based server that interacts with language models through a llama-cpp-python[web] emulation server to analyze NGINX logs for potential security threats. The server processes incoming requests, analyzes the log content, and returns responses containing alerts or general insights. It can also scrape web pages and maintains a separate conversation history per client IP address.

Features

  • NGINX Log Analysis: Analyzes web traffic logs to identify potential security threats and generate appropriate responses.
  • Conversation History: Maintains a history of interactions for each client IP, allowing for context-aware responses.
  • Web Scraping: Scrapes web pages to extract and format relevant information.
  • Token Management: Trims the conversation so the total token count stays within the model's context limit.
  • Core Service Management: Provides endpoints to restart the core GPT service and reset conversation histories.
  • Cross-Origin Resource Sharing (CORS): Enabled for all routes, allowing for flexible API usage.
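The token-management behavior described above might look like the following sketch. This is illustrative only; the function and variable names, the token budget, and the character-per-token heuristic are assumptions, and the actual logic lives in ai_log.js.

```javascript
// Illustrative sketch of token management: drop the oldest non-system
// messages until a rough token estimate fits the budget. Names and the
// budget value are assumptions, not the code in ai_log.js.
const MAX_TOKENS = 4096; // assumed context budget

// Crude token estimate: roughly 4 characters per token.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Remove the oldest non-system messages until the history fits the budget.
function trimHistory(history, maxTokens = MAX_TOKENS) {
  const trimmed = [...history];
  let total = trimmed.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (total > maxTokens && trimmed.length > 1) {
    const removed = trimmed.splice(1, 1)[0]; // keep the system prompt at index 0
    total -= estimateTokens(removed.content);
  }
  return trimmed;
}
```

Keeping the system prompt pinned at index 0 preserves the model's instructions even when older user messages are evicted.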

Installation

Note: A llama-cpp-python OpenAI emulation server must be running alongside this backend server; the code expects it at 127.0.0.1:8003.

  1. Clone the repository:

    git clone git@git.ssh.surf:snxraven/ai-nginx-log-security.git
    cd ai-nginx-log-security
    
  2. Install dependencies:

    npm install
    
  3. Create a .env file in the project root with the following content:

    MAX_CONTENT_LENGTH=2000
    
  4. Start the server:

    node ai_log.js
    

Endpoints

1. /api/v1/chat

  • Method: POST
  • Description: Processes a user message, analyzes it for security threats, and returns a response.
  • Request Body:
    • message: The message to be analyzed.
  • Response: JSON object containing the response from the GPT model.
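A request body for this endpoint might look like the following; the log line shown is purely illustrative, and the exact response shape depends on ai_log.js and the model:

```json
{
  "message": "203.0.113.7 - - [09/Aug/2024:03:25:01 -0400] \"GET /wp-login.php HTTP/1.1\" 404 162"
}
```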

2. /api/v1/conversation-history

  • Method: GET
  • Description: Retrieves the conversation history for the requesting client's IP.
  • Response: JSON object containing the conversation history.

3. /api/v1/restart-core

  • Method: POST
  • Description: Restarts the core GPT service running in a Docker container.
  • Response: JSON object confirming the restart or detailing any errors.

4. /api/v1/reset-conversation

  • Method: POST
  • Description: Resets the conversation history for the requesting client's IP.
  • Response: JSON object confirming the reset.

Environment Variables

  • MAX_CONTENT_LENGTH: The maximum length for the content extracted from web pages during scraping (default: 2000 characters).
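Reading this variable with a fallback to the 2000-character default might look like the sketch below. The function name is illustrative; with dotenv, `require('dotenv').config()` would populate `process.env` from the .env file first.

```javascript
// Sketch of reading MAX_CONTENT_LENGTH with a fallback default.
// The real handling is in ai_log.js; this names are illustrative.
const MAX_CONTENT_LENGTH = parseInt(process.env.MAX_CONTENT_LENGTH, 10) || 2000;

// Truncate scraped page content to the configured limit.
function truncateContent(text, limit = MAX_CONTENT_LENGTH) {
  return text.length > limit ? text.slice(0, limit) : text;
}
```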

Usage

  1. Send a POST request to /api/v1/chat with a message to analyze web traffic logs.
  2. Use the /api/v1/conversation-history endpoint to fetch the chat history.
  3. Restart the core service using the /api/v1/restart-core endpoint if needed.
  4. Reset conversation history using the /api/v1/reset-conversation endpoint.
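The steps above can be exercised with curl. The port below is a placeholder; check ai_log.js for the port the Express server actually listens on.

```shell
# 1. Analyze a log line (port 3000 is a placeholder; check ai_log.js)
curl -X POST http://127.0.0.1:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "203.0.113.7 - - [09/Aug/2024:03:25:01 -0400] \"GET /wp-login.php HTTP/1.1\" 404 162"}'

# 2. Fetch this client's conversation history
curl http://127.0.0.1:3000/api/v1/conversation-history

# 3. Restart the core GPT service
curl -X POST http://127.0.0.1:3000/api/v1/restart-core

# 4. Reset this client's conversation history
curl -X POST http://127.0.0.1:3000/api/v1/reset-conversation
```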

Logging

The server logs key actions, including incoming requests, conversation history management, and errors. Logs are timestamped and include IP addresses for traceability.

Notes

  • Ensure the llama-gpu-server Docker container is running before starting the Express.js server.
  • Conversation history is stored in memory and will be lost when the server restarts. Consider implementing persistent storage if long-term history is required.
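The per-IP, in-memory storage described above can be pictured as a simple Map keyed by client IP. This is an illustrative sketch, not the code in ai_log.js.

```javascript
// Illustrative sketch of per-IP, in-memory conversation storage (the real
// implementation is in ai_log.js). Everything here is lost on restart.
const conversations = new Map(); // key: client IP, value: message array

function getHistory(ip) {
  if (!conversations.has(ip)) {
    conversations.set(ip, []); // start a fresh history for new clients
  }
  return conversations.get(ip);
}

function addMessage(ip, role, content) {
  getHistory(ip).push({ role, content });
}

function resetHistory(ip) {
  conversations.delete(ip); // what /api/v1/reset-conversation would do
}
```

Swapping the Map for a database or a file-backed store would be the natural path to the persistent history the note above suggests.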

Contributions

Contributions are welcome! Please fork the repository and submit a pull request with your changes.

This project leverages cutting-edge AI to enhance web security analysis, making it easier to identify and respond to threats in real time.