Backend Server Documentation
This document provides a detailed breakdown and explanation of the backend server code, highlighting the setup, configuration, and functionality of various components.
Overview
The backend server is built using Express.js and provides APIs for chat interactions, conversation management, and core service management. It uses several libraries to handle HTTP requests, parse HTML content, tokenize text, and perform Google searches.
Components and Workflow
1. Environment Configuration
The server uses environment variables for configuration, which are loaded with dotenv; a minimal loading sketch follows the variable list below.
- Environment Variables:
  - PROMPT: Initial system prompt for conversation history.
  - ABUSE_KEY: API key for abuse IP detection.
  - MAX_CONTENT_LENGTH: Maximum content length for scraped web page content.
  - API_KEY: API key for My-MC.link services.
  - PATH_KEY: Path key for My-MC.link services.
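A minimal sketch of how these variables might be loaded and read once dotenv is initialized at startup; the fallback values shown are illustrative assumptions, not taken from the source:

require('dotenv').config()

// Read configuration from the environment; fallbacks here are illustrative only
const systemPrompt = process.env.PROMPT || 'You are a helpful assistant.'
const abuseKey = process.env.ABUSE_KEY
const maxContentLength = parseInt(process.env.MAX_CONTENT_LENGTH || '2000', 10)
const apiKey = process.env.API_KEY
const pathKey = process.env.PATH_KEY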
2. Server Setup
The server is created with Express.js, and middleware is applied to handle CORS, parse JSON requests, and manage conversation history; a sketch of this setup follows the list below.
Express and Middleware Configuration
- CORS Middleware: Allows all origins and specific headers.
- Body-Parser Middleware: Parses JSON request bodies.
- Conversation History Middleware: Tracks conversation history based on the client IP and logs incoming requests.
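A sketch of how this setup might look; the port, the allowed headers, and the in-memory conversations map are assumptions rather than details taken from the actual code:

const express = require('express')
const bodyParser = require('body-parser')
const cors = require('cors')

const app = express()

// Allow all origins and the headers the API expects
app.use(cors({ origin: '*', allowedHeaders: ['Content-Type', 'Authorization'] }))

// Parse JSON request bodies
app.use(bodyParser.json())

// Track conversation history per client IP and log each incoming request
const conversations = new Map()
app.use((req, res, next) => {
  if (!conversations.has(req.ip)) {
    conversations.set(req.ip, [{ role: 'system', content: process.env.PROMPT }])
  }
  console.log(`[${new Date().toISOString()}] ${req.method} ${req.originalUrl} from ${req.ip}`)
  next()
})

app.listen(3000, () => console.log('Backend server listening on port 3000'))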
3. Helper Functions
Several helper functions are defined to support the server's operations (a sketch covering three of them follows the list):
- getTimestamp: Generates a timestamp for logging purposes.
- countLlamaTokens: Counts the number of tokens in conversation messages using the llama-tokenizer-js library.
- trimConversationHistory: Trims the conversation history if it exceeds a specified token limit.
- scrapeWebPage: Scrapes web page content using Cheerio and formats the extracted content.
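A sketch of getTimestamp, countLlamaTokens, and trimConversationHistory, assuming llama-tokenizer-js can be loaded with require (an ESM import may be needed depending on the package version); the default token limit and the oldest-message-first trimming strategy are assumptions:

const llamaTokenizer = require('llama-tokenizer-js')

// Timestamp prefix used in log messages
function getTimestamp() {
  return new Date().toISOString()
}

// Count tokens across every message in the conversation
function countLlamaTokens(messages) {
  return messages.reduce(
    (total, msg) => total + llamaTokenizer.encode(msg.content).length,
    0
  )
}

// Drop the oldest non-system messages until the history fits the token limit
function trimConversationHistory(messages, maxTokens = 4096) {
  while (countLlamaTokens(messages) > maxTokens && messages.length > 1) {
    messages.splice(1, 1) // keep the system prompt at index 0
  }
  return messages
}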
4. API Endpoints
The server provides multiple API endpoints to handle chat requests, restart core services, reset conversation history, and fetch conversation history. An illustrative sketch follows each endpoint description below.
/api/v1/chat
- Handles incoming chat requests.
- Logs the request and user message.
- Updates the conversation history.
- Performs asynchronous plugin tasks (e.g., scraping, IP address checking).
- Sends a request to the llama API for a response.
- Returns the response to the client.
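Building on the earlier sketches (app, conversations, getTimestamp, trimConversationHistory), and assuming Node 18+ so that fetch is available globally, a simplified version of this handler might look like the following; the LLAMA_API_URL variable, the request shape, and the response field names are assumptions:

app.post('/api/v1/chat', async (req, res) => {
  const history = conversations.get(req.ip)
  const userMessage = req.body.message

  console.log(`[${getTimestamp()}] Chat request from ${req.ip}: ${userMessage}`)
  history.push({ role: 'user', content: userMessage })

  try {
    // Plugin tasks (scraping, IP checks, etc.) would run here before the model call
    trimConversationHistory(history)

    // LLAMA_API_URL is an assumed environment variable pointing at the llama API
    const llamaResponse = await fetch(process.env.LLAMA_API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: history })
    })
    const data = await llamaResponse.json()

    history.push({ role: 'assistant', content: data.content })
    res.json({ response: data.content })
  } catch (err) {
    console.error(`[${getTimestamp()}] Chat request failed:`, err)
    res.status(500).json({ error: 'Failed to process chat request' })
  }
})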
/api/v1/restart-core
- Restarts the core service using a Docker command.
- Returns the result of the command execution.
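A sketch using the cmd-promise package; the container name is a placeholder, not the real one:

const cmd = require('cmd-promise')

app.post('/api/v1/restart-core', async (req, res) => {
  try {
    // 'core-service' is a placeholder container name
    const out = await cmd('docker restart core-service')
    res.json({ result: out.stdout.trim() })
  } catch (err) {
    console.error(`[${getTimestamp()}] Core restart failed:`, err)
    res.status(500).json({ error: 'Failed to restart core service' })
  }
})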
/api/v1/reset-conversation
- Resets the conversation history for the client IP.
- Returns a confirmation message.
/api/v1/conversation-history
- Fetches the conversation history for the client IP.
- Returns the conversation history.
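The last two endpoints can be sketched together; the HTTP methods and response shapes shown are assumptions:

// Reset the stored history for the requesting client
app.post('/api/v1/reset-conversation', (req, res) => {
  conversations.set(req.ip, [{ role: 'system', content: process.env.PROMPT }])
  res.json({ message: 'Conversation history reset.' })
})

// Return the stored history for the requesting client
app.get('/api/v1/conversation-history', (req, res) => {
  res.json(conversations.get(req.ip) || [])
})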
5. Plugin Handling
Various plugins are used to enhance the bot's functionality; a sketch of a simple dispatcher follows the list:
- IP Plugin: Checks IP addresses against AbuseIPDB.
- URL Plugin: Scrapes web page content.
- New Person Plugin: Fetches random user data.
- What Servers Plugin: Fetches Minecraft server information.
- My-MC Plugin: Fetches My-MC.Link wiki information.
- dlinux Plugin: Fetches dlinux wiki information.
- Search Plugin: Performs Google searches and scrapes search results.
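One common way to wire such plugins in is to scan the user message for trigger patterns and run the matching handlers before calling the model. The sketch below shows this for the URL and IP plugins only; the trigger patterns are assumptions, and checkAbuseIp is a hypothetical helper standing in for the AbuseIPDB lookup:

// Run plugin tasks whose trigger patterns match the user message
async function runPlugins(message, history) {
  const urlMatch = message.match(/https?:\/\/\S+/)
  if (urlMatch) {
    // URL plugin: scrape the linked page and add its content as context
    const pageContent = await scrapeWebPage(urlMatch[0])
    history.push({ role: 'system', content: `Scraped content: ${pageContent}` })
  }

  const ipMatch = message.match(/\b(?:\d{1,3}\.){3}\d{1,3}\b/)
  if (ipMatch) {
    // IP plugin: look up the address in AbuseIPDB (checkAbuseIp is hypothetical)
    const report = await checkAbuseIp(ipMatch[0])
    history.push({ role: 'system', content: `AbuseIPDB report: ${JSON.stringify(report)}` })
  }
}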
6. Error Handling and Logging
The server includes error handling and logging throughout its operations to ensure robustness and provide detailed information on failures and processing times.
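The actual code may log inline rather than through a helper, but the pattern can be illustrated with a small wrapper that times an operation and logs failures:

// Time an async operation and log its outcome with a timestamp
async function withLogging(label, task) {
  const start = Date.now()
  try {
    const result = await task()
    console.log(`[${getTimestamp()}] ${label} completed in ${Date.now() - start} ms`)
    return result
  } catch (err) {
    console.error(`[${getTimestamp()}] ${label} failed after ${Date.now() - start} ms:`, err)
    throw err
  }
}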
How to Run the Server
- Clone the Repository: Ensure you have the server's code in your project directory.
- Install Dependencies: Use npm to install the required dependencies:
npm install express body-parser cmd-promise cors cheerio llama-tokenizer-js google-it dotenv
- Configure Environment Variables: Create a .env file in the root directory and populate it with the required variables as described above (an illustrative example follows these steps).
- Run the Server: Use Node.js to start the server:
node backend-server.js
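An illustrative .env file; every value shown is a placeholder:

PROMPT=You are a helpful assistant for chat interactions.
ABUSE_KEY=your-abuseipdb-api-key
MAX_CONTENT_LENGTH=2000
API_KEY=your-my-mc-link-api-key
PATH_KEY=your-my-mc-link-path-key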
Summary
The backend server handles chat interactions, manages conversation history, and provides various plugins to enhance functionality. It includes robust error handling, logging, and the ability to reset conversations and restart core services. Proper environment configuration and dependency installation are crucial for the server to run effectively.