Update Readme

This commit is contained in: parent bcb17284cb, commit eb80d460cc

README.md

# RayAI and Backend Server

This project contains the RayAI bot source code as well as the backend server that supports its operations.

**Note**: A `llama-cpp-python` OpenAI Emulation Server is required alongside the backend server. The server in this code is configured to run on `127.0.0.1:8002`.

## Prerequisites

- Node.js installed
- Python installed
- `llama-cpp-python` server

### Running the llama-cpp-python Server

1. Install `llama-cpp-python` with server capabilities:

   ```bash
   pip install llama-cpp-python[server]
   ```

2. Start the server:

   ```bash
   python3 -m llama_cpp.server --model <model_path>
   ```

## Cloning the Repository

1. Clone the repository:

   ```bash
   git clone https://git.ssh.surf/snxraven/rayai
   cd rayai
   ```

## Backend Server Setup

1. Navigate to the backend server directory and install dependencies:

   ```bash
   cd backend-server
   npm install
   ```

2. Create a `.env` file in the `backend-server` directory with the following environment variables:

   ```env
   PROMPT=<initial-prompt>
   ABUSE_KEY=<abuse-ipdb-api-key>
   # optional
   MAX_CONTENT_LENGTH=2000
   API_KEY=<my-mc-api-key>
   PATH_KEY=<my-mc-path-key>
   ```

3. Start the backend server:

   ```bash
   node backend-server.js
   ```

## Discord Bot Setup

1. Navigate to the bot directory and install dependencies:

   ```bash
   cd ../bot
   npm install
   ```

2. Create a `.env` file in the `bot` directory with the following environment variables:

   ```env
   THE_TOKEN=<your-discord-bot-token>
   CHANNEL_IDS=<comma-separated-list-of-channel-ids>
   ROOT_IP=<root-ip-address>
   ROOT_PORT=<root-port-number>
   API_PATH=<api-path>
   # optional, default is 8000
   MAX_CONTENT_LENGTH=8000
   MAX_TOKENS=<max-tokens>
   REPEAT_PENALTY=<repeat-penalty>
   # optional, delay in seconds between message chunks
   OVERFLOW_DELAY=3
   ```

3. Start the Discord bot:

   ```bash
   node discord-bot.js
   ```

## Summary

1. **Run the llama-cpp-python server** to emulate the OpenAI API.
2. **Set up and run the backend server** to handle API requests, conversation history, and additional functionality.
3. **Set up and run the Discord bot** to interact with users and use the backend server for processing.

Ensure all environment variables are correctly configured, dependencies are installed, and both servers are running to enable full functionality of RayAI and its backend server. A quick reachability check for the llama-cpp-python server is sketched below.
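
Before starting the backend, you can confirm the llama-cpp-python server is reachable by sending a minimal request to its OpenAI-compatible endpoint. This is a hedged sketch, assuming Node 18+ (built-in `fetch`) and the `127.0.0.1:8002` address noted above; the prompt text is only an example.

```js
// check-llama.js - quick reachability check for the llama-cpp-python server
const endpoint = 'http://127.0.0.1:8002/v1/chat/completions';

async function checkLlamaServer() {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Say hello in one short sentence.' }],
      max_tokens: 32
    })
  });

  if (!res.ok) {
    throw new Error(`Server responded with HTTP ${res.status}`);
  }

  const data = await res.json();
  console.log('llama-cpp-python server is up:', data.choices[0].message.content);
}

checkLlamaServer().catch(err => console.error('Check failed:', err.message));
```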

backend-server/README.md (new file)

# Backend Server Documentation

This document provides a detailed breakdown of the backend server code, covering the setup, configuration, and functionality of its components.

## Overview

The backend server is built with Express.js and provides APIs for chat interactions, conversation management, and core service management. It uses several libraries to handle HTTP requests, parse HTML content, tokenize text, and perform Google searches.

## Components and Workflow

### 1. **Environment Configuration**

The server is configured through environment variables, which are loaded with `dotenv` (a minimal loading sketch follows the list below).

- **Environment Variables**:
  - `PROMPT`: Initial system prompt for the conversation history.
  - `ABUSE_KEY`: API key for AbuseIPDB IP checks.
  - `MAX_CONTENT_LENGTH`: Maximum content length for scraped web page content.
  - `API_KEY`: API key for My-MC.link services.
  - `PATH_KEY`: Path key for My-MC.link services.
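
As a rough illustration of how these variables might be loaded, here is a minimal sketch using `dotenv` (one of the listed dependencies); the fallback value for `MAX_CONTENT_LENGTH` is an example, not necessarily the server's actual default.

```js
// config.js - minimal sketch of loading the backend server configuration
require('dotenv').config();

const config = {
  prompt: process.env.PROMPT,                                        // initial system prompt
  abuseKey: process.env.ABUSE_KEY,                                   // AbuseIPDB API key
  maxContentLength: Number(process.env.MAX_CONTENT_LENGTH) || 2000,  // example fallback
  apiKey: process.env.API_KEY,                                       // My-MC.link API key
  pathKey: process.env.PATH_KEY                                      // My-MC.link path key
};

module.exports = config;
```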

### 2. **Server Setup**

The server is created with Express.js, and several middlewares are applied to handle CORS, parse JSON requests, and manage conversation history.

#### **Express and Middleware Configuration**

- **CORS Middleware**: Allows all origins and specific headers.
- **Body-Parser Middleware**: Parses JSON request bodies.
- **Conversation History Middleware**: Tracks conversation history based on the client IP and logs incoming requests (see the sketch after this list).
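
The following is a minimal sketch of how such a setup could look with `express`, `cors`, and `body-parser` (all listed dependencies). The in-memory `conversationHistory` object and the seeding of each history with `PROMPT` are assumptions based on the behavior described above, not the exact implementation.

```js
const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
require('dotenv').config();

const app = express();
const conversationHistory = {}; // per-IP message history, kept in memory

app.use(cors());            // allow all origins
app.use(bodyParser.json()); // parse JSON request bodies

// Track conversation history per client IP and log incoming requests.
app.use((req, res, next) => {
  const ip = req.ip;
  if (!conversationHistory[ip]) {
    conversationHistory[ip] = [{ role: 'system', content: process.env.PROMPT }];
  }
  console.log(`[INFO] ${req.method} ${req.path} from ${ip}`);
  next();
});
```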

### 3. **Helper Functions**

Several helper functions support the server's operations (a token-counting and trimming sketch follows this list):

- **getTimestamp**: Generates a timestamp for logging purposes.
- **countLlamaTokens**: Counts the number of tokens in conversation messages using the llama-tokenizer library.
- **trimConversationHistory**: Trims the conversation history if it exceeds a specified token limit.
- **scrapeWebPage**: Scrapes web page content using Cheerio and formats the extracted content.
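
A minimal sketch of token counting and history trimming, assuming an ES-module import of `llama-tokenizer-js` and its `encode()` method. The `trimConversationHistory(messages, maxLength, tolerance)` signature matches the hunk shown later in this commit, but the body below is an illustrative guess, not the server's actual logic.

```js
import llamaTokenizer from 'llama-tokenizer-js';

// Count llama tokens across all message contents in a conversation.
function countLlamaTokens(messages) {
  return messages.reduce(
    (total, msg) => total + llamaTokenizer.encode(msg.content || '').length,
    0
  );
}

// Drop the oldest non-system messages until the history fits under the limit.
function trimConversationHistory(messages, maxLength, tolerance) {
  while (countLlamaTokens(messages) > maxLength + tolerance && messages.length > 1) {
    messages.splice(1, 1); // keep the system prompt at index 0
  }
  return messages;
}
```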

### 4. **API Endpoints**

The server exposes several endpoints to handle chat requests, restart the core service, reset conversation history, and fetch conversation history.

#### **/api/v1/chat**

- Handles incoming chat requests.
- Logs the request and the user message.
- Updates the conversation history.
- Performs asynchronous plugin tasks (e.g., scraping, IP address checking).
- Sends a request to the llama API for a response.
- Returns the response to the client.

#### **/api/v1/restart-core**

- Restarts the core service using a Docker command (see the sketch after this list).
- Returns the result of the command execution.
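
A hedged sketch of what this endpoint could look like using `cmd-promise` (a listed dependency), reusing the `app` instance from the earlier sketch. The container name `llama-core` is purely a placeholder; the actual Docker command is not shown in this document.

```js
const cmd = require('cmd-promise');

// Restart the core service via Docker. The container name below is a placeholder.
app.post('/api/v1/restart-core', async (req, res) => {
  try {
    const out = await cmd('docker restart llama-core');
    res.json({ message: 'Core restart requested.', stdout: out.stdout });
  } catch (err) {
    console.error('[ERROR] Failed to restart core:', err);
    res.status(500).json({ error: 'Failed to restart the core service.' });
  }
});
```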

#### **/api/v1/reset-conversation**

- Resets the conversation history for the client IP.
- Returns a confirmation message.

#### **/api/v1/conversation-history**

- Fetches the conversation history for the client IP.
- Returns the conversation history.

A hedged example of calling these endpoints from a client is sketched below.
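
The calls below assume the chat endpoint accepts a JSON body carrying the user message in a `message` field, that reset is a POST, and that the history endpoint is a GET; the exact request shapes live in the server code and may differ. The base address is a placeholder for your backend host and port.

```js
// Example client calls. The backend address placeholder and the request/response
// shapes (e.g. the `message` field) are assumptions, not confirmed by this document.
const BASE = 'http://<root-ip>:<root-port>';

async function demo() {
  // Chat request.
  const chat = await fetch(`${BASE}/api/v1/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Hello Ray!' })
  });
  console.log(await chat.json());

  // Reset this client's conversation history.
  await fetch(`${BASE}/api/v1/reset-conversation`, { method: 'POST' });

  // Fetch this client's conversation history (assumed to be a GET endpoint).
  const history = await fetch(`${BASE}/api/v1/conversation-history`);
  console.log(await history.json());
}

demo().catch(console.error);
```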

### 5. **Plugin Handling**

Various plugins extend the bot's functionality (a hedged dispatch sketch follows this list):

- **IP Plugin**: Checks IP addresses against the AbuseIPDB.
- **URL Plugin**: Scrapes web page content.
- **New Person Plugin**: Fetches random user data.
- **What Servers Plugin**: Fetches Minecraft server information.
- **My-MC Plugin**: Fetches My-MC.Link wiki information.
- **dlinux Plugin**: Fetches dlinux wiki information.
- **Search Plugin**: Performs Google searches and scrapes search results.
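
How plugins are triggered is not spelled out here. As one hedged possibility, the URL plugin could work roughly like the hypothetical function below, reusing the `scrapeWebPage(url, length)` helper and appending the result to the latest message, similar to what the search plugin does in the code change shown later.

```js
// Hypothetical URL-plugin dispatch: detect a URL in the user message,
// scrape it, and fold the result into the conversation history.
async function handleUrlPlugin(userMessage, ip, conversationHistory) {
  const match = userMessage.match(/https?:\/\/\S+/);
  if (!match) return;

  const scraped = await scrapeWebPage(match[0], Number(process.env.MAX_CONTENT_LENGTH) || 2000);
  const last = conversationHistory[ip].length - 1;
  if (last >= 0 && scraped) {
    conversationHistory[ip][last].content += `\nScraped page content:\n${scraped}`;
  }
}
```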

### 6. **Error Handling and Logging**

The server includes error handling and logging throughout its operations to ensure robustness and to provide detailed information on failures and processing times.

## How to Run the Server

1. **Clone the Repository**: Ensure you have the server's code in your project directory.

2. **Install Dependencies**: Use npm to install the required dependencies:

   ```bash
   npm install express body-parser cmd-promise cors cheerio llama-tokenizer-js google-it dotenv
   ```

3. **Configure Environment Variables**: Create a `.env` file in the root directory and populate it with the required variables as described above.

4. **Run the Server**: Use Node.js to start the server:

   ```bash
   node <your-server-file>.js
   ```

### Summary

The backend server handles chat interactions, manages conversation history, and provides various plugins to extend its functionality. It includes robust error handling, logging, and the ability to reset conversations and restart core services. Correct environment configuration and installed dependencies are essential for the server to run properly.

Changes to the backend server source:

@@ -106,7 +106,7 @@ function trimConversationHistory(messages, maxLength, tolerance) {
 
 // Function to scrape web page
 async function scrapeWebPage(url, length) {
-    console.log(`${getTimestamp()} [INFO] Starting to scrape URL: ${url}`);
+    //console.log(`${getTimestamp()} [INFO] Starting to scrape URL: ${url}`);
     try {
         const res = await fetch(url);
         const html = await res.text();
@@ -139,7 +139,7 @@ async function scrapeWebPage(url, length) {
         }
         response += `\nURL: ${url}`;
 
-        // console.log(`${getTimestamp()} [INFO] Successfully scraped URL: ${url}`);
+        console.log(`${getTimestamp()} [INFO] Successfully scraped URL: ${url}`);
         return response;
     } catch (err) {
         console.error(`${getTimestamp()} [ERROR] Failed to scrape URL: ${url}`, err);
@@ -254,7 +254,7 @@ app.post('/api/v1/chat', async (req, res) => {
         messages: conversationHistory[ip]
     };
 
-    console.log(sent);
+    //console.log(sent);
     const llamaResponse = await fetch(`http://127.0.0.1:8002/v1/chat/completions`, {
         method: 'POST',
         headers: {
@@ -502,8 +502,8 @@ async function handleSearchPlugin(searchQuery, ip, conversationHistory) {
 
     const lastMessageIndex = conversationHistory[ip].length - 1;
     if (lastMessageIndex >= 0) {
-        conversationHistory[ip][lastMessageIndex].content += "\nYou scraped these results, review them, Provide the URLs scraped in your review: \n" + searchResponse;
-        console.log(`${getTimestamp()} [INFO] Processed search query: ${searchQuery}, response: ${searchResponse}`);
+        conversationHistory[ip][lastMessageIndex].content += "\nYou scraped these results, generate a detailed report, Also provide the list of the scraped URLs in your report. \n" + searchResponse;
+        console.log(`${getTimestamp()} [INFO] Processed search query: ${searchQuery}.`);
     } else {
         console.error(`${getTimestamp()} [ERROR] Conversation history is unexpectedly empty for: ${ip}`);
     }

bot/README.md (new file)

# Technical Breakdown and Documentation

This document provides a detailed technical breakdown of the Discord bot, explaining the purpose and functionality of each component in the provided code.

## Overview

The bot is designed to handle user interactions within specified Discord channels, perform actions based on specific commands, and manage conversation resets and core restarts. It relies on several libraries: Discord.js for Discord interactions, Axios for HTTP requests, and `he` for encoding messages.

## Components and Workflow

### 1. **Environment Configuration**

The bot uses environment variables for configuration. These are loaded with `dotenv` and include sensitive information such as the bot token and API paths.

- **Environment Variables**:
  - `THE_TOKEN`: Discord bot token.
  - `CHANNEL_IDS`: Comma-separated list of Discord channel IDs where the bot is active.
  - `ROOT_IP` and `ROOT_PORT`: IP address and port of the backend server.
  - `API_PATH`: API endpoint for backend services.
  - `MAX_CONTENT_LENGTH`, `MAX_TOKENS`, `REPEAT_PENALTY`, `OVERFLOW_DELAY`: Additional configuration parameters.

### 2. **Client Initialization**

A Discord client is created with intents to listen for guild messages and message content (a minimal initialization sketch follows the list below).

- **Intents**: Specify the types of events the bot listens for, including guild messages and message content.
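
A minimal sketch of this initialization, assuming discord.js v14 (intent names differ in older versions):

```js
const { Client, GatewayIntentBits } = require('discord.js');
require('dotenv').config();

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent
  ]
});

client.once('ready', () => {
  console.log(`Logged in as ${client.user.tag}`);
});

client.login(process.env.THE_TOKEN);
```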

### 3. **Event Handling**

The bot registers listeners for the `ready` event (fired once it has logged in) and the `messageCreate` event (fired for each new message).

#### **Ready Event**

- Logs a message indicating the bot has successfully logged in.

#### **MessageCreate Event**

- **Bot Message Handling**: Ignores messages from other bots.
- **Channel Filtering**: Processes messages only if they come from the configured channels.
- **Command Handling**: Recognizes and handles specific commands (see the sketch after this list):
  - `!r` or `!reset`: Resets the conversation.
  - `!restartCore`: Restarts the core process.
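
A hedged sketch of this routing, reusing the `client` from the previous sketch; `resetConversation`, `restartCore`, and `handleUserMessage` are the functions described below, and their exact signatures here are assumptions.

```js
const channelIds = process.env.CHANNEL_IDS.split(',');

client.on('messageCreate', async (message) => {
  if (message.author.bot) return;                        // ignore other bots
  if (!channelIds.includes(message.channel.id)) return;  // only configured channels

  const text = message.content.trim();
  if (text === '!r' || text === '!reset') {
    return resetConversation(message);
  }
  if (text === '!restartCore') {
    return restartCore(message);
  }
  return handleUserMessage(message);
});
```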

### 4. **Command Functions**

#### **resetConversation**

- Sends a POST request to the backend API to reset the conversation for the user.

#### **restartCore**

- Sends a POST request to the backend API to restart the core service (a hedged sketch of both calls follows).
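
A minimal sketch of both calls using `axios` (a listed dependency) and the endpoints documented for the backend server; the reply texts are placeholders.

```js
const axios = require('axios');

const BACKEND = `http://${process.env.ROOT_IP}:${process.env.ROOT_PORT}`;

// Reset the conversation for the requesting user.
async function resetConversation(message) {
  await axios.post(`${BACKEND}/api/v1/reset-conversation`);
  await message.reply('Conversation history has been reset.');
}

// Restart the core service behind the backend.
async function restartCore(message) {
  await axios.post(`${BACKEND}/api/v1/restart-core`);
  await message.reply('Core restart requested.');
}
```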

### 5. **Message Handling**

#### **handleUserMessage**

- Encodes the user's message.
- Sends a typing indicator in the Discord channel.
- Makes a POST request to the backend chat API with the encoded message.
- Handles the response, sending it back to the Discord channel, splitting it into multiple chunks if it exceeds the content length limit (see the sketch after this list).
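
A hedged sketch of this flow, reusing `axios` and `BACKEND` from the previous sketch. The request field names, the shape of the response, and the use of `API_PATH` as the chat path are assumptions, not confirmed by this document.

```js
const he = require('he');

// Hypothetical sketch: encode the message, show typing, ask the backend, reply.
async function handleUserMessage(message) {
  const encoded = he.encode(message.content);
  await message.channel.sendTyping();

  try {
    const res = await axios.post(`${BACKEND}${process.env.API_PATH}`, {
      message: encoded,                                // field name is an assumption
      max_tokens: Number(process.env.MAX_TOKENS),
      repeat_penalty: Number(process.env.REPEAT_PENALTY)
    });
    await sendLongMessage(message.channel, String(res.data)); // response shape is an assumption
  } catch (err) {
    console.error('Chat request failed:', err.message);
    await message.reply('Sorry, something went wrong while contacting the backend.');
  }
}
```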

#### **sendLongMessage**

- Splits long messages into smaller chunks.
- Sends each chunk as an embedded message in the Discord channel, with a delay between chunks if necessary (see the sketch after this list).
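
A minimal sketch of the chunking, assuming discord.js v14 embeds. The 4000-character chunk size is an assumption chosen to stay under Discord's 4096-character embed description limit; how the real bot sizes its chunks is not specified here.

```js
const { EmbedBuilder } = require('discord.js');

const CHUNK_SIZE = 4000; // stays under Discord's 4096-char embed description limit
const OVERFLOW_DELAY_MS = (Number(process.env.OVERFLOW_DELAY) || 3) * 1000;
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Split a long reply into chunks and send each one as an embed, pausing between chunks.
async function sendLongMessage(channel, text) {
  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    const embed = new EmbedBuilder().setDescription(text.slice(i, i + CHUNK_SIZE));
    await channel.send({ embeds: [embed] });
    if (i + CHUNK_SIZE < text.length) await sleep(OVERFLOW_DELAY_MS);
  }
}
```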

### 6. **Error Handling**

- Handles errors during API requests, including rate limiting and other issues, and notifies the user accordingly.

## How to Run the Bot

1. **Clone the Repository**: Ensure you have the bot's code in your project directory.

2. **Install Dependencies**: Use npm to install the required dependencies:

   ```bash
   npm install discord.js axios he cheerio dotenv
   ```

3. **Configure Environment Variables**: Create a `.env` file in the root directory and populate it with the required variables as described above.

4. **Run the Bot**: Use Node.js to start the bot:

   ```bash
   node <your-bot-file>.js
   ```

## Summary

The bot listens for messages in specified channels, processes commands for resetting conversations and restarting core services, and interacts with a backend API to handle user messages. It includes robust error handling and can split and send long messages in manageable chunks. Ensure proper environment configuration and dependency installation to run the bot effectively.