llama-cpp-python-djs-bot

THIS CODE IS MEANT TO BE SELF-HOSTED USING THE LIBRARY: https://abetlen.github.io/llama-cpp-python/

Description

This code is for a Discord bot that generates responses to user messages with a self-hosted large language model, served through llama-cpp-python's OpenAI-compatible API. It listens for messages in two specified Discord channels; when a user sends a message, the bot appends it to that user's conversation history and sends the history to the API to generate a response, which is then posted back to the user in the same channel. The bot uses the Node.js discord.js library to interact with the Discord API and the node-fetch library to make HTTP requests to the backend.

Here is a summary of the main parts of the code:

Import required modules and set environment variables using dotenv.

Create a new Client instance and set the intents and partials.

Define two channel IDs that the bot will listen to.

Create a Map to store ongoing conversations with users.

Define functions to update the bot's presence status, check if any conversation is busy, and set a conversation as busy or not busy.

Listen for the ready event and update the bot's presence status.

Listen for the messageCreate event and respond to messages that are sent in the specified channels.

When a message is received, check if any conversation is busy. If so, delete the message and send a busy response to the user.

If no conversation is busy, append the user message to the conversation history and send it to the backend API to generate a response.

If the response is not empty, send it back to the user in the same channel. If it is empty, send a reset message and delete the conversation history for that user.

Define a generateResponse function that sends a request to the backend API to generate a response, handling timeouts and other errors along the way (a sketch follows this list).

Call the generateResponse function within the messageCreate event listener function.
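
For orientation, the core request/response flow looks roughly like the sketch below. This is a minimal illustration, not the exact code in llamabot.js: the endpoint URL, timeout value, and helper names are assumptions.

```javascript
// Illustrative sketch only -- names and endpoint are assumptions,
// not the exact code in llamabot.js.
const fetch = require('node-fetch');

// llama-cpp-python serves an OpenAI-compatible API on port 8000 by default.
const API_URL = 'http://localhost:8000/v1/chat/completions';

// One conversation history per user, plus a busy flag while a reply is generating.
const conversations = new Map();

function setBusy(userId, busy) {
  const conversation = conversations.get(userId) || { messages: [], busy: false };
  conversation.busy = busy;
  conversations.set(userId, conversation);
}

async function generateResponse(conversation) {
  // Abort the request if the backend takes too long to answer.
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 60_000);
  try {
    const res = await fetch(API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: conversation.messages }),
      signal: controller.signal,
    });
    const data = await res.json();
    return data.choices?.[0]?.message?.content ?? '';
  } catch (err) {
    // Timeouts and network errors yield an empty reply, which the caller
    // treats as a signal to reset the conversation.
    console.error('generateResponse failed:', err);
    return '';
  } finally {
    clearTimeout(timeout);
  }
}
```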

Demo

Backend REQUIRED

The HTTP Server from https://abetlen.github.io/llama-cpp-python/ is required to use this bot.

llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI-compatible client (language libraries, services, etc.).

To install the server package and get started:

```bash
pip install llama-cpp-python[server]
export MODEL=./models/your_model.bin
python3 -m llama_cpp.server
```

Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
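
Once the server is up, you can sanity-check it with a request against the OpenAI-compatible chat endpoint (the exact paths are listed in the OpenAPI docs above):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```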

Static Usage

  1. Install dependencies: `npm i`

  2. Create a .env file: `cp default.env .env`

  3. Edit .env to fit your needs (an illustrative example follows this list)

  4. Go to https://discord.com/developers/applications and enable the Privileged Gateway Intents for your bot.

  5. Run the bot: `node llamabot.js`
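
For reference, a .env for static usage looks roughly like the sketch below. The key names here are illustrative placeholders only; the authoritative list of keys lives in default.env.

```
# Illustrative placeholders only -- see default.env for the real key names.
THE_TOKEN=your-discord-bot-token    # Discord bot token
CHANNEL_IDS=1111111111,2222222222   # the two channel IDs the bot listens to
ROOT_IP=127.0.0.1                   # host running the llama-cpp-python server
ROOT_PORT=8000                      # port of the llama-cpp-python server
```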

Docker Compose

This will automatically configure both the API and the bot in two separate containers within a stack.

  1. `git clone https://git.ssh.surf/snxraven/llama-cpp-python-djs-bot.git`

  2. `cp default.env .env`

  3. Set DATA_DIR in .env to the exact location of your model files.

  4. Edit docker-compose.yml so MODEL points at the correct model bin (see the excerpt after this list)

  5. `docker compose up -d`
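
For orientation, the relevant parts of the compose file look roughly like this sketch; service names and container paths here are illustrative, and the repo's docker-compose.yml is authoritative:

```yaml
# Illustrative excerpt only -- the real file is docker-compose.yml in the repo.
services:
  llama-server:
    build: .
    environment:
      - MODEL=/models/your-model.bin   # step 4: point this at the correct model file
    volumes:
      - ${DATA_DIR}:/models            # step 3: DATA_DIR comes from .env
  llama-bot:
    build: .
    depends_on:
      - llama-server
```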

Docker Compose with GPU

This will automatically configure both the bot and an API build with cuBLAS support for GPU inference, in two separate containers within a stack.

NOTE: Caching for GPU has been fixed.

  1. `git clone https://git.ssh.surf/snxraven/llama-cpp-python-djs-bot.git` - clone the repo

  2. `mv docker-compose.yml docker-compose.nogpu.yml; mv docker-compose.gpu.yml docker-compose.yml` - move the non-GPU compose file out of the way and enable GPU support

  3. `mv Dockerfile Dockerfile.nongpu; mv Dockerfile.gpu Dockerfile` - move the non-GPU Dockerfile out of the way and enable GPU support

  4. `cp default.gpu.env .env` - copy the default GPU .env into place

  5. Set DATA_DIR in .env to the exact location of your model files.

  6. Edit docker-compose.yml so MODEL points at the correct model bin

  7. Set N_GPU_LAYERS to the number of layers you would like to offload to the GPU (see the example after this list)

  8. `docker compose up -d`
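
As an example for step 7, the setting in .env might look like this; the value shown is illustrative, and the right number depends on your model size and available VRAM:

```
# Illustrative value only -- tune to your GPU's VRAM; more layers = more GPU offload.
N_GPU_LAYERS=35
```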

Want to make this better? Issue a pull request!