# llama-cpp-python-djs-bot

THIS CODE IS MEANT TO BE SELF-HOSTED USING THE LIBRARY: https://abetlen.github.io/llama-cpp-python/
## Description

This is a Discord bot that uses a self-hosted large language model (served at home through llama-cpp-python's OpenAI-compatible API, in place of OpenAI's GPT-3) to generate responses to user messages. It listens for messages in two specified Discord channels; when a user sends a message, the bot appends it to that user's conversation history and sends the history to the completion API to generate a response, which is then posted back in the same channel. The bot uses the Node.js discord.js library to interact with the Discord API and the node-fetch library to make HTTP requests to the completion API.
Here is a summary of the main parts of the code:

- Import the required modules and load environment variables with dotenv.
- Create a new Client instance and set its intents and partials.
- Define the two channel IDs the bot listens to.
- Create a Map to store ongoing conversations with users.
- Define functions to update the bot's presence status, check whether any conversation is busy, and mark a conversation as busy or not busy.
- Listen for the ready event and update the bot's presence status.
- Listen for the messageCreate event and respond to messages sent in the specified channels.
- When a message is received, check whether any conversation is busy. If so, delete the message and send a busy response to the user.
- If no conversation is busy, append the user's message to the conversation history and send it to the completion API to generate a response.
- If the response is not empty, send it back to the user in the same channel. If it is empty, send a reset message and delete that user's conversation history.
- Define a generateResponse function that sends a request to the completion API; if the request times out or an error occurs, handle it accordingly.
- Call generateResponse from within the messageCreate event listener.
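The per-user bookkeeping described above can be sketched roughly as follows. The names here (`conversations`, `setBusy`, `isAnyConversationBusy`, `appendMessage`) are illustrative, not the bot's actual identifiers:

```javascript
// Sketch of the busy-state and history bookkeeping the bot performs.
// All names are illustrative; see llamabot.js for the real implementation.
const conversations = new Map();

function getConversation(userId) {
  if (!conversations.has(userId)) {
    conversations.set(userId, { messages: [], busy: false });
  }
  return conversations.get(userId);
}

// Mark a user's conversation as busy while a response is being generated.
function setBusy(userId, busy) {
  getConversation(userId).busy = busy;
}

// The bot refuses new messages while any conversation is in flight.
function isAnyConversationBusy() {
  for (const conv of conversations.values()) {
    if (conv.busy) return true;
  }
  return false;
}

// Append a message to a user's conversation history.
function appendMessage(userId, role, content) {
  getConversation(userId).messages.push({ role, content });
}
```

Deleting a user's Map entry is all it takes to reset their conversation, which is what the bot does when the API returns an empty response.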
## Backend Required
The HTTP Server from https://abetlen.github.io/llama-cpp-python/ is required to use this bot.
llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc).
To install the server package and get started:

```shell
pip install "llama-cpp-python[server]"
export MODEL=./models/your_model.bin
python3 -m llama_cpp.server
```
Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
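Because the server mimics the OpenAI API, the bot's `generateResponse` call boils down to an HTTP POST against it. Here is a minimal sketch of building such a request; the endpoint path follows the OpenAI completion format, and the parameter values are illustrative defaults, not the bot's actual settings:

```javascript
// Build (but do not send) a completion request for the llama-cpp-python
// server's OpenAI-compatible endpoint. Parameter values are illustrative.
function buildCompletionRequest(history) {
  // Flatten the conversation history into a single prompt string.
  const prompt =
    history.map((m) => `${m.role}: ${m.content}`).join("\n") + "\nassistant:";
  return {
    url: "http://localhost:8000/v1/completions",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        prompt,
        max_tokens: 256,
        temperature: 0.7,
        stop: ["user:"], // stop before the model invents the next user turn
      }),
    },
  };
}
```

At runtime the bot would hand `url` and `options` to node-fetch, typically with an AbortController so a hung request can time out gracefully.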
## Static Usage

- Install dependencies:

  ```shell
  npm i
  ```

- Create a `.env` file:

  ```shell
  cp default.env .env
  ```

- Edit `.env` to fit your needs.

- Go to https://discord.com/developers/applications and enable Privileged Intents for your bot.

- Run the bot:

  ```shell
  node llamabot.js
  ```
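For reference, a finished `.env` ends up looking something like the fragment below. The variable names shown here are illustrative guesses, not the file's actual keys; `default.env` in the repo is authoritative:

```shell
# Illustrative only — see default.env for the real variable names.
THE_TOKEN=your-discord-bot-token
ROOT_URL=http://localhost:8000
```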
## Docker Compose

This starts the backend API and the Discord bot together as a stack.

- Clone the repository:

  ```shell
  git clone https://git.ssh.surf/snxraven/llama-cpp-python-djs-bot.git
  ```

- Create a `.env` file:

  ```shell
  cp default.env .env
  ```

- Set `DATA_DIR` in `.env` to the exact location of your model files.

- Edit `MODEL` in `docker-compose.yml` to ensure the correct model bin is set.

- Start the stack:

  ```shell
  docker compose up -d
  ```
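The two settings above fit into the compose file roughly like this. Service names and container paths here are illustrative; the repo's `docker-compose.yml` is authoritative:

```yaml
# Illustrative fragment — see the repo's docker-compose.yml for the real file.
services:
  api:
    environment:
      - MODEL=/data/your_model.bin   # model path inside the container
    volumes:
      - ${DATA_DIR}:/data            # DATA_DIR is read from .env
```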
Want to make this better? Issue a pull request!