
# llama-cpp-python-djs-bot

THIS CODE IS MEANT TO BE SELF-HOSTED USING THE LIBRARY: https://abetlen.github.io/llama-cpp-python/

## Description

This code is for a Discord bot that uses a self-hosted large language model, served through llama-cpp-python's OpenAI-compatible API, to generate responses to user messages. It listens for messages in two specified Discord channels; when a user sends a message, the bot appends it to that user's conversation history and sends the history to the completion API to generate a response, which is then posted back in the same channel. The bot uses the discord.js library to interact with the Discord API and the node-fetch library to make HTTP requests to the model server.

Here is a summary of the main parts of the code:

- Import the required modules and load environment variables with dotenv.
- Create a new `Client` instance and set its intents and partials.
- Define the two channel IDs the bot listens to.
- Create a `Map` to store ongoing conversations with users.
- Define helper functions to update the bot's presence status, check whether any conversation is busy, and mark a conversation as busy or free.
- On the `ready` event, update the bot's presence status.
- On the `messageCreate` event, respond to messages sent in the specified channels.
- When a message is received, check whether any conversation is busy. If so, delete the message and send a busy response to the user.
- If no conversation is busy, append the user's message to the conversation history and send it to the model server to generate a response.
- If the response is not empty, send it back to the user in the same channel. If it is empty, send a reset message and delete that user's conversation history.
- Define a `generateResponse` function that sends the completion request to the server and handles timeouts and errors.
- Call `generateResponse` from within the `messageCreate` event listener.
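The conversation bookkeeping described above can be sketched as follows. This is a minimal sketch, not the bot's exact code; the names `conversations`, `getConversation`, `isAnyConversationBusy`, and `setBusy` are illustrative, not necessarily the identifiers used in llamabot.js:

```javascript
// Per-user conversation state, keyed by Discord user ID.
const conversations = new Map();

// Fetch (or lazily create) the conversation state for a user.
function getConversation(userId) {
  if (!conversations.has(userId)) {
    conversations.set(userId, { messages: [], busy: false });
  }
  return conversations.get(userId);
}

// The bot handles one request at a time: if any conversation is
// marked busy, new messages get a "busy" reply instead.
function isAnyConversationBusy() {
  for (const conv of conversations.values()) {
    if (conv.busy) return true;
  }
  return false;
}

// Mark a user's conversation busy while a response is being generated.
function setBusy(userId, busy) {
  getConversation(userId).busy = busy;
}
```

In the `messageCreate` handler, the bot would call `isAnyConversationBusy()` before generating, set the conversation busy for the duration of the API call, and free it again when the response (or an error) comes back.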

## Backend required

The HTTP server from https://abetlen.github.io/llama-cpp-python/ is required to use this bot.

llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This lets you use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).

To install the server package and get started:

```shell
pip install llama-cpp-python[server]
export MODEL=./models/your_model.bin
python3 -m llama_cpp.server
```

Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
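From the bot's side, generating a reply is an HTTP POST to the server's OpenAI-style completions endpoint. A minimal sketch (the `buildPrompt` helper and its `User:`/`Assistant:` formatting are illustrative assumptions, not the repo's exact code; `fetch` here is Node 18+'s global fetch, which node-fetch mirrors):

```javascript
// Join the stored history into one prompt string for the model.
// Illustrative helper -- the real bot may format turns differently.
function buildPrompt(history) {
  return history.join("\n") + "\nAssistant:";
}

// POST the prompt to the llama-cpp-python server and return the
// generated text (OpenAI completions response shape).
async function generateResponse(history) {
  const response = await fetch("http://localhost:8000/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: buildPrompt(history),
      max_tokens: 256,
      stop: ["User:"], // stop before the model writes the next user turn
    }),
  });
  const data = await response.json();
  return data.choices[0].text.trim();
}
```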

## Usage

1. Install the dependencies: `npm i`
2. Create a `.env` file: `cp default.env .env`
3. Edit `.env` for your needs
4. Go to https://discord.com/developers/applications and enable the Privileged Gateway Intents for your bot.
5. Run the bot: `node llamabot.js`
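A `.env` might look something like the following. This is a hypothetical example: the variable names are placeholders, and `default.env` in the repo is the authoritative template for what the bot actually reads.

```ini
# Hypothetical example -- copy default.env and check it for the
# actual variable names this bot expects.
DISCORD_TOKEN=your-discord-bot-token
CHANNEL_IDS=111111111111111111,222222222222222222
```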

Want to make this better? Issue a pull request!