update readme

Raven Scott 2023-04-11 23:46:58 +02:00
parent 1a11d281ec
commit a6117c729c
2 changed files with 16 additions and 3 deletions

@ -32,6 +32,19 @@ Define a generateResponse function that sends a request to the GPT-3 API to gene
Call the generateResponse function within the messageCreate event listener function.
# Backend REQUIRED
The HTTP Server from https://abetlen.github.io/llama-cpp-python/ is required to use this bot.
llama-cpp-python offers a web server that aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).
To install the server package and get started:
pip install llama-cpp-python[server]
export MODEL=./models/your_model.bin
python3 -m llama_cpp.server
Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
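Because the server is meant as a drop-in stand-in for the OpenAI API, you can sanity-check it from Node before wiring up the bot. The snippet below is a minimal sketch only: it assumes the server is running on the default port 8000, exposes the OpenAI-style `/v1/chat/completions` route, and that Node 18+ is available for the global `fetch`.
```js
// Minimal sketch: query the local llama-cpp-python server through its
// OpenAI-compatible chat completions route (assumed default: port 8000).
(async () => {
  const response = await fetch('http://localhost:8000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
      max_tokens: 64
    })
  });
  const data = await response.json();
  // The reply uses the same shape an OpenAI response would.
  console.log(data.choices[0].message.content);
})();
```
If the server is up, the reply arrives in the familiar `choices[0].message.content` shape, which is what lets an OpenAI-compatible client switch to this backend without code changes.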
# Usage
1) Run ```npm i``` to install dependencies

@ -111,15 +111,15 @@ client.on('messageCreate', async (message) => {
if (conversation.messages.length === 0) {
conversation.messages.push({
role: 'user',
- content: `You are rAi, the most helpful writing AI, you code, write and provide information without any mistakes.`
+ content: `Your name is rAi, you code, write and provide any information without any mistakes.`
});
conversation.messages.push({
role: 'user',
- content: `My name is ${message.author.username}. \nIf you address me, tag me using @${message.author.username}`
+ content: `My name is ${message.author.username}.`
});
conversation.messages.push({
role: 'assistant',
- content: `Hello, ${message.author.username}, I am here to answer any question you have, how may I help you?`
+ content: `Hello, ${message.author.username}, how may I help you?`
});
}
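The seeded conversation.messages array above is what eventually gets forwarded to the backend. As an illustrative sketch only (the generateResponse name comes from the README steps above; the endpoint, port, and response shape are the llama-cpp-python defaults assumed earlier, not the bot's actual implementation), the forwarding step could look like this:
```js
// Illustrative sketch: forward the accumulated conversation history to the
// local llama-cpp-python server and return the assistant's reply text.
// Assumes the OpenAI-style chat completions route on the default port 8000.
async function generateResponse(conversation) {
  const response = await fetch('http://localhost:8000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: conversation.messages })
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```
The returned text can then be pushed back onto conversation.messages with role 'assistant' so the next turn carries the full history.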