update readme
README.md
@@ -32,6 +32,19 @@ Define a generateResponse function that sends a request to the GPT-3 API to gene
Call the generateResponse function within the messageCreate event listener function.
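A minimal sketch of that wiring is shown below, assuming discord.js v14 and a `generateResponse(prompt)` helper defined elsewhere in the bot (for example, as sketched in the Backend section); the names and structure are illustrative rather than this repository's exact code.

```js
// Sketch only: assumes discord.js v14 and a generateResponse() helper defined elsewhere.
const { Client, GatewayIntentBits } = require('discord.js');

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

client.on('messageCreate', async (message) => {
  if (message.author.bot) return; // ignore messages from bots, including this one

  // Forward the user's message to the completion backend and reply with the result.
  const reply = await generateResponse(message.content);
  await message.reply(reply);
});

client.login(process.env.DISCORD_TOKEN);
```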
# Backend REQUIRED
The HTTP Server from https://abetlen.github.io/llama-cpp-python/ is required to use this bot.
llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc).
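As a rough illustration of that compatibility, the sketch below implements a `generateResponse` helper with the official `openai` npm package (v4) pointed at the local server described next; whether this bot actually uses that library is an assumption, and the model name is only a placeholder.

```js
// Sketch only: assumes the openai npm package (v4) and a llama-cpp-python
// server already running on localhost:8000.
const OpenAI = require('openai');

const openai = new OpenAI({
  baseURL: 'http://localhost:8000/v1', // llama-cpp-python's OpenAI-compatible routes
  apiKey: 'sk-no-key-required',        // the local server does not require a real key by default
});

async function generateResponse(prompt) {
  const completion = await openai.chat.completions.create({
    model: 'local-model', // placeholder; the server answers with whatever model it was started with
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0].message.content;
}

module.exports = { generateResponse };
```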
To install the server package and get started:
```bash
pip install llama-cpp-python[server]
export MODEL=./models/your_model.bin   # path to a llama.cpp-compatible model file
python3 -m llama_cpp.server
```
Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
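Once the server is up, a quick way to confirm it is reachable from Node is to hit one of the OpenAI-style routes, for example the model list (assuming Node 18+ with built-in fetch):

```js
// Smoke test: list the model(s) the local server was started with.
fetch('http://localhost:8000/v1/models')
  .then((res) => res.json())
  .then((models) => console.log(models))
  .catch((err) => console.error('server not reachable:', err));
```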
# Usage
1) Run ```npm i``` to install the bot's dependencies.