<!-- lead -->
# Understanding Discord-Linux: A Comprehensive Deep Dive into Container Generation

Discord-Linux is a powerful tool that leverages Discord's chat interface to generate and manage Linux containers. The platform lets users create Linux-based environments directly from a Discord server, simplifying container management for developers, hobbyists, and system administrators alike. By combining Docker, MySQL, and Discord.js, Discord-Linux automates container creation, resource allocation, and user management, all while integrating seamlessly into a peer-to-peer system.

This post explores the intricacies of Discord-Linux, diving deep into its container generation process, resource management, networking, and the extensive security measures it employs.

## How Discord-Linux Works

At its core, Discord-Linux integrates Docker's containerization technology with the Discord.js bot framework. This combination enables users to issue commands within a Discord server to create, manage, and interact with Linux containers. These containers run a variety of operating systems, such as Ubuntu, Debian, Arch, and Alpine, depending on the user's choice.

When a user issues a command, the bot performs multiple background tasks: validating the user, allocating resources, configuring the container's network, and provisioning it with SSH access. This happens almost instantaneously, letting users spin up Linux environments without dealing with the complexities of cloud or server-based container management.

The primary languages and tools behind Discord-Linux include:

- **Discord.js**: A Node.js module that interacts with the Discord API, enabling bot functionality.
- **Docker**: Containerization engine used to create isolated Linux environments.
- **MySQL**: Stores user data and tracks container assignments.
- **Node.js**: The runtime powering the bot and the container creation scripts.


## User Authentication and Verification

Before a container can be generated, Discord-Linux performs several key checks to verify the legitimacy of the user. This keeps the system secure and ensures that only authorized users can create containers.

### Account Age Verification

One of the first steps is checking the age of the user's Discord account. By extracting the Unix timestamp embedded in the user's Discord ID, the system determines whether the account is older than 30 days, preventing new or potentially malicious accounts from exploiting the service.

Here's the process of converting a Discord ID into a Unix timestamp:

```javascript
const moment = require('moment');

// Discord IDs (snowflakes) carry a millisecond timestamp in their upper 42 bits.
// Shifting right by 22 bits drops the worker/process/sequence bits; adding the
// Discord epoch yields a Unix time in milliseconds.
function convertIDtoUnix(id) {
  return Number(BigInt(id) >> 22n) + 1420070400000; // Discord epoch: 2015-01-01
}

function convert(id) {
  return new Promise((resolve, reject) => {
    const unix = convertIDtoUnix(id.toString());
    const timestamp = moment.unix(unix / 1000);
    const isOlderThan30Days = timestamp.isBefore(moment().subtract(30, 'days'));

    resolve({
      formatted_date: timestamp.format('YYYY-MM-DD, HH:mm:ss'),
      is_older_than_30_days: isOlderThan30Days
    });
  });
}
```
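
To see the arithmetic concretely, here is a worked example using a fabricated ID whose timestamp bits are easy to verify by hand (this is not a real Discord account):

```javascript
// Same conversion as above, without the moment dependency.
function convertIDtoUnix(id) {
  return Number(BigInt(id) >> 22n) + 1420070400000; // Discord epoch
}

// Build a fake snowflake whose timestamp field is exactly 1e12 ms:
// (1e12 << 22) as a BigInt, rendered as the decimal string a real ID would be.
const fakeID = (1000000000000n << 22n).toString();

console.log(convertIDtoUnix(fakeID)); // 1420070400000 + 1e12 = 2420070400000
```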

### Banlist Validation

Discord-Linux also checks a local banlist before generating a container. If the user appears in the banlist, the request is immediately denied and the user is notified of the restriction.

```javascript
const fs = require('fs');

// Deny service to any user whose ID appears in the local banlist file.
try {
  const data = fs.readFileSync(process.env.BANLIST, 'utf8');
  if (data.includes(interaction.user.id)) {
    return interaction.reply("You have been banned from using this service.");
  }
} catch (err) {
  console.error(err);
}
```

This banlist validation ensures that known offenders cannot access container resources, protecting the system from abuse.

## Database and User Management

Discord-Linux relies on a MySQL database to track user information and manage container assignments. Each user is registered in the database when they first request a container, and their relevant data is stored, such as their Discord ID, container expiration date, and upgrade status.

When a user issues a command to create a container, Discord-Linux checks the database to see whether the user already exists. New users are registered and their container is created; existing users have their container's expiration date updated or the necessary resources provisioned.

### MySQL Integration

Here's an example of how the system queries the database to check for an existing user and registers a new one if necessary:

```javascript
function registerUser(discordID, containerName, createdDate, expireDate) {
  return new Promise((resolve, reject) => {
    // Placeholders instead of string interpolation guard against SQL injection.
    connection.query(
      'INSERT INTO users (discord_id, container_name, created_at, expire_at) VALUES (?, ?, ?, ?)',
      [discordID, containerName, createdDate, expireDate],
      function (err, results) {
        if (err) return reject(err);
        resolve(results);
      }
    );
  });
}

function checkIfUserExists(discordID) {
  return new Promise((resolve, reject) => {
    connection.query(
      'SELECT * FROM users WHERE discord_id = ?',
      [discordID],
      function (err, results) {
        if (err) return reject(err);
        resolve(results.length > 0);
      }
    );
  });
}
```

This integration with MySQL allows Discord-Linux to track users and manage containers efficiently, making it easy to scale the platform to support more users and containers.

#### The docker_exec.js script (used in some cases within our code)

The ability to execute commands inside Docker containers programmatically is extremely useful when managing containers dynamically. The following script leverages the `cmd-promise` module and the Docker CLI to execute commands inside a Docker container while handling both output and errors. Let's walk through the script to see how it works.

```javascript
const cmd = require('cmd-promise');

// argv layout: [node, script, containerID, workingDir, ...command]
const args = process.argv;
const code = args.slice(4).join(' ');

// Build the command string once so the error handler can strip it back out.
const dockerCommand = `docker exec -i ${args[2]} /bin/bash -c 'cd ${args[3]} && ${code}'`;

cmd(dockerCommand)
  .then(out => {
    if (!out.stdout) {
      console.log('Done');
      return; // nothing else to print
    }
    console.log(out.stdout);
  })
  .catch(err => {
    // Remove the echoed command from the error message, leaving only the shell's output.
    console.log(err.message.toString().replace(`Command failed: ${dockerCommand}`, ''));
  });
```

This simple yet powerful Node.js script facilitates the execution of commands inside Docker containers. Its main purpose is to automate running shell commands inside a container's environment, making it a vital tool for platform administrators, especially when automating containerized environments.

### Importing the `cmd-promise` Module

The first step is to import the `cmd-promise` module, a utility for executing shell commands. It provides a promise-based interface that helps manage asynchronous operations.

```javascript
const cmd = require('cmd-promise')
```

By returning promises, the module lets us wait for a command to complete and then handle the result or any errors that occur.

### Parsing Command-Line Arguments

The script captures the command-line arguments passed to it when executed; the `process.argv` array holds these arguments. The script expects a few of them: the Docker container ID, the directory to change into, and the command to execute.

```javascript
var args = process.argv;
let code = args.slice(4).join(" ");
```

- **`args[2]`**: the Docker container ID.
- **`args[3]`**: the directory inside the Docker container.
- **`args.slice(4)`**: the command to be executed inside the container, joined into a single string.
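
To make the argument positions concrete, here is a small simulation of the parsing; the container ID and paths below are made up for illustration:

```javascript
// Simulated argv for: node docker_exec.js abc123 /root ls -la /tmp
const argv = ['/usr/bin/node', 'docker_exec.js', 'abc123', '/root', 'ls', '-la', '/tmp'];

const containerID = argv[2];              // 'abc123'
const workDir = argv[3];                  // '/root'
const command = argv.slice(4).join(' ');  // 'ls -la /tmp'

console.log(containerID, workDir, command);
```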

### Building and Executing the Docker Command

Next, the script constructs the command that will be executed inside the container. The `docker exec` command runs a command inside an already-running container.

```javascript
cmd(`docker exec -i ${args[2]} /bin/bash -c 'cd ${args[3]} && ${code}'`)
```

Here's what the constructed command looks like:

```bash
docker exec -i <container_id> /bin/bash -c 'cd <directory> && <command>'
```

This command does the following:
- **`docker exec -i`**: runs the command inside the specified Docker container, keeping stdin open.
- **`/bin/bash -c`**: runs the command string in a Bash shell within the container.
- **`cd <directory>`**: changes the working directory to the one specified by the user.
- **`&& <command>`**: executes the user's command once the directory change succeeds.

### Handling Command Output

Once the command has executed, the script handles the output using promises. It checks whether the command produced any output and logs it to the console; if there is no output, it simply prints "Done".

```javascript
.then(out => {
  if (!out.stdout) {
    console.log("Done");
    return; // no output to print
  }
  console.log(out.stdout);
})
```

If the command runs successfully but produces no output, the script logs "Done" to indicate completion. Otherwise, it logs the command's output.

### Handling Errors Gracefully

In the event of an error, the script catches it and displays a cleaned-up error message to the user. This is particularly helpful when dealing with Docker or shell command failures.

```javascript
.catch(err => {
  // Strip the echoed "Command failed: <command>" prefix, leaving only the shell's error text.
  console.log(err.message.toString().replace(`Command failed: docker exec -i ${args[2]} /bin/bash -c 'cd ${args[3]} && ${code}'`, ''));
})
```

The error handler strips the echoed command from the error message, showing only the relevant parts. This improves the readability of error logs, making it easier to diagnose problems.

This script is a good example of how to interact with Docker containers programmatically from Node.js. It dynamically executes commands inside a container by building a `docker exec` command, handling output, and catching errors along the way. Whether it's navigating directories, running system commands, or processing results, this approach simplifies the management of containerized environments in automated workflows.

# Container Creation Workflow

Once the user has passed the verification process, the next step is the actual creation of the container. This involves a series of steps to allocate resources, set up the network, and provision the container for SSH access.

### The Generation Command

The container generation process is initiated by a command that calls a script responsible for deploying the container using Docker. Here's an example of the generation command:

```javascript
cmd(`node /home/scripts/gen_multi.js ${sshUserID} ubuntu ${interaction.user.id}`).then(out => {
  console.log('Container generated:', out);
});
```

The `gen_multi.js` script handles all aspects of container deployment, including setting up the operating system, assigning network resources, and provisioning SSH access. The operating system is selected based on user input, and various distributions are supported, such as Ubuntu, Debian, Arch, Alpine, and more.

### Understanding the gen_multi.js script

This script automates the deployment and management of Docker containers. It leverages several key Node.js modules and libraries to streamline the process, making it well suited to handling various Linux distributions and container orchestration. Let's walk through how it works, with code examples to illustrate each step.

#### Key Libraries and Dependencies

At the heart of the script are some critical Node.js libraries. The `unirest` library makes HTTP requests, while `dockerode` provides a Docker client for managing containers. The script also uses `cmd-promise` to execute shell commands in a promise-based manner. For logging, `log-to-file` is used, and the script integrates with Discord via `discord.js` for sending logs and status updates.

Additionally, the script uses MySQL (`mysql2`) to interact with a database, allowing it to store and retrieve user-specific data, such as user identifiers and resource configurations.

#### Handling Random Network Selection

The script uses a helper function, `randomIntFromInterval`, to generate random integers within a specified range. This randomness plays a critical role in selecting a network from a predefined list (`networkNames`). These random values help dynamically configure Docker containers and ensure that no two containers accidentally share the same ports or networks.

```javascript
function randomIntFromInterval(min, max) {
  return Math.floor(Math.random() * (max - min + 1) + min);
}

let networkNames = ["ct1", "ct2", "ct3", "ct4", ...]; // A predefined list of networks
let randNetwork = networkNames[Math.floor(Math.random() * networkNames.length)];
```

#### MySQL Connection Setup

A MySQL connection is established using the `mysql2` library, with the host, user, password, and database credentials predefined. The database interaction allows the script to query user data, such as the user-specific configuration values needed for container setup.

```javascript
// Parameterised lookup of the user record keyed by the uid passed on the command line.
connection.query(
  "SELECT * FROM users WHERE uid = ?",
  [myArgs[4]],
  function (err, results) {
    if (err) return console.error(err);
    if (results.length === 0) {
      console.log("User does not exist");
    } else {
      console.log(results);
    }
  }
);
```

#### Detecting IP and Caching Network Configurations

The script runs a shell command via `cmd-promise` to detect an available IP address on the randomly chosen network. Once the IP is detected, a network cache file is written with `jsonfile`, saving the chosen network configuration for later use.

```javascript
cmd(`sh /home/scripts/detectIP.sh ${randNetwork}`).then(out => {
  let IPAddress = out.stdout.replace("\n", "");

  const jsonfile = require('jsonfile');
  const file = '/home/cache/netcache/' + myArgs[4] + ".network";
  const obj = { id: randNetwork };

  jsonfile.writeFile(file, obj).then(() => {
    console.log('Network Cache Write complete');
  });
});
```

#### The detectIP script

```bash
#!/bin/bash

# Check if the network name is provided as a parameter
if [ $# -ne 1 ]; then
  echo "Usage: $0 <network_name>"
  exit 1
fi

# Extract the network name from the parameter
network_name="$1"

# Get the subnet of the Docker network
subnet=$(docker network inspect "$network_name" -f '{{(index .IPAM.Config 0).Subnet}}')

if [ -z "$subnet" ]; then
  echo "Network '$network_name' not found or doesn't have an assigned subnet."
  exit 1
fi

# Extract just the subnet prefix (e.g., 172.18.0)
subnet_prefix="${subnet%.*}"

# Loop through IP addresses in the subnet to find an available one
for ((i=2; i<=254; i++)); do
  ip="${subnet_prefix}.$i"

  # Check if the IP address is already in use by a running container
  if ! ping -c 1 "$ip" &> /dev/null; then
    echo "$ip"
    exit 0
  fi
done

echo "No available IP addresses found in the subnet of network '$network_name'."
exit 1
```

This script first checks that exactly one argument (a network name) is provided; if not, it prints usage instructions and exits. It then retrieves the subnet of the specified Docker network, verifies that the network exists and has an assigned subnet, and extracts the subnet prefix (e.g., 172.18.0). Finally, it loops through candidate IP addresses within the subnet, testing each with a ping. The first address that does not respond is printed as available; if none is found, the script reports the failure and exits.

#### Operating System Selection

The script supports multiple Linux distributions, such as Ubuntu, Debian, CentOS, and more. Depending on the command-line arguments passed, it selects the appropriate operating system for the Docker container. Each OS is associated with specific resource settings (e.g., memory, CPUs) and restart policies, all configurable based on user input.

For example, if Ubuntu is chosen, the script sets the Docker image to `ssh:ubuntu`, assigns memory and CPU limits, and specifies that the container should always restart (`--restart always`). Other OS options, like Alpine or CentOS, follow similar logic with different configurations.

```javascript
if (operatingSystem === "ubuntu") {
  osChosen = "ssh:ubuntu";
  restartVar = "--restart always";
  memoryVar = "--memory=1024m --memory-swap=1080m";
  cpusVar = "--cpus=2";
  specialPorts = "";
} else if (operatingSystem === "alpine") {
  osChosen = "ssh:alpine";
  restartVar = "--restart always";
  memoryVar = "--memory=1024m --memory-swap=1080m";
  cpusVar = "--cpus=2";
  specialPorts = "";
}
```

#### Container Creation and Docker Run

Once the OS is selected and configured, the script launches the container by building a `docker run` command and executing it via `cmd-promise`. The command carries a set of options, including DNS configuration, memory limits, CPU limits, and custom network settings. The script also checks whether the user has a trial key; if so, it adds `--rm` so the container is automatically removed after use.

```javascript
cmd(
  `docker run --dns 1.2.3.4 --ulimit nofile=10000:50000 ` +
  `--mount type=bind,source=/etc/hosts,target=/etc/hosts,readonly ` +
  `-e PATH=/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/nodejs/bin:/usr/local/go/bin:/root/.cargo/bin:/root/.go/bin ` +
  `--ip ${IPAddress} -td ${freeStorage} ${specialStartup} --network=${randNetwork} ` +
  `-e CONTAINER_NAME=${myArgs[4]} ${trialVar} ${memoryVar} ${cpusVar} ${restartVar} ` +
  `--hostname ${myArgs[4]} --name ${myArgs[4]} ${osChosen}`
).then(out => {
  console.log("Container Created Successfully");
})
.catch(err => {
  console.log('Error:', err);
});
```

### Key Elements of the Command

`docker run`

This is the core Docker command that starts a new container. It is followed by various options that tailor how the container runs.

`--dns 1.2.3.4`

Sets the container's DNS server to `1.2.3.4`, forcing the container to use a specific DNS server instead of the system default.

`--ulimit nofile=10000:50000`

Controls the number of open file descriptors allowed in the container: a soft limit of 10,000 and a hard limit of 50,000. This is particularly useful for containers that need many file handles, such as servers with heavy network or file I/O activity.

`--mount type=bind,source=/etc/hosts,target=/etc/hosts,readonly`

Bind-mounts the host machine's `/etc/hosts` file into the container's `/etc/hosts` path as read-only. The container sees the same host-to-IP mappings as the host system, ensuring consistent name resolution inside the container.

`-e PATH=/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/nodejs/bin:/usr/local/go/bin:/root/.cargo/bin:/root/.go/bin`

The `-e` flag sets environment variables inside the container. Here it extends the `PATH` variable to include the common directories where executables live, including Node.js, Go, and system binaries.

`--ip ${IPAddress}`

Assigns a specific IP address to the container. The `${IPAddress}` variable is set earlier in the script by the `detectIP.sh` helper.

`-td`

Two flags combined:
- `-t` allocates a pseudo-TTY (terminal), which is useful for interactive applications.
- `-d` runs the container in detached mode, meaning it runs in the background.

`${freeStorage} ${specialStartup}`

These variables inject additional storage and startup options into the command:
- `freeStorage` defines storage-related options such as volume mounting or storage limits.
- `specialStartup` contains commands or environment variables applied when the container starts.

`--network=${randNetwork}`

Specifies the Docker network the container joins. The `randNetwork` variable holds a network name chosen at random from the predefined list.

`-e CONTAINER_NAME=${myArgs[4]}`

Sets a `CONTAINER_NAME` environment variable inside the container, taken from the command-line argument `myArgs[4]`. This is useful for tracking or logging the container's name during execution.

`${trialVar} ${memoryVar} ${cpusVar} ${restartVar}`

These variables define additional container settings:
- `trialVar`: marks trial containers, potentially adding `--rm` to remove the container automatically when it stops.
- `memoryVar`: sets memory limits, such as `--memory=1024m` to cap the container at 1024 MB of RAM.
- `cpusVar`: sets CPU limits, for example `--cpus=2` to restrict the container to 2 CPU cores.
- `restartVar`: adds a restart policy, such as `--restart always`, so the container restarts automatically if it crashes or is stopped.

`--hostname ${myArgs[4]} --name ${myArgs[4]}`

Sets both the hostname and the container name from `myArgs[4]`. The hostname is the container's internal network name, while the name is how the container is referenced in Docker.

`${osChosen}`

Finally, `${osChosen}` specifies which Docker image to use when creating the container, set earlier based on the chosen operating system.
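
To see how these pieces come together, here is a trimmed-down assembly of the command with hypothetical values substituted in; the container ID, IP, and network below are invented for illustration and only a subset of the flags is shown:

```javascript
// Hypothetical values standing in for the script's runtime variables.
const myArg = 'SSH12345678901234';
const IPAddress = '172.18.0.7';
const randNetwork = 'ct2';
const osChosen = 'ssh:ubuntu';
const memoryVar = '--memory=1024m --memory-swap=1080m';
const cpusVar = '--cpus=2';
const restartVar = '--restart always';

const command =
  `docker run --dns 1.2.3.4 --ip ${IPAddress} -td ` +
  `--network=${randNetwork} -e CONTAINER_NAME=${myArg} ` +
  `${memoryVar} ${cpusVar} ${restartVar} ` +
  `--hostname ${myArg} --name ${myArg} ${osChosen}`;

console.log(command);
```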

#### Handling Edge Cases and Custom Roles

The script has specific conditions for users with custom roles. For example, users with certain roles are allocated more resources (CPUs, memory), while trial users have their resources limited.

Additionally, there are special conditions for certain user IDs, where specific storage or CPU configurations are assigned, providing flexibility based on user requirements.

```javascript
if (myArgs[4] === "SSH42113405732790") {
  cpusVar = "--cpus=8";
  memoryVar = "--memory=8192m";
  freeStorage = "--storage-opt size=50G";
}
```

#### Logging and Finalizing the Order

Throughout the process, the script logs important events, such as when a new order is received or a container is successfully deployed. Logs are written both to the console and to a log file (`orders_generated.log`), so administrators can monitor the deployment process and track potential issues.

```javascript
log('New Order Received--', 'orders_generated.log');
log('Container Created Successfully', 'orders_generated.log');
```

Once the container is deployed, the script updates the user's network ID in the MySQL database, ensuring the network configuration is properly tracked. Finally, it logs the successful completion of the order and exits.

```javascript
async function updateNetworkID(netID, sshSurfID) {
  connection.query(
    "UPDATE users SET netid = ? WHERE uid = ?",
    [netID, sshSurfID],
    function (err, results) {
      if (err) return console.error(err);
      console.log(results);
    }
  );
}
```

This script is a highly customizable, automated solution for managing Docker containers, dynamically handling various Linux distributions and custom resource allocations based on user input. Random network and port assignment keeps each container's configuration distinct, while the MySQL integration adds user management and per-user configuration. Together, these pieces handle container orchestration efficiently, making the script a robust tool for system administrators.

### Container Configuration

The system dynamically configures the container's resources based on the user's selection and available system resources. It adjusts memory, CPU allocation, and storage space to ensure the container runs optimally.

Here's a simplified configuration process:

```javascript
if (operatingSystem == "ubuntu") {
  osChosen = "ssh:ubuntu";
  memoryVar = "--memory=1024m";
  cpusVar = "--cpus=2";
}
```

Additional features such as restart policies, special port configurations, and network isolation are applied to the container, ensuring it meets security and performance standards.

## Container Networking

Networking in Discord-Linux is a critical component, as each container receives a unique network configuration to isolate it from others. The container is assigned a specific IP address and port mappings based on its network ID. A custom script detects a free IP address and assigns it to the container:

```javascript
cmd(`sh /home/scripts/detectIP.sh ${randNetwork}`).then(out => {
  let IPAddress = out.stdout.replace("\n", "");
});
```

Network isolation ensures that containers are secure and that traffic is properly routed within the peer-to-peer environment.

### Peer-to-Peer Networking

Discord-Linux operates on a peer-to-peer (P2P) network where each container can communicate with others through specific gateways. This setup allows containers to share resources or interact with each other securely, without relying on a central server for data exchange. The decentralized nature of the P2P network ensures scalability and robustness.

## Managing Resources

Discord-Linux employs various resource management techniques to ensure containers are allocated proper resources without overwhelming the host system.

### Memory and CPU Allocation

When generating a container, the system allocates CPU and memory based on the type of container requested. Lightweight containers such as Alpine may receive fewer resources, while more resource-intensive containers like Ubuntu are given more memory and CPU cores.

```javascript
if (operatingSystem == "ubuntu") {
  memoryVar = "--memory=1024m";
  cpusVar = "--cpus=2";
}
```

### Storage Management

Storage is dynamically allocated based on the type of container. Free-tier users may be allocated less storage, while users with an active subscription or license get access to larger storage volumes. Discord-Linux monitors storage usage and ensures containers don't exceed their allotted limits.
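
As a rough sketch of how tiered storage limits might be assigned, using the `--storage-opt size=` flag seen earlier; the tier names and sizes here are illustrative assumptions, not the platform's actual values:

```javascript
// Map a user's tier to a Docker --storage-opt flag. Sizes are hypothetical examples.
function storageFlagForTier(tier) {
  switch (tier) {
    case 'licensed':
      return '--storage-opt size=50G';
    case 'subscriber':
      return '--storage-opt size=20G';
    default:
      return '--storage-opt size=5G'; // free tier
  }
}

console.log(storageFlagForTier('subscriber'));
```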

## Container Extension and Expiration

Containers in Discord-Linux have a limited lifespan, typically 7 days. Users can extend a container's life with the `/extend` command, which resets the expiration date. If a container is not extended before it expires, it is automatically deleted to free up resources.

### Automatic Cleanup

The system periodically checks for expired containers and removes them so resources aren't wasted on unused containers.

Here's a snippet of how the system determines whether a container has expired:

```javascript
function isDateBeforeToday(date) {
  return new Date(date.toDateString()) < new Date(new Date().toDateString());
}
```
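
A cleanup sweep built on this check might look roughly like the following; the row shape and uids are assumptions for illustration, based on the database snippets shown earlier:

```javascript
// Compare calendar dates only, ignoring the time of day.
function isDateBeforeToday(date) {
  return new Date(date.toDateString()) < new Date(new Date().toDateString());
}

// Example rows as they might come back from the users table.
const rows = [
  { uid: 'SSH11111111111111', expire_at: new Date('2020-01-01') },
  { uid: 'SSH22222222222222', expire_at: new Date(Date.now() + 86400000) } // tomorrow
];

// Each expired uid would then be passed to `docker stop` / `docker rm`.
const expired = rows.filter(row => isDateBeforeToday(row.expire_at));
console.log(expired.map(row => row.uid));
```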

## SSH and Remote Access

Once a container is generated, the user is given SSH access to interact with it remotely. The system generates a random root password for the container and shares it with the user via a direct message in Discord.

### SSH Provisioning

SSH access is provisioned using Docker's built-in functionality, and the root password is set using a script:

```javascript
// Backticks (template literal) so ${rootPass} is interpolated before the shell runs.
cmd(`echo 'root:${rootPass}' | chpasswd`).then(out => {
  console.log('SSH password set:', out);
});
```

Users can reset their SSH password or update their container's configuration using Discord-Linux commands.

## Command Breakdown

Commands in Discord-Linux are designed to be simple yet powerful. Users interact with the bot using slash commands (e.g., `/generate`, `/extend`), and the bot handles the background processes. Below is an example command for generating a container:

```javascript
module.exports = {
  name: "generate",
  description: "Generates a Container with Ubuntu.",
  run: async (client, interaction) => {
    await interaction.reply("Generating your container...");
    // Generation logic here
  }
};
```

Commands are structured to be intuitive, allowing users to manage their containers without learning complex commands or configurations.

## Container Lifecycle and Automation

The lifecycle of a Discord-Linux container is fully automated. From the moment a user requests a container to its eventual expiration or extension, the system manages every stage:

- **Generation**: The container is created with the user's requested specifications.
- **Management**: Users can extend, modify, or destroy their containers through simple commands.
- **Expiration**: Containers are automatically deleted when they reach their expiration date unless extended.

Automation reduces the need for manual intervention, making Discord-Linux a hands-off solution for container management.

# How Discord-Linux Handles Container Destruction

Managing containers involves more than creating and extending them; it also requires efficient cleanup and resource management. In Discord-Linux, **destroying** a container is initiated by a user command, ensuring that unused containers are properly removed to free up resources. Before destruction happens, though, several layers of verification, user interaction, and resource cleanup take place.

### The Container Destruction Command

The `/destroy` command initiates the removal of a user's container. When a user runs this command, the bot doesn't immediately proceed with the destruction. Instead, it first retrieves the user's container ID and asks for confirmation before performing the deletion. This prevents accidental container destruction and gives users a safety net for their data.

### Container Destruction

Once the user confirms, the bot performs the actual destruction of the container by stopping and removing it through Docker's command-line interface.

```javascript
|
||
cmd('docker stop ' + sshSurfID + " && docker rm " + sshSurfID).then(out => {
|
||
console.log('Container destroyed:', out);
|
||
});
|
||
```
|
||
|
||
The bot uses the container ID (retrieved earlier) to stop and remove the Docker container associated with the user. The combination of `docker stop` and `docker rm` ensures that both the container and its data are completely removed from the system, freeing up resources.
|
||
|
||
### Cleaning Up Network Configuration
|
||
|
||
After the container is destroyed, the system also removes any network configurations associated with the container. This ensures that no leftover data, such as network routes or IP addresses, lingers after the container is removed.
|
||
|
||
```javascript
|
||
try {
|
||
fs.unlinkSync(netConfig);
|
||
console.log("Network Config Removed!");
|
||
} catch (err) {
|
||
console.log("No Config to remove");
|
||
}
|
||
```
|
||
|
||
In this snippet, the bot checks if a network configuration file exists for the container. If it does, the file is deleted, ensuring that the network resources used by the container are properly cleaned up.
|
||
|
||
### Notifying the User
|
||
|
||
Once the container has been successfully destroyed, the bot sends an embedded message to the user to inform them of the completion of the process. The message is displayed in Discord using an `EmbedBuilder`, which provides a visually appealing way to show important information.
|
||
|
||
```javascript
|
||
const embed = new EmbedBuilder()
|
||
.setTitle("🤯 The container was destroyed!")
|
||
.setDescription(`You may generate a new one if you would like using /generate at any time!`)
|
||
.setTimestamp()
|
||
.setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
|
||
|
||
await interaction.editReply({ embeds: [embed] });
|
||
```
|
||
|
||
The message confirms to the user that the container has been successfully destroyed and reminds them that they can generate a new container if needed. This feedback is critical in ensuring the user knows the action has been completed.
|
||
|
||
### Additional Cleanup and Notifications

Beyond the basic destruction process, Discord-Linux goes a step further by notifying external systems and performing additional cleanup. For instance, the bot makes an HTTP request to notify another service that the user’s container has been deleted, ensuring that all traces of the container are removed from the wider network.

```javascript
const request = unirest.delete(`http://non-public-endpoint/${userID}`)
  .headers({ 'password': PASSWORD });

request.end(function (response) {
  if (response.error) {
    console.error('Error:', response.error);
  } else {
    console.log('Response:', response.body);
  }
});
```

This HTTP request ensures that all external systems are updated to reflect the removal of the container, helping maintain consistency across the network.

### Handling Errors

While the bot handles most scenarios smoothly, there’s always a chance that something could go wrong. To prevent unexpected behavior, the code includes error handling to manage cases where the container may not exist or cannot be destroyed. If the container doesn’t exist, for example, the bot notifies the user that there’s no container to destroy.

```javascript
cmd('docker stop ' + sshSurfID + " && docker rm " + sshSurfID).catch(err => {
  if (err.toString().includes("such")) {
    console.log("A container does not exist to destroy");
    interaction.editReply("A container does not currently exist to destroy.");
  }
});
```

This ensures that users aren’t confused if they attempt to destroy a container that no longer exists or was already removed. The bot provides clear feedback in these situations, keeping users informed at every step.

### User Confirmation Process

To avoid accidental deletions, the bot uses a confirmation dialog to prompt the user before proceeding. This prompt comes in the form of a **Discord select menu**, offering users the choice to either proceed with destruction or cancel the action.

```javascript
const row = new ActionRowBuilder()
  .addComponents(
    new StringSelectMenuBuilder()
      .setCustomId(rand)
      .setPlaceholder('To destroy or not destroy?')
      .addOptions([
        {
          label: 'Destroy IT!',
          description: 'Remove the container and ALL of its DATA!',
          value: 'Destroy',
        },
        {
          label: 'No, please do not destroy my precious data.',
          description: 'This will cancel the command',
          value: 'Action Canceled.',
        },
      ]),
  );
```

The menu is dynamically generated, allowing users to make their selection. If the user chooses "Destroy IT!", the bot moves on to the next steps; otherwise, the destruction process is canceled, and no further actions are taken.

This safeguard prevents users from accidentally deleting important containers or data, which is especially useful in environments where multiple users or containers are being managed.

# Executing Shell Commands Using Discord Bot

The platform’s ability to execute commands non-interactively via Discord is a game-changer for container management and remote server operations. This powerful feature lets users run commands, manage files, check system statuses, and more—all from a simple Discord interface, without needing further interaction after issuing a command. We will dive into how the system works, highlighting some special custom-coded commands like `cd`, `pwd`, and `ls`, and how the platform ensures non-interactive command execution.

### Non-Interactive Command Execution Overview

Non-interactive execution means that once a user submits a command through the bot, it completes without asking for additional input. This is incredibly useful when dealing with long-running processes or package installations, where user input (like confirmations) is usually required. The bot takes care of everything, from preparing the command to running it and presenting the result.

The bot executes commands inside Docker containers specific to each user, providing isolated environments. These containers are used to execute commands non-interactively, meaning once you send the command, the bot does the rest.

### Command Handling Flow

The bot's `/x` command allows users to execute commands within their container.

Here’s the general structure of how it works:

```javascript
module.exports = {
  name: "x",
  description: "Execute a command non-interactively.",
  options: [
    {
      name: "cmd",
      description: "Command to Execute",
      required: true,
      type: 3 // STRING type
    }
  ],
  run: async (client, interaction) => {
    let sshSurfID = await getUserContainerID(interaction.user.id);
    let userPWD = await getUserPWD(interaction.user.id);

    if (!sshSurfID) {
      await interaction.editReply("You don't have a container currently. Please generate one using /generate.");
      return;
    }

    let commandToRun = interaction.options.get("cmd").value;

    // Handle special cases (custom commands like cd, pwd, and ls)
    commandToRun = handleCustomCommands(commandToRun, userPWD, sshSurfID, interaction);

    // Run the command in a non-interactive manner within the user's container
    executeCommandInContainer(sshSurfID, userPWD, commandToRun, interaction);
  }
};
```

### Custom Coded Commands: `cd`, `pwd`, `ls`

In addition to executing common system commands like `apt`, `yum`, or `neofetch`, the platform includes custom implementations for commands like `cd`, `pwd`, and `ls`. These are custom coded to work seamlessly with the platform, ensuring non-interactive behavior.

#### `cd` (Change Directory)

The `cd` command allows users to navigate through the file system within their container. Since changing directories is fundamental for any file operation, the platform implements `cd` in a custom way.

The custom `cd` command ensures that the current working directory is updated on the platform, and it intelligently handles edge cases like moving up (`cd ..`), moving to a root directory (`cd /`), or using the home directory shortcut (`cd ~`). This is all handled behind the scenes, and the updated directory is stored in the user’s session for future commands.

Here’s a simplified view of the custom `cd` logic:

```javascript
if (commandToRun.startsWith("cd")) {
  let newDirectory = handleCdCommand(commandToRun, userPWD); // Handles various cases like `..`, `~`, and `/`
  updatePWD(newDirectory, sshSurfID); // Update user's current directory
  interaction.editReply(`Directory changed to: ${newDirectory}`);
}
```

#### `pwd` (Print Working Directory)

The `pwd` command is another custom implementation. Instead of executing the Linux `pwd` command inside the container, the bot retrieves the user's current working directory directly from the platform’s stored session. This ensures faster responses and removes the need for running the command in the container itself.

```javascript
if (commandToRun === "pwd") {
  interaction.editReply(`\`\`\`${userPWD}\`\`\``);
}
```

This custom `pwd` command guarantees that the bot provides accurate feedback on the user's current directory, without needing to execute a process inside the container.

#### `ls` (List Directory Contents)

The `ls` command is handled in a slightly more complex way because it needs to interact with the file system to show directory contents. The platform uses Docker's `exec` feature to run the `ls` command inside the container, while still ensuring non-interactive execution.

```javascript
if (commandToRun.startsWith("ls")) {
  // Execute ls inside the user's container non-interactively
  customerContainer.exec(['/bin/bash', '-c', `cd ${userPWD} && ls -lah`], { stdout: true, stderr: true }, (err, out) => {
    interaction.editReply(`\`\`\`${out.stdout}\`\`\``);
  });
}
```

The result is that `ls` works just like it would in a standard Linux environment, listing directory contents in a user-friendly way.

### Handling Non-Interactive Execution

Non-interactive commands are a core feature of this platform. Commands that typically require user input (like `apt install`) are automatically modified to run without interaction. For example, if a user runs `apt install`, the bot adds the `-y` flag to ensure the process runs without requiring further confirmation:

```javascript
if (commandToRun.startsWith("apt install") || commandToRun.startsWith("yum install")) {
  if (!commandToRun.includes("-y")) {
    commandToRun += " -y"; // Automatically add non-interactive flag
  }
}
```

This customization ensures that no matter what command you issue, it will complete without stopping to ask for confirmation or additional input.

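For Debian-based images, the same idea can be pushed further by setting `DEBIAN_FRONTEND=noninteractive`, which suppresses `apt`'s package-configuration prompts entirely. A sketch combining both techniques (a hypothetical hardening, not the platform's actual code):

```javascript
// Hypothetical: append the -y flag when missing and prefix the environment
// variable that silences Debian package-configuration prompts.
function makeNonInteractive(commandToRun) {
  let cmd = commandToRun;
  if ((cmd.startsWith("apt install") || cmd.startsWith("apt-get install")) && !cmd.includes("-y")) {
    cmd += " -y";
  }
  if (cmd.startsWith("apt")) {
    cmd = "DEBIAN_FRONTEND=noninteractive " + cmd;
  }
  return cmd;
}
```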
### Executing Commands in the Container

Once the command is prepared, the bot uses Docker to execute it inside the user's container. This is done non-interactively, meaning the user doesn’t need to stay engaged for the command to finish.

Here’s how the bot executes commands inside the container:

```javascript
const customerContainer = docker.getContainer(sshSurfID);

customerContainer.exec(['/bin/bash', '-c', `cd ${userPWD} && ${commandToRun}`], { stdout: true, stderr: true }, (err, out) => {
  if (err || !out) {
    interaction.editReply("```Your container either needs to be generated or is not running.```");
    return;
  }

  if (out.inspect.ExitCode !== 0) {
    interaction.editReply(`\`\`\`Error: ${out.stderr}\`\`\``);
  } else {
    interaction.editReply(`\`\`\`${out.stdout}\`\`\``);
  }
});
```

The command is executed within the container, and the result (either success or failure) is sent back to the user in Discord.

### Output Management

Command outputs are returned directly in Discord. However, if the output is too long to fit in a single message, the platform uses a service like `dpaste` to upload the output and return a link to the user:

```javascript
if (out.stdout.length > 2020) {
  fs.writeFile('/tmp/paste', `Command: ${commandToRun}\n${out.stdout}`, err => {
    if (err) console.error(err);
  });
  cmd("sleep 2; cat /tmp/paste | dpaste").then(pasteout => {
    interaction.editReply(`The command output was too large for Discord. Check it here: ${pasteout.stdout}`);
  });
} else {
  interaction.editReply(`\`\`\`${out.stdout}\`\`\``);
}
```

This ensures that even large outputs can be managed efficiently and that users always receive the results of their commands.

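Another option besides a paste service is to split long output across several messages. A sketch (the 1,990-character default leaves room for the code-block fences inside Discord's 2,000-character message cap):

```javascript
// Hypothetical chunker: break long command output into pieces that each
// fit in a single Discord message once wrapped in a code block.
function chunkOutput(text, limit = 1990) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}
```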
The platform’s ability to run commands non-interactively via Discord is incredibly powerful. With custom-coded commands like `cd`, `pwd`, and `ls`, users can easily navigate and interact with their containerized environments. The platform's non-interactive approach ensures that commands run to completion without requiring additional input, making it a smooth experience for users managing servers and systems remotely.

By leveraging Docker, REST logging, and thoughtful command handling, the platform delivers a flexible and secure way for users to run commands, all within a familiar interface like Discord.

# Editing Files with Discord-Linux

Discord-Linux’s **edit-file** command offers users the ability to edit files within their Docker containers through Discord, making it a highly effective and user-friendly solution for container management. This functionality is facilitated through a series of integrated steps, leveraging Docker, MySQL, and Discord.js to provide seamless file editing without needing direct access to the container shell. Here's an in-depth look at how this system works, with code examples to illustrate each step.

When the user issues the **edit-file** command, the bot first verifies whether the user has an active container. This step is critical to ensure that only users with valid containers can access the system. The bot queries a MySQL database to check for the user’s container ID (`sshSurfID`), which is used throughout the process to interact with the user’s container. If the user does not have a container, they are notified and prompted to create one.

Once the user’s SSH ID (`sshSurfID`) is retrieved from the database, the system checks whether the container is currently running. This check is performed using a shell script executed within the server. If the container does not exist or is not running, the user is informed, and the process is stopped, preventing unnecessary command execution.

If the container exists, the next step involves fetching the file contents. The user specifies the full file path as a command argument. This path is passed to the container using the **docker exec** command to retrieve the contents of the specified file. The bot uses the `cat` command inside the container to read the file and capture its contents.

```javascript
let argFile = interaction.options.getString('full-file-path'); // File path from user input
await cmd('docker exec ' + sshSurfID + ' bash -c "cat ' + argFile + '"').then(out => {
  fileContentsData = out.stdout.toString(); // File contents stored as a string
}).catch(err => {
  console.log('Error reading file from container:', err);
});
```

Discord limits modal text inputs to 4,000 characters, so if the file is too large to be displayed or edited within the modal, the bot sends an error message, preventing the user from proceeding with editing files that exceed the allowed size.

```javascript
if (fileContentsData.length > 4000) {
  const embed = new EmbedBuilder()
    .setColor("#FF0000")
    .setTitle("File too large to edit")
    .setDescription(`Sorry, Discord's limit for file size is 4,000 characters, and your file is ${fileContentsData.length} characters long.`)
    .setTimestamp()
    .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });

  return interaction.reply({ embeds: [embed] });
}
```

If the file size is within the allowed limit, Discord-Linux presents a modal window to the user using **Discord.js**'s modal components. This allows the user to edit the file contents directly within Discord, providing a user-friendly interface for modifying files without needing to connect to the container through SSH.

```javascript
const modal = new ModalBuilder()
  .setCustomId(rand) // Unique identifier for the modal
  .setTitle('Editing ' + path.basename(argFile)); // Modal title shows the file being edited

const fileContents = new TextInputBuilder()
  .setCustomId('contentsInput')
  .setLabel("File Contents")
  .setValue(fileContentsData) // The file contents are pre-populated in the modal
  .setStyle(TextInputStyle.Paragraph); // Multi-line input for file editing

const fileContentsInputRow = new ActionRowBuilder().addComponents([fileContents]);

modal.addComponents([fileContentsInputRow]);

await interaction.showModal(modal); // Show the modal to the user
```

When the user submits the modal, Discord-Linux captures the edited content. The new file content is written to a temporary file on the server, which is used to overwrite the file in the container. This temporary file is saved using the **write** library, ensuring the new contents are stored securely before being transferred back into the container.

```javascript
client.on('interactionCreate', async modalInteraction => {
  if (modalInteraction.isModalSubmit()) {
    let editedContent = modalInteraction.fields.getTextInputValue('contentsInput'); // Get user input from modal

    // Write the edited content to a temporary file on the server
    echoFile.sync('/tmp/tmpfile/' + rand + '/' + path.basename(argFile), editedContent, {
      newline: true
    });

    // Copy the temporary file back into the container
    await cmd('docker cp /tmp/tmpfile/' + rand + '/' + path.basename(argFile) + ' ' + sshSurfID + ':' + removeFilePart(argFile)).then(out => {
      console.log('File copied back into the container:', out);
    }).catch(err => {
      console.log('Error copying file back to container:', err);
    });

    // Clean up the temporary file from the server
    try {
      fs.unlinkSync('/tmp/tmpfile/' + rand + '/' + path.basename(argFile));
      console.log('Temporary file deleted');
    } catch (err) {
      console.error('Error deleting temporary file:', err);
    }

    // Notify the user that the file has been successfully saved
    return modalInteraction.reply("File saved as " + argFile);
  }
});
```

After the file is copied back into the container, the temporary file is deleted from the server to ensure efficient resource management and prevent the accumulation of unnecessary data on the host machine.

This approach allows Discord-Linux to offer an efficient and streamlined file editing process without needing users to connect directly to their containers via SSH. The entire workflow is executed through Discord, simplifying container management for users.

By combining Docker, MySQL, and Discord.js, Discord-Linux provides a robust system for editing containerized files, allowing users to modify their environments quickly and securely within the confines of a familiar chat interface. This workflow significantly reduces the complexity of container management while providing essential features like security, scalability, and resource control.


# File Retrieval from Containers

Managing files inside a container can often be tricky, especially when interacting with containers remotely. In Discord-Linux, users can interact with their containers in various ways through the Discord chat interface, including retrieving files from within their containers. The `/openfile` command allows users to fetch files from their container and have them sent directly to a Discord channel.

This post delves into how Discord-Linux handles the file retrieval process from within a Docker container, focusing on the technical aspects of copying files, interacting with Docker, and managing user input.

### The Purpose of the `/openfile` Command

The **`/openfile`** command enables users to specify a file path inside their container and retrieve that file through Discord. The file is then sent directly to the channel where the command was invoked. This makes it easy for users to access important files without needing to log into the container manually. The process involves several steps, including verifying the user, copying the file from the container, and sending the file back to Discord.

Here’s how the command is structured and executed in detail.

### Command Structure

At the core of the `/openfile` command is the ability to take in user input (the file path) and interact with Docker to retrieve the file. The file is temporarily stored in the server's file system before being sent to the user via Discord.

```javascript
module.exports = {
  name: "openfile",
  description: "Sends a file from your container to the channel.",
  options: [{
    "name": "full-file-path",
    "description": "Full path of the file to open.",
    "required": true,
    "type": 3 // 3 is a string input for the file path
  }],
```

The command accepts a single required option, **`full-file-path`**, which is the full path of the file inside the container that the user wants to retrieve. This path is validated later on to ensure the file exists before proceeding.

### Fetching the Container ID

Before any file can be retrieved, the system first needs to identify the container associated with the user. This is done by querying the MySQL database for the user’s unique container ID (`uid`), which is stored when the container was first created.

```javascript
let getSSHuID = new Promise(function (resolve, reject) {
  connection.query(
    "SELECT uid FROM users WHERE discord_id = ?",
    [interaction.user.id], // parameterized to avoid SQL injection
    function (err, results, fields) {
      if (err) return reject(err);
      if (results.length == 0) {
        console.log("User does not exist");
        resolve("The user does not exist");
      } else {
        resolve(results[0].uid);
      }
    }
  );
});
```

The bot queries the database using the user’s Discord ID to fetch their associated container `uid`. If the user doesn’t have a container, the process halts. Otherwise, the `uid` is saved for use in the file retrieval process.

### Retrieving the File Path

Once the container ID is fetched, the next step is processing the file path that the user provided. The full path to the file inside the container is extracted from the user's input, and the file name is derived using Node.js’s built-in `path` module:

```javascript
let argFile = interaction.options.getString('full-file-path');
let fileName = getpath.basename(argFile);
```

Here, the **`argFile`** variable holds the file path inputted by the user, and **`fileName`** extracts the base name (i.e., the actual file name) from the full path. This step is important because the file will need to be referenced both in the container and in the local file system after it’s copied.

### Setting Up Temporary Storage

Before copying the file from the container, a temporary directory is created on the host machine. This directory will hold the file after it’s copied from the container but before it’s sent to Discord.

```javascript
var dir = '/tmp/files/' + name;

if (!fs.existsSync(dir)) {
  fs.mkdirSync(dir, { recursive: true }); // also creates /tmp/files if it is missing
}
```

A unique directory is created inside `/tmp/files/`, using a randomly generated number (stored in `name`) to avoid overwriting any existing files or directories. If the directory doesn’t already exist, it’s created using **`fs.mkdirSync`**.

### Copying the File from the Container

The next step is to copy the specified file from the user’s container to the temporary directory on the host machine. This is done using Docker’s **`docker cp`** command, which allows files to be copied from within a running container to a local directory.

```javascript
cmd('docker cp ' + sshSurfID + ':' + argFile + ' /tmp/files/' + name + "/" + fileName).then(out => {
  console.log('out =', out);
  interaction.editReply({
    files: ['/tmp/files/' + name + "/" + fileName]
  });
  console.log('end');
}).catch(err => {
  console.log('err =', err);
});
```

In this block, **`docker cp`** copies the file from the container (identified by its `sshSurfID`) to the local directory `/tmp/files/`. The file is then stored in the newly created folder under the user-specific path. If the copy operation is successful, the bot sends the file to the Discord channel using **`interaction.editReply`**, which attaches the file to the response.

### Error Handling and Final Steps

As with any system interacting with external services like Docker, there’s always the possibility of errors. For example, the file might not exist in the container, or there could be permission issues. The bot includes error handling to manage these cases gracefully:

```javascript
.catch(err => {
  console.log('err =', err);
});
```

If any issues arise during the `docker cp` operation, the error is logged, and the process halts without crashing the bot.

Once the file is successfully sent to the channel, the process concludes, and the bot prints a final message to the console to indicate that the operation has finished:

```javascript
console.log('end');
```

The `/openfile` command in Discord-Linux is a powerful tool that enables users to easily access files from within their containers. By interacting with Docker through Node.js and managing user input via Discord, the command allows seamless file retrieval without requiring users to log into their containers manually.

From fetching the container ID, processing the file path, copying the file, and handling errors, the bot ensures a smooth experience for users who need quick access to files stored in their Linux containers. This capability adds significant value to the overall Discord-Linux platform, making it a versatile tool for developers, sysadmins, and hobbyists alike.


# Managing Privacy Mode in Discord-Linux

In Discord-Linux, user privacy is an important feature that allows individuals to control how their data and interactions are handled. The **`/privacy`** command is designed to give users control over their privacy settings, allowing them to toggle between private and public modes with ease. One key feature of **privacy mode** is that when enabled, all replies from the bot are sent as **ephemeral messages**, meaning only the user who initiated the command can see the bot's responses. This blog post explores the implementation of the privacy mode feature, focusing on how the bot manages user input, file storage, and interaction feedback, and how it ensures that responses are private.

### Introduction to the Privacy Mode Feature

The privacy mode in Discord-Linux enables users to control the visibility of their interactions and data within the system. By toggling between private and public modes, users can decide whether they want to operate in a more secure, private environment or a standard public setting. When in **private mode**, the bot's responses are sent as **ephemeral** messages, which ensures that only the user who executed the command can see the response.

The **`/privacy`** command is a simple toggle mechanism that changes the user’s privacy mode based on their current setting. If a user is in public mode, running the command switches them to private mode, and vice versa.

Here’s how the feature works, from file storage to user feedback.

### Command Structure

The `/privacy` command modifies the user's privacy status by storing this information in a JSON file specific to each user. The command first checks whether a privacy file exists for the user. If the file is found, the current privacy status is read, and the system toggles the value. If no file is found, the system creates one with a default value of `public` (or not private).

```javascript
module.exports = {
  name: "privacy",
  description: "Turns privacy mode on or off.",
  run: async (client, interaction) => {
    const file = './cache/' + interaction.user.id + ".privacy";
```

In this part of the code, the path to the privacy file is constructed based on the user’s unique Discord ID. Each user has their own JSON file, ensuring that privacy settings are individualized.

### Checking for an Existing Privacy File

The first step in the process is checking whether a privacy file already exists for the user. This file holds the current privacy status, which can either be `true` (private mode) or `false` (public mode). The system uses Node.js’s **`fs.existsSync`** method to check for the existence of this file.

```javascript
if (fs.existsSync(file)) {
  console.log("We have a privacy file.");
  jsonfile.readFile(file, function (err, privacyInfo) {
    console.log(privacyInfo.private);
```

If the file exists, it reads the current privacy setting using **`jsonfile.readFile`**. The file contains a simple JSON object that stores whether the user is in private mode or not. The system then toggles this value, switching between private and public modes based on the user's current status.

### Toggling Privacy Status and Ephemeral Messages

Once the privacy information is read from the file, the system toggles the value of the `private` field. If the user was in private mode, the system switches them to public mode, and vice versa. If the user switches to **private mode**, all bot responses to the user are sent as **ephemeral**, meaning only the user who executed the command can see the messages.

```javascript
const newValue = !privacyInfo.private; // Toggle privacy
const obj = { private: newValue };
```

This simple line of code inverts the current privacy setting. It ensures that every time the user runs the `/privacy` command, their privacy mode is switched to the opposite state. Once the new value is calculated, it’s written back to the file using **`jsonfile.writeFile`**.

When a user is in private mode, the system automatically sends the bot’s responses as ephemeral by including the **ephemeral** flag in the response, which is handled by Discord:

```javascript
interaction.reply({
  content: `You are now in ${modeText} Mode!`,
  ephemeral: newValue // Sends as ephemeral if true
});
```

This ensures that any response sent to a user in **private mode** remains visible only to them.

### Providing User Feedback

After updating the privacy status, the system sends feedback to the user via an embedded message in Discord. The **`EmbedBuilder`** is used to create a visually appealing response that informs the user of their new privacy mode. If the user has switched to **private mode**, the bot’s message will be visible only to them:

```javascript
const modeText = newValue ? "Private" : "Public";
const embed = new EmbedBuilder()
  .setColor("#0099ff")
  .setTitle(`${modeText} Mode`)
  .setDescription(`You are now in ${modeText} Mode!`)
  .setTimestamp()
  .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });

interaction.editReply({
  embeds: [embed],
  ephemeral: newValue // Sends as ephemeral if true
});
```

If the user is in **private mode**, this ensures that the embedded message is only visible to them, giving them a discreet confirmation that their privacy settings have been updated. This is particularly useful for users who want to keep their interactions private without revealing them to others in the channel.

### Creating a Privacy File for New Users

If the privacy file does not already exist, the system assumes that the user is new to the privacy system and creates a default privacy file. This file is initialized with the user in **public mode** (`private: false`), meaning their interactions are not private by default. This provides a simple onboarding experience for users who are using the privacy feature for the first time.

```javascript
} else {
  console.log("We do not have a file, generating.....");
  const obj = { private: false };

  jsonfile.writeFile(file, obj).then(() => {
    console.log('Write complete');
    const embed = new EmbedBuilder()
      .setColor("#0099ff")
      .setTitle("Welcome to our Privacy controls!")
      .setDescription(`You are now set up to use the privacy system, currently you are set to public\nTo go private, run the command again.`)
      .setTimestamp()
      .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
    interaction.editReply({ embeds: [embed] });
  }).catch(error => console.error(error));
}
```

The embed message in this case notifies the user that the privacy file has been created and explains how to toggle between public and private modes. For new users, this setup message ensures they understand how to enable private mode.

### Error Handling

As with any system that involves file I/O operations, there is always a chance of errors, such as a failure to read or write to the file. The system includes error handling to ensure that issues are logged and do not crash the bot.

```javascript
} catch (err) {
  console.error(err);
}
```

If any error occurs during the file check, read, or write process, the error is logged to the console. This ensures that any issues can be traced and debugged without affecting the user's experience.

The `/privacy` command in Discord-Linux is a user-friendly way for individuals to control their privacy settings within the system. By toggling between private and public modes, users can decide how they want their interactions to be handled. When privacy mode is enabled, all bot replies are sent as **ephemeral**, meaning only the user who triggered the command can see the responses.

This combination of real-time feedback, persistent privacy settings, and private communication makes the feature both powerful and easy to use. Whether a user is concerned about keeping their interactions private or just exploring the public side of the system, Discord-Linux provides an intuitive way to manage these preferences directly through Discord.

# Writing Files to a Container in Discord-Linux

Users have full control over their container environments, allowing them to manage and manipulate files directly from within Discord. The **`/write-file`** command is one such feature, enabling users to create or update files within their container using a simple, interactive form. This blog post will break down how this feature works, detailing the process of capturing user input, writing content to a file, and transferring it to the container.

### Overview of the `/write-file` Command

The **`/write-file`** command allows users to write content into a file located in their container. This feature takes user input through a modal dialog in Discord, collects the file's location, name, and contents, and then writes this data to the specified file within the container. The entire process is managed using Docker commands behind the scenes, making file management seamless and straightforward for the user.

### Command Structure

At its core, the `/write-file` command starts by identifying the user's container, ensuring it exists, and then presenting a modal to the user for input. Once the necessary information is collected (file location, name, and contents), the system writes the content to a temporary file and copies it into the container using Docker.

```javascript
module.exports = {
  name: "write-file",
  description: "Writes a file to your container.",

  run: async (client, interaction) => {
    let getSSHuID = new Promise(function (resolve, reject) {
      connection.query(
        "SELECT uid from users WHERE discord_id = \'" + interaction.user.id + "\'",
        function (err, results, fields) {
          if (results.length == 0) {
            console.log("User does not exist")
            resolve("The user does not Exist");
          } else {
            resolve(results[0].uid);
          }
        }
      )
    });
```

The command first retrieves the container’s unique ID (`sshSurfID`) from the database based on the user's Discord ID. If no container is found, the user is informed that they need to generate one. This ensures that only users with active containers can write files.

### Container Existence Check

Once the user’s container ID is obtained, the system checks whether the container exists by running a script that verifies the container's status. If the container doesn’t exist, the process stops, and the user is informed.

```javascript
await cmd('bash /home/scripts/check_exist.sh ' + sshSurfID).then(async out => {
  if (out.stdout != 1) {
    return await interaction.editReply("Sorry, you do not have a container currently, generate one using /generate");
  }
});
```

### The check_exist.sh script

```bash
#!/bin/bash
if [ $( docker ps -a | grep $1 | wc -l ) -gt 0 ]; then
  echo "1"
else
  echo "0"
fi
```

This Bash script checks if a Docker container with a specific name or ID (passed as an argument to the script) exists.

Here's a breakdown of what each part does:

**`#!/bin/bash`**:
- This is the shebang line that tells the system to run the script using the Bash shell.

**`if [ $( docker ps -a | grep $1 | wc -l ) -gt 0 ]; then`**:
- This checks if there is any Docker container matching the argument (`$1`) provided when running the script.
- **`docker ps -a`**: Lists all containers, including both running and stopped ones.
- **`grep $1`**: Searches for the container name or ID specified by `$1` in the list of containers.
- **`wc -l`**: Counts the number of lines output by the `grep` command, i.e., the number of matching containers.
- **`-gt 0`**: Checks if the count is greater than zero, meaning there is at least one match.

**`echo "1"`**:
- If a container with the given name or ID is found, it prints `1`, indicating the container exists.

**`else`**:
- This handles the case where no matching container is found.

**`echo "0"`**:
- If no container matches, it prints `0`, indicating the container does not exist.

This step ensures that users cannot attempt to write to a non-existent container, which prevents potential errors and ensures smooth operation.

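One caveat of the `grep $1` approach is that it matches substrings anywhere in the `docker ps -a` output, so an ID like `abc` would also match a container named `abc123`. A stricter check can be sketched by parsing the listing and comparing names exactly; the sample output below is illustrative, not taken from a real host:

```javascript
// Given the text output of `docker ps -a --format "{{.Names}}"`,
// return true only when a container name matches exactly.
function containerExists(psOutput, name) {
  return psOutput
    .split('\n')
    .map(line => line.trim())
    .some(line => line === name);
}

// Illustrative sample output, not from a real Docker host.
const sample = 'abc123\nweb-proxy\nuser42';
console.log(containerExists(sample, 'abc'));    // false (no substring match)
console.log(containerExists(sample, 'abc123')); // true
```

For the bot's purposes the `grep` version is usually good enough, since container IDs are long and effectively unique, but exact matching avoids false positives when names share prefixes.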
### Capturing User Input with a Modal

The core of this command revolves around gathering user input for the file’s location, name, and content. To capture this information, the bot displays a **modal dialog**. The modal is a form-like interface that collects text input from the user and passes it back to the system for further processing.

```javascript
const modal = new ModalBuilder()
  .setCustomId(rand)
  .setTitle('Let\'s save a file!');

const locationInput = new TextInputBuilder()
  .setCustomId('locationInput')
  .setLabel("Location: ex: /home/user")
  .setStyle(TextInputStyle.Short);

const fileNameInput = new TextInputBuilder()
  .setCustomId('fileNameInput')
  .setLabel("FileName: ex: example.txt")
  .setStyle(TextInputStyle.Short);

const fileContents = new TextInputBuilder()
  .setCustomId('contentsInput')
  .setLabel("Content")
  .setStyle(TextInputStyle.Paragraph);
```

The modal captures three key inputs:

- **Location**: Where the file will be saved in the container.
- **File Name**: The name of the file to be created or updated.
- **Content**: The actual text content to be written into the file.

### Writing the File to a Temporary Location

Once the user submits the modal, the bot writes the content to a temporary file on the server. This temporary file will later be copied to the container using Docker’s **`docker cp`** command.

```javascript
echoFile.sync("/tmp/tmpfile/" + rand + "/" + filename, content, {
  newline: true
});
```

This step ensures that the file is created in a secure, isolated environment before being moved into the container. The file is stored temporarily in the `/tmp/tmpfile/` directory on the host system.

### Copying the File to the Container

After writing the file to the temporary location, the system uses **`docker cp`** to copy the file into the container. The `docker cp` command allows files from the host machine to be transferred to a running container.

```javascript
cmd('docker cp ' + "/tmp/tmpfile/" + rand + "/" + filename + ' ' + sshSurfID + ':' + location).then(out => {
  console.log('File copied to container:', out);
  fs.unlinkSync("/tmp/tmpfile/" + rand + "/" + filename); // Cleanup temporary file
});
```

Once the file has been successfully copied to the container, the temporary file is deleted to keep the server clean and efficient. This prevents the buildup of unused files on the host machine.


### Final User Feedback

After the file has been written to the container, the system provides feedback to the user, confirming that the file has been saved successfully. The bot sends a message back to the Discord channel, notifying the user of the file’s location inside the container.

```javascript
interaction.reply("File saved as " + location + "/" + filename);
```

This feedback loop ensures that users are kept informed throughout the process and can easily verify that their file has been written correctly.

### Error Handling and Edge Cases

The command is designed to handle various edge cases, such as missing containers or invalid input. If the user’s container does not exist, the process halts early, and the user is informed. Additionally, if there are any errors during the file copy process, they are logged to the console for debugging, but the user is provided with a clear message indicating the issue.

```javascript
.catch(err => {
  console.log('Error copying file:', err);
});
```

This level of error handling ensures that the bot remains stable and user-friendly, even in cases where things don’t go as expected.

The **`/write-file`** command in Discord-Linux is a powerful feature that allows users to create or update files inside their containers from within Discord. By integrating Docker commands with Discord's interactive modals, this feature simplifies the process of writing and managing files in containerized environments. With its seamless feedback loop, error handling, and intuitive user input, Discord-Linux provides users with an efficient and user-friendly way to interact with their containerized file systems directly from Discord.

# Managing Holesail Connections with Discord-Linux

Holesail is redefining how we connect to services on remote machines, providing a seamless Peer-to-Peer (P2P) tunneling solution that is decentralized, encrypted, and simple. If you're using Discord-Linux, managing these connections is made even easier with a suite of commands that allow you to create, list, restart, and manage ports directly from your Discord client. This post will guide you through the details of how each command works and how it integrates with Holesail’s innovative P2P network.

### Introduction to Holesail

Holesail allows you to securely connect to any service running on your machines, without needing static IP addresses, port forwarding, or centralized servers. With no complicated setup or accounts, all you need is a generated key to instantly access your services. Holesail's simplicity and speed make it an addictively efficient solution for remote connections.

Holesail is often described as a decentralized version of Tailscale, where there’s no middleman, and all connections are end-to-end encrypted. With Holesail, you scan a QR code, generate a key, and you're instantly connected—no hassle, no friction.

### Discord-Linux Holesail Command Suite

With Discord-Linux, you can manage your Holesail connectors directly through Discord, allowing you to control your JUMP host connections and local services from any device. The following commands give you full control over your ports and connections.

### Listing Your Holesail Connectors

The **`/holesail-connectors`** command allows you to list all your active port connections on your container. This command uses Docker to execute a script that retrieves a list of your active connectors. It checks for the existence of your container and fetches a list of currently active connections.

```javascript
module.exports = {
  name: "holesail-connectors",
  description: "List your port connections",
  run: async (client, interaction) => {
    const docker = new Docker();
    let sshSurfID;

    await getSSHuID.then((userid) => {
      sshSurfID = userid;
      cmd(`node /home/scripts/docker_exec.js ` + sshSurfID + " / " + "con --list")
        .then(out => {
          interaction.editReply(`\`\`\`${out.stdout}\`\`\``);
        })
        .catch(err => console.log('err =', err));
    });
  },
};
```

This command interacts with Docker and your Holesail setup to ensure all active connections are listed. It provides a quick snapshot of your current port connections without needing to dive into manual configurations.

### Creating a New Port via Holesail

With **`/create-port`**, you can create a new Holesail connector port. You provide a connection name and a connection hash, and this command will register the connection within our JUMP infrastructure. The request is sent to the API that handles the creation of ports and returns the necessary connection details.

```javascript
module.exports = {
  name: "create-port",
  description: "Create a port via a Holesail connector hash.",
  run: async (client, interaction) => {
    const name = interaction.options._hoistedOptions[0].value;
    const hash = interaction.options._hoistedOptions[1].value;

    const requestBody = {
      name: `${interaction.user.id}_${name}`,
      connectionHash: hash,
      discordId: interaction.user.id
    };

    unirest.post('http://internal-api/start')
      .headers({ 'Content-Type': 'application/json', 'password': PASSWORD })
      .send(requestBody)
      .end(response => {
        const embed = new EmbedBuilder()
          .setColor("#FF0000")
          .setTitle("JUMP Host Manager")
          .setDescription(`${name} was created on: ***ssh.surf:${response.body.port}***`)
          .setTimestamp()
          .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
        interaction.followUp({ embeds: [embed] });
      });
  },
};
```

This command simplifies the process of setting up new connections, allowing you to specify a connection hash and create a port in seconds.

### Listing Active Ports on JUMP Server

The **`/list-ports`** command returns a list of all ports connected through the JUMP server. This command retrieves port details from the API and checks their statuses, indicating whether they are active or inactive.

```javascript
module.exports = {
  name: "list-ports",
  description: "Returns your ports from our jump server.",
  run: async (client, interaction) => {
    const discordId = interaction.user.id;
    const apiUrl = 'http://internal-api/list-ports';

    unirest.post(apiUrl)
      .headers({ 'Content-Type': 'application/json', 'password': PASSWORD })
      .send({ discordId })
      .end(async (response) => {
        const ports = response.body;
        const statusPromises = ports.map(port => checkPortStatus('ssh.surf', port.port));
        const portStatuses = await Promise.all(statusPromises);

        const embed = new EmbedBuilder()
          .setColor("#00FF00")
          .setTitle("Active ports on JUMP")
          .setDescription(buildPortStatusMessage(ports, portStatuses))
          .setTimestamp()
          .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
        interaction.followUp({ embeds: [embed] });
      });
  },
};
```

This command leverages the API to pull a comprehensive list of ports and their current statuses, providing instant feedback about the state of your connections.

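The `checkPortStatus` and `buildPortStatusMessage` helpers are not shown above. A plausible shape for the message builder, which pairs each port record with its probed status, might look like the following; the field names mirror the API response used above, but the function itself is an assumption:

```javascript
// Pair each port record with its probed status and build the embed body.
// `ports` mirrors the API response used above; `statuses` is an array of
// booleans (from checkPortStatus) in the same order as `ports`.
function buildPortStatusMessage(ports, statuses) {
  return ports
    .map((port, i) => `ssh.surf:${port.port} - ${statuses[i] ? 'Active' : 'Inactive'}`)
    .join('\n');
}

const ports = [{ port: 2200 }, { port: 2201 }];
console.log(buildPortStatusMessage(ports, [true, false]));
// ssh.surf:2200 - Active
// ssh.surf:2201 - Inactive
```

Keeping the formatting in a pure function like this makes it trivial to test separately from the network probes, which is why the command runs `Promise.all` on the status checks first and only then builds the message.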

### Restarting a JUMP Port

The **`/restart-port`** command allows users to restart an existing port on the JUMP server. This is particularly useful for resetting connections without having to delete and recreate the port from scratch.

```javascript
module.exports = {
  name: "restart-port",
  description: "Restart your JUMP port",
  run: async (client, interaction) => {
    const name = interaction.options._hoistedOptions[0].value;

    const requestBody = {
      name: `${interaction.user.id}_${name}`,
      discordId: `${interaction.user.id}`
    };

    unirest.post('http://internal-api/restart')
      .headers({ 'Content-Type': 'application/json', 'password': PASSWORD })
      .send(requestBody)
      .end(response => {
        cmd(`node /home/scripts/docker_exec.js ${interaction.user.id} / pm2 restart ${name}`).then(() => {
          const embed = new EmbedBuilder()
            .setColor("#FF0000")
            .setTitle("JUMP Host Manager")
            .setDescription(`${name} was restarted!`)
            .setTimestamp()
            .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
          interaction.followUp({ embeds: [embed] });
        });
      });
  },
};
```

By sending a request to our Holesail API and executing a restart command within the container, this command refreshes the connection, ensuring it continues to run smoothly.

### Auto-Generating a Local Port

The **`/start-port`** command auto-generates a local port for your service and connects it to the JUMP server. This command requires you to specify a port name and the desired port number. The system then uses Docker to set up the new connection.

```javascript
module.exports = {
  name: "start-port",
  description: "Auto Gen a port local and on jump.",
  run: async (client, interaction) => {
    const name = interaction.options._hoistedOptions[0].value;
    const port = interaction.options._hoistedOptions[1].value;

    let getSSHuID = new Promise(function (resolve, reject) {
      connection.query(
        "SELECT uid from users WHERE discord_id = \'" + interaction.user.id + "\'",
        function (err, results, fields) {
          if (results.length == 0) {
            resolve("The user does not Exist");
          } else {
            resolve(results[0].uid);
          }
        }
      );
    });

    await getSSHuID.then(userid => {
      sshSurfID = userid;
      cmd(`node /home/scripts/docker_exec.js ${userid} / node /usr/bin/con --new ${port} ${name}`).then(out => {
        unirest.post('http://internal-api/start')
          .headers({ 'Content-Type': 'application/json', 'password': PASSWORD })
          .send({ name: `${interaction.user.id}_${name}`, connectionHash: out.stdout.trim(), discordId: interaction.user.id })
          .end(response => {
            const embed = new EmbedBuilder()
              .setColor("#FF0000")
              .setTitle("JUMP Host Manager")
              .setDescription(`${name} local (${port}) was created on: ***ssh.surf:${response.body.port}***`)
              .setTimestamp()
              .setFooter({ text: `Requested by ${interaction.user.username}`, iconURL: `${interaction.user.displayAvatarURL()}` });
            interaction.followUp({ embeds: [embed] });
          });
      });
    });
  },
};
```

This command integrates Docker commands with our custom Holesail API, making it simple to generate new port connections for local services with minimal effort.

Holesail and Discord-Linux offer a streamlined, decentralized approach to managing your remote connections. Through the commands we've explored, you can manage your JUMP host connections and ports directly from Discord. Whether you need to create, list, or restart ports, these commands make it easier than ever to leverage the power of Holesail’s P2P tunneling solution, ensuring secure and instant access to your local networks.

With Holesail, you no longer need to rely on complex configurations or static IPs—just generate a key, connect, and you're good to go!

# Automating Virtual Host (VHOST) Generation

Managing virtual hosts (VHOSTs) can be a complex and time-consuming process, requiring manual configurations, SSL certificate management, and network setup. By leveraging NGINX Proxy Manager's API and automating the process through a Discord bot, we've built a system that makes it easier than ever to create, update, list, and delete VHOSTs.

This automation is powered by RESTful API requests to NGINX Proxy Manager, combined with user inputs via Discord. We will walk through how the system is structured, and how each VHOST command is executed through a combination of Discord and API interaction.

### What is a VHOST?

A Virtual Host (VHOST) allows a single web server to serve multiple websites or services from different domain names. It’s particularly useful in multi-tenant environments, where several services are hosted on the same server. With VHOSTs, administrators can easily route traffic to the correct service based on the domain name.

Traditionally, VHOST setup involves manually creating configuration files, setting up SSL certificates, and modifying DNS settings. With the integration of NGINX Proxy Manager and a Discord bot, we eliminate most of the manual steps.

### The Power of Automation: Creating VHOSTs through a Bot

The Discord bot utilizes NGINX Proxy Manager's non-documented API, allowing us to fully automate VHOST management. By sending REST requests to the NGINX Proxy Manager API, we can automate everything from user registration to VHOST creation, SSL certificate management, and more.

Let’s dive into the details of how each feature is implemented.

### Registering an Account for Proxy Manager

Before creating VHOSTs, users must register with NGINX Proxy Manager. The **`/register-vhost-account`** command handles this process, setting up a new user in NGINX Proxy Manager and giving them permission to manage proxy hosts.

Here’s how the registration works:

```javascript
module.exports = {
  name: "register-vhost-account",
  description: "Register an account for Proxy Manager",
  run: async (client, interaction) => {
    const adminUsername = "REDACTED";
    const adminPassword = "REDACTED";

    const connection2 = mysql.createConnection({
      host: '127.0.0.1',
      user: 'nginx-db-user',
      database: 'nginx-proxy-manager-db',
      password: 'db-password'
    });

    // Generate credentials for the new user and register them with the API
    getSSHuID.then(async userid => {
      if (userid !== "User does not exist") {
        interaction.followUp("You are already registered.");
        return;
      }

      // Generate credentials for the user
      const newUserEmail = `${interaction.user.id}@ssh.surf`;
      const password = generator.generate({
        length: 16,
        numbers: true,
        excludeSimilarCharacters: true
      });

      // REST Request to create the user in NGINX Proxy Manager
      const token = await getToken(adminUsername, adminPassword);
      const createUserUrl = 'http://internal-api/api/users';
      const userData = {
        email: newUserEmail,
        name: interaction.user.username,
        password: password
      };

      // Send the POST request to create the user
      const response = await fetch(createUserUrl, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${token}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(userData)
      });

      if (response.ok) {
        interaction.followUp("Registration successful!");
      } else {
        interaction.followUp("Failed to register. Please try again.");
      }
    });
  }
};
```

### REST Request for User Registration

The registration process sends a POST request to the NGINX Proxy Manager API to create a new user account.

Here’s what the request looks like:

```json
POST http://internal-api/api/users
Authorization: Bearer <token>
Content-Type: application/json

{
  "email": "discord_id@ssh.surf",
  "name": "Discord Username",
  "password": "GeneratedPassword"
}
```

The bot interacts with the NGINX Proxy Manager API using a bearer token, allowing it to create user accounts programmatically. This token is obtained using the admin credentials for NGINX Proxy Manager.

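The `getToken` helper used throughout these commands is not shown above. A sketch of it against NGINX Proxy Manager's token endpoint (`POST /api/tokens`, which exchanges an `identity`/`secret` pair for a JWT) could look like this; the injectable `fetchImpl` parameter is an addition for testability, not part of the bot:

```javascript
// Exchange admin credentials for a bearer token via NGINX Proxy Manager's
// /api/tokens endpoint. fetchImpl is injectable so the function can be
// exercised without a live Proxy Manager instance.
async function getToken(baseUrl, identity, secret, fetchImpl = fetch) {
  const response = await fetchImpl(`${baseUrl}/api/tokens`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identity, secret })
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  const data = await response.json();
  return data.token;
}

// Usage with a stubbed fetch, since no Proxy Manager instance is assumed here:
const fakeFetch = async () => ({ ok: true, json: async () => ({ token: 'example-jwt' }) });
getToken('http://internal-api', 'admin@example.com', 'secret', fakeFetch)
  .then(token => console.log(token)); // example-jwt
```

The returned JWT expires after a while, so a production bot would typically cache it and re-request on a 401 rather than fetching a fresh token per command.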

### Creating a VHOST

After registering, users can create VHOSTs using the **`/create-vhost`** command. This command allows users to specify a domain name, port number, and optional path, and it handles the creation of the VHOST on NGINX Proxy Manager.

Here’s how the command works:

```javascript
module.exports = {
  name: "create-vhost",
  description: "Add a domain or subdomain to your account",
  run: async (client, interaction) => {
    const modal = new ModalBuilder()
      .setCustomId("vhostModal")
      .setTitle("Generate a new VHOST");

    // Collect domain, port, and path from the user
    const domainName = new TextInputBuilder()
      .setCustomId("domainforSSL")
      .setLabel("Domain or Subdomain")
      .setStyle(TextInputStyle.Short);

    const portNumber = new TextInputBuilder()
      .setCustomId("portNumber")
      .setLabel("Port Number")
      .setStyle(TextInputStyle.Short);

    const path = new TextInputBuilder()
      .setCustomId("path")
      .setLabel("Path (default: /)")
      .setValue("/")
      .setStyle(TextInputStyle.Short);

    const secondActionRow = new ActionRowBuilder().addComponents([domainName]);
    const thirdActionRow = new ActionRowBuilder().addComponents([portNumber]);
    const fourthActionRow = new ActionRowBuilder().addComponents([path]);

    modal.addComponents([secondActionRow, thirdActionRow, fourthActionRow]);
    interaction.showModal(modal);

    // Process the user's input and send a POST request to create the VHOST
    client.on('interactionCreate', async interaction => {
      if (interaction.type === 5 && interaction.customId === "vhostModal") {
        const domain = interaction.fields.getTextInputValue("domainforSSL");
        const port = interaction.fields.getTextInputValue("portNumber");
        const path = interaction.fields.getTextInputValue("path");

        const token = await getToken(adminUsername, adminPassword);
        const createVhostUrl = 'http://internal-api/api/nginx/proxy-hosts';
        const vhostData = {
          domain_names: [domain],
          forward_scheme: 'http',
          forward_host: '127.0.0.1',
          forward_port: parseInt(port),
          ssl_forced: true,
          locations: [{
            path: path,
            forward_host: '127.0.0.1',
            forward_port: parseInt(port)
          }]
        };

        const response = await fetch(createVhostUrl, {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${token}`,
            'Content-Type': 'application/json'
          },
          body: JSON.stringify(vhostData)
        });

        if (response.ok) {
          interaction.followUp(`VHOST created at https://${domain}`);
        } else {
          interaction.followUp("Failed to create VHOST. Please try again.");
        }
      }
    });
  }
};
```

### REST Request for VHOST Creation

This is the REST request sent to the NGINX Proxy Manager API to create a new VHOST:

```json
POST http://internal-api/api/nginx/proxy-hosts
Authorization: Bearer <token>
Content-Type: application/json

{
  "domain_names": ["example.domain.com"],
  "forward_scheme": "http",
  "forward_host": "127.0.0.1",
  "forward_port": 8080,
  "ssl_forced": true,
  "locations": [{
    "path": "/",
    "forward_host": "127.0.0.1",
    "forward_port": 8080
  }]
}
```

This request includes all the necessary configuration data, including the domain, forwarding host, and SSL enforcement.

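Since the port arrives as free-form modal text, it is worth validating before it reaches `parseInt`: `parseInt('80abc')` silently returns `80`, and a non-numeric value yields `NaN`, which the API would reject. Below is a hedged sketch of a payload builder with that check; `buildVhostData` is a hypothetical helper, not part of the bot shown above:

```javascript
// Build the proxy-host payload used above, validating the port first.
// Throws on anything that is not an integer in the valid TCP port range.
function buildVhostData(domain, portText, path = '/') {
  const port = Number(portText); // stricter than parseInt: rejects '80abc'
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid port: ${portText}`);
  }
  return {
    domain_names: [domain],
    forward_scheme: 'http',
    forward_host: '127.0.0.1',
    forward_port: port,
    ssl_forced: true,
    locations: [{ path, forward_host: '127.0.0.1', forward_port: port }]
  };
}

console.log(buildVhostData('example.domain.com', '8080').forward_port); // 8080
```

Failing fast here gives the user a clear error in Discord instead of an opaque API rejection after the request is already in flight.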

### Listing VHOSTs

Once VHOSTs are created, users can view them with the **`/list-vhosts`** command. This command sends a GET request to retrieve all the VHOSTs owned by the user.

```javascript
module.exports = {
  name: "list-vhosts",
  description: "List the VHOSTs you own",
  run: async (client, interaction) => {
    const token = await getToken(adminUsername, adminPassword);

    getAllProxies(token, interaction.user).then(proxies => {
      const vhostList = proxies.map(proxy => {
        return `[${proxy.domain_names[0]}](https://${proxy.domain_names[0]}) - Port: ${proxy.forward_port} - Status: ${proxy.status}`;
      });

      const embed = new EmbedBuilder()
        .setColor("#FF0000")
        .setTitle("Your VHOSTs")
        .setDescription(vhostList.join("\n"));

      interaction.editReply({ embeds: [embed] });
    });
  }
};
```

### Deleting a VHOST

If a user no longer needs a VHOST, they can use the **`/delete-vhost`** command to remove it. This command also deletes any associated SSL certificates.

```javascript
|
||
module.exports = {
|
||
name: "delete-vhost",
|
||
description: "Remove a VHOST and its certificate",
|
||
options: [{
|
||
name: "domain",
|
||
description: "The domain you wish to delete",
|
||
required: true,
|
||
type: 3
|
||
}],
|
||
run: async (client, interaction) => {
|
||
const domain = interaction.options._hoistedOptions[0].value;
|
||
|
||
getToken(adminUsername, adminPassword).then(token => {
|
||
getOwnerIdForDomain(domain, token).then(ownerId => {
|
||
if (ownerId === interaction.user.id) {
|
||
deleteVhostAndCertificate(domain, token).then(result => {
|
||
const embed = new EmbedBuilder()
|
||
.setColor("#FF0000")
|
||
.setTitle("VHOST Deleted")
|
||
.setDescription(`The VHOST for ${domain} has been deleted.`);
|
||
|
||
interaction.followUp({ embeds: [embed] });
|
||
});
|
||
} else {
|
||
interaction.followUp("You do not own this VHOST.");
|
||
}
|
||
});
|
||
});
|
||
}
|
||
};
|
||
```

### REST Request for VHOST Deletion

The following request is made to delete a VHOST:

```http
DELETE http://internal-api/api/nginx/proxy-hosts/9
Authorization: Bearer <token>
```

The API deletes the specified VHOST by its ID and removes any associated SSL certificates.
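Note that the DELETE endpoint is addressed by a numeric ID (here `9`), not by domain name, so the bot first has to resolve the user's domain against the list of proxy hosts. A minimal sketch of that lookup — the helper name and sample data are illustrative:

```javascript
// Resolve a domain to its numeric proxy-host ID from the list
// returned by the proxy-hosts endpoint.
function findProxyIdByDomain(proxies, domain) {
  const match = proxies.find(p => p.domain_names.includes(domain));
  return match ? match.id : null;
}

const proxies = [
  { id: 9, domain_names: ["example.domain.com"] },
  { id: 12, domain_names: ["other.domain.com"] }
];
console.log(findProxyIdByDomain(proxies, "example.domain.com")); // 9
```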

By integrating Discord bots with the NGINX Proxy Manager API, we’ve created a powerful tool for automating VHOST management. This system simplifies complex tasks, such as creating, updating, and deleting VHOSTs, by allowing users to perform these actions directly from Discord.

Whether you’re hosting multiple domains or managing SSL certificates, the combination of Discord and NGINX Proxy Manager provides an intuitive, automated solution that makes managing web services easier than ever.

# Creating a Custom Discord Notification Service

As our platform has continued to evolve, we’ve introduced a powerful notification service that allows direct messages and alerts to be sent to users through Discord. Whether it’s for system alerts, status updates, or user-triggered notifications, this service provides seamless, instant communication between the platform and its users.

We’ll break down the architecture, explain how it works, and provide insight into both the server and client components of this notification system.

### The Architecture of the Notification System

At the core of our notification system are two components:

1. **Server-Side API**: A Node.js-based service that listens for HTTP requests and processes messages to be sent via Discord.
2. **Client Application**: A Go-based client that can send custom notifications to the platform users.

These components work together to deliver notifications from the platform to the user’s Discord account with minimal delay. Let's break down each part.

### Server-Side: The Notification API

The server-side of the notification service is built using Node.js with Express.js and `discord.js` for interfacing with Discord. It listens for HTTP requests containing the necessary parameters, verifies them, and then sends the appropriate message via Discord.

#### Key Components of the API

```javascript
const fs = require('fs');
const http = require('http');
const express = require('express');
const app = express();

const { Client, Intents } = require('discord.js');
// Intent bitfield 4097 = GUILDS (1) | DIRECT_MESSAGES (4096)
const client = new Client({ intents: 4097 });

const mysql = require('mysql2');
```

Here, we initialize several modules:

- **Express.js**: Handles incoming HTTP requests.
- **discord.js**: Manages communication with Discord, allowing us to send messages.
- **mysql2**: Connects to the MySQL database to fetch user details like Discord IDs.

#### MySQL Database Connection

The service uses MySQL to store user information, such as their Discord ID, which is crucial for sending direct messages.

```javascript
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'myUser',
  database: 'myDatabase',
  password: 'myPassword'
});
```

This connection is essential, as it allows the API to look up a user in the database by the unique identifier (hostname) provided by the client.

#### Handling Notification Requests

The `/` endpoint of the API is where the magic happens. When the client sends a request, the server processes the message and sends it to the appropriate Discord user.

```javascript
app.get('/', async (req, res) => {
  const key = req.query.key;
  const sshID = req.query.hostname;
  const messageToSend = req.query.message;

  // Your IP check logic here

  if (key !== "KEYISHEREMAKEONE") {
    console.log("Invalid key");
    return res.end("Sorry, that did not work....");
  }

  connection.query(
    "SELECT discord_id FROM users WHERE uid = ?",
    [sshID],
    function (err, results, fields) {
      if (err) {
        console.error("Error fetching Discord ID:", err);
        return res.end("Sorry, something went wrong....");
      }

      if (results.length === 0) {
        console.log("User does not exist in database");
        return res.end("User not found....");
      }

      const discordID = results[0].discord_id;

      client.users.fetch(discordID).then((user) => {
        user.send(messageToSend)
          .then(() => {
            console.log("Message sent to Discord user");
            res.end("Message sent successfully!");
          })
          .catch((err) => {
            console.error("Error sending message to Discord user:", err);
            res.end("Error sending message to Discord user....");
          });
      }).catch((err) => {
        console.error("Error fetching Discord user:", err);
        res.end("Error fetching Discord user....");
      });
    }
  );
});
```

Here’s a breakdown:

1. **Key Verification**: The server checks the provided key to ensure that only authorized requests are processed.
2. **Database Lookup**: It queries the MySQL database to find the Discord ID associated with the provided `hostname`.
3. **Sending the Message**: Once the Discord ID is retrieved, the bot sends the message to the user via Discord.
4. **Error Handling**: If any issues arise during this process (e.g., invalid key, missing user, or Discord message failure), they are logged and returned to the client.

#### Discord Bot Initialization

To interact with users on Discord, we initialize a bot using the `discord.js` library.

```javascript
client.on('ready', async () => {
  console.log('Bot is logged in and ready!');
});

client.login('DISCORD_BOT_TOKEN');
```

This code ensures the bot is connected to Discord and ready to send messages when the server processes a notification request.

### Client-Side: Sending Notifications

The client side is written in Go and allows users to send notifications via a simple command-line interface. The program retrieves the machine's hostname and sends a GET request to the API along with the message content.

#### Go Client Code

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"os"
	"strings"
)

func main() {
	name, err := os.Hostname()
	if err != nil {
		panic(err)
	}

	argsWithoutProg := os.Args[1:]
	msg := strings.Join(argsWithoutProg, " ")

	resp, err := http.Get("https://api.yourdomain.com/?key=YOURKEYHERE&hostname=" + name + "&message=" + url.QueryEscape(msg))
	if err != nil {
		log.Fatalln(err)
	}

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}

	sb := string(body)
	log.Print(sb)
}
```

#### How It Works:

- **Fetch Hostname**: The program retrieves the machine's hostname using `os.Hostname()`, which is sent as part of the request to identify the sender.
- **Build Message**: The message is constructed from the command-line arguments and passed into the API request.
- **Send Notification**: The client sends a GET request to the server, including the API key, hostname, and message content.
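The URL the Go client concatenates can also be sketched with every parameter escaped, not just the message — the client above escapes only `msg`, so a hostname containing unusual characters would break the query string. An illustrative sketch (the base URL and key are placeholders):

```javascript
// Build the notification request URL; URLSearchParams escapes every
// value, including the hostname. Base URL and key are placeholders.
function buildNotifyURL(base, key, hostname, message) {
  const qs = new URLSearchParams({ key, hostname, message });
  return `${base}/?${qs.toString()}`;
}

const u = buildNotifyURL("https://api.yourdomain.com", "YOURKEYHERE",
  "SSH42113405732790", "disk usage at 90%");
console.log(u);
```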

### Use Cases

The notification system can be extended for various purposes:

- **System Alerts**: Notify users when their containers need attention or when system events occur.
- **User Reminders**: Users can send themselves reminders or updates.
- **Platform Announcements**: The system can send platform-wide announcements or alerts.

# Command Line Uploader Tool: A Deep Dive

In our efforts to enhance user experience and simplify file management on our platform, we've developed a **Command Line Uploader Tool**. This tool allows users to upload files directly from their computer's command line to their personal container, providing a seamless and efficient file transfer method.

We’ll explore the architecture, explain how it works, and provide insight into both the server-side and client-side components of this upload service.

### Why We Built It

Managing files within containers can sometimes be cumbersome, especially when relying on traditional methods like SSH file transfers. We wanted a solution that would:

- **Simplify file uploads**: Allow users to upload files to their containers with a single command.
- **Automate the process**: Enable easy setup, key generation, and secure file transfers directly from the command line.
- **Provide feedback**: Users receive real-time feedback about the status of their upload, including the file size, destination, and container information.

Our **Command Line Uploader Tool** achieves all of this by utilizing a RESTful API for file uploads, integrated with our container management system.

### The Setup: Generate an Upload Key

To begin using the Command Line Uploader Tool, users first need to generate an upload key using the `/upload-key` command on our platform. This key is essential for authenticating the user and ensuring that only authorized uploads take place.

Once the key is generated, users can install the uploader tool on their local system with the following command:

```bash
bash <(curl -s https://files.yourdomain.com/uploadInstall)
```

This installation script creates two utilities:

- `sshup`: For uploading a single file.
- `sshup-all`: For uploading all files in the current directory.

### Client-Side Installer Breakdown

The installer script sets up the upload environment by creating the necessary scripts and configuring permissions.

```bash
#!/bin/bash
echo "Hi, let's set up your access to upload files:"

read -p "SSHID: " SSHID
read -p "KEY: " KEY

# Write the single-file uploader. Piping through `sudo tee` ensures the
# write to /usr/bin itself runs with root privileges (a plain `sudo
# printf ... >` redirect would run as the unprivileged user).
printf '#!/bin/bash\ncurl -o /tmp/_bu.tmp -# "https://up.yourdomain.com/?cid=%s&key=%s" -F myFile=@"$1" && cat /tmp/_bu.tmp && rm /tmp/_bu.tmp\n' "$SSHID" "$KEY" | sudo tee /usr/bin/sshup > /dev/null

# Write the bulk uploader: runs sshup on every regular file in the current directory.
printf '#!/bin/bash\nfind . -maxdepth 1 -type f -exec sshup {} \\;\n' | sudo tee /usr/bin/sshup-all > /dev/null

sudo chmod +x /usr/bin/sshup
sudo chmod +x /usr/bin/sshup-all
```

#### How It Works:

**User Prompts**: The script prompts the user for their `SSHID` and `KEY`, which are necessary for identifying the container and validating the upload.

**Creating Upload Scripts**:
- `sshup`: A command-line utility for uploading a single file.
- `sshup-all`: A utility for uploading all files in the current directory.

**Permissions**: The scripts are saved under `/usr/bin/` and made executable, allowing users to run the uploader from anywhere on their system.

Once installed, users can upload files using the command `sshup <filename>` and receive output like:

```bash
~ ❯ sshup test.js
######################################################################### 100.0%
📝 test.js 💾 (423.00 Bytes) ➡ 🐳 /root/express ➡ 🌐 SSH42113405732790
```

# Server-Side: Upload API

On the server side, the upload API is built using **Node.js** and **Express.js**. It handles file uploads, verifies the user’s credentials, and securely transfers files to the corresponding container.

```javascript
const express = require('express');
const multer = require('multer');
const cmd = require('cmd-promise');
const fs = require('fs');
require('dotenv').config();

const app = express();

const https = require('https');
const privateKey = fs.readFileSync('/your/path/to/file.key', 'utf8');
const certificate = fs.readFileSync('/your/path/to/file.crt', 'utf8');
const credentials = { key: privateKey, cert: certificate };

const mysql = require('mysql2');
const connection = mysql.createConnection({
  host: '127.0.0.1',
  user: '',
  database: '',
  password: ''
});
```

#### Key Components:

- **Express.js**: Handles incoming HTTP requests and manages file uploads.
- **multer**: A middleware for handling `multipart/form-data` file uploads.
- **cmd-promise**: Used to execute system commands, such as copying the file to the Docker container.
- **MySQL Database**: Stores user information, including SSH IDs and keys, to verify upload requests.

#### Upload Logic

When a file is uploaded, the server performs several checks before transferring it to the appropriate container.

```javascript
app.post('/', multer({ dest: "uploads/" }).single('myFile'), (req, res) => {
  let key = req.query.key;
  let cid = req.query.cid;
  let filePath = req.file.path;
  let fileName = req.file.originalname;
  let size = formatBytes(req.file.size);

  async function uploadFile(req, res) {
    // `getkey` and `getPWD` are promises defined elsewhere in the service
    const theKey = await getkey;

    // Check the key
    if (theKey !== key) {
      return res.end("📛 Sorry, the upload key is incorrect.\nUpload Failed!");
    }

    const path = await getPWD;

    // Copy the file to the Docker container
    try {
      const cpOutput = await cmd(`docker cp ${filePath} ${cid}:${path}/${fileName}`);
      console.log('Copy Successful: ', cpOutput);
    } catch (error) {
      if (error.toString().includes("No such container:path:")) {
        return res.end(`📛 The upload failed. The directory ${path} does not exist.`);
      }
    }

    // Remove the temporary file
    const rmOutput = await cmd(`rm -f ${filePath}`);
    console.log('Temp File Removal: ', rmOutput);

    res.send(`📝 ${fileName} 💾 (${size}) ➡ 🐳 ${path} ➡ 🌐 ${cid}\n`);
  }

  uploadFile(req, res);
});
```

#### Step-by-Step Process:

1. **Receive File**: The file is uploaded to a temporary directory.
2. **Verify Key**: The server checks the provided key to ensure the upload is authorized.
3. **Check Container**: The server checks that the container exists and is running.
4. **Copy File**: If everything checks out, the file is copied to the user’s container using `docker cp`.
5. **Cleanup**: The temporary file is removed from the server after the upload is complete.

#### Example Output:

```bash
📝 test.js 💾 (423.00 Bytes) ➡ 🐳 /root/express ➡ 🌐 SSH42113405732790
```

This output provides the user with important details:

- **File name and size**.
- **Destination directory within the container**.
- **Container ID**.

### Why It Works So Well

The Command Line Uploader Tool is designed with simplicity and efficiency in mind:

- **Easy Setup**: With just a single installation command, users can start uploading files directly to their containers.
- **Seamless Integration**: The tool integrates perfectly with our container management system, ensuring files are uploaded to the right place every time.
- **Real-Time Feedback**: Users receive instant feedback about the success or failure of their uploads, with clear messaging and details about the transfer.

Our Command Line Uploader Tool is a game-changer for anyone working with containers. Whether you’re managing files in a development environment or pushing updates to a production server, this tool simplifies the process, making it quick and painless.

With just a single command, users can upload files, receive real-time feedback, and get back to what matters most: building and managing their applications.

# My Final Thoughts

**Discord-Linux** is a transformative tool that merges the best of both worlds: the simplicity of Discord's chat interface with the powerful capabilities of Linux containerization. By doing so, it offers an intuitive yet robust platform for developers, system administrators, hobbyists, and tech enthusiasts to create, manage, and interact with containers in real-time, directly from a Discord server. This innovative fusion opens up a new realm of possibilities for container management by making it accessible to a broader audience, many of whom may not be familiar with the complexities of traditional container orchestration platforms like Docker.

### The Essence of Discord-Linux

At its core, Discord-Linux leverages several key technologies that allow it to function seamlessly in a decentralized environment. First, **Docker**, the leading containerization tool, is responsible for creating isolated Linux environments within containers. Discord-Linux capitalizes on Docker's flexibility to offer various Linux distributions—such as **Ubuntu**, **Debian**, **Arch**, and **Alpine**—depending on the user’s needs. Whether it’s a lightweight container for quick testing or a full-fledged environment for complex development, Discord-Linux ensures that users have the right tools at their disposal, all launched through a few simple Discord commands.

Supporting this infrastructure is **Discord.js**, a Node.js module that interacts with the Discord API, enabling bot functionality within Discord. The bot acts as the bridge between users and Docker, receiving commands through Discord chat and executing the corresponding container operations behind the scenes. This level of integration is both innovative and practical, allowing users to focus on development and operations without the need for a dedicated command-line interface.

Additionally, **MySQL** plays a vital role in maintaining and managing user data within the system. From tracking container expiration dates to ensuring that each user has the resources they need, MySQL enables Discord-Linux to operate efficiently, especially in multi-user environments. By keeping detailed records of each container and user profile, Discord-Linux can scale up to accommodate larger groups of users without the performance issues typically associated with resource-heavy platforms.

### The Power of Peer-to-Peer Networking

One of the standout features of Discord-Linux is its decentralized, peer-to-peer (P2P) networking system. This architecture breaks away from the conventional server-client model, instead allowing users to connect directly with each other’s containers through a secure P2P network. This not only reduces the dependency on centralized servers but also offers greater scalability and fault tolerance. Users can spin up containers and collaborate on projects without worrying about downtime or server overloads.

The P2P network also enhances the privacy and security of container interactions. By enabling direct communication between containers through secure gateways, Discord-Linux ensures that sensitive data never passes through an intermediary server, making it an attractive option for those concerned with security and data sovereignty. The decentralized nature of the network ensures that the platform is not reliant on a single point of failure, making it ideal for distributed teams working on critical projects.

### Simplifying Container Lifecycle Management

One of the primary advantages of Discord-Linux is its ability to simplify the lifecycle management of containers. Typically, managing containers requires in-depth knowledge of Docker and command-line tools, but Discord-Linux abstracts much of this complexity. Users can create, manage, and destroy containers using intuitive commands like **/generate**, **/extend**, and **/destroy**. These commands are handled entirely through Discord’s chat interface, allowing users to avoid the hassle of logging into servers or navigating complex interfaces.

The container lifecycle in Discord-Linux is automated to minimize manual intervention. Once a container is generated, it is automatically configured with the necessary resources, including CPU, memory, storage, and networking. Each container is provisioned with secure **SSH access**, allowing users to interact with their container as if it were a standalone server. Users can manage their containers through simple Discord commands, enabling them to focus on their projects without worrying about the underlying infrastructure.

Containers in Discord-Linux are assigned a limited lifespan, typically lasting seven days. This ensures that resources are not wasted on containers that are no longer in use. Users who need their containers to last longer can issue the **/extend** command, which resets the expiration date, giving them more time to complete their tasks. If a container is not extended, it is automatically destroyed to free up resources, ensuring that the platform remains efficient even as it scales.

### Enhanced User Authentication and Security Measures

Security is a top priority for Discord-Linux, and this is reflected in its multi-layered authentication and user verification system. Before a user can generate a container, Discord-Linux performs several key security checks to ensure the user is legitimate. One such measure is the **account age verification** system, which checks the age of the user’s Discord account. By converting the user’s Discord ID into a Unix timestamp, Discord-Linux can determine whether the account is older than 30 days. This precaution prevents newly created or potentially malicious accounts from exploiting the platform’s resources.
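This check works because a Discord ID is a snowflake: its upper bits encode the account's creation time in milliseconds since Discord's epoch (2015-01-01 UTC). A minimal sketch of the conversion — illustrative, not the platform's exact code:

```javascript
// Discord epoch: 2015-01-01T00:00:00.000Z in Unix milliseconds.
const DISCORD_EPOCH = 1420070400000;

// The top bits of a snowflake hold a millisecond timestamp; the low
// 22 bits are worker/process/sequence fields, so shift them off.
function snowflakeToTimestamp(id) {
  return Number(BigInt(id) >> 22n) + DISCORD_EPOCH;
}

function accountOlderThanDays(id, days, now = Date.now()) {
  return now - snowflakeToTimestamp(id) > days * 24 * 60 * 60 * 1000;
}

// Example: a synthetic snowflake created 31 days after the Discord epoch.
const id = (BigInt(31 * 86400000) << 22n).toString();
console.log(new Date(snowflakeToTimestamp(id)).toISOString()); // 2015-02-01T00:00:00.000Z
```

A `/generate` handler can then reject any account younger than 30 days before provisioning a container.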

Additionally, Discord-Linux cross-references a **banlist** to prevent users who have been flagged for previous violations from accessing the service. This banlist is maintained locally, and any user found in the banlist is immediately denied access to container resources. This helps maintain the integrity of the platform, ensuring that only authorized users can take advantage of its capabilities.

The platform also includes built-in mechanisms to manage user permissions and access levels. For instance, Discord-Linux can limit the number of containers a user can create based on their role within the Discord server, ensuring that system resources are allocated fairly across the user base. This level of control is particularly useful in environments with multiple users, where resource management and security are critical concerns.

### Streamlined Resource Management

Managing resources efficiently is a key component of any containerized platform, and Discord-Linux excels in this area. The system automatically allocates CPU, memory, and storage to containers based on user requirements and available system resources. For lightweight containers such as Alpine, fewer resources are allocated, while more resource-intensive containers like Ubuntu are provisioned with more memory and CPU cores. This dynamic resource allocation ensures that containers run optimally without overwhelming the host system.

In addition to CPU and memory management, Discord-Linux also handles **storage allocation** dynamically. Free-tier users may be allocated less storage, while users with an active subscription or license are given access to larger storage volumes. The platform continuously monitors storage usage to ensure that containers do not exceed their allotted limits, preventing performance degradation and ensuring fair resource distribution.

### Advanced Networking Capabilities

Networking is another area where Discord-Linux shines, thanks to its integration with Docker’s powerful networking features and its custom P2P networking architecture. Each container is assigned a unique network configuration that isolates it from other containers, enhancing security and ensuring proper traffic routing. Containers are provisioned with specific IP addresses and port mappings, allowing users to interact with them remotely through SSH or other protocols.

The platform's advanced networking capabilities also extend to the peer-to-peer connections between containers. By utilizing custom scripts, Discord-Linux ensures that containers can securely communicate with one another without relying on a central server. This setup is ideal for environments where containers need to share resources or collaborate on tasks, such as in a development team working on a distributed application.

### Simplified File Management and Container Interaction

In addition to its container creation and management features, Discord-Linux offers powerful file management capabilities. Users can interact with files inside their containers directly from Discord, using commands such as **/openfile** to retrieve files or **/write-file** to edit and save changes. This feature is particularly useful for developers who need to modify configuration files or update code without having to log into the container through SSH.

The system also supports advanced file manipulation, such as reading large files or managing multiple file operations simultaneously. If a file exceeds Discord’s message limit, the bot provides feedback to the user, preventing them from accidentally trying to open or edit files that are too large to be handled through the chat interface. This ensures a smooth and user-friendly experience, even when working with complex file operations.

### Automation and Lifecycle Management

One of the key strengths of Discord-Linux lies in its ability to automate many of the repetitive tasks involved in managing containers. For instance, the platform’s automatic **cleanup system** ensures that expired containers are removed promptly, freeing up resources for new users. This is handled by a background process that checks container expiration dates and removes any container that has not been extended.

In addition to container lifecycle automation, Discord-Linux also automates tasks such as restarting containers, adjusting resource allocation, and handling network configurations. This automation significantly reduces the need for manual intervention, making the platform a hands-off solution for container management.

### Future of Discord-Linux: Scalability and Community Impact

As Discord-Linux continues to evolve, its potential for scalability and broader community impact becomes increasingly apparent. The platform’s P2P architecture ensures that it can scale horizontally without the limitations imposed by centralized server architectures. This makes it a viable solution for large-scale environments, such as distributed teams, educational institutions, or open-source communities, where users need to spin up containers quickly and efficiently.

Furthermore, the community-driven nature of Discord means that users can collaborate, share containers, and contribute to the development of new features for Discord-Linux. This opens the door for future integrations, such as support for more complex orchestration systems like Kubernetes, or advanced features like machine learning model training within containers.

### Conclusion: The Future of Simplified Containerization

At a time when containerization is becoming increasingly essential for developers and system administrators, **Discord-Linux** stands out as a unique, user-friendly solution that breaks down the traditional barriers of entry. By simplifying the container management process and offering a decentralized, secure, and scalable platform, Discord-Linux empowers users to focus on what matters most: building, testing, and deploying their projects.

Whether you’re a seasoned developer or a newcomer to containerization, Discord-Linux provides the tools and flexibility you need to create and manage Linux environments in real-time. Its intuitive integration with Discord, combined with its powerful backend technologies, positions it as a game-changer in the realm of container management, offering a glimpse into the future of decentralized, peer-to-peer systems that are easy to use, secure, and scalable.