<!-- lead -->

A deep dive into building an external bash-history monitoring microservice.

Docker has become a key tool for deploying and managing applications. However, with the rise of containers comes a significant challenge: inspecting and auditing what occurs inside these containers. One often overlooked aspect is the command history—specifically, the `.bash_history` files. These files can reveal important information about user actions, debugging sessions, or potential security issues, but manually inspecting them across dozens or even hundreds of containers can be daunting.

This post presents a programmatic solution using Node.js, Dockerode, and Discord to inspect the `.bash_history` files across an entire Docker infrastructure. The solution automates the process of tailing these history files, batching their content, and sending notifications to Discord in near real-time, with rate limiting and error handling built in.

## Concept Overview

The idea is to automate the process of inspecting `.bash_history` files across Docker containers by continuously tailing these files and pushing the extracted command history to a central logging service, in this case, a Discord channel.

This approach allows you to track what commands were executed inside the containers at any point in time, whether they were legitimate debugging sessions or potentially harmful actions. By utilizing Docker's overlay filesystem and a programmatic approach, we can automate the discovery and monitoring of these files. We then send these logs to a remote system (Discord in this example) for further inspection, ensuring that no action goes unnoticed.

This setup covers multiple layers:

1. **Container Inspection**: We gather container information and map their overlay2 filesystem to identify `.bash_history` files.

2. **File Tailing**: We use the `tail` package to continuously monitor these `.bash_history` files in real time.

3. **Batching and Rate Limiting**: The logs are collected and sent in batches to avoid spamming the monitoring system and to respect API rate limits.

4. **Container Name Mapping**: We translate the filesystem paths to meaningful container names, making it easier to identify where each command was executed.

5. **Error Handling and Resilience**: The system is designed to handle file access issues, rate limiting, and other potential pitfalls, ensuring that it remains robust even under challenging conditions.

## The Code Breakdown

### Dockerode Integration

We start by setting up the connection to the Docker API using the Dockerode library. Dockerode allows us to interact with Docker containers, retrieve their metadata, and inspect their filesystems:

```javascript
const Docker = require('dockerode');

const docker = new Docker({ socketPath: '/var/run/docker.sock' });
```

The Docker socket (`/var/run/docker.sock`) provides the necessary interface to communicate with the Docker daemon. Using Dockerode, we can list all containers, inspect their filesystems, and map their overlay2 directories to locate `.bash_history` files.

### Monitoring `.bash_history` Files

The core task of this script is to discover and monitor `.bash_history` files for each container. In Docker, each container’s filesystem is managed using the overlay2 storage driver, which layers file system changes. Every container gets its own unique directory in `/var/lib/docker/overlay2`. By scanning these directories, we can find `.bash_history` files inside each container.

The `scanForBashHistoryFiles` function recursively scans the overlay2 directory:

```javascript
const fs = require('fs');
const path = require('path');

function scanForBashHistoryFiles(directory) {
  fs.readdir(directory, { withFileTypes: true }, (err, files) => {
    if (err) {
      console.error(`Error reading directory ${directory}:`, err);
      return;
    }

    files.forEach((file) => {
      if (file.isDirectory()) {
        const subdirectory = path.join(directory, file.name);
        scanForBashHistoryFiles(subdirectory);
      } else if (file.name === '.bash_history') {
        const filePath = path.join(directory, file.name);
        // The overlay2 ID is the directory two levels above the history file
        const overlayId = directory.split('/').slice(-3, -1)[0];
        tailFile(filePath, overlayId); // Tail the file for changes
      }
    });
  });
}
```

This function crawls through the directories in the overlay2 folder, checking for `.bash_history` files. Once a file is found, it triggers the `tailFile` function, which starts monitoring the file for changes.

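As a sanity check on that path logic, the overlay2 ID recovered by `slice(-3, -1)` is the directory sitting two levels above the history file. A standalone sketch (the ID `3f2c9d1ab` and the `/diff/root` layout are made up for illustration; real layouts can differ):

```javascript
// Illustrative check of the overlay2-ID extraction used in the scanner.
// The ID and the diff/root suffix below are invented for the example.
function overlayIdFromDirectory(directory) {
  return directory.split('/').slice(-3, -1)[0];
}

const dir = '/var/lib/docker/overlay2/3f2c9d1ab/diff/root';
console.log(overlayIdFromDirectory(dir)); // → "3f2c9d1ab"
```
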
### File Tailing

Tailing a file means continuously monitoring it for new lines. This is critical in real-time logging because we need to capture each new command entered into a container.

```javascript
const { Tail } = require('tail');

// Commands are batched per overlay2 ID before being sent on
const messageGroups = new Map();

function tailFile(filePath, overlayId) {
  const tail = new Tail(filePath);

  tail.on('line', (data) => {
    const messages = messageGroups.get(overlayId) || new Set();
    messages.add(data); // Add new command to the set
    messageGroups.set(overlayId, messages);
  });

  tail.on('error', (error) => {
    console.error(`Error tailing file ${filePath}:`, error);
  });
}
```

The `tailFile` function uses the `tail` package to listen for new lines in the `.bash_history` file. Each new line (representing a command entered in the container) is added to a `messageGroups` map, which organizes messages by container ID (`overlayId`). This is critical for keeping logs organized and ensuring each container’s commands are batched and sent separately.

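The drain side of `messageGroups` is not shown above; a minimal sketch of what a periodic flush could look like (the name `flushMessageGroups` is mine, and the map is passed in so the sketch stands on its own):

```javascript
// Send each container's pending command set as one batch, then clear it.
// `send` stands in for the script's sendToDiscordBatch.
function flushMessageGroups(groups, send) {
  for (const [overlayId, messages] of groups) {
    if (messages.size === 0) continue;
    send(overlayId, [...messages]); // one batch per container
    groups.delete(overlayId);       // start a fresh batch for the next interval
  }
}
```

Wired up with something like `setInterval(() => flushMessageGroups(messageGroups, sendToDiscordBatch), 5000)`, this keeps each container's commands grouped while respecting the batching described above.
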
### Mapping Container Names to Overlay2 IDs

Since the `.bash_history` files exist within obscure overlay2 directory names, it's crucial to map these IDs back to human-readable container names. This is where the function `getContainerNameFromOverlayId` comes into play:

```javascript
async function getContainerNameFromOverlayId(overlayId) {
  try {
    const containers = await docker.listContainers({ all: true });
    for (const containerInfo of containers) {
      const container = docker.getContainer(containerInfo.Id);
      const inspectData = await container.inspect();
      if (inspectData.GraphDriver.Data.LowerDir.includes(overlayId)) {
        return inspectData.Name.replace(/^\//, '');
      }
    }
  } catch (error) {
    console.error(`Error mapping overlay2 ID to container name for ID ${overlayId}:`, error);
  }
  return overlayId;
}
```

This function inspects each container, looking for an overlay2 directory that matches the provided `overlayId`. Once found, it extracts and returns the container’s human-readable name. If no match is found, the function simply returns the `overlayId` as a fallback.

### Sending Logs to Discord

To centralize the logs, the system sends command history to a Discord channel via a webhook. Messages are batched and sent periodically to avoid overwhelming the Discord API.

```javascript
const axios = require('axios');

// DISCORD_WEBHOOK_URL is the webhook endpoint configured for the channel
async function sendToDiscordBatch(overlayId, messages) {
  const containerName = await getContainerNameFromOverlayId(overlayId);
  const message = messages.join('\n');
  await axios.post(DISCORD_WEBHOOK_URL, {
    embeds: [
      {
        title: `Container: ${containerName}`,
        description: message,
        color: 0x0099ff,
      },
    ],
  });
}
```

The `sendToDiscordBatch` function batches commands for each container and sends them to a Discord channel using a webhook. Each log message is accompanied by the container’s name, making it easy to identify which container each command belongs to.

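One practical caveat: a joined batch can exceed Discord's embed-description size limit (4096 characters at the time of writing; treat the exact number as an assumption and check the current API docs). A small guard, with `chunkMessage` being a name introduced here for illustration:

```javascript
// Split an oversized batch before posting, so Discord doesn't reject the
// payload. 4096 is assumed to be the embed-description cap.
function chunkMessage(text, maxLen = 4096) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}
```

Each chunk can then be sent as its own embed, reusing the same title.
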
### Handling Rate Limiting

To ensure that the Discord API is not overloaded, the code includes logic to handle rate limiting. If too many requests are sent too quickly, Discord will reject them with a 429 status code. This system respects rate limits by queuing messages and retrying them after a specified delay:

```javascript
function handleRateLimitError(response) {
  if (response.status === 429) { // Rate limit exceeded
    const retryAfter = response.headers['retry-after'] * 1000;
    console.warn(`Rate limit exceeded. Retrying after ${retryAfter} ms.`);
    setTimeout(processMessageQueue, retryAfter);
  }
}
```

This ensures that the system remains functional even under heavy usage, without dropping any log data.

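`processMessageQueue` itself is referenced but not shown; a simplified sketch of the retry loop it implies (the queue shape and names here are assumptions, not the script's exact implementation):

```javascript
// Pending batches waiting to be delivered.
const messageQueue = [];

// Drain the queue front to back; stop on failure so a retry scheduled by
// handleRateLimitError can resume from the same batch later.
async function processMessageQueue(send) {
  while (messageQueue.length > 0) {
    const { overlayId, messages } = messageQueue[0];
    try {
      await send(overlayId, messages);
      messageQueue.shift(); // dequeue only after a successful send
    } catch (error) {
      console.warn('Send failed, leaving batch queued:', error.message);
      return;
    }
  }
}
```

Because a batch is only dequeued after a successful send, a rate-limited request stays at the head of the queue until the retry succeeds.
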
## Robustness and Resilience

This solution is designed with resilience in mind. Containers can come and go, logs can be deleted or rotated, and `.bash_history` files may not always be available. The system handles all these issues gracefully by:

1. **Continuously rescanning** the overlay2 directories for new or deleted `.bash_history` files.

2. **Handling file access errors**, ensuring that permission issues or missing files do not cause the program to crash.

3. **Managing rate limits** by batching requests and retrying failed ones.

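Continuous rescanning implies one extra guard: without it, each rescan would attach a duplicate tail to files that are already monitored. A sketch (the `trackedFiles` set and `maybeTail` wrapper are additions of mine, not part of the script above):

```javascript
// Remember which history files are already tailed so periodic rescans
// don't attach duplicate watchers to the same file.
const trackedFiles = new Set();

function maybeTail(filePath, overlayId, tail) {
  if (trackedFiles.has(filePath)) return false; // already monitored
  trackedFiles.add(filePath);
  tail(filePath, overlayId); // e.g. the tailFile function shown earlier
  return true;
}
```
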
This resilience makes the system suitable for use in production environments where the container landscape is dynamic and constantly changing.

## What I found

By automating the inspection of `.bash_history` files across an entire Docker infrastructure, this solution provides a powerful tool for auditing, debugging, and ensuring security compliance. Through the integration of Dockerode, file system monitoring, and Discord for centralized log management, it becomes possible to monitor actions inside containers in real-time.

This approach can be extended further with additional logging, command filtering, or even alerting on specific patterns in the bash history. As containers continue to play a key role in modern infrastructure, the ability to inspect and audit their internal state will become increasingly important, and this solution offers a scalable, real-time mechanism to do so.