Transparent Proxy Port Rewriting with nftables (Docker/Host)


Scenario: when a container or the host connects to xx.xx.xx.xx:5668 / 9100, automatically rewrite the destination to yy.yy.yy.yy:26880 / 26881.


Run on the host (PREROUTING)

# Create an IPv4 table named nat; `add` is idempotent (re-adding an existing table is
# harmless), but you can check what already exists with `nft list tables`
sudo nft add table ip nat

# Create a base chain named prerouting in the ip nat table: type nat, attached to the
# prerouting hook (processes incoming packets before routing); priority -100 is the
# conventional dstnat priority
sudo nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'

# Rule: when the destination IP is xx.xx.xx.xx and the TCP port is 5668, DNAT the
# destination to yy.yy.yy.yy:26880
sudo nft add rule ip nat prerouting ip daddr xx.xx.xx.xx tcp dport 5668 dnat to yy.yy.yy.yy:26880

# Rule: when the destination IP is xx.xx.xx.xx and the TCP port is 9100, DNAT the
# destination to yy.yy.yy.yy:26881
sudo nft add rule ip nat prerouting ip daddr xx.xx.xx.xx tcp dport 9100 dnat to yy.yy.yy.yy:26881

# List the prerouting chain to confirm the rules were written (add -a to show handle
# numbers, which can be used for rollback/deletion)
sudo nft list chain ip nat prerouting
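
If you need to roll back, a minimal sketch: list with handle numbers, then delete by handle (the handle 7 below is hypothetical; use whatever your listing shows):

# Show rules together with their handle numbers
sudo nft -a list chain ip nat prerouting

# Delete a single rule by its handle (replace 7 with the number printed above)
sudo nft delete rule ip nat prerouting handle 7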


(Optional) Outbound transparent proxying inside the container (OUTPUT)

# Run these inside the container only; the container needs CAP_NET_ADMIN
sudo nft add table ip nat

# Create an output chain in the ip nat table: type nat, attached to the output hook
# (processes packets the container itself sends); priority -100 as usual
sudo nft add chain ip nat output '{ type nat hook output priority -100; }'

# Rule: connections from inside the container to xx.xx.xx.xx:5668 are DNATed to yy.yy.yy.yy:26880
sudo nft add rule ip nat output ip daddr xx.xx.xx.xx tcp dport 5668 dnat to yy.yy.yy.yy:26880

# Rule: connections from inside the container to xx.xx.xx.xx:9100 are DNATed to yy.yy.yy.yy:26881
sudo nft add rule ip nat output ip daddr xx.xx.xx.xx tcp dport 9100 dnat to yy.yy.yy.yy:26881

# List the output chain to confirm the rules were written
sudo nft list chain ip nat output


PREROUTING vs OUTPUT Comparison

| Scenario | Chain | When it takes effect | Applies to | Typical use |
| --- | --- | --- | --- | --- |
| Run on the host | prerouting | Before routing, as packets enter the host | External requests; container egress forwarded through the host | Intercept connections to a specific IP:PORT and redirect them to a proxy |
| Run inside the container | output | The moment the container's own packets are generated | Connections initiated by processes inside the container | "Transparently proxy" outbound connections from within the container |

PREROUTING intercepts "from the outside": it does not cross namespaces, needs no container privileges, and does not depend on the container's DNS. OUTPUT rewrites "from the inside": it is prone to privilege, namespace, and DNS pitfalls, so it breaks more easily.
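
These runtime rules are lost on reboot. A minimal persistence sketch, assuming a Debian/Ubuntu-style /etc/nftables.conf loaded by the nftables service (file path and service name vary by distro; note the stock file usually begins with `flush ruleset`, which on a Docker host would also wipe Docker's own rules):

#!/usr/sbin/nft -f
table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100;
        ip daddr xx.xx.xx.xx tcp dport 5668 dnat to yy.yy.yy.yy:26880
        ip daddr xx.xx.xx.xx tcp dport 9100 dnat to yy.yy.yy.yy:26881
    }
}

sudo systemctl enable --now nftables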

Two Must‑Do Keys to Make RustDesk + ZeroTier Work Reliably from Outside Your Home LAN

1) ZeroTier Managed Routes (both are required)

  • <ZT_CIDR> (LAN): the ZeroTier overlay subnet (ensures traffic to ZeroTier members goes through the tunnel)
  • <LAN_CIDR> via <ZT_SERVER_IP>: route all traffic to the home LAN via the server node (so the phone can reach the home-LAN hbbr/target PCs)

How to set:

  • ZeroTier Central → your Network → Routes → Add
  • Ensure the phone’s ZeroTier client is Authorized and Online
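
For illustration only, with hypothetical values (your CIDRs and server IP will differ), the two routes might look like:

  • 192.168.191.0/24 (LAN)              ← the ZeroTier overlay subnet
  • 192.168.0.0/24 via 192.168.191.10   ← home LAN routed via the server node's ZeroTier IP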

2) Quick fixes for common issues
“ID does not exist”
Check:

  • ID Server:
  • Relay Server:
  • Public Key: the full content of the server’s id_ed25519.pub

Client service not started:

  • Open the RustDesk Client GUI, click “Start service” at the bottom, wait until the status shows “Ready,” then try again

How to Integrate Anthropic Models into Open WebUI


If you already have Open WebUI running properly in a Docker environment, simply follow these steps to add Anthropic models support:

  1. Open your Open WebUI in a browser (your server address) and log in with your account.
  2. Click your username at the bottom left corner to enter the Admin Panel.
  3. In the Admin Panel, select “Connections”. Change the API address to http://<your-server-address>:9099, and set the API Key to 0p3n-w3bu!.
  4. Open a new terminal (or Docker container terminal) and enter your openwebui directory.
    If you are using Docker, make sure the pipelines service is reachable by the Open WebUI container. You can run the following steps on your host machine or inside the container.
  5. Clone the pipelines project and enter the directory:
    git clone https://github.com/open-webui/pipelines && cd pipelines
  6. Install required dependencies and start the pipelines service:
    pip install -r requirements.txt
    ./start.sh
  7. In your browser, visit https://openwebui.com/f/justinrahb/anthropic.
  8. On that page, click Get, then click Open WebUI URL, and then click Import to WebUI. This will integrate the Anthropic model provider into your local Open WebUI instance.
  9. After import is successful, click the gear icon next to “Functions” to set your Anthropic API Key (you can obtain your key at https://console.anthropic.com/settings/keys).
  10. Save your settings.
  11. Restart both services/terminals if needed (for Docker, restart containers; for terminal, Ctrl+C and rerun pipelines and Open WebUI).
  12. Return to your Open WebUI home – now you can use Anthropic models with streaming capability!

Notes:

  • For Docker deployment, ensure the pipelines service’s 9099 port can be accessed from the Open WebUI container. You might need to set up port mappings or use the same Docker network.
  • If you have firewall, SELinux, or other security software, make sure to allow access to port 9099.
  • If you encounter loading or error issues, check that both the pipelines service and Open WebUI are running, and that network/ports are correctly configured.
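
For the Docker case, a minimal sketch of putting both containers on one network; the network name webui-net and container name open-webui are assumptions, and the image tag follows the pipelines README:

# Create a shared network and run pipelines on it
docker network create webui-net
docker run -d --name pipelines --network webui-net -p 9099:9099 ghcr.io/open-webui/pipelines:main

# Attach your existing Open WebUI container (assumed to be named open-webui)
docker network connect webui-net open-webui

# Inside Open WebUI, the API address can then be http://pipelines:9099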

That’s it! Now you can harness the power of Anthropic’s state-of-the-art language models right inside your local Open WebUI.


🔧 Update: Support for Claude 4.5 and New Pipelines Behavior

If you see a 403 Forbidden or Not authenticated error when connecting, set the following environment variable before starting Pipelines:

export PIPELINES_API_KEY="0p3n-w3bu!"
./start.sh

Then, in Open WebUI → Admin Panel → Connections, set the same API Key value (0p3n-w3bu!).
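
If you run Pipelines in Docker as sketched earlier, pass the key via the container environment instead:

docker run -d --name pipelines --network webui-net -p 9099:9099 \
  -e PIPELINES_API_KEY="0p3n-w3bu!" ghcr.io/open-webui/pipelines:main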


Anthropic API Key Now Configured Separately

After importing the anthropic_manifold_pipeline.py, go to Settings → Pipelines → anthropic_manifold_pipeline, and enter your official Anthropic API key (sk-ant-...) under Anthropic API Key.

Do not enter 0p3n-w3bu! here — that one is only for the WebUI ↔ Pipelines connection.


If Claude 4.5 Models Don’t Show Up

Edit the file:

examples/pipelines/providers/anthropic_manifold_pipeline.py

And add these lines inside the get_anthropic_models() list:

{"id": "claude-sonnet-4-5-20250929", "name": "claude-4.5-sonnet"},
{"id": "claude-haiku-4-5-20251001", "name": "claude-4.5-haiku"},

Then restart Pipelines and re-import the .py file from the WebUI.


Uploading or Installing the Pipeline

Option 1: Upload directly

Via Settings → Pipelines → “Click here to select a .py file” and choose your local anthropic_manifold_pipeline.py.

Option 2: Install from GitHub Raw URL

https://raw.githubusercontent.com/open-webui/pipelines/main/examples/pipelines/providers/anthropic_manifold_pipeline.py

Verify the Connection

Run:

curl -H "Authorization: Bearer 0p3n-w3bu!" http://127.0.0.1:9099/models

If it returns a JSON list of models (including claude-4.5-sonnet and claude-4.5-haiku), your Pipelines service and Open WebUI are now properly connected.



Gemini CLI Installation Guide


🔧 Installation Methods

Method 1: Quick Installation with npx (Recommended)

Features: No need for global installation, ideal for temporary use

npx https://github.com/google-gemini/gemini-cli

Method 2: Global Installation

Features: Suitable for continuous use

npm install -g @google/gemini-cli

🚀 How to Start

After installation, launch Gemini CLI with the following command:

gemini

🔑 Setting Your API Key

1. Obtain Your API Key

🛠️ Steps to set up API key in GCP:

  • Log in to GCP and enable the Gemini API:
    Access: https://console.cloud.google.com/
  • Enable: search for “Generative Language API” and enable it.
  • Create API key: “APIs & Services” > “Credentials” > Create API key

2. Set the API Key Environment Variable

For Linux / macOS:

export GEMINI_API_KEY="your-api-key"

For Windows (Command Prompt; note that quotes would become part of the value):

set GEMINI_API_KEY=your-api-key

For Windows (PowerShell):

$env:GEMINI_API_KEY="your-api-key"
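
On Linux/macOS, to persist the key across sessions, one option is appending the export to your shell startup file:

echo 'export GEMINI_API_KEY="your-api-key"' >> ~/.bashrc
source ~/.bashrc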

📚 References

Official Repository:
https://github.com/google-gemini/gemini-cli


⚠️ Notes

  • Make sure to manage permissions appropriately in production environments.
  • Always check the official documentation for the latest information.

Claude Code Installation Guide

1. System Requirements

| Item | Details |
| --- | --- |
| OS | macOS 10.15 or later / Ubuntu 20.04+ / Debian 10+ / Windows (WSL required) |
| Hardware | Memory: 4 GB or more |
| Software | Node.js 18 or higher |
| Network | Internet connection required |
| Shell | Bash / Zsh / Fish recommended |

2. For Windows Users (Installing WSL/Ubuntu)

  1. Install WSL (Windows Subsystem for Linux) and the Ubuntu distribution: wsl --install
     After rebooting, check available distributions as needed: wsl -l -o
  2. If older versions of node/npm are present in the initial state, remove them: sudo apt remove nodejs npm

3. Installing Node.js (Recommended: nvm)

  1. Install nvm (Node Version Manager): curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
  2. Reload your shell configuration: source ~/.bashrc
  3. Install the Node.js LTS version: nvm install --lts
  4. Switch to it: nvm use --lts

4. Installing Claude Code

To install globally (recommended method):

npm install -g @anthropic-ai/claude-code

Note: It is not recommended to use sudo with the install command.
If you encounter permission errors, please refer to the “Configuration Guide” in the official documentation.

After installation, verify the version:

claude --version

5. Starting and Authenticating Claude Code (OAuth)

  1. Move to your project directory: cd <your-project-directory>
  2. Start Claude Code: claude
  3. On first launch, a selection screen for authentication method will appear.
    Select “2. Anthropic Console account”.
  4. Log into Console via the displayed link or screen in your browser to obtain an authentication code.
  5. Paste the authentication code into the terminal to complete registration.
    The settings will be saved and auto-authentication will occur from the next startup.

6. Manually Setting the API Key for CLI

Specifying via Environment Variable

export ANTHROPIC_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
claude

Persistent Configuration with settings.json

  1. If the .claude folder does not exist in your home directory, create it: mkdir -p ~/.claude
  2. Create or edit ~/.claude/settings.json as follows:

{
  "env": {
    "ANTHROPIC_API_KEY": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}
  • After setting this, the claude command will automatically read your API Key.

7. Checking Configuration & Troubleshooting

  • To check installation and authentication status: claude doctor
  • If you encounter permission issues or npm permission errors, please refer to the “Troubleshooting” section in the official documentation.

8. Bedrock Integration Setup

Add the following three lines to your shell startup file:
vi ~/.bashrc
Then append:
export CLAUDE_CODE_USE_BEDROCK="1"
export AWS_REGION="ap-northeast-1"
export AWS_BEARER_TOKEN_BEDROCK=""

Reload:
source ~/.bashrc

To use Sonnet as the model, run this inside Claude Code:
/model sonnet

9. How to Update

Automatic Updates (Enabled by Default)

Claude Code updates itself automatically.

To disable automatic updates:

claude config set autoUpdates false --global
# Or
export DISABLE_AUTOUPDATER=1

Manual Update

claude update

10. Frequently Asked Questions & Notes

  • Most errors during installation or startup are related to permission, Node.js version, or network settings.
  • Windows users must always operate in a WSL + Linux environment.
  • For the latest information, FAQ, and support, please consult the official documentation and the official Discord, etc.

Scheduled Automatic Start/Stop for EC2 in Practice


1. Approach and How It Works

  • Goal: start and stop EC2 on a schedule, fully automated, to save money.
  • Main components:
    • Lambda (calls the EC2 API)
    • EventBridge Scheduler (triggers the Lambda on a schedule)
    • A single IAM role (for the Lambda) is enough, with the permissions and trust policy set up correctly.
2. Lambda function 編寫與部署

  1. 登錄 AWS Lambda 控制台。
  2. Create function > Author from scratch
  3. 命名(建議分「EC2AutoStop」&「EC2AutoStart」)。
  4. 選 Python 3.x。
  5. 貼入以下代碼:
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name='ap-northeast-3')
    ec2.stop_instances(InstanceIds=['i-xxxxxxxxxxxxxxxxx'])
    return 'EC2 stopped'

(For the start function, change .stop_instances to .start_instances.)

  6. ⚠️ Click “Deploy” after every change! Otherwise the Lambda's actual behavior will not update.
  7. After the initial setup, use the Test feature to verify that it operates the EC2 instance correctly and produces logs.
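
Besides the console's Test button, you can also invoke the function from the AWS CLI (assuming your CLI credentials are configured; function name and region as above):

aws lambda invoke --function-name EC2AutoStop --region ap-northeast-3 response.json
cat response.json   # should contain "EC2 stopped"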

3. Configuring the Lambda Execution Role (custom managed policy, inline policy, and trust relationship)

Step 1: Create the role, action policy, and inline policy

  1. IAM > Roles > Create role
  2. Choose Lambda as the trusted entity.
  3. Attach the AWSLambdaBasicExecutionRole policy, then attach a custom managed policy
    (suggested name: AmazonEC2StopStartInstances)
  • with the following content, granting the Lambda permission to stop/start EC2:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*"
        }
    ]
}
  4. Attach another inline policy
    (suggested name: InvokeSpecificLambda)
  • with the following content, allowing this role to invoke the specific Lambda functions (extend with other functions' ARNs as needed):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": [
                "arn:aws:lambda:ap-northeast-3:088xx0000000:function:EC2AutoStop",
                "arn:aws:lambda:ap-northeast-3:088xx0000000:function:EC2AutoStart"
            ]
        }
    ]
}

Replace the ARNs so they match the Lambda names/region/account ID in your environment.

Step 2: Add the trust relationship policy

  1. IAM > Roles > find the execution role used by your Lambda.
  2. Open the “Trust relationships” tab and choose “Edit trust policy”.
  3. Paste in the following JSON, allowing both Lambda and Scheduler to assume this role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "scheduler.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Adding "scheduler.amazonaws.com" here is what allows EventBridge Scheduler to invoke the Lambda with this role.

  4. Save and double-check.

4. Setting Up the EventBridge Scheduler Task

  1. AWS Console > EventBridge > Schedules > Create schedule
  2. Be sure to pick the same region as the Lambda.
  3. For the target type, choose “Invoke” → “AWS Lambda”.
  4. For the Lambda function, pick the function you just deployed.
  5. For the Execution role, choose the execution role configured in the previous step (already trusted and authorized).
  6. Schedule (cron) example for 08:00 JST, Monday–Friday:
   cron(0 23 ? * MON-FRI *)
   (Note: with the schedule left in UTC this fires 23:00 UTC Mon–Fri, which is 08:00 JST Tue–Sat; either shift the weekday field, or set the schedule's time zone to Asia/Tokyo and write cron(0 8 ? * MON-FRI *) directly.)
  7. Save.
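
Equivalently, a CLI sketch for the start schedule; the role ARN is a placeholder for a role Scheduler may assume, and the Asia/Tokyo time zone avoids the UTC weekday shift noted above:

aws scheduler create-schedule \
  --name EC2AutoStart \
  --schedule-expression "cron(0 8 ? * MON-FRI *)" \
  --schedule-expression-timezone "Asia/Tokyo" \
  --flexible-time-window Mode=OFF \
  --target '{"Arn":"arn:aws:lambda:ap-northeast-3:088xx0000000:function:EC2AutoStart","RoleArn":"arn:aws:iam::088xx0000000:role/<your-scheduler-role>"}'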

5. Lambda Configuration Best Practices (Timeout/Memory)

  • Set Timeout to 30–60 seconds (prevents Sandbox.Timedout).
  • Recommended Memory: 256–512 MB.
  • Where: Lambda console > Configuration > General configuration > Edit
  • Remember to Deploy after making changes.

6. Troubleshooting (CloudWatch Logs & Scheduler Run History)

| Symptom | How to check / fix |
| --- | --- |
| Lambda Test succeeds but the scheduled run never fires | Check whether the Lambda role's trust policy includes scheduler.amazonaws.com |
| The schedule does not trigger | Check the Scheduler Run history and the Lambda CloudWatch Logs |
| Timeouts | Make the Lambda Timeout long enough |

7. Common Errors and Fixes, Plus Additional Tips

| Situation | Fix |
| --- | --- |
| Lambda execution times out | Raise the Timeout to 30 s or more |
| Scheduler does not trigger the Lambda | Add scheduler.amazonaws.com to the Lambda role's trust policy |
| Lambda produces no logs | Run it once; the log group is created automatically |
| Lambda Test works but the schedule has no effect | Check the role's trust relationship and permission policies |
| Changed the Lambda but nothing happens | You forgot to click Deploy; always Deploy! |

  • Keeping the action policies and trust entirely on this single Lambda role keeps management simple and effective.
  • Inline policies are easier to audit and maintain, and can be scoped down to temporary or specific EC2 instances.

How to Control Baloo File Indexer Resource Usage on KDE

If you’re using KDE on Linux with a large home directory, you might notice high disk IO, CPU usage, or unexpected swap activity. In many cases, Baloo File Indexer is the cause—especially during initial indexing. Here’s a quick guide to check, control, and optimize Baloo for a smooth KDE experience:


1. Check Baloo Status and Progress

balooctl status

Shows current state, total files indexed, and files pending for indexing.

Example output:

Indexer state: Indexing file content
Total files indexed: 178,275
Files waiting for content indexing: 355

2. Temporarily Disable Baloo

If you want to immediately stop resource consumption:

balooctl disable

To enable again:

balooctl enable
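
If you only want to pause indexing without throwing away the existing index, balooctl also supports suspend/resume:

balooctl suspend   # pause indexing; the index is kept
balooctl resume    # continue where it left off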

3. Exclude Unnecessary Folders from Indexing

Recommended! Only index what you really need:

  • Go to System Settings → Search → File Search
    Add folders you wish to exclude (e.g., Downloads, big backup folders, development projects).

Or edit (advanced):

nano ~/.config/baloofilerc

Add lines like:

[General]
exclude folders[$e]=/home/youruser/Downloads/,/home/youruser/projects/,/home/youruser/big_backup/

Restart Baloo for changes to take effect:

balooctl disable
balooctl enable

4. Monitor Disk Usage by Baloo

Get the actual index size:

du -sh ~/.local/share/baloo

Several GB is typical with 100k+ files. If it’s too much, use exclusion above.


5. Key Points

  • Initial indexing: Heavy IO/CPU, will finish after the first complete pass.
  • Post-indexing: Resource usage drops dramatically, only new/modified files are scanned.
  • Disabling: Safe, and can be toggled anytime. Your files remain untouched.
  • Rebooting during indexing: Safe, progress resumes automatically.
  • Clean up everything (if needed):
    balooctl purge
    balooctl enable

System Setup Guide for Remote Local File Manipulation Using Flask and MCP Server

Overview

This project aims to build and validate a remote API service enabling local filesystem operations by integrating MCP Server and Flask. The main functionalities include:

  • Remote execution of operations such as directory listing, file creation, file deletion, and file modification.
  • Providing an HTTP API interface with Flask, powered by MCP Server.

Setup Instructions

1. Install Required Packages

Node.js and MCP Server

MCP Server requires Node.js. Follow these steps to install:

  1. Install the latest LTS version of Node.js from Node.js Official Website.
  2. Install MCP Server globally:
   npm install -g @modelcontextprotocol/server-filesystem

Python and Flask

To build the remote API server with Flask, you’ll need the Python environment:

  1. Download and install Python from Python Official Website.
  2. Install the required libraries:
   pip install flask flask-cors

2. Project Directory Structure

Create the files and directories according to the following structure:

C:\dev\mcp\
│
├── server-filesystem.cmd
├── bridge.js
├── flask_api.py
└── test\            (sandbox directory the MCP server is allowed to access)

Code Details

server-filesystem.cmd

A script for launching MCP Server to enable remote filesystem operations:

@echo off
npx -y @modelcontextprotocol/server-filesystem C:\dev\mcp\test

bridge.js

A Node.js script to execute and test MCP Server behaviors (CRUD operations):

const { spawn } = require("child_process");

// Launch the MCP Server via server-filesystem.cmd
const mcpServer = spawn("cmd.exe", ["/c", "C:\\dev\\mcp\\server-filesystem.cmd"], {
    env: process.env, // inherit the current environment variables
    shell: true       // shell mode is required to run a .cmd file
});

// Listen on stdout and print the MCP server's log output
mcpServer.stdout.on("data", (data) => {
    console.log(`MCP Server Output: ${data}`);
});

// Listen on stderr (classified handling):
// known startup messages are logged as "MCP Server Info"; everything else is a real error
mcpServer.stderr.on("data", (data) => {
    const message = data.toString();
    if (message.includes("Secure MCP Filesystem Server")) {
        console.log(`MCP Server Info: ${message}`);
    } else if (message.includes("Allowed directories")) {
        console.log(`MCP Server Info: ${message}`);
    } else {
        console.error(`MCP Server Error: ${message}`);
    }
});

// Catch the MCP server's close event
mcpServer.on("close", (code) => {
    console.log(`MCP Server exited with code ${code}`);
});

// === Added: tests of the MCP Server's functionality ===
// Delay execution to make sure the MCP Server has finished starting
setTimeout(() => {
    // Test 1: list directory contents
    console.log("Running Directory List Test...");
    const listTest = spawn("cmd.exe", ["/c", "dir C:\\dev\\mcp\\test"], {
        env: process.env,
        shell: true,
    });

    listTest.stdout.on("data", (data) => {
        console.log(`Directory List Output: ${data}`);
    });

    listTest.stderr.on("data", (data) => {
        console.error(`Directory List Error: ${data}`);
    });

    listTest.on("close", () => {
        console.log("Directory List Test Finished.");
    });
}, 3000); // run the directory-listing test after a 3-second delay

setTimeout(() => {
    // Test 2: create a new file
    console.log("Running File Creation Test...");
    const writeTest = spawn("cmd.exe", ["/c", "echo Hello, MCP Server! > C:\\dev\\mcp\\test\\test-write.txt"], {
        env: process.env,
        shell: true,
    });

    writeTest.stdout.on("data", (data) => {
        console.log(`File Creation Output: ${data}`);
    });

    writeTest.stderr.on("data", (data) => {
        console.error(`File Creation Error: ${data}`);
    });

    writeTest.on("close", () => {
        console.log("File Creation Test Finished. Please check the file in 'C:\\dev\\mcp\\test'.");
    });
}, 7000); // run the file-creation test after a 7-second delay

// Test 3: delete the file just created
setTimeout(() => {
    console.log("Running File Deletion Test...");
    const deleteTest = spawn("cmd.exe", ["/c", "del C:\\dev\\mcp\\test\\test-write.txt"], {
        env: process.env,
        shell: true,
    });

    deleteTest.stdout.on("data", (data) => {
        console.log(`File Deletion Output: ${data}`);
    });

    deleteTest.stderr.on("data", (data) => {
        console.error(`File Deletion Error: ${data}`);
    });

    deleteTest.on("close", () => {
        console.log("File Deletion Test Finished. Please check if the file is removed from 'C:\\dev\\mcp\\test'.");
    });
}, 15000); // run the file-deletion test after a 15-second delay

flask_api.py

A Flask API server that provides CRUD operations through HTTP requests:

from flask import Flask, request, jsonify, make_response
from flask_cors import CORS
import subprocess
import json  # Use json module for building responses manually

# Initialize Flask app
app = Flask(__name__)

# Enable Cross-Origin Resource Sharing (CORS)
CORS(app)

# Ensure Flask does not escape non-ASCII characters in responses
app.config['JSON_AS_ASCII'] = False


# Utility function: Construct JSON response with UTF-8 encoding
def json_response(data, status=200):
    response = make_response(json.dumps(data, ensure_ascii=False), status)
    response.headers['Content-Type'] = 'application/json; charset=utf-8'
    return response


# Route: List directory contents
@app.route('/list-files', methods=['GET'])
def list_files():
    # Fetch 'dir' parameter from the GET request
    directory = request.args.get('dir')
    if not directory:
        return json_response({"error": "Missing 'dir' parameter"}, 400)

    try:
        # Use subprocess to run the Windows 'dir' command
        result = subprocess.run(
            ["cmd.exe", "/c", f"dir {directory}"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
        # If the command failed, return the error
        if result.returncode != 0:
            return json_response({"error": result.stderr.strip()}, 500)
        # Return the directory listing as the response
        return json_response({"output": result.stdout.strip()})
    except Exception as e:
        # Catch any unexpected exceptions
        return json_response({"error": str(e)}, 500)


# Route: Create a file with content
@app.route('/create-file', methods=['POST'])
def create_file():
    # Parse the file_path and content from the POST request
    data = request.get_json()
    file_path = data.get('file_path')
    content = data.get('content', '')

    if not file_path:
        return json_response({"error": "Missing 'file_path' parameter"}, 400)

    try:
        # Use subprocess to run the Windows 'echo' command to write to a file
        result = subprocess.run(
            ["cmd.exe", "/c", f"echo {content} > {file_path}"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
        # If the command failed, return the error
        if result.returncode != 0:
            return json_response({"error": result.stderr.strip()}, 500)
        # Return success message
        return json_response({"message": f"File '{file_path}' created successfully"})
    except Exception as e:
        # Catch any unexpected exceptions
        return json_response({"error": str(e)}, 500)


# Route: Delete a specific file
@app.route('/delete-file', methods=['DELETE'])
def delete_file():
    # Parse the file_path from the DELETE request
    data = request.get_json()
    file_path = data.get('file_path')

    if not file_path:
        return json_response({"error": "Missing 'file_path' parameter"}, 400)

    try:
        # Use subprocess to run the Windows 'del' command to delete the file
        result = subprocess.run(
            ["cmd.exe", "/c", f"del {file_path}"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
        # If the command failed, return the error
        if result.returncode != 0:
            return json_response({"error": result.stderr.strip()}, 500)
        # Return success message
        return json_response({"message": f"File '{file_path}' deleted successfully"})
    except Exception as e:
        # Catch any unexpected exceptions
        return json_response({"error": str(e)}, 500)


# Route: Read contents of a specific file
@app.route('/read-file', methods=['GET'])
def read_file():
    # Fetch 'file_path' parameter from the GET request
    file_path = request.args.get('file_path')
    if not file_path:
        return json_response({"error": "Missing 'file_path' parameter"}, 400)

    try:
        # Open the file in 'read' mode with UTF-8 encoding
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
        # Return the file content in the response
        return json_response({"file_path": file_path, "content": content})
    except FileNotFoundError:
        # Return 404 if the file does not exist
        return json_response({"error": f"File '{file_path}' not found."}, 404)
    except Exception as e:
        # Catch any unexpected exceptions
        return json_response({"error": str(e)}, 500)


# Run the Flask app on all available IPs (0.0.0.0) at port 5000
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)

Important Notes

  1. Path formatting:
  • Ensure file paths sent to the Flask API use double backslashes (e.g., C:\\path\\to\\file).
  2. Security:
  • Add additional security layers (e.g., API keys or access control), as file operations can pose risks.
  3. Production deployment:
  • Flask's development server is not suitable for production. Use Gunicorn or uWSGI for deployment.

Using bridge.js (Node.js Script)

To test and interact with the MCP server for file operations, run:

node bridge.js

This:
  • Starts the MCP server.
  • Performs directory listing, file creation, and file deletion tests, with results printed to the console.

Starting Flask API (flask_api.py)

To host the API server:

  1. Navigate to the project directory.
  2. Run the script:
   python flask_api.py
  3. The API will be available at http://0.0.0.0:5000 for HTTP operations.

Testing Results

  1. Sample curl Commands (for Ubuntu):
# check folder
curl "http://yourTargetIP:yourTargetPort/list-files?dir=C:\\dev\\mcp\\test"
 
# create file
curl -X POST -H "Content-Type: application/json" \
-d '{"file_path": "C:\\dev\\mcp\\test\\test.txt", "content": "Created via Flask API"}' \
"http://yourTargetIP:yourTargetPort/create-file"

# read file
curl "http://yourTargetIP:yourTargetPort/read-file?file_path=C:\\dev\\mcp\\test\\test.txt"

# delete file
curl -X DELETE -H "Content-Type: application/json" \
-d '{"file_path": "C:\\dev\\mcp\\test\\test.txt"}' \
"http://yourTargetIP:yourTargetPort/delete-file"


Future Enhancements

  • Enable file upload and download functionality.
  • Introduce authentication mechanisms to RESTful APIs.
  • Add batch operations and file movement capabilities.

Setting Up Qt Development Environment on Ubuntu

---

**Update Your Package List:**  
`sudo apt update -y`  

---

**Install Qt Creator (Qt Integrated Development Environment):**  
`sudo apt install qtcreator -y`  

---

**Install Qt 5 Development Libraries and Tools:**  
Install `qtbase5-dev` and related toolkits, which provide core development files for Qt 5:  
`sudo apt install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools -y`  

---

**For a Complete Qt Development Environment (with GUI support):**  
You can additionally install:  
`sudo apt install qtdeclarative5-dev qttools5-dev-tools -y`  

---

**Verify Installation and Check Tool Versions:**  
Ensure `qmake` is installed and available:  
`qmake --version`  

Expected output should resemble:  

QMake version 3.x
Using Qt version 5.x in /usr/lib/x86_64-linux-gnu

---

**Test the Installation:**  
Launch the Qt Creator development tool:  
`qtcreator`  
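
---

**Optional: Verify the Toolchain with a Minimal Qt Program:**  
A minimal sketch; the project directory `~/qt-hello` and target name `hello` are arbitrary choices:

mkdir -p ~/qt-hello && cd ~/qt-hello

cat > main.cpp <<'EOF'
#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);  // set up the GUI application
    QLabel label("Hello, Qt!");    // a single label as the whole UI
    label.show();
    return app.exec();             // enter the event loop
}
EOF

cat > hello.pro <<'EOF'
QT += widgets
SOURCES += main.cpp
EOF

qmake && make && ./hello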

Resize the swap partition to free space and reassign it to the root partition

1. Insert and boot your Ubuntu Live CD/USB.

2. Mount the root partition (assuming nvme0n1p2 is your root partition):

sudo mkdir /mnt/root
sudo mount /dev/nvme0n1p2 /mnt/root

3. Edit the /etc/fstab file within the root partition:

sudo vi /mnt/root/etc/fstab

4. Comment out or delete the swap entry, then save. For example:

# UUID=yourUUID none swap sw 0 0

5. Unmount the root partition (GParted cannot resize mounted partitions), then start GParted:

sudo umount /mnt/root
sudo gparted

6. Shrink the swap partition nvme0n1p3:

Right-click nvme0n1p3 swap partition, choose Resize/Move.

Adjust it to the desired size (e.g., 16GB).

Click Resize/Move.

7. Expand the root partition nvme0n1p2 (note: resize operations are possible only on contiguous space):

Right-click nvme0n1p2 root partition, choose Resize/Move.

Drag the right slider to use the space freed from the swap partition.

Click Resize/Move.

8. Mount the root partition again:

sudo mount /dev/nvme0n1p2 /mnt/root

Edit the /etc/fstab file within the root partition:

sudo vi /mnt/root/etc/fstab

Update the existing swap entry with the new UUID, as sketched below.
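
To find the new UUID, for example:

sudo blkid /dev/nvme0n1p3

# then update the swap line in /mnt/root/etc/fstab accordingly:
# UUID=<new-UUID> none swap sw 0 0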

9. Reboot into your normal system, not the Live CD/USB.

10. Enable the new swap partition:

sudo swapon -a

11. Check that the new swap partition is active:

sudo swapon --show

Confirm root and swap partition sizes:

df -hT