Creating a Fully Restorable Windows System Backup (GPT + EFI + MSR + C)


Recommended Full System Backup Strategy

Boot into a Linux live USB environment.


① Backup the GPT Partition Table

sgdisk --backup=/media/xxx/DiskBackup/gpt.bin /dev/nvme0n1

✔ Includes both primary and backup GPT headers
✔ Very small file
✔ Mandatory

This preserves your exact partition structure.


② Backup the EFI Partition

Assuming EFI is p1:

dd if=/dev/nvme0n1p1 of=/media/xxx/DiskBackup/efi.img bs=4M status=progress

EFI contains your bootloader and UEFI boot files.


③ Backup the MSR Partition (16MB)

Assuming MSR is p2:

dd if=/dev/nvme0n1p2 of=/media/xxx/DiskBackup/msr.img bs=4M status=progress

⚠ Takes about 1 second
⚠ Technically optional
⚠ But since it’s tiny, backing it up completes the snapshot


④ Backup C: Using partclone

partclone.ntfs -c -s /dev/nvme0n1p3 -o /media/xxx/DiskBackup/c.img

✔ Only backs up used NTFS blocks
✔ Much faster than dd
✔ Image size is significantly smaller
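These images will usually sit untouched for months, so it is worth recording checksums right after the backup and verifying them before trusting a restore. A minimal sketch (demonstrated on a scratch directory; in practice point BACKUP_DIR at /media/xxx/DiskBackup and list all four images):

```shell
# Demonstrated on a throwaway directory; use your real backup folder instead
BACKUP_DIR=$(mktemp -d)
printf 'demo gpt data' > "$BACKUP_DIR/gpt.bin"   # stand-in for the real images

# Record checksums once, right after the backup
( cd "$BACKUP_DIR" && sha256sum gpt.bin > SHA256SUMS )

# Verify any time before restoring
( cd "$BACKUP_DIR" && sha256sum -c SHA256SUMS )
```

A failed check tells you to re-image before you overwrite the disk with a corrupt copy.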


Full Restore Procedure

If the system ever fails:


1️⃣ Restore GPT

sgdisk --load-backup=gpt.bin /dev/nvme0n1

2️⃣ Restore EFI

dd if=efi.img of=/dev/nvme0n1p1 bs=4M

3️⃣ Restore MSR (Optional)

dd if=msr.img of=/dev/nvme0n1p2 bs=4M

4️⃣ Restore C:

partclone.ntfs -r -s c.img -o /dev/nvme0n1p3

5️⃣ (Safest Step) Rebuild Boot Files

Boot into Windows recovery or installation media and run:

bcdboot C:\Windows /f UEFI

The Real Cause of Intermittent Ubuntu Boot Failures on Hyper-V

— and Why “Preparing in Advance” Solves It Once and for All

When running Ubuntu on Hyper-V, some users encounter a very confusing issue:

  • On a cold boot, the system occasionally drops into emergency mode
  • Errors indicate failure to load kernel modules or mount the root filesystem
  • Repeatedly clicking “Stop → Start” eventually allows Ubuntu to boot normally
  • The system disk and files are not corrupted

This problem is hard to reproduce reliably and difficult to search for online.
After a complete investigation, the conclusion is clear:

This is not accidental, nor mysterious behavior —
it is a classic engineering problem caused by insufficient boot-time preparation.


1. The Key Conclusion (Important)

The root cause is not a broken system.

The real issue is:

Linux boots faster than the virtual disk is ready.

And the solution can be summarized in one sentence:

Move work that is normally done during boot
to a point before the system starts booting.


2. Where Exactly Does the Failure Occur?

A simplified Linux boot sequence looks like this:

GRUB
 → Linux kernel
 → initramfs (minimal early boot environment)
 → Mount root filesystem (/)
 → systemd startup

The failure happens precisely at this transition:

initramfs → mounting the root filesystem

In a Hyper-V cold-boot scenario:

  • The kernel has already started mounting /
  • But the virtual disk controller / I/O path is still initializing
  • The device “will exist very soon”, but does not yet exist at that moment

Linux does not wait indefinitely by default, so the mount fails and the system
drops into emergency mode.


3. Why Rebooting Sometimes “Fixes” It

This is where many people are misled.

What actually happens:

First cold boot

  • Virtual device initialization is slow
  • Disk readiness lags behind kernel startup

Subsequent boots

  • Controllers, caches, and resources are already warm
  • The disk becomes ready much faster

This creates the illusion:

“If I restart a few times, it works.”

But this only changes the probability, not the underlying problem.


4. The Real Solution: Prepare in Advance

The effective fix consists of four steps:

sudo update-initramfs -u -k all
sudo nano /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=10"

Then run:

sudo update-grub
sudo reboot

What This Achieves

  • update-initramfs
    Ensures all required kernel modules are already available at boot time,
    instead of being loaded on demand.
  • rootdelay=10
    Explicitly tells the kernel to wait (up to 10 seconds) for the virtual disk
    before attempting to mount /.
  • update-grub
    Applies the new boot configuration.
  • reboot
    Activates the changes, which only take effect during startup.

Together, these steps eliminate the boot-time race condition entirely.
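The GRUB edit above can also be scripted idempotently, which is handy when applying the fix to several VMs. A sketch operating on a temporary copy (point FILE at /etc/default/grub on a real system):

```shell
# Sketch: append rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT only if absent.
# Shown on a temp copy; point FILE at /etc/default/grub on a real system.
FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$FILE"

grep -q 'rootdelay=' "$FILE" || \
  sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 rootdelay=10"/' "$FILE"

cat "$FILE"
```

The grep guard makes the script safe to run twice; remember to follow it with update-grub on the real machine.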


Appendix

Why rootdelay Fixes Disks but Breaks Your Assumptions

This section exists to clarify an important but easily misunderstood point:

rootdelay fixes a storage-layer race condition —
it does not fix system readiness as a whole.

Understanding this distinction is critical to applying the solution safely.


1. What rootdelay Actually Does (and What It Does Not)

The kernel parameter:

rootdelay=10

has a very narrow and specific purpose:

  • It delays mounting the root filesystem (/)
  • It gives block devices (e.g. virtual disks) extra time to appear
  • It only affects the transition: initramfs → mount /

This makes it effective for scenarios such as:

  • Hyper-V cold boot
  • Slow virtual disk initialization
  • Storage controllers that appear shortly after kernel start

However, rootdelay does not:

  • Delay systemd startup globally
  • Delay network stack initialization
  • Synchronize higher-level services
  • Fix Layer-3 (IP) configuration issues

It operates entirely at the storage boundary.


2. Why This Can Accidentally “Break” Networking

Many Linux systems implicitly assume:

“If the system boots, the network will be provided by DHCP.”

This assumption is usually correct on:

  • Home networks
  • Cloud images
  • Default Ubuntu installations

But it is not universally valid, especially in:

  • Enterprise networks
  • Corporate VLANs
  • Environments with statically assigned IP addresses

When rootdelay is introduced:

  • Boot timing changes slightly
  • Service initialization order may shift
  • Latent configuration assumptions become visible

If the network does not provide DHCP, the system will now clearly fail to acquire an address —
not because rootdelay broke networking, but because:

The system was never correctly configured for the network it was on.


3. The Key Misinterpretation to Avoid

It is tempting to conclude:

“After adding rootdelay, networking stopped working.”

This is not the correct causal model.

The correct interpretation is:

rootdelay removed a storage race
→ system booted deterministically
→ network configuration assumptions were exposed
→ DHCP failed (as expected in static IP environments)

In other words:

rootdelay did not break networking —
it removed a distraction that previously hid the real issue.


4. Static IP Environments Require Explicit Configuration

In networks where IP addresses are assigned manually, the correct fix is not boot-time tuning, but network-layer correctness.

That means explicitly configuring:

  • IP address
  • Subnet mask
  • Default gateway
  • DNS servers

For example (NetworkManager):

nmcli connection modify "Wired connection 1" \
ipv4.method manual \
ipv4.addresses <STATIC_IP>/<PREFIX> \
ipv4.gateway <GATEWAY> \
ipv4.dns "<DNS1> <DNS2>"

Then forcing a reconnection:

nmcli connection down "Wired connection 1"
nmcli connection up "Wired connection 1"

This is not a workaround, and it is not related to rootdelay.

It simply aligns the system with the reality of the network.


5. Engineering Takeaway

The deeper lesson is not about rootdelay itself, but about scope:

A fix that is correct at one layer
should never be assumed to generalize upward.

  • rootdelay fixes storage readiness
  • It does not define system readiness
  • It cannot compensate for incorrect Layer-3 assumptions

Fixing “wrong fs type / bad superblock” on an External Drive

(When Linux Sees /dev/sda but No /dev/sda1)

When mounting an external drive, I hit the following error:

sudo mount /dev/sda /media/external
mount: /media/external: wrong fs type, bad option, bad superblock on /dev/sda, ...

At first glance, this looks like a “broken filesystem.”
In reality, Linux could not see any partition at all (no /dev/sda1), so I was effectively trying to mount the entire disk device instead of a partition.

This post documents a safe, reproducible recovery workflow:

  • Identify the real problem
  • Create a full disk image with ddrescue
  • Use TestDisk to locate the lost partition
  • (Optional) Write the partition table back so the disk mounts normally again

Boot Sector vs. Partition Table — What Was Actually Broken?

Before doing anything, it’s critical to understand where the failure occurred:

  • Partition table (MBR/GPT)
    • Lives at the very beginning of the disk
    • Describes where partitions start/end
    • If this is missing or corrupt, you won’t even get /dev/sda1
  • Boot sector / filesystem metadata
    • Lives inside a partition
    • If only this is damaged, you usually still see /dev/sda1, but mounting fails

👉 In this case, the partition table was missing/corrupt, which is why lsblk showed sda but no sda1.


0) Safety Rules (Read This First)

  • Do NOT format the disk
  • Do NOT run destructive commands
    (e.g., mkfs, wipefs without -n, or random “repair” tools)
  • Always double-check the device name
    One typo in /dev/... can destroy your system disk
  • If you see many USB resets or I/O errors in dmesg,
    suspect cable/enclosure/power issues first

1) Confirm the Disk Is Detected

lsblk -o NAME,SIZE,MODEL,SERIAL,TYPE,MOUNTPOINTS

Identify your external drive by size and model.
Example:

/dev/sda   ~476GiB

2) Check for Partitions or Filesystems

lsblk -f
sudo fdisk -l /dev/sda

Key observation

  • If you see only sda and no sda1/sda2,
    the partition table is likely missing or corrupt.

Force a partition table reread (safe):

sudo partprobe /dev/sda
lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,MODEL /dev/sda

Read-only signature checks:

sudo wipefs -n /dev/sda
sudo file -s /dev/sda

If wipefs -n shows nothing and file -s prints only data,
Linux does not recognize any partition table or filesystem header.

Quick read test (read-only):

sudo dd if=/dev/sda of=/dev/null bs=1M count=16 status=progress

If this runs at normal speed, the disk is at least readable.


3) Create a Full Disk Image First (Strongly Recommended)

Make sure another disk has enough free space (≥ disk size) and supports large files (avoid FAT32):

df -hT /mnt/recovery

Install tools:

sudo apt update
sudo apt install -y gddrescue testdisk

Create directories:

sudo mkdir -p /mnt/recovery/sda_backup
sudo mkdir -p /mnt/recovery/sda_recovered

First ddrescue pass (fast, minimal retries)

sudo ddrescue -f -n /dev/sda \
  /mnt/recovery/sda_backup/sda.img \
  /mnt/recovery/sda_backup/sda.log

Optional second pass if there were read errors:

sudo ddrescue -f -d -r3 /dev/sda \
  /mnt/recovery/sda_backup/sda.img \
  /mnt/recovery/sda_backup/sda.log

If you end up with 100% rescued and 0 read errors, the entire disk has been safely captured.
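You can also tally the rescued extents directly from the mapfile. The sketch below assumes GNU ddrescue's mapfile layout (position / size / status lines, with '+' marking rescued areas) and uses a tiny synthetic file in place of your real sda.log:

```shell
# Sketch: sum the rescued ('+') extents recorded in a ddrescue mapfile.
MAPFILE=$(mktemp)
cat > "$MAPFILE" <<'EOF'
# Mapfile. Created by GNU ddrescue
# current_pos  current_status
0x00000000     +
#      pos        size  status
0x00000000  0x00100000  +
0x00100000  0x00000200  -
0x00100200  0x00300000  +
EOF

sum=0
while read -r pos size status; do
  case $pos in
    0x*) [ "$status" = "+" ] && sum=$((sum + size)) ;;
  esac
done < "$MAPFILE"
echo "$sum bytes rescued"
```

If the rescued total matches the disk size reported by lsblk, every sector made it into the image.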


4) Use TestDisk on the Image to Find the Lost Partition

sudo testdisk /mnt/recovery/sda_backup/sda.img

In the interactive UI:

  1. Select the disk image
  2. Analyse → Quick Search
    (Use Deeper Search only if needed)
  3. Highlight a candidate partition and press P to list files

If P shows your real folders/files, that partition entry is correct.

Optional: Copy Files Out

  • Press a to select all → C (uppercase) to copy
  • Destination: /mnt/recovery/sda_recovered
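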

If space is limited, copy only what you need.


5) Restore the Partition Table to the Original Disk

⚠️ Do this only after you have a full backup image.

Run TestDisk on the real device:

sudo testdisk /dev/sda

Steps:

  1. Analyse → Quick Search
  2. Highlight the correct partition and press P to confirm files
  3. Back in the list:
    • Set the correct entry to P (Primary)
    • Set wrong/overlapping entries to D (Deleted)
  4. Press Enter → Write → Y

This writes the recovered partition table back to /dev/sda.


6) Reload Partition Info and Mount (Read-Only First)

sudo partprobe /dev/sda
sudo partx -u /dev/sda
lsblk -f /dev/sda

You should now see:

/dev/sda1

Mount read-only to verify:

sudo mkdir -p /media/external
sudo mount -o ro /dev/sda1 /media/external

If everything looks correct:

sudo umount /media/external
sudo mount /dev/sda1 /media/external

Troubleshooting Notes

  • Seeing NTFS when you expected exFAT
    Trust what TestDisk reports; the drive may have been formatted differently than you remember.
  • NTFS “hibernated / unsafe state” warnings
    Best fix is on Windows: run chkdsk /f. Linux-side helper: sudo ntfsfix /dev/sda1
  • I/O errors or USB resets
    Change cable/port/enclosure, avoid hubs, ensure sufficient power, then image with ddrescue.
  • TestDisk can’t list files
    The filesystem metadata may be damaged. As a last resort, use PhotoRec (filenames/folders are usually lost).

Summary

  • The mount error was a symptom; the key clue was no /dev/sda1
  • The real issue was a missing or corrupt partition table
  • Best practice is always:

Image first with ddrescue, then analyze and recover with TestDisk.

Fixing BLInitializeLibrary failed / 0xc0000001 on a Surface Laptop 2

This applies when the system files are still intact, but a damaged EFI boot partition prevents startup, Automatic Repair fails, and bootrec finds no Windows installations.

The steps below have been verified to work in practice.


#️⃣1. Create a Recovery USB Drive (on Linux)

① Download the Microsoft Recovery Image

Select your device model (e.g., Surface Laptop 2)
Enter the serial number
Download the ZIP (about 7.8 GB)


② Format the USB Drive as FAT32 on Linux (the most important step)

First confirm the device name of your USB drive, for example:

lsblk

Assume it is /dev/sda (⚠ Warning: make absolutely sure you pick the right device)

Format it as FAT32:

sudo umount /dev/sda*
sudo mkfs.fat -F 32 /dev/sda

③ Mount the USB Drive

sudo mkdir -p /mnt/usb
sudo mount /dev/sda /mnt/usb

④ Extract the Microsoft Recovery Image

unzip Surface_Recovery.zip -d surf

⑤ Copy the Recovery Files onto the USB Drive (copy the folder contents, not the folder itself)

sudo cp -r surf/* /mnt/usb/

⑥ Unmount the USB Drive

sudo umount /mnt/usb

#️⃣2. Boot from the USB Drive into the Recovery Environment

  1. Insert the USB drive
  2. Press and hold Volume Down
  3. Press the Power button
  4. Release Power, but keep holding Volume Down
  5. The blue Windows recovery screen appears

#️⃣3. Open Command Prompt

Troubleshoot → Advanced options → Command Prompt


#️⃣4. Optional: Repair System Files (Recommended)

sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
dism /image:C:\ /cleanup-image /restorehealth

#️⃣5. Run bootrec (if it detects 0 Windows installations, continue to the next step)

bootrec /fixmbr
bootrec /fixboot
bootsect /nt60 sys
bootrec /fixboot
bootrec /scanos

#️⃣6. Manually Rebuild the EFI Boot Partition (the step that actually fixes the problem)

① Open diskpart

diskpart
list volume

Note the EFI partition's volume number:
it is usually a 100 MB FAT32 volume (e.g., Volume 1)


② Select the EFI Partition and Assign It the Letter Z:

sel volume 1
assign letter=Z
exit

③ Clear and Recreate the EFI Directory

cd /d Z:\
rmdir /S /Q Z:\EFI
mkdir Z:\EFI

④ Write the Windows Boot Files to the EFI Partition (the most critical step)

bcdboot C:\Windows /s Z: /f UEFI

If you see:

Boot files successfully created

the repair was successful.


#️⃣7. Reboot

exit

Select:

Continue → Continue to Windows

The system should now boot normally again.


🟦 Summary (Minimal Workflow)

# Create the recovery USB on Linux
sudo umount /dev/sdX*
sudo mkfs.fat -F 32 /dev/sdX
sudo mount /dev/sdX /mnt/usb
unzip Recovery.zip -d surf
sudo cp -r surf/* /mnt/usb/
sudo umount /mnt/usb

# Repair from Windows recovery:
sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
dism /image:C:\ /cleanup-image /restorehealth
bootrec /fixmbr
bootrec /fixboot
bootsect /nt60 sys
bootrec /fixboot
bootrec /scanos

diskpart
list volume
sel volume 1
assign letter=Z
exit

cd /d Z:\
rmdir /S /Q Z:\EFI
mkdir Z:\EFI

bcdboot C:\Windows /s Z: /f UEFI

Microsoft 365 Business Premium: From License Activation to Teams Compliance and eDiscovery


A complete technical walkthrough of Microsoft 365 Business Premium,
covering Outlook mailboxes, Entra ID identity management, Teams message retention,
and Microsoft Purview eDiscovery for auditing and data governance.


1 Overview of Microsoft 365 Business Premium

License Components

  • Outlook Exchange Online (50 GB mailbox)
  • Teams for chat, calls, and meetings
  • OneDrive and SharePoint for cloud storage
  • Intune for device management
  • Entra ID (formerly Azure AD) for authentication and SSO
  • Purview for compliance and eDiscovery
  • Defender for Business for security protection

Each user license can activate Office apps on up to 5 PCs/Macs + 5 tablets + 5 phones.
All services update automatically after login with a valid subscription.


2 Outlook and Exchange Online Mailboxes

  • A Business Premium tenant automatically provisions an
    @yourtenant.onmicrosoft.com mailbox.
  • You can add a custom domain (e.g., @yourcompany.com) in
    Microsoft 365 Admin Center → Settings → Domains.
  • The required DNS records (MX, SPF, DKIM, DMARC) are listed in the admin center; depending on your registrar, Microsoft can configure them automatically.

Mail data is stored in Exchange Online, enabling retention and eDiscovery.


3 Microsoft Entra ID (formerly Azure AD)

Purpose

  • Central identity provider for the tenant
  • User, group, and role management
  • SSO for internal and third-party apps
  • MFA and conditional access policies

Admin Portal


4 Teams Integration Architecture

Teams Feature      | Data Location                              | Searchable by eDiscovery
1:1 & group chats  | Exchange Online (hidden folder “TeamChat”) | ✔
Channel messages   | SharePoint Team Site                       | ✔
File attachments   | OneDrive / SharePoint                      | ✔
Meeting recordings | Stream or OneDrive                         | ✔
Calendar events    | Exchange Calendar                          | ✔

Teams Admin Center provides usage reports:
message counts, meetings joined, call duration, and device types.


5 Microsoft Purview Compliance Portal

New unified entry point: https://compliance.microsoft.com

Included modules for Business Premium:

  • Audit – activity logs (90 days)
  • eDiscovery (Standard) – search and export content
  • Data Lifecycle Management – retention policies
  • Communication Compliance (basic rules)

6 Default Retention and Policy Extension

Data Type                   | Default Retention        | Notes
Teams chats                 | ≈ 30 days (if no policy) | Deleted after expiration
Teams channel posts         | ≈ 1 year                 | Stored in SharePoint
Emails (Exchange)           | Unlimited                | Until user deletes
Files (OneDrive/SharePoint) | Unlimited                | Until deleted
Audit logs                  | 90 days                  | Extend with E5 license

Create a Permanent Retention Policy

  1. Open Purview → Data Lifecycle Management → Microsoft 365 → Retention Policies
  2. Create policy → select Teams chats / channel messages
  3. Choose Keep Forever
  4. Apply to All Users → Save → Publish

After this, Teams messages remain permanently searchable and recoverable.


7 eDiscovery (Standard) Workflow

Step 1 Open Module

Purview → Solutions → eDiscovery → eDiscovery (Standard)

Step 2 Create a Case

Example name: Teams_Compliance_Audit

Step 3 Add Search

  • Locations: Teams Chats and Mailboxes
  • Keywords: e.g. ("resignation" OR "quit" OR "leave company")
  • Date range: optional
  • Run the search

Step 4 View and Export

Results show: sender, recipients, timestamp, and message snippet.
Select Preview to see context or Export to download a ZIP package.


8 Export Structure and HTML Preview

TeamsChatExport_20251112_XXXX/
├─ manifest.csv
├─ TeamsChat_1.html
└─ Metadata/
    ├─ MessageDetails.csv
    └─ Attachments/

Example HTML View

───────────────────────────────
User A  ( 2025-11-10 17:42 )
I haven’t mentioned the resignation yet.

User B  ( 2025-11-10 17:43 )
When will you bring it up?

User A  ( 2025-11-10 17:44 )
Probably end of the month, after handover.
───────────────────────────────

Each file includes message context, timestamps, and participants—
allowing a full reconstruction of conversation flow.
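When an export spans several files, the metadata CSV can be used to rebuild a single chronological view. The column layout below (timestamp,sender,snippet) is an assumption for illustration; check the headers of your export's manifest.csv / MessageDetails.csv for the real layout:

```shell
# Sketch: reorder exported messages chronologically by their timestamp column.
CSV=$(mktemp)
cat > "$CSV" <<'EOF'
2025-11-10T17:43,User B,When will you bring it up?
2025-11-10T17:42,User A,I haven't mentioned the resignation yet.
2025-11-10T17:44,User A,Probably end of the month / after handover.
EOF

sort -t, -k1,1 "$CSV" | awk -F, '{printf "%s  %s: %s\n", $1, $2, $3}'
```

Sorting on the ISO-8601 timestamp column is enough because it sorts lexicographically in time order.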


9 Permissions Required for eDiscovery

Role                     | Capabilities
Compliance Administrator | Create and run Content Search, set policies
eDiscovery Manager       | Create cases, search Teams chats, export results
eDiscovery Administrator | Manage all cases and exports
Global Administrator     | Has all of the above permissions by default

Add roles via Purview → Permissions → Microsoft Purview solutions.

All features above are included in Business Premium—no extra license needed.
(Only eDiscovery Premium and Advanced Audit require E5 plans.)


10 Teams Privacy and Compliance Reality

Scenario                         | Retained | Searchable | Storage Location
Private chat (1:1 / group)       | ✔        | ✔          | Exchange Online hidden folder
Channel posts                    | ✔        | ✔          | SharePoint
Deleted messages (within policy) | ✔        | ✔          | Retention snapshot
Attachments                      | ✔        | ✔          | OneDrive / SharePoint
Even deleted messages remain discoverable until their retention period expires.

Conclusion for users:

Teams chat is not a private messenger.
All content belongs to the organization and may be retained or audited.


11 License Tier Comparison

Feature                    | Business Premium | E3 / E5
eDiscovery (Standard)      | ✔                | ✔
eDiscovery (Premium)       | ✘                | ✔ (E5)
Audit log retention        | 90 days          | 1 year / 10 years
Communication Compliance   | Basic            | Advanced
DLP (Data Loss Prevention) | Basic            | Full

12 Logical Architecture Summary

[User]
   ↓
[Teams Client]
   ↓
[Exchange Online / SharePoint / OneDrive]
   ↓
[Microsoft Purview Services]
   ├─ eDiscovery
   ├─ Audit
   ├─ Data Lifecycle Management
   ↓
[Compliance Admin Center]

13 Key Takeaways

  1. Business Premium integrates identity (Entra ID), collaboration (Teams), and compliance (Purview).
  2. Teams chat data is retained by default for 30 days unless a policy extends it.
  3. eDiscovery (Standard) can search and export full conversation context in HTML.
  4. Chat content is organizational data, not personal property.
  5. Administrators should set clear retention and privacy policies for all users.


Install and Configure OpenAI Codex CLI on Ubuntu / WSL


🛠 Overview

This guide explains how to install the OpenAI Codex CLI on Ubuntu or Windows Subsystem for Linux (WSL), set up your API key, and switch authentication methods between apikey and chatgpt.


⚙️ Installation Steps

# Update packages
sudo apt update -y
sudo apt upgrade -y

# Install Node.js (using Node 22 as an example)
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

# (Optional) Ensure git is installed
sudo apt install -y git

# Install the Codex CLI globally via npm
sudo npm install -g @openai/codex

# Set your OpenAI API key
export OPENAI_API_KEY="your_OpenAI_API_Key"

# To make this persistent across sessions:
echo 'export OPENAI_API_KEY="your_OpenAI_API_Key"' >> ~/.bashrc
source ~/.bashrc

# Verify installation
codex --version

🔄 Switching Authentication Methods

You can choose how Codex authenticates your account:

# Use direct API key authentication
codex --config preferred_auth_method='apikey'

# Or use ChatGPT-based login
codex --config preferred_auth_method='chatgpt'

🔑 Set API Key Manually

If needed, you can manually edit your authentication file:

vi ~/.codex/auth.json

Add or update your API key inside that file.
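The exact schema of auth.json is not documented here, so treat the following as an illustrative (not authoritative) shape for API-key authentication — verify against the file your own `codex login` creates:

```json
{
  "OPENAI_API_KEY": "your_OpenAI_API_Key"
}
```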


🪄 One-Line Quick Install (for Termux / Android)

pkg update -y && pkg upgrade -y && \
pkg install -y curl git && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash && \
source ~/.bashrc && \
unset PREFIX && \
nvm install 20 && \
npm install -g @openai/codex && \
echo 'export OPENAI_API_KEY="your_OpenAI_API_Key"' >> ~/.bashrc && \
source ~/.bashrc

Use a Local LLM:

Open your Codex configuration file (on Windows: C:\Users\<username>\.codex\config.toml) and replace or append the following:

# Use Ollama as model provider
model = "gpt-oss:20b"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama (local)"
base_url = "http://127.0.0.1:11434/v1"
# Ollama does not require API Key
wire_api = "chat"   # Codex uses Chat Completions API

Restart VS Code
After saving, restart VS Code (or reload the Codex extension) to apply the new settings.



How to Add serena MCP Servers in Claude Code

Here are two common methods to configure them permanently — both survive restarts and will always auto-load when Claude Code launches.


Method 1: Command Line (Recommended)

Run the following command in your project directory:

claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant --project $(pwd)
  • Automatically updates ~/.claude/settings.json
  • Adds the serena MCP server into the configuration
  • Will auto-connect each time Claude Code starts

Method 2: Manual Configuration File Edit

  1. Open ~/.claude/settings.json
  2. Add a new MCP server block, for example:
{
  "mcpServers": {
    "fujisoft-code-agent": {
      "command": "cmd.exe",
      "args": ["/c", "code-agent", "server"]
    },
    "serena": {
      "command": "uvx",
      "args": [
        "--from", "git+https://github.com/oraios/serena",
        "serena", "start-mcp-server",
        "--context", "ide-assistant",
        "--project", "/your/project/path"
      ]
    }
  }
}
  • You can add multiple MCP servers here
  • Save the file and restart Claude Code → the servers will auto-connect

Verify Your Configuration

To check your current MCP servers:

cat ~/.claude/settings.json

Or inside Claude Code:

/status

This will display the list of configured MCP servers and their status.
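A stray comma or unclosed brace in settings.json can silently prevent MCP servers from loading, so it is worth validating the JSON before restarting. A sketch on a temporary copy (point CFG at ~/.claude/settings.json in practice):

```shell
# Sketch: sanity-check that the settings file parses as JSON.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{ "mcpServers": { "serena": { "command": "uvx", "args": [] } } }
EOF

python3 -m json.tool "$CFG" >/dev/null && echo "settings.json: valid JSON"
```

If json.tool reports an error, fix the file before launching Claude Code again.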


✅ Conclusion

  • Command Line Method → Quickest and easiest, auto-writes to config
  • Manual File Edit → More flexible, can add multiple servers at once
  • Result is the same → Both are persistent and survive restarts

Pro Tip:
On Windows, replace $(pwd) with %cd% when running the command.
Example:

claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant --project %cd%

👉 With this setup, you can extend Claude Code with custom MCP servers like Serena and streamline your coding workflow!


🔹 Additional Knowledge: Execution Status & Management

After Adding Serena with --project

  • When you register Serena with --project $(pwd) (as in the Method 1 command):
    • Serena will only auto-start when you run Claude Code inside that project directory.
    • Other directories will not see Serena.

Manage and Modify MCP Servers

  1. List configured MCP servers: claude mcp list
  2. Remove Serena from config: claude mcp remove serena
  3. Re-add as a global configuration (visible everywhere): claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant
  4. Add Serena to another project (project-level): cd /path/to/other/project, then run claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant --project $(pwd)

Config File Locations

  • Project-level config → created inside the project directory
  • Global config → stored in ~/.claude/settings.json

If you want to switch Serena from project-level to global:

  1. Remove the existing entry: claude mcp remove serena
  2. Add it again without --project.

Deploying Ring Camera Recording with Docker: From Continuous Recording to Motion-triggered Capture


1. Goal

  • Enable local recording of Ring camera streams.
  • Phase 1: Continuous recording.
  • Phase 2: Motion-triggered recording (battery/storage friendly).

2. Environment

  • Host: Ubuntu/Debian server or VM
  • Docker / Docker Compose installed
  • Ring camera already linked in the Ring app
  • Testbed used: local VM (same steps apply to production).

3. Core Configuration Files

MQTT broker

mosquitto/mosquitto.conf

listener 1883
allow_anonymous true

Ring-MQTT

ring-mqtt/config.json

{
  "mqtt_url": "mqtt://mqtt",
  "mqtt_options": {
    "username": "xxx@mail.com",
    "password": "yyy"
  },
  "ring_token": ""
}

⚠️ Note: ring_token is obtained via WebUI (http://<host_ip>:55123/). Do not expose the actual value.

Docker Compose

docker-compose.yml

version: "3.8"

services:
  mqtt:
    image: eclipse-mosquitto
    container_name: mosquitto
    restart: unless-stopped
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf

  ring-mqtt:
    image: tsightler/ring-mqtt
    container_name: ring-mqtt
    restart: unless-stopped
    depends_on:
      - mqtt
    ports:
      - "8554:8554"      # RTSP stream port
      - "55123:55123"    # Web UI port
    volumes:
      - ./ring-mqtt:/data


4. Steps

  1. Start services: docker compose up -d
  2. Check Ring-MQTT logs: docker logs -f ring-mqtt → Access WebUI at http://<host_ip>:55123/ and generate your Ring token.
  3. Test recording manually: ffmpeg -i rtsp://localhost:8554/<DEVICE_ID>_live -t 30 test.mp4
  4. Verify recordings are saved under ./recordings/.

5. Common Issues & Fixes

  • Cannot connect RTSP → Ensure token generated & container restarted.
  • Recorder container fails → Use correct <DEVICE_ID>_live instead of camera name.
  • Files not saved → Check docker logs -f ring-recorder, confirm volume mounts.

6. Security Considerations

  • Always hide ring_token and <DEVICE_ID> in public configs.
  • Avoid continuous 24/7 recording if using battery-powered Ring devices.
  • Plan for storage growth if recording continuously.

7. Advanced Reference (Motion-triggered Recording)

By default, recording runs continuously, which can drain battery.
With MQTT events, we can start recording only when motion is detected.

Script Example

record_on_motion.sh

#!/bin/bash

DEVICE_ID="<DEVICE_ID>"       # Camera ID (hidden)
DURATION=90             # Recording duration per event (seconds)
SAVE_PATH="./recordings"
LOG_FILE="./record_on_motion.log"

mkdir -p "$SAVE_PATH"

log() {
    MSG="[$(date +"%Y-%m-%d %H:%M:%S")] $1"
    echo "$MSG" | tee -a "$LOG_FILE"
}

# Continuously subscribe to motion events
mosquitto_sub -h localhost -t "ring/+/camera/$DEVICE_ID/motion/state" | while read state
do
    if [ "$state" == "ON" ]; then
        TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
        FILE="$SAVE_PATH/Front_$TIMESTAMP.mp4"
        log "Motion detected, start recording: $FILE (duration $DURATION seconds)"

        ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/${DEVICE_ID}_live \
            -t $DURATION -c copy "$FILE" >>"$LOG_FILE" 2>&1

        log "Recording finished: $FILE"
    fi
done

⚠️ Replace <DEVICE_ID> with your Ring camera’s internal ID.

Usage

chmod +x record_on_motion.sh
./record_on_motion.sh

→ When motion is detected, a short recording is automatically created.

8. Directory Layout

~/dev/ring
├── docker-compose.yml
├── record_on_motion.sh
├── mosquitto
│   ├── data
│   ├── log
│   └── mosquitto.conf
├── recordings
└── ring-mqtt
    ├── config.json
    ├── go2rtc.yaml
    └── ring-state.json

9. Appendix: Port Explanations

MQTT → 1883

  • Purpose: Message broker for events (motion, doorbell).
  • Who connects: record_on_motion.sh, Home Assistant, automation systems.

WebUI → 55123

  • Purpose: Token generation interface for Ring authentication.
  • Who connects: You (via browser, one-time login).

RTSP → 8554

  • Purpose: Provides camera video stream as RTSP.
  • Who connects: ffmpeg, VLC, recording service.

Quick Reference Table

Port  | Protocol/Service  | Function                   | Who Uses It
1883  | MQTT broker       | Transmit event messages    | Motion script, Home Assistant, automations
55123 | WebUI (HTTP)      | Generate/manage Ring token | You (browser login)
8554  | RTSP video stream | Provide live video stream  | ffmpeg, VLC, recording service
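A quick way to confirm the stack is up is to probe these three ports from the host. This sketch uses bash's /dev/tcp feature; unreachable ports simply report "closed":

```shell
# Sketch: check whether the three service ports accept TCP connections.
for port in 1883 55123 8554; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/localhost/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```

All three showing "open" means the broker, WebUI, and RTSP stream are ready for the recording script.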

Install Nextcloud on Ubuntu 25.04 (with HTTPS and Common Issues Fixed)


1. Update the System

sudo apt update && sudo apt upgrade -y

2. Install Apache, MariaDB, PHP and Extensions

sudo apt install -y apache2 mariadb-server libapache2-mod-php \
php php-mysql php-gd php-curl php-xml php-zip php-mbstring php-bz2 \
php-intl php-gmp php-imagick unzip wget

3. Create Database and User

sudo mysql -u root

Inside MariaDB:

CREATE DATABASE nextcloud;
CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'yourPassword';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost';
FLUSH PRIVILEGES;
EXIT;

4. Download and Install Nextcloud

cd /tmp
wget https://download.nextcloud.com/server/releases/latest.zip
unzip latest.zip
sudo mv nextcloud /var/www/
sudo chown -R www-data:www-data /var/www/nextcloud

5. Configure Apache

5.1 HTTP → HTTPS Redirect (Port 80)

This avoids the issue where Apache’s default page shows up or ZeroTier IP access fails.
Create a global redirect config:

sudo tee /etc/apache2/sites-available/nextcloud-http.conf >/dev/null <<'EOF'
<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
EOF

sudo a2enmod rewrite
sudo a2ensite nextcloud-http.conf

5.2 HTTPS VirtualHost

sudo nano /etc/apache2/sites-available/nextcloud-ssl.conf

Example content (replace with your real IP/domain):

<VirtualHost *:443>
    ServerName 192.168.xx.xx
    ServerAlias 192.168.yy.yy

    DocumentRoot /var/www/nextcloud
    <Directory /var/www/nextcloud>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews
    </Directory>

    SSLEngine on
    SSLCertificateFile /etc/ssl/nextcloud/nextcloud-selfsigned.crt
    SSLCertificateKeyFile /etc/ssl/nextcloud/nextcloud-selfsigned.key

    ErrorLog ${APACHE_LOG_DIR}/nextcloud_ssl_error.log
    CustomLog ${APACHE_LOG_DIR}/nextcloud_ssl_access.log combined
</VirtualHost>

6. Generate a Self-Signed SSL Certificate

sudo mkdir -p /etc/ssl/nextcloud
cd /etc/ssl/nextcloud

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nextcloud-selfsigned.key \
  -out nextcloud-selfsigned.crt

Tip: For Common Name (CN) enter the IP or domain you plan to use.
If you need to support multiple IPs (LAN + ZeroTier), generate a SAN certificate.
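For example, with OpenSSL 1.1.1 or newer a SAN certificate covering both addresses can be generated in one step via `-addext` (the `xx`/`yy` IPs are placeholders; substitute your real LAN and ZeroTier addresses before running):

```shell
# Self-signed certificate valid for two IP SANs (replace the placeholder IPs).
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nextcloud-selfsigned.key \
  -out nextcloud-selfsigned.crt \
  -subj "/CN=192.168.xx.xx" \
  -addext "subjectAltName=IP:192.168.xx.xx,IP:192.168.yy.yy"
```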


7. Enable Modules and Sites

sudo a2enmod ssl headers env dir mime
sudo a2ensite nextcloud-ssl.conf
sudo systemctl reload apache2

8. Configure Firewall (if UFW is enabled)

sudo ufw allow 80
sudo ufw allow 443

9. Finish Nextcloud Setup

9.1 Access the Installer

Open in browser:

https://192.168.xx.xx
https://192.168.yy.yy   (ZeroTier IP)

Both should now work.


9.2 Fill in Setup Details

  • Admin username & password
  • Data folder (recommended: /var/nextcloud-data, not a VMware hgfs share)
  • Database user: nextclouduser
  • Database password: the strong password you set earlier
  • Database name: nextcloud

9.3 Fix Data Directory Permissions

If on a native Linux disk:

sudo mkdir -p /var/nextcloud-data
sudo chown -R www-data:www-data /var/nextcloud-data
sudo chmod -R 770 /var/nextcloud-data

If the data folder is on a VMware hgfs/Windows share, chmod has no effect there → add this to config.php instead:

'check_data_directory_permissions' => false,
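On a native Linux disk, the result of the chown/chmod above can be sanity-checked with stat:

```shell
# Expect: www-data:www-data 770
stat -c '%U:%G %a' /var/nextcloud-data
```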

9.4 Configure Trusted Domains

Edit:

sudo nano /var/www/nextcloud/config/config.php

Add your LAN and ZeroTier IPs:

'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => '192.168.xx.xx',
    2 => '192.168.yy.yy',
  ),

This prevents the “Access through untrusted domain” error.


10. Client Notes

In the Nextcloud mobile app, when deleting a file you are offered two options:

  • Yes → deletes the file on the server and on all synced devices.
  • Remove locally → deletes only the local copy; the file remains on the server.

📌 Key Takeaways

  • Default Apache page issue → solved by disabling 000-default.conf or global 80→443 redirect.
  • ZeroTier IP access issue → solved by adding ServerAlias and global redirect.
  • Data directory permission issue → solved by correct 770 on native disk or disabling check in config.
  • Untrusted domain error → solved by adding all used IPs/domains into trusted_domains.
  • HTTPS → self-signed is fine for testing; for production, use Let’s Encrypt.

VMware Ubuntu 25.04 Bridged Networking Setup Notes


During the setup of Ubuntu 25.04 on VMware Workstation, several common networking issues were encountered when configuring bridged networking.
This note documents the problems and their solutions in detail.


1. Initial Situation

  • After installation, the VM network defaults to NAT mode, with an IP in the 172.x.x.x range.
  • This allows Internet access, but all traffic goes through the host. The VM cannot be directly reached from other devices on the LAN.

2. Issue 1: Missing VMnet0 (Bridged Network)

  • Opening the Virtual Network Editor only showed:
    • VMnet1 (Host-only)
    • VMnet8 (NAT)
  • VMnet0 (bridged network) was missing.

Solution

  1. In the Virtual Network Editor, click Add Network (E).
  2. Select VMnet0 and set it to Bridged mode.
  3. In Bridged to (G):, avoid “Automatic” and instead manually bind VMnet0 to the physical adapter:
    • If the host uses Wi-Fi → choose the wireless adapter (Realtek / Intel Wireless).
    • If the host uses Ethernet → choose the wired Ethernet adapter.

3. Issue 2: VM Does Not Receive LAN IP

  • Even after enabling bridged networking, ip a still showed a NAT IP (172.17.x.x).
  • Cause: VMnet0 was not correctly bound to the physical NIC.

Solution

  1. In Virtual Network Editor, manually bind VMnet0 to the correct NIC (Wi-Fi or Ethernet).
  2. Inside the VM, request a new DHCP lease: sudo dhclient -4 ens33
  3. The VM should then receive a LAN IP, such as 192.168.1.63.

4. Issue 3: Other PCs Cannot Ping the VM

  • The VM can reach the Internet and ping other LAN devices.
  • But other LAN PCs cannot ping the VM.
  • Checking ARP tables shows the VM’s IP maps to the host’s Wi-Fi MAC, not the VM’s MAC.
  • This is a Wi-Fi bridging limitation: wireless NICs often do not allow VMs to use separate MAC addresses.

Solution Options

Option A: Use Wired Ethernet (most stable)

  • Plug in a LAN cable and bridge VMnet0 to the Ethernet adapter.
  • The VM becomes a fully independent LAN node, accessible from other devices.

Option B: Edit VMX Configuration

  • Edit the VM’s .vmx file and add:

    ethernet0.noPromisc = "FALSE"
    ethernet0.noForgedTransmit = "FALSE"
    ethernet0.noMACOverride = "FALSE"
  • Save and restart the VM.
  • ⚠️ Note: Not all Wi-Fi adapters support promiscuous mode. Success depends on hardware.

5. Verification Steps

  • Inside the Ubuntu VM, run ip a and confirm it has an IP in the LAN subnet (e.g., 192.168.1.x).
  • Test Internet connectivity: ping -4 google.com
  • From another PC, test LAN access with ping <VM-IP>. If reachable, bridged networking works; if unreachable, it is likely a Wi-Fi bridging limitation.
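The subnet check can be wrapped into a small script run inside the VM. This is a minimal sketch assuming ens33 as the NIC name (typical for VMware) and 192.168.* as the LAN range; adjust both to your setup:

```shell
#!/bin/sh
# Report whether the VM got a bridged LAN address or is still behind NAT.
IFACE=ens33                       # typical VMware NIC name — adjust if needed
ADDR=$(ip -4 addr show "$IFACE" 2>/dev/null | awk '/inet /{print $2}')
case "$ADDR" in
  192.168.*) echo "LAN IP $ADDR — bridged networking works" ;;
  172.*)     echo "NAT IP $ADDR — check the VMnet0 binding" ;;
  "")        echo "No IPv4 address on $IFACE — try: sudo dhclient -4 $IFACE" ;;
  *)         echo "Unexpected address $ADDR" ;;
esac
```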

6. Summary

  • NAT mode: Internet access works, but LAN cannot reach the VM.
  • Bridged mode (VMnet0): The VM should act as a LAN node, but Wi-Fi bridging often fails.
  • Solutions:
    • Wired bridge (Ethernet) → most stable.
    • Wi-Fi + VMX config tweaks → may work, hardware dependent.
