
This page collects the projects and technical practices I am actively building, covering background, core goals, implementation ideas, current status, and the next steps worth documenting for each.

Status: Iterating

Hybrid Cloud-Edge LLM Assistant Platform

A practical assistant platform designed to balance local control with expandable cloud capability.

This project explores how local and cloud-hosted models can work together under different resource conditions. It focuses not only on model access, but also on deployment structure, maintainability, permission boundaries, and daily usability trade-offs.
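The local/cloud balance above can be sketched as a small routing policy. This is a minimal illustration, not the platform's actual code: the `Request` fields and the `route` function are hypothetical names standing in for whatever policy layer the platform uses.

```python
# Illustrative sketch of a local-first routing policy with cloud fallback.
# All names here (Request, route) are hypothetical, for explanation only.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_long_context: bool = False
    sensitive: bool = False  # data that must not leave the local machine


def route(req: Request, local_available: bool) -> str:
    """Pick a backend under simple policy rules:
    - sensitive data always stays on the local model,
    - long-context requests go to the cloud model,
    - otherwise prefer local when it is up, with cloud as fallback.
    """
    if req.sensitive:
        return "local"
    if req.needs_long_context:
        return "cloud"
    return "local" if local_available else "cloud"
```

The interesting part in practice is not the three `if` branches but deciding which requests count as "sensitive" or "long-context", which is exactly the permission-boundary question this project is about.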

LLM · API Integration · Local Deployment · Hybrid Architecture

Next: document model access strategy, deployment topology, context handling, and real trade-offs across different usage modes.

Status: Organizing

Automated Deployment and Rollback Script System

Turning repeated deployment work into safer, executable, rollback-friendly operational scripts.

The goal is not to build a heavy platform, but to make everyday deployment work more reliable and repeatable. The focus is on consistency, failure handling, rollback paths, and maintainability rather than simply wrapping commands into scripts.
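One common way to get the rollback-friendly behavior described above is to pair every deployment step with an undo action and unwind them in reverse order on failure. The sketch below assumes that structure; the step names are hypothetical and the real scripts are shell-based, so this is only a model of the control flow, not the implementation.

```python
# Sketch: each step is an (apply, undo) pair; on any failure, the undo
# actions of the steps that already succeeded run in reverse order.

def deploy(steps):
    """steps: list of (apply_fn, undo_fn) callables.
    Returns (ok, log) where log records each action taken."""
    done, log = [], []
    for apply_fn, undo_fn in steps:
        try:
            apply_fn()
            log.append(f"ok: {apply_fn.__name__}")
            done.append(undo_fn)
        except Exception as exc:
            log.append(f"failed: {apply_fn.__name__}: {exc}")
            for undo in reversed(done):  # rollback in reverse order
                undo()
                log.append(f"rolled back: {undo.__name__}")
            return False, log
    return True, log


# Hypothetical usage: the second step fails, so the first is undone.
def upload(): pass
def undo_upload(): pass
def restart(): raise RuntimeError("health check failed")
def undo_restart(): pass

ok, log = deploy([(upload, undo_upload), (restart, undo_restart)])
```

The same shape translates directly to shell: each script fragment appends its undo command to a stack file, and a trap on failure replays that file in reverse.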

Shell · Deployment Flow · Rollback Strategy · Automation

Next: expand the script structure, configuration layout, failure scenarios, and environment-specific adaptation notes.

Status: Building

Self-Hosted Services and Domain Access Setup

A maintainable access setup for self-hosted services through domains, proxies, certificates, and shared entry points.

This project is closely tied to daily operations practice. It organizes how self-hosted services are exposed and maintained, including DNS, reverse proxying, certificate management, and entry-point planning, with an emphasis on clarity and long-term maintainability.
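As a concrete shape for the shared-entry-point idea, a Caddyfile can route multiple services behind one reverse proxy while handling certificates automatically. The snippet below is a hedged sketch only: the domains, ports, and service split are placeholders, not this setup's actual configuration.

```
# Hypothetical Caddyfile: one shared entry, automatic HTTPS,
# each hostname proxied to a different local Docker-published port.

app.example.com {
    reverse_proxy localhost:8080
}

media.example.com {
    reverse_proxy localhost:8096
}
```

Caddy provisions and renews TLS certificates for the named hosts on its own; with Cloudflare in front, the DNS records for those hosts simply point at the server running Caddy, and access control can be layered at either hop.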

Docker · Cloudflare · Caddy · Reverse Proxy

Next: continue documenting how Cloudflare, Caddy, Docker, access control, and availability tuning work together in practice.