OpenCode HopGPT Bridge
A complete pipeline that connects OpenCode (AI-powered IDE) to JHU’s HopGPT service, turning a web-only university AI chat interface into a fully programmable development environment. The system spans browser automation, API translation, token management, and IDE configuration.
The Stack
```
┌─────────────┐       ┌─────────────────┐       ┌──────────────────┐       ┌─────────────┐
│  OpenCode   │──────►│ FastAPI Server  │──────►│ Browser Service  │──────►│   HopGPT    │
│   (IDE)     │ HTTP  │   (Port 8000)   │  ZMQ  │   (Playwright)   │  CDP  │ (JHU's AI)  │
└─────────────┘       └─────────────────┘       └──────────────────┘       └─────────────┘
```
Components
- Browser Service (Playwright Daemon)
  - Persistent Chromium instance with stealth settings to avoid automation detection
  - Automated Microsoft SSO login flow (email → password → MFA handling → session persistence)
  - Chrome DevTools Protocol (CDP) network interception for real-time token capture
  - ZeroMQ IPC server for communication with the API layer
  - Automatic session recovery when authentication expires
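The browser service answers requests from the API layer over a simple request/reply IPC channel. The actual message schema isn't shown in this README, so the sketch below assumes a hypothetical JSON shape (`action` / payload keys) and models only the pure dispatch step; in the real daemon this handler would sit inside a `zmq.REP` recv/send loop.

```python
import json

# Hypothetical IPC protocol -- the real schema lives in the private repo.
# Only the dispatch logic is modeled here so it runs standalone, without
# an actual ZeroMQ socket or browser attached.

def handle_ipc_message(raw: bytes, state: dict) -> bytes:
    """Dispatch one JSON-encoded request from the API layer."""
    msg = json.loads(raw)
    action = msg.get("action")
    if action == "get_token":
        # Return the last Authorization token captured via CDP (may be None).
        reply = {"ok": True, "token": state.get("token")}
    elif action == "refresh_token":
        # The real service drives the browser UI here; stubbed as a flag.
        state["refresh_requested"] = True
        reply = {"ok": True}
    else:
        reply = {"ok": False, "error": f"unknown action: {action}"}
    return json.dumps(reply).encode()
```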
- FastAPI API Server
  - OpenAI-compatible `/v1/chat/completions` endpoint
  - Request translation from OpenAI format to HopGPT's internal API structure
  - Automatic model routing based on model family:
    - Claude models → `/api/agents/chat/AnthropicClaude`
    - GPT models → `/api/agents/chat/AzureOpenAI`
    - Llama models → `/api/agents/chat/MetaLlama`
  - SSE streaming response parsing and reformatting
  - Retry logic with automatic token refresh on 401 errors
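The routing rule is simple family matching on the model name. A minimal sketch — the three endpoint paths come from the table above, while treating a substring of the model name as the family marker is an assumption of this sketch:

```python
# Map an OpenAI-style model name to HopGPT's per-family chat endpoint.
ROUTES = {
    "claude": "/api/agents/chat/AnthropicClaude",
    "gpt": "/api/agents/chat/AzureOpenAI",
    "llama": "/api/agents/chat/MetaLlama",
}

def route_for_model(model: str) -> str:
    """Pick the HopGPT endpoint for a given model name, e.g. 'claude-opus-4.5'."""
    name = model.lower()
    for family, path in ROUTES.items():
        if family in name:
            return path
    raise ValueError(f"no HopGPT route for model {model!r}")
```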
- Token Management System
  - CDP's `Network.requestWillBeSent` event captures Authorization headers passively
  - No cookie parsing or localStorage manipulation needed
  - Detects null/invalid tokens and triggers refresh automatically
  - Token refresh via UI automation: navigates to a new chat, sends a test message, captures the fresh token from the authenticated request
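Passive capture means listening for `Network.requestWillBeSent` events and lifting the `Authorization` header from requests the page itself makes. The event payload shape below follows the CDP spec; the specific placeholder values checked for (`"null"`, `"undefined"`) are an assumption illustrating the null-token detection described above:

```python
def extract_bearer_token(params: dict) -> "str | None":
    """Pull a usable Bearer token from one Network.requestWillBeSent event.

    `params` follows CDP's event payload: params["request"]["url"] and
    params["request"]["headers"]. Returns None for non-API requests and
    for null/placeholder tokens -- the case that triggers the UI-driven
    refresh in the real service.
    """
    request = params.get("request", {})
    if "/api/" not in request.get("url", ""):
        return None
    auth = request.get("headers", {}).get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    token = auth[len("Bearer "):].strip()
    # Assumed placeholders a frontend may send before login completes.
    if token.lower() in {"", "null", "undefined"}:
        return None
    return token
```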
- OpenCode Configuration
  - Custom provider configuration pointing to `localhost:8000`
  - Model definitions matching HopGPT's available models (Claude Opus 4.5, Claude Sonnet 4.5, GPT-4o, Llama 4, etc.)
  - Seamless integration - OpenCode treats it as any other OpenAI-compatible provider
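For illustration, an OpenAI-compatible provider entry of roughly this shape is what such a configuration tends to look like; the exact keys, file location, and model IDs depend on your OpenCode version, so treat this as a hypothetical sketch rather than the project's actual config:

```json
{
  "provider": {
    "hopgpt": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:8000/v1" },
      "models": {
        "claude-opus-4.5": { "name": "Claude Opus 4.5" },
        "gpt-4o": { "name": "GPT-4o" }
      }
    }
  }
}
```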
- Custom Prompt System
  - JSON-based configuration for system prompts and personas
  - Hot-reloadable without server restart
  - Supports multiple named configurations with easy switching
Setup & Infrastructure
The project includes:
- Makefile for service orchestration (`make start`, `make stop`, `make status`)
- Environment configuration for credentials and model defaults
- PostgreSQL integration for conversation persistence (optional)
- Logging infrastructure for debugging token flows and API translation
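A hypothetical sketch of the kind of targets the Makefile names; the real orchestration lives in the private repo, and the pid-file pattern here is just one common approach:

```make
# Hypothetical orchestration -- pid-file based, one per service.
start:
	nohup python browser_service.py > logs/browser.log 2>&1 & echo $$! > .browser.pid
	nohup uvicorn api_server:app --port 8000 > logs/api.log 2>&1 & echo $$! > .api.pid

stop:
	-kill $$(cat .browser.pid .api.pid 2>/dev/null) 2>/dev/null; rm -f .browser.pid .api.pid

status:
	@for f in .browser.pid .api.pid; do \
	  test -f $$f && kill -0 $$(cat $$f) 2>/dev/null && echo "$$f running" || echo "$$f stopped"; \
	done
```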
Technical Highlights
- Zero API Keys Required: Uses existing university authentication - if you can log into HopGPT in a browser, this works
- Transparent Proxying: OpenCode (or any OpenAI-compatible tool) has no idea it’s talking to a browser automation layer
- Graceful Degradation: Token expiration, network issues, and session timeouts are handled automatically with retry logic
- Model Agnostic: Same interface works for Claude, GPT, and Llama models - just change the model name in your request
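The transparent-proxying claim rests on the SSE translation step mentioned under the FastAPI server: HopGPT's stream is parsed and re-emitted as OpenAI-style chunks. A stdlib sketch, where the upstream payload shape (`{"content": ...}`) is an assumption since the real HopGPT schema isn't documented here:

```python
import json

def reformat_sse_line(line: str, model: str) -> "str | None":
    """Turn one upstream SSE data line into an OpenAI-style chunk line.

    Assumes the upstream payload is JSON with a "content" field (an
    assumption of this sketch). Comments/keep-alives are dropped and
    the [DONE] sentinel passes through unchanged.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return "data: [DONE]\n\n"
    text = json.loads(payload).get("content", "")
    chunk = {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [{"index": 0, "delta": {"content": text}, "finish_reason": None}],
    }
    return f"data: {json.dumps(chunk)}\n\n"
```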
Outcome
Full IDE integration with university-provided AI models. Write code with Claude Opus 4.5 assistance, get explanations, refactor - all through OpenCode’s native interface, all running through JHU infrastructure. No API keys, no usage limits beyond what HopGPT allows, no cost.
Repo Link
Private (university credentials required)