What it does
Give it a URL. It extracts everything a website tells the browser: layout, typography, colors, component structure, animations, responsive breakpoints. Then it reconstructs that site as clean HTML and CSS in a fresh workspace.
Not a screenshot. Not a PDF. A real, editable codebase you can modify, extend, or drop into your own project.
Why I built it
I kept hiring agencies and freelancers to clone marketing sites as starting points for client work. Three to five days, five figures, mixed quality. I wanted the output in ten minutes.
Most "AI website builders" in this space start from a blank canvas or a template library. This one treats the target site as a deterministic spec and matches it exactly. The output is yours. No vendor lock-in, no subscription, no template library.
How it works
Three phases, each running as an isolated Claude Code sub-agent so your main conversation stays clean:
Extractor
Visits the site at every breakpoint, captures reference screenshots, downloads assets, and writes a complete site-dna.json describing the page.
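As a rough sketch, the DNA file might capture something like the object below. The field names here are illustrative guesses, not the plugin's actual schema:

```javascript
// Hypothetical shape of site-dna.json -- field names are illustrative,
// not the plugin's real schema.
const siteDna = {
  url: "https://example.com",
  breakpoints: [1440, 768, 375], // viewport widths captured
  tokens: {
    colors: { primary: "#0a66c2", background: "#ffffff" },
    typography: { body: { family: "Inter", size: "16px", lineHeight: 1.5 } },
    spacing: [4, 8, 16, 24, 32, 48],
  },
  sections: [
    {
      id: "hero",
      layout: "flex",
      screenshots: { 1440: "refs/hero-1440.png" },
      children: [],
    },
  ],
};

console.log(siteDna.sections.map((s) => s.id)); // [ 'hero' ]
```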
Reconstructor
Builds the page section by section from the DNA file, generating design tokens on the first section so the whole site stays consistent.
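One way the token step could work, sketched with made-up token names (the plugin's actual generator may differ): extracted colors and spacing become CSS custom properties once, and every later section references them instead of hard-coded values.

```javascript
// Hypothetical sketch: turn extracted tokens into CSS custom properties
// so every section reuses the same values. Token names are illustrative.
function tokensToCss(tokens) {
  const lines = [":root {"];
  for (const [name, value] of Object.entries(tokens.colors)) {
    lines.push(`  --color-${name}: ${value};`);
  }
  tokens.spacing.forEach((px, i) => {
    lines.push(`  --space-${i + 1}: ${px}px;`);
  });
  lines.push("}");
  return lines.join("\n");
}

const css = tokensToCss({
  colors: { primary: "#0a66c2", surface: "#f3f4f6" },
  spacing: [4, 8, 16],
});
console.log(css);
```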
Comparator
Runs pixel-level diffs between your build and the reference, section by section, and tells you exactly what is off.
Each sub-agent returns a short summary, not raw output. That keeps the main conversation tight while the agents do the heavy work in isolation.
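At its core, the Comparator's pixel-level diff boils down to something like this minimal sketch. Real diff tooling (pixelmatch, for example) adds anti-aliasing detection and perceptual color distance; this is an illustration of the idea, not the plugin's code:

```javascript
// Count pixels that differ between two same-size RGBA buffers.
// Compares RGB channels only; alpha is ignored in this sketch.
function countDiffPixels(a, b, width, height, tolerance = 16) {
  let mismatched = 0;
  for (let i = 0; i < width * height; i++) {
    const o = i * 4; // RGBA stride: 4 bytes per pixel
    const delta =
      Math.abs(a[o] - b[o]) + // R
      Math.abs(a[o + 1] - b[o + 1]) + // G
      Math.abs(a[o + 2] - b[o + 2]); // B
    if (delta > tolerance) mismatched++;
  }
  return mismatched;
}

// Two 2x1 images: first pixel identical, second differs in the red channel.
const ref = Uint8Array.from([255, 0, 0, 255, 0, 0, 0, 255]);
const build = Uint8Array.from([255, 0, 0, 255, 200, 0, 0, 255]);
console.log(countDiffPixels(ref, build, 2, 1)); // 1
```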
What it handles well
- Static marketing sites (landing pages, product pages, pricing pages)
- Component-heavy layouts with complex grid or flex structures
- Design systems with tokens, typography scales, spacing scales
- Responsive breakpoints (desktop, tablet, mobile)
- Hover states, focus states, subtle animations
- Wayback Machine archived pages (use the archive URL directly)
Known limitations
- Dynamic content behind auth walls is out of scope (the extractor sees what an unauthenticated browser sees)
- Single Page Apps with heavy client-side rendering need a longer extraction pass
- Video-heavy sites: it reconstructs the layout, not the video files
- Font licensing: it downloads the font files the site serves, the same as any browser would. You are responsible for the licenses if you ship the output
Requirements
- Claude Code installed
- Node.js 18 or higher (for extraction, token generation, and visual diff scripts)
Install
Clone the repository into your Claude Code plugins directory, then register the plugin per the Claude Code local plugin loading flow. Full instructions are in the GitHub README.
License
MIT. Use it, fork it, ship with it. No attribution required.
Built by Azeem Khan at Calyber AI. If you're a founder who needs an AI product shipped in two weeks instead of a tool you assemble yourself, we scope sprints here.