Most visual effects tools for the web have the same problem: they look great in the editor, but the moment you export, you're locked into someone else's runtime. Your embed phones home to a CDN. It breaks when the vendor changes their API. It adds 200 KB of JavaScript your visitors didn't ask for.

Highlights

Challenge


I wanted to build the opposite. A creative tool where the output is a single self-contained HTML file. No SDK. No network requests. No vendor dependency. The file is the product. You can drop it on any host, open it offline, embed it in any framework, and it just works.

That's Refract. It's a visual effects engine where you compose multi-layer scenes (GLSL shaders, 3D environments, particles, animated text) and a compiler turns the whole thing into a standalone artifact.

Approach

What it actually does

There are four main pieces:

A multi-layer compositor, which is the core. You stack shader layers, images, and 3D objects on a spatial canvas and blend them together with standard blend modes. Each layer compiles independently; the compositor blits them together in UV space.
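As a sketch of the blend step, here is the per-channel math such a compositor typically runs in GLSL; the names (`BlendMode`, `blendChannel`, `composite`) are illustrative, not Refract's actual API:

```typescript
// Hypothetical per-channel blend math, written in TS for readability; in the
// real compositor this arithmetic would live in a GLSL fragment shader.
type BlendMode = "normal" | "multiply" | "screen" | "overlay";

// All channel values are normalized to [0, 1], as they would be in GLSL.
function blendChannel(mode: BlendMode, base: number, top: number): number {
  switch (mode) {
    case "normal":   return top;
    case "multiply": return base * top;                  // darkens
    case "screen":   return 1 - (1 - base) * (1 - top);  // lightens
    case "overlay":  // multiply in shadows, screen in highlights
      return base < 0.5
        ? 2 * base * top
        : 1 - 2 * (1 - base) * (1 - top);
  }
}

// Source-over compositing of the blended color onto the base layer.
function composite(mode: BlendMode, base: number, top: number, alpha: number): number {
  return base * (1 - alpha) + blendChannel(mode, base, top) * alpha;
}
```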

A per-element animation system with scroll, hover, click, and mouse-driven triggers. 30+ easings including spring and elastic, ping-pong looping, cross-layer chaining. Everything serializes into the embed.
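A hedged sketch of what a couple of easing curves and ping-pong looping can look like; these are standard formulas, not the engine's real code:

```typescript
// Illustrative easing functions over normalized time t in [0, 1].
type Easing = (t: number) => number;

// Elastic ease-out: overshoots and oscillates before settling at 1.
const easeOutElastic: Easing = (t) => {
  if (t === 0 || t === 1) return t;
  const c = (2 * Math.PI) / 3;
  return Math.pow(2, -10 * t) * Math.sin((t * 10 - 0.75) * c) + 1;
};

// Back ease-out: a single spring-like overshoot past the target.
const easeOutBack: Easing = (t) => {
  const c1 = 1.70158, c3 = c1 + 1;
  return 1 + c3 * Math.pow(t - 1, 3) + c1 * Math.pow(t - 1, 2);
};

// Ping-pong looping: fold unbounded elapsed time onto 0 → 1 → 0 cycles,
// so the animation plays forward, then backward, indefinitely.
function pingPong(elapsed: number, duration: number): number {
  const t = (elapsed / duration) % 2;
  return t <= 1 ? t : 2 - t;
}
```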

3D world building with GLB asset import, lighting, fog, scroll-driven camera paths with CatmullRom interpolation and per-keyframe FOV control. This was the most recent major addition and the one I'm most excited about.
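Scroll-driven sampling of a camera path can be sketched like this for a single scalar channel (one position axis, or FOV). This is the standard uniform Catmull-Rom formula, not Refract's actual implementation, and the keyframe handling is simplified:

```typescript
// Uniform Catmull-Rom interpolation for one scalar channel. The curve passes
// through p1 at t = 0 and p2 at t = 1, with p0/p3 shaping the tangents.
function catmullRom(p0: number, p1: number, p2: number, p3: number, t: number): number {
  const t2 = t * t, t3 = t2 * t;
  return 0.5 * (
    2 * p1 +
    (-p0 + p2) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
    (-p0 + 3 * p1 - 3 * p2 + p3) * t3
  );
}

// Scroll progress (0..1) picks a keyframe segment, then interpolates inside
// it; endpoints are handled by clamping the neighbor indices.
function sampleChannel(keys: number[], progress: number): number {
  const segs = keys.length - 1;
  const s = Math.min(Math.floor(progress * segs), segs - 1);
  const t = progress * segs - s;
  const at = (i: number) => keys[Math.max(0, Math.min(keys.length - 1, i))];
  return catmullRom(at(s - 1), at(s), at(s + 1), at(s + 2), t);
}
```

Per-keyframe FOV control falls out of the same machinery: FOV is just another channel run through the sampler.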

And 38 MCP tools that give AI agents the same creative surface as the visual editor. Not a simplified subset. The full thing. An agent can build a 3D world, set up scroll animations, adjust post-processing, and export, all without the UI.
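To make the idea concrete, here is a hypothetical shape for one such tool. The tool name, schema, and handler below are invented for illustration; the point is that each tool wraps the same programmatic API the editor calls:

```typescript
// Hypothetical MCP-style tool surface: a name, a JSON-schema-ish input
// description, and a handler delegating to the engine's programmatic API.
type Tool = {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => unknown;
};

const tools = new Map<string, Tool>();

function registerTool(tool: Tool) {
  tools.set(tool.name, tool);
}

registerTool({
  name: "add_shader_layer", // hypothetical tool name, not one of the real 38
  description: "Add a GLSL shader layer to the current scene",
  inputSchema: { type: "object", properties: { preset: { type: "string" } } },
  handler: (args) => ({ layerId: "layer-1", preset: args.preset }), // stub
});

function callTool(name: string, args: Record<string, unknown>): unknown {
  const tool = tools.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}
```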

The stack is React + Vite on the frontend, a custom WebGL2 renderer, Three.js r170 for 3D, Node/Express + PostgreSQL on the backend. 51 presets ship across 9 categories, 118 material options including full PBR and matcap.

Outcomes

The hard parts:

The compiler. This is ~5,000 lines of bundler code that does something no creative tool I know of does: it produces a truly standalone HTML file. That means inlining the WebGL renderer, GLSL shaders, the animation engine, Three.js (only when 3D is used), the particle system, and even fonts. Google Fonts get fetched at export time, converted from WOFF2 to TTF, and base64-encoded into the file. Nothing is left external.
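The font-inlining step might look roughly like this, assuming a Node export pipeline and a TTF buffer already converted from WOFF2 (the conversion itself is omitted); the function name and CSS layout are illustrative:

```typescript
// Sketch: embed a font directly into the exported HTML as a base64 data URI,
// so the file makes no network request for it. `ttf` is assumed to be the
// already-converted TTF bytes.
function inlineFontFace(family: string, ttf: Buffer): string {
  const dataUri = `data:font/ttf;base64,${ttf.toString("base64")}`;
  return [
    `@font-face {`,
    `  font-family: '${family}';`,
    `  src: url('${dataUri}') format('truetype');`,
    `}`,
  ].join("\n");
}
```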

This is the defining technical bet of the whole project. It shaped everything downstream: how the renderer is structured, how assets are referenced, what the Player SDK looks like. If I'd built on top of an existing creative framework, I wouldn't have had this level of control over the output.

Two rendering worlds in one GL context. 2D shader layers live in my custom WebGL compositor. 3D objects live in Three.js. Getting them to coexist without doubling GPU memory meant building a zero-copy texture bridge. The compositor renders to a framebuffer, and Three.js reads that texture directly via the internal __webglTexture property. No readPixels, no canvas-to-texture copies. A GLSL shader layer can literally serve as a live matcap texture for a 3D object sitting above it.
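A structural sketch of that bridge, with stand-in types so it runs without a GL context. The real version relies on Three.js internals (a property like `__webglTexture` on the renderer's per-texture state), which are undocumented and can change between releases:

```typescript
// Stand-ins for GPU-side objects so the wiring is visible without WebGL.
interface GLTextureHandle { id: number }           // stands in for WebGLTexture
interface ThreeTextureProps {                      // stands in for three.js's
  __webglTexture?: GLTextureHandle;                // internal texture record
  __webglInit?: boolean;
}

// The compositor renders a shader layer into a framebuffer-attached texture...
function renderLayerToTexture(): GLTextureHandle {
  return { id: 42 }; // in reality: gl.createTexture() + framebufferTexture2D
}

// ...and Three.js is pointed at the same GPU handle instead of copying pixels.
function bridgeToThree(props: ThreeTextureProps, gpuTex: GLTextureHandle): void {
  props.__webglTexture = gpuTex; // three.js now samples the compositor's output
  props.__webglInit = true;      // skip three.js's own upload path
}
```

The key property is that both renderers hold a reference to one GPU texture; no pixel data ever crosses back to the CPU.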

The data pipeline problem. Every new field has to flow through ~8 layers: component state, App.jsx, API hook, server route, database migration, bundler, generated embed, client restore. Miss any one and you get silent data loss. This was the single biggest source of bugs throughout development. Not the rendering, not the shaders, but making sure a new property actually survives the full round trip from UI to export to reload.
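One cheap guard against that class of bug is a round-trip check at the serialization boundary. The scene shape below is illustrative, not Refract's actual schema:

```typescript
// Hypothetical scene document shape; the real one has far more fields.
interface SceneDoc {
  version: number;
  layers: Array<{ id: string; type: string; props: Record<string, unknown> }>;
}

function serialize(doc: SceneDoc): string {
  return JSON.stringify(doc);
}

function restore(json: string): SceneDoc {
  // A real restore would also apply defaults for fields added since `version`.
  return JSON.parse(json) as SceneDoc;
}

// True only if every field survives UI state -> export -> reload intact.
function roundTripsCleanly(doc: SceneDoc): boolean {
  return JSON.stringify(restore(serialize(doc))) === JSON.stringify(doc);
}
```

Running a check like this per layer type in CI catches a dropped field at commit time instead of as silent data loss in production.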

Making AI work as a real creative tool. The 38 MCP tools only work because every feature has a clean programmatic API underneath the UI. That wasn't a given. It was a constraint I had to enforce from early on. Any time I built a feature that only worked through the editor, I knew I'd have to go back and give it an API surface. The payoff is that Refract is one of the most AI-addressable creative tools out there, and MCP stays free because the user's AI client handles inference costs.

Summary

Design decisions:

I designed every screen: editor, dashboard, export flow, pricing, landing page. A few decisions worth noting:

The editor uses progressive disclosure heavily. The spatial canvas and property panel adapt to whatever layer type you've selected. Shader uniforms, 3D material settings, animation curves. All of that is there, but it doesn't all hit you at once.

The UI is fully custom, designed in Figma. I followed common creative-tool patterns, but approached each panel in the Studio and library with its own mental model, keeping user ownership and creative freedom in mind.

Figuring out how creatives can collaborate with AI through bi-directional tooling was one of the biggest design challenges. The goal is a conversational workflow in which the user and the AI share the same tools. That means user-facing properties and AI-facing properties have to stay in alignment, and both have to be human-readable enough to understand and interact with.

Refract is built around the full creative workflow of WebGL, so the dashboard and library needed to be equally thorough and easy to use.

The export flow generates platform-specific embed code. Not just a download button. You get copy-paste snippets for HTML, React, Next.js, Webflow, WordPress, Framer, Squarespace, and web components. MP4 video export (up to 15s) covers social and presentations.
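A minimal sketch of how per-platform snippet generation can work; the iframe markup, the filename, and the reduced platform list here are assumptions, not Refract's actual output:

```typescript
// Illustrative snippet generator wrapping one exported, self-contained file.
type Platform = "html" | "react" | "nextjs";

function embedSnippet(platform: Platform, src: string): string {
  switch (platform) {
    case "html":
      return `<iframe src="${src}" loading="lazy"></iframe>`;
    case "react":
    case "nextjs": // same component; Next.js serves the file from /public
      return `export const Scene = () => <iframe src="${src}" loading="lazy" />;`;
  }
}
```

Because the exported file is fully self-contained, every platform variant reduces to "point something at one static file", which is what keeps the snippet matrix maintainable.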

More images and design details coming soon. This project is very large.