designer/engineer

Wicked good design for wicked problems.

Featured Work

PROJECT FELINES

The Alzheimer’s field spent billions clearing amyloid plaques from brains that kept declining. FELINES is what happened when I stopped accepting that narrative and started tracing upstream, through iron dysregulation, ferroptosis, and vascular failure, across six neurodegenerative diseases that enter through different doors but break down the same way.

GOINVO WEBSITE

Migrating a healthcare UX studio off aging Gatsby infrastructure and onto Sanity CMS, with animated page transitions and an accessibility-first architecture. Nine document types, card-morph transitions, and a content pipeline that lets non-technical editors publish without touching code.

WRITROSPECT

An AI-assisted journaling tool that solves the blank-page problem. Built on neuroscience research about why people abandon commitments to themselves, with a prompt architecture that adapts to what you actually need rather than loading everything at once.

RENDOMAT

A programmatic video studio built at GoInvo. Turns structured content into multi-platform video with professional motion design, replacing a manual production pipeline that took a full day per video with an automated system that renders in minutes.

PHYLACTORY

100,000+ concurrent units in a fantasy game that fuses factory management with real-time strategy. Built a custom Entity-Component-Space system from scratch in Godot 4 with C++ and GDExtension, hand-crafted every sprite and tileset in Aseprite, and designed twelve interconnected game systems such as energy grids, trains, pipes, and formation AI.

Case Study

Biological causation
is a graph problem.

A Rust/WASM graph analysis engine built for the FELINES project, designed to map and analyze causal relationships across biological mechanisms, with every algorithm running in the browser without blocking the UI.

6+
Analysis algorithms
17
Relation types
7
Causal confidence levels
4
Export formats
01

The Problem

The FELINES project maps how six neurodegenerative diseases converge on the same iron-driven cell death mechanism. That map is a directed graph: nodes are biological mechanisms, edges are causal relationships with varying levels of evidence behind them.

Analyzing that graph means answering questions like: which mechanism has the most downstream influence? What's the strongest evidence path between iron dysregulation and oligodendrocyte death? Where are the reinforcing feedback loops that make the cascade self-sustaining? Which node, if disrupted, causes the most damage to the network?

Existing graph tools either required desktop software (Gephi, Cytoscape), couldn't handle causal confidence as edge weights, or blocked the browser's main thread during computation. I needed something that ran natively in the browser, treated evidence quality as a first-class concept, and stayed responsive while crunching a network with hundreds of relationships.

The graph isn't the visualization.
The graph is the analysis.

02

The Framework

Before writing a line of Rust, I needed a formal system for representing disease mechanisms. Existing standards didn't fit: BEL (Biological Expression Language) captured causal semantics but not stock-flow dynamics. SBGN had visual notation but no computational model for path analysis. Systems dynamics frameworks tracked flows but lacked biological entity classification.

So I designed the Systems Biology Stock-Flow (SBSF) framework, combining Donella Meadows' stock-flow thinking with BEL causal semantics, SBGN entity classification, and the OBO Relation Ontology. The result is a typed graph representation where every node carries biological meaning and every edge carries evidence provenance, and where the graph structure itself enables computational reasoning about disease mechanisms.

Typed Nodes

Four categories—stocks (accumulations like protein levels), states (qualitative conditions like cell phenotype), processes (dynamic flows like phagocytosis), and boundaries (system edges like genes or drug interventions)—with deep subtypes that preserve biological semantics.

Evidence-Weighted Edges

Seven causal confidence levels (L1–L7) tied to experimental methods: randomized controlled trials at L1, genetic knockouts at L3, in vitro assays at L5, case reports at L7. These map directly to mathematical weights for path analysis.

Quantitative Annotations

Edges can carry stoichiometric coefficients, kinetic parameters (Km, Vmax, IC50), effect sizes, and dose-response curves—enabling rough calculations through causal chains rather than just qualitative reasoning.

Interoperable Export

Four export formats—NetworkX JSON for Python workflows, GraphML for Cytoscape and yEd, GEXF for Gephi, and CSV for spreadsheet analysis—so the same graph moves between browser, desktop tools, and programmatic pipelines.

03

The Architecture

The engine is a three-layer system. Rust handles all graph algorithms, compiled to a 331KB WASM binary. A Web Worker loads that binary and runs computation in a background thread so the UI never freezes. A React hook wraps the worker with a promise-based API that feels like calling any other async function.

This means a user can trigger a betweenness centrality calculation on the full network, continue interacting with the page, and receive the results when they're ready: no loading spinners, no frozen tabs.

Rust Core

petgraph-backed graph engine compiled to WebAssembly with wasm-bindgen. All algorithms run in pure Rust with zero JavaScript dependencies in the computation layer.

Web Worker

Message-driven interface that dynamically loads the WASM binary and runs all computation off the main thread. Twenty message types with async request/response ID tracking.

React Hook

useGraph() manages the full worker lifecycle, exposes promise-based methods for every algorithm, and supports automatic layout on mount with useCallback optimization.

Not all evidence is equal.

Every edge in the graph carries a causal confidence level from L1 (direct experimental manipulation) to L7 (theoretical). These levels map to mathematical weights that shape path analysis: stronger evidence means shorter distance and higher strength.

L1: Randomized controlled trial
L2: Mendelian randomization
L3: Genetic knockout / knock-in
L4: Animal or human intervention
L5: In vitro / ex vivo mechanistic
L6: Cohort or case-control
L7: Cross-sectional / case report
04

The Analysis

The engine exposes six categories of graph analysis, each designed for a specific research question. All run in the Web Worker and return structured results that the React layer can render immediately.

Centrality

Degree, betweenness (Brandes algorithm with weighted Dijkstra variant), harmonic closeness for disconnected graphs, and PageRank with configurable damping.

Path Analysis

Shortest path (BFS and Dijkstra), strongest path maximizing minimum confidence along the route, all simple paths with bounded enumeration, and neighborhood exploration.
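The strongest-path idea can be sketched as a max-min (widest-path) variant of Dijkstra: among all routes, pick the one whose weakest edge has the highest confidence. This is a dependency-free illustration, not the engine's petgraph-backed implementation, and the graph and weights are made up.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Heap entry ordered by the bottleneck (weakest-edge) confidence of the
// path reaching `node`, so the max-heap always expands the strongest path.
struct Entry { bottleneck: f64, node: usize }
impl PartialEq for Entry {
    fn eq(&self, other: &Self) -> bool { self.bottleneck == other.bottleneck }
}
impl Eq for Entry {}
impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        self.bottleneck.partial_cmp(&other.bottleneck).unwrap()
    }
}
impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}

/// adj[u] = list of (v, confidence weight in (0, 1]).
/// Returns the best achievable bottleneck confidence from src to dst.
fn strongest_path(adj: &[Vec<(usize, f64)>], src: usize, dst: usize) -> f64 {
    let mut best = vec![0.0_f64; adj.len()];
    best[src] = f64::INFINITY; // no edge constrains the start node yet
    let mut heap = BinaryHeap::new();
    heap.push(Entry { bottleneck: f64::INFINITY, node: src });
    while let Some(Entry { bottleneck, node }) = heap.pop() {
        if bottleneck < best[node] { continue; } // stale entry
        for &(next, w) in &adj[node] {
            let b = bottleneck.min(w); // path strength = its weakest edge
            if b > best[next] {
                best[next] = b;
                heap.push(Entry { bottleneck: b, node: next });
            }
        }
    }
    best[dst]
}

fn main() {
    // 0 -> 1 -> 3 (weakest edge 0.9) beats the direct 0 -> 3 edge (0.2).
    let adj = vec![vec![(1, 0.9), (3, 0.2)], vec![(3, 0.95)], vec![], vec![]];
    assert!((strongest_path(&adj, 0, 3) - 0.9).abs() < 1e-9);
}
```

This is why confidence has to be a first-class edge attribute: a longer chain of well-evidenced relationships can, and should, beat a single weakly supported shortcut.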

Feedback Loops

Tarjan’s SCC-based cycle detection that classifies each loop as reinforcing or balancing, with minimum confidence scoring across edges.

Community Detection

Label propagation algorithm identifying clusters of tightly connected mechanisms, with modularity scoring and cross-module connectivity matrices.

Robustness

Systematic node removal simulation that ranks which mechanisms, if disrupted, cause the greatest connectivity loss across the network.
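The robustness simulation can be sketched by removing each node in turn and counting how many ordered reachable pairs disappear. The toy graph and the reachable-pair metric are illustrative; the engine's actual connectivity scoring may differ.

```rust
use std::collections::VecDeque;

/// Counts ordered reachable pairs (s, t), s != t, by BFS from every node,
/// optionally pretending one node has been removed from the network.
fn reachable_pairs(adj: &[Vec<usize>], skip: Option<usize>) -> usize {
    let n = adj.len();
    let mut pairs = 0;
    for s in 0..n {
        if Some(s) == skip { continue; }
        let mut seen = vec![false; n];
        let mut queue = VecDeque::from([s]);
        seen[s] = true;
        while let Some(u) = queue.pop_front() {
            for &v in &adj[u] {
                if Some(v) != skip && !seen[v] {
                    seen[v] = true;
                    queue.push_back(v);
                }
            }
        }
        pairs += seen.iter().filter(|&&x| x).count() - 1; // exclude s itself
    }
    pairs
}

fn main() {
    // 0 -> 1 -> 2 and 0 -> 1 -> 3: node 1 is the hub mechanism.
    let adj = vec![vec![1], vec![2, 3], vec![], vec![]];
    let baseline = reachable_pairs(&adj, None);
    // Disrupting the hub destroys more connectivity than disrupting a leaf.
    let loss_hub = baseline - reachable_pairs(&adj, Some(1));
    let loss_leaf = baseline - reachable_pairs(&adj, Some(3));
    assert!(loss_hub > loss_leaf);
}
```

Ranking mechanisms by that connectivity loss is what turns a static map into a prioritized list of intervention targets.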

Sugiyama Layout

Hierarchical graph drawing with longest-path layering, ghost node insertion for clean multi-layer edges, and barycentric crossing minimization.

05

The Data Model

The SBSF framework defines 17 relation types (increases, decreases, produces, degrades, binds, transports, traps, protects, disrupts, and more) that capture the semantics of mechanistic biology rather than flattening everything to generic edges. The engine knows that “traps” and “degrades” are inhibitory, so feedback loop polarity is derived automatically from edge composition.
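Deriving polarity from edge composition can be sketched with the standard systems-dynamics rule: a cycle is reinforcing when it contains an even number of inhibitory relations, balancing when odd. Only a handful of the 17 relation types appear here; beyond "traps" and "degrades" (which the text names as inhibitory), the classification is my assumption.

```rust
#[derive(Clone, Copy)]
enum Relation { Increases, Produces, Decreases, Degrades, Traps }

// "Traps" and "degrades" are inhibitory per the framework; classifying
// "decreases" the same way is an assumption for this sketch.
fn is_inhibitory(r: Relation) -> bool {
    matches!(r, Relation::Decreases | Relation::Degrades | Relation::Traps)
}

#[derive(Debug, PartialEq)]
enum Polarity { Reinforcing, Balancing }

/// Polarity of a feedback loop given the relations along its cycle:
/// an even count of inhibitory edges means the signal returns with the
/// same sign (reinforcing); an odd count flips it (balancing).
fn loop_polarity(cycle: &[Relation]) -> Polarity {
    let inhibitory = cycle.iter().filter(|&&r| is_inhibitory(r)).count();
    if inhibitory % 2 == 0 { Polarity::Reinforcing } else { Polarity::Balancing }
}

fn main() {
    // A increases B, B degrades A: one inhibitory edge, so balancing.
    assert_eq!(
        loop_polarity(&[Relation::Increases, Relation::Degrades]),
        Polarity::Balancing
    );
    // Two inhibitory edges cancel: the loop reinforces itself.
    assert_eq!(
        loop_polarity(&[Relation::Traps, Relation::Degrades]),
        Polarity::Reinforcing
    );
}
```

Because polarity falls out of the relation types automatically, no one has to hand-annotate whether each detected cycle is self-amplifying.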

When quantitative data exists, edges carry stoichiometric coefficients (reactant and product ratios), kinetic parameters (Km, Vmax, IC50), or effect sizes with confidence intervals. This means the engine can trace not just whether A affects B, but roughly how much, turning a qualitative causal map into a semi-quantitative reasoning tool.
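As a back-of-envelope illustration of "roughly how much": if each edge carries a multiplicative effect size, the net effect along a chain A to B to C is approximately the product of the per-edge effects. The multiplicative assumption and the numbers are mine, not values from the FELINES dataset.

```rust
/// Net effect of a causal chain, assuming per-edge effect sizes compose
/// multiplicatively (a simplification; real kinetics are messier).
fn chain_effect(effect_sizes: &[f64]) -> f64 {
    effect_sizes.iter().product()
}

fn main() {
    // A 1.5x effect followed by a 2.0x effect compounds to roughly 3.0x.
    assert!((chain_effect(&[1.5, 2.0]) - 3.0).abs() < 1e-9);
    // An inhibitory step (0.5x) halves the downstream effect.
    assert!((chain_effect(&[1.5, 2.0, 0.5]) - 1.5).abs() < 1e-9);
}
```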

Generic graph tools vs. mechanistic graph

Nodes and edges with optional labels → Typed nodes (stock, state, boundary, process) with semantic meaning
Single edge weight (numeric) → Causal confidence (L1–L7) mapping to both strength and distance weights
Layout optimizes for aesthetics → Sugiyama layout respects information flow direction in causal networks
Analysis treats all paths equally → Strongest-path algorithm finds the most evidence-supported causal chain

Every edge carries the weight of its evidence.

06

What I Learned

Rust/WASM is production-ready for browser computation.

The Web Worker + WASM pattern eliminates the main-thread blocking that makes JavaScript graph libraries unusable at scale. The 331KB binary loads once and runs every algorithm at near-native speed. The developer experience with wasm-bindgen has matured to the point where the Rust/JS boundary is nearly invisible.

Domain-specific tools outperform general ones.

A generic graph library forces you to encode biological semantics as ad hoc metadata. Building causal confidence, relation types, and feedback loop polarity into the engine's core data structures meant every algorithm could reason about evidence quality natively rather than through post-hoc filtering.

The hook is the product.

All the Rust, WASM, and worker infrastructure exists so that the React consumer can write useGraph(data) and call await betweennessCentrality(). The complexity lives behind an interface simple enough that the research team never has to think about WASM or workers.

Built with

Rust · WebAssembly · petgraph · wasm-bindgen · TypeScript · React 18/19 · Web Workers · XY Flow · vitest

About

Someone didn't give up on me.
That's why I build.

There is no definitive formulation of a wicked problem.

When I was three months old, I had intestinal volvulus. I wouldn’t stop crying, every doctor had a different explanation, and my parents kept looking until they found one who could see what the others missed.

The choice of explanation determines the resolution.

That persistence saved my life. It also gave me something I’ve carried ever since: the recognition that how you frame a problem determines whether you can solve it at all.

Every wicked problem can be considered a symptom of another problem.

I don’t just say “wicked” because I’m from Massachusetts. In 1973, Horst Rittel and Melvin Webber described wicked problems: the kind where the crying is a symptom of something deeper, and treating the surface never reaches the root.

Every wicked problem is essentially unique.

I started where a lot of CS grads start: writing Perl scripts for an escalation engineering team at Dell EMC, then two jobs in the gaming industry building things that were complex, fast, and fun. But games ship patches. The problems I kept gravitating toward were the ones where you don’t get to patch. I earned a Master’s in bioinformatics because I wanted to bring engineering somewhere the problems don’t repeat: healthcare, government, regulated systems where every deployment is its own context and every patient is their own case.

Wicked problems do not have an enumerable set of potential solutions.

I joined a small studio designing for healthcare and government because the solution space is never closed. There’s no dropdown menu of correct answers. You design, ship, learn, and revise, knowing the next version will be different, not because the last one failed, but because the problem shifted underneath you.

Every solution is a “one-shot operation”; every trial counts.

In regulated environments, every release matters. There are no sandbox deployments when someone’s care depends on the system working. That weight is something I chose, not something I stumbled into.

The planner has no right to be wrong.

When you build software for medical contexts, you carry a specific accountability. The system you ship becomes part of someone’s care, and “move fast and break things” is not an option when the things that break are people.

Wicked problems have no stopping rule.

Over the past several years, that same instinct has pulled me toward problems with no finish line. I started researching neurodegenerative disease on my own, not because I expected to solve it, but because I couldn’t stop looking once I started seeing what others were missing. That’s how I work on everything: once I care about a problem, there is no clean stopping point, only the next question.

There is no immediate and no ultimate test of a solution.

I’ve learned to be comfortable building things I can’t fully validate yet. Whether it’s a healthcare prototype shipping to users whose feedback won’t arrive for months, or a research framework whose predictions won’t be tested for years, the work has to be good enough to act on before you know if it’s right. That uncertainty isn’t a flaw in the process. It is the process.

Solutions are not true-or-false, but good-or-bad.

That’s the lens I bring to everything I build. Not “is this correct” but “is this better.” Better communication, better questions, better tools for the people using them. My independent research on neurodegeneration, Project FELINES, follows the same principle: it doesn’t claim to be the right answer, just a framework that generates better questions than the ones we’ve been asking.

I build software. I design systems. But what drives me is the same thing that’s driven me since before I could talk: someone needs to keep looking until they find what the others missed.

Contact

Let's build something together.

Interested in working together, have a question, or just want to say hello? I'd love to hear from you.