Gisela treats AI agent orchestration like a compiler treats source code. You define workflows in ASL (the Amazon States Language used by AWS Step Functions), and Gisela compiles them into typed, deterministic artifacts with full audit trails. The compiled machines can be shared, composed, and executed with replay guarantees.
The problem
Most AI agent frameworks chain prompts together with glue code. When something goes wrong, you can’t tell what happened, you can’t replay the failure, and you can’t reuse the workflow as a building block for something else. The model is treated as the system, rather than as a component within a system.
How Gisela works
You write your agent workflow as an ASL JSON file. Gisela’s compiler parses, validates, and generates a native artifact with typed interfaces and deterministic semantics. At runtime, every state transition is journaled. You can replay any execution and get the same result.
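As a sketch, a minimal workflow definition might look like the following standard ASL document. The state names and `Resource` identifiers here are illustrative assumptions, not Gisela's actual resource syntax:

```json
{
  "Comment": "Hypothetical two-step agent workflow (names are illustrative)",
  "StartAt": "DraftAnswer",
  "States": {
    "DraftAnswer": {
      "Type": "Task",
      "Resource": "model:draft",
      "Next": "ReviewAnswer"
    },
    "ReviewAnswer": {
      "Type": "Task",
      "Resource": "model:review",
      "End": true
    }
  }
}
```

Because the definition is declarative, the compiler can validate the state graph ahead of time (every `Next` target exists, some state terminates, interfaces type-check) before any model is invoked.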
Machines can invoke other machines (submachines), forming hierarchical, reusable components. A compiled machine has declared inputs, outputs, and required capabilities, so you know what it needs before you run it.
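For illustration only (the exact resource naming is an assumption), a parent machine could invoke a compiled submachine as an ordinary `Task` state, passing it a slice of the parent's state via standard ASL `Parameters` path syntax:

```json
{
  "StartAt": "Summarize",
  "States": {
    "Summarize": {
      "Type": "Task",
      "Resource": "machine:summarizer",
      "Parameters": { "text.$": "$.document" },
      "End": true
    }
  }
}
```

Since the submachine declares its inputs, outputs, and required capabilities, the compiler can reject this composition at build time if `text` does not satisfy the summarizer's declared input type.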
Project structure
- gisela - the Python runtime and CLI
- gisela-spec - the formal specification
- vendor/pytsme - Python bindings to the Rust TSME compiler and state machine engine
Key ideas
- The model is a fallible worker inside deterministic machinery, not the machinery itself
- Every execution is journaled and replayable
- Machines have typed interfaces and can be composed into larger workflows
- Compiled artifacts can be published and shared without exposing source
- Reliability comes from system design, not model scale
Tech stack
Python 3.9+, Rust (via pytsme/TSME), ASL/JSON, Sphinx