SRSWTI Inc.
Building the world's fastest retrieval and inference engines.

Bodega's Own
Exclusive models trained/optimized for Bodega Inference Engine
Raptor Series
Ultra-compact reasoning models
- bodega-raptor-0.9b - 900M params. Runs on mobile and Raspberry Pi at 100+ tok/s. Handles document classification, query reformulation, and lightweight reasoning at the edge.
- bodega-raptor-90m - Extreme edge variant. Sub-100M params with tool-calling support and lightweight reasoning.
- bodega-raptor-1b-reasoning-opus4.5-distill - Distilled from Claude Opus 4.5 reasoning patterns. Enhanced logical deduction chains.
- bodega-raptor-8b-mxfp4 - Balanced power/performance for laptops.
- bodega-raptor-15b-6bit - 15B params at 6-bit quantization; the highest-capacity Raptor variant.
Flagship Models
Frontier intelligence, distilled and optimized
- deepseek-v3.2-speciale-distilled-raptor-32b-4bit - DeepSeek V3.2 distilled to 32B with Raptor reasoning. Exceptional math/code generation in 5-7GB footprint. 120 tok/s on M1 Max.
- bodega-centenario-21b-mxfp4 - Production workhorse. 21B params in MXFP4 quantization, optimized for sustained inference workloads; strong across all categories.
- bodega-solomon-9b - Multimodal; best suited for agentic coding.
Axe-Turbo Series
Agentic Coding Models
- axe-turbo-1b - 1B params, 150 tok/s, sub-50ms first token. Edge-first architecture.
- axe-turbo-31b - High-capacity variant for desktop/server deployments.
Specialized Models
Task-specific optimization
- bodega-vertex-4b - 4B params. Optimized for structured data.
- blackbird-she-doesnt-refuse-21b - Uncensored 21B variant for unrestricted generation.
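The listing above names the models but not how to call them. As a hypothetical sketch only — assuming the Bodega Inference Engine exposes an OpenAI-compatible chat-completions endpoint, which is an assumption and not documented here — a request for the edge-class bodega-raptor-0.9b might be built like this:

```python
import json

# Hypothetical request payload for an assumed OpenAI-compatible
# /v1/chat/completions endpoint; the field names and endpoint
# shape are assumptions, not documented by SRSWTI.
payload = {
    "model": "bodega-raptor-0.9b",  # edge-class reasoning model from the list above
    "messages": [
        {
            "role": "user",
            "content": "Classify this document: quarterly earnings report.",
        }
    ],
    "max_tokens": 128,
    "temperature": 0.2,  # low temperature suits classification tasks
}

# Serialize to the JSON body that would be POSTed to the server.
body = json.dumps(payload)
print(body)
```

The payload is only constructed and serialized, not sent, since the actual server URL and authentication scheme are not specified in this listing.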