Murmuration

Autonomous navigation for simulated rovers, drones & swarms

Cascaded MLPs • Stereo + LiDAR fusion • MCP/LLM orchestration

System Architecture

Digital twins run in parallel inside a Babylon.js simulation. Each twin has its own sensor stack (LiDAR, stereo cameras, IMU, wheel encoders) feeding a cascaded MLP brain. An LLM supervisor coordinates training via MCP, and the best-performing policy is exported (as ONNX) to the physical rover.

Figure: System architecture (LLM supervisor, MCP aggregation, digital twins, ONNX export, physical rover)
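As a rough sketch of how one twin could be composed, the TypeScript shapes below mirror the architecture described above. The names (SensorFrame, Brain, DigitalTwin, selectBest) are illustrative assumptions, not the project's actual API:

```typescript
// Hypothetical shapes for one digital twin; names are illustrative.
interface SensorFrame {
  lidar: Float32Array;         // range scan from the simulated LiDAR
  stereoDepth: Float32Array;   // depth map from the stereo pipeline
  imu: { roll: number; pitch: number; yaw: number };
  wheelTicks: number[];        // per-wheel encoder counts
}

interface Brain {
  // The PerceptCortex + DecisionCortex cascade (see Navigation Pipeline below)
  act(frame: SensorFrame): { steer: number; throttle: number; brake: number; risk: number };
}

interface DigitalTwin {
  id: string;
  brain: Brain;
  step(dtSeconds: number): SensorFrame; // advance the Babylon.js scene, read sensors
  fitness(): number;                    // score the LLM supervisor ranks twins by
}

// The supervisor keeps the highest-fitness twin; its policy is what gets exported.
function selectBest(twins: DigitalTwin[]): DigitalTwin {
  return twins.reduce((best, t) => (t.fitness() > best.fitness() ? t : best));
}
```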

Navigation Pipeline

The depth-to-action pipeline is a configurable compute graph: swap between LiDAR-only, stereo with classical matching, stereo with the learned MatchingCortex, or fused depth, all at runtime.
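As an illustration, a pipeline configuration could look like the following; the schema and field names are assumptions, not the repo's actual config format:

```typescript
// Illustrative config schema; the real one in the repo may differ.
type DepthSource =
  | { kind: 'lidar' }
  | { kind: 'stereo'; matcher: 'bm' | 'sgm' | 'matching-cortex' }
  | { kind: 'fused'; weights: { lidar: number; stereo: number } };

interface PipelineConfig {
  depth: DepthSource;
  percept: { inputs: 42; hidden: 16; features: 8 };  // PerceptCortex widths
  decision: { inputs: 21; hidden: 16; outputs: 4 };  // DecisionCortex widths
}

// Swapping depth sources at runtime is just replacing one node's config:
const lidarOnly: PipelineConfig['depth'] = { kind: 'lidar' };
const learnedStereo: PipelineConfig['depth'] = { kind: 'stereo', matcher: 'matching-cortex' };
```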

PerceptCortex

42 → 16 → 8

Compresses LiDAR/stereo depth + IMU into 8 learned environment features.

DecisionCortex

21 → 16 → 4

Maps the 8 environment features + pose + slip + goal to steering, throttle, brake, and risk.

MatchingCortex

CNN (architecture TBD)

Learned stereo matcher: replaces classical BM/SGM with a trained CNN. Architecture under development.
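To make the cascade concrete, here is a minimal forward-pass sketch using the published layer widths. The tanh hidden activation, linear outputs, and the exact split of the 21 DecisionCortex inputs (8 environment features plus 13 pose/slip/goal values) are assumptions, not confirmed details:

```typescript
interface CortexWeights {
  W1: Float32Array[]; b1: Float32Array;  // first dense layer
  W2: Float32Array[]; b2: Float32Array;  // second dense layer
}

// Minimal dense layer: y = act(W x + b), with W stored as rows [out][in].
function dense(x: Float32Array, W: Float32Array[], b: Float32Array,
               act: (v: number) => number): Float32Array {
  const y = new Float32Array(b.length);
  for (let i = 0; i < b.length; i++) {
    let s = b[i];
    for (let j = 0; j < x.length; j++) s += W[i][j] * x[j];
    y[i] = act(s);
  }
  return y;
}

const tanh = Math.tanh;             // hidden activation (assumed)
const identity = (v: number) => v;  // linear outputs (assumed)

// PerceptCortex: 42 depth/IMU inputs -> 16 hidden -> 8 environment features.
function percept(input42: Float32Array, w: CortexWeights): Float32Array {
  const h = dense(input42, w.W1, w.b1, tanh);  // 42 -> 16
  return dense(h, w.W2, w.b2, tanh);           // 16 -> 8
}

// DecisionCortex: 8 features + 13 pose/slip/goal values = 21 inputs -> 4 actions.
function decide(features8: Float32Array, state13: Float32Array, w: CortexWeights) {
  const x = Float32Array.of(...features8, ...state13);  // concatenate to 21
  const h = dense(x, w.W1, w.b1, tanh);                 // 21 -> 16
  const [steer, throttle, brake, risk] = dense(h, w.W2, w.b2, identity); // 16 -> 4
  return { steer, throttle, brake, risk };
}
```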

Samples

LiDAR Scan

Visualize a simulated LiDAR scan from the depth pipeline, downsampled by convolution pooling.
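Assuming "convolution pooling" here means reducing the raw range array to a fixed number of sectors, a minimal min-pooling pass could look like this (the bin count and the choice of min-pooling are assumptions; min keeps the nearest obstacle per sector):

```typescript
// Pool a raw LiDAR range scan into fixed-width sectors by taking the minimum
// range per sector, so the nearest obstacle in each sector survives pooling.
function poolScan(ranges: Float32Array, bins: number): Float32Array {
  const out = new Float32Array(bins).fill(Infinity);
  const perBin = ranges.length / bins;
  for (let i = 0; i < ranges.length; i++) {
    const b = Math.min(bins - 1, Math.floor(i / perBin));
    if (ranges[i] < out[b]) out[b] = ranges[i];
  }
  return out;
}

// e.g. 360 beams -> 36 sectors feeding the perception stage
const pooled = poolScan(new Float32Array(360).fill(10), 36);
```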

Stereo Depth

Compare classical block matching (BM) against the learned MatchingCortex on synthetic scenarios.
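For reference, the classical BM baseline boils down to sum-of-absolute-differences block matching plus the depth-from-disparity formula Z = f·B/d. This is a generic textbook version, not the project's implementation:

```typescript
// Classical SAD block matching over a horizontal disparity range. Images are
// row-major grayscale arrays of size w*h; `half` is the half-width of the block.
function disparityBM(left: Float32Array, right: Float32Array,
                     w: number, h: number, half: number, maxD: number): Float32Array {
  const disp = new Float32Array(w * h);
  for (let y = half; y < h - half; y++) {
    for (let x = half + maxD; x < w - half; x++) {
      let bestD = 0, bestCost = Infinity;
      for (let d = 0; d <= maxD; d++) {
        let cost = 0; // sum of absolute differences over the block
        for (let dy = -half; dy <= half; dy++)
          for (let dx = -half; dx <= half; dx++) {
            const i = (y + dy) * w + (x + dx);
            cost += Math.abs(left[i] - right[i - d]);
          }
        if (cost < bestCost) { bestCost = cost; bestD = d; }
      }
      disp[y * w + x] = bestD;
    }
  }
  return disp;
}

// Depth from disparity: Z = f * B / d (focal length in pixels, baseline in meters).
const depth = (f: number, B: number, d: number) => (d > 0 ? (f * B) / d : Infinity);
```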

Navigation

Run the full perception → decision cascade on random obstacle layouts.

Training

Watch the PerceptCortex train live — loss curves, per-feature MSE, weight visualization.
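Per-feature MSE means tracking one error term per output dimension, so a single badly-learned feature stands out even when the total loss looks healthy. A minimal sketch:

```typescript
// Per-feature MSE over a batch: one error value per output dimension.
function perFeatureMSE(pred: Float32Array[], target: Float32Array[]): Float32Array {
  const dims = pred[0].length;
  const mse = new Float32Array(dims);
  for (let n = 0; n < pred.length; n++)
    for (let k = 0; k < dims; k++) {
      const e = pred[n][k] - target[n][k];
      mse[k] += (e * e) / pred.length;
    }
  return mse; // e.g. 8 values for the PerceptCortex feature head
}
```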

Compute Graph

Build and run custom pipeline configurations — swap depth sources at runtime.

Odometry

6-wheel differential odometry with slip detection and N-wheel averaging.
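A minimal sketch of the scheme (also covered in the Differential Odometry doc below): drop slipping wheels, average each side's trusted wheels, then apply the standard differential-drive pose update. The median-based slip test and the tolerance value are assumptions:

```typescript
// Per-side N-wheel averaging with a simple slip filter: wheels whose measured
// travel deviates from their side's median by more than `tol` are treated as
// slipping and dropped from the average. Illustrative; the repo's filter may differ.
function sideDistance(wheelDist: number[], tol: number): number {
  const sorted = [...wheelDist].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const trusted = wheelDist.filter(d => Math.abs(d - median) <= tol);
  return trusted.reduce((s, d) => s + d, 0) / trusted.length;
}

interface Pose { x: number; y: number; theta: number }

// Differential-drive update from averaged left/right wheel travel (midpoint heading).
function integrate(pose: Pose, left3: number[], right3: number[],
                   track: number, tol = 0.01): Pose {
  const dL = sideDistance(left3, tol);
  const dR = sideDistance(right3, tol);
  const dC = (dL + dR) / 2;          // forward travel of the body center
  const dTheta = (dR - dL) / track;  // heading change from the left/right split
  return {
    x: pose.x + dC * Math.cos(pose.theta + dTheta / 2),
    y: pose.y + dC * Math.sin(pose.theta + dTheta / 2),
    theta: pose.theta + dTheta,
  };
}
```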

Documentation

Navigation Architecture

Two-tier brain, sensor layer, cascaded MLP design, MCP strategic layer.

Stereo Vision

Matching algorithms, failure modes, noise characteristics, trade-offs vs LiDAR.

Differential Odometry

N-wheel per-side averaging with slip filtering for 6-wheel rovers.