Model Analysis
15 instruments · Weight topology · Attention analysis · Quantization profiling
Load a model in the Canvas workspace to unlock live analysis instruments.
Map weight distributions and connectivity patterns across all transformer layers.
Visualize high-dimensional token embeddings projected into interpretable semantic space.
Profile INT4/INT8/FP16 quantization fidelity and identify precision loss hotspots.
Run a holistic diagnostic across gradient norms, activation saturation, and dead neurons.
Weight Topology
Persistent homology of weight manifolds
Quantization Error
Per-layer quantization error distribution (FP32 → INT4/INT8)
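A minimal sketch of how per-layer quantization error can be measured: quantize each weight tensor with symmetric per-tensor scaling, dequantize, and report the mean absolute round-trip error. The layer names and random weights here are hypothetical stand-ins for a real checkpoint.

```python
import numpy as np

def quantize_error(weights: np.ndarray, bits: int = 8) -> float:
    """Mean absolute error after symmetric per-tensor fake quantization."""
    qmax = 2 ** (bits - 1) - 1              # 127 for INT8, 7 for INT4
    scale = np.abs(weights).max() / qmax    # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return float(np.mean(np.abs(weights - q * scale)))

# Hypothetical per-layer weights standing in for a loaded model.
rng = np.random.default_rng(0)
layers = {f"layer_{i}": rng.normal(size=(64, 64)) for i in range(4)}
errors = {name: quantize_error(w, bits=4) for name, w in layers.items()}
```

Lower bit widths use a coarser grid, so INT4 error per layer is expected to exceed INT8 error on the same tensor.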
Attention Head Diversity
Pairwise cosine similarity across all attention heads
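The pairwise comparison can be sketched as follows: flatten each head's parameters to a vector, normalize, and take the Gram matrix of the unit vectors. The `(num_heads, head_dim)` layout is an assumption for illustration.

```python
import numpy as np

def head_similarity(heads: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between flattened per-head vectors.

    heads: (num_heads, head_dim) array — hypothetical layout.
    """
    unit = heads / np.linalg.norm(heads, axis=1, keepdims=True)
    return unit @ unit.T    # (num_heads, num_heads), diagonal == 1

rng = np.random.default_rng(1)
heads = rng.normal(size=(8, 64))    # stand-in for 8 attention heads
sim = head_similarity(heads)
```

Off-diagonal values near 1 flag redundant heads; values near 0 indicate diverse, near-orthogonal heads.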
Weight Histogram
Distribution histogram of weight values per layer
SVD Rank Analysis
Effective rank via singular value decomposition
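One common definition of effective rank, sketched below, is the exponential of the Shannon entropy of the normalized singular-value distribution (Roy & Vetterli); assuming this is the variant intended:

```python
import numpy as np

def effective_rank(matrix: np.ndarray) -> float:
    """exp(entropy) of the normalized singular-value distribution."""
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / s.sum()                 # normalize singular values to a pmf
    p = p[p > 0]                    # drop exact zeros before log
    return float(np.exp(-np.sum(p * np.log(p))))

rank_identity = effective_rank(np.eye(10))   # all directions equal -> 10
```

A full-rank matrix with equal singular values scores its true rank; a near-rank-1 weight matrix scores close to 1, signalling compressibility.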
Layer Activation Map
Heatmap of activation magnitudes across layers
KV Cache Efficiency
Key-value cache hit rates and memory usage projection
Gradient Flow
Simulated gradient magnitude per layer for training stability
Dead Neuron Scanner
Detects neurons with zero activation across test inputs
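The scan can be sketched as a reduction over a batch of recorded activations: a unit is dead if its activation never exceeds a tolerance on any test input. The activation matrix here is synthetic, with one unit zeroed deliberately.

```python
import numpy as np

def dead_neurons(activations: np.ndarray, tol: float = 0.0) -> np.ndarray:
    """Indices of units whose activation never exceeds tol over the batch.

    activations: (num_inputs, num_neurons) post-nonlinearity values.
    """
    return np.flatnonzero(np.max(np.abs(activations), axis=0) <= tol)

rng = np.random.default_rng(2)
acts = np.maximum(rng.normal(size=(100, 8)), 0.0)  # ReLU-style activations
acts[:, 3] = 0.0                                   # deliberately kill unit 3
dead = dead_neurons(acts)
```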
Model Fingerprint
Unique structural hash and parameter census
No configuration required — ready to run.
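A structural hash plus parameter census might be computed as below: hash the sorted layer-name/shape pairs and sum the element counts. The layer names are hypothetical; a real implementation would walk the loaded model's state dict.

```python
import hashlib
import math

def fingerprint(shapes: dict) -> tuple:
    """Structural hash over sorted (name, shape) pairs + parameter count."""
    census = sum(math.prod(shape) for shape in shapes.values())
    payload = repr(sorted(shapes.items())).encode()
    return hashlib.sha256(payload).hexdigest()[:16], census

# Hypothetical two-layer model: 1000x64 embedding + 64x256 MLP weight.
digest, n_params = fingerprint({"embed": (1000, 64), "mlp.w1": (64, 256)})
```

Because the hash covers only names and shapes, two checkpoints of the same architecture share a fingerprint even when their weights differ — by design for structural identification.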
Perplexity Probe
Token-level perplexity on a user-provided corpus
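Given per-token log-probabilities from the model, corpus perplexity is the exponential of the mean negative log-likelihood. A minimal sketch, assuming natural-log probabilities:

```python
import numpy as np

def perplexity(token_log_probs: np.ndarray) -> float:
    """Perplexity = exp(mean negative log-likelihood) over the corpus."""
    return float(np.exp(-np.mean(token_log_probs)))

# If every token has probability 0.25, perplexity is exactly 4.
ppl = perplexity(np.log(np.full(10, 0.25)))
```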
Embedding Geometry
UMAP projection of token embedding space with cluster analysis
Attention Entropy
Shannon entropy of attention distributions per head
Weight Drift Monitor
L2 distance from a baseline checkpoint to detect drift
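The drift check reduces to a per-tensor L2 norm of the difference against a stored baseline. A minimal sketch, with hypothetical tensor names standing in for checkpoint keys:

```python
import numpy as np

def weight_drift(current: dict, baseline: dict) -> dict:
    """Per-tensor L2 distance from a baseline checkpoint."""
    return {name: float(np.linalg.norm(current[name] - baseline[name]))
            for name in baseline}

baseline = {"w": np.zeros((3, 3))}
current = {"w": np.full((3, 3), 2.0)}   # every element drifted by 2
drift = weight_drift(current, baseline)
```

Zero drift on every tensor means the weights are bit-identical to the baseline; a sudden jump in one layer localizes where fine-tuning or corruption changed the model.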
Model Comparison
Side-by-side weight statistics for two model handles