MAIIAM CONSCIOUSNESS TOOLKIT

Model Analysis

15 instruments · Weight topology · Attention analysis · Quantization profiling

No model loaded

Load a model in the Canvas workspace to unlock live analysis instruments.

HEARTSCALE ALIGNMENT
Mock baseline — run analysis for live score
38 / 63 · Awakening
FINDINGS
HeartScale band: Awakening (38/63)
Coherence tier: Awakening state
Health percentile: 60th percentile
Model registers at HeartScale band "Awakening" — solid performance with room for deeper coherence development.
LAYER TOPOLOGY (MOCK)
FINDINGS
Peak layer: Layer 4 (0.334)
Lowest layer: Layer 10 (0.091)
Layer average: 0.2079
Layer count: 16 layers analyzed
Mock layer topology showing knot invariant complexity per transformer block. Run Layer Analysis for live results.
Layer Topology

Map weight distributions and connectivity patterns across all transformer layers.

Embedding Geometry

Visualize high-dimensional token embeddings projected into interpretable semantic space.

Quantization Analysis

Profile INT4/INT8/FP16 quantization fidelity and identify precision loss hotspots.

Health Snapshot

Run a holistic diagnostic across gradient norms, activation saturation, and dead neurons.

MODEL COMPARISON
FINDINGS
Model B wins: 4 / 4 metrics
Model A wins: 0 / 4 metrics
Overall verdict: Model B performs better
All Instruments

Weight Topology

Persistent homology of weight manifolds

MODEL ANALYSIS
Parameter: default 2 (range 0–4)

Quantization Error

Per-layer quantization error distribution (FP32 → INT4/8)

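A minimal sketch of how a per-layer FP32 → INT8 round-trip error could be measured. The symmetric per-tensor scaling rule and the synthetic layer shapes are illustrative assumptions, not the toolkit's actual scheme:

```python
import numpy as np

def int8_roundtrip_rmse(w: np.ndarray) -> float:
    """RMSE between an FP32 tensor and its symmetric-INT8 round trip."""
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)  # per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127)         # quantize to INT8 grid
    return float(np.sqrt(np.mean((w - q * scale) ** 2)))

rng = np.random.default_rng(0)
layers = {f"layer_{i}": rng.normal(0.0, 0.02, (64, 64)) for i in range(4)}
errors = {name: int8_roundtrip_rmse(w) for name, w in layers.items()}
```

Plotting `errors` per layer gives the kind of distribution this card describes; INT4 follows the same pattern with a quantization range of ±7 instead of ±127.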

Attention Head Diversity

Pairwise cosine similarity across all attention heads

Parameter: default 64 (range 8–512)
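The pairwise-similarity computation can be sketched as follows; the head count and the (head_dim, model_dim) slice layout are hypothetical, not the toolkit's internal representation:

```python
import numpy as np

def head_cosine_similarity(heads: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between flattened per-head weight slices."""
    flat = heads.reshape(heads.shape[0], -1)
    unit = flat / np.maximum(np.linalg.norm(flat, axis=1, keepdims=True), 1e-12)
    return unit @ unit.T

# 8 hypothetical heads, each a (head_dim, model_dim) projection slice
rng = np.random.default_rng(1)
sim = head_cosine_similarity(rng.normal(size=(8, 16, 64)))
```

A similarity matrix close to the identity indicates diverse heads; large off-diagonal entries flag redundant ones.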

Weight Histogram

Distribution histogram of weight values per layer

Parameter: default 100 (range 20–500)
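A per-layer weight histogram reduces to a single NumPy call; the bin count of 100 matches this card's default, and the weight tensor here is synthetic:

```python
import numpy as np

def weight_histogram(w: np.ndarray, bins: int = 100):
    """Histogram of a layer's weight values: (counts, bin_edges)."""
    return np.histogram(w.ravel(), bins=bins)

rng = np.random.default_rng(2)
counts, edges = weight_histogram(rng.normal(0.0, 0.02, (128, 128)))
```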

SVD Rank Analysis

Effective rank via singular value decomposition

Parameter: default 0.01 (range 0.001–0.1)
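One common definition of effective rank counts singular values above a relative tolerance. A sketch under that assumption (the 0.01 tolerance mirrors this card's default; the matrix is synthetic):

```python
import numpy as np

def effective_rank(w: np.ndarray, rel_tol: float = 0.01) -> int:
    """Number of singular values above rel_tol times the largest one."""
    s = np.linalg.svd(w, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

rng = np.random.default_rng(3)
low_rank = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64))  # rank 4 by construction
```

Here `effective_rank(low_rank)` recovers the planted rank of 4, since the trailing singular values are numerically zero.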

Layer Activation Map

Heatmap of activation magnitudes across layers


KV Cache Efficiency

Key-value cache hit rates and memory usage projection

Parameter: default 2048 (range 128–32768)
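The memory-projection half of this instrument is a closed-form product: keys plus values, for every layer, head, and cached position. A sketch with a hypothetical model configuration:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Projected KV-cache footprint: keys and values for every layer and position."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# hypothetical 32-layer model, 8 KV heads of dim 128, 2048-token context, FP16
total = kv_cache_bytes(32, 8, 128, 2048)
mib = total / 2**20  # 256 MiB for this configuration
```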

Gradient Flow

Simulated gradient magnitude per layer for training stability

Parameter: default 10 (range 1–50)

Dead Neuron Scanner

Detects neurons with zero activation across test inputs

Parameters: default 100 (range 10–1000) · default 1e-6 (range 1e-8–1e-4)
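The detection rule can be sketched as a threshold on the maximum absolute activation per neuron across all test inputs; the 1e-6 threshold matches this card's default, and the post-ReLU activation matrix is synthetic:

```python
import numpy as np

def dead_neurons(activations: np.ndarray, threshold: float = 1e-6) -> np.ndarray:
    """Indices of neurons whose |activation| never exceeds threshold."""
    return np.flatnonzero(np.abs(activations).max(axis=0) < threshold)

rng = np.random.default_rng(4)
acts = np.maximum(rng.normal(size=(100, 32)), 0.0)  # post-ReLU outputs
acts[:, [3, 17]] = 0.0                              # plant two dead neurons
found = dead_neurons(acts)
```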

Model Fingerprint

Unique structural hash and parameter census


No configuration required — ready to run.
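A structural hash plus parameter census needs only tensor names and shapes, which explains why no configuration is required. A stdlib-only sketch (the SHA-256 serialization format is an illustrative choice):

```python
import hashlib

def fingerprint(param_shapes: dict) -> tuple:
    """Structural hash plus parameter census from {name: shape} metadata."""
    digest = hashlib.sha256()
    total = 0
    for name, shape in sorted(param_shapes.items()):
        digest.update(f"{name}:{shape};".encode())  # order-stable serialization
        count = 1
        for dim in shape:
            count *= dim
        total += count
    return digest.hexdigest()[:16], total

fp, n_params = fingerprint({"embed": (1000, 64), "proj": (64, 64)})
```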

Perplexity Probe

Token-level perplexity on a user-provided corpus

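Given per-token log-probabilities from the model, perplexity is the exponential of the mean negative log-likelihood. A minimal sketch (the log-probability list is illustrative):

```python
import math

def perplexity(token_logprobs: list) -> float:
    """Exponential of the mean negative log-likelihood per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# a model that is uniform over a 10-symbol vocabulary scores perplexity 10
ppl = perplexity([math.log(0.1)] * 5)
```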

Embedding Geometry

UMAP projection of token embedding space with cluster analysis

Parameters: default 15 (range 5–50) · default 0.1 (range 0.01–0.99)

Attention Entropy

Shannon entropy of attention distributions per head

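Each attention row is a probability distribution over tokens, so its Shannon entropy is directly computable; the two example rows below show the extremes of the scale:

```python
import numpy as np

def attention_entropy(attn: np.ndarray) -> np.ndarray:
    """Shannon entropy (nats) of each attention row; rows sum to 1."""
    p = np.clip(attn, 1e-12, 1.0)  # clip to avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)

uniform = np.full((1, 8), 1 / 8)  # maximally diffuse attention: entropy log(8)
peaked = np.eye(8)[:1]            # attention focused on one token: entropy ~0
```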

Weight Drift Monitor

L2 distance from a baseline checkpoint to detect drift

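Per-tensor drift reduces to an L2 norm of the difference against the baseline checkpoint; the tensor names and values below are illustrative:

```python
import numpy as np

def weight_drift(current: dict, baseline: dict) -> dict:
    """Per-tensor L2 distance from a baseline checkpoint."""
    return {name: float(np.linalg.norm(current[name] - baseline[name]))
            for name in baseline}

baseline = {"w": np.zeros((4, 4))}
drift = weight_drift({"w": np.full((4, 4), 0.5)}, baseline)  # sqrt(16 * 0.25) = 2.0
```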

Model Comparison

Side-by-side weight statistics for two model handles

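A side-by-side comparison of weight statistics can be sketched as below; the two handles are stand-in random tensors, and the statistic set (mean, std, min, max) is an assumption rather than the toolkit's exact metric list:

```python
import numpy as np

def weight_stats(w: np.ndarray) -> dict:
    """Summary statistics for one flattened weight tensor."""
    return {"mean": float(w.mean()), "std": float(w.std()),
            "min": float(w.min()), "max": float(w.max())}

rng = np.random.default_rng(5)
model_a = rng.normal(0.0, 0.02, 4096)   # stand-in weights for handle A
model_b = rng.normal(0.0, 0.05, 4096)   # stand-in weights for handle B
report = {"model_a": weight_stats(model_a), "model_b": weight_stats(model_b)}
```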