MAIIAM CONSCIOUSNESS TOOLKIT

AI Training

9 instruments · LoRA fine-tuning · Training dynamics · Catastrophic forgetting detection

TRAINING LOSS
FINDINGS
Final train loss: 0.2636
Final val loss: 0.3721
Train/val gap: 0.1084
Convergence: Mild overfitting risk
Training converged to a final loss of 0.264 over 100 steps, with a small train/val divergence; consider early stopping or dropout regularisation to close the gap.
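The early-stopping suggestion in the findings can be sketched as a patience-based monitor on the validation loss. A minimal sketch (function name and parameters are hypothetical, not the toolkit's API):

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True once val loss has not improved by min_delta for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    # best value seen before the patience window
    best_before = min(val_losses[:-patience])
    # stop only if every recent epoch failed to beat it
    return all(v >= best_before - min_delta for v in val_losses[-patience:])
```

With the reported 0.108 train/val gap, such a monitor would halt training as soon as validation loss plateaus while train loss keeps falling.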
All Instruments

LoRA Fine-Tune

Low-rank adaptation fine-tuning with custom dataset

AI TRAINING
Defaults: 8 (range 1–64) · 32 (range 1–128) · 3 (range 1–20)
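The core of low-rank adaptation is that the frozen weight W is perturbed by a product of two small matrices, y = xW + (α/r)·xAB, with B initialised to zero so training starts from the base model. A minimal list-of-lists sketch under those standard LoRA conventions (names hypothetical, not the toolkit's implementation):

```python
def matmul(X, Y):
    # naive matrix product for small list-of-lists matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, alpha=32, r=8):
    """y = x W + (alpha/r) * x A B.
    W: d_in x d_out (frozen), A: d_in x r, B: r x d_out (B starts at zero)."""
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * d for b, d in zip(rb, rd)] for rb, rd in zip(base, delta)]
```

Because B is zero at initialisation, the adapted model reproduces the base model exactly on step 0; only the r·(d_in + d_out) adapter entries are trained.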

Training Dynamics

Loss landscape, gradient norm, and learning rate analysis

AI TRAINING
Default: 0.0001 (range 0.000001–0.01)
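Two of the quantities this instrument tracks have simple closed forms: the global gradient norm is the L2 norm over all parameter gradients flattened together, and a cosine schedule decays the learning rate from lr_max to lr_min. A minimal sketch (function names hypothetical):

```python
import math

def global_grad_norm(grads):
    # L2 norm over every gradient entry, all tensors flattened together
    return math.sqrt(sum(g * g for tensor in grads for g in tensor))

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=1e-5):
    # cosine decay from lr_max (step 0) to lr_min (final step)
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))
```

Plotting the global norm per step alongside the scheduled LR is what typically reveals exploding gradients or a warmup that ends too early.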

Catastrophic Forgetting

EWC-based detection of forgetting on original task distribution

AI TRAINING
Default: 0.4 (range 0–1)
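Elastic weight consolidation penalises movement away from the original task's weights in proportion to the Fisher information of each parameter: L_EWC = (λ/2) Σᵢ Fᵢ(θᵢ − θ*ᵢ)². A minimal sketch of that penalty term (flat parameter lists for brevity; names hypothetical):

```python
def ewc_penalty(theta, theta_star, fisher, lam=0.4):
    """(lam/2) * sum_i F_i * (theta_i - theta*_i)^2.
    theta_star: weights after the original task; fisher: per-parameter importance."""
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher)
    )
```

A rising penalty during fine-tuning flags parameters that were important for the original task drifting away, i.e. incipient catastrophic forgetting.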

Gradient Checkpoint Profiler

Measures memory savings from gradient checkpointing

AI TRAINING
Default: 4 (range 1–32)
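The savings being profiled follow from a simple count: without checkpointing, all L layer activations are stored for the backward pass; with segments of size k, only one boundary activation per segment is kept and at most k layers are recomputed at a time, giving roughly L/k + k live activations, minimised near k = √L. A back-of-envelope sketch (names hypothetical, not the profiler's actual measurement):

```python
import math

def activation_memory(num_layers, checkpoint_every=None):
    # stored activations: every layer without checkpointing;
    # with segment size k: one boundary per segment + k recomputed layers
    if checkpoint_every is None:
        return num_layers
    segments = math.ceil(num_layers / checkpoint_every)
    return segments + checkpoint_every

def best_segment(num_layers):
    # L/k + k is minimised near k = sqrt(L)
    return round(math.sqrt(num_layers))
```

For a 64-layer model this predicts 64 stored activations without checkpointing versus 16 with segments of 8, a 4x saving, at the cost of one extra forward pass worth of recomputation.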

Learning Rate Finder

Leslie Smith's cyclical learning-rate range test with loss curve

AI TRAINING
Default: 100 (range 20–500)
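Smith's range test sweeps the learning rate exponentially from a tiny value to a large one over a fixed number of iterations while recording the loss; a common heuristic then suggests the LR where loss falls fastest. A minimal sketch of that sweep and suggestion (names hypothetical):

```python
def lr_schedule(lr_min, lr_max, num_iters):
    # exponentially spaced learning rates from lr_min to lr_max
    ratio = lr_max / lr_min
    return [lr_min * ratio ** (i / (num_iters - 1)) for i in range(num_iters)]

def suggest_lr(lrs, losses):
    # steepest-descent heuristic: LR at which the loss drops fastest
    slopes = [losses[i + 1] - losses[i] for i in range(len(losses) - 1)]
    return lrs[slopes.index(min(slopes))]
```

In practice the suggestion is taken somewhat below the LR where loss starts to diverge, since the steepest point sits near the edge of stability.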

Dataset Quality Audit

Perplexity, deduplication, and contamination checks

AI TRAINING
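Two of the three checks this audit names have compact exact forms: deduplication by hashing normalised text, and contamination as the fraction of evaluation examples sharing any long word n-gram with the training set. A minimal sketch of both (names and the n-gram length are illustrative assumptions):

```python
import hashlib

def dedupe(examples):
    # drop exact duplicates by normalised-text hash, keeping first occurrence
    seen, kept = set(), []
    for text in examples:
        h = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(text)
    return kept

def contamination_rate(train, test, n=8):
    # fraction of test examples sharing any word n-gram with the training set
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    train_grams = set().union(*(ngrams(t) for t in train)) if train else set()
    hits = sum(1 for t in test if ngrams(t) & train_grams)
    return hits / len(test) if test else 0.0
```

The perplexity check would additionally require a reference language model, so it is omitted from this sketch.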

Checkpoint Delta

Parameter diff between two training checkpoints

AI TRAINING
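A parameter diff between two checkpoints usually reports, per tensor, the L2 distance between the two versions and that distance relative to the original tensor's norm, which makes changes comparable across layers of different scale. A minimal sketch over flat per-parameter lists (names hypothetical):

```python
import math

def checkpoint_delta(ckpt_a, ckpt_b):
    # per-parameter L2 distance and relative change between two state dicts
    report = {}
    for name, a in ckpt_a.items():
        b = ckpt_b[name]
        dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        norm = math.sqrt(sum(x * x for x in a)) or 1.0  # guard zero tensors
        report[name] = {"l2": dist, "relative": dist / norm}
    return report
```

Sorting the report by relative change quickly surfaces which layers a fine-tune actually moved.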

Optimizer State Inspector

Inspect AdamW moment estimates and effective learning rates

AI TRAINING

No configuration required — ready to run.
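The "effective learning rate" an AdamW state exposes is the per-parameter update magnitude lr·m̂/(√v̂ + ε), where m̂ and v̂ are the bias-corrected first and second moment estimates. A minimal sketch of that computation from raw moments (names hypothetical, not the inspector's API):

```python
import math

def adam_effective_step(m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # per-parameter update magnitude: lr * m_hat / (sqrt(v_hat) + eps)
    m_hat = [mi / (1 - beta1 ** step) for mi in m]   # bias-corrected 1st moment
    v_hat = [vi / (1 - beta2 ** step) for vi in v]   # bias-corrected 2nd moment
    return [lr * mh / (math.sqrt(vh) + eps) for mh, vh in zip(m_hat, v_hat)]
```

Parameters whose effective step collapses toward zero (large v̂ relative to m̂) are effectively frozen, which is exactly what an optimizer-state inspection looks for.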

Training Replay

Replay saved training logs with interactive loss visualization

AI TRAINING
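Interactive loss plots are almost always drawn through a smoothing pass so the raw per-step noise does not hide the trend; a bias-corrected exponential moving average is the standard choice. A minimal sketch of such a smoother applied to a replayed loss series (name and default beta are illustrative assumptions):

```python
def smooth(losses, beta=0.9):
    # bias-corrected exponential moving average of a loss series
    avg, out = 0.0, []
    for i, loss in enumerate(losses, start=1):
        avg = beta * avg + (1 - beta) * loss
        out.append(avg / (1 - beta ** i))  # correct early-step bias toward 0
    return out
```

The bias correction matters only for the first few points; without it the smoothed curve would start near zero regardless of the actual loss.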