AION-Torch: Adaptive Input/Output Normalization
AION-Torch is a PyTorch library that implements Adaptive Input/Output Normalization (AION), a method for stabilizing deep neural networks. AION automatically adjusts residual connections to prevent vanishing and exploding gradients, enabling stable training of very deep networks with minimal configuration.
🚀 Features
- Adaptive Residual Scaling: Automatically adjusts residual connection strength based on signal statistics
- Stable Deep Training: Prevents vanishing/exploding gradients even in networks with 1000+ layers
- Drop-in Replacement: Works with any architecture using residual connections (Transformers, ResNets, etc.)
- Distributed Ready: Fully supports DDP with synchronized statistics across all GPUs
- Zero Config: Sensible defaults work out-of-the-box, no hyperparameter tuning needed
📦 Installation
From PyPI
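Assuming the package is published on PyPI under the name `aion-torch` (matching the `aion_torch` import used below):

```bash
pip install aion-torch
```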
⚡ Quick Start
1. The AionBlock (Recommended)
The easiest way to use AION is to replace your standard residual blocks with AionBlock. It implements the Pre-LayerNorm pattern augmented with AION scaling.
```python
import torch
import torch.nn as nn

from aion_torch import AionBlock

# Define your transformation layer (e.g., Attention or MLP)
mlp_layer = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)

# Wrap it in an AionBlock
# Structure: x + alpha * layer(norm(x))
block = AionBlock(layer=mlp_layer, dim=512)

# Forward pass
x = torch.randn(8, 128, 512)
output = block(x)
```
2. Low-Level AionResidual
For custom architectures, you can use the AionResidual adapter directly.
```python
import torch.nn as nn

from aion_torch import AionResidual

class MyLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Linear(dim, dim)
        # Initialize AION adapter
        self.aion = AionResidual(alpha0=0.1, beta=0.05)

    def forward(self, x):
        residual = x
        x_norm = self.norm(x)
        y = self.ffn(x_norm)
        # Apply adaptive residual connection
        # Formula: x + alpha * y
        return self.aion(residual, y)
```
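Such layers can then be stacked to build a deep model. For example (a sketch reusing the `MyLayer` class defined above):

```python
import torch
import torch.nn as nn

# Stack many AION-stabilized layers; MyLayer is the class defined above.
model = nn.Sequential(*[MyLayer(dim=512) for _ in range(64)])

x = torch.randn(8, 128, 512)
out = model(x)  # shape: (8, 128, 512)
```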
🧠 How It Works
AION adaptively scales each residual connection with a factor that responds to a measured signal ratio, where ratio is the relative magnitude of the transformation output compared to the input. When the network becomes unstable (high ratio), AION automatically reduces the scaling factor; when it is stable (low ratio), it uses a stronger connection.
Key insight: By maintaining balanced signal propagation, AION ensures gradients flow stably through arbitrarily deep networks without exponential growth or decay.
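The exact update rule is implemented by `AionResidual`. As a rough sketch, assuming the ratio is the norm of the transformation output over the norm of the input and an illustrative damping form `alpha = alpha0 / (1 + beta * ratio)` (the library's actual functional form may differ), the behavior looks like:

```python
import torch

def aion_residual_sketch(x, y, alpha0=0.1, beta=0.05):
    """Illustrative adaptive residual update: x + alpha * y."""
    # ratio: relative magnitude of the transformation output vs. the input
    ratio = y.norm() / (x.norm() + 1e-6)
    # High ratio (unstable) -> alpha shrinks; low ratio (stable) -> alpha stays near alpha0
    alpha = alpha0 / (1.0 + beta * ratio)
    return x + alpha * y

x = torch.randn(8, 128, 512)
y = 10.0 * torch.randn(8, 128, 512)   # a disproportionately large transformation output
out = aion_residual_sketch(x, y)      # the contribution of y is automatically damped
```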
AION as the General Form
Mathematically, other stabilization methods can be viewed as special cases of the AION formula in which the adaptive control term is fixed or removed:
| Method | Behavior |
|---|---|
| DeepNorm | Fixed static scaling based on depth |
| Pre-LN | No scaling (identity) |
| ReZero | Learnable static scalar |
| AION | Dynamic adaptation based on signal energy |
AION generalizes these approaches by adding a control term that adapts the scaling factor to the measured signal ratio, replacing a static choice with a dynamic one.
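To make the table concrete, here is a loose sketch of the different scaling choices in the generic residual update `x + alpha * y`. The constants and the AION damping form below are illustrative assumptions, not the exact formulations of these methods (DeepNorm, in particular, scales the identity branch in its original formulation):

```python
import torch
import torch.nn as nn

# Pre-LN: the transformed branch is added unscaled.
alpha_pre_ln = 1.0

# ReZero: a learnable scalar, initialized to zero and trained with the network.
alpha_rezero = nn.Parameter(torch.zeros(1))

# DeepNorm-style: a static constant derived from the network depth (illustrative value).
num_layers = 48
alpha_deepnorm = (2 * num_layers) ** 0.25

# AION: recomputed on every forward pass from the measured signal ratio.
def alpha_aion(x, y, alpha0=0.1, beta=0.05):
    ratio = y.norm() / (x.norm() + 1e-6)
    return alpha0 / (1.0 + beta * ratio)
```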
📚 Documentation
For the theoretical foundation and mathematical proofs, see the following documents:
- Balance Theory - Core theoretical foundation for AION
These are more general math papers that inspired the ideas, but they are not required to use the library.
🤝 Contributing
Contributions are welcome! Please read our Contributing Guide (coming soon) and check out the issues.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
📜 License
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ for the ML community