¹Ben-Gurion University, ²NVIDIA, ³Technion
*Equal contribution
Abstract
Neural networks are famously nonlinear. However, linearity is defined relative to a pair of vector spaces, $f:\mathcal{X}\rightarrow\mathcal{Y}$. Is it possible to identify a pair of non-standard vector spaces for which a conventionally nonlinear function is, in fact, linear? This paper introduces a method that makes such vector spaces explicit by construction. We find that if we sandwich a linear operator $A$ between two invertible neural networks $g_x$ and $g_y$, such that $f(x)=g_y^{-1}(A\,g_x(x))$, the corresponding vector spaces are induced by newly defined operations. This framework makes the entire arsenal of linear algebra applicable to nonlinear mappings. We demonstrate this by collapsing diffusion model sampling into a single step, enforcing global idempotency for projective generative models, and enabling modular style transfer.
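To make the construction concrete, below is a minimal, self-contained sketch, not the paper's implementation. An element-wise cube stands in for the invertible networks $g_x$ and $g_y$, the operator $A$ is a random matrix, and the induced operations shown (addition and scaling pulled back through $g_x$ and $g_y$) are one natural choice consistent with the abstract; all names are illustrative.

import torch

torch.manual_seed(0)
d = 4

# Stand-ins for the invertible networks: an element-wise cube is invertible
# on all of R and has a closed-form inverse. In the paper these would be
# invertible neural networks g_x and g_y.
def g_x(v): return v ** 3
def g_x_inv(v): return v.sign() * v.abs() ** (1.0 / 3.0)
g_y, g_y_inv = g_x, g_x_inv  # same toy map on the output side

A = torch.randn(d, d)  # the linear operator sandwiched between g_x and g_y

def f(x):
    # f(x) = g_y^{-1}(A g_x(x))
    return g_y_inv(g_x(x) @ A.T)

# Induced vector-space operations (illustrative): standard addition and
# scaling pulled back through g_x on the input side and g_y on the output side.
def x_add(x1, x2): return g_x_inv(g_x(x1) + g_x(x2))
def x_scale(c, x): return g_x_inv(c * g_x(x))
def y_add(y1, y2): return g_y_inv(g_y(y1) + g_y(y2))
def y_scale(c, y): return g_y_inv(c * g_y(y))

# Linearity check with respect to the induced operations:
# f(c (.) x1 (+) x2) == c (.) f(x1) (+) f(x2)
x1, x2, c = torch.randn(d), torch.randn(d), 2.5
lhs = f(x_add(x_scale(c, x1), x2))
rhs = y_add(y_scale(c, f(x1)), f(x2))
print(torch.allclose(lhs, rhs, atol=1e-3))  # True, up to floating-point error

The check passes because both sides reduce to $g_y^{-1}(c\,A\,g_x(x_1) + A\,g_x(x_2))$; the nonlinearity of $f$ in the ordinary sense disappears once the operations are redefined.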
Key Results
One-Step Flow Matching & Inversion
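A toy sketch of why sampling can collapse into a single step, under the assumption (for illustration only) that every sampling step shares the same invertible map g and differs only in its linear operator A_t: composing linearized steps just multiplies their matrices, so K steps can be precomputed into one.

import torch

torch.manual_seed(0)
d, K = 4, 10

# Toy invertible map standing in for the shared network g.
def g(v): return v ** 3
def g_inv(v): return v.sign() * v.abs() ** (1.0 / 3.0)

# One (near-identity) linear operator per sampling step.
As = [torch.eye(d) + 0.1 * torch.randn(d, d) for _ in range(K)]
x = torch.randn(d)

# K-step sampling: apply each linearized step in turn.
z = g(x)
for A in As:
    z = z @ A.T
multi_step = g_inv(z)

# One-step sampling: pre-multiply the operators once, apply them all at once.
A_total = torch.eye(d)
for A in As:
    A_total = A @ A_total  # A_K ... A_2 A_1
one_step = g_inv(g(x) @ A_total.T)

print(torch.allclose(multi_step, one_step, atol=1e-3))  # identical results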
Globally Projective Generative Model
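A similarly hedged sketch of global projectivity: if A is chosen to be a projection (A A = A) and the same invertible map g is used on both sides, then f(f(x)) = f(x) holds for every input, not just near the training data. The construction of A and the toy map g below are illustrative assumptions, not the paper's code.

import torch

torch.manual_seed(0)
d, r = 6, 3

# Toy invertible map standing in for the network g (used on both sides here).
def g(v): return v ** 3
def g_inv(v): return v.sign() * v.abs() ** (1.0 / 3.0)

# A rank-r orthogonal projection: A = U U^T with orthonormal columns U,
# so A @ A == A (up to floating point).
U, _ = torch.linalg.qr(torch.randn(d, r))
A = U @ U.T

def f(x):
    # f(x) = g^{-1}(A g(x)); with A a projection, f o f = f everywhere.
    return g_inv(g(x) @ A.T)

x = torch.randn(d)
print(torch.allclose(f(f(x)), f(x), atol=1e-3))  # True: globally idempotent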
Citation
@misc{berman2025linearizer,
  title        = {Who Said Neural Networks Aren't Linear?},
  author       = {Nimrod Berman and Assaf Hallak and Assaf Shocher},
  year         = {2025},
  note         = {Preprint, under review},
  howpublished = {GitHub: assafshocher/Linearizer},
  url          = {https://github.com/assafshocher/Linearizer}
}
Acknowledgements
We thank Amil Dravid and Yoad Tewel for insightful discussions. A.S. is supported by the Chaya Career Advancement Chair.
© 2025 Nimrod Berman, Assaf Hallak, & Assaf Shocher