Draft:Kolmogorov-Arnold Network (KAN)


A Kolmogorov–Arnold Network (KAN) is a type of feedforward neural network inspired by the Kolmogorov–Arnold representation theorem. Whereas a multilayer perceptron (MLP) applies a fixed activation function at each neuron and learns scalar weights on its edges, a KAN places learnable univariate functions, typically parameterized as splines, on its edges; each neuron simply sums the outputs of its incoming edge functions.[1]
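To make the edge-function idea concrete, the following is a minimal sketch of a single KAN-style layer in plain NumPy. It is illustrative only: it uses piecewise-linear splines on a shared knot grid rather than the cubic B-splines (plus a SiLU base term) described in the paper, and the function name kan_layer and its parameter layout are assumptions, not the authors' API.

```python
import numpy as np

def kan_layer(x, grid, coeffs):
    """Evaluate one KAN-style layer (illustrative sketch).

    x      : (n_in,) input vector
    grid   : (n_knots,) shared knot positions for every edge spline
    coeffs : (n_out, n_in, n_knots) learnable spline values at the knots,
             one univariate function phi_{j,i} per edge i -> j

    Each output is y_j = sum_i phi_{j,i}(x_i): the nodes only sum,
    while the learnable nonlinearities live on the edges.
    """
    n_out, n_in, _ = coeffs.shape
    y = np.zeros(n_out)
    for j in range(n_out):
        for i in range(n_in):
            # np.interp evaluates the piecewise-linear spline phi_{j,i} at x_i
            y[j] += np.interp(x[i], grid, coeffs[j, i])
    return y

# Toy usage: a 2 -> 3 layer with random spline coefficients
rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 7)           # 7 knots on [-1, 1]
coeffs = rng.normal(size=(3, 2, grid.size))
x = np.array([0.3, -0.7])
print(kan_layer(x, grid, coeffs))          # (3,) output vector
```

In a trained KAN the spline coefficients (here `coeffs`) are the learned parameters, playing the role that scalar edge weights play in an MLP.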

References

  1. Liu, Ziming; Wang, Yixuan; Vaidya, Sachin; Ruehle, Fabian; Halverson, James; Soljačić, Marin; Hou, Thomas Y.; Tegmark, Max (2024-05-02). "KAN: Kolmogorov-Arnold Networks". arXiv:2404.19756 [cs.LG].
Further reading
  • Nadis, Steve (2024-09-11). "Novel Architecture Makes Neural Networks More Understandable". Quanta Magazine. Retrieved 2024-09-12.
  • Liu, Ziming; Ma, Pingchuan; Wang, Yixuan; Matusik, Wojciech; Tegmark, Max (2024). "KAN 2.0: Kolmogorov-Arnold Networks Meet Science". arXiv:2408.10205 [cs.LG].
  • Wang, Yizheng; Sun, Jia; Bai, Jinshuai; Anitescu, Cosmin; Eshaghi, Mohammad Sadegh; Zhuang, Xiaoying; Rabczuk, Timon; Liu, Yinghua (2024). "Kolmogorov Arnold Informed neural network: A physics-informed deep learning framework for solving forward and inverse problems based on Kolmogorov Arnold Networks". arXiv:2406.11045 [cs.LG].
  • Yu, Runpeng; Yu, Weihao; Wang, Xinchao (2024). "KAN or MLP: A Fairer Comparison". arXiv:2407.16674 [cs.LG].