Different Neural Networks

$$\gdef \sam #1 {\mathrm{softargmax}(#1)}$$ $$\gdef \vect #1 {\boldsymbol{#1}} $$ $$\gdef \matr #1 {\boldsymbol{#1}} $$ $$\gdef \E {\mathbb{E}} $$ $$\gdef \V {\mathbb{V}} $$ $$\gdef \R {\mathbb{R}} $$ $$\gdef \N {\mathbb{N}} $$ $$\gdef \relu #1 {\texttt{ReLU}(#1)} $$ $$\gdef \D {\,\mathrm{d}} $$ $$\gdef \deriv #1 #2 {\frac{\D #1}{\D #2}}$$ $$\gdef \pd #1 #2 {\frac{\partial #1}{\partial #2}}$$ $$\gdef \set #1 {\left\lbrace #1 \right\rbrace} $$

Here we present different neural network types and structures.

  1. Abstract
  2. Physics-Informed NN (PINN)
    • Neural ODE (NODE)
    • PINNs
    • RNN vs ODE
  3. Kolmogorov-Arnold Networks (KANs)
  4. Liquid Neural Networks
  5. Spiking Neural Networks (SNNs)
  6. Capsule Networks
  7. Inductive bias
  8. Summary

Abstract

Physics-Informed NN (PINN)

Background on ODEs/PDEs, with great illustrated videos, can be found in this playlist.

Neural ODE (NODE)

  • Special NNs, such as Hamiltonian NNs for Hamilton's equations or Lagrangian NNs for the Euler-Lagrange equations, are presented in the paper “Hamiltonian neural networks for solving equations of motion” by Marios Mattheakis et al. and in the video. (A minimal NODE sketch in code is given after this list.)
  • This video is also based on this article, this article, and this article.
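
To make the NODE idea concrete, below is a minimal sketch, assuming a plain PyTorch setup: a small MLP parameterizes the vector field $\dot{\vect{x}} = f_{\theta}(\vect{x})$, and a fixed-step Euler solver unrolls the dynamics so that gradients flow through the solver by ordinary backpropagation. The network sizes, solver, and training targets are illustrative assumptions, not taken from the cited sources; the original NODE paper uses adaptive solvers and the adjoint method (e.g. via the `torchdiffeq` library).

```python
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """MLP that parameterizes the vector field dx/dt = f_theta(x)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)


def odeint_euler(f, x0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step explicit Euler integration of dx/dt = f(x).

    Gradients w.r.t. f's parameters flow through the unrolled steps
    (backprop through the solver) rather than through an adjoint ODE.
    """
    dt = (t1 - t0) / steps
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x


# Usage: fit the state at t1 to some hypothetical target points.
func = ODEFunc(dim=2)
opt = torch.optim.Adam(func.parameters(), lr=1e-2)
x0 = torch.randn(32, 2)        # batch of initial conditions
target = torch.randn(32, 2)    # hypothetical supervision on x(t1)
for _ in range(100):
    opt.zero_grad()
    loss = ((odeint_euler(func, x0) - target) ** 2).mean()
    loss.backward()
    opt.step()
```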

PINNs

  • This video is also based on this article.
  • Note that $f$, the dynamics function, is referred to as a vector field. Why? Because it determines the trajectory (through time for an ODE, and also through space for a PDE) of any “particle” starting from some initial position; see the sketch after this list.
  • Enforcing causality in PINNs is described in the paper.
  • PINNs can be extended to fractional PINNs, which handle integrals and fractional (non-integer-order) derivatives. There are also delta-PINNs, which incorporate a geometric prior of the problem. See more in the video.
  • A full online course about physics-informed machine learning (PI-ML) is available in the playlist.
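
To illustrate the residual idea behind PINNs on the simplest possible case, here is a minimal sketch, assuming PyTorch and the toy ODE $\dot{u} = -u$ with $u(0) = 1$ (exact solution $e^{-t}$): a small network $u_{\theta}(t)$ is trained so that the physics residual $\dot{u}_{\theta} + u_{\theta}$ vanishes at collocation points, with an extra penalty on the initial condition. The network, optimizer, and collocation grid are assumptions for illustration, not the setup from the cited video or articles.

```python
import torch
import torch.nn as nn

# PINN for the toy ODE du/dt = -u with u(0) = 1 (exact solution: exp(-t)).
u_net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)

t_colloc = torch.linspace(0.0, 5.0, 200).reshape(-1, 1)  # collocation points
t0 = torch.zeros(1, 1)                                    # initial time

for _ in range(2000):
    opt.zero_grad()

    # Physics residual: du/dt + u should vanish at the collocation points.
    t = t_colloc.clone().requires_grad_(True)
    u = u_net(t)
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_physics = ((du_dt + u) ** 2).mean()

    # Initial-condition penalty: u(0) = 1.
    loss_ic = ((u_net(t0) - 1.0) ** 2).mean()

    loss = loss_physics + loss_ic
    loss.backward()
    opt.step()
```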

RNN vs ODE

  • Note that NODEs and PINNs without free parameters are autonomous systems, i.e., they are determined deterministically by the initial conditions (ICs) and boundary conditions (BCs). Parameters, however, act like inputs to the system: they intervene in its dynamics, just as inputs do in RNNs, which in turn affects the output (see the sketch below). This idea is also illustrated by the state plane, and the input and output vectors interacting with it, in the State space section.
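
A small sketch of this contrast, assuming PyTorch (the dimensions and the Euler step are illustrative): the autonomous system's trajectory is fixed once its initial condition is chosen, whereas in the RNN an external input $u_t$ enters every state update, playing the role of a parameter or forcing term on the right-hand side.

```python
import torch
import torch.nn as nn

dim, dt, steps = 4, 0.1, 50

# Autonomous system: once the IC x0 is fixed, the whole trajectory is fixed.
f = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))
x = torch.randn(1, dim)              # initial condition (IC)
for _ in range(steps):
    x = x + dt * f(x)                # no external input anywhere

# Input-driven system: an RNN cell, where the input u_t "intervenes" in the
# state update at every step, like a forcing/parameter term added to the
# right-hand side of the ODE.
cell = nn.RNNCell(input_size=dim, hidden_size=dim)
h = torch.zeros(1, dim)              # initial state
inputs = torch.randn(steps, 1, dim)  # external input sequence u_t
for u_t in inputs:
    h = cell(u_t, h)                 # h_{t+1} = f(h_t, u_t)
```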

Kolmogorov-Arnold Networks (KANs)

Liquid Neural Networks

Spiking Neural Networks (SNNs)

Capsule Networks

Inductive bias

Summary