Deep learning architecture decision guide


Common architecture combinations

Autoencoder + Neural ODE
Compress high-dimensional state to a latent space, then learn continuous-time dynamics in that space (Latent Neural ODE).
Use for: Reduced-order modeling of CFD, climate, or structural dynamics when full-order simulation is too expensive.
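A minimal NumPy sketch of the latent-ODE pattern: encode, integrate continuous-time dynamics in the latent space, decode. The dimensions, the linear encoder/decoder E and D, and the latent vector field f are all toy stand-ins for trained networks, and the hand-rolled RK4 step stands in for a proper adaptive solver (in practice a library such as torchdiffeq handles the solve and backpropagation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 64-D full state, 4-D latent space (assumed values).
n_full, n_latent = 64, 4

# Stand-ins for a trained encoder/decoder (linear here for brevity).
E = rng.normal(size=(n_latent, n_full)) / np.sqrt(n_full)
D = rng.normal(size=(n_full, n_latent)) / np.sqrt(n_latent)

# Stand-in for the learned latent vector field f_theta(z).
W = rng.normal(size=(n_latent, n_latent)) * 0.1
f = lambda z: np.tanh(W @ z)

def rk4_step(z, dt):
    """One classical RK4 step of dz/dt = f(z)."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x0 = rng.normal(size=n_full)   # initial full-order state
z = E @ x0                     # encode to latent space
for _ in range(100):           # integrate latent dynamics forward
    z = rk4_step(z, dt=0.01)
x_pred = D @ z                 # decode back to the full state
print(x_pred.shape)            # (64,)
```

The payoff is that the 100 integration steps happen in 4 dimensions instead of 64 (or, in a real CFD setting, instead of millions of degrees of freedom).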
Autoencoder + Neural Koopman
Encoder lifts state to coordinates where dynamics are linear (zₖ₊₁ = Kzₖ). Enables spectral analysis and linear control design.
Use for: Modal decomposition of fluid flows, vibration analysis, MPC/LQR control in the latent space.
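The linear-latent-dynamics idea reduces to a matrix power once the encoder has done its job, which is what makes spectral analysis and linear control tools applicable. A toy sketch (the encoder E and operator K are random stand-ins for trained components; K is rescaled to be stable purely for this demo):

```python
import numpy as np

rng = np.random.default_rng(1)
n_full, n_latent = 32, 6       # assumed toy dimensions

# Stand-in for the trained lifting encoder.
E = rng.normal(size=(n_latent, n_full)) / np.sqrt(n_full)

# Learned Koopman operator K: latent dynamics are exactly linear,
# z_{k+1} = K z_k. Rescaled here so the spectral radius is 0.9.
K = rng.normal(size=(n_latent, n_latent))
K *= 0.9 / np.max(np.abs(np.linalg.eigvals(K)))

z = E @ rng.normal(size=n_full)
for _ in range(50):
    z = K @ z                  # rollout is just repeated linear maps

# Spectral analysis: Koopman eigenvalues give modal frequencies and
# growth/decay rates; |lambda| < 1 means a decaying mode.
eigvals = np.linalg.eigvals(K)
print(np.max(np.abs(eigvals)))
```

Because the latent model is a plain linear system, standard LQR or MPC machinery can be applied to (K, plus a learned actuation matrix) directly.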
CNN / GNN encoder + latent dynamics
Use CNN (regular grids) or GNN (meshes) to encode spatial fields, then propagate in latent space with ODE, Koopman, or transformer.
Use for: Spatiotemporal forecasting — weather, turbulence, structural response under dynamic loads.
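The encode-propagate loop can be sketched end to end. Here patch averaging stands in for a CNN encoder on a regular grid, and a single nonlinear linear-map step stands in for the latent propagator (which could be an ODE, a Koopman operator, or a transformer); all sizes are assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy spatiotemporal setup: one 16x16 field on a regular grid.
field = rng.normal(size=(16, 16))

def encode(u):
    """Stand-in for a CNN encoder: 4x4 patch means -> 16-D latent."""
    return u.reshape(4, 4, 4, 4).mean(axis=(1, 3)).ravel()

# Stand-in latent propagator (one learned-looking nonlinear step).
A = rng.normal(size=(16, 16)) * 0.2

z = encode(field)
trajectory = [z]
for _ in range(10):            # forecast 10 steps, all in latent space
    z = np.tanh(A @ z)
    trajectory.append(z)
print(len(trajectory), trajectory[-1].shape)
```

On an unstructured mesh the encoder would be a GNN over mesh nodes instead of patch pooling, but the propagation loop is unchanged, which is the point of the split.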
GNN + PINN loss
GNN handles irregular mesh topology while a physics-informed loss enforces PDE residuals at each node.
Use for: Mesh-based simulation surrogates that generalize across geometries (MeshGraphNets-style).
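The key mechanic is that the mesh topology defines both the GNN's message passing and where the PDE residual is evaluated. A toy sketch for a steady diffusion equation on an irregular 5-node mesh (the node values, sensor data, and loss weight 0.1 are all assumed placeholders; a real setup would backpropagate this loss into the GNN):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy mesh: 5 nodes with irregular connectivity given by an edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5

# Graph Laplacian L built from the mesh topology (the same structure
# the GNN uses for message passing).
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

u_pred = rng.normal(size=n)    # stand-in for GNN node outputs
u_data = rng.normal(size=n)    # stand-in for sparse measurements

# Physics-informed loss for the toy steady diffusion PDE L u = 0:
# the residual is evaluated per node and added to the data-fit term.
data_loss = np.mean((u_pred - u_data) ** 2)
pde_residual = L @ u_pred
physics_loss = np.mean(pde_residual ** 2)
total_loss = data_loss + 0.1 * physics_loss   # 0.1: assumed weight
print(total_loss)
```

Because the residual is assembled from the edge list rather than a fixed grid, the same loss code applies unchanged when the mesh (and hence the geometry) changes.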
FNO / DeepONet + physics loss
Neural operator learns the solution map between function spaces; physics loss regularizes to ensure PDE consistency.
Use for: Fast surrogates for parametric PDEs (varying BCs, geometry, material properties) with physical guarantees.
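The core FNO mechanism, filtering in Fourier space with learned weights on a truncated set of modes, fits in a few lines. This sketch uses one spectral layer with random stand-in weights and a finite-difference residual of a placeholder PDE (u'' + u = 0) as the physics term; grid size, mode count, and the PDE itself are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, modes = 64, 8               # grid size, retained Fourier modes

# Stand-in for the learned complex spectral weights of one layer.
W = rng.normal(size=modes) + 1j * rng.normal(size=modes)

def fourier_layer(a):
    """Core FNO idea: transform, weight low modes, discard the rest,
    transform back. Acts on whole functions, not fixed vectors."""
    a_hat = np.fft.rfft(a)
    out_hat = np.zeros_like(a_hat)
    out_hat[:modes] = W * a_hat[:modes]
    return np.fft.irfft(out_hat, n=n)

# Input function sampled on a periodic grid.
a = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
u = fourier_layer(a)           # predicted solution function

# Physics regularizer: finite-difference residual of the toy PDE
# u'' + u = 0 on the periodic grid (placeholder for the real operator).
dx = 2 * np.pi / n
u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
physics_loss = np.mean((u_xx + u) ** 2)
print(u.shape, np.isfinite(physics_loss))
```

Mode truncation is also what makes a trained FNO resolution-invariant: the same weights W apply at any grid size n large enough to hold the retained modes.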
KAN as drop-in MLP replacement
Replace MLP layers inside PINNs, autoencoders, or DeepONet with KAN layers for improved interpretability.
Use for: Problems where extracting a symbolic governing equation from the learned model is valuable.
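The structural difference from an MLP is that a KAN layer puts a learnable 1-D function on each edge instead of a scalar weight. A toy sketch where each edge function is a two-term basis expansion (real KANs use B-splines; the class name, basis choice, and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

class KANLayer:
    """Toy KAN layer: edge (i -> o) applies its own 1-D function
    phi_oi(x) = c0*x + c1*sin(x), and outputs sum over inputs.
    Maps n_in -> n_out, so it can slot in where an MLP layer would."""
    def __init__(self, n_in, n_out):
        # Per-edge coefficients: shape (n_out, n_in, n_basis).
        self.c = rng.normal(size=(n_out, n_in, 2)) * 0.1

    def __call__(self, x):
        feats = np.stack([x, np.sin(x)], axis=-1)  # (n_in, 2)
        return np.einsum('oib,ib->o', self.c, feats)

layer = KANLayer(4, 3)
y = layer(rng.normal(size=4))
print(y.shape)                 # (3,)
```

Interpretability comes from inspecting each learned phi_oi after training: a single edge function that looks like sin, exp, or a power law is a direct hint toward a symbolic term in the governing equation.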