Learn JAX with Real Code Examples
Updated Nov 24, 2025
Practical Examples
Compute the gradient of a scalar function with `grad` (see the sketches after this list)
Train a simple neural network using JAX arrays and `grad` (sketched below)
Vectorize loss computation over a batch with `vmap` (sketched below)
JIT-compile a physics simulation for GPU execution (sketched below)
Parallelize a reinforcement learning environment rollout across multiple GPUs using `pmap` (sketched below)
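A minimal sketch of `grad`: first the gradient of a scalar-valued function, then a tiny linear model (standing in for a simple neural network) trained by plain gradient descent. The model, synthetic data, learning rate, and step count are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Gradient of a scalar function: grad(f) returns a new function computing df/dx.
def f(x):
    return jnp.sum(x ** 2)

df = jax.grad(f)
print(df(jnp.array([1.0, 2.0, 3.0])))  # [2. 4. 6.]

# Tiny "network" (a linear model, illustrative only) trained with grad + gradient descent.
def predict(params, x):
    return x @ params["w"] + params["b"]

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))          # synthetic inputs
y = x @ jnp.array([1.0, -2.0, 0.5]) + 0.1    # synthetic targets

params = {"w": jnp.zeros(3), "b": jnp.array(0.0)}
grad_loss = jax.grad(loss)                   # gradient w.r.t. the params pytree
for _ in range(200):
    grads = grad_loss(params, x, y)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

print(loss(params, x, y))  # should be close to 0
```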
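A sketch of `vmap` applied to a per-example loss: `in_axes=(None, 0, 0)` broadcasts the parameters and maps over the batch axis of the inputs and targets. The loss function and shapes are placeholders.

```python
import jax
import jax.numpy as jnp

# Squared-error loss for a single (x, y) pair.
def example_loss(params, x, y):
    pred = jnp.dot(x, params["w"]) + params["b"]
    return (pred - y) ** 2

# vmap maps over the batch axis of x and y while keeping params fixed.
batched_loss = jax.vmap(example_loss, in_axes=(None, 0, 0))

params = {"w": jnp.ones(3), "b": jnp.array(0.0)}
xs = jnp.arange(12.0).reshape(4, 3)   # batch of 4 inputs
ys = jnp.array([1.0, 2.0, 3.0, 4.0])

per_example = batched_loss(params, xs, ys)   # shape (4,)
print(per_example.shape, per_example.mean())
```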
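A sketch of JIT-compiling a toy physics step (explicit Euler integration of a projectile under gravity, a made-up example). When a GPU/TPU backend is available, the compiled function runs there automatically; the first call compiles, and later calls with the same shapes and dtypes reuse the compiled executable.

```python
import jax
import jax.numpy as jnp

# One explicit-Euler step for a toy projectile under gravity.
@jax.jit
def step(pos, vel, dt):
    g = jnp.array([0.0, -9.81])
    vel = vel + dt * g
    pos = pos + dt * vel
    return pos, vel

pos = jnp.array([0.0, 0.0])
vel = jnp.array([5.0, 5.0])

# First call compiles for the current backend (CPU/GPU/TPU); the rest reuse it.
for _ in range(1000):
    pos, vel = step(pos, vel, 0.01)

print(pos)
```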
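A sketch of a `pmap`-parallelized rollout step over a toy, made-up environment. The leading axis of every input must match `jax.local_device_count()`; on a single-device machine the code still runs, just without any speedup.

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()

# Toy "environment" step, replicated across all local devices by pmap.
@jax.pmap
def rollout_step(obs, action):
    reward = -jnp.sum((obs - action) ** 2, axis=-1)
    next_obs = 0.9 * obs + 0.1 * action
    return next_obs, reward

# Each device receives one slice along the leading axis.
obs = jnp.zeros((n_dev, 8, 4))      # 8 parallel environments per device
actions = jnp.ones((n_dev, 8, 4))

next_obs, rewards = rollout_step(obs, actions)
print(next_obs.shape, rewards.shape)  # (n_dev, 8, 4) (n_dev, 8)
```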
Troubleshooting
Ensure functions are pure (no side effects) for `jit`/`grad` compatibility
Check data types: JAX defaults to float32; enable float64 with `jax.config.update("jax_enable_x64", True)` only if you need it
Debug uncompiled functions before applying `jit`
Use `jax.debug.print` to inspect intermediate values inside jitted functions (see the sketch after this list)
Update `jax` and `jaxlib` to compatible versions
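A sketch of `jax.debug.print` inside a jitted function; unlike Python's `print`, which fires only once at trace time with tracer objects, it prints the runtime values on every call.

```python
import jax
import jax.numpy as jnp

@jax.jit
def normalize(x):
    norm = jnp.linalg.norm(x)
    # print(norm) here would show an abstract tracer at trace time;
    # jax.debug.print shows the concrete value each time the function runs.
    jax.debug.print("norm = {n}", n=norm)
    return x / norm

print(normalize(jnp.array([3.0, 4.0])))
```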
Testing Guide
Validate functions on small arrays before batching
Compare gradients from `grad` against numerical finite-difference approximations (sketched after this list)
Test JIT-compiled and vectorized functions separately
Ensure reproducibility by threading explicit PRNG keys (sketched after this list)
Benchmark performance on target device
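A sketch of checking `grad` against a central finite-difference approximation; the test function, epsilon, and tolerance are arbitrary choices (JAX also provides a helper, `jax.test_util.check_grads`, for the same purpose).

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) * x)

x = jnp.array([0.3, -1.2, 2.0])
analytic = jax.grad(f)(x)

# Central finite differences, one coordinate at a time.
eps = 1e-3
numeric = jnp.array([
    (f(x.at[i].add(eps)) - f(x.at[i].add(-eps))) / (2 * eps)
    for i in range(x.shape[0])
])

assert jnp.allclose(analytic, numeric, atol=1e-2), (analytic, numeric)
print("gradcheck ok:", analytic, numeric)
```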
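A sketch of reproducible randomness: the same PRNG key always produces the same values, and `jax.random.split` derives fresh, independent subkeys instead of advancing hidden global state.

```python
import jax

key = jax.random.PRNGKey(42)

# Same key, same function, same shape -> identical values (reproducible).
a = jax.random.normal(key, (3,))
b = jax.random.normal(key, (3,))
assert (a == b).all()

# Split the key to get an independent stream instead of reusing the same key.
key, subkey = jax.random.split(key)
c = jax.random.normal(subkey, (3,))
print(a, c)
```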
Deployment Options
Run JAX computations on CPU/GPU/TPU (see the device-placement sketch after this list)
Export trained model parameters for use with Flax/Haiku
Integrate with production ML pipelines via XLA-compiled functions
Serve batch predictions using compiled functions
Use JAX in research simulations or cloud TPU workflows
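A sketch of inspecting and targeting devices with `jax.devices` and `jax.device_put`; on a CPU-only machine this simply lists and uses the CPU device.

```python
import jax
import jax.numpy as jnp

# List whatever devices this process can see (CPU, GPU, or TPU).
print(jax.devices())

# Place data on a specific device; jitted code then runs where its inputs live.
x = jnp.arange(1024.0)
x_dev = jax.device_put(x, jax.devices()[0])

@jax.jit
def compute(v):
    return jnp.tanh(v).sum()

print(compute(x_dev))
```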
Tools Ecosystem
NumPy for array operations
SciPy for scientific computation
Optax for gradient-based optimization (see the sketch after this list)
Flax/Haiku for neural network modeling
TensorFlow Datasets (TFDS) for dataset loading
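A sketch of a training step with Optax (assumes `optax` is installed); the linear model, data, and Adam hyperparameters are placeholders.

```python
import jax
import jax.numpy as jnp
import optax

def loss(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

params = {"w": jnp.zeros(3), "b": jnp.array(0.0)}
optimizer = optax.adam(learning_rate=1e-2)
opt_state = optimizer.init(params)

x = jnp.ones((16, 3))
y = jnp.ones(16)

@jax.jit
def train_step(params, opt_state, x, y):
    grads = jax.grad(loss)(params, x, y)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state

for _ in range(100):
    params, opt_state = train_step(params, opt_state, x, y)
print(loss(params, x, y))
```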
Integrations
Optax for optimizers
Flax/Haiku for high-level neural networks
TensorFlow and PyTorch interoperability via ONNX/XLA
GPU/TPU hardware for acceleration
NumPy and SciPy for scientific computation (see the interop sketch after this list)
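A sketch of moving data between NumPy and JAX: `jnp.asarray` copies a NumPy array to the default device (downcasting float64 to float32 unless x64 is enabled), and `np.asarray` transfers a JAX array back to host memory.

```python
import numpy as np
import jax.numpy as jnp

np_data = np.linspace(0.0, 1.0, 5)

# NumPy -> JAX: copies onto the default device.
jax_data = jnp.asarray(np_data)

# JAX -> NumPy: transfers back to host as a plain ndarray.
back = np.asarray(jax_data)

print(type(jax_data), jax_data.dtype, type(back))
```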
Productivity Tips
Use JIT to accelerate heavy computations
Vectorize functions instead of Python loops
Keep functions pure for composability
Cache and reuse compiled functions instead of re-jitting inside inner loops (see the sketch after this list)
Leverage multi-device parallelism with `pmap`
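A sketch of caching and benchmarking: the jitted function is defined once and reused, the first call pays the compilation cost, and `block_until_ready()` keeps asynchronous dispatch from making the timing look faster than it really is. Matrix size and iteration count are arbitrary.

```python
import time
import jax
import jax.numpy as jnp

@jax.jit  # defined once, so the compiled executable is cached and reused
def matmul_sum(a, b):
    return (a @ b).sum()

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (512, 512))
b = jax.random.normal(key, (512, 512))

matmul_sum(a, b).block_until_ready()  # first call pays the compilation cost

start = time.perf_counter()
for _ in range(10):
    out = matmul_sum(a, b)
out.block_until_ready()               # wait for async dispatch before stopping the clock
print("per call:", (time.perf_counter() - start) / 10)
```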
Challenges
Transitioning to a functional programming mindset
Debugging JIT-compiled functions
Ensuring reproducibility with PRNG keys
Managing multi-device parallelism
Integrating JAX with larger ML frameworks for production