Overview
- Tensor Processing Unit (TPU), announced by Google in 2016
- Since 2012, the compute used in the largest AI training runs has roughly doubled every 3.5 months
- General-purpose CPUs and GPUs can no longer scale fast enough to keep pace with this growth
- Motivates domain-specific hardware built around a narrow set of operations
  - e.g. dense matrix multiplication (see the JAX sketch after this list)
- Trains ML models faster than GPUs
  - 27x faster
  - at 38% lower cost
  - e.g. ~26 hrs on a GPU vs 7.9 min on a TPU
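
A minimal sketch of the workload TPUs target: a JIT-compiled dense matrix multiplication in JAX, timed on whatever backend JAX detects (TPU, GPU, or CPU). This assumes JAX is installed; the 4096x4096 size is an arbitrary illustration, not a figure from the notes, and the timing is only meant to show how such speed comparisons are measured, not to reproduce the numbers above.

```python
import time

import jax
import jax.numpy as jnp

# Report which backend/devices JAX found (e.g. TPU, GPU, or CPU).
print("Backend:", jax.default_backend())
print("Devices:", jax.devices())

@jax.jit
def matmul(a, b):
    # XLA compiles this dot product for the target accelerator;
    # on a TPU it maps onto the dedicated matrix-multiply unit.
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
a = jax.random.normal(k1, (4096, 4096), dtype=jnp.float32)  # size chosen arbitrarily
b = jax.random.normal(k2, (4096, 4096), dtype=jnp.float32)

# First call triggers compilation; exclude it from the timing.
matmul(a, b).block_until_ready()

start = time.perf_counter()
matmul(a, b).block_until_ready()  # block_until_ready: JAX dispatch is asynchronous
elapsed = time.perf_counter() - start
print(f"4096x4096 matmul took {elapsed * 1e3:.2f} ms")
```

Running the same script on a CPU-only machine and on an accelerator-backed one illustrates, in miniature, the kind of speedup comparison the benchmark figures above describe.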