[[THE EFFECTS OF NUMERICAL PRECISION IN SCIENTIFIC APPLICATIONS.pdf]]
Since neither posit nor 16-bit floating-point formats are supported in commercial general-purpose processors, could that be one motivation? NVIDIA GPUs seem to be among the very few available hardware platforms with native half-precision support.
How to evaluate:
- Use three HPC applications and three ML applications
- Use the 64-bit results as ground truth
- Run those apps with each reduced-precision format
- Compute the MSE against the ground truth
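The evaluation loop above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual harness: the kernel is a toy stand-in, and `np.float16` is used as a hypothetical reduced-precision format since NumPy has no native posit type.

```python
import numpy as np

def run_kernel(x: np.ndarray) -> np.ndarray:
    # Toy stand-in for an HPC/ML kernel: an elementwise square
    # followed by a running-sum reduction. Intermediate results
    # stay in the input's dtype, so precision effects accumulate.
    return np.cumsum(x * x)

def mse(approx: np.ndarray, truth: np.ndarray) -> float:
    # Mean squared error of the reduced-precision result against
    # the 64-bit ground truth.
    return float(np.mean((approx.astype(np.float64) - truth) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

truth = run_kernel(x.astype(np.float64))   # 64-bit ground truth
half = run_kernel(x.astype(np.float16))    # reduced-precision run

print(f"float16 MSE vs float64 ground truth: {mse(half, truth):.3e}")
```

The same pattern would repeat per application and per format, with the MSE values tabulated for comparison.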