In computing, floating point is the formulaic representation that approximates a real number so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
significand × base^exponent,

where significand ∈ Z, base ∈ N, and exponent ∈ Z.
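As a minimal sketch of this form, the following Python snippet evaluates significand × base^exponent exactly (the helper name float_value is hypothetical, not from the original text) and illustrates the range/precision trade-off: a value such as 0.1 has no finite base-2 significand, so binary floating point can only approximate it.

```python
from fractions import Fraction

def float_value(significand: int, base: int, exponent: int) -> Fraction:
    """Exact value of significand * base**exponent.

    Fraction keeps the result exact even when the exponent is
    negative (hypothetical helper for illustration).
    """
    return Fraction(significand) * Fraction(base) ** exponent

# 1.5 in base 2: significand 3, exponent -1, i.e. 3 * 2**-1
print(float_value(3, 2, -1))   # 3/2

# 0.1 cannot be written as significand * 2**exponent with a finite
# integer significand, so binary floating point stores a rounded
# approximation; Fraction(0.1) reveals the exact value stored.
print(float(Fraction(1, 10)))  # 0.1 (rounded for display)
print(Fraction(0.1))           # 3602879701896397/36028797018963968
```

The same decimal value is exact in base ten (significand 1, exponent -1), which is why decimal floating-point formats are sometimes preferred for financial data.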