In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, means that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision.
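
A minimal sketch of this contrast, in Python, whose built-in int type happens to be arbitrary-precision; the 64-bit masking below is an illustrative stand-in for fixed-width hardware arithmetic, not any particular library's API:

    import math

    # Arbitrary precision: the exact value of 25! needs 84 bits,
    # more than a 64-bit register can hold.
    exact = math.factorial(25)   # 15511210043330985984000000
    print(exact)                 # exact result, limited only by memory

    # Fixed precision (simulated): keep only the low 64 bits, the way
    # unsigned 64-bit hardware arithmetic wraps around on overflow.
    MASK_64 = (1 << 64) - 1      # hypothetical name, for illustration
    print(exact & MASK_64)       # truncated result; the high bits are lost

Running this prints the exact 26-digit value followed by a much smaller wrapped value, the kind of silent truncation that fixed-precision arithmetic produces on overflow.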