# Algebra For Accounting 🔗︎

When programming numeric amounts for financial systems, you quickly learn that floats and doubles are bad news. They exhibit various levels of imprecision (`(1.0 / 3.0) * BIG_AMOUNT` may result in amounts that don’t add up) that can be hard to deal with reliably.
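A minimal sketch of the problem, using Python floats (which are the same IEEE 754 doubles used by Java and most other languages):

```python
# IEEE 754 doubles store binary approximations of decimal amounts,
# so sums that should be exact drift by tiny amounts.
subtotal = 0.1 + 0.2
print(subtotal)            # 0.30000000000000004, not 0.3

# Ten dime-sized deposits don't add back up to a dollar:
balance = sum([0.10] * 10)
print(balance == 1.0)      # False
```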

The most common solutions, however, still miss the mark - `Long`, `BigDecimal`, and joda-money all solve the precision problem, but they are only *arbitrary* precision, not *infinite* precision. The crux of the issue, of course, is division and multiplication.

Each of those operations must specify a *desired level of precision* and *rounding policy* in order for the result to remain an arbitrary-precision number. `BigDecimal`, for example, while it allows you to control *how* imprecise you wish to be, still has no ability to represent the exact value of `1.0 / 3.0`.

This becomes particularly problematic if you need to chain these computations together - how much precision loss can you tolerate on any given operation? How can you be sure that you won’t be multiplying by a large number later? And whatever policy you decide to use, *every other system* you interchange data with must be using the same policy or you will fail to get consistent numbers.
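To make the chaining problem concrete, here is a sketch using Python’s `decimal.Decimal`, which follows the same arbitrary-precision, context-rounded model as Java’s `BigDecimal` (the 28-digit precision below is Python’s default context, standing in for whatever policy a system might pick):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28            # an arbitrary (not infinite) precision

third = Decimal(1) / Decimal(3)   # forced to round at 28 digits
print(third * 3)                  # 0.9999999999999999999999999999, not 1

# Multiply by a large number later and the tolerated error is amplified:
print(third * Decimal(3_000_000_000))   # not 1000000000
```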

## Algebraic Closure 🔗︎

The problem is that the set of *Arbitrary Precision* numbers is not *closed* over the operations of multiplication and division. `1.0 / 3.0` is the simplest counter-example, as it has two finite-precision numbers as input and yet requires an infinite number of decimal places to represent its output.

In order to restore the closure property, we are then forced to modify the operations of multiplication and division to take two numbers *and* a (PRECISION_LEVEL, ROUNDING_STRATEGY) pair as input. Now our operations are closed, but we’ve lost a great deal in the process. These new “multiplication” and “division” operations have lost two important properties:

- In certain circumstances (multiplying by a sufficiently large number), their results will not agree with exact math.
- They are no longer invertible, meaning that given a particular output and one of the input numbers, we can no longer exactly determine the other input. (As a thought exercise, imagine we did integer division and you were told the output was 0 and the numerator was 1 - what’s the denominator? It could be anything from 2 to infinity)
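The non-invertibility point is easy to see even with plain integer division, the thought exercise above expressed as code:

```python
# Rounding division is not invertible: many different denominators
# collapse to the same output, so knowing the output (0) and the
# numerator (1) cannot recover the denominator.
outputs = {1 // d for d in [2, 5, 100, 10**9]}
print(outputs)   # {0} -- every one of these divisions produced 0
```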

These are costly properties to lose, and coordinating precision levels and rounding strategies across an ecosystem, in order to ensure real-world amounts never manifest these problematic behaviors, is genuinely complicated.

## Rational Numbers 🔗︎

What we need are *Exact Precision* numbers that are closed over the operations of addition, subtraction, multiplication, and division. It turns out that that’s exactly what the Rationals are.

This is the set of *integer fractions*, where the numerator and denominator are both infinite-precision integers. I have a toy implementation to illustrate.
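Python happens to ship exactly such a type in its standard library, `fractions.Fraction` - not the toy implementation referenced above, but it demonstrates the same closure and invertibility properties:

```python
from fractions import Fraction

# An exact integer fraction: numerator and denominator are big integers.
third = Fraction(1, 3)

assert third * 3 == 1              # closed and exact under multiplication
assert (third / 7) * 7 == third    # division is invertible again

# Chained arithmetic never accumulates error:
amount = Fraction(1)
for _ in range(100):
    amount = amount / 3
for _ in range(100):
    amount = amount * 3
assert amount == 1
```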

The idea is simple: when performing internal calculations, use exact-precision math over the Rationals, so that you avoid all of the complexity having to do with precision strategies. When you finally arrive at a number that you need to display to customers externally, render your Rational to a Decimal, applying a single level of precision and rounding strategy that’s easy to coordinate across the board for display purposes.
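That boundary-rendering step might look like the following sketch (the two-decimal-place, banker’s-rounding policy here is an assumption for illustration, not a prescribed choice):

```python
from decimal import Decimal, ROUND_HALF_EVEN
from fractions import Fraction

def render(amount: Fraction) -> Decimal:
    """Convert an exact internal amount to a display Decimal, applying
    the single, ecosystem-wide rounding policy exactly once, at the edge."""
    exact = Decimal(amount.numerator) / Decimal(amount.denominator)
    return exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(render(Fraction(1, 3)))   # 0.33 -- rounding happens once, for display
```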