#### Algebra For Accounting

##### March 27, 2019

When representing monetary amounts in financial systems, floats and doubles cause significant problems. Their limited precision shows up in many places, such as when multiplying large amounts or amortizing an expense over a period of time that doesn't divide evenly. The usual solutions, such as joda-money, focus on adding support for things like currencies, or on enforcing consistent precision levels (e.g. NUMERIC(13, 4)) and rounding policies throughout a company's backend ecosystem. The problem with these solutions is that an arbitrary-precision number still has no way to represent the exact value of ⅓.
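The post names no language, so as a hedged illustration, here is a minimal Python sketch (my own, not from the post) of the float failure modes described above:

```python
# Binary floats cannot represent most decimal fractions exactly.
subtotal = 0.1 + 0.2
print(subtotal)            # 0.30000000000000004, not 0.3

# Accumulate a $0.10 fee ten times: the representation error compounds.
total = sum([0.10] * 10)
print(total == 1.0)        # False: total is 0.9999999999999999

# Large amounts: above 2**53, a double cannot represent every integer,
# so two different amounts can collide into the same value.
print(float(2**53) == float(2**53 + 1))   # True
```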

This becomes particularly problematic if you ever need to use these amounts in further calculations: how much precision loss can you tolerate on any given operation? How can you be sure that you won't be multiplying by a large number later? How do you keep track of how much precision has been lost, so that you can be sure enough remains to yield a valid financial value?
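To make the "multiplying by a large number later" concern concrete, here is a hypothetical Python sketch; the NUMERIC(13, 4)-style stored rate and the quantity of 3,000,000 are invented for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# A per-unit rate rounded for storage at 4 decimal places, NUMERIC(13, 4)-style.
stored_rate = (Decimal(1) / Decimal(3)).quantize(
    Decimal("0.0001"), rounding=ROUND_HALF_UP)   # 0.3333

# Months later, someone multiplies that rounded rate by a large quantity.
total = stored_rate * 3_000_000
print(total)   # 999900.0000; the exact total is 1000000, so rounding cost 100
```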

The fundamental problem, of course, is that the set of arbitrary-precision numbers is not **closed** under the operations of multiplication and division. 1 ÷ 3 is the simplest counter-example: both inputs are finite-precision numbers, yet the exact result requires an infinite number of decimal places to represent. To compensate for this lack of closure, the operations of multiplication and division need to be modified to take two numbers *and* a (PRECISION_LEVEL, ROUNDING_STRATEGY) pair as input. This restores closure, but we've lost a great deal in the process. These new “multiplication” and “division” operations have lost two important properties:

- In certain circumstances (e.g. multiplying by a sufficiently large number), their results will not agree with their infinite-precision versions.
- They are no longer invertible: given a particular output and one of the inputs, we can no longer exactly determine the other input. (As a thought exercise, imagine we did integer division and you were told the output was 0 and the numerator was 1. What's the denominator? It could be anything from 2 to infinity.)
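The thought exercise in the second bullet can be checked mechanically; a small Python sketch of my own:

```python
# Integer division as a stand-in for any lossy division: given an output of 0
# and a numerator of 1, every denominator from 2 upward is consistent with
# the result, so the original input is unrecoverable.
consistent = [d for d in range(2, 1_000) if 1 // d == 0]
print(len(consistent))   # 998: every candidate denominator we tried
```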

We need a set of numbers that is closed under the infinite-precision versions of addition, subtraction, multiplication, and division. It turns out that's exactly what the Rationals are: the set of *integer fractions*, where the numerator and denominator are both arbitrary-precision integers. I have a toy implementation to illustrate.
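The toy implementation isn't reproduced here; as a stand-in for the same idea, Python's standard-library `fractions.Fraction` is an off-the-shelf Rational whose numerator and denominator are arbitrary-precision ints:

```python
from fractions import Fraction

third = Fraction(1, 3)          # exact: no decimal expansion needed
assert third * 3 == 1           # multiplication is exact, so closure holds

# Amortize $100.00 over 3 periods and recombine without losing a cent.
payment = Fraction(10000, 100) / 3   # dollars as an exact fraction
assert payment * 3 == 100

# Division is invertible again: the output and one input determine the other.
product = Fraction(22, 7) * Fraction(3, 11)
assert product / Fraction(3, 11) == Fraction(22, 7)
```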

The simple idea is that all internal math on financial amounts should be done with Rationals, so that you avoid all of the complexity of precision levels and rounding strategies. Only when you reach the *edge* of your systems, where you need a decimal amount with specific precision in order to interact with customers or external financial systems, does it become necessary to “render” your Rational into a Decimal.
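Rendering at the edge might look like this minimal sketch; the `render` helper and its defaults (2 places, half-to-even rounding) are mine, not from the post:

```python
from decimal import Decimal
from fractions import Fraction

def render(amount: Fraction, places: int = 2) -> Decimal:
    """Collapse an exact Rational into a fixed-precision Decimal at the edge."""
    # Scale so the desired places sit left of the point, round on the exact
    # Fraction (Python's round() on a Fraction is exact, half-to-even), then
    # shift the decimal point back.
    units = round(amount * 10**places)
    return Decimal(units).scaleb(-places)

print(render(Fraction(100, 3)))   # 33.33, rounded only at the very end
```

Keeping the single rounding step at the boundary means the (PRECISION_LEVEL, ROUNDING_STRATEGY) pair is applied exactly once per amount, rather than at every intermediate operation.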