
"Fixed Point"

Posted: 03 July, 2022

One thing I never tried back in the day was 3D. I was crap at maths cos I didn’t pay attention at school. Fortunately, every day’s a school day, and since the 90s I’ve learnt enough Linear Algebra to do the basics.

Time to have a go on the Amiga?

But before all that, I need a quick way to do the maths, which is where Fixed-point comes in.

(If you grew up programming before the Pentium era then all this will be trivial to you. I needed to refresh my memory. I only ever did this once, and that was a very long time ago. I thought I’d write it up for posterity…)

What’s Fixed-point?

The 68000 processor in the Amiga and ST is a hybrid 16/32-bit CISC processor: a 32-bit instruction set on a 16-bit bus. Unlike modern processors, it has no Floating-Point Unit [FPU], so processing numbers with fractional values is slow in comparison to integer-based maths.

3D graphics calculates a lot of angles/positions that rely on these fractional values, so we need a way to trade mathematical precision for processing speed. Fixed-Point is a decent compromise.

In fixed-point, numbers are stored in standard C types (int, short, long, etc.), but the content of the variable differs from that of a normal int. Fixed-point implies a position within the variable where the decimal point would be, and that position is kinda arbitrary. For example, a signed 16-bit integer normally has the range -32768 to 32767. If we create a fixed-point type of 8.8 – 8 bits for the integer part, 8 bits for the fractional part – the integer range is reduced to -128 to 127, but we’ve gained 8 bits of fractional precision.
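
Putting numbers to that: in 8.8, the value 1.5 is stored as 1.5 × 256 = 384 (0x0180), and a stored 384 read back as 8.8 means 384 ÷ 256 = 1.5.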

Where to put the decimal point depends on the use-case. The resolution of the Amiga, in PAL low res, is 320x256, so unsigned 8.8 fixed-point numbers (integer range 0 to 255) would nearly give us enough range to cover the screen. 12.4 (0 to 4095) would be more than enough. 16.16 takes us to the moon and back (and may be faster for some operations?).

For trigonometry functions, like Sine, where the output is -1 to 1, we could use a 2.14 fixed-point notation, giving us 14 bits of fractional precision. More than enough for smooth rotations.

Fixed-point Ops

Conversion to and from a fixed-point number is simple: shift by the number of bits in the fractional part of the number. I.e. if you’re using 8.8 fixed-point, shift by 8 bits. A set of pre-processor macros is enough to go to and from any of the fixed-point representations needed, but you must be mindful of potential under- or overflow.
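
For 8.8, something like this is enough (a minimal sketch; the macro names are mine, not from any particular library):

#define FIX_SHIFT 8
#define FIX_ONE   (1 << FIX_SHIFT)                   /* 1.0 in 8.8 */

#define INT_TO_FIX(i)   ((short) ((i) << FIX_SHIFT)) /* int -> 8.8; overflows above 127    */
#define FIX_TO_INT(f)   ((short) ((f) >> FIX_SHIFT)) /* 8.8 -> int; truncates the fraction */
#define FLOAT_TO_FIX(x) ((short) ((x) * FIX_ONE))    /* handy for pre-calculation only; no FPU, remember */
#define FIX_TO_FLOAT(f) ((float) (f) / FIX_ONE)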

Addition and subtraction of fixed-point numbers – assuming they’re of the same notation, 8.8 or whatever – work the same as for normal integers, but multiplication and division need slightly different handling.

When two fixed-point numbers are multiplied, the binary point of the result shifts up by the number of fractional bits (8.8 × 8.8 gives a 16.16 result), so it must be stored in a type large enough to hold the extra bits to avoid truncation. (Multiplying 16-bit words gives results in 32-bit longs.) The result must also be shifted back down before being stored:

(short) (((long) i * (long) j) >> 8);   /* 8.8 * 8.8 gives 16.16; shift back down to 8.8 */

Fixed-point division also shifts the result, but in the opposite direction, losing the fractional precision. To compensate, the code needs to shift the dividend up before the operation:

(short) (((long) i << 8) / j);   /* pre-shift the dividend so the quotient stays 8.8 */
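
Wrapped up as functions – just a sketch for 8.8, with a typedef and names of my own – the two operations look like this:

typedef short fix88;   /* 8.8 fixed-point, stored in a 16-bit short */

/* Multiply: widen to 32 bits, then shift the 16.16 result back down to 8.8. */
static fix88 fix_mul(fix88 i, fix88 j)
{
    return (fix88) (((long) i * (long) j) >> 8);
}

/* Divide: pre-shift the dividend so the quotient comes out as 8.8. */
static fix88 fix_div(fix88 i, fix88 j)
{
    return (fix88) (((long) i << 8) / j);
}

So fix_mul(768, 384) – that’s 3.0 × 1.5 in 8.8 – returns 1152, which is 4.5.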

Trigonometry

Fast trig on a 16-bit computer requires pre-calculated look-up tables. And maybe some cheats.

Let’s take Sin/Cos. These functions take an angle as input. To pre-calculate their look-up tables, we need to decide what sort of precision is required for this input. Instead of having 360 degrees in a circle, we can say there are 256, meaning an angle can be stored within a byte and we get angular modulo for free, as the integer overflows (handy).

A precalculated look-up table for Sin/Cos, with this assumption, would have 256 entries. The fixed-point precision of the values in the table is arbitrary; 2.14, 16.16… whatever is required of the results.
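
As a sketch, building and using a 256-entry table in 2.14 might look like this (the names are mine, and I’m using the C library’s sin() at start-up for convenience; on the real hardware you’d more likely pre-calculate the table offline):

#include <math.h>

#define ANGLES  256            /* "degrees" in a full circle */
#define SIN_ONE (1 << 14)      /* 1.0 in 2.14 fixed-point    */

static short sin_table[ANGLES];

static void build_sin_table(void)
{
    const double pi = 3.14159265358979;
    int a;
    for (a = 0; a < ANGLES; a++)
        sin_table[a] = (short) (sin(a * 2.0 * pi / ANGLES) * SIN_ONE);
}

/* The & 0xFF is the free angular modulo mentioned above. */
#define SIN(a) sin_table[(a) & 0xFF]
#define COS(a) sin_table[((a) + 64) & 0xFF]   /* cos(x) = sin(x + "90 degrees") */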

If you take advantage of the symmetry of Sin/Cos, it’s possible to reduce the number of entries: Only the first quadrant [“90 degrees”, or 64 entries in this example] need be calculated. The result can be mirrored/negated as appropriate, depending on the quadrant of the input [angle] passed to the function.
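
One way that folding could look, again with 256 angles per circle (a sketch; I’ve given the quarter table 65 entries so the end of the quadrant indexes cleanly, and the names are mine):

static short sin_quarter[65];   /* sin() for "0 to 90 degrees" in 2.14 */

static short fixed_sin(unsigned char angle)
{
    unsigned char a = angle & 0x3F;   /* offset within the quadrant */
    switch (angle >> 6)               /* which quadrant are we in?  */
    {
        case 0:  return  sin_quarter[a];        /* rising,  positive */
        case 1:  return  sin_quarter[64 - a];   /* falling, positive */
        case 2:  return -sin_quarter[a];        /* falling, negative */
        default: return -sin_quarter[64 - a];   /* rising,  negative */
    }
}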

256 “degrees” in a circle would be enough to bounce a sprite up and down smoothly, but it probably wouldn’t be enough for silky vector rotations of lines and polygons. In these cases it’s worth bumping up to 4096 “degrees”, resulting in a 1024-entry quarter table, with a precision of 360/4096 = 0.087890625 degrees.

That’s a decent trade of memory for speed!

Here’s one someone else made earlier

I’m writing this for sport and it doesn’t need to do much beyond clearing out the cobwebs from my old-geezer brain. You, dear reader, don’t need to know any of this. And, since you live in the future, you don’t need to code it, either. You can get a battle-tested fixed-point library that’ll work on old computers (or tiny embedded devices, whatever) right from GitHub:

PetteriAimonen/libfixmath

I hear it’s great, but it’s possibly not as much fun as working all this out [again] on a rainy Sunday afternoon 😊
