greenhybrid's avatar
The main idea is that with fixed point you have the same precision everywhere, so I could even do an animation of a flight from Neptune to the Sun, and in between zoom in on a bunch of ants sitting on a leaf of some small tree on Earth. And eeeeverywhere I have the same absolute spacetime resolution, so it doesn't matter whether I locate the Sun at (0,0,0) or at (55555.132,321,54) in my next render.

But with floating-point numbers you lose "absolute" precision the further you get from 0, so it would make a big difference whether I locate the Sun at (0,0,0) or at (55555.132,321,54) in my next render.

The nice thing is, for real intergalactic travels (imagine you build up a nice universe for your next MMORPG(GSDFKLHJNSDFKLN)) I just increase the number of bits and still have the same fractional precision everywhere. With floats you instead get ever higher precision where you don't need it (i.e. wasted bits), and each extra bit just shifts the "precision horizon" a bit further away.

A little mindplay: I have an int64 with resolution 0.00002. I have a float that occupies the same 64 bits of storage, with resolution 0.0000002. With the int64 I reach Neptune and a bit more. Will I also reach it with my float (which has the same storage)?

Hmm, maybe I can enlighten you with this (yay?!)?

:D
lyc's avatar
hmm, i think you have it the wrong way around, and that's the key to understanding why floating point is so much better for such differing scales than fixed point...

you said: "I just increase the number of bits, and still have the same fractional precision everywhere, but with floats you will get even higher precision where you don't need it (i.e. wasted bits)" -> think about it, specifically your comment about wasted bits!

are you sure you have no wasted bits in a cornell box, stadium and solar system? are you sure that float/double waste bits at any scale?
greenhybrid's avatar
hmm. i am not the most competent to argue in that discipline :D

about the "wasted bits": with a double i can have a resolution of 0.0000000001 or so (which i (imho) don't really need in a non-unbiased environment) -- *when* the double is small. but for *big* doubles the resolution gets coarser, say 0.1, or if big enough, even more than one or two or yet bigger. while the number of representable states between any pair 2^(x-1) and 2^x remains the same, it follows that the number of states between a natural number y and (y+1) decreases as y grows, and eventually drops well below one per unit.

so, while on earth you can place your l-system plants (bwaha, when when when) very freely and exactly, it could become a problem to place them exactly on mars, where you would then have to put all plants on a "very" discrete grid. this is where my wasted planet-scale bits would go.

sure, you can use hierarchies and place your plants relative to the center of mass of mars, but then you also want to place mars relative to the center of mass of the sun, because otherwise mars could move in "very discrete" space-steps, say with a relative move-vector of (0,0,23), where earth travels at (0,0,0.5). and then you also want to place the camera relative to ... what, exactly?

of course, it would be easiest to use doubles, restrict what can be done, and just place the planets onto a cubemap texture. but that will never get me an award (...) for a nice solar-system animation with a final zoom onto some l-system plant on a bogo-planet. also, using doubles would cut me off from future increases in scale, because i am not tbp, who can code every double at every precision in every language (including malbolge) :D

tbp provided a super link: [link] -- i am pretty sure da will mess up the link, so basically the link is [link], and then the article "A matter of precision".
lyc's avatar
doubles are still needed for accurate vector computations, although making a separate point class in fixed point is an interesting option. on the other hand, to maximally benefit from this you will need source data occupying all those huge scales (at least partially), which in an ideal scenario would be normalized to the bounding box of the scene (i.e. the first objects have e.g. x = 1 or 2, the last have x = 2^n - 1 or 2, etc. for all axes), somehow at greater precision still ;)

it's interesting, but personally i find that even single precision floats are sufficient to produce good images :D
greenhybrid's avatar
interestingly, last week i wrestled with lots of template magic to provide four base types:

* fixed_point_t<typename int_t, unsigned fractional_bits> + some basic operations, like casting float-to-fixed (and vice versa), fixed-to-fixed (and vice versa :D), and int-to-fixed (...), (the fix-fix-cast operations use round-to-even)

* vector_t, point_t, normal_t with each having parameters <typename scalar_t>


Now I can easily switch between spheres (+ co.) that use an integer center and a double radius, or a float center and a fixed-point radius, etc.
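A minimal sketch of what such a fixed_point_t could look like -- this is my reconstruction of the idea, not picogen's actual code (the round-to-even casts, fixed-to-fixed conversion and overflow handling are omitted):

```cpp
#include <cstdint>

// Sketch only: a value is stored as raw = value * 2^fractional_bits,
// giving the same absolute resolution everywhere in the range.
template <typename int_t, unsigned fractional_bits>
struct fixed_point_t {
    int_t raw;

    static fixed_point_t from_float (float f) {
        fixed_point_t r;
        r.raw = static_cast<int_t> (f * (int_t (1) << fractional_bits));
        return r;
    }
    float to_float () const {
        return static_cast<float> (raw) / (int_t (1) << fractional_bits);
    }

    // points only ever need addition and subtraction, which are exact:
    fixed_point_t operator+ (fixed_point_t rhs) const {
        fixed_point_t r; r.raw = raw + rhs.raw; return r;
    }
    fixed_point_t operator- (fixed_point_t rhs) const {
        fixed_point_t r; r.raw = raw - rhs.raw; return r;
    }
};
```

With fixed_point_t<int64_t, 16> the resolution is 1/65536 everywhere, whether the point sits at the origin or out at Neptune.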

The implementation will (at least in the beginning) be as simple as (staying with the sphere example) calculating the difference between the sphere center and the ray origin; that difference (with vector = point - point, as in PBRT) is then cast to vector<float|double>, and the canonical ray/sphere intersection is done.

Example:
sphere.center ::= ray.position ::= vector<fixed_point<int64_t, 16> >
ray.direction ::= vector<float>

vector<float> diff = vector_cast<vector<float> > (sphere.center-ray.position);

...

So basically like you mentioned. Best thing is, it already runs, both with float and int64_t as the scalar type for points.
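The "difference first, then intersect in the local frame" scheme described above can be sketched like this (plain floats standing in for the fixed-point types, hypothetical names):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static float dot (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Canonical ray/sphere intersection in the *local* frame: the caller has
// already computed center - origin (ideally in fixed point, exactly), so
// the float math here only ever sees small, well-conditioned numbers.
// 'direction' is assumed normalized; returns the nearest positive hit t.
std::optional<float> intersect_local (Vec3 center_minus_origin,
                                      Vec3 direction, float radius) {
    const Vec3  L    = center_minus_origin;      // already the difference!
    const float b    = dot (L, direction);
    const float c    = dot (L, L) - radius*radius;
    const float disc = b*b - c;
    if (disc < 0) return std::nullopt;           // ray misses the sphere
    const float t = b - std::sqrt (disc);
    return t > 0 ? std::optional<float> (t) : std::nullopt;
}
```

The point is that the subtraction happens in a type with uniform absolute precision, so the float intersection never has to deal with coordinates like 55555.132 directly.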

The cool thing now is that all the redshift code can just use point_t, which is a typedef of point_t<int64_t>, and the other types (vector, normal) are typedef'd to use float. This will cause no problems, as point_t only supports addition and subtraction:

vector = point [+-] point
point = point [+-] vector
(operands can be exchanged)

Of course there could be problems if I really typedef'd normal_t to an integer type, but that is not what's intended anyway.
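The restricted point/vector algebra above is easy to enforce at the type level -- a sketch with hypothetical names, not the actual redshift code:

```cpp
#include <cstdint>

template <typename scalar_t> struct vector_t { scalar_t x, y, z; };
template <typename scalar_t> struct point_t  { scalar_t x, y, z; };

// point - point -> vector (the only way to "leave" point space) ...
template <typename S>
vector_t<S> operator- (point_t<S> a, point_t<S> b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}
// ... and point +/- vector -> point. Nothing else is defined, so an
// integer-typed point never needs multiplication or division, and the
// compiler rejects geometrically meaningless things like point + point.
template <typename S>
point_t<S> operator+ (point_t<S> p, vector_t<S> v) {
    return { p.x + v.x, p.y + v.y, p.z + v.z };
}
template <typename S>
point_t<S> operator- (point_t<S> p, vector_t<S> v) {
    return { p.x - v.x, p.y - v.y, p.z - v.z };
}
```

This is the same split PBRT makes between Point and Vector, just with the scalar type left open.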

As for single precision: in the beginning of picogen I switched to double because you already suffer when intersecting a sphere with radius = 100,000.0f (I got very ugly artifacts, like with hardware z-buffering). Clearly these were also epsilon problems, but back then I didn't want to wrestle with epsilons that depend on magnitude (which would have been needed).
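For reference, the magnitude-dependent epsilon mentioned above would look something like this (a generic sketch of a relative-tolerance comparison, not picogen code):

```cpp
#include <cmath>

// Instead of a fixed 'fabs(a - b) < 1e-6', scale the tolerance by the
// magnitudes involved, so the test stays meaningful both near the origin
// and near a sphere of radius 100,000.0f.
bool nearly_equal (float a, float b, float rel_eps = 1e-5f) {
    const float scale =
        std::fmax (1.0f, std::fmax (std::fabs (a), std::fabs (b)));
    return std::fabs (a - b) <= rel_eps * scale;
}
```

This is exactly the per-magnitude bookkeeping that a uniform fixed-point representation lets you skip.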