I designed my compression scheme around the need for real-time ray-casting and real-time deformation. After a few iterations, I settled on a variation of run-length encoding (RLE), which I call variable run-length compression (VRLC). The difference is that RLE is an inline code with a fixed maximum repetition allowance, whereas VRLC uses side-by-side data sets with a variable maximum repetition allowance. The result is very high compression for low-frequency data, with low overhead for high-frequency data.
(Details in a later post)
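Since the details are deferred, here is only a rough sketch of what the side-by-side layout might look like next to classic inline RLE. The function names and the choice of parallel Python lists are mine for illustration, not the actual VRLC format:

```python
def rle_encode(data):
    # Classic inline RLE: (count, value) pairs interleaved in one stream.
    # In a real byte-oriented codec the count field has a fixed width,
    # which caps the maximum run length.
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def vrlc_encode(data):
    # Side-by-side variant: values and run lengths live in separate
    # streams, so a run length is not capped by the value type's width,
    # and uniform (low-frequency) regions collapse to a single entry.
    values, runs = [], []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        values.append(data[i])
        runs.append(j - i)
        i = j
    return values, runs
```

For noisy (high-frequency) data every run has length 1, so the overhead is just the run stream; for uniform data the whole region collapses to one value and one count.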
The next problem I found was providing surface normals for a voxel model. I have not yet finalised my representation of this, but at the moment I am planning to have "Normal Materials", which would work in a similar way to property materials, except that they would describe a Bézier approximation of the surface. By making it a material, parts of multiple Bézier surfaces could be combined to create complex shapes.
(It's a hard idea to describe)
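One possible concrete reading of the idea, taking a 2D cross-section for simplicity: store the control points of a cubic Bézier curve in the material, and recover the normal at any point from the curve's tangent. All names here are hypothetical; the actual representation is not finalised:

```python
def bezier3(p0, p1, p2, p3, t):
    # Point on a cubic Bézier curve (Bernstein form), t in [0, 1].
    u = 1.0 - t
    return tuple(u*u*u*a + 3*u*u*t*b + 3*u*t*t*c + t*t*t*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def bezier3_tangent(p0, p1, p2, p3, t):
    # Derivative of the curve with respect to t.
    u = 1.0 - t
    return tuple(3*u*u*(b - a) + 6*u*t*(c - b) + 3*t*t*(d - c)
                 for a, b, c, d in zip(p0, p1, p2, p3))

def normal2d(p0, p1, p2, p3, t):
    # In 2D, the surface normal is the unit tangent rotated 90 degrees.
    tx, ty = bezier3_tangent(p0, p1, p2, p3, t)
    length = (tx*tx + ty*ty) ** 0.5
    return (-ty / length, tx / length)
```

For a full 3D surface the same trick works on a Bézier patch: take the partial derivatives in the two parameter directions and cross them to get the normal.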
For the aforementioned 3D texture tiles, I am considering procedural generation. As part of this, I am thinking that a monochrome texture could be saved alongside a function defining the range of colours. A standard 3D texture would also work, of course.
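A minimal sketch of the monochrome-plus-colour-function idea: each voxel of the tile stores only an intensity, and a small ramp function maps that intensity to a colour at load or render time. The ramp endpoints below are made-up values, not anything from the engine:

```python
def colour_ramp(intensity, lo=(40, 26, 13), hi=(200, 160, 110)):
    # Map a monochrome intensity in [0, 1] to an RGB colour by linearly
    # interpolating between two endpoint colours. The default endpoints
    # are a hypothetical wood-like palette.
    return tuple(round(a + (b - a) * intensity) for a, b in zip(lo, hi))

def colourise_tile(mono_tile, ramp=colour_ramp):
    # Apply the ramp to a flat list of intensities, producing one RGB
    # triple per voxel. Different ramps reuse the same monochrome data.
    return [ramp(v) for v in mono_tile]
```

The appeal is that one stored monochrome tile plus several cheap ramp functions yields several visually distinct materials.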
I think rendering the models will be the biggest task. My research tells me the best way is to ray-cast for each screen pixel. Not looking forward to that. However, I am curious to see if it can be done using only integer addition, subtraction and bit shifting by extending Bresenham's line algorithm to 3D.
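Out of the same curiosity, here is one way Bresenham's algorithm extends to 3D: walk the dominant axis and carry an integer error term for each of the other two, using only addition, subtraction, comparison and shifts. This is a generic textbook-style sketch, not code from the engine:

```python
def bresenham3d(p0, p1):
    # Integer-only 3D Bresenham line between two voxel coordinates.
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 >= x else -1
    sy = 1 if y1 >= y else -1
    sz = 1 if z1 >= z else -1
    points = [(x, y, z)]
    if dx >= dy and dx >= dz:               # x is the driving axis
        ey, ez = (dy << 1) - dx, (dz << 1) - dx
        for _ in range(dx):
            if ey >= 0:
                y += sy
                ey -= dx << 1
            if ez >= 0:
                z += sz
                ez -= dx << 1
            ey += dy << 1
            ez += dz << 1
            x += sx
            points.append((x, y, z))
    elif dy >= dz:                          # y drives
        ex, ez = (dx << 1) - dy, (dz << 1) - dy
        for _ in range(dy):
            if ex >= 0:
                x += sx
                ex -= dy << 1
            if ez >= 0:
                z += sz
                ez -= dy << 1
            ex += dx << 1
            ez += dz << 1
            y += sy
            points.append((x, y, z))
    else:                                   # z drives
        ex, ey = (dx << 1) - dz, (dy << 1) - dz
        for _ in range(dz):
            if ex >= 0:
                x += sx
                ex -= dz << 1
            if ey >= 0:
                y += sy
                ey -= dz << 1
            ex += dx << 1
            ey += dy << 1
            z += sz
            points.append((x, y, z))
    return points
```

One caveat for ray-casting specifically: Bresenham visits one voxel per step of the driving axis, so it can skip voxels the ray merely clips through a corner, which matters if the cast must not tunnel through thin geometry.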