Monday 3 October 2011

What I'm Thinking

Over the past few weeks, I have been devising a way to efficiently store large voxel models. As I have previously stated, I approached this by first rethinking what a voxel representation of an object consists of. It is easy to see it as simply a 3D image, but that mindset provides no way to reduce the amount of data required. Instead, I looked at it as a model. With this mindset, large chunks of voxels can be assigned a single material (just like a mesh model). Each material could then provide a small tileable 3D texture which would repeat across the model, as well as any other properties. This immediately provides an opportunity for compression.
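
To make the idea concrete, here is a rough sketch of how a model of this kind might be laid out: the grid holds nothing but material indices, and a shared material table carries the tileable 3D texture and any other properties. The names, tile size and properties below are placeholders rather than my actual implementation.

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Material {
    // A small tileable 3D texture (8x8x8 here, purely illustrative) that
    // repeats across every voxel assigned this material, plus any extras.
    static constexpr int kTileSize = 8;
    std::array<uint32_t, kTileSize * kTileSize * kTileSize> tile;
    float hardness = 1.0f;   // example of an extra per-material property
};

struct VoxelModel {
    int sizeX, sizeY, sizeZ;
    std::vector<uint8_t> materialIndex;   // one byte per voxel
    std::vector<Material> materials;      // shared table, typically tiny

    // Sample a voxel's colour by tiling its material's texture through space.
    uint32_t colourAt(int x, int y, int z) const {
        const Material& m = materials[materialIndex[(z * sizeY + y) * sizeX + x]];
        int tx = x % Material::kTileSize;
        int ty = y % Material::kTileSize;
        int tz = z % Material::kTileSize;
        return m.tile[(tz * Material::kTileSize + ty) * Material::kTileSize + tx];
    }
};
```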

I designed my compression scheme around the need for real-time ray-casting and real-time deformation. After a few iterations, I settled on a variation of run-length encoding (RLE), which I call variable run-length compression (VRLC). The difference is that RLE is an inline code with a fixed maximum repetition allowance, whereas VRLC uses side-by-side data sets with a variable maximum repetition allowance. The result is very high compression for low-frequency data and low overhead for high-frequency data.
(Details in a later post)
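
As a rough illustration of the side-by-side idea (the real VRLC details will come in that later post), the sketch below keeps the run values in one array and the run lengths in a separate one, with the lengths stored as variable-length integers so there is no fixed cap on how long a run can be. Treat the layout as an assumption, not the final format.

```cpp
#include <cstdint>
#include <vector>

struct RleColumn {
    std::vector<uint8_t> values;   // one material index per run
    std::vector<uint8_t> lengths;  // varint-encoded run lengths, side by side
};

// Append a run length as a variable-length integer: long runs cost a couple
// of bytes, a run of 1 costs a single byte.
static void appendVarint(std::vector<uint8_t>& out, uint32_t n) {
    while (n >= 0x80) { out.push_back(uint8_t((n & 0x7F) | 0x80)); n >>= 7; }
    out.push_back(uint8_t(n));
}

RleColumn compressColumn(const std::vector<uint8_t>& voxels) {
    RleColumn col;
    size_t i = 0;
    while (i < voxels.size()) {
        uint8_t v = voxels[i];
        uint32_t run = 1;
        while (i + run < voxels.size() && voxels[i + run] == v) ++run;
        col.values.push_back(v);
        appendVarint(col.lengths, run);   // no fixed repetition limit
        i += run;
    }
    return col;
}
```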

The next problem I found was providing surface normals for a voxel model. I have not yet finalised my representation of this, but at the moment I am planning to have "Normal Materials" which would work in a similar way to the property materials. The difference is that they would describe a Bézier approximation of the surface. By making it a material, parts of multiple Bézier surfaces could be combined to create complex shapes.
(It's a hard idea to describe)
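
To give a rough sense of the direction (the representation is far from final), the sketch below evaluates the normal of a biquadratic Bézier patch from its partial derivatives. The patch degree, the 3x3 control-point layout and the names are placeholders for illustration only.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return v * (1.0f / len);
}

// Normal of a biquadratic Bezier patch at (u, v): cross the two tangents
// obtained by differentiating the Bernstein basis along u and along v.
Vec3 patchNormal(const Vec3 ctrl[3][3], float u, float v) {
    float bu[3]  = {(1 - u) * (1 - u), 2 * u * (1 - u), u * u};
    float bv[3]  = {(1 - v) * (1 - v), 2 * v * (1 - v), v * v};
    float dbu[3] = {-2 * (1 - u), 2 - 4 * u, 2 * u};   // d/du of the basis
    float dbv[3] = {-2 * (1 - v), 2 - 4 * v, 2 * v};   // d/dv of the basis

    Vec3 du = {0, 0, 0}, dv = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            du = du + ctrl[i][j] * (dbu[i] * bv[j]);   // tangent along u
            dv = dv + ctrl[i][j] * (bu[i] * dbv[j]);   // tangent along v
        }
    return normalise(cross(du, dv));
}
```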

For the aforementioned 3D texture tiles, I am considering procedural generation. As part of this, I am thinking that a monochrome texture could be saved with a function defining the range of colours. A standard 3D texture would also work, of course.
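
A rough sketch of what I mean: the texture stores a single intensity per texel, and a per-material colour function expands it at lookup time. The linear ramp between two colours below is only a stand-in for whatever function ends up defining the range.

```cpp
#include <cstdint>

struct Colour { uint8_t r, g, b; };

struct ColourRange {
    Colour low;    // colour at intensity 0
    Colour high;   // colour at intensity 255

    // Expand a monochrome texel into a colour within the material's range.
    Colour apply(uint8_t intensity) const {
        auto lerp = [intensity](uint8_t a, uint8_t b) {
            return uint8_t(a + (b - a) * intensity / 255);
        };
        return {lerp(low.r, high.r), lerp(low.g, high.g), lerp(low.b, high.b)};
    }
};
```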

I think rendering the models will be the biggest task. My research tells me the best way is to ray-cast for each screen pixel; I'm not looking forward to that. However, I am curious to see whether it can be done using only integer addition, subtraction and bit shifting by extending Bresenham's line algorithm to 3D.
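
For reference, this is the kind of integer-only stepping I have in mind: the standard extension of Bresenham's line algorithm to 3D, which walks the dominant axis and carries an error term for each of the other two. A real ray-caster would need rather more than this, so treat it only as a starting point.

```cpp
#include <cstdlib>
#include <vector>

struct Voxel { int x, y, z; };

// Visit every voxel on the line from a to b using only integer adds,
// subtracts and compares (the doubling could be a shift).
std::vector<Voxel> line3D(Voxel a, Voxel b) {
    std::vector<Voxel> out;
    int dx = std::abs(b.x - a.x), sx = b.x > a.x ? 1 : -1;
    int dy = std::abs(b.y - a.y), sy = b.y > a.y ? 1 : -1;
    int dz = std::abs(b.z - a.z), sz = b.z > a.z ? 1 : -1;

    if (dx >= dy && dx >= dz) {            // x is the dominant axis
        int ey = 2 * dy - dx, ez = 2 * dz - dx;
        for (int i = 0; i <= dx; ++i) {
            out.push_back(a);
            if (ey > 0) { a.y += sy; ey -= 2 * dx; }
            if (ez > 0) { a.z += sz; ez -= 2 * dx; }
            ey += 2 * dy; ez += 2 * dz;
            a.x += sx;
        }
    } else if (dy >= dx && dy >= dz) {     // y dominant: same idea, axes swapped
        int ex = 2 * dx - dy, ez = 2 * dz - dy;
        for (int i = 0; i <= dy; ++i) {
            out.push_back(a);
            if (ex > 0) { a.x += sx; ex -= 2 * dy; }
            if (ez > 0) { a.z += sz; ez -= 2 * dy; }
            ex += 2 * dx; ez += 2 * dz;
            a.y += sy;
        }
    } else {                               // z dominant
        int ex = 2 * dx - dz, ey = 2 * dy - dz;
        for (int i = 0; i <= dz; ++i) {
            out.push_back(a);
            if (ex > 0) { a.x += sx; ex -= 2 * dz; }
            if (ey > 0) { a.y += sy; ey -= 2 * dz; }
            ex += 2 * dx; ey += 2 * dy;
            a.z += sz;
        }
    }
    return out;
}
```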
