Sunday, 6 November 2011

Experiments

For a couple of weeks, I have been experimenting with raycasting algorithms to render my compressed voxel format. Initial tests were unpromising, but after trying various approaches, I have now found an algorithm which allows a few hundred raycasts per second. The algorithm gets faster as the frequency of the data decreases. Assuming I can achieve even half that speed per pixel on the GPU (by performing the raycasts in parallel), I should easily reach interactive frame rates.

I discovered the algorithm by drawing and analysing an image of a Segment which covers many layers of Z:



This demonstrates all of the unique properties of my compression (a sketch of the intersection test they imply follows the list). These are:
  • Any Segment is known to be inside the bounds of (0, 0, SegmentStartZ) and (MaxX, MaxY, SegmentEndZ).
  • If SegmentStartZ is equal to SegmentEndZ, the Segment is known to be inside the bounds of (0, SegmentStartY, SegmentStartZ) and (MaxX, SegmentEndY, SegmentEndZ).
  • If SegmentStartY is equal to SegmentEndY, the Segment is known to be inside the bounds of (SegmentStartX, SegmentStartY, SegmentStartZ) and (SegmentEndX, SegmentEndY, SegmentEndZ).
  • When SegmentStartZ is not equal to SegmentEndZ, an intersection exists if RayHitZ is not equal to SegmentStartZ or SegmentEndZ.
  • When SegmentStartY is not equal to SegmentEndY, an intersection exists if RayHitY is not equal to SegmentStartY or SegmentEndY.
  • When SegmentStartY is equal to SegmentEndY, an intersection exists.
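
To make the rules concrete, here is a minimal sketch of the test they imply. The Segment field names, and the assumption that the ray hit has already been clipped to the segment's bounding box, are mine; hits landing exactly on a boundary layer or row would need finer checks that this sketch leaves out.

    // Hypothetical field names -- the real compressed format is not
    // spelled out in this post.
    struct Segment {
        int startX, startY, startZ;
        int endX,   endY,   endZ;
    };

    // A direct transcription of the listed rules. Assumes (hitY, hitZ)
    // come from a ray hit already clipped to the segment's bounding box;
    // boundary layers/rows are left unresolved here.
    bool hitIntersectsSegment(const Segment& s, int hitY, int hitZ)
    {
        if (s.startZ != s.endZ)                         // spans whole XY layers
            return hitZ != s.startZ && hitZ != s.endZ;  // interior layers are solid
        if (s.startY != s.endY)                         // spans whole X rows
            return hitY != s.startY && hitY != s.endY;  // interior rows are solid
        return true;                                    // single row: box equals segment
    }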

Monday, 24 October 2011

Implicit Normals?

I found an interesting snippet on the wiki of a voxel engine known as "Thermite3D". It speaks of a pair of fragment shader functions, dFdx and dFdy, which give the screen-space partial derivatives of a variable. They use them on interpolated vertex positions as a way to obtain normals.
If I have a texture which holds the world positions of the visible voxels, I could sample it and estimate the derivatives. I can then combine this with another texture holding the lighting information (generated by raycasting the landscape) and its associated material to calculate the final colour value.
Unfortunately, the functions don't work with textures, but I can still do the estimation myself, as shown here.
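
As an illustration (my own sketch, not the code linked above): given a buffer of world positions, forward differences along the two screen axes play the role of dFdx and dFdy, and their cross product gives the normal. The buffer layout and names are assumptions.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }
    Vec3 normalise(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Estimate the surface normal at pixel (x, y) from a row-major
    // width * height buffer of world positions, mimicking what dFdx/dFdy
    // do for shader variables: difference along each screen axis, then
    // cross the two tangents. Assumes x + 1 and y + 1 are in range.
    Vec3 estimateNormal(const Vec3* positions, int width, int x, int y)
    {
        Vec3 p  = positions[y * width + x];
        Vec3 dx = sub(positions[y * width + (x + 1)], p);  // ~ dFdx(position)
        Vec3 dy = sub(positions[(y + 1) * width + x], p);  // ~ dFdy(position)
        return normalise(cross(dx, dy));
    }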


Monday, 3 October 2011

What I'm Thinking

Over the past few weeks, I have been devising a way to efficiently store large voxel models. As I have previously stated, I approached this by first rethinking what a voxel representation of an object consists of. It is easy to see it as simply being a 3D image. However, this mindset provides no way to reduce the amount of data required. Instead, I looked at it as being a model. With this mindset, large chunks of voxels can be assigned a single material (just like a mesh model). Each material can then provide a small tile-able 3D texture which repeats across the model, as well as any other properties. This immediately provides an opportunity for compression.
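
As a rough sketch of that idea (the tile size and layout are my assumptions, not the project's actual format): because the texture tiles, a voxel's colour can be recovered from nothing more than its world position and the material it belongs to.

    #include <cstdint>

    // Illustrative material holding a small repeating 3D texture.
    struct Material {
        static const int TILE = 16;          // assumed tile edge length
        uint32_t texels[TILE][TILE][TILE];   // small tile-able 3D texture
    };

    // The texture repeats across the model, so a chunk of voxels sharing
    // one material needs no per-voxel colour storage at all.
    uint32_t voxelColour(const Material& m, int x, int y, int z)
    {
        return m.texels[x % Material::TILE]
                       [y % Material::TILE]
                       [z % Material::TILE];
    }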

I designed my compression scheme around the need for real-time ray-casting and real-time deformation. After a few iterations, I settled on a variation of run-length encoding (RLE), which I call variable run-length compression (VRLC). The difference is that RLE is an inline code with a fixed maximum repetition allowance, whereas VRLC uses side-by-side data sets with a variable maximum repetition allowance. The result is that very high compression can be achieved for low frequency data, while keeping the overhead low for high frequency data.
(Details in a later post)
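
Until that post, here is one plausible reading of the side-by-side idea, purely my own sketch: the run lengths live in their own stream rather than being interleaved with the values, so the encoding (and width) of a count is not baked into the value stream.

    #include <cstdint>
    #include <vector>

    // My own guess at the shape of the idea, not the real format: classic
    // RLE interleaves (count, value) pairs with a fixed-width count, while
    // keeping the two streams side by side lets the count encoding be
    // chosen, or varied, independently of the values.
    struct VrlcSketch {
        std::vector<uint32_t> runLengths;  // length of each run of voxels
        std::vector<uint8_t>  materials;   // material ID of each run
    };

    VrlcSketch encode(const std::vector<uint8_t>& voxels)
    {
        VrlcSketch out;
        for (size_t i = 0; i < voxels.size(); ) {
            size_t j = i;
            while (j < voxels.size() && voxels[j] == voxels[i])
                ++j;                                          // extend the run
            out.runLengths.push_back(uint32_t(j - i));        // one entry per run
            out.materials.push_back(voxels[i]);
            i = j;
        }
        return out;
    }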

The next problem I found was providing surface normals for a voxel model. I have not yet finalised my representation of this, but at the moment I am planning to have "Normal Materials", which would work in a similar way to property materials. The difference is that they would describe a Bézier approximation of the surface. By making it a material, parts of multiple Bézier surfaces could be combined to create complex shapes.
(It's a hard idea to describe)
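
For background only, since the details above are deliberately left open: this is the standard machinery such a material would presumably build on. De Casteljau evaluation gives a point on a cubic Bézier, its derivative gives a tangent, and on a patch the cross product of the two parametric tangents yields the normal.

    struct Vec3 { float x, y, z; };

    Vec3 lerp(Vec3 a, Vec3 b, float t)
    {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    // De Casteljau evaluation of a cubic Bezier curve from four control
    // points; a surface patch repeats this across two parameters.
    Vec3 bezier(const Vec3 p[4], float t)
    {
        Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
        Vec3 d = lerp(a, b, t), e = lerp(b, c, t);
        return lerp(d, e, t);
    }

    // The derivative is a quadratic Bezier over the scaled control point
    // differences; on a patch, crossing the two parametric tangents gives
    // the surface normal.
    Vec3 bezierTangent(const Vec3 p[4], float t)
    {
        Vec3 d[3];
        for (int i = 0; i < 3; ++i)
            d[i] = { 3 * (p[i + 1].x - p[i].x),
                     3 * (p[i + 1].y - p[i].y),
                     3 * (p[i + 1].z - p[i].z) };
        return lerp(lerp(d[0], d[1], t), lerp(d[1], d[2], t), t);
    }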

For the aforementioned 3D texture tiles, I am considering procedural generation. As part of this, I am thinking that a monochrome texture could be saved along with a function defining the range of colours. A standard 3D texture would also work, of course.
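
A sketch of that: one channel per texel, mapped through a colour range at run time. The two endpoint colours stand in for whatever the range-defining function ends up being.

    struct Colour { float r, g, b; };

    // Map a monochrome texel value t in [0, 1] onto a colour range; a
    // linear ramp is just the simplest possible range function.
    Colour ramp(float t, Colour lo, Colour hi)
    {
        return { lo.r + (hi.r - lo.r) * t,
                 lo.g + (hi.g - lo.g) * t,
                 lo.b + (hi.b - lo.b) * t };
    }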

I think rendering the models will be the biggest task. My research tells me the best way is to ray-cast for each screen pixel. Not looking forward to that. However, I am curious to see if it can be done using only integer addition, subtraction and bit shifting by extending Bresenham's line algorithm to 3D.
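
For reference, the standard 3D extension does exactly that: pick the axis with the largest delta as the driving axis and keep one error term per axis. A minimal sketch, where the visit callback is a placeholder for the per-voxel test a raycaster would perform:

    #include <cstdlib>

    // 3D extension of Bresenham's line algorithm: steps voxel by voxel
    // from (x0, y0, z0) to (x1, y1, z1) using only integer addition,
    // subtraction, comparison and one shift.
    void bresenham3D(int x0, int y0, int z0, int x1, int y1, int z1,
                     void (*visit)(int, int, int))
    {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int dz = std::abs(z1 - z0), sz = z0 < z1 ? 1 : -1;
        int dm = dx > dy ? (dx > dz ? dx : dz)
                         : (dy > dz ? dy : dz);   // driving-axis delta
        int ex = dm >> 1, ey = dm >> 1, ez = dm >> 1;

        for (int i = 0; i <= dm; ++i) {
            visit(x0, y0, z0);                    // test this voxel
            ex -= dx; if (ex < 0) { ex += dm; x0 += sx; }
            ey -= dy; if (ey < 0) { ey += dm; y0 += sy; }
            ez -= dz; if (ez < 0) { ez += dm; z0 += sz; }
        }
    }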

Tuesday, 20 September 2011

A Brief History of Voxels

Voxels have been around for years but have seen relatively little use in computer games. The majority of 3D computer games have been made using polygons because of their inherently low memory usage and fast rendering algorithms. This pushed graphics hardware to be designed specifically for the purpose of drawing millions of triangles as quickly as possible, which meant that voxels never reached their full potential.

The most popular and most recent game which uses voxels is "Minecraft", which has a landscape of oversized voxels. However, its voxels are so large that I would question whether it can truly be called a voxel game.


Going back a few years, the popular series "Worms" entered the third dimension in "Worms 3D", using a mixed polygon and voxel engine to mimic the landscape deformation seen in its 2D games. It was this game which made me curious about 3D destructible land.


At the start of the millennium, one of my favourite games, "Command & Conquer: Red Alert 2", was released, which uses voxels to render its vehicles. I first encountered voxels when I looked into modding this game.

In my opinion, these three are the best uses of voxels in computer games today. Of course, that isn't saying much when you consider that two of them don't even do it properly.

Monday, 19 September 2011

new Blog("Volumetric Worlds");

I have started this blog to record the progress of my 4th year project, and I have decided to base it around the use of voxels in computer games.
The main problem with voxels is the large amount of data they imply. This is because voxels are normally only considered to be 3D bitmaps. A small 512 x 512 x 512 volume contains over 134 million voxels (a lot of data). To put this in perspective, here is a 2D 512 x 512 bitmap:

That's about 4 times the size of a single face of a Rubik's Cube. So, if you wanted to make a voxel landscape, for example, you would potentially need many gigabytes of memory: at one byte per voxel, even a 4096 x 4096 x 4096 landscape needs 64 GiB (that is, of course, without any sort of compression).
By rethinking voxels as models rather than images, I think I can reduce the memory requirements to little more than those of an image.