Sunday, 1 April 2012

Collision Response

For a proper collision response, a normal to the surface is required.
But of course my geometry has none...

Easy fix, calculate the normals on the fly when they are needed!

How?

Sample the voxels surrounding the point on the surface.
This gives 9 x 9 x 9 samples (if you include the point itself).
For each of these samples, if it is filled, move the normal away from it.
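The steps above can be sketched like this (a minimal illustration, not my actual code; the `is_filled` predicate over voxel coordinates is a stand-in for the real lookup). A radius of 4 gives the 9 x 9 x 9 neighbourhood mentioned above:

```python
import math

def estimate_normal(is_filled, px, py, pz, radius=4):
    """Estimate the surface normal at voxel (px, py, pz).
    radius=4 samples the 9 x 9 x 9 cube; every filled sample
    pushes the normal away from itself, and the sum is normalised."""
    nx = ny = nz = 0.0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                if is_filled(px + dx, py + dy, pz + dz):
                    nx -= dx  # move the normal away from the sample
                    ny -= dy
                    nz -= dz
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0:
        return (0.0, 0.0, 0.0)  # degenerate: empty or symmetric surroundings
    return (nx / length, ny / length, nz / length)

# A half-space filled below z = 0 gives a normal pointing straight up (+z).
print(estimate_normal(lambda x, y, z: z < 0, 0, 0, 0))
```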

Doing this I have managed to achieve a very nice collision response to any point on the surface.
(It's awesome =P)

Wednesday, 28 March 2012

Point Collisions

I've finally moved away from working on the rendering side of things and have managed to implement a point collision detection system.

To do this I have a slightly decompressed version of the voxels in main memory (for use on the CPU not the GPU). I decided it was best to do the collision detection on the CPU since the GPU is already struggling with the insane number of triangles.
(Side note: I've achieved slightly better performance by fixing a bug which caused half of the voxels not to be rendered... oops =P)

The CPU version of the data is stored as start and end indices of the voxel segments. Since these are in order, I am able to binary search the data, giving me O(log2 N) access to voxels.
This allows me to check whether a single point is filled or empty, therefore providing point collision detection.
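A minimal sketch of that lookup, assuming the segments are stored as sorted, non-overlapping (start, end) index pairs over a linearised volume (the names here are illustrative):

```python
import bisect

def make_lookup(segments):
    """segments: sorted (start, end) index pairs, each describing one
    run of filled voxels in the linearised volume."""
    starts = [s for s, _ in segments]

    def is_filled(index):
        # Binary search for the last segment starting at or before index,
        # giving O(log2 N) point collision queries.
        i = bisect.bisect_right(starts, index) - 1
        return i >= 0 and index <= segments[i][1]

    return is_filled

is_filled = make_lookup([(0, 4), (10, 12), (100, 100)])
print(is_filled(3), is_filled(7), is_filled(11))  # True False True
```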

I am trying to think of a way to check for intersections with shapes without checking per voxel but I'm not sure how this could work yet.

Tuesday, 27 March 2012

And Now For Something Completely Different...

In an attempt to create a more interesting landscape, I had an idea.
Offset the time values of 3D Brownian Motion with the results of a 2D slice of a complex fractal.
Shown here are the results of a Rigid Multi Fractal.




Stats:
512 x 512 x 512 = 134,217,728 Voxels
8.17MB = 8,573,371 Byte Filesize

The file is a fair bit larger than the previous landscape's, but it contains a lot more detail.

Procedural 3D Texturing

This mixes 3D Perlin Noise Turbulence with a 2D detail image to attempt to create an appearance of rocks covered in moss.

I've fixed my filtering issues =)
They were caused by the way I was storing the fragment positions. I resolved the issue by recreating them from the depth buffer instead.
This, combined with the previously mentioned dFdx/dFdy functions, creates perfect deferred texture filtering.

Also, I've altered and simplified my compression: it now uses fixed-size run-length codes (unsigned shorts, 2 bytes each), which has reduced file sizes by a few megabytes.

This "landscape" consists of 512 * 512 * 300 = 78,643,200 voxels.
Its file is 1.56MB = 1,643,755 bytes.
Therefore, a 78 times reduction.
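A sketch of what fixed-size run-length coding could look like, assuming alternating empty/filled runs with 2-byte counts (empty first). How the real format handles runs longer than 65535 isn't stated above; here I split them with a zero-length run of the opposite type:

```python
import struct

def rle_compress(voxels):
    """Encode a list of booleans as alternating run lengths, each a
    2-byte unsigned short. The stream starts with an empty run (which
    may be zero-length), then alternates filled/empty. Runs longer
    than 65535 are split by a zero-length run of the opposite type."""
    out = bytearray()
    current, run = False, 0  # the stream begins with an "empty" run
    for v in voxels + [None]:  # None sentinel flushes the final run
        if v == current and run < 0xFFFF:
            run += 1
        else:
            out += struct.pack('<H', run)
            if v == current:  # run overflowed: zero-length opposite run
                out += struct.pack('<H', 0)
                run = 1
            else:
                current, run = not current, (1 if v is not None else 0)
    return bytes(out)

data = [False] * 3 + [True] * 5 + [False] * 2
print(len(rle_compress(data)))  # 3 runs, 2 bytes each
```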

Monday, 26 March 2012

3D Brownian Motion

Interestingly, 2D fractals such as Brownian Motion and Rigid Multi Fractals appear to differ very little from each other in 3D...

Friday, 23 March 2012

3D Noise and Texturing

I'm currently working on Landscape generation using 3D perlin noise. Unfortunately, there seems to be something wrong with my noise or the way I'm sampling it...
But it's still pretty cool....

Also, pictured is deferred texturing and lighting, with point light at the camera's position.

Note to Erin: slightly improved texturing by using the glsl functions dFdx and dFdy with textureGrad.

Wednesday, 7 March 2012

My First Volume Render



Statistics:
Dimensions: 832 x 832 x 494
Uncompressed Data Size: 652MB
Compressed Data Size: 0.3MB
Fps: 60

Monday, 5 March 2012

New Plan!

So raycasting didn't work...

But I have a new plan!

I will use OpenCL to expand the compressed format from RLE to start and end points, then pass these points to the geometry shader to create cuboids!

Reasons For Doing This In OpenCL:
  • Less data to send to the GPU.
  • I get to use OpenCL.
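A CPU-side sketch of what the OpenCL expansion step would do (not the actual kernel), assuming the run lengths alternate empty/filled starting with empty:

```python
def expand_runs(run_lengths):
    """Expand alternating empty/filled run lengths (empty first) into
    (start, end) index pairs for the filled runs -- the points that
    would be handed to the geometry shader to extrude cuboids."""
    segments = []
    pos = 0
    for i, length in enumerate(run_lengths):
        if i % 2 == 1 and length > 0:  # odd-indexed runs are filled
            segments.append((pos, pos + length - 1))
        pos += length
    return segments

print(expand_runs([3, 5, 2, 4]))  # [(3, 7), (10, 13)]
```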

Erin's Update

I'm technically 3 months into my project now and here's what I've done so far:
  • OpenCL is up and running.
  • I wrote a voxel data set compressor.
  • I rendered a few voxels!
OpenCL has been fairly nice to work with. The nVidia code samples and The OpenCL Programming Book helped me get to grips with it. Also, their Best Practices Guide was a good read.
I've managed to hard crash my computer several times due to carelessness. The OpenCL compiler has very informative error reporting, but runtime error reporting is less so...

My voxel compressor was written to compress existing volume data sets.
The lowest resolution of the stag beetle data set compressed from 10MB to 7KB!

I found that data set on a website which is full of papers on everything you ever wanted to know about volume rendering.

I have attempted rendering the compressed stag beetle but I think it was too much processing for the GPU to handle... so I'm looking into other ways to render the data.

I am doing everything in OpenCL except outputting the result to the screen, which I need OpenGL for.


Sunday, 4 March 2012

Thursday, 24 November 2011

My Thoughts On Unlimited Detail...

I was shown the "Unlimited Detail" videos when I started talking to people about my dissertation and it has recently appeared again. This provoked me into considering how the technology might work (assuming it is real) and by altering my compression slightly, I think it could work.

I am assuming that "Unlimited Detail" is not volumetric because of the scanning techniques used to create the data, and it has been stated that it works using "Point Cloud Data", aka voxels.

Using this as a starting point, I considered how this would be represented using my compression. I decided it would essentially consist of a length of empty space, followed by a color, followed by a length of empty space, followed by a color etc.

If a color is 4 bytes and a length is 4 bytes, assuming a single line along the X axis contains an average of 2 points (front and back), a single 1024 x 1024 XY slice would only require 2 x (4 + 4) x 1024 = 16384 bytes, or 16 kilobytes. This means a 1024 x 1024 x 1024 volume would need 16 megabytes. While this is a bit bloated, it could be considerably reduced using my frequency compression. I haven't actually mentioned much about frequency compression in this blog before, so I'll explain it here.

Frequency compression is my way of compressing run-length data. It consists of an array of bytes, where each byte encodes a 6 bit count and a 2 bit type. The type specifies the frequency of a run of run-lengths and the count defines how many run-lengths are associated with it. The frequencies are thus:

Low - lengths above 65535, 4 bytes
Mid - lengths below 65536 and above 255, 2 bytes
High - lengths below 256, 1 byte
Ultra High - single item (not run-length encoded), 0 bytes

Using this method, color points can be stored with only 5 or 4 bytes (1 byte frequency and 4 or 3 byte color) and empty space can be stored with 1, 2 or 4 bytes.
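The header encoding can be sketched as follows. The exact bit layout isn't stated above, so assuming the count occupies the high 6 bits and the type the low 2:

```python
import struct

# struct format for one run-length at each 2-bit frequency type:
# 0 = Low (4 bytes), 1 = Mid (2 bytes), 2 = High (1 byte),
# 3 = Ultra High (single items, no length bytes at all).
FORMATS = {0: '<I', 1: '<H', 2: '<B'}

def encode_group(freq_type, lengths):
    """Pack one frequency group: a header byte holding a 6-bit count
    and a 2-bit type, followed by that many run-lengths at the
    type's width."""
    assert 1 <= len(lengths) <= 63, "count must fit in 6 bits"
    out = bytearray([(len(lengths) << 2) | freq_type])
    if freq_type != 3:  # Ultra High entries carry no length bytes
        for length in lengths:
            out += struct.pack(FORMATS[freq_type], length)
    return bytes(out)

# Three High-frequency runs: 1 header byte + 3 one-byte lengths.
group = encode_group(2, [7, 200, 13])
print(len(group))  # 4
```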

This would mean that a best case scenario would only need about 1 megabyte per 1024 x 1024 x 1024 block in a model.

Rendering this would only require ray-point intersection, where the X and Y math would be the same for each vertical slice of the model, leaving only the Z to be checked. This could potentially run in realtime.

Thursday, 10 November 2011

Collision and Destruction

By branching out from the final raycast algorithm I have designed the algorithms for collision and creation/destruction.

For collision, the raycasting algorithm simplifies to an iterative bounding box algorithm. This starts by checking the full extents of the volume, then checks the Z component of each segment until an intersection is found. If required, it will check the Y and finally the X component. As soon as a segment is found to be above the test bounds, we know that no further intersections will be found.

For creation/destruction, a new segment can replace all or part of one or more existing segments. Doing this is quite simple. First, we find the first and last intersecting segments and remove any segments between them. Then we decide which of the following to do.
  • If the first and last segments are the same segment and the same type as the replacement, there is no change.
  • If the first and last segments are the same segment but not the same type as the replacement, the segment is split: two new segments (the replacement and the remaining tail) are created and the first segment shrinks.
  • If both the first and last segments are the same type as the replacement, the first segment grows and the last segment is removed.
  • If only the first segment is the same type as the replacement, the first segment grows and the last segment shrinks.
  • If only the last segment is the same type as the replacement, the last segment grows and the first segment shrinks.
  • Otherwise, both the first and last segments shrink.
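The cases above can be implemented one by one, but they also collapse to a split/insert/merge scheme that produces the same results. A sketch, with segments as illustrative (start, end, type) tuples rather than my actual representation:

```python
def replace_segment(segments, new_start, new_end, new_type):
    """Carve (new_start, new_end, new_type) into an ordered list of
    contiguous (start, end, type) segments: split the overlapped
    segments, insert the replacement, then merge equal-typed
    neighbours (which covers the "segment grows" cases)."""
    out = []
    for start, end, kind in segments:
        if end < new_start or start > new_end:
            out.append((start, end, kind))            # untouched
            continue
        if start < new_start:
            out.append((start, new_start - 1, kind))  # left remainder
        if end > new_end:
            out.append((new_end + 1, end, kind))      # right remainder
    out.append((new_start, new_end, new_type))
    out.sort()
    merged = [out[0]]
    for seg in out[1:]:
        last = merged[-1]
        if seg[2] == last[2] and seg[0] == last[1] + 1:
            merged[-1] = (last[0], seg[1], last[2])   # grow the neighbour
        else:
            merged.append(seg)
    return merged

line = [(0, 9, 'empty'), (10, 19, 'rock'), (20, 29, 'empty')]
print(replace_segment(line, 15, 24, 'empty'))
```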

Sunday, 6 November 2011

Experiments

For a couple of weeks, I have been experimenting with raycasting algorithms to render my compressed voxel format. Initial tests were unpromising but after trying various approaches, I have now found an algorithm which allows a few hundred raycasts per second. The speed of the algorithm increases as the frequency of the data decreases. Assuming I can achieve even half that on the GPU per Pixel (by performing the raycasts in parallel), I should easily achieve interactive frame rates.

I discovered the algorithm by drawing and analysing an image of a Segment which covers many layers of Z:



This demonstrates all the unique properties of my compression. These are:
  • Any Segment is known to be inside the bounds of (0, 0, SegmentStartZ) and (MaxX, MaxY, SegmentEndZ).
  • If SegmentStartZ is equal to SegmentEndZ, the Segment is known to be inside the bounds of (0, SegmentStartY, SegmentStartZ) and (MaxX, SegmentEndY, SegmentEndZ).
  • If SegmentStartY is equal to SegmentEndY, the Segment is known to be inside the bounds of (SegmentStartX, SegmentStartY, SegmentStartZ) and (SegmentEndX, SegmentEndY, SegmentEndZ).
  • When SegmentStartZ is not equal to SegmentEndZ, if the RayHitZ is not equal to SegmentStartZ or SegmentEndZ, an intersection exists.
  • When SegmentStartY is not equal to SegmentEndY, if the RayHitY is not equal to SegmentStartY or SegmentEndY, an intersection exists.
  • When SegmentStartY is equal to SegmentEndY, an intersection exists.

Monday, 24 October 2011

Implicit Normals?

I found an interesting snippet on the wiki of a voxel engine known as "Thermite3D". It speaks of a pair of fragment shader functions, dFdx and dFdy which give the partial derivative of a variable. They use it with vertices as a way to obtain normals.
If I have a texture which holds the world positions of the visible voxels, I could sample it and estimate the derivative. I can then combine this with another texture with the lighting information (generated by raycasting the landscape) and its associated material, to calculate the final colour value.
Unfortunately, the functions don't work with textures, but I can still do the estimation myself, as shown here.
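The manual estimation amounts to the following (a sketch, with a `pos` function standing in for the world-position texture lookup): difference neighbouring positions along screen x and y, as dFdx/dFdy would, then cross the two tangents:

```python
def normal_from_positions(pos, x, y):
    """Estimate the surface normal at texel (x, y) by finite
    differences: take the change in world position along screen x
    and screen y, then cross the two tangent vectors.
    `pos` maps texel coords to a world-space (x, y, z) position."""
    ax, ay, az = pos(x + 1, y)
    bx, by, bz = pos(x, y)
    cx, cy, cz = pos(x, y + 1)
    ddx = (ax - bx, ay - by, az - bz)  # ~ dFdx of the position
    ddy = (cx - bx, cy - by, cz - bz)  # ~ dFdy of the position
    # Cross product ddx x ddy gives the (unnormalised) surface normal.
    return (ddx[1] * ddy[2] - ddx[2] * ddy[1],
            ddx[2] * ddy[0] - ddx[0] * ddy[2],
            ddx[0] * ddy[1] - ddx[1] * ddy[0])

# A flat floor (z = 0 everywhere) should give a normal along +z.
print(normal_from_positions(lambda u, v: (u, v, 0.0), 5, 5))
```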


Monday, 3 October 2011

What I'm Thinking

Over the past few weeks, I have been devising a way to efficiently store large voxel models. As I have previously stated, I approached this by first rethinking what a voxel representation of an object consists of. It is easy to see it as simply being a 3D image. However, this mindset provides no way to reduce the amount of data required. Instead, I looked at it as being a model. With this mindset, large chunks of voxels could be assigned a single set of material properties (just like a mesh model). The assigned materials could then provide a small tileable 3D texture which would repeat across the model, as well as any other properties. This immediately provides an opportunity for compression.

I designed my compression scheme around the need for real-time ray-casting and real-time deformation. After a few iterations, I settled on a variation of run-length encoding (RLE), which I call variable run-length compression (VRLC). The difference is that RLE is an inline code with a fixed max repetition allowance, whereas VRLC uses side-by-side data sets with a variable max repetition allowance. The result of this is that very high compression can be achieved for low frequency data, as well as having low overhead for high frequency data.
(Details in a later post)

The next problem I found was providing surface normals for a voxel model. I have not yet finalised my representation of this, but at the moment I am planning to have "Normal Materials" which would work in a similar way to property materials. The difference being that they would describe a Bézier approximation of the surface. By making it a material, parts of multiple Bézier surfaces could be combined to create complex shapes.
(It's a hard idea to describe)

For the aforementioned 3D texture tiles, I am considering procedural generation. As part of this, I am thinking that a monochrome texture could be saved with a function defining the range of colours. A standard 3D texture would also work, of course.

I think rendering the models will be the biggest task. My research tells me the best way is to ray-cast for each screen pixel. Not looking forward to that. However, I am curious to see if it can be done using only integer addition, subtraction and bit shifting by extending Bresenham's line algorithm to 3D.
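The Bresenham extension can be sketched like this; the traversal loop below uses only integer addition, subtraction, comparison and a bit shift (the setup still needs abs and max):

```python
def bresenham_3d(x0, y0, z0, x1, y1, z1):
    """Walk the voxels on a line from (x0, y0, z0) to (x1, y1, z1)
    using Bresenham's line algorithm extended to 3D: step along the
    dominant axis, carrying an integer error term per minor axis."""
    dx, dy, dz = abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    sz = 1 if z1 > z0 else -1
    dm = max(dx, dy, dz)    # length along the dominant axis
    ex = ey = ez = dm >> 1  # error accumulators
    x, y, z = x0, y0, z0
    for _ in range(dm + 1):
        yield (x, y, z)
        ex -= dx
        if ex < 0:
            ex += dm
            x += sx
        ey -= dy
        if ey < 0:
            ey += dm
            y += sy
        ez -= dz
        if ez < 0:
            ez += dm
            z += sz

path = list(bresenham_3d(0, 0, 0, 4, 2, 1))
print(path)
```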

Tuesday, 20 September 2011

A Brief History of Voxels

Voxels have been around for years but have seen relatively little use in computer games. For years, the majority of 3D computer games have been made using polygons, because of their inherently low memory usage and fast rendering algorithms. This pushed graphics hardware to be designed specifically for the purpose of drawing millions of triangles as quickly as possible, which meant that voxels never reached their full potential.

The most popular and most recent game which uses voxels is "Minecraft", which has a landscape of oversized voxels. However, its voxels are so large that I would question whether it can truly be called a voxel game.


Going back a few years, the popular series "Worms" entered the 3rd dimension in "Worms 3D", using a mixed polygon and voxel engine to mimic the landscape deformation seen in its 2D games. It was this game which made me curious about 3D destructible land.


At the start of the millennium, one of my favourite games, "Command & Conquer: Red Alert 2", was released, which uses voxels to render its vehicles. I first encountered voxels when I looked into modding this game.

In my opinion, these three are the best uses of voxels in computer games which exist today. Of course, that isn't saying much, when you consider that two of them don't even do it properly.

Monday, 19 September 2011

new Blog("Volumetric Worlds");

I have started this blog to record the progress of my 4th year project and I have decided to base this around the use of voxels in computer games.
The main problem with voxels is the large amount of data which they imply. This is because voxels are normally only considered to be 3D bitmaps. A small 512 x 512 x 512 volume contains over 100 million voxels (a lot of data). To put this in perspective, here is a 2D 512 x 512 bitmap:

That's about 4 times the size of a single face of a Rubik's Cube. So, if you wanted to make a voxel landscape, for example, you would potentially need many gigabytes of memory (that is, of course, without any sort of compression).
By rethinking voxels as models rather than images, I think I can reduce the memory requirements to be little bigger than that of an image.