Friday, 23 March 2012

3D Noise and Texturing

I'm currently working on landscape generation using 3D Perlin noise. Unfortunately, there seems to be something wrong with my noise or the way I'm sampling it...
But it's still pretty cool...

Also, pictured is deferred texturing and lighting, with point light at the camera's position.

Note to Erin: slightly improved texturing by using the GLSL functions dFdx and dFdy with textureGrad.
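For anyone curious, it looks roughly like this (a sketch with made-up names, assuming the material UVs are stored in a G-buffer channel):

    #version 150

    uniform sampler2D uGBufferUV; // G-buffer channel holding the material UVs
    uniform sampler2D uAlbedo;    // the actual surface texture

    in vec2 vScreenUV;
    out vec4 fragColor;

    void main()
    {
        vec2 uv = texture(uGBufferUV, vScreenUV).xy;

        // Derivatives of the recovered UVs give mip selection proper
        // gradients; sampling with plain texture() would use the garbage
        // implicit gradients you get across surface boundaries.
        vec2 ddx = dFdx(uv);
        vec2 ddy = dFdy(uv);

        fragColor = textureGrad(uAlbedo, uv, ddx, ddy);
    }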

Wednesday, 7 March 2012

My First Volume Render



Statistics:
Dimensions: 832 x 832 x 494
Uncompressed Data Size: 652MB
Compressed Data Size: 0.3MB
FPS: 60

Monday, 5 March 2012

New Plan!

So raycasting didn't work...

But I have a new plan!

I will use OpenCL to expand the compressed format from RLE into start and end points; I will then pass these points to the geometry shader to create cuboids! (There's a sketch of the expansion kernel after the list below.)

Reasons For Doing This In OpenCL:
  • Means less data to send to the GPU.
  • Means I get to use OpenCL.
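The expansion kernel would look something like this (a sketch only; the real compressed layout differs and the names are illustrative):

    __kernel void expand_rle(__global const uint *runLengths,  // alternating empty/solid lengths
                             __global const uint *lineOffsets, // where each line starts
                             __global const uint *lineCounts,  // number of runs per line
                             __global uint2 *points,           // output: (start, end) per solid run
                             __global uint *pointCount)        // global output counter
    {
        uint line   = get_global_id(0);   // one work-item per run-length line
        uint offset = lineOffsets[line];
        uint count  = lineCounts[line];

        uint pos = 0;
        for (uint i = 0; i < count; ++i) {
            uint len = runLengths[offset + i];
            if (i & 1) {                  // odd runs are solid in this sketch
                uint slot = atomic_inc(pointCount);
                points[slot] = (uint2)(pos, pos + len);
            }
            pos += len;
        }
    }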

Erin's Update

I'm technically 3 months into my project now and here's what I've done so far:
  • OpenCL is up and running.
  • I wrote a voxel data set compressor.
  • I rendered a few voxels!
OpenCL has been fairly nice to work with. The NVIDIA code samples and The OpenCL Programming Book helped me get to grips with it, and NVIDIA's Best Practices Guide was a good read.
I've managed to hard crash my computer several times through carelessness. The OpenCL compiler has very informative error reporting, but runtime errors are much less forthcoming...

My voxel compressor was written to compress data sets from:
The lowest resolution of the stag beetle data set compressed from 10MB to 7KB!
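The core idea is just run-length encoding of the empty space; a toy version of it (the real compressor does more than this) looks like:

    #include <stdint.h>
    #include <stddef.h>

    // Walk the voxels in X order and emit alternating empty/solid run
    // lengths. Datasets like the stag beetle are mostly empty space,
    // which is why the ratios come out so extreme.
    size_t rle_compress(const uint8_t *voxels, size_t n, uint32_t *runs)
    {
        size_t nRuns = 0;
        uint32_t len = 0;
        int solid = 0;               // runs alternate, starting with empty

        for (size_t i = 0; i < n; ++i) {
            int s = voxels[i] != 0;
            if (s != solid) {        // run boundary: flush the current run
                runs[nRuns++] = len;
                len = 0;
                solid = s;
            }
            ++len;
        }
        runs[nRuns++] = len;         // flush the final run
        return nRuns;
    }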

I found that link on a website that is full of papers and other material on everything you ever wanted to know about volume rendering:

I have attempted rendering the compressed stag beetle, but I think it was too much processing for the GPU to handle... so I'm looking into other ways to render the data.

I am doing everything in OpenCL except outputting the result to the screen, which I need OpenGL for.
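The hand-over between the two looks roughly like this on the host side (a sketch with my own names, error handling omitted; the CL context has to be created with GL sharing enabled):

    #include <GL/gl.h>
    #include <CL/cl_gl.h>

    // Let a kernel fill a GL vertex buffer, then hand it back to GL.
    static void fill_vbo_with_cl(cl_context ctx, cl_command_queue queue,
                                 cl_kernel kernel, GLuint vbo, size_t workItems)
    {
        cl_int err;
        cl_mem buf = clCreateFromGLBuffer(ctx, CL_MEM_WRITE_ONLY, vbo, &err);

        glFinish();                       // GL must be done with the buffer
        clEnqueueAcquireGLObjects(queue, 1, &buf, 0, NULL, NULL);

        clSetKernelArg(kernel, 0, sizeof(buf), &buf);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &workItems, NULL,
                               0, NULL, NULL);

        clEnqueueReleaseGLObjects(queue, 1, &buf, 0, NULL, NULL);
        clFinish(queue);                  // CL must be done before GL draws
        clReleaseMemObject(buf);
    }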

PS.

Sunday, 4 March 2012

Thursday, 24 November 2011

My Thoughts On Unlimited Detail...

I was shown the "Unlimited Detail" videos when I started talking to people about my dissertation, and they have recently appeared again. This prompted me to consider how the technology might work (assuming it is real), and I think that, by altering my compression slightly, it could.

I am assuming that "Unlimited Detail" is not volumetric because of the scanning techniques used to create the data, and it has been stated that it works using "Point Cloud Data", a.k.a. voxels.

Using this as a starting point, I considered how this would be represented using my compression. I decided it would essentially consist of a length of empty space, followed by a color, followed by another length of empty space, followed by another color, and so on.
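In code it would just be a stream of pairs, something like this (names illustrative):

    #include <stdint.h>

    // One entry: skip some empty voxels along the line, then place one
    // colored point.
    typedef struct {
        uint32_t skip;   // length of empty space before the point
        uint32_t rgba;   // packed 4-byte color
    } RunEntry;

    // A line with two surface points (front and back):
    // 2 x (4 + 4) = 16 bytes per line.
    static const RunEntry exampleLine[2] = {
        { 200, 0xFF8040FFu },   // 200 empty voxels, then the front surface
        { 600, 0xFF4020FFu },   // 600 more, then the back surface
    };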

If a color is 4 bytes and a length is 4 bytes, and assuming a single line along the X axis contains an average of 2 points (front and back), a single 1024 x 1024 XY slice would only require 2 x (4 + 4) x 1024 = 16384 bytes, or 16 kilobytes. This means a 1024 x 1024 x 1024 volume would need 16 megabytes. While this is a bit bloated, it could be reduced considerably using my frequency compression. I haven't actually said much about frequency compression on this blog before, so I'll explain it here.

Frequency compression is my way of compressing run-length data. It consists of an array of bytes, where each byte encodes a 6 bit count and a 2 bit type. The type specifies the frequency of a run of run-lengths and the count defines how many run-lengths are associated with it. The frequencies are thus:

Low - lengths above 65535, 4 bytes each.
Mid - lengths from 256 to 65535, 2 bytes each.
High - lengths below 256, 1 byte each.
Ultra High - single items (not run-length encoded), 0 bytes.

Using this method, color points can be stored with only 4 or 5 bytes (a 1-byte frequency header plus a 3- or 4-byte color) and empty space can be stored with 1, 2 or 4 bytes.
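Packing and choosing the header byte would look something like this (which two bits hold the type, and the type numbering, are my guesses):

    #include <stdint.h>

    enum { FREQ_LOW = 0, FREQ_MID = 1, FREQ_HIGH = 2, FREQ_ULTRA = 3 };

    // Header byte: 2-bit type in the top bits, 6-bit count below.
    static uint8_t pack_header(unsigned type, unsigned count)
    {
        return (uint8_t)((type << 6) | (count & 0x3F));   // count is 1..63
    }

    // Cheapest type a given run-length can use.
    static unsigned type_for_length(uint32_t len)
    {
        if (len > 65535) return FREQ_LOW;   // 4-byte length
        if (len > 255)   return FREQ_MID;   // 2-byte length
        return FREQ_HIGH;                   // 1-byte length
    }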

This would mean that a best case scenario would only need about 1 megabyte per 1024 x 1024 x 1024 block in a model.

Rendering this would only require ray-point intersection, where the X and Y math would be the same for each vertical slice of the model, leaving only the Z to be checked. That could potentially run in realtime.