Monday, 5 March 2012

New Plan!

So raycasting didn't work...

But I have a new plan!

I will use OpenCL to expand the compressed format from RLE to start and end points, and I will then pass these points to the geometry shader to create cuboids! (There's a rough sketch of the expansion after the list below.)

Reasons For Doing This In OpenCL:
  • Means less data to send to GPU.
  • Means I get to use OpenCL.
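
The expansion itself is very little work. Here's a rough CPU-side sketch of what the kernel will be doing (Run, Span and the "colour 0 means empty" convention are just placeholder names for illustration, not my actual format):

// Walk the RLE runs along one axis and emit a (start, end) pair for every
// solid run, ready to be turned into a cuboid by the geometry shader.
#include <cstdint>
#include <vector>

struct Run  { uint32_t length; uint32_t colour; };   // colour == 0 -> empty space
struct Span { uint32_t start;  uint32_t end; uint32_t colour; };

std::vector<Span> expandRuns(const std::vector<Run>& runs)
{
    std::vector<Span> spans;
    uint32_t pos = 0;
    for (const Run& r : runs)
    {
        if (r.colour != 0)                        // solid run -> one cuboid span
            spans.push_back({pos, pos + r.length, r.colour});
        pos += r.length;                          // empty runs just advance the cursor
    }
    return spans;
}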

Erin's Update

I'm technically 3 months into my project now and here's what I've done so far:
  • OpenCL is up and running.
  • I wrote a voxel data set compressor.
  • I rendered a few voxels!
OpenCL has been fairly nice to work with. The nVidia code samples and The OpenCL Programming Book helped me get to grips with it. Also, their Best Practices Guide was a good read.
I've managed to hard crash my computer several times due to carelessness. The OpenCL compiler has very informative error reporting, but runtime errors are reported far less helpfully...

My voxel compressor was written to compress data sets from:
The lowest resolution of the stag beetle data set compressed from 10MB to 7KB!

I found that link on a website which is full of papers and other resources on everything you ever wanted to know about volume rendering:

I have attempted to render the compressed stag beetle, but I think it was too much processing for the GPU to handle... so I'm looking into other ways to render the data.

I am doing everything in OpenCL except outputting the result to the screen, which I need OpenGL for.

PS.


Thursday, 24 November 2011

My Thoughts On Unlimited Detail...

I was shown the "Unlimited Detail" videos when I started talking to people about my dissertation, and the subject has recently appeared again. This prompted me to consider how the technology might work (assuming it is real), and I think that by altering my compression slightly, it could be made to work.

I am assuming that "Unlimited Detail" is not volumetric because of the scanning techniques used to create the data, and it has been stated that it works using "Point Cloud Data", aka voxels.

Using this as a starting point, I considered how such data would be represented using my compression. I decided it would essentially consist of a length of empty space, followed by a color, followed by a length of empty space, followed by a color, and so on.
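
A placeholder sketch of that layout (PointRun and the field names are just for illustration):

// The volume becomes a flat list of (empty length, colour) pairs,
// walked line by line along the X axis.
#include <cstdint>
#include <vector>

struct PointRun
{
    uint32_t emptyLength;   // voxels of empty space before this point
    uint32_t colour;        // the point itself (RGBA packed into 4 bytes)
};

using CompressedVolume = std::vector<PointRun>;   // 8 bytes per stored point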

If a color is 4 bytes and a length is 4 bytes, and assuming a single line along the X axis contains an average of 2 points (front and back), a single 1024 x 1024 XY slice would only require 2 x (4 + 4) x 1024 = 16384 bytes, or 16 kilobytes, which means a 1024 x 1024 x 1024 volume would need 16 megabytes. While this is a bit bloated, it could be considerably reduced using my frequency compression. I haven't actually mentioned much about frequency compression in this blog before, so I'll explain it here.

Frequency compression is my way of compressing run-length data. It consists of an array of bytes, where each byte encodes a 6 bit count and a 2 bit type. The type specifies the frequency of a run of run-lengths and the count defines how many run-lengths are associated with it. The frequencies are thus:

Low - lengths above 65535, 4 bytes
Mid - lengths below 65536 and above 255, 2 bytes
High - lengths below 256, 1 byte
Ultra High - Single item (not run-length encoded), 0 bytes.

Using this method, color points can be stored with only 5 or 4 bytes (1 byte frequency and 4 or 3 byte color) and empty space can be stored with 1, 2 or 4 bytes.
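
As a rough sketch of how such a frequency byte could be packed and unpacked (the exact bit order and the type values here are assumptions for illustration, not my actual on-disk layout):

// One byte: a 6-bit count (how many run-lengths follow) plus a 2-bit frequency type.
#include <cstdint>

enum Frequency : uint8_t { Low = 0, Mid = 1, High = 2, UltraHigh = 3 };

inline uint8_t packFrequencyByte(uint8_t count, Frequency type)
{
    return static_cast<uint8_t>((count & 0x3F) | (type << 6));   // low 6 bits count, top 2 bits type
}

inline uint8_t   unpackCount(uint8_t b) { return b & 0x3F; }
inline Frequency unpackType(uint8_t b)  { return static_cast<Frequency>(b >> 6); }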

This would mean that a best case scenario would only need about 1 megabyte per 1024 x 1024 x 1024 block in a model.

Rendering this would only require ray-point intersection, where the X and Y math would be the same for each vertical slice of the model, leaving only the Z to be checked, which could potentially run in realtime.

Thursday, 10 November 2011

Collision and Destruction

By branching out from the final raycast algorithm, I have designed the algorithms for collision and creation/destruction.

For collision, the raycasting algorithm simplifies to an iterative bounding box algorithm. It starts by checking the full extents of the volume, then checks the Z component of each segment until an intersection is found. If required, it will check the Y and finally the X component. As soon as a segment is found to be above the test bounds, we know that no further intersections will be found.
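
In code, that comes out as a short loop. This is just a sketch that treats each segment as a simple box and assumes the segments are stored in Z order; the real format's wrapping segments need a little more care:

#include <vector>

struct Box     { int minX, minY, minZ, maxX, maxY, maxZ; };
struct Segment { int startX, startY, startZ, endX, endY, endZ; bool solid; };

bool collides(const std::vector<Segment>& segments, const Box& test)
{
    for (const Segment& s : segments)           // segments stored in Z order
    {
        if (s.startZ > test.maxZ)               // past the test bounds: nothing further can hit
            return false;
        if (!s.solid || s.endZ < test.minZ)     // empty, or entirely before the bounds in Z
            continue;
        if (s.endY < test.minY || s.startY > test.maxY)   // only check Y once Z overlaps
            continue;
        if (s.endX < test.minX || s.startX > test.maxX)   // only check X once Y overlaps
            continue;
        return true;                            // overlap on all three axes
    }
    return false;
}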

For creation/destruction, a new segment can replace all or part of one or more existing segments. Doing this is quite simple. First, we find the first and last intersecting segments and remove any segments between them. Then decide which of the following to do (there's a sketch of this after the list):
  • If the first and last segments are the same segment and the same type as the replacement, there is no change.
  • If the first and last segments are the same segment but not the same type as the replacement, the first segment shrinks and two new segments (the replacement and the leftover tail) are created after it.
  • If the first and last segments are both the same type as the replacement, the first segment grows and the last segment is removed.
  • If only the first segment is the same type as the replacement, the first segment grows and the last segment shrinks.
  • If only the last segment is the same type as the replacement, the last segment grows and the first segment shrinks.
  • Otherwise, both the first and last segments shrink.
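
To make the cases concrete, here is a 1D sketch of the idea against a simple ordered, gap-free run list (Seg, replaceRange and so on are placeholder names, the replacement is assumed to lie within the covered range, and the degenerate boundary cases are ignored to keep it short):

#include <vector>

struct Seg { int start; int end; int type; };   // an inclusive run [start, end] of one type

void replaceRange(std::vector<Seg>& segs, const Seg& rep)
{
    // Find the first and last segments that intersect the replacement.
    size_t first = 0;
    while (segs[first].end < rep.start) ++first;
    size_t last = first;
    while (segs[last].end < rep.end) ++last;

    // Remove any segments strictly between them.
    if (last > first + 1)
    {
        segs.erase(segs.begin() + first + 1, segs.begin() + last);
        last = first + 1;
    }

    if (first == last)
    {
        if (segs[first].type == rep.type)
            return;                                            // same segment, same type: no change
        // Same segment, different type: the first shrinks and two new
        // segments (the replacement and the leftover tail) follow it.
        Seg tail{ rep.end + 1, segs[first].end, segs[first].type };
        segs[first].end = rep.start - 1;
        segs.insert(segs.begin() + first + 1, { rep, tail });
        return;
    }

    const bool firstSame = (segs[first].type == rep.type);
    const bool lastSame  = (segs[last].type  == rep.type);

    if (firstSame && lastSame)        // first grows over everything, last is removed
    {
        segs[first].end = segs[last].end;
        segs.erase(segs.begin() + last);
    }
    else if (firstSame)               // first grows, last shrinks
    {
        segs[first].end  = rep.end;
        segs[last].start = rep.end + 1;
    }
    else if (lastSame)                // last grows, first shrinks
    {
        segs[last].start = rep.start;
        segs[first].end  = rep.start - 1;
    }
    else                              // both shrink, with the replacement sat between them
    {
        segs[first].end  = rep.start - 1;
        segs[last].start = rep.end + 1;
        segs.insert(segs.begin() + last, rep);
    }
}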

Sunday, 6 November 2011

Experiments

For a couple of weeks, I have been experimenting with raycasting algorithms to render my compressed voxel format. Initial tests were unpromising but after trying various approaches, I have now found an algorithm which allows a few hundred raycasts per second. The speed of the algorithm increases as the frequency of the data decreases. Assuming I can achieve even half that on the GPU per Pixel (by performing the raycasts in parallel), I should easily achieve interactive frame rates.

I discovered the algorithm by drawing and analysing an image of a Segment which covers many layers of Z:



This demonstrates all the unique properties of my compression. These are:
  • Any Segment is known to be inside the bounds of (0, 0, SegmentStartZ) and (MaxX, MaxY, SegmentEndZ)
  • If SegmentStartZ is Equal to SegmentEndZ, the Segment is known to be Inside the Bounds of (0, SegmentStartY, SegmentStartZ) and (MaxX, SegmentEndY, SegmentEndZ)
  • If SegmentStartY is Equal to SegmentEndY, the Segment is known to be Inside the Bounds of (SegmentStartX, SegmentStartY, SegmentStartZ) and (SegmentEndX, SegmentEndY, SegmentEndZ)
  • When SegmentStartZ is Not Equal to SegmentEndZ, if the RayHitZ is Not Equal to SegmentStartZ or SegmentEndZ, an intersection exists.
  • When SegmentStartY is Not Equal to SegmentEndY, if the RayHitY is Not Equal to SegmentStartY or SegmentEndY, an intersection exists.
  • When SegmentStartY is Equal to SegmentEndY, an intersection exists.
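
Read as a quick accept test for a ray hit that is already known to be inside the segment's outer bounds, those properties come out roughly like this (the names and the "wrapping" interpretation are mine, just for illustration):

struct Segment
{
    int startX, startY, startZ;
    int endX,   endY,   endZ;
};

bool quickAccept(const Segment& s, int rayHitY, int rayHitZ)
{
    // Strictly between the start and end Z slices the segment covers whole
    // XY planes, so the hit must be inside.
    if (s.startZ != s.endZ && rayHitZ != s.startZ && rayHitZ != s.endZ)
        return true;

    // Likewise, strictly between the start and end Y rows.
    if (s.startY != s.endY && rayHitY != s.startY && rayHitY != s.endY)
        return true;

    // A single-row segment is already bounded tightly, so the hit is an intersection.
    if (s.startY == s.endY)
        return true;

    // Otherwise the hit lands on a boundary row or slice and needs a finer check.
    return false;
}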

Monday, 24 October 2011

Implicit Normals?

I found an interesting snippet on the wiki of a voxel engine known as "Thermite3D". It speaks of a pair of fragment shader functions, dFdx and dFdy, which give the partial derivative of a variable. They use them with vertices as a way to obtain normals.
If I have a texture which holds the world positions of the visible voxels, I could sample it and estimate the derivative. I can then combine this with another texture with the lighting information (generated by raycasting the landscape) and its associated material, to calculate the final colour value.
Unfortunately, the functions don't work with textures, but I can still do the estimation myself, as shown here.
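
For reference, the estimation itself is just a finite difference and a cross product, something like this (worldPos here is a stand-in for sampling my world-position texture, not real code from the project):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

template <typename PositionLookup>
Vec3 estimateNormal(PositionLookup worldPos, int x, int y)
{
    Vec3 ddx = sub(worldPos(x + 1, y), worldPos(x, y));   // screen-space difference in X
    Vec3 ddy = sub(worldPos(x, y + 1), worldPos(x, y));   // screen-space difference in Y
    return normalize(cross(ddx, ddy));                    // normal of the local surface patch
}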