Practical Quantum Supersampling for Computer Graphics
Including Quantum Noise Elimination
Implementation, simulation, source code,
and etching patterns for silicon-based photonic devices
Eric R. Johnston
Copyright 2015 Machine Level Ltd. London
August 31, 2015
Updated November 5, 2015
Abstract
This document illustrates a method for using quantum logic devices to improve the speed and quality of computer-generated images. Complete implementation source code is provided, as well as links to functional simulations. The method described herein has application in ray tracing, as well as 2D and 3D pixel-shaded graphics.
[ej TODO]
Single eval – the information is in there; it needs to be teased out. Based on the output color, you know the probability of error.
QSS – Quantum Supersampling.
QNE – Quantum Noise Elimination – know which pixels are most likely to be erroneous, and fix them, perfectly and cheaply. Also know the most likely fix without re-tracing, or by voting. QNE is not simple noise reduction; it enables smart, low-cost noise elimination.
How to expand to more than black & white:
classical supersampling
quantum supersampling
Author’s note: This is under construction. There’s one big unsolved problem, which involves using stashed results of one ray evaluation as an oracle during sum refinement. Maybe that can be solved, maybe it can’t. Maybe trying will lead to something better. Please see this whitepaper for more information on this experiment.
Scaling up: what’s the largest number of objects I can hit which will affect this bit? If you sum and then average, what matters is the bit depth of the average.
Supersampling
Modern computer graphics systems use supersampling to produce high-quality images. Instead of evaluating each pixel once, each pixel is treated as a smaller image composed of sub-pixels. Each sub-pixel is evaluated, the resulting colors are combined (usually as a sum or an average), and only the combined value is stored in the final image.
The cost of producing an image increases with the number of pixels and sub-pixels, and with the per-pixel evaluation
cost. For an image made up of 32x32 pixels, where each pixel is evaluated as 8x8 sub-pixels, a complete evaluation
would require 65,536 separate sub-pixel evaluations, each of which must perform separate computation on the scene
data in order to produce the sub-pixel color. The sub-pixel groups are combined, resulting in an image of 1,024 pixels.
In order to increase performance, most modern systems reduce the number of samples per pixel. Ray tracing systems
will send out one ray per sample, distributing them at random or in a pattern.
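As a concrete sketch of this classical process (all helper and variable names here are invented for illustration, not from the paper's implementation):

```javascript
// Classical supersampling sketch: each pixel is averaged over an 8x8 grid
// of sub-pixel evaluations; only the combined value is kept.
function renderPixelClassical(px, py, scene, evaluateSubPixel) {
  var grid = 8;                  // 8x8 sub-pixels per pixel
  var sum = 0;
  for (var sy = 0; sy < grid; sy++) {
    for (var sx = 0; sx < grid; sx++) {
      // sub-pixel center, in pixel coordinates
      var x = px + (sx + 0.5) / grid;
      var y = py + (sy + 0.5) / grid;
      sum += evaluateSubPixel(x, y, scene);  // the expensive step
    }
  }
  return sum / (grid * grid);    // store only the combined value
}

// Toy evaluator: color 1 inside a rectangle, 0 outside.
var scene = { x1: 0.25, x2: 0.75, y1: 0.0, y2: 1.0 };
function hitTest(x, y, s) {
  return (x >= s.x1 && x <= s.x2 && y >= s.y1 && y <= s.y2) ? 1 : 0;
}

// Half of the sub-pixel columns land inside the rectangle, so the
// averaged color is 0.5.
var color = renderPixelClassical(0, 0, scene, hitTest);
```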
To render a single pixel (made of sub-pixels), the classical process shown below takes place. In contrast, the method described in this paper uses the quantum process shown after it.
[Figure: Classical Supersampling. Flow: scene data feeds an expensive evaluation of one sub-pixel, producing one sub-pixel color; an accumulate-sum step and a next-sub-pixel-position step repeat for all sub-pixels, yielding the pixel color. The sub-pixels are evaluated, but not stored in the image.]
The expensive pixel evaluation is performed only once per pixel, instead of once per sub-pixel. This means the scene data is no longer needed after the first step, and as long as the process of refining the sum precision can be made efficient, high-quality results may be produced with less computation.
Note: Readers familiar with QC issues may point out that the refinement cycle still requires an oracle, which usually
requires a pixel evaluation. This is not solved yet, but I have a neat idea which seems promising. Read on.
[Figure: Quantum Supersampling. Flow: scene data feeds a single expensive sub-pixel evaluation applied at all sub-pixel positions at once, producing all sub-pixel colors; a refine-sum-precision step, repeated for the desired precision, yields the pixel color.]
Author’s note: Eliminating re-evaluation is the “unsolved problem” mentioned earlier.
Component 1: Sub-Pixel Evaluator
For both classical and quantum solutions, we need a way to turn the scene data into a color, for a given pixel position. This can get very (very) complicated, so we’ll start with a very simple version, for illustration. Suppose each pixel is composed of 1x8 sub-pixels, and the scene data contains one rectangular object.
In this case, our scene data is composed of obj_x1 and obj_x2, plus the object color and the background color. For each sub-pixel, ray_x is the x position. If the ray hits the object, that sub-pixel gets the object color. The logic is pretty easy, but slightly different for the quantum (reversible) version than for the classical version:
Classical version
function trace_one_ray(ray_x, scene)
{
if (ray_x >= scene.obj_x1
&& ray_x <= scene.obj_x2)
return scene.obj_color;
else
return scene.bkg_color;
}
This version does what you’d expect; if the ray is within the object bounds, return the object’s color. Otherwise, return the background color.
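As a minimal check, the classical tracer can be summed exhaustively over a 1x8 pixel. The scene values below are assumptions chosen to match the three-hit test that appears later (an object spanning x = 2 to 4):

```javascript
// Classical tracer from above, plus an exhaustive 1x8 sub-pixel sum.
// Scene values are assumed to match the three-hit test: object spans x = 2..4.
function trace_one_ray(ray_x, scene) {
  if (ray_x >= scene.obj_x1 && ray_x <= scene.obj_x2)
    return scene.obj_color;
  else
    return scene.bkg_color;
}

var scene = { obj_x1: 2, obj_x2: 4, obj_color: 1, bkg_color: 0 };
var sum = 0;
for (var ray = 0; ray < 8; ray++)
  sum += trace_one_ray(ray, scene);  // rays 2, 3, and 4 hit

// sum is 3, the exact supersampled value for this pixel
```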
Quantum version
function trace_one_ray_quantum(ray_x, scene, out_color)
{
    // Condition on the sign (high) bits of both subtraction results.
    var high_bit = 1 << (ray_x.numBits - 1);
    var sign_mask = qintMask([ray_x, high_bit,
                              scene.obj_x2, high_bit]);
    scene.obj_x2.subtract(ray_x);    // obj_x2 -= ray_x
    ray_x.subtract(scene.obj_x1);    // ray_x  -= obj_x1
    // If both sign bits are 0 (both results non-negative), swap in the color.
    out_color.exchange(scene.obj_color, ~0, 0, sign_mask);
    ray_x.add(scene.obj_x1);         // uncompute: restore ray_x first...
    scene.obj_x2.add(ray_x);         // ...then restore obj_x2
}
We need to do things a little differently here, but it’s still simple. We set up some condition bits so we can detect negative values, then subtract ray_x from obj_x2 and obj_x1 from ray_x. If both results are non-negative, we swap in the object’s color. Afterwards we add the values back, uncomputing the subtractions so the input registers are restored.
Here’s the circuit produced by the quantum version, for 3 bits of position and 1 bit of color:
Notice that the “set color” step is an exchange, not a set. That works in this simple case, but we’ll need to be careful in
more complex cases (such as multiple objects with the same x value but different z).
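The sign-bit trick can be emulated classically to confirm the logic. This is an illustrative sketch, not the quantum circuit itself; the scene values (an object spanning x = 2 to 4) are assumptions matching the three-hit test. Note that with a fixed register width, a difference that overflows the signed range can mis-flag the sign bit, but every ray in this example scene still produces the correct result:

```javascript
// Classical emulation of the reversible comparison: work in n-bit registers,
// subtract, and inspect the sign (high) bits, as the quantum circuit does.
var BITS = 3;
var MASK = (1 << BITS) - 1;          // 3-bit register: raw values 0..7
var SIGN = 1 << (BITS - 1);          // high bit acts as the sign flag

function hitBySignBits(ray_x, obj_x1, obj_x2) {
  var d1 = (obj_x2 - ray_x) & MASK;  // obj_x2 -= ray_x
  var d2 = (ray_x - obj_x1) & MASK;  // ray_x  -= obj_x1
  // "Exchange the color" only if both sign bits are 0 (both non-negative).
  return ((d1 & SIGN) === 0 && (d2 & SIGN) === 0) ? 1 : 0;
}

var hits = [];
for (var ray = 0; ray < 8; ray++)
  hits.push(hitBySignBits(ray, 2, 4));
// hits is [0, 0, 1, 1, 1, 0, 0, 0], matching the three-hit test
```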
Both of these work, and accomplish the task. Just to check, there’s a test harness and a full running sim at this link: [add link]

[Figure: the three-hit test scene. Sub-pixel positions 0 through 7 along ray_x; the object spans from obj_x1 to obj_x2; obj_color and bkg_color complete the scene data. Positions 2, 3, and 4 hit (shown as 1 1 1).]
…and when run with our three-hit test, the exhaustive search of each version returns the number 3, as it should.
Classical version
classical ray 0 = 0
classical ray 1 = 0
classical ray 2 = 1
classical ray 3 = 1
classical ray 4 = 1
classical ray 5 = 0
classical ray 6 = 0
classical ray 7 = 0
Exhaustive classical sum: 3
Quantum version
quantum ray 0 = 0
quantum ray 1 = 0
quantum ray 2 = 1
quantum ray 3 = 1
quantum ray 4 = 1
quantum ray 5 = 0
quantum ray 6 = 0
quantum ray 7 = 0
Exhaustive quantum sum: 3
To speed up the classical case, we can fire only a few of the rays, and then approximate an answer from the results. This
causes noise. For example, by picking two rays in the case above, we could receive 0, 1, or 2 hits, which we’d interpret as
a sum of 0, 4, or 8 for this supersampled pixel. That’s a wide range, but over many pixels it will produce a dither pattern
which comes close to the correct color.
In this case we get a 4x speedup, at the cost of adding noise to the final image. We can use this approach with our current quantum solution, and it will work just as well.
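The two-ray estimate described above can be sketched classically (`sparseSum` is a hypothetical helper name; scene values match the three-hit test):

```javascript
// Sparse classical sampling: trace only 2 of the 8 rays and scale up.
// The estimate is always one of 0, 4, or 8, though the true sum is 3;
// across many pixels the noise dithers toward the correct color.
function trace_one_ray(ray_x, scene) {
  return (ray_x >= scene.obj_x1 && ray_x <= scene.obj_x2)
    ? scene.obj_color : scene.bkg_color;
}

function sparseSum(scene, numSamples, numSubPixels) {
  var hits = 0;
  for (var i = 0; i < numSamples; i++) {
    var ray = Math.floor(Math.random() * numSubPixels);  // random sub-pixel
    hits += trace_one_ray(ray, scene);
  }
  return hits * (numSubPixels / numSamples);  // scale 2 samples up to 8
}

var scene = { obj_x1: 2, obj_x2: 4, obj_color: 1, bkg_color: 0 };
var estimate = sparseSum(scene, 2, 8);  // one of 0, 4, or 8
```

The estimator is unbiased: each sample hits with probability 3/8, so the expected estimate is 2 * (3/8) * 4 = 3, the exact sum.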
Release the Quantum
We have another option here, which is to let the quantum version calculate all possible answers at once. That’s what it does naturally, so let’s see that happen. All we need to do is Hadamard transform the ray’s position, and the rest happens automatically.
[Figure: exhaustive search over sub-pixel positions 0 through 7 (hits 1 1 1 at positions 2 through 4), exact sum = 3; versus the classical speedup of choosing fewer samples, approx. sum = (1 * (8 / 2)) = 4.]
This does in fact work, and all rays are traced at the same time. When we read the answer, the quantum state collapses,
and we’re left with a single result, picked at random.
Using quantum superposition
Random quantum ray 1 = 0
Random quantum ray 7 = 0
Random quantum ray 2 = 1
Random quantum ray 0 = 0
Each answer is correct, and consistent with our exhaustive search. This is not really more useful than our existing
methods, but now we can take a look to see what’s happening to the quantum state.
A Peek Inside the Qubits
We can’t inspect the state with an actual device, but in a simulation it’s easy to see what’s happening. First, we initialize ray_x and out_color to zero, and take a peek at the resulting state:
We’ve initialized them to zero. Performing a “read” operation on ray_x will just select one column (based on the listed
probabilities), and zero out everything else. Reading out_color will select one row, and zero out the other one.
So in this case, we can see that reading either one has a 100% chance of returning the zero value. That makes sense, as
we’ve just set both of these variables to zero.
Now we apply our Hadamard:
[Figure: state plot of ray_x values (columns) against out_color values (rows).]
We haven’t touched out_color, so that’s still zero. ray_x has become a superposition of all possible values (8 possible ray
positions, in this case). Their phases and magnitudes are all the same. If we read ray_x now, we’ll get a completely
random value, from our 8 possibilities.
Next, we run the sub-pixel evaluation logic (the ray trace):
Now things are interesting. For each value of ray_x which hits the object, out_color has become 1. When we say “all
possible values can be calculated at once” this is exactly what we’re talking about. The result of every possible sub-pixel
evaluation is contained here, and we can see them all.
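What the state plots show can be reproduced with a tiny classical statevector sketch (illustrative code, not QCEngine calls). Three qubits of ray_x plus one qubit of out_color give 16 amplitudes, indexed as (out_color << 3) | ray_x; the object bounds (x = 2 to 4) match the three-hit test:

```javascript
// Toy statevector: 3 qubits of ray_x plus 1 qubit of out_color = 16 amplitudes.
// Index = (out_color << 3) | ray_x. Real amplitudes suffice; no phases arise.
var state = new Array(16).fill(0);
state[0] = 1;  // initial state: out_color = 0, ray_x = 0

// Hadamard on the ray_x qubits maps |0> to a uniform superposition of 0..7.
for (var x = 0; x < 8; x++) state[x] = 1 / Math.sqrt(8);

// The reversible trace flips out_color wherever the ray hits the object:
// each amplitude moves to the row matching its sub-pixel's color.
function hit(x) { return (x >= 2 && x <= 4) ? 1 : 0; }
var next = new Array(16).fill(0);
for (var x = 0; x < 8; x++) next[(hit(x) << 3) | x] = state[x];
state = next;

// Probability of reading out_color = 1: total squared amplitude on that row.
var p1 = 0;
for (var x = 0; x < 8; x++) p1 += state[8 + x] * state[8 + x];
// p1 is 3/8, so zero is indeed slightly more likely
```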
When we read out_color, we could get either value (zero is slightly more likely in this case). Suppose it returns 1.
[Figure: state plots of ray_x values against out_color values, after the trace and after reading out_color = 1.]
The “out_color = 0” row collapses, and now the only possible values we can get from ray_x are ones consistent with
“out_color = 1”. So when we read ray_x, one of the columns is picked, and we’re done.
…and our program prints this:
Random quantum ray 4 = 1
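The collapse just described can be checked with a small classical statevector sketch (illustrative code, not QCEngine calls; object bounds x = 2 to 4 match the three-hit test). Reading out_color = 1 zeroes the out_color = 0 row and renormalizes the survivors:

```javascript
// Partial measurement sketch: reading out_color = 1 keeps only that row.
function hit(x) { return (x >= 2 && x <= 4) ? 1 : 0; }

// State after the trace: amplitude 1/sqrt(8) on (hit(x) << 3) | x for each x.
var state = new Array(16).fill(0);
for (var x = 0; x < 8; x++) state[(hit(x) << 3) | x] = 1 / Math.sqrt(8);

// Suppose the read returns 1: zero the out_color = 0 row...
var norm = 0;
for (var x = 0; x < 8; x++) {
  state[x] = 0;
  norm += state[8 + x] * state[8 + x];
}
// ...then renormalize the surviving amplitudes.
for (var x = 0; x < 8; x++) state[8 + x] /= Math.sqrt(norm);

// The only ray_x values left are the consistent ones: 2, 3, and 4,
// each now with probability 1/3.
var survivors = [];
for (var x = 0; x < 8; x++)
  if (state[8 + x] !== 0) survivors.push(x);
```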
For supersampling, we don’t actually care about the individual values; we just want the sum. So now we need another
component to produce this.
Component 2: Quantum Count
With our simple 1-bit color setup from the previous section, a count is the same as a sum.
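Since each color here is a single bit, a pixel's sum is literally a count of hits; a one-line classical check (illustrative only):

```javascript
// With 1-bit colors, the sum over sub-pixel colors equals the count of hits.
var colors = [0, 0, 1, 1, 1, 0, 0, 0];  // per-sub-pixel colors, three-hit test
var sum = colors.reduce(function (a, b) { return a + b; }, 0);
var count = colors.filter(function (c) { return c === 1; }).length;
// sum and count are both 3
```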
[Under Construction]