(WIP) Smooth rendering – log density mapping
Plenty of my concepts require point-based rendering, i.e. setting millions of points on the canvas. This method sometimes works and sometimes doesn’t when using Java2D (the default Processing rendering method). There is one main problem: over-saturation. It is caused by immediately blending each pixel with the canvas: color channels are added with a weight taken from alpha, and after just tens or hundreds of points a given pixel is fully saturated.
To fight this issue I’ve created a small library which implements an algorithm called log density mapping (or rendering). The idea is based mainly on the algorithms described in The Fractal Flame Algorithm paper and the Physically Based Rendering code.
I will construct a small log density render engine from scratch, progressively adding all the necessary features. The presented code is in Java/Processing. Each chapter has its own sketch.
This renderer is also available in the Clojure2d library (the renderer function).
To show the difference let’s compare two images: the left one is created with the native Java2D rendering engine, the right one with the log density renderer.
The concept
Instead of rendering each pixel immediately, let’s define two phases:
- Accumulate the number of hits and color information for a given pixel
- Render the actual image, calculating the final color and alpha
What is important: both phases are independent.
Step 0 – native rendering (CODE)
First, let’s define the formula for our object: the nebula, which is just a 2D Gaussian blob distorted by scaled Perlin noise:
float noiseScale = 50.0;

PVector calcNebulaPoint() {
  float x = randomGaussian(); // take random point from gaussian distribution
  float y = randomGaussian();
  float r = noiseScale * sqrt(x*x+y*y); // calculate noise scaling factor from distance
  float nx = r * (noise(x, y) - 0.5); // shift x
  float ny = r * (noise(y-1.1, x+1.1, 0.4) - 0.5); // shift y
  return new PVector(100.0*x+nx, 100.0*y+ny);
}
To render it, just call the above function millions of times with a highly transparent white color.
After a while you’ll see that the object saturates quickly and loses details.

Java2D rendering
Step 1 – linear density (CODE)
Now, the simplest algorithm. When you set a pixel you need to increment the number of hits for this pixel. The number of hits will determine the color saturation or alpha.
To do this, we will create a renderer class containing (a minimal sketch follows the list):
- an array of ints – to simulate the canvas and keep the number of hits for each pixel
- a function which increments a given pixel
- a function which finally renders the image
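Such a renderer could look roughly like this (a minimal sketch; the names Renderer, hits and add are illustrative and the real class in the accompanying script may differ):

class Renderer {
  int w, h;
  int[] hits; // one hit counter per canvas pixel

  Renderer(int w, int h) {
    this.w = w;
    this.h = h;
    hits = new int[w*h];
  }

  // increment the hit counter for a point; ignore points outside the canvas
  void add(float x, float y) {
    int px = (int)x;
    int py = (int)y;
    if (px >= 0 && px < w && py >= 0 && py < h) {
      hits[py*w + px]++;
    }
  }
}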
Final rendering
After adding the desired number of points (say: hundreds of millions) we need to generate the final image. How do we convert hits into pixels? What are hits, actually?
We can interpret hits as the alpha channel for a pixel. At the beginning we will treat 0 as no color at all, i.e. the background. The pixel with the maximum hits value will be our most saturated point, i.e. white in our example. The remaining values are just the result of linear interpolation.
So the rendering algorithm goes like this (a sketch follows the list):
- Find the maximum value among all hits
- For each pixel, scale down the hits value to the 0.0 – 1.0 range
- Set the color as a linear interpolation between the background and the most saturated color (white)
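A sketch of this rendering step, as a method of the hypothetical Renderer above (the target PImage is assumed to have the same dimensions as the renderer):

void render(PImage img) {
  int mx = 1;
  for (int i = 0; i < hits.length; i++) mx = max(mx, hits[i]); // find the maximum hit count

  img.loadPixels();
  for (int i = 0; i < hits.length; i++) {
    float alpha = (float)hits[i] / mx; // scale down to the 0.0 - 1.0 range
    img.pixels[i] = lerpColor(color(0), color(255), alpha); // background -> white
  }
  img.updatePixels();
}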
That’s all for the first step. And the result:

Oh yes… oh no! 😦 too dark, no details…
Gamma correction (CODE)
The main reason for that is simple: there is a big gap between the maximum number of hits and the rest of the image. We can try to fix it by adding gamma correction to our rendering function:
float alpha = pow(hits[i]/mx, 1.0/2.4); // gamma correction

Much, much better. But still not perfect: the contrast seems low. Decreasing gamma doesn’t help; low-hits pixels become easily visible, which is not good.

Step 2 – log function to the rescue (CODE)
To remove the gap between low- and high-hits pixels, let’s treat the collected values as if they were on a log scale. This way the pixels with the highest hits do not differ drastically from pixels with a lower number of hits.
This time gamma is set to 2.0 to darken the image.
mx = log(mx+1.0); // before the loop
...
float alpha = pow(log(hits[i]+1.0)/mx, 2.0);

Yes! That’s it! Contrast and brightness are ok. Low density points are barely visible. Lovely.
Indeed?
Step 3a – but… where are the colors? (CODE)
Yes, indeed, there are no colors. So far we can only interpolate between background and foreground, but that’s not enough. We need multi-color rendering. To get it we need to add more elements and collect more information.
We leave the log density part as it is. Color will be collected in three arrays, one for each color channel: red, green and blue. We will sum the channel values there. A sketch of the accumulation step is shown below.
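The accumulation could look roughly like this (a sketch; r, g and b are assumed to be double arrays of the canvas size, kept next to the hits counters):

// add a colored point: bump the hit counter and accumulate each channel sum
void add(float x, float y, color c) {
  int px = (int)x;
  int py = (int)y;
  if (px >= 0 && px < w && py >= 0 && py < h) {
    int idx = py*w + px;
    hits[idx]++;
    r[idx] += red(c);   // sum the red channel
    g[idx] += green(c); // sum the green channel
    b[idx] += blue(c);  // sum the blue channel
  }
}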
During rendering the final color is the average of the collected channel sums.
// calculate average for each channel separately
int rr = (int)(r[i]/hitsNo); // simple average
int gg = (int)(g[i]/hitsNo);
int bb = (int)(b[i]/hitsNo);

So far so good. We have colors, but the image is dull and dark, and the contrast is low. There are three tricks we can apply here.
Step 3b – better colors (CODE)
First: gamma for colors, vibrancy
Since we have gamma correction for alpha, we can introduce a separate gamma correction for the color channels. Additionally we can add another parameter which controls the mix between the calculated color (as before) and the gamma corrected color. This way you can control how much gamma corrected color you want to use (0.0 – no correction, 1.0 – only gamma corrected color).
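A sketch of how this could be computed for one channel, using the cgamma and color_mix parameters listed later (the helper name vibrancy is mine; v is the averaged channel value from the previous step, in the 0–255 range):

// mix the averaged channel value with its gamma corrected version
float vibrancy(float v, float cgamma, float color_mix) {
  float corrected = 255.0 * pow(v/255.0, cgamma); // gamma corrected channel; larger cgamma darkens
  return lerp(v, corrected, color_mix);           // 0.0 - no correction, 1.0 - only corrected color
}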
So let’s set gamma for alpha to 1.0 and gamma for color to 2.0. Below you can see three results with color mix values: 0.0 (no gamma for color), 0.5 and 1.0 (full gamma for color).
Second: brightness and contrast
Operating on gamma alone is not enough, as you can see above. More gamma for color means a darker image and the necessity to decrease gamma for alpha, which can cause artifacts.
Let’s add some postprocessing for the pixel: contrast and brightness. We’ll use the algorithm from the JH Labs code. There are two parameters: one controlling brightness and a second controlling contrast. A value of 1.0 means no change at all.
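In the spirit of that filter, a sketch of the per-channel transfer function (on values scaled to 0.0–1.0; this is my simplified formulation, not a verbatim copy of the JH Labs code, and the helper name is mine):

// apply brightness and contrast to a channel value in the 0.0 - 1.0 range
float adjustBrightnessContrast(float v, float brightness, float contrast) {
  v = v * brightness;              // brightness: plain scaling, 1.0 = no change
  v = (v - 0.5) * contrast + 0.5;  // contrast: stretch around the midpoint, 1.0 = no change
  return constrain(v, 0.0, 1.0);
}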
See the images below for a couple of parameter combinations.
Third: saturation
To adjust the color saturation we can use the HSB color space. Fortunately java.awt.Color provides convenient methods to convert RGB to HSB and vice versa. Adjusting is just a simple multiplication by a given factor. Again, a value of 1.0 doesn’t change anything.
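A sketch of this adjustment (the helper name is mine; java.awt.Color.RGBtoHSB and HSBtoRGB are the actual conversion methods):

// multiply the saturation of a pixel by the given factor (1.0 = no change)
color adjustSaturation(int rr, int gg, int bb, float saturation) {
  float[] hsb = java.awt.Color.RGBtoHSB(rr, gg, bb, null); // RGB -> HSB
  float s = constrain(hsb[1] * saturation, 0.0, 1.0);      // scale the saturation channel
  return java.awt.Color.HSBtoRGB(hsb[0], s, hsb[2]);       // HSB -> packed (opaque) RGB
}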
All parameters:
So we have all the necessary parameters to control the final rendering. These are:
- gamma – gamma correction for the calculated alpha channel
- cgamma – gamma correction for the calculated color channels
- color_mix – a ratio to control how much of the gamma corrected colors we want to use
- brightness
- contrast
- saturation
That’s all here.
Step 4 – what about antialiasing? (CODE)
Now the hardest part. Our method is good when we draw blobs, but sometimes aliasing artifacts may appear. How do we deal with them?
There are mainly two strategies: oversampling and reconstruction kernels. We will use the latter.
Instead of setting the pixel color at a given position we will set a “splat”. The color will be spread around the given position with some weights. Then we will use the sum of weights to calculate alpha and a weighted average for the color.
The weights are taken from kernels. A sketch of such a splat is shown below.
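In this sketch the plain int hit counters are assumed to be replaced by a double weights array, and kernelWeight stands for a kernel function (a Gaussian example follows in the Kernels section); all names here are illustrative:

// spread a point over its neighbourhood, weighting by distance from the real position
void addSplat(float x, float y, color c, float radius, float spread) {
  for (int py = (int)(y - radius); py <= (int)(y + radius); py++) {
    for (int px = (int)(x - radius); px <= (int)(x + radius); px++) {
      if (px < 0 || px >= w || py < 0 || py >= h) continue;
      float wt = kernelWeight(px + 0.5 - x, py + 0.5 - y, radius, spread);
      if (wt <= 0.0) continue;
      int idx = py*w + px;
      weights[idx] += wt;      // the sum of weights plays the role of the hit counter
      r[idx] += wt * red(c);   // weighted color accumulation
      g[idx] += wt * green(c);
      b[idx] += wt * blue(c);
    }
  }
}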
Kernels (CODE)
Reconstruction kernels are 2D functions which help to remove higher frequencies. They blur the image a little bit, which helps to avoid aliasing artifacts.
Read more about it at the following links:
I prepared a tool which helps visualize all the defined kernels. These are:
- Gaussian – key 0
- Windowed Sinc – key 1
- Blackman-Harris – key 2
- Triangle – key 3
- Hann – key 4
When you move the mouse around you change the radius (x axis) and spread (y axis) of the filter. A sketch of the Gaussian kernel is shown below.
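As an example, a Gaussian kernel weight in the style of the Physically Based Rendering Gaussian filter (the function name matches the hypothetical kernelWeight used in the splat sketch above; spread controls the falloff and the value is shifted so it reaches zero at the radius):

// weight of a sample at offset (dx, dy) from the real point position
float kernelWeight(float dx, float dy, float radius, float spread) {
  float g = exp(-spread * (dx*dx + dy*dy));
  float edge = exp(-spread * radius * radius); // kernel value at the radius
  return max(0.0, g - edge);                   // clamp to zero outside the radius
}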
Pattern
Let’s render a new pattern using each filter. A pattern point is calculated as follows:
float x = random(-1, 1);
float y = random(-1, 1);
float angle = atan2(y, x);
float r = 1.0+x*x+y*y;
color c = color(255*sq(sq(sin(200/r*angle))));
lr.add(400.0+x*400, 400+y*400, c);
And our image rendered with the Gaussian filter:

Step 5, final – multithreading (CODE)
We have all the rendering elements done already. There is another aspect: speed. Although the algorithm is fast enough, we can improve performance and use all processor cores for rendering.
There are three places where multithreading can be used: plotting, merging and rendering.
Until now we used one thread to add points and render the result. But you can easily notice that you can add points to separate renderers and combine all of them just by adding the values from the hits and color arrays. The multithreaded version will work the following way:
- Create as many renderer objects as you have processor cores
- Run threads to plot a batch of points
- Collect the results and merge them into one by adding the values from the hits and color arrays
- Do the final rendering
- Repeat from point 2 to add more points.
What’s interesting, merging the arrays and the final rendering can also be run in parallel.
Futures
Instead of normal threads from Processing we will use Futures. A Future is a handle to a task run on another thread which gives back a value when the task is finished. If the task is not finished yet, asking for the result blocks. We don’t have to do any external synchronization. Just run it, do something else and collect the results (or wait and collect).
The Future class and related classes have been part of the java.util.concurrent package since Java 1.5.
I wrapped the Future mechanism into a Runner class in the multithread.pde file. You can just copy & paste this code into your sketch and use it in other cases. There are two external variables which are used: the number of processors and the thread pool (the executor object).
int numberOfProcessors = Runtime.getRuntime().availableProcessors();
ExecutorService executor = Executors.newCachedThreadPool();
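The Runner class itself isn’t listed here; a minimal sketch of what such a wrapper over Futures might look like (the actual multithread.pde version may differ in details):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

class Runner<T> {
  ExecutorService executor;
  List<Callable<T>> tasks = new ArrayList<Callable<T>>();

  Runner(ExecutorService executor) {
    this.executor = executor;
  }

  void addTask(Callable<T> task) {
    tasks.add(task);
  }

  // submit all tasks to the pool and block until every result is ready
  List<T> runAndGet() throws InterruptedException, ExecutionException {
    List<Future<T>> futures = new ArrayList<Future<T>>();
    for (Callable<T> task : tasks) {
      futures.add(executor.submit(task));
    }
    List<T> results = new ArrayList<T>();
    for (Future<T> f : futures) {
      results.add(f.get()); // get() blocks until the task is finished
    }
    return results;
  }
}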
Callable
Callable is a generic interface to implement when you want to create a task run as a Future. The type T is the return type of the task (produced by the T call() function).
Drawing points
To draw points we have to create a separate class for the task. Each task will create its own renderer object, plot the points and return the result.
class DrawNebulaTask implements Callable<Renderer> {
  Renderer call() {
    Renderer lr = new Renderer(800, 800, filter); // create renderer for this thread
    drawNebula(lr, pointsNo);
    return lr;
  }
}
To run the tasks and merge the results into the final object we need to:
- Create Runner
- Add tasks
- Run tasks and gather results
- Merge results to the final object
As in the following function:
void drawAndMergeNebulas() throws InterruptedException, ExecutionException {
  Runner<Renderer> threads = new Runner<Renderer>(executor);
  for (int i=0; i<numberOfProcessors; i++) {
    threads.addTask(new DrawNebulaTask());
  }
  for (Renderer r : threads.runAndGet()) {
    buffer.merge(r);
  }
}
Merging
Merging is necessary to combine all the results from separate threads into one. It’s just adding values from corresponding arrays to the target. And it can also be done in parallel, one thread per array.
private class MergeTask implements Callable<Boolean> {
  double[] in, buffer;

  MergeTask(double[] buffer, double[] in) {
    this.in = in;
    this.buffer = buffer;
  }

  Boolean call() {
    for (int i=0; i<in.length; i++) {
      buffer[i] += in[i];
    }
    return true;
  }
}
Rendering
The final rendering can also be refactored into a multithreaded version. Our target PImage pixels array can be divided into separate ranges, each filled by the rendering function in a separate thread. Check out the code for details; a sketch of such a task is shown below.
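A sketch of such a range task (renderPixel stands for whatever per-pixel computation the final rendering does; both names are mine):

// render one band of the final image; each thread gets its own pixel range
class RenderTask implements Callable<Boolean> {
  PImage img;
  int start, end; // pixel index range, [start, end)

  RenderTask(PImage img, int start, int end) {
    this.img = img;
    this.start = start;
    this.end = end;
  }

  Boolean call() {
    for (int i = start; i < end; i++) {
      img.pixels[i] = renderPixel(i); // hypothetical per-pixel rendering (log, gamma, color, ...)
    }
    return true;
  }
}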
I measured the speed gain only for this step and it’s about 3x faster than the single-threaded version (30ms vs 90ms).
Other concepts
There are some other elements which can be implemented here.
- Density estimation: blur low density regions to remove single pixels. The blur radius is related to density (high density – no blur, low density – wide blur radius)
- Other color spaces can be used, like XYZ, LMS etc.
- Adding color channels separately to simulate chromatic aberrations
- An additional weight for the color to control pixel strength (semi-alpha)
- An additional Gaussian spread to simulate blur.