Infinite Color Engine#1261

Merged
awawa-dev merged 44 commits into master from infinite_color_engine on Sep 27, 2025

Conversation

@awawa-dev
Owner

@awawa-dev awawa-dev commented Aug 28, 2025

Infinite Color Engine & New Smoothing Algorithms

I’m pleased to present a feature I’ve been looking forward to for a long time and have worked hard to bring to life:
the Infinite Color Engine, along with completely new smoothing algorithms.
This has been on the roadmap for quite a while, but other priorities had to come first.


What is the Infinite Color Engine?

In short, it marks a clean break from the old 24-bit color space, moving to floating-point precision while still maintaining performance.

You might ask: Why bother, if most sources don’t provide more than 24-bit color, except perhaps the P010 format?

Well, here’s the thing:
such sources have always been available, even with something as basic as a cheap MS2109 grabber or a flatbuffers external source.
In our application, when averaging colors for each LED, we’ve always had access to a high-precision theoretical color space.
Until now, that precision was being brutally truncated to 24-bit.

Another hidden “source” of precision has long been smoothing, which interpolates intermediate colors.
This entire subsystem has now been rewritten to handle floating-point numbers at both input and output.


What Does This Mean in Practice?

  • The obvious one: taking advantage of lamps that support an extended color palette
  • Reduced banding, noise, and flicker
  • More stable color transitions across frames
  • Smoother post-processing corrections

Previously, when the final averaged color happened to fall on the boundary between two integers, even tiny changes in the source could trigger sudden jumps between frames.
By switching to floating-point, this issue has been drastically minimized.
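The boundary effect described above can be sketched in a few lines. This is a hypothetical illustration, assuming only that the old pipeline truncated the averaged float color to 8 bits per channel; `truncate8` is my own name, not HyperHDR code:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the old truncation step (not HyperHDR's actual code).
// Two consecutive frames whose averaged channel value straddles an integer
// boundary produce a full 8-bit step in the old pipeline, even though the
// underlying float values differ only by ~0.002.
inline uint8_t truncate8(double averagedChannel)
{
    return static_cast<uint8_t>(averagedChannel); // old path: drop the fraction
}
```

For example, averaged values of 127.999 and 128.001 truncate to 127 and 128 respectively: a visible one-step jump from an almost invisible source change, which the float pipeline avoids entirely.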


Hardware Support

Extended color support is now available in drivers for:

  • All Philips Hue lamps using the Entertainment API (v1 & v2)
  • HD108

This covers a wide range of popular devices.
On my own Philips Hue lamps, the first noticeable improvement was much smoother transitions:
where before there were obvious “steps” in color changes, the shift now feels more continuous.


Why “Infinite”?

The name isn’t just marketing.
The Infinite Color Engine works with a color palette of 1.24 × 10²⁷ possible values.
For comparison:

  • Old 24-bit color: 1.67 × 10⁷ values
  • New floating-point engine: 1.24 × 10²⁷ values

The difference is, quite literally, astronomical. 🌌
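For the curious, the quoted figure is consistent with roughly 2^90 ≈ 1.24 × 10²⁷, i.e. about 30 bits of usable precision per channel over three channels. That per-channel breakdown is my reading, not an official accounting from the PR:

```cpp
#include <cassert>
#include <cmath>

// 2^90 ~= 1.238e27 matches the quoted palette size of the new engine;
// 2^24 ~= 1.677e7 is the old 24-bit palette. (The per-channel accounting
// of ~30 bits x 3 channels is an assumption, not from the PR.)
inline double newPaletteSize() { return std::pow(2.0, 90.0); }
inline double oldPaletteSize() { return std::pow(2.0, 24.0); }
```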


New smoothing algorithms

In addition to the transition to floating-point numbers, the focus was on adding parameters that control the abruptness of color changes (brightness limiter or averaging with the previous frame) and adding the ability to work in the YUV space, not just RGB.

  • Linear Interpolator: an adaptation of a legacy algorithm from previous HyperHDR versions, rewritten to use floating-point arithmetic for higher precision. It works by linearly interpolating the current color towards a target color, ensuring the transition completes over a defined duration regardless of the frame rate. Its key characteristic is the ability to smoothly retarget mid-animation, initiating a new, full-duration transition from its current state toward the new goal.

  • RGB Infinite Interpolator: smoothly animates RGB colors over a set duration, offering two distinct modes for the transition. The first is a direct linear interpolation for a straight path between colors, while the second is a smoothed mode where the current color gracefully 'chases' the target to create an ease-in/ease-out effect. A key feature is its ability to intelligently rescale an animation's duration when interrupted, ensuring a perceptually constant speed of change.

  • YUV Infinite Interpolator: smoothly interpolates colors by operating in the YUV color space for more perceptually uniform transitions. Its main feature is limiting the rate of luminance change in each step, preventing sudden, jarring flashes of brightness. This ensures a visually pleasing effect, even if it extends the animation beyond its initially set duration to maintain that smoothness.

  • Hybrid Physics Infinite Interpolator: smoothly transitions between colors using a hybrid physical model. A linear 'pacer' defines the direct path and timing to the target color, while the actual output color follows this pacer like an object attached to a damped spring. This two-part approach creates fluid, natural-looking animations with customizable inertia and overshoot, all while operating in the perceptually-uniform YUV color space.

  • Exponential Infinite Interpolator: a classic exponential implementation of smoothing that updates LED colors toward a target, reacting quickly to large differences and slowing as they approach the target, producing smooth, natural ambient lighting transitions.
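To give a flavor of what these interpolators do, here is a minimal sketch of the classic exponential idea from the last entry. The struct, names, and the frame-time-based alpha are my own, not HyperHDR's API:

```cpp
#include <cassert>
#include <cmath>

struct Float3 { float r, g, b; };

// One smoothing step: move the current color toward the target by a fraction
// derived from the elapsed time, so large differences close quickly and the
// approach slows as the color nears the target.
inline Float3 expStep(Float3 current, Float3 target, float dtMs, float tauMs)
{
    const float alpha = 1.0f - std::exp(-dtMs / tauMs); // in (0, 1)
    return { current.r + alpha * (target.r - current.r),
             current.g + alpha * (target.g - current.g),
             current.b + alpha * (target.b - current.b) };
}
```

Deriving alpha from dt/tau makes the speed of change independent of the frame rate, in the spirit of the rewritten smoothing subsystem.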

Processing Pipeline (Before → Now)

⬇️ 1. Video Frame Area Averaging for LED

  • Before: color (float) truncated to 24-bit at output
  • Now: output remains as float

⬇️ 2. Post-Processing

  • Before: received, processed in pipeline, and output 24-bit color. Some functions, such as changing luminance or saturation, could use float internally, but they still received and output 24-bit color
  • Now: both processing and input/output work on float colors

⬇️ 3. Smoothing

  • Before: input/output in 24-bit color, internal processing on float
  • Now: input, processing, and output all on float colors

⬇️ 4. LED Drivers

  • Before: always received and rendered 24-bit color
  • Now: if the device supports it (Philips Hue, HD108) → rendering at higher precision;
    otherwise → fallback to 24-bit rendering
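The fallback in step 4 amounts to a single quantization at the very end of the pipeline. A sketch, assuming device colors are kept as 0–1 floats; `quantize8` is a hypothetical helper, not the actual driver interface:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Quantize a 0-1 float channel to 8 bits only at the driver boundary, and
// only for devices without extended-precision support. Rounding (rather than
// truncating) halves the worst-case quantization error.
inline uint8_t quantize8(float channel01)
{
    const float clamped = std::clamp(channel01, 0.0f, 1.0f);
    return static_cast<uint8_t>(std::round(clamped * 255.0f));
}
```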

The post-processing Pipeline (Before → Now)

Old approach (24-bit pipeline)

  • Operated directly on 8-bit nonlinear sRGB values.
  • Every transformation (brightness, per-channel gamma, calibration, temperature, backlight) worked on already quantized integers → rounding/precision loss at every step.
  • Per-channel gamma correction was not equivalent to standard gamma adjustment, and could distort hues 😬.
  • Input being nonlinear made operations like calibration and saturation adjustment less accurate (they are meant to be done in linear space).

New approach (float3 pipeline)

  • Operates first in linear sRGB space with float precision → avoids cumulative rounding errors.
  • Color temperature applied first: tinting works best before calibration so that calibration LUT/matrix can correct consistently.
  • Calibration in colorspace applied in linear domain → mathematically correct, ensures LUT/matrix is used as designed.
  • Scale color output (user parameter)
  • Convert back to nonlinear sRGB only once: avoids repeated conversions and precision loss.
  • Global user gamma correction applied consistently on all channels → perceptually uniform adjustment ✅.
  • Brightness and saturation after calibration: adjusting perceptual properties after ensuring primaries are correct.
  • Minimal backlight last: guarantees final safety floor regardless of prior adjustments.
  • Limit power output: scale down only if the power limit (user parameter) is exceeded

👉 Overall: the new pipeline is much more accurate, perceptually consistent, and scientifically correct. The old one was simpler but mathematically flawed.

Old approach (24-bit):
  Input: sRGB nonlinear (24-bit) 😬. All further steps already lose accuracy due to rounding 😬
  Steps:
  1. Apply Brightness & Saturation
  2. Apply User Gamma per channel 😬
  3. Calibrate in Colorspace (on nonlinear data 😬)
  4. Apply Color Temperature
  5. Apply Minimal Backlight
  Output: sRGB nonlinear (24-bit)

New approach (float3):
  Input: sRGB linear (float3). Correct starting point for calibration & math ✅
  Steps:
  1. Apply Color Temperature (before calibration ✅)
  2. Calibrate in Colorspace (linear domain ✅)
  3. Scale color output (user parameter ✅)
  4. Convert to nonlinear sRGB (only once ✅)
  5. Apply User Gamma (global, consistent ✅)
  6. Apply Brightness & Saturation (perceptual ops ✅)
  7. Apply Minimal Backlight (final floor ✅)
  8. Limit power output (scale down only if the power limit is exceeded ✅)
  Output: sRGB nonlinear (float3 → quantized later)
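The "convert to nonlinear sRGB only once" step refers to the standard IEC 61966-2-1 transfer functions. A reference sketch; HyperHDR may use its own variant or approximation:

```cpp
#include <cassert>
#include <cmath>

// Standard sRGB transfer function (nonlinear -> linear) and its inverse,
// with the usual linear segment near black.
inline float srgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

inline float linearToSrgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```

Doing the mathematical operations in the linear domain and converting back once at the end is exactly what removes the cumulative rounding of the old pipeline.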

And One More Thing: Modernizing the Codebase

Since we recently moved to Qt 6.8.3 or higher
(remember, Qt 5.15—released over a decade ago—is now end-of-life),
the next natural step in this PR is adopting C++20.

This isn’t just about upgrading a version number:
it brings us better language features that improve readability, safety, and overall code quality.


Known issues

  • Since the post-processing flow has been verified and rewritten in accordance with the correct processing order, it may happen that if you had a specific color calibration set, you will have to check it and possibly correct it to suit the current pipeline.

  • The configuration of post-processing per LED range was removed because it significantly complicated the code. It is unlikely to return; at least I don't see a reason to migrate it.

  • If you used or compiled HyperHDR code under Qt5 before, it may no longer work: Qt 5.15 has very poor support for C++20. (So far, though, we are still OK with Qt 5.15.)

@awawa-dev awawa-dev force-pushed the infinite_color_engine branch 3 times, most recently from fcb4aec to ef6531c on August 28, 2025 21:19
@awawa-dev awawa-dev force-pushed the infinite_color_engine branch from ef6531c to 93df95c on August 28, 2025 21:30
@satgit62

@awawa-dev

Hi,
That's great news! I think it's brilliant, because many users use Philips Hue lamps with an extended color spectrum.
The new smoothing method also sounds very good. HyperHDR is already perfect, but I think it's good that you're experimenting with new, modern ideas and algorithms to achieve absolute perfection.🤗🙏
As for using Qt 6.8.3, that's not up to me, but to the team that maintains our SDK toolchain. However, I will compile it myself and test all these new features.

@awawa-dev
Owner Author

awawa-dev commented Aug 29, 2025

Let's give Qt5.15 a chance. If the compilation process breaks, we'll assess whether it's worth creating a workaround for Qt5.15, or whether, for example, the cross-compilation package should be updated to Qt6.8. I can help you cross-compile this Qt6.8 package, although it will probably lead to a split with webOS homebrew: another matter is that the version there has long been obsolete, and your repo offers a newer version and many more out-of-the-box features, e.g., ready-made LUT tables.

@satgit62

> Let's give Qt5.15 a chance. If the compilation process breaks, we'll assess whether it's worth creating a workaround for Qt5.15, or whether, for example, the cross-compilation package should be updated to Qt6.8. I can help you cross-compile this Qt6.8 package, although it will probably lead to a split with webos homebrew: another matter is that the version there has long been obsolete, and your repo offers newer version and many more out-of-the-box features, e.g., ready-made LUT tables.

Hi, thank you. I would appreciate your help cross-compiling this Qt6.8 package if it doesn't work with Qt5.15. Yes, I know that webOS homebrew is not kept up to date, which is why I always follow it and always compile the latest packages. I then distributed the self-compiled versions to the webOS community.

@awawa-dev
Owner Author

Hi @satgit62 After a few small changes I've managed to compile HyperHDR on Windows using 5.15. I've updated the PR, so could you test it again? Maybe my concerns about mixing Qt5.15 with C++20 were exaggerated, although often gcc on Linux is the real functional test ;-)

@satgit62

> Hi @satgit62 After few small changes I've managed to compile HyperHDR on Windows using 5.15. I've updated the PR so could you test it again? Maybe my concerns about mixing Qt5.15 with C++20 were exaggerated, although often gcc on Linux is the real functional test ;-)

Hi, thank you very much for the change. I'll test it right away and let you know if it was successful.

@satgit62

@awawa-dev , thanks to the change “Infinite Color Engine: Compatibility with Qt5.15,” it could be compiled for webOS without any problems. I will test it and give my feedback tomorrow. Thank you.

@satgit62

@awawa-dev
Hello,
I was able to do a quick test and I'm already impressed. However, I noticed that the “Smoothing Factor” is missing from the UI settings in both the Windows and WebOS versions, even though it is described in the help section. Or is it hidden among other parameters?

@NatyaSadella

NatyaSadella commented Aug 30, 2025

Nice work, Awawa. I've tried it and the new smoothing is immediately noticeable.

Do we need to re-calibrate our LUTs for this new color engine?

Edit: I see new settings in the image processing tab. About the color temperature settings: is this to "correct" an SK6812 neutral-white version to your recommended cold-white version? Or is this about adjusting your LEDs to match the white point of the TV picture? A D65 white point is recommended almost everywhere; is this setting supposed to match that?

@satgit62

satgit62 commented Aug 30, 2025

The question about temperature settings is valid and would also interest me, but I think everyone should select the temperature of the LEDs accordingly. In the HyperSerial or HyperSerialPico driver, the RGB “white channel aspect” can be balanced. With the WLED controller, you can select the appropriate white balance in the LED settings, and in HyperHDR, this can be balanced using the “Temperature Custom Adjust” function.
However, @awawa-dev should confirm this, as it is only my interpretation.
As far as recalibration is concerned, please also note the issues known to awawa:
“Since the post-processing workflow has been reviewed and rewritten according to the correct processing order, you may need to check a specific color calibration you had set and possibly adjust it to the current pipeline.”

@awawa-dev
Owner Author

awawa-dev commented Aug 30, 2025

Hi
I've updated the description after getting an independent, outside "review" 😉 of our new post-processing path. I'm open to adding new settings, as long as they don't break the current calibration path.

For brightness, you can adjust it in two places:

  • The Brightness(Luminance) & Saturation step, which uses HSL (Hue, Saturation, Luminance). Here, 'L' is for luminance/brightness.

  • The temperature setting. If red equals green and blue, you are essentially adjusting the scale, not the temperature itself.
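In other words, temperature here boils down to per-channel multipliers, so equal factors reduce to a plain brightness scale. A minimal sketch with my own names, not HyperHDR's API:

```cpp
#include <cassert>

struct Float3 { float r, g, b; };

// Temperature as per-channel gains: unequal gains tint the white point,
// while equal gains merely scale overall brightness.
inline Float3 applyTemperature(Float3 c, Float3 gain)
{
    return { c.r * gain.r, c.g * gain.g, c.b * gain.b };
}
```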

I hope this explains how color processing worked previously and how it works now.


The post-processing Pipeline (Before → Now)

Old approach (24-bit pipeline)

  • Operated directly on 8-bit nonlinear sRGB values.
  • Every transformation (brightness, per-channel gamma, calibration, temperature, backlight) worked on already quantized integers → rounding/precision loss at every step.
  • Per-channel gamma correction was not equivalent to standard gamma adjustment, and could distort hues 😬.
  • Input being nonlinear made operations like calibration and saturation adjustment less accurate (they are meant to be done in linear space).

New approach (float3 pipeline)

  • Operates first in linear sRGB space with float precision → avoids cumulative rounding errors.
  • Color temperature applied first: tinting works best before calibration so that calibration LUT/matrix can correct consistently.
  • Calibration in colorspace applied in linear domain → mathematically correct, ensures LUT/matrix is used as designed.
  • Brightness and saturation after calibration: adjusting perceptual properties after ensuring primaries are correct.
  • Convert back to nonlinear sRGB only once: avoids repeated conversions and precision loss.
  • Global gamma correction applied consistently on all channels → perceptually uniform adjustment ✅.
  • Minimal backlight last: guarantees final safety floor regardless of prior adjustments.

👉 Overall: the new pipeline is much more accurate, perceptually consistent, and scientifically correct. The old one was simpler but mathematically flawed.

Old approach (24-bit):
  Input: sRGB nonlinear (24-bit) 😬. All further steps already lose accuracy due to rounding 😬
  Steps:
  1. Apply Brightness & Saturation
  2. Apply User Gamma per channel 😬
  3. Calibrate in Colorspace (on nonlinear data 😬)
  4. Apply Color Temperature
  5. Apply Minimal Backlight
  Output: sRGB nonlinear (24-bit)

New approach (float3):
  Input: sRGB linear (float3). Correct starting point for calibration & math ✅
  Steps:
  1. Apply Color Temperature (before calibration ✅)
  2. Calibrate in Colorspace (linear domain ✅)
  3. Apply Brightness & Saturation (perceptual ops after calibration ✅)
  4. Convert to nonlinear sRGB (only once ✅)
  5. Apply User Gamma (global, consistent ✅)
  6. Apply Minimal Backlight (final floor ✅)
  Output: sRGB nonlinear (float3 → quantized later)

@awawa-dev
Owner Author

Sorry, I missed that question earlier: “Smoothing Factor” is only for the RGB Infinite interpolator (one of the new Smoothing algos).

@satgit62

satgit62 commented Aug 30, 2025

Your explanation was a veritable seminar on what functions and algorithms in HyperHDR can achieve, both old and new. I must admit that even at my advanced age, I am happy to learn and internalize new things. Thank you for that. 🙏🫡 👍

@awawa-dev
Owner Author

awawa-dev commented Aug 30, 2025

@NatyaSadella

> Do we need to re-calibrate our LUTs for this new color engine?

No, you don't.

> Edit: I see new settings in the image processing tab. About the color temperature settings: Is this to "correct" a SK6812 neutral-white version to your recommended cold-white version? Or is this about adjusting your LED's to match the white point of the TV picture? Almost everywhere a D65 white point is recommended, is this setting supposed to match that?

I think it should be disabled by default. In our case, it's a completely subjective option and depends on how you want the white shifted in processing, which of course affects the output. Or/and you can use the HyperSerial/HyperSPI option to calibrate the output colors, which works differently: simultaneous calibration of white color and temperature. By the way, in HyperSerial/HyperSPI the 'cold' setting has equal correction factors (something like setting the same value for all channels here for temperature, i.e. a 'neutral' temperature), because all SK6812 LEDs on the market that I've met already have a red shift. In fact, the SK6812 'cold' is 'neutral' here, and there's no such thing as a real cold-white SK6812.

@satgit62

satgit62 commented Sep 3, 2025

Hello @awawa-dev

I have noticed that something is wrong with the JSON in the “infinite_color_engine” branch. Everything worked in the old version. I also tested it with the Windows version to rule out the possibility that webOS is faulty.
When I send a JSON packet as a POST request to IP:PORT/json-rpc to the RPC HyperHDR service, I either get an error message or no response at all.
The worst thing is that when I try to change the luminance or saturation—whether higher or lower—the LEDs go out completely. After restarting the daemon, the LEDs/bulbs light up again.
Same errors with Curl over SSH.

Example 1:

http://192.168.2.81:8090/json-rpc?request=%7B%22command%22:%22adjustment%22,%22adjustment%22:%7B%22classic_config%22:false,%22temperatureSetting%22:neutral%7D%7D

Log:

Failed to parse json data from JsonRpc@::ffff:192.168.2.61: Error: illegal value at Line: 0, Column: 84, Data: '{"command":"adjustment","adjustment":{"classic_config":false,"temperatureSetting":neutral}}'

Example 2:

http://192.168.2.81:8090/json-rpc?request=%7B%22command%22:%22adjustment%22,%22adjustment%22:%7B%22classic_config%22:false,%22luminanceGain%22:2%7D%7D

LEDs go out completely!

Thank you, and I hope you find a solution.

@awawa-dev
Owner Author

awawa-dev commented Sep 5, 2025

Hi @satgit62
Thanks for testing and reporting these bugs. I've pushed a commit that fixes these issues. In the first case, the neutral value is missing quotation marks. The old HyperHDR UI could generate something like this, but now it quotes the value, so it will be "neutral". Furthermore, the custom temperature values were integers in the 0-255 range instead of 0-1.0 floats, and applying custom values for different settings in color calibration didn't apply correctly anyway.

Now, after each color calibration change, you should have clear confirmation of the settings in the logs.
Post-processing color calibration page was not affected, only remote-control live color calibration and JSON-API playground command generator.

@satgit62

satgit62 commented Sep 5, 2025

Great work, @awawa-dev,
With the new JSONRPC_Schema, Home Assistant can now change values. This also works great internally via SSH and cURL, because with the input hook, you can remap such a command to a button on your remote control, for example.
Thank you 🙏 Now the new webOS Qt6 version exceeds all expectations. It's perfect! 👍

@awawa-dev awawa-dev force-pushed the infinite_color_engine branch from 8a6ae7f to d4aec4f on September 6, 2025 01:07
@awawa-dev
Owner Author

awawa-dev commented Sep 6, 2025

New features:

  • The Hue lamp HyperHDR driver has been significantly rewritten. Protection against inaccuracies when calculating very dark colors has been added to the algorithms. "Candy gamma" is now permanently enabled, as it should be. A bug in the wizard has been potentially fixed: Hue setup doesn't let me click "Save" #1268

  • Processing (the entire color calibration pipeline) was previously disabled for colors and effects; it is now disabled only for colors.

  • Processing: Added the "Scale color output" option (multiplies colors by a user-specified factor).

  • Processing: Added the "Limit power output" option. This works a bit like scaling, but dynamically when a given frame exceeds a specified limit. Please do not confuse this with a power limiter for your power supply, where you can set a specific value in watts. We use an abstract 0-1 scale, not watts, and it only regulates the maximum brightness of scenes.
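A sketch of how such a dynamic limiter could work on the abstract 0-1 scale described above. The per-frame averaging metric and all names are my assumptions, not the actual implementation:

```cpp
#include <cassert>
#include <vector>

struct Float3 { float r, g, b; };

// Scale the whole frame down only when its estimated output level exceeds
// the user limit; frames under the limit pass through untouched.
inline void limitPower(std::vector<Float3>& leds, float limit01)
{
    if (leds.empty() || limit01 <= 0.0f)
        return;
    float sum = 0.0f;
    for (const Float3& c : leds)
        sum += (c.r + c.g + c.b) / 3.0f;
    const float level = sum / static_cast<float>(leds.size()); // frame output, 0..1
    if (level <= limit01)
        return;                                // under the limit: untouched
    const float scale = limit01 / level;       // dynamic per-frame scale-down
    for (Float3& c : leds) { c.r *= scale; c.g *= scale; c.b *= scale; }
}
```

This matches the description: unlike a watt-based power-supply limiter, it only regulates the maximum brightness of scenes relative to an abstract limit.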

The documentation has been updated.

@awawa-dev awawa-dev force-pushed the infinite_color_engine branch from 0a844b9 to 5d8d274 on September 11, 2025 20:35
@awawa-dev
Owner Author

@satgit62 a few small improvements:

  • if the USB grabber is excluded from the build, do not create the LUT when it is not found
  • the flat_lut_lin_tables.3d file is only searched in the home folder
  • remove the video grabber and system capture component from the user interface (overview, remote) if they are not included in the build

@satgit62

@awawa-dev Hello,
Thank you for the improvements. The UI elements that were not used in webOS have finally been removed from the "Overview Components" and "Remote Control Components" pages, which should clear up any confusion.
Also, flat_lut_lin_tables.3d is now only searched for in the Home folder, not the Binary folder.
In my opinion, Automatic Tone Mapping from "Image Processing" could be removed or hidden from the webOS UI because it only applies to connected grabbers.

@awawa-dev
Owner Author

Sure, that makes sense. I added hiding this option when grabbers are disabled in the build.

Also, I found and fixed a memory leak; an update is recommended. If you are using the new updated version and you see "Set LED strip to black/power off, but the LED strip is empty. Skipped" in the logs, then you were affected.

@satgit62

> Sure, that makes sense. I added hiding this option if grabbers were disabled in build.
>
> Also I found and fixed a memory leak, update is recommended. If you are using new updated version and you see "Set LED strip to black/power off, but the LED strip is empty. Skipped" in the logs then you were affected.

Thanks, now the "Automatic Tone Mapping" option has disappeared from the "Image Processing" menu in the UI. Perfect! 👍Obviously, I wasn't affected by this memory leak, but it's good that you fixed it.
I'm looking forward to your next ideas.🙏

@awawa-dev
Owner Author

This latest commit is just cosmetic, just in case.

I think I'll slowly freeze the code for v22beta1, although it still requires a ton of testing: the changes related to the introduction of the Infinite Color Engine affect HyperHDR's core itself, so I want to make sure everything works fine. I might refactor the code a bit for C++20 and add a few minor fixes from Issues, but they'll be specific to Linux or macOS configurations. It seems like it could be released in about a month. We'll see what comes out of the tests.

@satgit62

satgit62 commented Sep 12, 2025

> This latest commit is just cosmetic, just in case.
>
> I think I'll slowly freeze the code for v22beta1, although it still requires a ton of testing: the changes related to the introduction of the Infinite Color Engine affect the HyperHDR's core itself, so I want to make sure everything works fine. I might refactor the code a bit for c++20 and add a few minor fixes from Issues, but they'll be specific to Linux or macOS configurations. It seems like it could be released in about a month. We'll see what comes out of the tests.

Yes, your decision is wise and well thought out. By then, there will be plenty of feedback from the LG webOS and Enigma2/VU+ 4K communities (these are powerful set-top boxes with an ARMv7l SoC), which I own myself, like the LG, and am constantly testing; I will share the results with you.

The expanded spectrum of your HyperHDR application thus goes beyond the known devices and their limits! 🥰

VU+4K Vu+ Duo4K

@satgit62

@awawa-dev ,
As far as I can tell, you have pushed a new commit that affects the Infinite Color Engine (stronger YUV target achievement detection limit (3)). Thank you for that.

@awawa-dev awawa-dev merged commit e2c7094 into master Sep 27, 2025
@awawa-dev
Owner Author

Important fix: #1302
