Single Channel Denoise with DeepSNR
Background
DeepSNR, created by Mikita Misiura, the creator of StarNet, is one of the most powerful neural network tools available to astrophotographers. It has revolutionized the way we denoise our data, providing exceptionally clean results that far exceed any other tool I have used. The only caveat is that for DeepSNR to remove noise effectively, the source image must be a 3-channel combined RGB image inside of PixInsight. This is no problem at all for broadband targets, but it is a limitation for single- or dual-channel data. This article will go over a powerful yet simple technique for working around the three-channel limitation without the need to collect additional data.
Why can’t I just convert the image to color to use DeepSNR?
Because DeepSNR was trained exclusively on color images, it expects three channels that each contain a distinct noise pattern. It uses the differences between these noise patterns to make a "best guess" at the underlying signal. When you convert a grayscale image to RGB, the same noise is simply copied into all three channels, and the result is artifacting.
But why not just add synthetic noise to my image?
Adding synthetic noise will indeed allow DeepSNR to run, but it significantly degrades the quality of the denoised result, introducing blotchiness and uncertainty into the data. Imagine the extreme case where you add a very large amount of noise: DeepSNR would no longer be able to pick the signal apart from the noise floor. With the technique described here, no additional noise is ever added to the data; we simply sample different portions of the dataset, which produces a more accurate and cleaner result.
Special thanks to @astro_che for supplying some of the data for this guide. Go check out his Instagram!
The Technique
The technique is very simple. In essence, we break the dataset behind a single-channel integration into three separate integrations, combine them into a color image, denoise, average the channels back into a grayscale image, and blend with PixelMath to achieve the desired level of noise reduction. This works because each synthetic channel keeps its own distinct noise profile while the structure 'under the noise' remains the same.
[ This process will be performed on each channel individually! ]
First, stack the data with WBPP, which will output registered images that can be separated for the next stage of integration. Produce one master integration as you normally would; you will need it later for the blending step.
Next, in the registered image folder, create three subdirectories named "Integration1", "Integration2", and "Integration3". Put approximately one third of the registered images into each folder.
For example, if I had 119 Hydrogen-alpha images, I would put 40 images into "Integration1", 40 into "Integration2", and 39 into "Integration3".
Using the ImageIntegration process in PixInsight, create three individual integrations, one from each folder. Remember to select the appropriate rejection algorithm.
It is very important to clear the previous file list every time you run ImageIntegration; integrating duplicate data with this technique will cause hallucinated stars or 'pockmarks' after denoising. [Figure 1]
Using PixelMath or ChannelCombination, create an RGB color image, assigning Integrations 1, 2, and 3 to the R, G, and B channels respectively.
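If you take the PixelMath route, a minimal sketch is shown below (the identifiers Integration1, Integration2, and Integration3 are assumed names for your three integrations; uncheck "Use a single RGB/K expression", check "Create new image", and set the output color space to RGB):
R: Integration1
G: Integration2
B: Integration3
ChannelCombination achieves the same result if you instead select the three integrations as its R, G, and B source images.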
If the image shows a dominant hue when the STF is applied, switch the stretch preview to 'unlinked' by Ctrl-clicking the nuclear button in the top right, or by disabling the chain-link icon in the STF process. The unlinked stretch should look approximately neutral in color.
Run the DeepSNR process at 1.00 power with the linear checkbox enabled.
This heavily denoised image will likely look a bit 'plasticky' and smooth. Do not worry; the original integration will be blended back in during a later step.
Next, use the following PixelMath expression to average the three color channels into a single grayscale output.
Ensure that "Create new image" is checked and that the output color space is set to "Grayscale". Apply this process to the now-denoised image.
RGB/k:
avg( $T[0], $T[1], $T[2] )
Using PixelMath once again, we will now blend the original integration back in to re-introduce some of the natural noise into the image. Open the original integration of all the data in PixInsight and rename it to "Original" so the expression below can reference it. Similarly, rename the denoised grayscale image to "Denoised". To change the amount of noise reduction, simply change the A value in the Symbols tab within the range 0 to 1.0, with 1.0 being 100% weight favoring the denoised image and 0 returning the original untouched.
To quickly iterate on the amount of noise reduction, create a preview on "Original" and run the PixelMath on that preview with "Replace Target Image" selected.
If you tune the blend on a preview, be sure to apply the final PixelMath to the primary image!
RGB/k:
Original * ~A + Denoised * A
Symbols:
A = 0.8
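Note that ~A is PixelMath's inversion operator; for values in the normalized 0 to 1.0 range it is equivalent to 1 - A, so with A = 0.8 each output pixel is 80% denoised and 20% original. As an optional shortcut (a sketch only, not a required part of the workflow), the averaging and blending steps can be folded into a single expression applied directly to the denoised RGB image, assuming the same "Original" identifier, "Create new image" checked, and a Grayscale output color space:
RGB/k:
Original * ~A + avg( $T[0], $T[1], $T[2] ) * A
Symbols:
A = 0.8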
Your image should now be denoised and ready to be processed!
Please feel free to contact me with any questions! See my Contact page for how to get in touch.