"Color" in 3D engine means just a 3-component vector. It's not meaningless to multiply those component-wise at all. Nobody is multiplying the color of an apple by the color of a dog if it's what you meant.
If it has meaning, then it should be easy to define what it means.
In fact, multiplying two RGB colors is about as meaningful as multiplying the color of an apple by the color of a dog. It's certainly not rooted in physics.
It is, and it's defined in my previous comment - it's the per-component product, i.e. {a,b,c} * {x,y,z} = {ax,by,cz}. It's used to compute the attenuation of light separately for different bands. And attenuation has a clear and obvious meaning - it's the portion of light energy that passes through an interaction between light and a medium.
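To put it in code, a minimal sketch (illustrative names, not any particular engine's API) of that per-component attenuation:

    # Minimal sketch of per-component attenuation (illustrative, not any
    # particular engine's API). A "color" here is just a triple of band energies.

    def attenuate(incoming, attenuation):
        # Component-wise product: the fraction of energy in each band that
        # survives the interaction with the surface or medium.
        return tuple(i * a for i, a in zip(incoming, attenuation))

    # Light carrying (1.0, 0.8, 0.6) units of energy in three bands hits a surface
    # that passes 90% of band 0, 50% of band 1 and 10% of band 2:
    print(attenuate((1.0, 0.8, 0.6), (0.9, 0.5, 0.1)))  # -> roughly (0.9, 0.4, 0.06)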
And yet, it has no basis in reality. Point out the physics equivalent of multiplying two colors.
Those bands are arbitrary, defined by human color receptors. If we were dogs, would we be arguing that lighting should be defined as {a,b} * {x,y} = {ax,by}? Dogs can only see blue and yellow. So what's special about r, g, b, and why do we multiply them together?
There's nothing special about them, and it's arbitrary and misleading.
Are you arguing that the renderer should model a continuous spectrum instead of RGB?
Some can, but it’s obviously much more computationally intensive. Outside of some niches, it’s also not particularly useful because the final product will need to be in RGB anyway if it’s to be displayed on a monitor or TV. Even if you were to directly inject the renderer’s output into the retina, there are lots of wavelength distributions that will produce identical percepts, so it’s not a clear win even there.
It's actually not that much more computationally intensive, even doing texture RGB -> spectral wavelength conversion for each BSDF / light evaluation.
You do get a fair bit more spectral noise in the image, but this can be mitigated by tracing multiple wavelengths at once with SIMD and importance-sampling the wavelengths to use (hero wavelength sampling).
Where it does actually make a lot of sense over RGB triplets is for volume scattering / absorption (so more accurate hair and skin rendering, for example) - calculating the mean free path with wavelengths is much more accurate than using an RGB triplet, which is an approximation. However, this means that to enjoy this benefit you need to store the volume coefficients in wavelength bins (instead of RGB values), which uses quite a bit more memory.
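To illustrate the volume-absorption point, a rough sketch (the coefficients below are invented, not from any real material): transmittance through a medium follows Beer-Lambert, exp(-sigma * d), and a spectral renderer can evaluate that per wavelength instead of per RGB channel.

    import math

    # Sketch of wavelength-dependent volume absorption (coefficients are made up,
    # not measured). A spectral renderer evaluates Beer-Lambert transmittance per
    # wavelength; an RGB renderer first collapses the curve into three numbers,
    # which is where the approximation error comes from.

    sigma_a = {450: 0.02, 500: 0.05, 550: 0.12, 600: 0.30, 650: 0.55}  # 1/mm, hypothetical

    def transmittance(sigma, distance_mm):
        # Beer-Lambert: fraction of light surviving a straight path of this length.
        return math.exp(-sigma * distance_mm)

    for wavelength, sigma in sigma_a.items():
        print(wavelength, round(transmittance(sigma, 5.0), 3))

    # The RGB approximation would instead bake these per-wavelength coefficients
    # into three channel values and apply exp(-sigma_rgb * d) per channel.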
Again, I just pointed this out in the previous message. It's attenuation of light. If you have a surface which absorbs half of the light, then you multiply whatever light energy came in by 0.5 to get the light energy that escapes the surface.
>Those bands are arbitrary, defined by human color receptors.
Sure. And if we had monochrome vision then we could have used just scalars for light. If you are arguing against the 3-component color model, then you should not have been baselessly attacking such an uncontroversial thing as attenuation. There are engines which use more points to represent spectra, yet they still use multiplication.
This is mistaken. A correct renderer would look correct to whatever creature was looking at it. If we had monochrome vision, we couldn't simply use scalars because it wouldn't look right for those who could see more.
And how could it be otherwise? Real life looks correct to any creature that looks at it. And our renderers supposedly model real life.
This is a contradiction, and rather than trying to disprove what I'm saying, it's worth chasing down the logical fallacy.
> A correct renderer would look correct to whatever creature was looking at it.
Impossible. RGB is just a 3-bin spectrum. It's the minimum number of bins needed to represent any color humans can perceive because we have 3 color sensors that are at least partly independent.
You could always imagine a hypothetical creature with yet more sensors, until you need a bin for every possible wavelength of light. Eventually you'd effectively be simulating individual photons. The world has too many atoms for us to simulate it at that level.
We sometimes use larger spectra than RGB because, while humans may not always be able to perceive the difference between two different spectra of equivalent color, those spectra sometimes interact differently with other things.
For example, white light formed by the full spectrum of visible light will separate into a rainbow when passed through a prism. White light formed by red, green and blue lasers may be indistinguishable when viewed directly, but will only separate into red, green and blue when passed through a prism.
The number of bins we use for that sort of simulation depends on how many we need to get a result that is a satisfactory approximation of reality. All models are wrong. The question is always whether they are good enough for our purposes.
>This is mistaken. A correct renderer would look correct to whatever creature was looking at it.
I am not discussing "correctness", whatever that means; I am arguing against your assertion that "multiplying colors makes no sense". Are we done with that?
You can come up with whatever mathematical definitions you want. When I say "It makes no sense," I'm speaking very precisely: There is nothing in nature which corresponds to the idea of multiplying two RGB colors together.
In that context, it makes no sense to multiply two colors together.
There is. There is energy attenuation at different frequencies. Look at an object of any color under artificial light, then take it outside. Now the light hitting the object's surface has orders of magnitude more energy, yet you still see it as the same color. That's because its surface attenuates the incoming light the same way, and your brain detects that attenuation as "color". Speaking of which (the brain), "color" does not exist in nature at all; it's just an effect of the human brain analyzing the image projected onto the retina. So let's agree that there is nothing in nature which corresponds to the human concept of color.
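A trivial numeric sketch of that point (arbitrary numbers): scale the incident energy by any factor you like, and the per-band attenuation recoverable from the reflected light stays the same - that's the stable quantity the brain reads as the object's color.

    # Arbitrary numbers: the per-band attenuation recovered from reflected light
    # does not depend on how much energy the illuminant delivers.

    albedo = (0.9, 0.5, 0.1)  # fraction of energy the surface passes in each band

    for scale in (1.0, 1000.0):  # dim indoor light vs. bright sunlight
        incident = (scale, scale, scale)
        reflected = tuple(i * a for i, a in zip(incident, albedo))
        recovered = tuple(r / i for r, i in zip(reflected, incident))
        print(scale, recovered)  # -> (0.9, 0.5, 0.1) in both cases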
Simply that our monitors reproduce images using red, green and blue light.
A PBR engine models the entire process of light from a scene falling into a lens, measuring the light that hits the imaginary rectangles that are the pixels of the output image.
A PBR engine can model the light internally as a continuous spectrum using Monte Carlo, integrating paths of weighted-random wavelengths at each pixel.
You could stop here if you just wanted physical accuracy; you have the spectral data. But if you want to see it, you'll have to get it onto a computer monitor somehow.
So, you model a photo-sensitive medium using an exposure function, and apply colour-space transforms to turn the spectrum into just three RGB values, in a way that has been measured empirically to be perceptually close to the original spectrum (look up how we got the CIE standards).
This is all science so far.
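For illustration, that spectrum-to-RGB step could look roughly like this. This is only a sketch: the matching functions below are crude Gaussian stand-ins for the real tabulated CIE 1931 curves, and the matrix is the usual XYZ-to-linear-sRGB one.

    import math

    # Sketch of spectrum -> XYZ -> linear RGB. The matching functions are crude
    # Gaussian stand-ins for the tabulated CIE 1931 observer curves; a real
    # renderer would use the measured tables.

    def gaussian(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def matching_functions(wl):
        # Very rough approximations of the x-bar, y-bar, z-bar curves.
        x = 1.06 * gaussian(wl, 599, 38) + 0.36 * gaussian(wl, 446, 19)
        y = 1.01 * gaussian(wl, 557, 47)
        z = 1.78 * gaussian(wl, 449, 22)
        return x, y, z

    def spectrum_to_xyz(spectrum, wl_start=380, wl_end=730, step=5):
        # Integrate a spectral power distribution against the matching functions.
        X = Y = Z = 0.0
        for wl in range(wl_start, wl_end + 1, step):
            power = spectrum(wl)
            x, y, z = matching_functions(wl)
            X += power * x * step
            Y += power * y * step
            Z += power * z * step
        return X, Y, Z

    def xyz_to_linear_srgb(X, Y, Z):
        # Standard XYZ -> linear sRGB matrix (D65 white point), approximate values.
        r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
        g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
        b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
        return r, g, b

    # Example: a flat (equal-energy) spectrum, printed as un-normalised linear RGB.
    print(xyz_to_linear_srgb(*spectrum_to_xyz(lambda wl: 1.0)))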
If you don't care about the full spectrum but only the RGB (or why not CMYK) result, there are some optimisations and approximations you can do because in most cases you don't need to simulate the full spectrum.
Did you ever stop to think that RGB screens might look weird to dogs as well? However, if you somehow built a dog-monitor with only blue and yellow, you could optimize even further. It would be a lot harder to get accurate colour-space data like CIE from dogs, though.
If you wanted to simulate a black-and-white photograph, it would be easier again.
> and why do we multiply them together?
I actually don't think we do, in modern PBR engines.
Multiplying two colours together is done in simpler 3D engines because it models reflection; one RGB triplet represents an approximation of the light spectrum, and the other represents the reflective properties of the material. In everyday language we use the word "colour" to mean both, but they are actually of a different "type", if you will: we say "colour" for both the additive type (light) and the subtractive type (material).
What we model is the transformation of our photon path (additive) by the material of the surface it hits (subtractive). In the simpler model this happens to equate to an element-wise multiplication (not "just" a multiplication).
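In a simple engine, that reflection step might look something like this (an illustrative Lambertian shade, not any particular engine's code):

    # Illustrative Lambertian shading in a simple (non-PBR) engine: the "additive"
    # light colour and the "subtractive" material colour are combined with an
    # element-wise product, scaled by the usual cosine factor.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def shade_lambert(light_rgb, albedo_rgb, normal, light_dir):
        # Element-wise product of light and albedo, weighted by max(0, N.L).
        cos_theta = max(0.0, dot(normal, light_dir))
        return tuple(l * a * cos_theta for l, a in zip(light_rgb, albedo_rgb))

    # A warm light hitting a bluish surface at 60 degrees (cosine = 0.5):
    print(shade_lambert((1.0, 0.9, 0.7), (0.2, 0.3, 0.8), (0.0, 1.0, 0.0), (0.0, 0.5, 0.866)))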
However, in modern PBR engines, the reflection operator is a lot more complex than that. The material is no longer simply represented by an RGB triplet, and the operation between the photon packet and the material isn't a multiplication.
The correct physical interpretation can be found by setting up the model and computing the integrals; it's probably not "neutral" light absorption but a variant of it.