"Sharp Quattron’s fourth primary color is yellow, and there is nothing for it to do because yellow is already reproduced with mixtures of the red and green primaries, he said."
This 20-year display "expert" has never heard of hi-fidelity color? Inexcusable. It is accurate to say that yellow is not a primary color of light, so it is true enough that no one should refer to the yellow pixels as a "fourth" primary color. However, it is completely false to say that "there is nothing for it [the yellow pixels] to do". The yellow pixels are used to extend the gamut of the device to cover portions of the visible spectrum that the primary LEDs, in combination, cannot.
We do this all the time with Hi-Fi printers. We add RGB or Orange, Green, and Violet to the traditional CMYK primaries to expand the gamut of the device far beyond what traditional process color can accomplish.
Wasn't the point that yellow was a poor choice of colour to add since there is not much missing from the gamut between the red and green primaries? I'm no expert, but looking at the diagram I'd say a cyan colour would be far more gamut expanding.
Your eyes are sensitive to those areas in different amounts: while it looks like there is a lot more space to expand towards cyan, you probably wouldn't notice it as much as the extra R+G pop you're going to get with the added yellow.
The edge of the blob on that chart represents the colors that can be achieved by mixing the three chosen primaries. If you want to expand the curve outward along the top edge of the blob, you need yellow pixels with a more intense color than can be produced by mixing red and green.
What this will achieve is not just brighter yellows. The whole curve expands upward, leading to more intense colors along that entire portion of the gamut.
No, the triangle is what can be achieved by mixing the primaries. The blob is the full range of colors the eye can distinguish. I think the blob is the http://en.wikipedia.org/wiki/CIELUV_color_space but I'm not certain.
Can I suggest we look at http://en.wikipedia.org/wiki/CIE_1931_color_space#Definition... instead? There you can see that three primaries cover a large range of human colour perception. In particular yellow is well covered, but there is a big gap around cyan, hence my earlier comment that a cyan primary would seem more useful. This corresponds to the requirement for a negative red colour to reach between 440nm and 540nm. Note red is the only primary that goes significantly negative.
Also since we expect television to be displayed using three RGBish primaries, the triangle is also the only colours that are actually transmitted in any TV signal or recording.
Now, it is possible that the small gain in yellow is noticeable, and leads to a nice visual "pop" when viewing the screen, but it really isn't gaining you a lot. I'd be more interested in a cyan primary (think of the skies!) or an HDR screen with a real range of brightnesses.
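To put a rough number on that cyan gap, here is a minimal Python sketch (mine, not from the thread) that converts two spectral colors from tabulated CIE 1931 color-matching values to linear sRGB, using sRGB as a stand-in for a typical three-primary display. The wavelength table and conversion matrix are standard published values; the function name is just for illustration.

```python
# Sketch: check which spectral colors fall outside an RGB triangle.
# Uses tabulated CIE 1931 2-degree color matching function values and the
# standard XYZ -> linear sRGB matrix (sRGB chosen as a stand-in for a
# typical RGB display; a given TV's primaries will differ somewhat).

# (wavelength in nm) -> (X, Y, Z), from the CIE 1931 2-degree tables
spectral_xyz = {
    500: (0.0049, 0.3230, 0.2720),  # cyan region
    570: (0.7621, 0.9520, 0.0021),  # yellow region
}

def xyz_to_linear_srgb(x, y, z):
    """Standard IEC 61966-2-1 XYZ -> linear sRGB conversion."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

for nm, xyz in spectral_xyz.items():
    r, g, b = xyz_to_linear_srgb(*xyz)
    print(f"{nm} nm: R={r:+.3f} G={g:+.3f} B={b:+.3f}")

# Spectral cyan (500 nm) comes out with a strongly negative red component,
# i.e. it lies well outside the triangle; spectral yellow (570 nm) has R
# and G both positive, so the triangle's edge passes much closer to it.
```

Which matches the point about the negative red lobe: the cyan region is where a three-primary display misses by the most.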
> The yellow pixels are used to extend the gamut of the device to cover portions of the visible spectrum that the primary LEDs, in combination, cannot.
What color in the visible spectrum cannot be expressed as a combination of red, green, and blue? Octarine perhaps? Adding additional colors only helps if your pixels are so large that putting a red and a blue dot next to each other, when aiming for purple, ends up looking like two dots of different colors instead of one combined dot.
Your sarcasm does not become you. Neither does your ignorance. If you actually worked in the field of color management, like I do, you would know that, as with subtractive pigments, so with LEDs: there is no such thing as a pure primary in the real world. It is impossible to create an LED or a color of ink that is a mathematically perfect pure Red, Green, or Blue. Because of this, it is impossible for an RGB-only display to display all colors that are visible to the human eye. (Let's exclude UV fluorescence from the equation for the sake of simplicity.)
RGB displays use color correction curves and various profiling tricks to correct for the difference between the pure primaries and the actual light being output by the LEDs.
Yellow LEDs can extend the color range of an RGB display into areas of the gamut that the display could not otherwise "hit". The same could be said for (presumably theoretical) Cyan or Magenta LEDs.
How does the technology used to capture the video factor into that?
I would assume that with capturing only RGB you already limited the color space – the information is lost – and you cannot extend it, no matter what you try.
I know that, when printing, additional colors can help because the RGB space doesn’t map exactly to the CMYK space. There are RGB colors you just cannot get with CMYK.
Are the RGB(capture) and RGB(display) color spaces so different that additional colors can help? I would assume they would have to be, wouldn’t they?
The capture technology actually has little to do with it. First of all, yes, only so much can be captured. But no one broadcasts, in raw form, exactly what they capture. They filter and transform it in all sorts of ways, adding, replacing, and shifting color. Also, consider how much content is digitally generated, where any arbitrary color can be specified, by the numbers. There is no capture device.
HDMI 1.4 specifies color spaces that, at present, no display can fully reproduce. As time goes on, there will be more hi-fi content that can target such displays when they are invented.
Also, it is common to print in the Adobe98RGB colorspace. Hi-fi printers can do it. There is a lot of content in Adobe98. There are, however, very few displays that can display it.
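As a concrete illustration of one colorspace exceeding another, here is a small sketch using the standard published CIE xy chromaticity coordinates for the sRGB and Adobe RGB (1998) primaries. It tests whether the Adobe RGB green primary falls inside the sRGB triangle; `inside_triangle` is a hypothetical helper of mine, not an established API.

```python
# Sketch: test whether the Adobe RGB (1998) green primary lies inside the
# sRGB gamut triangle, using the standard CIE xy chromaticity coordinates
# of each. Point-in-triangle test via the sign of 2D cross products.

SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B chromaticities
ADOBE_GREEN = (0.21, 0.71)  # Adobe RGB (1998) green primary

def inside_triangle(p, tri):
    """True if point p lies inside triangle tri (all cross products same sign)."""
    signs = []
    for i in range(3):
        ax, ay = tri[i]
        bx, by = tri[(i + 1) % 3]
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        signs.append(cross > 0)
    return all(signs) or not any(signs)

print(inside_triangle(ADOBE_GREEN, SRGB))   # the Adobe green primary: outside
print(inside_triangle((0.31, 0.33), SRGB))  # roughly the white point: inside
```

The Adobe green primary lands outside the sRGB triangle, which is exactly why Adobe98 content contains colors that most displays cannot show.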
No, you cannot get back the lost information, although you can artificially guess and produce something that looks better in some situations.
But just because you recorded in one RGB space doesn't mean the particular display you are using can reproduce that exact RGB space. Take any dozen RGB flat-panel displays and see if their color gamuts match up exactly; they probably don't. So while I'm ignorant of whatever the yellow is trying to achieve here, whether it's just fluff or actually filling out a deficiency in their LCD's gamut, either way the display with the added yellow pixels can produce colours that their previous displays could not.
I have no doubts about its ability to display a wider range of colors. It’s just that after attending a few lectures about data transmission I rather got the feeling that those responsible for designing the compression and coding would rather quit and sail the South Pacific than transmit stuff that’s not actually used.
That might just have been a by-product of all the compression and coding I learned about being really old (PAL, NTSC); newer stuff might be a lot more forward-looking (as teilo suggested).
You should not call other people ignorant. Your explanation is very confused.
You are confusing subtractive color mixing (as in printing) with additive color mixing, which is roughly what happens in LED displays. LED displays are not purely additive, but they are definitely not subtractive in any way.
Honestly I have to admit I am not certain why the Sharp display uses yellow pixels, but it has nothing to do with what happens in printers, as they operate on quite different principles.
My explanation may be confusing to you, but it is certainly not confused.
I did not confuse additive and subtractive primaries, and I clearly know the difference. It is my job to know the difference. I have profiled everything from litho presses, to numerous ink jets, displays of every kind, scanners, cameras, and light-jet photographic emulsion printers.
It just so happens that the same essential principles apply to either. Both reflective and emissive color gamut can be expanded by adding non-primary colors. People tend to understand ink on paper better than they understand light. I used it as a point of comparison.
Furthermore, color printing is no more "mixing" inks than display technology is "mixing" LEDs, LCD cells, or CRT phosphors. Discrete cells. Discrete dots. Same principle.
It would be the same principle, if color printing were only about placing discrete colored dots next to each other in order to form new colors, but that is not usually the case. Color printing usually involves placing different color inks over each other, and that makes everything different.
In any case, if you are curious, look up subtractive color, additive color, and color mixing on Wikipedia.
In short, the "cowlick" is the range of human color perception, and no triangle made of linear-combinations of three visible primaries can cover the entire cowlick.
I'm not familiar with the details of Sharp's technology. And certainly it is true that display manufacturers create wholesale fabrications of supposed technical improvements in order to sell displays. But theoretically, adding additional dedicated colors to a display would increase the gamut (the field of colors a display can present, as represented by a gamut chart: http://www.pcmonitors.org/wp-content/uploads/2009/12/colour-... ). In a perfect world, a red, green, and blue color combination would be capable of presenting every possible color. This not being a perfect world, the presentation of color is limited, sometimes dramatically, across displays and display manufacturers (if this were not the case, all displays would be perfectly equal, excepting white and black levels).
Whether Sharp has implemented their yellow in a way that increases gamut is a separate question I cannot answer.
Phil didn't point out that pupil dilation also is a factor (presumably for the sake of simplicity). You might be able to resolve two points in one lighting condition but not in another.
50 cycles per degree is not the same as 50 pixels per degree.
If your eye can resolve 50 cycles per degree, that means it can tell the difference between a uniform grey and something that alternates between black and white 50 times per degree. To display such a pattern, you'd need 100 pixels per degree (black, white, black, white, ..., with 50 pairs).
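The doubling can be sketched numerically. This is a minimal illustration of the arithmetic above (the function name is mine), converting cycles per degree into the pixels per inch required at a given viewing distance.

```python
# Sketch of the cycles-vs-pixels arithmetic: one black/white cycle needs
# two pixels, so 50 cycles per degree of acuity implies 100 pixels per
# degree, and the required pixels-per-inch depends on viewing distance.
import math

def required_ppi(cycles_per_degree, distance_inches):
    """Pixels per inch needed so that one pixel covers half a cycle."""
    pixels_per_degree = 2 * cycles_per_degree      # Nyquist: 2 px per cycle
    pixel_angle_deg = 1.0 / pixels_per_degree      # angle subtended per pixel
    pixel_size = distance_inches * math.tan(math.radians(pixel_angle_deg))
    return 1.0 / pixel_size

print(round(required_ppi(50, 12)))  # at 12 inches: ~477 ppi
print(round(required_ppi(50, 18)))  # at 18 inches: ~318 ppi
```

So at a 12-inch viewing distance, 50 cycles per degree works out to roughly 477 pixels per inch, comfortably above the iPhone 4's 326.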
I don't think it's entirely clear from the article what 'cycle' should mean in this context. But 1/50 of a degree matches relatively closely to the traditional Snellen (as in the guy who made the eye charts) definition of normal vision (20/20, or 6/6 in metric countries http://en.wikipedia.org/wiki/Visual_acuity#Normal_vision) as being able to discern letters whose features are 1 minute of arc (i.e. 1/60 of a degree). At 12 inches, the angle subtended by a pixel (which is, I think, the corresponding minimum feature size) is arcsine((1 in / 326) / 12 in) = 0.88 minutes (that is, less than the 1-minute Snellen definition of normal vision, which is in turn smaller than the 1/50 definition that this guy gives).
So I think this Soneira guy is off base. But I think the much bigger problem is that both Soneira and the 1-minute definition are talking about the acuity of a 'normal' person, and seem to be largely ignoring the significant variation in the population. My question is: for what portion of the population does this display exceed the limits of their visual acuity?
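For what it's worth, the 0.88-minute figure checks out; a quick sketch (using arctan, which is nearly identical to arcsin at angles this small):

```python
# Check of the arithmetic above: the angle subtended by one 326-ppi pixel
# at 12 inches, expressed in arc-minutes.
import math

pixel_inches = 1.0 / 326
angle_arcmin = math.degrees(math.atan(pixel_inches / 12)) * 60
print(round(angle_arcmin, 2))  # ~0.88 arc-minutes
```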
The definition in that Wikipedia article says that 20/20 vision means being able to resolve two points separated by one arc-minute. Again, on the output side that would mean being able to display two points at that separation with something contrasting in between.
On the other hand, elsewhere Wikipedia says that it means being able to distinguish Snellen optotypes whose total size is one arc-minute. On the face of it, that implies more resolution than two pixels per arc-minute. (Hand-wavily, maybe it's about the same: if you can resolve 2x2 pixels in a square of side 1 arcminute, then for high-contrast Snellen-type images that maybe suggests that you ought to be able to distinguish about 2^(2x2) = 16 different such images, and a Snellen chart actually uses 10 or 12 different optotypes, depending on whether it's one of Snellen's original ones or a modern variant. That's pretty close to 16.)
Phil Plait's blog entry that someone else linked to talks about the difference between "normal" and "ideal", and concludes that an average person looking at a new iPhone at 1' distance will indeed not be able to resolve the pixels (though not by much).
Actually, wrt the Snellen optotypes, the entire optotype subtends 5 minutes (I couldn't find the 'elsewhere' you're talking about), but distinguishing them requires that you be able to resolve features 1 minute in size. In fact, on Snellen charts, the letters are carefully designed to reflect this. For instance, on the 'E', the width of each bar of the 'E' is equal to the width of the white space between bars. http://en.wikipedia.org/wiki/Snellen_chart#.2220.2F20.22_.28...
So yeah, if you drew out a tiny "E" (or "P" or "F", I don't think it matters) on a 326 ppi screen a bit more than 12 inches from the viewer's eyes, where the width of each bar was 1 pixel and the space between bars was 1 pixel, then I think that would match up closely with the standard for normal vision, at least in terms of visual acuity for one eye.
Actually, now I can't find the "elsewhere" either and I wonder whether I misread. Having looked again, I agree with you: standard-according-to-Snellen vision means being able to resolve features corresponding to (e.g.) an "E" on a 5x5 pixel grid with pixels of size 1 arc-minute.
Matching that up with the resolution of a display device is still a bit subtle. For instance, suppose you're trying to display an "E" of that size on the display, but it's offset by half a pixel vertically. Result: you get a grey rectangle that's a bit darker along the left edge. :-)
(I think my conclusion from all this is: what Apple are claiming about the iPhone 4 display is about as close to the truth as it's reasonable to expect in marketing materials. That is: everything they've said is at least defensible, but they've put a very positive spin on everything. Seems fair enough to me. And as a pixel-freak who isn't currently a smartphone user, I'm awfully tempted by the new iPhone...)
Don't want to be a pedant here, but your calculation approximates quite a lot:
Degrees subtended by one inch, 12" away: 2 × arctan(1/24) = 4.77
"Data points" in the eye for one inch, 12" away: 4.77 × 50 × 2 = 477 (the magic number cited in the article)
Why the last multiplication by 2? The eye doesn't see pixels, it sees color differences, which means that the limit (not the minimum) is when two of those pixels cover one "cycle": one color change per cycle (in any case, that's how I understood the article).
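Spelling that arithmetic out, as a quick sketch of the numbers above:

```python
# Working through the numbers: an inch seen from 12 inches subtends
# 2 * arctan(0.5 / 12) degrees; multiplying by 50 cycles/degree and
# 2 pixels/cycle gives the pixel density at the eye's limit.
import math

deg_per_inch = 2 * math.degrees(math.atan(1 / 24))  # ~4.77 degrees
ppi_limit = deg_per_inch * 50 * 2                   # ~477 pixels per inch
print(round(deg_per_inch, 2), round(ppi_limit))
```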
First of all, iPhone 4's 300 "pixels" per inch actually comprise three color sub-pixels each. Most display drivers today can use the color sub-pixels to carry spatial as well as color information, so that would bump the performance of this display comfortably over the detection threshold of the human eye.
Second, the throw-away comment that "magazines are printed at 300 dots per inch" is incorrect and misleading. Dots are not the same as pixels: a printed dot is either "on" or "off". A magazine printed at 300 dpi has a substantially lower actual resolution than a 300 pixel-per-inch display that can render 18-24 bits of color at each pixel.
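One way to see the dots-vs-pixels gap is classic halftoning arithmetic: a binary device groups its on/off dots into cells to fake gray levels, trading away spatial resolution in the process. A minimal sketch (the function name and cell sizes are just illustrative):

```python
# Sketch of why binary dots are not pixels: a halftoning device renders
# gray by clustering dots. An n x n cell of on/off dots can produce
# n*n + 1 gray levels, but the effective resolution drops to dpi / n.
def halftone_tradeoff(dpi, cell):
    gray_levels = cell * cell + 1
    effective_lpi = dpi / cell  # halftone cells ("lines") per inch
    return gray_levels, effective_lpi

print(halftone_tradeoff(300, 4))    # (17, 75.0): 17 grays at 75 lpi
print(halftone_tradeoff(2400, 16))  # (257, 150.0): 257 grays at 150 lpi
```

So a 300 dpi binary device using 4x4 cells delivers only 75 effective lines per inch with 17 gray levels, nowhere near a 300 ppi continuous-tone display.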
Finally, as others have mentioned, the slamming of Sharp's Quattron technology is wrong-headed. Adding a yellow pixel can significantly increase the color gamut of an LCD display, since in many cases the color resists used to make the RGB color filters do a particularly poor job of producing yellows.
It will likely cause purchases of the device that would not otherwise happen.
In the US, advertising is required to be truthful, or at worst to make only vague claims. Those images are neither.
Now, I honestly believe Jobs misunderstood the science, as he's a suit, not an eye doctor, but now that they've been corrected, they shouldn't repeat the claim until they can show it's true.