In Part Three of “The Color Project” blog post series, we began to discuss the vast amount of data collected from the NCCA color experiment at METALCON. We also looked at how each individual observer compared to the other observers. We found that about 20% fell into an “extreme” category (i.e., they were either notably less critical or notably more critical than the group), but the majority of observers (80%) were more or less in agreement.
I also mentioned in an earlier blog that part of the study included repeated showings of panel pairs to test how consistently an observer would rate a color difference. We also included 8 pairs of identical panels (i.e., take a panel, cut it in half, tape the halves together, and call it a color difference pair). Let’s first take a look at the data from the identical panels.
We used the following rating scale throughout the experiment:
5 = No color difference
4 = Extremely slight color difference
3 = Slight color difference
2 = Noticeable color difference
1 = Very noticeable color difference
This identical-pair test is not intended to point fingers at anyone, nor is it intended to embarrass anyone. The 28 observers were all reasonable people from our industry, and the following data simply reflects honest observations. That’s what a good experiment is: one that is properly constructed and in which data is objectively collected and analyzed in an unbiased fashion. One always hopes to learn from a proper experiment. There is neither “good” data nor “bad” data. There is only data.
Remember that the data from Observers 1, 2, and 3 was omitted because they were the “test testers,” so to speak. So let’s dive into the data, shown here:
Let’s start with the bottom row (outlined with a red box). These are the average ratings of all the observations of these identical-panel pairs. The average ratings range from 4.1 (extremely slight color difference) to 4.5 (somewhere between “extremely slight” and “no color difference”). Right out of the gate, I’d say our observers were reflecting the fact that these panels had no color difference.
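For anyone who wants to reproduce the bottom-row arithmetic, here is a minimal Python sketch of the per-pair averaging. The scores below are invented placeholders for illustration only, not the actual study data.

```python
# Minimal sketch of the bottom-row calculation: average each
# identical-panel pair's ratings across all observers.
# NOTE: these scores are invented placeholders, not the study data.
ratings_by_pair = {
    "Panel Pair 22 (Bright Red)": [5, 5, 4, 5, 4, 5, 5, 4],
    "Panel Pair 25 (Purple)":     [4, 5, 5, 4, 5, 4, 5, 5],
}

for pair, scores in ratings_by_pair.items():
    average = sum(scores) / len(scores)
    print(f"{pair}: average rating = {average:.1f}")
```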
Now let’s look at the green check marks. These are the observers who saw essentially no difference between the pairs. NOTE: I am lumping a rating of 4 (extremely slight color difference) together with a rating of 5 (no color difference) and calling both “no difference” (a short code sketch after the next paragraph shows this bucketing). About half of the observers saw no color difference. And of course there was no color difference. But what about the other observers?
I have highlighted in yellow all of the ratings of 3 or lower. You don’t see many yellow cells; they represent only 9% of the observations. If you look at the uppermost row, the cell colors are an approximation of the colors of the panels. (Note that Panel Pair 23 and Panel Pair 25 were repeat panels.)
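Here is a minimal sketch of both bucketing rules, again with invented placeholder scores rather than the actual data: ratings of 4 or 5 are lumped into “no difference,” and ratings of 3 or lower are the “yellow cells.” For simplicity it reports both as shares of all observations.

```python
# Minimal sketch of the two bucketing rules used above.
# NOTE: the table below is an invented placeholder, not the study data.
# Each inner list holds one observer's ratings of the identical pairs.
observations = [
    [5, 4, 5, 5, 4, 5, 4, 5],
    [4, 5, 3, 4, 5, 4, 4, 5],
    [5, 5, 4, 2, 5, 5, 4, 4],
]

flat = [score for row in observations for score in row]
no_difference = sum(1 for s in flat if s >= 4)  # ratings of 4 or 5
yellow_cells = sum(1 for s in flat if s <= 3)   # ratings of 3 or lower

print(f"'no difference' share: {no_difference / len(flat):.0%}")
print(f"'yellow cell' share:   {yellow_cells / len(flat):.0%}")
```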
I can only draw a few conclusions from this part of the color experiment. One observation is that, with just three exceptions, the two saturated colors (Panel Pair 22, Bright Red, and Panel Pair 25, Purple) were more often seen as the same color than the other colors were.
There is one spurious data point: Observer #18 rated Panel Pair 22 a “1” (very noticeable color difference). I am sure they meant to indicate a “5” (no color difference). Hey, it happens!
My overall conclusion is this: if you expect to see color differences, then you will, even when none exist. This experiment shows the phenomenon, though not to any extreme extent. It simply demonstrates human nature, and we must factor that human element into any potential conversion to a new color system, which is why we must proceed slowly and cautiously.
If you would like to get into more of the details, please feel free to contact me at ncca.cocuzzi@gmail.com. I am more than happy to offer further explanation, and I would be grateful to hear any suggestions you have to offer.
In The Color Project: Part Five, we’ll look at the repeatability data. And after that, in Part Six, we’ll examine the correlation between the observations and the color measurements. Hang in there; it keeps getting better!
David Cocuzzi
NCCA Technical Director