Parts One and Two of this series of posts on NCCA’s “The Color Project” discussed why we needed to run a visual assessment experiment and how we structured the study. You may recall that we created 54 panel pairs, and within this set there were 15 repeats (i.e., pairs that were shown to the observers—unbeknownst to them—a second time to see how closely they would rate the pairs), as well as 8 pairs of identical panels (i.e., take a panel, cut it in half, tape the halves together, and call it a color difference pair). I also mentioned the tedium of collecting data for 13 solid hours. And lastly, I teased you with the promise of revealing data here in Part Three. So, without further ado, let’s dive in. But first, let’s discuss the visual observations. We’ll talk color data later.
Okay, 28 people looking at 54 panel pairs. That’s 1,512 data points. Let’s see how an individual observer’s ratings compare to the average ratings from the other 27 observers. We used the following rating scale:
5 = No color difference
4 = Extremely slight color difference
3 = Slight color difference
2 = Noticeable color difference
1 = Very noticeable color difference
Here is a snapshot of the data collection table. (Please don’t dwell on the details. Just know that a lot of information was collected.):
If you are wondering what happened to the data for observers 1, 2 and 3: I was observer 1 and tested myself. Of course, I knew the answers, and therefore my answers are loaded with confirmation bias. Observers 2 and 3 were two folks, not from our industry, whom I badgered into running through the panel pairs just so I could assess how smoothly the process would work. Since there was no emphasis on discerning color differences, hardly any color differences were noticed.
For our first analysis, let’s look (above) at Observer #4, for Panel Pair #92. You will see that Observer #4 rated Pair #92 as “4” (extremely slight color difference). But if you look at the value in the Average Ratings column for Pair #92, you will see that the average value for all 28 observers was 2.96 (slight color difference). The group average, therefore, was “slight color difference,” but Observer #4 saw only an extremely slight color difference. Is this a big deal? Let’s find out.
We’ll make the same comparison as we did above, but for all observers for all panel pairs (1,511 more data points), and you get this:
… which is hardly helpful.
Let’s look at it this way.
There is a lot going on in this chart, but it is actually rather easily explained:
The Observer ID runs across the X-axis.
The Y-axis represents the sum of each observer’s deviations from the average rating, across all 54 pairs of panels. For example, in our first comparison above, Observer #4 rated Pair #92 a 4 against an average of 2.96, or 1.04 points higher than average. For Pair #18, Observer #4 rated a 4 against an average of 4.63, or 0.63 points lower than average. Add up Observer #4’s deviations for all 54 pairs and that’s the value that appears for Observer #4 on the Y-axis.
Then we do the same kind of comparison for the other 27 observers to create the graph above.
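For readers who like to see the arithmetic spelled out, here is a minimal sketch of the deviation-sum calculation in Python. The observer names and ratings below are made up for illustration (a handful of observers and pairs rather than 28 and 54); the method is the same: average each pair’s ratings across observers, then sum each observer’s signed deviations from those averages.

```python
# Hypothetical ratings: rows = observers, columns = panel pairs.
# Scale: 5 = no color difference ... 1 = very noticeable color difference.
ratings = {
    "obs_4": [4, 4, 3],
    "obs_5": [3, 5, 2],
    "obs_6": [2, 4, 3],
    "obs_7": [3, 4, 4],
}

n_pairs = len(next(iter(ratings.values())))

# Mean rating for each panel pair across all observers.
means = [
    sum(obs[i] for obs in ratings.values()) / len(ratings)
    for i in range(n_pairs)
]

# Signed deviation sum per observer: positive = more generous
# than the group average, negative = more critical.
deviation_sums = {
    name: sum(r - m for r, m in zip(obs, means))
    for name, obs in ratings.items()
}

print(deviation_sums)
```

Because every observer’s ratings are included in the pair averages, the deviation sums across all observers total zero; the interesting information is which observers sit far above or below that zero line.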
You will see that four individuals seemed rather generous in their ratings (i.e., on average, they rated the panels much better than the mean).
There were just two individuals that were substantially tougher than the average.
You can also see which of the observers were color blind.
Although there appears to be quite a bit of variability between the observers’ ratings, the blue arrows to the right of the graph indicate that there is not much more than one-half of a rating point difference from the mean. For example, if the mean was 3 (slight color difference), a half-point higher would fall somewhere between “slight” and “extremely slight” color difference. I concluded that about 20% of the observers would be considered in the “extremes” (i.e., too generous or too critical). The other 80% of the observers see color quite similarly, but not identically, and this is the real issue when it comes to color. We all see color differently, albeit only slightly in this experiment. However, “slightly” could be the difference that makes a consumer reject a color. And that’s a condition we must avoid.
If you would like to get into more of the details, please feel free to contact me at ncca.cocuzzi@gmail.com. I am more than happy to offer further explanation, and I would be grateful to hear any suggestions you have to offer.
In the next post, we’ll discuss the results from the observers’ ratings of the identical panels. Shouldn’t be a big deal … or should it? Part Four of this series will reveal the answer.
David Cocuzzi
NCCA Technical Director