Efficient application of panel monitoring can improve the value of a sensory panel's results

Highly trained sensory panels are expected to work with machine-like proficiency. To ensure this, the performance of the panel as a whole and of its individual members should be checked regularly. Dr. Ciarán Forde from CSIRO, the Australian member of the European Sensory Network, discusses how this can be realized.


Dr. Ciarán Forde, CSIRO Food Science Australia


What are typical problems that may reduce a sensory panel's performance?

When a sensory panel has been well trained, its members can function collectively with instrumental precision to identify both quantitative and qualitative perceptual differences between a set of products. Poor agreement between assessors is often the cause of poor panel performance. This may occur, for example, when a panel does not agree on the precise meaning of the semantic label chosen to describe a perceived sensory difference. If we consider the sensory vocabulary to be an instrument for rating product sensory differences, then working with a poorly understood vocabulary term is similar to trying to play an instrument that is out of tune! In such cases the trained panel may suggest attribute reference standards to help clarify the specific perceptual characteristics of troublesome attributes and thus help gain consensus.

Even when all attributes are well understood, it is still possible to have poor discrimination and reproducibility at an individual and group level as panel members struggle to make the same judgement across successive attribute ratings. Panel monitoring and feedback at individual and group levels can encourage confidence among panel members and will help improve the consistency of their judgements.

How often should such tests be performed?

It is good practice to apply panel performance analyses every time data is collected, since a badly performing panel costs time and money. Some simple tests can quickly tell you whether the panel is performing consistently and whether there is good agreement among panel members. Panel performance monitoring can be successfully applied during training to identify (i) poorly performing assessors, (ii) attributes that do not discriminate, and (iii) areas of the vocabulary that are not well understood or agreed on by the trained panel.

Diagnosis of these issues early in a panel’s training enables the panel leader to deliver targeted feedback on performance and to focus the training on the identified problems. It is important to be consistent in the approach taken to measuring panel performance, so that over time the results are comparable and it becomes easy to identify what constitutes good and poor performance.

What are the most important dimensions of panel performance that should be tested?

The most important aspects of panel performance are repeatability, reproducibility, and discrimination. When we look at repeatability, we check whether an assessor gives the same score to an attribute of a given product each time that product is presented. We understand reproducibility to mean consistency across the entire panel: it measures whether all members of a panel detect and agree on the same sensory differences in a product set. The third index, discrimination, is best described as the ability of a panel to tell the difference between samples based on the sensory attributes being rated.
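To make the first two measures concrete, the following is a minimal sketch of how repeatability and reproducibility might be approximated from raw ratings in Python with pandas; discrimination is usually assessed with an analysis of variance, as in the sketch later in this interview. The long-format layout, the column names ('assessor', 'product', 'replicate', 'attribute', 'rating') and the file name are assumptions for illustration only, and dedicated panel software computes these indices more formally.

```python
import pandas as pd

# Hypothetical long-format panel data: one row per rating, with the columns
# 'assessor', 'product', 'replicate', 'attribute' and 'rating'.
# File name and column names are assumptions, not a standard.
df = pd.read_csv("panel_ratings.csv")

# Repeatability: how much one assessor's score for the same product and
# attribute varies across replicates (a smaller spread is better).
repeatability = (
    df.groupby(["assessor", "attribute", "product"])["rating"]
      .std()
      .groupby(level=["assessor", "attribute"])
      .mean()
      .rename("mean_replicate_sd")
)

# Reproducibility (rough proxy): how far each assessor's product means sit
# from the panel consensus for the same product and attribute.
assessor_means = (
    df.groupby(["assessor", "attribute", "product"], as_index=False)["rating"]
      .mean()
      .rename(columns={"rating": "assessor_mean"})
)
panel_means = (
    df.groupby(["attribute", "product"], as_index=False)["rating"]
      .mean()
      .rename(columns={"rating": "panel_mean"})
)
merged = assessor_means.merge(panel_means, on=["attribute", "product"])
merged["abs_dev"] = (merged["assessor_mean"] - merged["panel_mean"]).abs()
reproducibility = merged.groupby(["assessor", "attribute"])["abs_dev"].mean()

print(repeatability.sort_values(ascending=False).head())   # least repeatable first
print(reproducibility.sort_values(ascending=False).head()) # furthest from consensus
```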

In addition to monitoring a panel’s rating ability, it is also important to consider how complete a sensory vocabulary is when describing a set of products. To ensure the product set is fully profiled, the sensory scientist should consider whether all of a product’s attributes that the trained panel is able to perceive are represented in the vocabulary.

Which methods do you prefer for testing sensory panel performance?

There are a number of good approaches that can be used to identify aspects of poor performance, both at the level of the individual assessor and across the whole panel. Where possible, it is best to plot the data to identify outliers, variations, and discrepancies in the data set. In its simplest form, this can involve a profile plot: for each attribute, all assessor ratings across a series of samples are drawn as a simple two-dimensional line plot, which makes it instantly clear where panel members disagree.
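As an illustration of the idea rather than any particular software's output, a minimal profile plot could be drawn in Python with pandas and matplotlib as below; the long-format column names and the example attribute are hypothetical.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical long-format data: columns 'assessor', 'product', 'attribute', 'rating'.
df = pd.read_csv("panel_ratings.csv")

attribute = "bitterness"          # example attribute to inspect (assumed name)
sub = df[df["attribute"] == attribute]

# One line per assessor: products on the x-axis, that assessor's mean ratings
# on the y-axis. Lines that cross or diverge strongly flag disagreement.
fig, ax = plt.subplots()
for assessor, grp in sub.groupby("assessor"):
    means = grp.groupby("product")["rating"].mean().sort_index()
    ax.plot(means.index.astype(str), means.values, marker="o", label=str(assessor))

ax.set_xlabel("Product")
ax.set_ylabel(f"Rating: {attribute}")
ax.set_title("Profile plot (one line per assessor)")
ax.legend(title="Assessor", fontsize="small")
plt.tight_layout()
plt.show()
```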

Univariate statistical analyses (such as a two-way ANOVA with assessor and product as factors) are an effective way to avoid spurious results and the Type I errors that come from interpreting false positives. Plotting each assessor's p-value from the between-product analysis of variance against the corresponding mean squared error is also an effective way of visualising the differences between individual assessors. This “p vs. MSE” plot is a quick way to represent each panel member's discrimination between products against the error associated with their ratings, and it allows assessors to be examined individually. Univariate calculations also allow examination of correlations between assessors and of rank-order agreement across assessors for specific attributes and products. Descriptive sensory data, collected across a number of attributes, products, individuals and replicates, are multidimensional and thus challenging to visualise.
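One rough way to construct such a p vs. MSE plot is a per-assessor one-way ANOVA on product for each attribute, as sketched below with statsmodels; the exact model used in practice may differ (for example in how replicates or interactions are handled), and the data layout and column names are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long format: 'assessor', 'product', 'replicate', 'attribute', 'rating'.
# Replicated ratings are needed so that a residual error term can be estimated.
df = pd.read_csv("panel_ratings.csv")

rows = []
for (assessor, attribute), grp in df.groupby(["assessor", "attribute"]):
    # One-way ANOVA of rating on product for this assessor and attribute:
    # the product-effect p-value reflects discrimination, the residual MSE
    # reflects how noisy the assessor's replicate ratings are.
    model = smf.ols("rating ~ C(product)", data=grp).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    rows.append({
        "assessor": assessor,
        "attribute": attribute,
        "p_value": anova.loc["C(product)", "PR(>F)"],
        "mse": model.mse_resid,
    })
perf = pd.DataFrame(rows)

# p vs. MSE plot: good performers sit in the lower-left corner
# (small p-value = discriminates, small MSE = consistent).
fig, ax = plt.subplots()
ax.scatter(perf["p_value"], perf["mse"])
for _, r in perf.iterrows():
    ax.annotate(f'{r["assessor"]}/{r["attribute"]}', (r["p_value"], r["mse"]), fontsize=7)
ax.axvline(0.05, linestyle="--")   # conventional significance threshold
ax.set_xlabel("p-value (product effect)")
ax.set_ylabel("Mean squared error")
plt.tight_layout()
plt.show()
```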

Multivariate statistics enable a quick diagnosis of panel performance by comparing the panel’s ratings of all attributes and products. Techniques such as Principal Component Analysis (PCA) and Generalised Procrustes Analysis (GPA) can be used to quickly see which assessors disagree with the panel consensus, which products are most different, and which attributes are poorly understood. Plotting the correlation loadings of assessor ratings in a PCA is a quick and useful way of investigating the degree of assessor agreement. We use the PanelCheck software to do this, as it allows us to quickly examine performance with easy-to-interpret plots such as the Tucker-1 plot, which gives a quick evaluation of assessor agreement and discrimination.

The software was developed by the Norwegian member of the European Sensory Network, Matforsk, and is available for free from the Matforsk website. PanelCheck is a very user-friendly and efficient tool that combines all of these techniques and makes panel monitoring a painless task.
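For readers who want to see the underlying idea rather than PanelCheck's own implementation, the sketch below unfolds the averaged ratings into a products-by-(assessor, attribute) matrix in the spirit of a Tucker-1 analysis and plots PCA correlation loadings with scikit-learn; the data layout and column names are again assumptions, and PanelCheck's computations may differ in detail.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hypothetical long format: 'assessor', 'product', 'attribute', 'rating'
# (replicates averaged by the pivot). Tucker-1 style unfolding: products in
# the rows, one column per (assessor, attribute) pair.
df = pd.read_csv("panel_ratings.csv")
wide = df.pivot_table(index="product",
                      columns=["assessor", "attribute"],
                      values="rating", aggfunc="mean")
wide = wide.dropna(axis=1)        # drop incomplete assessor/attribute columns

# Centre each column, then run a two-component PCA on the unfolded matrix.
X = wide - wide.mean()
pca = PCA(n_components=2)
scores = pca.fit_transform(X.values)    # common product scores

# Correlation loadings: correlation of each (assessor, attribute) column
# with the two principal-component score vectors.
corr_load = np.array([
    [np.corrcoef(X[col], scores[:, k])[0, 1] for k in range(2)]
    for col in X.columns
])

fig, ax = plt.subplots()
ax.scatter(corr_load[:, 0], corr_load[:, 1], s=10)
for (assessor, attribute), (x, y) in zip(X.columns, corr_load):
    ax.annotate(f"{assessor}:{attribute}", (x, y), fontsize=6)
ax.add_patch(plt.Circle((0, 0), 1.0, fill=False))   # unit correlation circle
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_aspect("equal")
ax.set_xlabel(f"PC1 ({pca.explained_variance_ratio_[0]:.0%})")
ax.set_ylabel(f"PC2 ({pca.explained_variance_ratio_[1]:.0%})")
plt.tight_layout()
plt.show()
```

Assessor-attribute points that sit near the outer circle and cluster together indicate an attribute the panel uses consistently; points near the origin, or stragglers far from the rest of the panel, flag attributes or assessors worth a closer look.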

In your experience, is it worthwhile to spend time on extra training of bad assessors, or should they rather be excluded and replaced?

It is probably not correct to describe them as “bad assessors”; “confused” assessors might be a better description. More often than not, training targeted at the problem areas will correct any performance issue. We all have bad days, so removing an assessor because of bad results on a project may be extreme. At any rate, it is an inefficient way of collecting data; you have paid for the data, so you ought to try to use it. Assessors have to be correctly recruited and screened for perceptual sensitivity to ensure that those chosen have good perceptual ability and will be highly motivated panel members.

If an assessor is consistently a poor performer, it is best to speak with the person one-on-one to make sure he or she is still motivated enough to be effectively involved, and to ensure there are no other issues outside of the panel environment. Sensory acuity can sometimes be influenced by external factors such as surgery or medication, so it is important for trained panel members to keep the panel leader informed of any relevant developments in their personal health that may influence their efficacy as sensory panellists.

What can be done to improve long term panel performance?

A trained panel member will be happy and effective in the role provided they are continually motivated by the work they are doing. When standard operating procedures or long-term projects are initiated, it is easy to forget the effect this has on a panel’s morale; the process of training and evaluating foods and beverages often becomes monotonous, routine, and repetitive. The onset of these changes in morale is slow, but the result is always the same: poor panel performance. Providing clear and honest feedback on good and poor performance is a useful way to lessen the risk of decreased panel motivation.

Well-constructed feedback has the power to motivate the assessors and promote the feeling that their results are important and useful. Where possible, it is a good idea to present the findings from studies that the trained panel have completed to demonstrate the value of their sensory insights, and highlight how important they are to the success of your projects.


Thank you, Dr. Forde.