Hazard Analysis and Critical Control Points (HACCP) principles require identified and realistic food safety hazards to be prevented, eliminated or reduced to acceptable levels. The first two options are straightforward, from a conceptual perspective. The result in both cases is—again conceptually—a complete absence of the hazard, which should be acceptable to everyone. Things get more complicated with the last option: “reduced to acceptable levels,” because acceptability is a multifaceted concept.

Risk-Based Approaches
Traditionally, acceptable levels of food contaminants have been defined on the basis of scientific dose-response insights, leading, for example, to the establishment of acceptable daily intake (ADI) limits for substances of toxicological concern or of infectious doses for microbiological pathogens.

This approach then leads to the adoption of (legal) limits, such as “absence in 25 g,” which is a more stringent requirement than “absence in 10 g” and implies that a lower risk of a negative outcome (illness) is deemed acceptable. For our current purposes, we will term this approach “risk-based.” It is important to note that “risk-based” always implies that a certain level of risk, not zero, is deemed acceptable.
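
To make concrete why “absence in 25 g” is the stricter requirement, the short Python sketch below (not part of the original analysis; all numbers are hypothetical) assumes that contaminating organisms are randomly, i.e., Poisson, distributed through a lot, so that a larger test portion has a higher chance of catching at least one organism at any given contamination level:

    import math

    def detection_probability(organisms_per_g: float, sample_mass_g: float) -> float:
        """Probability that a test portion contains at least one organism,
        assuming a Poisson distribution of organisms through the lot."""
        expected_count = organisms_per_g * sample_mass_g
        return 1.0 - math.exp(-expected_count)

    # Hypothetical contamination level: 0.05 organisms per gram
    for mass in (10, 25):
        p = detection_probability(organisms_per_g=0.05, sample_mass_g=mass)
        print(f"{mass} g test portion: detection probability = {p:.2f}")
    # 10 g portion -> ~0.39; 25 g portion -> ~0.71

In other words, demanding “absence in 25 g” rejects lots at contamination levels that an “absence in 10 g” test would frequently let pass, which is what a lower acceptable risk means in practice.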

Allergenicity has long been a special problem in the context of this approach, especially in “may contain” cases. The distinguishing factor in allergenicity is the extreme variability of individuals’ sensitivity to the material in question: most people are not sensitive at all, a few percent are, and some may be extremely sensitive. Furthermore, in products that “may contain” a certain allergen, the actual content may also vary significantly from lot to lot. The situation therefore gets increasingly complex. One approach that individual food processors have taken is to label their products as “may contain” everything that could be present on the manufacturing site (or sites, if the same product is manufactured at multiple sites). In response, consumers sensitive to specific allergens have been known to “calibrate themselves against the market,” trying little bites of “may contain” products to see whether they have an immediate reaction. If not, they conclude that the product can safely be consumed now and in the future, counting on allergen levels to be effectively constant in subsequent production lots. Where existing products have been transferred from one manufacturing site to another, resulting in different actual allergen residue levels, this has sometimes proven to be an unreliable strategy with severe consequences. Simple “may contain” labeling is therefore not a real risk-based strategy: it does not consider the actual level of allergens in the product, it invites risky behavior on the part of consumers and it makes no attempt to link dose to effect. Fortunately, more systematic approaches like VITAL (Voluntary Incidental Trace Allergen Labeling)[1] make a thorough attempt to bring allergen labeling back into the fold of scientifically valid, risk-based approaches.
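
The improvement that a quantitative scheme such as VITAL brings is that it ties a labeling decision to a dose. The Python sketch below shows the general form of that reasoning: an action level (a concentration in the food) follows from a population reference dose and the amount of food eaten in one sitting. The reference dose and serving size used here are hypothetical placeholders, not official VITAL values.

    def action_level_mg_per_kg(reference_dose_mg: float, serving_size_g: float) -> float:
        """Allergen-protein concentration at which one serving delivers the reference dose."""
        return reference_dose_mg / (serving_size_g / 1000.0)

    # Hypothetical example: a 1.0 mg reference dose and a 50 g serving
    level = action_level_mg_per_kg(reference_dose_mg=1.0, serving_size_g=50.0)
    print(f"Action level: {level:.0f} mg allergen protein per kg of food")  # 20 mg/kg

A residue below such a level supports omitting the warning; above it, a “may contain” statement is warranted. Either way, the decision is linked to dose and effect rather than to the mere possibility of presence.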

Hazard-Based Approaches
Increasingly, however, very different considerations are being included in the concept of acceptability. The Sudan Red case in the UK in 2004 is an example.[2] In this case, the nonfood dye Sudan Red had been found as a contaminant in certain spices. As no acceptable level had been determined, all products containing Sudan Red at any level were recalled. This included products that could be linked, through traceability, to the issue, even if no Sudan Red could be detected in them anymore. It is questionable whether homeopathic dilutions of a suspected hazard continue to pose an actual risk, but the case was handled on an absolute zero-tolerance basis. For our current purposes, we will term this approach “hazard based.”

Whereas the Sudan Red case had conventional toxicological considerations (based on very little available evidence, which was where most of the problem originated) as the main driver, a few other cases seem to go well beyond that.

Imports of American long-grain rice came to a halt in 2006, when traces of the genetically modified variety LL601 were detected.[3] “The protein found in LLRice 601 is approved for use in other products. It has been repeatedly and thoroughly scientifically reviewed and used safely in food and feed, cultivation, import and breeding in the U.S. as well as nearly a dozen countries around the world,” argued U.S. Secretary of Agriculture Mike Johanns, but Japan and the European Union (EU) stopped imports immediately. As ships carrying rice were still under way when the issue became known, tests had to be developed to analyze cargoes upon arrival. The proposed sampling and testing regime was significantly more likely to pick up LL601 than a very sensitive routine Salmonella test would be to pick up Salmonella in a case of suspected contamination, although nobody would suggest that LL601 was more hazardous than Salmonella. Furthermore, EU Member States would not agree on point-of-entry acceptance testing, but continued to test once-cleared batches when they arrived within their borders. The adoption of a hazard-based approach in the absence of an actual hazard is, in this case, explained by the EU public’s strong aversion to genetically modified organism (GMO) technology, which in turn guided authorities’ actions.
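
The stringency comparison can be illustrated with a simple probability sketch in Python (hypothetical numbers, not the actual EU or LL601 sampling plans): if a fraction of the units in a lot would test positive, a plan that takes many independent test portions is far more likely to flag a trace-level event than a single routine test.

    def plan_detection_probability(positive_fraction: float, n_portions: int) -> float:
        """Probability that at least one of n independent test portions is positive."""
        return 1.0 - (1.0 - positive_fraction) ** n_portions

    # Hypothetical lot in which 2% of sampled units would test positive
    for n in (1, 30):
        p = plan_detection_probability(positive_fraction=0.02, n_portions=n)
        print(f"{n} portion(s): probability of detection = {p:.2f}")
    # 1 portion -> 0.02; 30 portions -> ~0.45

The point is not the specific figures but the asymmetry: the effective stringency of a testing regime is set by how hard one looks, not by how dangerous the target is.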

The case of azo dyes in food, which by EU law must now be flagged on the label so that parents can choose to avoid the product for their children, illustrates how public and political perception may designate an additive as a hazard in a case where the European Food Safety Authority was clearly very reluctant to go that far. The labeling obligation then effectively acts as a hazard-based preventive measure, and the hazard status of these additives has become a self-fulfilling prophecy that is now a well-accepted “fact” in public media. Much the same applies to the “E-number” ingredients (chemicals permissible as food additives in the EU). The system was originally designed to reassure consumers of the safety of these ingredients, but public opinion has moved in the opposite direction, and producers are now keen to “keep the label clean” by eliminating such ingredients, thereby reinforcing the trend.

Lastly, there seems to be an emerging tendency to treat even contaminants for which a maximum residue level has been legislated on a zero-tolerance basis. The thinking behind this appears to be that detected levels may vary case by case, that the presence of a contaminant indicates the possibility that other samples might exceed limits and that a positive decision to allow the product into the market might in the end be seen as negligent and indefensible.  

Risk- or Hazard-Based: Does It Matter?
The point of the above examples is not that people should be forced to eat what they do not wish to eat, or that food safety would benefit from lax rules and enforcement. The problems are inherent in the concept of hazard-based food safety approaches, because the underlying drivers are on a collision course:

• Analytical methodology becomes ever more sensitive and selective. That holds for chemical as well as microbiological analyses, in which recent advances have shown that multiple, different strains of organisms could be detected where traditionally none or only one would be found. On the chemical/analytical side, we will get ever closer to the point where every possible environmental substance can be detected in every substrate. An elegant example was already shown in the 1980s, when low levels of BTEX (benzene, toluene, ethylbenzene and xylene) components were detected in olive oil, which caused great concern, mainly in Germany. Much negative publicity resulted until it was proven that the mere exposure of olives to the exhaust fumes of normal traffic at a significant distance for a few days (essentially just storing the olives in a shack off road) would cause the levels found and that consumers would be exposed to higher levels of these substances on a regular basis by filling up the gas tanks of their cars. Only then did public concern fade away.

• Allowing hazards to be designated on other than strictly scientific grounds, or bending scientific requirements to allow for fast-track hazard designation, increases the number of designated hazards. Furthermore, these aversion-based hazards tend to end up in the zero-tolerance category either directly (the GMO example) or indirectly (the Sudan Red dye example). Additionally, in many countries, authorities must now be informed about any noncompliant test results, even if these are tentative, dubious or unconfirmed. This obligation was introduced to prevent real issues from being covered up or necessary communication from being delayed, but in practice it often means that authorities come under pressure to warn the public from the very first moment a potential issue is suspected. Once a warning has gone out, there is, in practice, no way back.

• As methodologies are continually refined and more zero-tolerance hazards are added, noncompliant test results can be expected to become ever more frequent, leading to ever more “scandals,” recalls, feelings of uncertainty in the general public and perfectly acceptable food being destroyed as if it were toxic waste. The feelings of uncertainty in turn typically drive calls to carry out more tests at ever lower detection limits, turning the circle ever faster.

Hazard-based approaches tend to develop a runaway internal logic that drives developments to their limits: for example, the recall of Sudan Red-related products only on the basis of traceability. As the logic, once adopted, seems impossible to argue with, different approaches are normally only possible in an entirely different setting. The Food Standards Australia New Zealand agency saw things differently: “… the authority believes that Australians are safe. For a start, because the amounts of the dye in these products are so small, and because the link to cancer in humans hasn’t been proven, the overall risk to health is small.” A similar pattern was seen in other cases—the BTEX controversy in olive oil was never as big outside Germany. Solid risk-based approaches generally do not have this inherently inescapable logic.

In addition, the ongoing pursuit of these cases diverts effort from more urgent food safety priorities, most of which have to do with implementing basic HACCP/hygiene management systems in the food chain, in developing as well as developed countries. Certification and general food safety awareness levels among suppliers in Europe and elsewhere still need improving, and significant work remains to raise standards.

So What Can Be Done About It?
The main goal must be to target our food safety efforts toward the prevention of actual harm. With the ongoing high incidence of foodborne illness around the world (the World Health Organization has mentioned 1.8 million fatalities), most of which are expected to be due to microbiological issues, there is every reason to continue working on basic hygiene and HACCP in our markets and in other countries, where some of our products are grown, farmed or manufactured.

This includes ongoing training, certification efforts and the development of analytical methodology to trace pathogens involved in outbreaks [one reason the German enterohemorrhagic Escherichia coli (EHEC) outbreak of 2011 took so long to be resolved was that the appropriate analytical tools for E. coli O104 in food were not available at the time] and traceability technology for the entire supply chain. With an ongoing background level of around 1,000 EHEC cases annually in Germany and around 70,000 in the U.S., this is a prime example of risk-based priority setting.

The gradual efforts toward food safety in terms of risk reduction are hard work, not very glamorous and never finished. Any preventive effects can be demonstrated only indirectly and statistically, so the immediate reward of a demonstrable intervention in an acute case is simply not there. Looking back at the behavior of some of the stakeholders in the aforementioned German EHEC case, where there was no shortage of actors claiming a share of the responsibilities and public communication duties, one suspects that impatience with the slower pace and less precipitous actions of a risk-reduction approach may have been a factor.

At the same time, we would benefit from a very critical assessment of all zero-tolerance, hazard-based approaches, for the reasons provided above. Risk communication must be an important element of this exercise. Recognizing that much of the driving force behind the designation of zero-tolerance hazards is aversion-based (zero tolerance was not put in place because there is infinite risk), the best option may be to stress the negative consequences of many hazard-based approaches:

• The unstoppable drive toward complete elimination of implicated products from the market in the absence of any significant risk
• The associated food waste
• The inherent tendency toward “discovering” more of these instances
• The very considerable efforts involved
• The tendency to reconfirm existing feelings of uncertainty among the public
• The absence of a contribution toward reducing the rate of foodborne diseases

Moving forward, the food industry—primary producers, manufacturers, retailers and foodservice—will need to continue to work together to improve risk-based food safety management along the entire supply chain. We will also need to become more vocal in challenging many hazard-based, zero-tolerance approaches that may effectively undermine any confidence the consumer may have in our global food safety efforts.   

Peter Overbosch, Ph.D., is vice president of corporate quality assurance, Metro AG, based in Dusseldorf, Germany.

References
1. www.eu-vital.org/en/home.html.
2. www.abc.net.au/health/thepulse/stories/2005/03/03/1313354.htm.
3. www.cbgnetwork.org/1629.html.
 
