Sampling and the related testing often come into the spotlight when pathogens in a food product cause illness. Inadequate sampling and testing are often identified as the cause of the illness when, in fact, testing does not mitigate food safety problems. Together, sampling and testing are an assessment tool that can divert some affected material, but their mitigation power does not compare to that of preventive measures for reducing pathogen risk. A successful sampling program must address both the needed sensitivity and the ability to represent the lots under assessment; both are part of a “fit for purpose” evaluation. The latter aspect bears on the accuracy of any single assessment, which is critical if sampling and testing are being used to divert affected materials.
There are many recent examples where the sensitivity and accuracy of sampling methods have been improved in the face of pathogen-related challenges. The Leafy Greens Marketing Agreement is incorporating increased raw material testing in its guidance for leafy greens. The almond industry has developed guidelines for sampling and testing for Salmonella. Compost products have come under increasing scrutiny. Research on pathogens in various waters continues to explore larger samples for bacterial pathogens and Cyclospora. All of these changes reflect efforts to mitigate or control food safety hazards and to improve methodologies. Unfortunately, the consumer risks associated with these microbiological hazards are greatly amplified by broad consumer exposure to the affected products or to materials affected by the contamination. Some would point to these improvements as the successful evolution of sampling and related testing methods. I prefer to view them as a return to the science, as they are a better application of truths that have been known for decades, if not longer.
All these improvements reflect the application of truths known by most food safety scientists. I present here four of these truths:
- The sensitivity or limit of detection (LOD) of a sampling scheme is linked to the amount of material tested.
- The number of specimens or grabs is intrinsically linked to the accuracy of any sampling-derived assessment unless the lot is known to be homogeneous, as with a well-mixed liquid.
- Without a priori knowledge, the best sample is a random sample, in which every specimen in a lot has an equal probability of being selected.
- A negative result, or nondetection event, does not show that a lot is contamination free; it only provides a measure of confidence that any contamination is below the LOD.
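The first truth, linking LOD to the amount of material tested, can be sketched with a small calculation. This is an illustrative model only: it assumes a perfectly mixed lot (Poisson-distributed contamination) and an assay that detects a single CFU whenever one is present; the test-portion masses are hypothetical.

```python
import math

def p_detect(conc_cfu_per_g: float, sample_mass_g: float) -> float:
    """Probability that a test portion of the given mass contains
    at least one CFU, assuming well-mixed (Poisson) contamination."""
    return 1.0 - math.exp(-conc_cfu_per_g * sample_mass_g)

def lod_at_confidence(sample_mass_g: float, confidence: float = 0.95) -> float:
    """Concentration (CFU/g) detectable with the stated confidence."""
    return -math.log(1.0 - confidence) / sample_mass_g

# Larger test portions push the detectable concentration down.
for mass_g in (25, 100, 375):
    print(f"{mass_g:3d} g test portion: 95% LOD ~ {lod_at_confidence(mass_g):.4f} CFU/g")
```

Under these assumptions, quadrupling the mass tested cuts the detectable concentration to one-quarter, which is why larger samples have become central to the improvements described above.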
Recently, the industry was reminded that safe and unsafe are not simple binary conditions. Absolute safety is impossible. Everyone involved in the food supply chain must seek to minimize risk, moving as close to safe as is practical and possible. There are many tools for moving toward safe, including root cause analysis, quantitative risk modeling, Hazard Analysis and Critical Control Points, preventive control measures, Good Manufacturing Practices, Sanitation Standard Operating Procedures, and more. These programs are driven by knowledge, information, and data, and those data generally come from sampling-based assessments whose results are analyzed and applied.
If a random sample is assumed, a simple calculation yields an operating curve in which the probability of detection increases with the level of contamination. This calculation will accurately predict the average probability of detection if each unit of contamination can be assumed to be independent; that is, no specimen contains more than one unit of contamination even when the contamination is inhomogeneous. A unit of contamination could be considered a cell or a colony-forming unit, depending on the testing method. This does not mean that any given lot will be detected or accepted, just that the average detection level for any contamination is known. Most people understand that dice are generally very fair. The probability of any number coming up is 1/6 when no bias is introduced, and the more throws that are made, the closer the observed distribution will be to this expectation. In sampling a lot of food, we face the same challenge. We know the probability of detection at any level of contamination, but the actual detection of contamination in any given lot is not assured. There is always a measure of uncertainty.
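The operating-curve calculation above can be sketched as follows. The 60-grab sample size and the prevalence values are hypothetical, and the formula rests on the independence premise just stated: each specimen carries at most one unit of contamination.

```python
def detection_probability(n_specimens: int, prevalence: float) -> float:
    """Probability that at least one of n randomly drawn specimens is
    contaminated, given the fraction of contaminated units in the lot."""
    return 1.0 - (1.0 - prevalence) ** n_specimens

# The operating curve: detection probability rises with the contamination
# level, but acceptance of a contaminated lot is never ruled out.
for prevalence in (0.001, 0.01, 0.05, 0.10):
    print(f"prevalence {prevalence:5.3f}: "
          f"P(detect, 60 grabs) = {detection_probability(60, prevalence):.3f}")
```

Like the dice, these are long-run averages: any single lot may still escape detection, which is the uncertainty the text describes.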
The extreme case of a highly contaminated point source is often raised as a challenge to a sampling program. If such contamination is severe, it probably violates the premise of independence, because the affected specimens contain more than one unit of contamination. In such cases, the size of the point source relative to the size of the lot can be used to construct an operating curve; contamination must be sampled to be detected. Some in-between cases need to be approached with conditional probabilities, where the probability of sampling the contamination is used in conjunction with the probability of detection.
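That conditional structure can be sketched as the product of two probabilities. The cluster fraction, grab count, and per-hit detection probability below are hypothetical values chosen for illustration.

```python
def point_source_detection(cluster_fraction: float,
                           n_specimens: int,
                           p_detect_given_hit: float) -> float:
    """P(detect) = P(at least one grab lands in the cluster)
                 x P(assay detects | cluster material is in the sample)."""
    p_hit = 1.0 - (1.0 - cluster_fraction) ** n_specimens
    return p_hit * p_detect_given_hit

# A point source occupying 0.5% of the lot, 60 grabs, and near-certain
# detection once cluster material actually reaches the sample:
print(round(point_source_detection(0.005, 60, 0.99), 3))
```

Even with a highly sensitive assay, the overall detection probability here is dominated by the chance of physically sampling the cluster, which is the point of the passage above.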
There has rightly been discussion about sampling patterns and the number of specimens or grabs that make up samples. Such considerations are very important when the distribution of the contamination is inhomogeneous. Efforts to specify particular patterns are helpful only to the extent that they help approximate a truly random sample, unless there is a priori knowledge about the inhomogeneity. When inhomogeneity exists, increasing the number of specimens increases the probability of sampling a cluster, to the extent that each specimen can be considered independent. Recently, aggregating sampling techniques have been implemented to greatly increase the effective number of specimens by sampling the surface layer of large portions of the lot. Given that error is expected to be inversely proportional to the square root of the number of specimens, the accuracy of a determination will be greatly enhanced by aggregated sampling. The U.S. Department of Agriculture Food Safety and Inspection Service has allowed a patented aggregated sampling technique to replace the traditional excision method of sampling beef trim (U.S. patent 10,663,446, assigned to FREMONTA, Fremont, CA). Similar advantages can be anticipated in other protein foods, produce, nutmeats, and powdered products where contamination is not uniform and is present on the surface of the product.
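The square-root relationship can be illustrated with the standard error of an estimated prevalence from independent specimens; the 5 percent prevalence and the specimen counts are hypothetical.

```python
import math

def std_error(prevalence: float, n_specimens: int) -> float:
    """Standard error of a prevalence estimate from n independent specimens."""
    return math.sqrt(prevalence * (1.0 - prevalence) / n_specimens)

# Each fourfold increase in specimens halves the error, which is why
# aggregated sampling's much larger effective specimen count matters.
for n in (60, 240, 960):
    print(f"n = {n:3d}: std error = {std_error(0.05, n):.4f}")
```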
It is critical to understand the data provided by a sampling plan and the related assessments; the quality of the information must match the need. An inadequate sampling program can provide emotional comfort and acceptance, which can lead to illness and the loss of consumer confidence.
Eric Wilhelmsen, Ph.D., is a recognized world authority in food authentication, serving for over 25 years in both academic and industrial positions. In these roles, he has provided technical leadership and innovation for industrial collaborations. His technical contributions and practical innovations have been fundamental in establishing new revenue streams and profitable businesses in juices, dietary supplements and botanicals, agricultural commodities, byproducts and beverages. He can be reached at the Alliance of Technical Professionals: email@example.com.