I recall sitting in the office one day in the fall of 2018 when I received a call from a reporter who informed me that there had been a further outbreak of Escherichia coli O157:H7 linked to romaine lettuce. After my initial response of “oh, no,” the reporter asked why we continue to have outbreaks linked to lettuce. It was a valid question, and all I could answer was that regulators and industry do the same thing after every outbreak but expect a different outcome. 

The typical play made by the Leafy Greens Marketing Agreement (LGMA) and the U.S. Food and Drug Administration (FDA) is to coordinate a response in which the former comes out with positive messaging and the latter says it is going to improve food safety standards. In the aftermath of the latest outbreak linked to romaine lettuce contaminated with E. coli O157:H7, FDA rolled out the Smart Farming initiative, which essentially furthered the aim of introducing blockchain while also promoting increased testing. The LGMA also came out with new irrigation water standards, given that the seven outbreaks linked to romaine lettuce could be attributed to contaminated water. 

 

Test, Test, Test 

It is instinctive for policy makers and the media to call for increased testing to address food safety issues. It is also a convenient “go-to solution” for industry to appease those who press for testing, as it is easier to go with the flow. The reality is that testing is expensive, and a balance needs to be struck between providing assurance to retailers and consumers and not overburdening a sector that works on razor-thin margins. Therefore, maximizing the value of testing is key, which means pausing to reflect on how testing programs are developed, why they have been deficient to date, and, most importantly, how they can be improved. 

 

A Brief History of Irrigation Water Testing

Robert Koch and his research team began developing general growth media to cultivate microbes in the 1870s and rolled out the first version in 1881. It took another 9 years to further refine the media, introduce gelatin and later agar as solidifying agents, and invent the Petri dish. As is still the case today, the new ability to enumerate and isolate microbes was first applied in the medical field and only later trickled down to the food sector. 

The first documented microbiological criteria in the food sector were published in 1905 in the first edition of Standard Methods for the Examination of Water and Wastewater. At that time, the incubation conditions for enumerating the total count were 20 °C for 48 hours. The incubation temperature of 20 °C was based on practical rather than microbiological considerations, given that the gelatin solidifying agent would melt above 28 °C. Soon after the water testing standards were introduced, the dairy sector developed microbiological criteria for milk. Agar was starting to be introduced at this time, enabling the incubation temperature to be raised to 37 °C, where mesophilic bacteria of concern to human health could be enumerated. This posed an issue for early microbiology test labs, as they would now require two incubators: one for water testing and the other for dairy analysis. Eventually, 34 °C was selected as the standard incubation temperature so that laboratories could operate with a single incubator; hence, the “standard plate count” was born. 

The story of how the standard plate count came to be reveals how the development of standard methods has been shaped by practicalities and often represents a compromise rather than being based on sound science. 

 

Development of Microbiological Criteria 

Growth media evolved over the years, first enabling selective enumeration of key microbial groups and later providing formulations for specific pathogens. The next (if not the current) problem is how to interpret such data: what is a good count, and what is a bad one? The first microbiological criteria were based on observation, with physicians setting limits according to the levels of microbes recovered from food and water implicated in outbreaks. Microbiological criteria for foods were further developed and eventually led to the establishment of the International Commission on Microbiological Specifications for Foods in the 1960s. The science of risk assessment and statistics evolved into acceptance testing, providing the basis for sampling plans along with criteria. However, it became evident by the end of the 1980s that testing was limited as a food safety net, which subsequently led to the introduction of Hazard Analysis and Critical Control Points. 

 

Establishing the Microbial Criteria for Irrigation Water in Leafy Greens Production

The standards for irrigation water were developed by FDA and the LGMA following the 2006 E. coli O157:H7 outbreak linked to spinach. The powers that be knew that they needed to come up with a sampling plan and microbiological criteria. I am sure they looked at the potable water standards but then decided that these could only be achieved if disinfection treatment was applied, as it is for drinking water. Since this was too costly, the LGMA went down a level and selected recreational water standards, with the reasoning, maybe, that if you can swim in it, then it should be safe to irrigate crops with. FDA was on board with the idea, and the standards were later adopted under the final Produce Safety Rule under the Food Safety Modernization Act (FSMA). 

However, the LGMA didn’t fully embrace the U.S. Environmental Protection Agency (EPA)’s recreational water standards, as this would have meant screening for enterococci in addition to generic E. coli, and then for further indicators upon detecting a positive sample. Moreover, EPA did not specify the frequency of testing, although most states implemented weekly testing of recreational water, increasing to daily when a contamination event could potentially occur (for example, after a flood). FDA proposed weekly testing of irrigation water, but the LGMA considered one to two samples per season sufficient. 

A long-standing debate in water testing is how well indicators, such as generic E. coli, reflect the actual safety status of a water source. It was recognized early in water testing that the gastrointestinal tract harbors coliforms, which could therefore serve as indicators of fecal contamination and, by extension, of enteric pathogens. The U.S. Public Health Service took this logic and in the 1960s came up with microbiological criteria based on coliform counts. To arrive at a number, health officials went through historical data and noted that an outbreak linked to contaminated water had occurred when the coliform count was 2,300 CFU/100 mL. The National Technical Advisory Committee took this level and suggested that a standard of 235 MPN/100 mL be adopted, which exists to this day. 

 

Response to E. coli O157:H7 Outbreaks Linked to Romaine Lettuce: 2018–2020

The irrigation water standards were built on a shaky foundation, and it was perhaps no surprise that the series of outbreaks linked to romaine lettuce occurred. The LGMA still needed to come up with plans to tighten the irrigation water standards. The result was a somewhat convoluted solution, although it did have some positive aspects with respect to risk assessments and preventive controls. 

The new irrigation water standards recognize two water sources: Type A, which has a low risk of being contaminated with enteric pathogens (for example, well water or groundwater), and Type B, which includes sources such as surface water that are susceptible to contamination. Type A testing takes six water samples (three at the start and another three 7 days later), with a pass being when five of the six samples have generic E. coli levels below 10 MPN/100 mL. A further set of samples is taken 21 days before harvest if the water is to be applied by overhead irrigation. In the event a sample is out of compliance, a root-cause analysis can be performed and a second set of samples taken. If the second set of tests passes, then the first failed result can be disregarded.

Type B water is screened for coliforms (< 99 MPN/100 mL) and generic E. coli (< 10 MPN/100 mL). The water cannot be used for overhead irrigation unless it is decontaminated with a process that supports a 2-log reduction of coliforms. Type B water can be used up to 21 days before harvest and must be sampled monthly, with no single sample greater than 235 MPN/100 mL (geometric mean of 126 MPN/100 mL). If the water source fails to meet the standard, then the final crop should be tested for E. coli O157:H7, Salmonella, and Listeria monocytogenes. 
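
For those who prefer to see decision rules written out explicitly, below is a minimal sketch in Python of how the Type A and Type B acceptance logic described above might be encoded. The function names and the handling of non-detects are my own assumptions; the thresholds are simply those stated in the text.

```python
from statistics import geometric_mean
from typing import Sequence

def type_a_passes(ecoli_mpn: Sequence[float]) -> bool:
    """Type A (e.g., well water or groundwater): six samples (three at the
    start, three more 7 days later); at least five of the six must have
    generic E. coli below 10 MPN/100 mL."""
    if len(ecoli_mpn) != 6:
        raise ValueError("Type A evaluation expects six samples")
    return sum(result < 10 for result in ecoli_mpn) >= 5

def type_b_passes(coliform_mpn: float, monthly_ecoli_mpn: Sequence[float]) -> bool:
    """Type B (e.g., surface water): coliforms below 99 MPN/100 mL, plus
    monthly E. coli samples with no single result above 235 MPN/100 mL and
    a geometric mean of no more than 126 MPN/100 mL. Non-detects are assumed
    to be reported at the detection limit (e.g., 1 MPN/100 mL) so that the
    geometric mean is defined."""
    if coliform_mpn >= 99:
        return False
    if any(result > 235 for result in monthly_ecoli_mpn):
        return False
    return geometric_mean(monthly_ecoli_mpn) <= 126

# Example: one Type A sample at 12 MPN/100 mL still passes (5 of 6 below 10).
print(type_a_passes([1, 3, 2, 12, 4, 1]))        # True
print(type_b_passes(45, [20, 110, 15, 8, 60]))   # True
```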

 

Will the New LGMA Standards Make a Difference? 

Although the rollout of the LGMA microbiological standards was positively received by the industry, government, and the media, inherent weaknesses persist. Putting aside the “mulligan” in responding to a failed irrigation water test, the question is, “Will the increase in sampling make a difference?” In short, it will make little difference, as can be illustrated using operating characteristic (OC) curves. As an example, the beef industry routinely performs N60 sampling of beef trim when screening for E. coli O157:H7. Assuming 1 percent prevalence of the pathogen, taking 60 samples would detect the contamination only about 45 percent of the time; in other words, there is roughly a 55 percent chance of missing it. The far smaller sample numbers taken under the LGMA standards would therefore have an even lower probability of detecting contamination. 
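
The arithmetic behind that statement is simple enough to lay out. If each of n independent samples has probability p of being contaminated, the probability of detecting at least one positive is 1 - (1 - p)^n, and the OC curve is its complement, the probability of acceptance. A minimal sketch in plain Python, assuming independent samples:

```python
def detection_probability(n_samples: int, prevalence: float) -> float:
    """Probability that at least one of n independent samples is positive,
    given the per-sample prevalence; the OC curve's probability of
    acceptance is simply 1 minus this value."""
    return 1 - (1 - prevalence) ** n_samples

# N60 sampling at 1 percent prevalence: about a 45 percent chance of
# detection, i.e., roughly a 55 percent chance the lot is accepted anyway.
print(round(detection_probability(60, 0.01), 2))   # 0.45

# A handful of irrigation water samples fares far worse at the same prevalence.
print(round(detection_probability(6, 0.01), 2))    # 0.06
```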

 

Is It Time for a Rethink on How Testing Is Performed?

Although the series of outbreaks linked to romaine lettuce would suggest an industry in crisis, the reality is that 80 percent of those events occurred within a small geographical region of California. This implies that the vast majority of irrigation water is safe, with contamination occurring via unexpected, sporadic events. Therefore, rather than a rigid sampling plan, a more investigative approach is warranted. For example, with groundwater, there is evidence that the level of the water table can be linked to contamination, and flooding or high-precipitation events increase the likelihood that irrigation water becomes contaminated. 

 

Test Sediments Rather Than Water 

The traditional approach to collecting water samples is to take grab samples of 100–500 mL. An alternative is to apply concentration methods, such as Moore swabs or tangential-flow filtration, to collect samples over a longer period or to increase sample volumes. FDA applied this approach during evidence collection in the E. coli O157:H7 romaine lettuce investigations. Yet it was only when FDA sampled the canal sediments that the implicated strain was finally recovered. 

Contaminants that enter surface water systems can be readily retained within the environment by adsorbing onto stream sediments. Therefore, by screening sediments, a history of the water’s exposure to pesticides, heavy metals, and human pathogens can be assessed. In addition, sediments shift over time, depending on the flow of the river and geographical features. Through imaging and modeling, it is possible to predict sediment movement and identify where contaminants could accumulate and, hence, where to sample. The approach therefore holds promise as a tailored method for determining the risk of surface water or groundwater. It is conceivable that, by using next-generation sequencing, pathogens (viral, bacterial, and enteric protozoan) could be identified in sediments directly, rather than depending on indicators such as generic E. coli and coliforms. 

 

Spectral Analysis of Water 

With the advent of imaging technologies and big data, hyperspectral analysis has expanded in recent years. The principle of the approach is to gather data across a range of wavelengths, then identify unique signatures through machine learning. The method has increasingly been applied to determine the distribution of nutrients within fields or to assess the optimal time of harvest. With regard to irrigation water, it is feasible to collect spectra over time to establish a baseline of quality and then identify spectral characteristics associated with fecal contamination. Hyperspectral analysis also provides a means for continuous monitoring of irrigation water, with an alarm raised when a contamination event occurs. 
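
As a purely illustrative sketch of the principle, the following Python example trains a classifier on synthetic spectra in which contaminated samples carry an arbitrary absorption feature. Real hyperspectral sensors, data, and contamination signatures would of course look quite different; nothing here reflects a known spectral marker of fecal contamination.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 120)   # nm; hypothetical sensor range

def synthetic_spectrum(contaminated: bool) -> np.ndarray:
    """Stand-in for a measured water spectrum: a smooth baseline plus noise,
    with contaminated samples given a broad (entirely made-up) absorption
    feature near 680 nm."""
    baseline = 0.2 + 0.05 * np.sin(wavelengths / 150)
    bump = 0.08 * np.exp(-((wavelengths - 680) / 40) ** 2) if contaminated else 0.0
    return baseline + bump + rng.normal(0, 0.01, wavelengths.size)

labels = rng.integers(0, 2, 400)                      # 0 = clean, 1 = contaminated
spectra = np.array([synthetic_spectrum(bool(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# In continuous monitoring, each new spectrum would be scored as it is collected,
# with an alarm raised when the predicted probability of contamination is high.
alarm = model.predict_proba(spectra[:1])[0, 1] > 0.9
```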

 

Strategic Sampling in the Field

A key principle of testing is to ensure that sampling is random and representative of the lot. However, unless contamination is evenly distributed across a field, it is unlikely that the classic randomized sampling approach will be of value. A more likely scenario in field contamination is that it is localized, for example, in relation to animal intrusion. Even when the contamination source is initially evenly distributed (e.g., manure amendments, irrigation water), the pathogens can undergo different die-off rates, depending on moisture content, exposure to sun, and/or the resident microflora. Therefore, mapping a field for potential contamination hot spots will provide a better food safety assessment than random sampling; for example, by selecting plants where animal activity has been observed or where water pools. 
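
To make the contrast with random sampling concrete, here is a minimal sketch of risk-weighted sample selection over a gridded field. The grid, the risk scores, and the helper name are hypothetical illustrations; the point is simply that sampling effort follows a risk map rather than a uniform draw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 20 x 20 grid of field cells, each with a relative risk score
# built from observations such as animal intrusion, pooled water, or
# proximity to livestock. The values below are arbitrary placeholders.
risk = np.ones((20, 20))
risk[3:6, 14:18] = 8.0      # observed animal activity
risk[12:15, 2:5] = 5.0      # low-lying area where water pools

def choose_sample_cells(risk_map: np.ndarray, n_samples: int) -> list:
    """Draw distinct cells with probability proportional to their risk score,
    so sampling effort concentrates on likely hot spots rather than being
    spread uniformly across the field."""
    weights = risk_map.ravel() / risk_map.sum()
    picks = rng.choice(risk_map.size, size=n_samples, replace=False, p=weights)
    return [divmod(int(i), risk_map.shape[1]) for i in picks]

# Ten targeted sampling locations (row, column), versus a purely uniform draw.
print(choose_sample_cells(risk, 10))
print(choose_sample_cells(np.ones((20, 20)), 10))
```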

 

The Future of Testing in the Leafy Greens Sector

It is generally accepted that testing has limitations, although it is still useful as a verification tool. In a similar manner, irrigation water disinfection has limitations in terms of feasibility and environmental impact. Therefore, as with milk, there is a need to develop more effective postharvest decontamination methods. There have been developments in this area with processes such as gas-phase advanced oxidation, among others. Nevertheless, getting smart with sampling and testing can contribute to enhancing the safety of leafy greens and other produce. 

 

Keith Warriner, Ph.D., currently works within the Department of Food Science at the University of Guelph. He received his B.Sc. in food science from the University of Nottingham, UK, and a Ph.D. in microbial physiology from the University College of Wales, Aberystwyth, UK. He joined the faculty of the University of Guelph in 2002.