A Critical Review of Compositional Differences in Soybeans on the Market: Glyphosate Accumulates in Roundup Ready GM soybeans (Bøhn, T., et al. 2014)

Skepti-Forum member Amelia Jordan has kindly written a critique of Bøhn, T., et al. (2014) based on the discussion of the study in our GMO Skepti-Forum Facebook thread. Since the study is making the rounds through GMO-critic communities and does not appear to hold up to scrutiny, we wished to provide a discussion of its several limitations. Contrary to the sensationalised memes drawing conclusions from the study, several GMO SF members, including Amelia, explain why readers should be skeptical of this research. The critiqued study may be read here: Bøhn, T., et al. (2014). “Compositional differences in soybeans on the market: Glyphosate accumulates in Roundup Ready GM soybeans.” Food Chemistry 153(0): 207-215. We thank Amelia sincerely for the time and energy she put into the following critique.
If you come across solid resources on Bøhn’s study, please send them our way and we’ll add them to our Wiki of scientific literature.

A Critical Review of Compositional Differences in Soybeans on the Market: Glyphosate Accumulates in Roundup Ready GM soybeans

By: Amelia A. Jordan

This paper does not meet minimal scientific standards in design, writing, or proper citation. To begin, a customary introduction should be full of credible references in order to set the stage for the study; here, however, there are no citations until the fifth paragraph. By that point, we have seen at least 12 statements for which the authors give no citations; most are statistics on GM soy and glyphosate. The authors do not state the sources of this information, so we cannot know whether what they are claiming is supported by evidence.

This failure of proper citation is repeated throughout the paper. You can see it in this uncited statement: “Evolution of resistance to glyphosate is unfortunately progressing, particularly in the US. System vulnerability to resistance development is enhanced where there is a low diversity in weed management practice coupled with crop and herbicide monoculture.” A statement this lofty needs supporting evidence behind it. By leaving such claims unreferenced, the authors weaken the credibility of their argument; we do not know whether we can trust what they are saying.

References are also used inappropriately, as evidenced by this incomplete quotation: “We thus document what has been considered as a working hypothesis for herbicide tolerant crops, i.e. that: “there is a theoretical possibility that also the level of residues of the herbicide and its metabolites may have increased” (Kleter, Unsworth, & Harris, 2011) was actually happening.” Unfortunately, the paper the authors cite here actually finds that “No general conclusions can be drawn concerning the nature and level of residues, which has to be done on a case-by-case basis” (Kleter, Unsworth, & Harris, 2011).

To properly build on the data provided by Kleter, Unsworth, and Harris (2011), the authors needed to supply enough evidence supporting their position, either with data presented in this paper or from another study. They fail to provide enough data to do so, and they fail to provide any further citations to back up their statements. Over the course of this paper, the authors not only fail to cite sources regularly and appropriately; they also fail to use the sources they do cite correctly.

Continuing on, and as further evidence of substandard writing, the authors never complete their reasoning in the introduction about why food and food quality are crucial. We implicitly know why, but the authors should state what food quality is crucial for: human health, animal health, and so on. The authors also use words inappropriately or in incorrect contexts, as in this line: “which can further accelerate the EVOLUTION of glyphosate resistance in weed species” [emphasis mine]. This is an incorrect use of the word evolution. What the authors should say is that glyphosate use selects for plants that are able to develop resistance to glyphosate. Natural (or artificial) selection does not by itself equal evolution: the end result of evolution is an entirely different species from its ancestor, while selection among many varieties is the mode of action. This is a basic tenet of the biological process, one the authors should be able to grasp; it is alarming that they confuse these two concepts.

I am skeptical of this statement in particular: “By comparing herbicide tolerant (“Roundup Ready”) GM soybeans directly from farmers’ fields, with extended references to both conventional, i.e. non-GM soybeans cultivated under a conventional “chemical” cultivation regime (pre-plant herbicides and pesticides used), and organic, i.e. non-GM soybeans cultivated under a “no chemical” cultivation regime (no herbicides or pesticides used), a test of real-life samples ‘ready-to-market’ can be performed.”

I do not know whether the authors are unfamiliar with the multitude of organic pesticides available for use in the US and therefore did not ask the farmers, or whether the farmers genuinely used no chemicals to treat weeds or pests. I find it highly unusual for a commercial organic operation to use no organic chemical controls at all. I would have liked to see what type of organic regime the organic fields were under, and a confirmation in the paper that the authors asked the farmers which regime they were using. There are an untold number of management practices growers can use in any combination, not to mention environmental variables that can drastically alter the crop being grown. We need to know those variables so we can properly assess and contextualize the data collected. It is unfortunate that we are not given that context.

I disagree with their ad hoc assertion that testing soybeans straight from the fields is an accurate example of “ready-to-market samples”. By limiting their research to a handful of operations instead of running controlled trials designed to span the full spectrum of grower operations, the authors fail to capture the true range of glyphosate levels in soybeans. This type of experiment has no proper controls for variety, soil type, grower regime, and so on. To be more succinct: the lack of controls, the unreasonably low number of samples per group, and the lack of data on sample variables lead me to believe that their conclusions cannot have any statistical merit.

EDIT: An observation from The Physics Police has shed light on why the ANOVA test was entirely inappropriate for this study. ANOVA requires independence, normality, and equal variances. Independence is difficult to establish in many field trials, and the authors never state that they ran any of their 35 variables, for any group of samples, through tests for normality or equal variances. The attributes must demonstrate normality and equal variances for ANOVA to be valid, yet there is no mention of any such tests in the paper.
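For readers unfamiliar with these assumptions, here is a minimal sketch of what such checks typically look like before a one-way ANOVA. This is my own illustration, not the authors' analysis; the values below are invented placeholders, not the paper's data.

```python
# Sketch of pre-ANOVA assumption checks: Shapiro-Wilk for normality within each
# group and Levene's test for equal variances across groups. Data are made up.
import numpy as np
from scipy import stats

# Hypothetical measurements of one compositional variable for three groups
gm = np.array([11.2, 9.8, 13.1, 10.4, 12.0, 11.5, 10.9, 12.7, 9.5, 11.8])
conventional = np.array([10.1, 10.7, 9.9, 11.3, 10.5, 9.6, 10.8, 11.0, 10.2, 10.6])
organic = np.array([10.9, 11.4, 12.2, 10.1, 11.8, 12.5, 10.7, 11.1, 12.0, 11.6])

groups = {"GM": gm, "conventional": conventional, "organic": organic}

# Normality within each group (Shapiro-Wilk)
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# Homogeneity of variances across groups (Levene's test)
w, p = stats.levene(gm, conventional, organic)
print(f"Levene: W={w:.3f}, p={p:.3f}")

# Only if both assumptions look reasonable does a one-way ANOVA make sense
f, p = stats.f_oneway(gm, conventional, organic)
print(f"One-way ANOVA: F={f:.3f}, p={p:.3f}")
```

If either the normality or the equal-variance check flags a violation, a non-parametric alternative such as the Kruskal-Wallis test is usually the safer choice; the paper reports neither the checks nor any such alternative.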

The authors claim that there are no pesticide monitoring programs in Canada, the EU, and the USA. This is a blatantly false statement, at least for the USA: a pesticide residue monitoring program has been run by the FDA, in cooperation with the EPA and the USDA, for decades. The authors then criticize the USA, Canada, and the EU for not having these (existing) programs; however, they never specify what such programs should be looking for. Use? Concentration? Weed resistance? Residue? We don't know, and the failure to supply a more detailed picture makes this tactic feel like a dirty trick: the authors falsely claim there are no pesticide monitoring programs and then fail to put forth an outline of what those programs should look like.

For the sake of experimental design, the authors used the organic soy group as the control. I do not find this acceptable. A control is a baseline with all known variables identified and measured, so that you can compare your findings with the least amount of error and influence introduced by those variables. To return to the earlier point about statistical integrity, a control also provides a distribution against which to test for normality and equal variances. No control means no way to show that your data have equal variances or normal distributions.

The authors want to prove that GM soy is not as nutritious a product as organic soy, but instead of running greenhouse trials in which they could control variables such as the application of pesticides, the concentration of minerals in the soil, the specific soy variety, the planting period, and the environmental conditions, they chose uncontrolled environments. By introducing unknown and unquantifiable variables into the experiment, the authors give up certainty in their results.

The authors do not provide any data on date of harvest or on what was planted in previous years. They describe no effort to standardize sample collection within a field, or the time of collection after pesticide spraying. None of these data are provided in the paper. What about flood-plain status for the farms? What is the soil composition? What was planted the year before: CRP, corn, beets? All of this information is crucial to the nutritional content of the crop grown and will affect the quality of the product.

The following excerpts from the paper highlight the largest problem with the study: the experimental design is deeply flawed, and any data collected will be statistically useless. “Since different varieties of soy (different genetic backgrounds) from different fields (environments) grown using different agricultural practices were analyzed, we need to acknowledge that variation in composition will come from all three of these sources.” This is precisely the purpose of a control. To reiterate, the authors would not have needed to make this statement if they had added a greenhouse control trial group.

“However, since 13 samples out of the 31 had at least one ‘sibling’ (same variety) to compare both within and across the different agricultural practices, how the same variety ‘performed’ (i.e. its nutritional and elemental composition) between different environments and agricultural practices could be compared. [] The ten samples of conventional soybeans were of four different varieties: The GM samples were from 8-9 different varieties. The organic samples consisted of nine different varieties. The conventional and organic varieties overlapped in the use of “Legend 2375” (n=3 conventional and n=1 organic sample). There was no overlap in varieties between the GM and either the conventional or organic varieties.”

The number of samples collected is perhaps the single largest problem with this experiment. The authors should have used n=30 for each group (GM soy, conventional soy, organic soy, and the missing greenhouse control group) in order to obtain meaningful statistical results. Instead, the number of samples collected was roughly a quarter of what it should have been. To add to this severe lapse in experimental design, there are not enough matching soy varieties between groups to conduct any meaningful statistical analysis: an n=4 is far too small a sample size to run any comparison with, and there is only one organic sample that can be compared with three conventional samples of the same variety of soybean. There is no way the authors could derive any worthwhile statistical meaning from the samples they collected; there are simply too few. A rough power calculation below illustrates the point.
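To make that concrete, here is an illustrative power calculation of my own. It assumes a simple two-group comparison and a "medium" standardized effect size (Cohen's d = 0.5); none of these numbers come from the paper.

```python
# Rough power sketch: samples per group needed to detect a medium effect with
# 80% power, versus the power actually available with ~10 samples per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required per-group n for d = 0.5, alpha = 0.05, power = 0.80
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Samples per group needed: {n_required:.0f}")   # roughly 64

# Power achieved with only 10 samples per group at the same effect size
power_at_10 = analysis.power(effect_size=0.5, nobs1=10, alpha=0.05)
print(f"Power with n=10 per group: {power_at_10:.2f}")  # roughly 0.18
```

Even under these fairly generous assumptions, ten samples per group gives well under a one-in-five chance of detecting a real medium-sized difference.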

What is up with these figures? Figure 1: what is the point of showing the organic and the conventional groups if no glyphosate was detected in either, and this finding has already been stated earlier in the paper? It wastes valuable space and makes the GM-soy group stand out like a sore thumb; it is visually misleading and holds no value. Also, there looks to be a lot of variation in the AMPA and glyphosate detected, but we are given no context in which to interpret it. I would like to see confidence intervals, and data from other studies to compare those numbers against. Additionally, with an n this small and without the relevant data on the farming practices used on each individual sample, we cannot make any inferences as to why we see such large variation, or even about the levels of glyphosate in the samples.
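For illustration, this is the kind of uncertainty estimate the figure omits: a simple t-based 95% confidence interval around a group mean. This is a minimal sketch with made-up residue values, not the paper's measurements.

```python
# Sketch: 95% confidence interval for a small-n group mean (placeholder data).
import numpy as np
from scipy import stats

residues = np.array([2.1, 4.8, 1.7, 3.9, 5.2, 2.6, 3.3, 4.1, 1.9, 3.6])  # hypothetical mg/kg

mean = residues.mean()
sem = stats.sem(residues)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(residues) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f} mg/kg, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```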

I really don’t understand Figure 2. What are the 35 variables they have standardized, what does “height” mean, and in what unit is it measured? They measured 32 variables in Table 2 and four types of sugars plus fiber in Table 3, but, at the risk of being nit-picky, I want to see an explicit list of those variables to reduce confusion. All I see is a cluster dendrogram with a whole bunch of names and no guidance on how to interpret it or assign significance.
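For context, a figure like this is usually produced roughly as follows: z-score each variable, then hierarchically cluster the samples on the standardized values. This is my assumption about the generic recipe, not a reconstruction of the authors' actual method, and the data below are random placeholders.

```python
# Generic sketch of building a cluster dendrogram from standardized variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.stats import zscore

rng = np.random.default_rng(0)
samples = rng.normal(size=(31, 35))      # 31 samples x 35 variables, fake data
standardized = zscore(samples, axis=0)   # each variable scaled to mean 0, sd 1

# Ward linkage on Euclidean distances between samples; the "height" axis of the
# dendrogram is the linkage distance at which clusters merge.
tree = linkage(standardized, method="ward")
layout = dendrogram(tree, labels=[f"sample{i}" for i in range(31)], no_plot=True)
```

Even knowing the generic recipe, the reader still needs the variable list and the linkage method to interpret the "height" axis, which is exactly what the paper does not supply.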

Figure 3 is absolutely atrocious. What are the units of measurement on the different axes, and how exactly did they separate the groups? They don't tell us. Are there any variables that stand out as significant in separating the three growing methods? What are they? This figure is pointless because it assigns no significance, and, like the other figures presented, it has no legend!

Once again, the citation methodology in this article is unacceptable. In this excerpt, the first two assertions are cited to questionable studies, and the third is not supported by referenced evidence whatsoever. “The increased use of glyphosate on Roundup Ready soybeans in the US (Benbrook, 2012), contributing to selection of glyphosate-tolerant weeds (Shaner et al., 2012) with a response of increased doses and/or more applications used per season, may explain the plant tissue accumulation of glyphosate.”

Even their data interpretation is substandard. “Using this formula, the data set has on average ‘glyphosate equivalents’ of 11.9 mg/kg for the GM soybeans (max. 20.1 mg/kg).” Wait, so a data set of n=10 has a mean that is roughly 60% of the maximum level of ‘glyphosate equivalents’? Then what’s the big fuss about toxicity? I thought this was about nutrient equivalence.
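As a quick sanity check on those numbers, here is my own arithmetic. I am also assuming the common residue convention of glyphosate plus 1.5 × AMPA for "glyphosate equivalents"; the paper's exact formula should be verified against the original text, and the sample values are invented.

```python
# Back-of-the-envelope check of the quoted figures (not the authors' code).
def glyphosate_equivalents(glyphosate_mg_kg: float, ampa_mg_kg: float) -> float:
    """Combine glyphosate and AMPA residues into a single 'equivalents' value,
    assuming the common weighting factor of 1.5 for AMPA."""
    return glyphosate_mg_kg + 1.5 * ampa_mg_kg

# Hypothetical residues for one sample (not from the paper)
print(glyphosate_equivalents(3.0, 6.0))      # 12.0 mg/kg

# Ratio of the reported GM mean to the reported maximum
print(f"{11.9 / 20.1:.0%}")                  # ~59%
```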

The 4.3 Nutritional Components section of the discussion is full of unsubstantiated claims about soy and its role in a healthy diet. EDIT: the Observations article “Saturated Fat is not the Major Issue”, recently published in the British Medical Journal, highlights the changing data available on saturated fat. I follow the world of nutritional science fairly closely, and this topic is coming under considerable scrutiny and review right now, because assertions made decades ago about fats and their roles in a healthy diet were not fully supported by the evidence presented. What’s more, the authors provide no numbers to put into context how much the differences in nutritional content would affect a normal diet. There is a difference, but what is the impact? Is it something to get our panties bunched up over, or can we meet for drinks at the beach and not think about it again?

The final nail in the coffin is that this paper cites Séralini et al.; any paper that relies on a retracted article needs to come under immediate and greater scrutiny. Additionally, the Monsanto, 1999 citation is nothing but a broken link to an “internet communication” and not an actual study. As for undeclared conflicts of interest, the author J. Fagan has ties to the organization Earth Open Source, which is staunchly anti-GMO. GenØk, the research institute of lead author T. Bøhn, has been the subject of accusations of lies, fraud, and false information from Klaus Ammann, the respected chairman of the EFB Section on Biodiversity at the University of Bern, Switzerland. The cited Benbrook, 2012 paper is by Charles Benbrook, who is not a research scientist and whose own studies have come under significant fire for not holding up to basic standards of experimental design. Overall, the authors are not independent scientists; they have agendas reflected in their other work and ties to biased organizations.

In conclusion, this is lazy science, shoddy writing, and a truly deplorable attempt to compare the nutritional quality of organic, conventional, and GM soy. This study has the feel of a group of people who wanted a certain outcome and designed their experiment in such a way that a hand-picked combination of variables would give them the results they were looking for. As we continue into this age of biotechnology, we need to keep abreast of the changes we see in food products, whether from new IPM strategies, new chemicals on the market, or new GM varieties available. These changes need to be tracked in controlled environments to better understand the mechanisms involved, so we can adjust accordingly to new data. They should not be tracked in uncontrolled environments with minuscule sample sizes in an attempt to appease the public.