According to the graphic above, the diseases phenylketonuria (PKU) and scurvy (vitamin C deficiency) couldn’t be more different. One, PKU, has a “highly genetic” etiology, whereas the other, scurvy, has an entirely behavioral/environmental cause. Both diseases nonetheless have the same mode of treatment: attention to one’s diet. Reasoning back from the “cure”, we might say both diseases are dietary diseases. However, another way of looking at it is that both diseases are caused by an enzymatic deficiency, and enzymes, as proteins, are specified by our genes. From that perspective, we might label both diseases “genetic”. In resolving these apparent paradoxes, we will also shed some light on why the nature/nurture debate is so thorny, and hopefully also dispel some errors in the way most people think about genetics, and errors in the way they think about diet.
The Cause of PKU
The reason PKU is placed on the far genetic end of the graphic is that its genetics are well-understood. PKU is an autosomal recessive disease – autosomal, meaning it is inherited via a non-sex chromosome, and so is equally likely to occur in males and females; recessive, meaning the disease-causing gene must be inherited from both mom and dad, making the disease relatively rare (for PKU, 1 in 12,000), and meaning it can occur in children whose parents show no signs of the disease themselves.
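That 1-in-12,000 figure also tells us something about symptomless carriers. A back-of-the-envelope sketch using the standard Hardy-Weinberg relationship (an idealized model; real allele frequencies vary by population):

```python
import math

# Hardy-Weinberg: if q is the frequency of the disease allele, the
# frequency of affected individuals (two bad copies) is q^2, and the
# frequency of carriers (one bad copy) is 2*p*q, where p = 1 - q.
prevalence = 1 / 12_000      # affected births (q^2)
q = math.sqrt(prevalence)    # disease allele frequency
p = 1 - q                    # normal allele frequency
carriers = 2 * p * q         # heterozygote (carrier) frequency

print(f"disease allele frequency ≈ {q:.4f}")          # ≈ 0.0091
print(f"carriers ≈ 1 in {1 / carriers:.0f} people")   # ≈ 1 in 55
```

So even though only 1 in 12,000 babies has PKU, roughly 1 person in 55 silently carries one copy of the broken gene – which is why the disease can appear in children whose parents show no signs of it.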
As I’ve written before, genes are segments of DNA that provide the recipe for making proteins from their chemical building blocks, amino acids. The gene that is disrupted in sufferers of PKU is called the PAH gene. This gene specifies the recipe for making the enzyme phenylalanine hydroxylase. Babies born with PKU essentially have a bad recipe for making this enzyme, such that the enzyme just doesn’t work the way it’s supposed to – the way it does in the vast majority of people without PKU.
So what does phenylalanine hydroxylase do normally? It converts the amino acid phenylalanine into tyrosine:
In sufferers of PKU, who lack a functional phenylalanine hydroxylase, this reaction doesn’t happen. This is bad – very bad.
Why? Well let’s back up a bit. We’ve already established that genes are recipes for making proteins. We often hear things like “our genes define who we are.” Well – if all genes do is allow us to make proteins, then it must be equally true that our proteins define who we are. Yes, some gene specifies our eye color, some other gene our blood type – but the color of our eyes is determined by the proteins we make, and our blood type is named for a protein sticking through the membranes of our blood cells. Some proteins define differences between people, like eye color or blood type; others determine the chemistry of our body – and whether we live or die – making them absolutely essential for life.
When and how do we make proteins? When is all the time. We are constantly making proteins in every one of the 30 trillion cells in our body. We are massive protein factories. We never take a break from this activity until death. How we make them is that certain organelles in our cells interact with certain molecules in our cells to build our proteins one amino acid at a time. (A small protein like insulin has several dozen amino acids; larger proteins can have hundreds or thousands.) Again, genes specify the sequence of amino acids for a given protein. A mutation in a gene is a change in this code. The consequences of a mutation can be nil (if, for example, the change doesn’t alter which amino acid is specified), virtually nil (if a single amino acid is mis-specified but the substitution doesn’t change the shape or electrical charge of the protein enough to alter its function), moderate (if the protein’s function is compromised slightly), or severe (if the protein becomes non-functional, as in the case of PKU).
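These classes of outcome map directly onto single-letter changes in a codon. A toy sketch, using a hypothetical four-entry excerpt of the standard genetic code (not the actual PAH sequence):

```python
# A few entries from the standard genetic code (DNA codons -> amino acids).
CODONS = {
    "GAA": "Glu", "GAG": "Glu",   # two different codons, same amino acid
    "GTA": "Val",
    "TAA": "STOP",
}

def classify(before: str, after: str) -> str:
    """Classify a single-codon change by its effect on the protein."""
    a, b = CODONS[before], CODONS[after]
    if a == b:
        return "silent (same amino acid, no effect)"
    if b == "STOP":
        return "nonsense (truncated, likely non-functional protein)"
    return "missense (different amino acid; effect depends on the change)"

print(classify("GAA", "GAG"))  # silent: GAA and GAG both encode Glu
print(classify("GAA", "GTA"))  # missense: Glu -> Val
print(classify("GAA", "TAA"))  # nonsense: Glu -> premature STOP
```

A single DNA letter can thus change nothing at all, swap one amino acid, or cut the protein short entirely – the same spectrum from “nil” to “severe” described above.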
In any event, having the right recipe to make a protein is only half the battle. After all, if you have the right recipe to bake a cake, that’s not going to help you if you don’t have eggs and milk and flour in your kitchen. Likewise, having the recipe for making a protein is one thing – having the right amino acids is another.
We use about 20 amino acids to make all of our proteins. Of these 20, nine are considered essential amino acids – we must get them from our diet, or we die. One of these is phenylalanine.
The remaining 11 are not considered essential, because we can make them ourselves – if we have the right enzymes to do so, and if we have the right ingredients to do so.
One very important nonessential amino acid is tyrosine. The enzyme we need to make tyrosine is phenylalanine hydroxylase, as shown in the graphic above.
So we’ve identified the first problem that PKU causes: without a functioning phenylalanine hydroxylase enzyme, we can’t make tyrosine – and so a nonessential amino acid suddenly becomes an essential amino acid – we must get it in our diet. Without sufficient quantities of tyrosine, we can’t make dopamine, norepinephrine, or adrenaline, to say nothing of the hundreds of proteins requiring this amino acid in their recipes.
But the situation is even more dire than this. If this were just a matter of eating more tyrosine, PKU probably wouldn’t be so devastating. But the lack of the enzyme affects both sides of the chemical reaction shown in the graphic: not only does a PKU sufferer produce no tyrosine (the right side of the reaction), but the PKU sufferer will also build up high concentrations of phenylalanine (the left side of the reaction). This has several effects, including creating stress on the kidney to eliminate the excess.
More devastatingly, high phenylalanine levels disrupt the chemistry of the brain. The brain is protected by a blood-brain barrier that limits access to the brain by large molecules, presumably to keep toxins from affecting the nervous system. But this means that there has to be a way to allow needed large molecules access to the nervous system, and this is accomplished by what are called transport molecules. (These are proteins, by the way. Again proteins.) One such transport molecule is responsible for large, neutrally charged amino acids. Think of this molecule like a single-file tunnel that works on a first-come, first-served basis. The problem is, when there are excessively high levels of phenylalanine, almost every molecule that lines up for entry to the brain through this tunnel is phenylalanine – leading to low levels of valine, isoleucine, tyrosine, and other amino acids in the brain.
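The first-come, first-served tunnel can be sketched as a simple competition model, in which each amino acid’s share of transport is proportional to its blood concentration. This is a deliberately crude model with made-up concentration numbers, just to show the crowding-out effect:

```python
def transport_shares(blood_levels: dict[str, float]) -> dict[str, float]:
    """Fraction of transporter capacity each amino acid captures,
    assuming each share is proportional to blood concentration."""
    total = sum(blood_levels.values())
    return {aa: level / total for aa, level in blood_levels.items()}

# Hypothetical relative blood levels (arbitrary units, illustration only).
normal = {"phenylalanine": 1.0, "tyrosine": 1.0, "valine": 2.0, "isoleucine": 1.0}
pku    = {"phenylalanine": 20.0, "tyrosine": 0.2, "valine": 2.0, "isoleucine": 1.0}

for label, levels in [("normal", normal), ("PKU", pku)]:
    shares = transport_shares(levels)
    print(label, {aa: round(s, 2) for aa, s in shares.items()})
# In the PKU case, phenylalanine captures the overwhelming majority of
# the tunnel, starving the brain of the other amino acids.
```

With these made-up numbers, phenylalanine grabs roughly 86% of the transporter’s capacity in the PKU case – the brain goes hungry for everything else even though those amino acids are present in the blood.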
The result is devastating – small head size, severe intellectual delays, behavioral problems, depression, and reduced life expectancy.
How does one fix PKU? Gene therapy might be nice – that is, stick some cells in the body containing the right gene for phenylalanine hydroxylase, and let those cells crank out the enzyme. This is being tried, but so far with limited success.
What does work is careful control of the diet. All of the problems caused by this single gene are the result of too much phenylalanine and too little tyrosine. A diet low in the former and high in the latter can completely eliminate the symptoms of this disorder. This is, unfortunately, a pretty stringent diet, as many proteins in the food we eat contain phenylalanine. (When we eat the proteins of other species – beef, pork, chicken, rice, beans, corn – we break the proteins down into their amino acids prior to absorbing them. We then use these amino acids to make our own, human proteins. Think of amino acids like legos – we can destroy the spaceship our brother made of legos, rearrange those pieces, and build our own dune buggy.)
This stringent diet has to start right away – in fact, it’s especially crucial during development. For this reason, most babies have blood drawn within a few hours of birth to test for a small number of problems for which early diagnosis is crucial – and PKU is one of those problems.
With early diagnosis and strict adherence, an entirely genetically caused disease (see the figure at the top of this post) is completely controllable using an entirely environmental/behavioral therapy.
The Cause of Scurvy
Scurvy, on the other hand, is listed on the extreme environment – nongenetic – portion of the figure. This is completely justifiable – and yet, just to show how thorny the nature/nurture debate is, I will also show how it would be possible to label scurvy as just as genetic as PKU.
Scurvy was virtually unknown until the age of exploration. In the early days of ocean voyages, the diet of the sailors (much more so than the officers) was relatively limited, and these voyages might last many weeks. Magellan’s years-long circumnavigation of the globe started with a crew of 237 and arrived with a crew of 18 – and the loss of men was probably mostly due to scurvy. (Magellan himself was impaled by a bamboo spear in the Philippines.) During the Seven Years War (in the mid 1700s) between the British and the French, a few hundred British seamen died from combat, and at least 60 times that number from scurvy.
Scurvy is a dietary deficiency of vitamin C. The word vitamin is a bit of a misnomer deriving from “vital amine”. We now recognize many vitamins that are not in the chemical class of an amine, and in fact, vitamin C is one such vitamin. Its chemical name is ascorbic acid, and it is a sugar acid. But the “vital” part of the word vitamin does retain its accuracy – vitamins are small molecules that are absolutely required for life, usually in very small amounts, which must be obtained from the diet. Of the vitamins, we require vitamin C in the greatest quantity, though still on the order of a few dozen milligrams per day.
Lack of vitamin C – scurvy – leads to fatigue and soreness, and then progresses to difficulty breathing (due to loss of red blood cells), bruising, bleeding, loss of teeth, and all sorts of other nasty symptoms as the connective tissues of the body slowly degenerate without repair, as vitamin C is necessary in the formation of collagen, a key component of connective tissue.
Because so little vitamin C is needed in the diet, scurvy can be rapidly corrected by eating foods containing vitamin C. Citrus fruits are, of course, excellent sources, and the British Naval habit of carrying limes on board for sailors to eat to combat scurvy led to the nickname “limeys”, which was first applied to British seamen and later to British people generally. It is probably not an overstatement to attribute to limes the lion’s share of the credit for the formation and maintenance of the British empire, so devastating was scurvy to the maintenance of a strong Navy.
The rapid amelioration of scurvy by diet explains its position as a “completely environmental” disease on our initial figure. How then can I attempt to justify scurvy as a genetic disease?
Remember, what makes a vitamin a vitamin is that it must be obtained from the diet or death will inevitably result. Nutrition labels on our foods typically display the level of vitamin C per serving. But now check the nutrition label on your dog food or cat food. You probably won’t find vitamin C listed there, though you may find vitamin A, the B vitamins, and vitamin E. Why? Because dogs and cats don’t need vitamin C from their diet. Neither do rabbits, rats, mice, or lemurs.
Now, all of these species need to make collagen, and amides, and other things vitamin C is used for. And these species do use vitamin C to do the job. But unlike humans, dogs, cats, rats, and lemurs can make their own vitamin C from simple sugars. (Vitamin C is, after all, just a small sugar acid, requiring a simple chemical reaction to synthesize.) Again, molecular synthesis typically requires the right enzyme and the right building blocks. Humans have the right building blocks – we eat plenty of sugar – but we lack something that dogs, cats, rats, and lemurs have: the enzyme L-gulonolactone oxidase. If we had it, we’d make our own vitamin C.
The image to the right shows a partial evolutionary family tree. Species connected with a thick, black line have a functional gene for L-gulonolactone oxidase. Species connected with a thick, gray line have a nonfunctional copy of the gene. From this perspective, scurvy is actually a genetic disease – in the same way PKU is – it’s just a genetic disease that’s inherited by every human on the planet, rather than by 1 in 12,000.
A Tale Of Two Diseases
Diseases that can be traced to a single, nonfunctional gene are pretty rare, which makes PKU something of a textbook example of the role of genetics in physiology. Diseases that can be traced to the lack of a single nutrient are also rare, which makes scurvy something of a textbook example of the role of diet in physiology. But if you dig a little deeper, these extremes start to disappear – PKU is fully treatable by diet (though it does take considerable effort), and scurvy, in principle, could be fully treatable by gene therapy by providing humans with the L-gulonolactone oxidase gene from a cat or a squirrel.
We are often bombarded with information about the role of genetics or diet in disease – information that concerns the less extreme examples. We are told that scientists have found a gene associated with schizophrenia, or depression, or leukemia, or diabetes. We are advised that consuming probiotics, or antioxidants, or vitamin C, or plant protein could keep us healthy, or that too much salt, or cholesterol, or sugar, will make us unhealthy. Some people take extreme lessons from this deluge of information – maybe that certain problems are inevitable (“it’s in my genes”) or that other problems are easily solved (“just buy this supplement and avoid that food”).
But when the extreme cases are so plastic – when a genetic disorder is cured by diet and a dietary disorder is caused by lack of a gene common in the animal kingdom – how can we possibly take simple lessons about diseases we know to be a mixture of multiple genetic contributions and multiple environmental factors? The involvement of a gene in a disease does not imply inevitability, but it does represent an exciting tool for unlocking the contributions of proteins to physiology – and once discovered, such a gene may lead to genetic, pharmacological, or environmental therapeutic approaches.
Psychologist Paul Rozin and his colleagues asked a fabulous question in a 1996 study investigating people’s attitudes about food:
“Assume you are alone on a desert island for one year. You can have water, oranges that grow on the trees, and one other food. Pick the food that you think would be best for your health (never mind what food you would like). Which of these foods would you pick:
Corn…Alfalfa sprouts…Hot dogs…Spinach…Peaches…Bananas…Milk chocolate”
Desert island questions are always fun, but they usually focus on recreational items. One CD. One movie. One book.
Here is a much more consequential question. One food – from a rather small list of choices – that you’d have to eat every day (along with water and oranges) in the hopes that it would keep you alive for 12 months.
What would you pick?
If you were like 39% of the 124 students surveyed, you picked spinach. Also popular were bananas (24%).
My preferred option, hot dogs, was selected by 17% of the respondents.
The authors of the study argued that only two of the foods stood a reasonable chance of keeping someone alive for 12 months – hot dogs and milk chocolate. These are more complete foods than the others on the list – containing reasonable quantities of all three macronutrients (fats, carbohydrates, and protein) as well as a healthy supply of minerals. In the case of hot dogs, in fact, few nutritional requirements go unmet other than vitamin A and vitamin C, which accounts for why the question specifies that oranges are also available. (In a previous version of the survey, oranges were lacking, making survival unlikely with any option. Still, you’d have lasted longest on the hot dogs or milk chocolate even then, although only 10% of respondents selected one of those two options in the first version of the survey.)
Of course, hot dogs and milk chocolate are usually considered unhealthy foods, whereas peaches, spinach, sprouts, bananas, and corn are considered healthy foods. The idea of eating an unhealthy food for 12 months was apparently unthinkable for 79% of the respondents (hot dogs + milk chocolate), even though these were the most complete foods, nutritionally. When all you have is one food source, your best bet is to go with the more complete food source.
In general, of course, fruits and vegetables are considered to be healthy foods, and the perception is that meats are unhealthy. But we are animals – we need, for the most part, the same things that other animals need – and so by eating them, by eating meat, we have a better chance of consuming all that we need in one sitting. Even the chocolate, which I hadn’t considered a viable option, contains fats and proteins from milk, an animal product, and thus is the second-best option on the list.
None of this is to imply that eating a year’s worth of hot dogs is a great idea when other options exist, and even in the original question there is the necessity of oranges. But the failure of most of the respondents to consider hot dogs or milk chocolate does speak to our black and white thinking about food.
This was made plain in another part of Rozin’s questionnaire. Here, he asked people to decide whether a diet lacking salt is healthier than a diet containing a teaspoon of salt each day. Fully 51% of the respondents said the salt-free diet was healthier (and another 18% considered these equally healthy options). For context (not provided to the respondents, of course), a teaspoon of salt is right around the recommended daily amount. By contrast, going completely salt free will kill you in about a month, and you’ll feel truly miserable after a week.
A second parallel question asked about a no-fat diet vs. a diet in which you consumed the equivalent of 1 teaspoon of butter a day. As with salt, 49% agreed that the fat-free diet was healthier, and 18% said it was a wash. This is probably an even more surprising result than the salt question, because although you won’t die as quickly from a lack of fat, a teaspoon of butter contains only 6% of the recommended daily fat intake, and only 10% of the recommended limit for saturated fat. Furthermore, when asked to compare a no-fat diet to a diet with the equivalent of 5 teaspoons of butter, which still doesn’t reach recommended levels, fully 79% said the fat-free diet was healthier.
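The arithmetic behind those comparisons is easy to check. A sketch using typical figures (roughly 6 g of salt per teaspoon, a ~2,300 mg daily sodium guideline, roughly 4.7 g of butter per teaspoon at about 80% fat, and a ~65 g daily fat guideline – all approximations):

```python
# One teaspoon of salt vs. the recommended daily sodium intake.
salt_tsp_g = 6.0                  # grams of salt in a teaspoon (approx.)
sodium_fraction = 23.0 / 58.44    # sodium's share of NaCl, by molar mass
sodium_mg = salt_tsp_g * sodium_fraction * 1000
print(f"teaspoon of salt ≈ {sodium_mg:.0f} mg sodium")  # vs ~2300 mg guideline

# One teaspoon of butter vs. a ~65 g daily fat guideline.
butter_tsp_g = 4.7                # grams of butter in a teaspoon (approx.)
fat_g = butter_tsp_g * 0.80       # butter is roughly 80% fat
print(f"teaspoon of butter ≈ {fat_g:.1f} g fat, "
      f"about {100 * fat_g / 65:.0f}% of a 65 g daily guideline")
```

A teaspoon of salt comes out to roughly 2,360 mg of sodium – right at the recommended daily amount – while a teaspoon of butter supplies only about 6% of a day’s fat. The respondents were rejecting amounts that are, respectively, exactly appropriate and nearly negligible.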
Again, salt and butter are seen as unhealthy foods. But this categorization omits a very important detail: foods are (for the most part) neither good nor bad; what matters is dose. As I’ve written about before, sodium is an essential micronutrient that your body cannot make or store. Since you lose sodium in sweat, urine, and feces, you simply have to replace it on a regular basis. Life does not go on without it. Furthermore, it is so critical to normal functioning that we have evolved mechanisms to regulate sodium levels (so-called hydromineral balance) so that we can tolerate excess sodium relatively easily. It is not until levels get well above physiological norms that problems arise.
Fat is less well understood. The primary reason to consume fats is for energy, but we can also get energy from other sources (carbohydrates and protein). But fats also make up key structures in our bodies, including all cell membranes, and thus fats are broken down and reformed into structures that we need to grow and to maintain our tissues. Fatty foods also contain fat-soluble vitamins, like vitamin E, that are necessary for life and which are difficult (impossible?) to obtain without consuming some fat. The icing on the cake is that some people believe that fat never deserved its negative reputation at all – though I always tend to favor the notion of moderation until controversies are settled.
Whereas some foods have a sinful reputation – like fats, salt, sugar, and carbs – others have a “health halo” – they are seen as good in all contexts. I was stunned to see that, in another part of the study, 21% of respondents agreed with the statement “A person cannot eat too many vitamins” and another 8% neither agreed nor disagreed. Likewise, 19% agreed (and another 18% did not disagree) that “A diet cannot have too much protein in it”. Some vitamins are quite dangerous in excess, and high protein consumption, though rare, can lead to kidney problems. As with “bad” foods that can be fine or even necessary at low doses, “good” foods can be harmful or even deadly at high doses.
We see the same kind of thinking with non-food items, such as medication. Recreational sports players will take Advil on a regular basis, before, during, and after pain, operating under the false belief that anything sold over the counter is always safe. Students hand out their ADHD medication as a substitute for a cup of coffee, on the grounds that their doctor wouldn’t prescribe (and their mother wouldn’t let them take) anything that could be dangerous. Activists exploit the success of marijuana in alleviating pain and nausea in cancer patients to drum up support for legalization of marijuana for recreational use, on the grounds that because it is helpful in one context, it is safe in all contexts.
It doesn’t help that we live in an environment saturated with a sensationalizing media. When you hear reports that this or that is linked to cancer, or heart disease, or diabetes, you never – really very close to never – hear anything about the important information: how much? How much of a certain food leads to how much of an increased risk? They don’t tell us, and we don’t ask. In a confusing and complicated environment in which we are bombarded with scary information at every turn, people fall back on black and white categories. It’s a shame, because as the political scientist Aaron Wildavsky once wrote: “The richest, longest lived, best protected, most resourceful civilization, with the highest degree of insight into its own technology, is on its way to becoming the most frightened.”
Frightened even of sodium and fat, two things you would die without. Frightened of hot dogs – even when they’re the only thing that can save you!
The Rozin article is:
Rozin, P., Ashmore, M., & Markwith, M. (1996). Lay American conceptions of nutrition: Dose insensitivity, categorical thinking, contagion, and the monotonic mind. Health Psychology, 15(6), 438-447.
The Wildavsky quote is in:
Wildavsky, A. (1979). No risk is the highest risk of all. American Scientist, 67, 32-37.
I got temporarily excited during Hillary Clinton’s nomination acceptance speech at the Democratic National Convention. After boisterous applause for a comment slamming Wall Street, buoyed by the enthusiasm of the arena, she shouted:
I believe in science!
This unleashed another round of applause from the crowd, and I have to admit, my heart swelled. (To borrow a Hillary phrase, prompting my wife to deadpan, “She should take something for that.”) A politician had just proclaimed her trust in the scientific method, and an arena full of people from all over America responded with approval. I had just enough time to raise my hopes for the next, oh I don’t know, 3 minutes of the speech? 2 minutes? 45 seconds? She continued:
I believe that climate change is real and that we can save our planet while creating millions of good-paying clean energy jobs.
And then… back to immigration. Science got one sentence, although note that even that one sentence had to share space with Joe Biden’s “three letter word: J-O-B-S jobs!”
This post isn’t about the scientific evidence for climate change or the merits of various public policy positions to combat it. What bothered me about that passing moment in Hillary’s speech is that for many politicians, climate change is the only scientific issue of our day. Worse, it has become a litmus test for politicians and for the general public. If you believe in human-caused global warming, you are pro-science. If you disbelieve, you are a knuckle-dragger. And so by boldly proclaiming her appraisal that science has proven that the climate is warming due to industrial activity, Hillary and her supporters can pat themselves on the back and move on in their sanctimony.
Here’s what I believe is great about science. Science is a system that forces you to weigh evidence and to accept that evidence even when it conflicts with your preconceived notions. That, in a sentence, is what science is – why it is good, why it is sorely needed. Understand, this is not to say that all scientists practice this ideal, and it is not to say there aren’t considerable problems in the day to day practice of science. But over the long haul, it is science – certainly not a particular brand of politics – that deserves the label “progressive”. Bad ideas are weeded out, and those with the best evidence survive.
So for me, you don’t demonstrate your scientific bona fides by taking one particular position. You do so if you favor evidence over your preconceived notions.
It takes no courage for a Democrat to stand before other Democrats and remind us that the scientific consensus is that human activity is warming the planet. That’s a softball in that environment. What would have demonstrated real courage would have been if Hillary Clinton then went on with my hoped-for 2 or 3 minutes:
“And by clean energy” – riotous applause – “I include nuclear power, the most efficient carbon-free energy source we already have the technology to use!” Silence. (Not to inject politics here, but wasn’t the “Iran Deal” all about keeping the Iranians from making nuclear weapons but allowing them to pursue “peaceful” nuclear technology for power generation? Why do Democrats think it’s okay for the Iranians to develop nuclear power but favor inefficient wind farms and solar fields to nuclear power here at home?)
Although you can now hear a pin drop, I imagine Hillary continuing. “I believe also in the science that demonstrates that transgenic crop technology is not only safe, but actually increases yields, decreases the need for new farmland, lowers carbon emissions, and is safer for the environment!” The camera now zooms in on Bernie Sanders, squirming in his seat. She seems to be boring a hole through his chest as she continues, “We will oppose unnecessary GMO labeling laws, recognizing that such regulations would decrease consumer choice, favor large corporations, increase the price of food, and demonize a promising technology!”
Personalize it, Hillary. “When I was Secretary of State, I traveled to some of the poorest countries on Earth. I saw the faces of young children, blinded by Vitamin A deficiency, and met mothers who had buried their children far too young. As President, I will stand up to anti-science crusaders like Greenpeace to ensure that technologies like golden rice become available to all those in need!” Several spectators walk to the exits. A gentleman in the front row with a No GMO hat faints.
Keep harping on your record, Hillary. “I have made a career fighting for access to health care, especially for young children. My administration will continue to do so, placing special emphasis on ensuring that all children have access to life-saving vaccines. I strongly rebuke Robert Kennedy, Jr.’s nefarious demonization of vaccines, and I part with our current President and my opponent Donald Trump in that I unequivocally deny the fraudulent vaccine-autism link!” The Massachusetts and California delegations suddenly become dizzy.
Show your personal growth, Hillary. “And speaking of health care issues, let me also clarify my position on so-called complementary and alternative medicine. Although I was previously sympathetic to this quackery, having learned more, I now recognize that this is one of the major ways in which our nation squanders precious health care resources. I no longer consider Dr. Mark Hyman an advisor on these issues.”
“Instead I will support evidence-based biomedical research. My administration will pursue bipartisan increases to biomedical research funding, which we recognize requires the use of animal models.” Delegates that give time and money to PETA and the Humane Society of the United States break out in a cold sweat.
“My administration will also fully support NASA and exploration of the universe through telescopic observation. I challenge the delegates of the great state of Hawaii to overcome unscientific superstition and support bringing cutting edge research to the Big Island.” The Hawaiian delegation heads for the exit.
“And speaking of unscientific superstition, let me make amends for my earlier embarrassing comments about aliens having visited Earth. When I made those comments I was uneducated not only about the enormous distances between stars and the impossibility of traveling at speeds approaching the speed of light, but also about the psychological science of how false beliefs are easily formed. Saying there couldn’t be so many stories of UFOs unless they were real was a too-credulous comment on my part, and one that I regret.”
Suddenly the entire roomful of delegates – ones that had lustily applauded science belief when the topic was first broached in a fit of self-congratulation – are themselves experiencing regret. Possibly at their own scientific ignorance, but more likely at having nominated a woman who believes not only in the science consistent with their preconceived notions, but even in the science that does not. How audacious!
There’s a funny Seinfeld episode in which George Costanza has been experiencing some good fortune, but then becomes worried about a spot on his lip that might be cancer. He is discussing this fear with a therapist. He says: “God would never let me be successful. He’d kill me first. He’d never let me be happy.” His therapist replies: “I thought you didn’t believe in God?” He answers: “I do for the bad things.” Take a look at some common left leaning views on climate change, nuclear power, transgenic crops, alien visitations, vivisection. One wonders if many at the Democratic National Convention do believe in science – but only for the bad things.
Don’t get me wrong, I’m not making any political points in this reaction. Donald Trump didn’t do any better in defending science or in promising to make decisions as President with science in mind. But preaching to the choir is easy. Leading – which sometimes means taking your followers where they don’t really want to go – is hard. I suppose it is nice that Hillary Clinton wants to be known as a pro-science leader, but if she really wants to be one, she has to adopt a scientific worldview that favors carefully collected evidence over preconceived notions. To conclude she is pro-science, well – I need to see more evidence.
Every once in awhile, it is worth repeating the Isaac Asimov quote that inspired the title of this blog:
If knowledge can create problems, it is not through ignorance that we can solve them.
Yet ignorance breeds fear, and fear, apparently, breeds petitions over at change.org. Facebook served up this little gem for me today: a petition entitled Say No To Genetically Modified Mosquito Release In The Florida Keys, posted by Mila de Mier. Her goal of 200,000 signatures is nearly met, despite the fact that there are only 25,000 residents of Key West, and fewer than 100,000 residents in all of Monroe County.
It turns out this petition is quite old (4 years or so) and that the project she was attempting to block has apparently received approval and may be currently underway. But having experienced the pain of reading the petition, I can’t let it pass without comment. And even if it is old news, apparently it’s still out there, and so it should still be countered.
The rambling text of the petition certainly qualifies under the heading of ignorance, unless it qualifies as willful lying. Here it is, with commentary.
Right now, a British company named Oxitec is planning to release genetically modified mosquitoes into the fragile enviroment [sic] of the Florida Keys.
The environment of the Keys certainly qualifies as fragile, but it is hard to understand how mosquitoes could damage that fragile environment. Hurricanes, certainly. Another catastrophe with an offshore oil well, perhaps. Multiplication of the lionfish, maybe. But mosquitoes? Indeed, killing mosquitoes might normally require the widespread spraying of insecticide, which, depending on the insecticide in question, might indeed be a challenge to a fragile ecosystem. So shouldn’t someone worried about a fragile ecosystem be standing up and applauding Oxitec’s environmentally friendly mosquito solution?
The company wants to use the Florida Keys as a testing ground for these mutant bugs.
“Mutant bugs” certainly sound scary, especially if you glance up at that mutant bug image (which I borrowed from the petition itself) – it looks like a meth-crazed (or maybe tomacco-crazed), bloodthirsty killer, outfitted by science with superhuman (supermosquito) powers of destruction. But “mutants” are rarely more powerful than naturally-selected forms with hundreds of millions of years of evolutionary fine-tuning, and in this case, the mutant and its offspring are designed to harmlessly die.
Even though the local community in the Florida Keys has spoken — we even passed an ordinance demanding more testing — Oxitec is trying to use a loophole by applying to the FDA for an “animal bug” patent. This could mean these mutant mosquitoes could be released at any point against the wishes of locals and the scientific community. We need to make sure the FDA does not approve Oxitec’s patent.
Now I’m lost. The petition is supposed to be directed at Adam Putnam, Florida’s Commissioner of Agriculture. He doesn’t work for the FDA. In any event, the FDA has already issued a preliminary opinion that the project will have no significant impact on human or animal health, to say nothing of the fragile environment of Key West.
Nearly all experiments with genetically-modified crops have eventually resulted in unintended consequences: superweeds more resistant to herbicides, mutated and resistant insects also collateral damage to ecosystems.
This is an outright lie. Certainly, the so-called Roundup-Ready crops are more resistant to the herbicide Roundup, but crops aren’t weeds, and their resistance to Roundup does not produce resistance to any other herbicide. Yes, application of herbicide to crops will slowly select for weeds resistant to that herbicide, but that has nothing to do with genetic modification of the crops – it has to do with the schedule of herbicide application, which is equally true for conventional crops. More importantly, this sentence is a complete red herring – the genetically modified mosquito can’t produce superweeds. It’s a complete non sequitur. What exactly is Ms. de Mier (and 170,000 signatories) worried about?
A recent news story reported that the monarch butterfly population is down by half in areas where Roundup Ready GM crops are doused with ultra-high levels of herbicides that wipe out the monarch’s favorite milkweed plant.
I’d like to see this “news story” indeed. No one “douses” their crops with Roundup, be it conventional crop or Roundup Ready crop. And research shows that monarch butterfly populations are not limited by the availability of milkweed. Now, if monarch butterfly populations are declining, that is worthy of investigation. But nothing Oxitec is proposing to do with mosquitoes has anything to do with butterflies. Again, if you are worried about butterflies, you should be standing on the rooftops cheering Oxitec and the Florida Keys Mosquito Control District for attempting to eliminate mosquitoes without spraying insecticides that might affect beneficial insect populations.
What about our native species of Florida Keys Bats. Are there any studies being conducted to see if these mosquitoes will harm the native bat population? Why would we not expect GM (genetically modified) insects, especially those that bite humans, to have similar unintended negative consequences?
I’m trying to understand this, but it’s really hard. What similar negative consequences is she talking about? Because there are crops that are Roundup Ready and Roundup reduces milkweed and milkweed is necessary for monarch butterflies and mosquitoes are genetically modified and they bite humans… is the concern that we won’t be able to feed human babies milkweed any more? I’m lost!
But let’s come at this from a different angle. Oxitec wants to kill mosquitoes, and clearly Ms. de Mier is worried that if an Oxitec mosquito, doomed to die, bites a human, then the human might die as well – or at least suffer in some way. But this concern conveys complete ignorance about how the mosquitoes are modified. Admittedly, the technology is complex, but the biologists at Oxitec, and the FDA, and thousands of scientists worldwide do understand the technology, and do have justifiable confidence that Ms. de Mier’s fear is completely unfounded.
First, it is female mosquitoes that bite humans, but Oxitec only genetically modifies and releases male mosquitoes. Thus, even if the GM mosquitoes could pass on some toxin by biting (and they can’t – but even if they could), only non-biting mosquitoes are modified. But second, and far more importantly, the modification causes the mosquitoes to manufacture a protein that inhibits gene transcription. This protein quickly causes the cells of the mosquito to cease functioning. In fact, Oxitec had to build in the ability to turn off this gene prior to releasing these mosquitoes, or they wouldn’t even mature in the lab. Because the agent is a protein, it is completely harmless to any organism (such as a bat) that might eat the insect, as proteins are normally and thoroughly broken down into amino acids before being absorbed into the body. And even if Oxitec made a mistake and released a few modified female mosquitoes, and they happened to bite you, and the protein managed to accumulate in the fluids that enter the blood of the person bitten, the injected protein would be present in such minuscule amounts compared to the size of the human, or dog, or cat bitten that the effect on any cellular machinery would be too small to measure – if, indeed, it could have any effect at all, given that the protein is optimized to work on the gene transcription process of insects.
And by the way, if you are worried that a protein made by a GM mosquito might hurt a bat that eats it or a human bitten by it, shouldn’t you be just as worried for a bat eating or a human bitten by a mosquito doomed to die from an insecticide spray? After all, insecticides which are sprayed aren’t proteins and may have a much longer active lifespan. I’m not saying you should be worried about this either – I’m just pointing out that the fear of the technology is entirely due to ignorance, not knowledge.
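The logic of Oxitec’s approach – flood the area with doomed males so that wild females waste their matings on fathers whose offspring won’t survive – can be illustrated with a toy population model. To be clear, this is my own back-of-the-envelope sketch, not Oxitec’s actual design or numbers; the release ratio, growth rate, and random-mating assumption are all made up purely for illustration.

```python
# Toy model of population suppression via released self-limiting males.
# Assumptions (mine, for illustration only): wild females mate at random
# with wild or released GM males; any offspring fathered by a GM male
# inherits the self-limiting gene and dies before maturity.

def simulate(wild_pairs, gm_males_released, generations, growth=1.5):
    """Return the number of wild breeding pairs in each generation,
    given a fixed number of GM males released every generation."""
    history = [wild_pairs]
    for _ in range(generations):
        wild_males = wild_pairs  # assume one wild male per breeding pair
        total_males = wild_males + gm_males_released
        # fraction of matings with a wild father, i.e. with viable offspring
        viable_fraction = wild_males / total_males
        wild_pairs = wild_pairs * growth * viable_fraction
        history.append(wild_pairs)
    return history

pop = simulate(wild_pairs=10_000, gm_males_released=50_000, generations=8)
# With a 5:1 release ratio, viable matings become rare and the wild
# population collapses even though its intrinsic growth rate is 1.5x.
print([round(p) for p in pop])
```

The point of the sketch is simply that nothing in the scheme requires a toxin or a bite: the suppression comes entirely from wasted matings, which is why the released insects can be harmless to anything that eats them.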
Will the more virulent Asian tiger mosquito that also carries dengue fill the void left by reductions in A. aegypti?
As far as I can tell, the Asian tiger mosquito is active during more of the day, and therefore might be more likely to bite when people are out and about. The more bites, the more virulent (capable of causing harm). This is, so far, the one bit of cleverness in Ms. de Mier’s plea. Of course, if the tiger mosquito moves in when the aegypti mosquito dies out thanks to Oxitec, then we would expect to see rises in disease rates. Will Ms. de Mier then greet, with relief and celebration, the dramatic reduction in dengue fever cases in areas where the Oxitec mosquitoes have been released?
Will the dengue virus mutate (think antibiotic resistant MRSA) and become even more dangerous?
Well, after making one halfway decent suggestion, we return to kooky town. If you are worried about a virus mutating, then you should reduce the virus’s ability to get into hosts where it can replicate and – you know, mutate. Genetically modifying a mosquito won’t increase the rate of dengue virus mutation. Indeed, killing off dengue’s host in great numbers is the surest way to reduce the rate of viral replication and therefore mutation.
There are more questions than answers and we need more testing to be done.
If there’s one reliable, laughable bit of hypocrisy you always hear from the Luddites of the world it is this – “We’ve got to stop testing this technology because we haven’t tested it enough!” Whether it’s Greenpeace destroying a test field of potentially life-saving golden rice, or Ms. de Mier and her 170,000 petition signers trying to stop a field test of the Oxitec mosquito, you can be sure they don’t really want more testing. Ms. de Mier, this is how testing works. As the FDA has already ruled and as any professional biologist can tell you, the testing thus far has been more than ample to determine that field tests can proceed. We know these mosquitoes are safe for humans and animals; now we need to find out whether they are effective in lowering the transmission of disease.
Having exposed Ms. de Mier as either an ignoramus or a charlatan, let’s check out the comments and see who is signing this petition.
I don’t live in Key West, but I am sick and tired of Monsanto and other biotech companies using the general population as their laboratory! I can control what food I put in my mouth, but I cannot control their poisons blowing onto the crops that I eat, nor can I control getting bitten by mosquitoes! PLEASE do not let this insanity continue! No more genetically-mutated crap on this Earth!
Sandi White, Lowell, MI
Oxitec is not Monsanto. If you don’t want poisons blowing onto your crops, you should favor development of GM crops which reduce pesticide use, particularly insecticide use. Crap cannot be genetically modified, only the organisms (like Sandi White of Lowell, Michigan) which produce it (like the comment above).
I am certain that, though Oxitec claims that these mosquitoes will be harmless and/or beneficial, sooner or later it will be discovered that something is horribly wrong with these mosquitoes. Genetic engineering is in its infancy. Common sense dictates that the release of an experimental organism – one that breeds uncontrollably and will undoubtedly transmit antigens to humans and other hosts – into the natural environment is both moronic and irreversible.
Seth Casson, Kihei, HI
We should make all public health decisions on the basis of the certitude of Seth Casson of Kihei, Hawaii, right? Genetic engineering is hardly in its infancy; it’s been used for several decades and is responsible for major medical advances such as the ready supply of insulin for diabetics and the creation of mouse models for neuroscience research. The Oxitec mosquito can breed, this is true (that’s the point of releasing them), but the larvae will die before maturity. How is this breeding uncontrollably? And can we all agree Mr. Casson has used the word antigens while having no idea what it means?
I am also sick of Monsanto and other biotech companies using us as guinea pigs. We really DO NOT need to let loose GM mosquitos into the environment. Whatever happened to the USA being a country “for the people, by the people”? We were never asked if we wanted GMOs released into our environment and polls show that 90% or more of citizens don’t want them. It makes me incredibly sad and angry that the US has become a falsely “democratic” nation. There is very little democracy left if we have no voice.
Mairin Elmer, Fallbrook, CA
Oxitec is not Monsanto. Would Mr. or Ms. Elmer vote tomorrow to rid the world of insulin, or cheese, or other products largely available due to genetic modification? I think if you announced to the world they’d have to give up inexpensive cheese, that 90% figure would drop right quick. Heck, 80% of Americans oppose food with DNA (at least unlabeled). If we get rid of food with DNA, try surviving on salt. It’s the only food eaten in quantity that has no DNA. Of course, the food idiots tell you salt is bad for you too.
I’m signing because I want these atrocities to stop. You can’t mess with Mother Nature & not have something bad happen, they don’t know what they’re doing!!!
Karen Whissen, Newark, OH
Ah yes, the Frankenstein gambit. You can’t mess with Mother Nature, says Ms. Whissen, pounding angrily on an iPhone constructed of rare metals mined from the earth’s crust.
There’s not much point in going on, I suppose – by definition, if someone signed the petition, the comment is unlikely to be scientifically grounded. Perhaps, instead, we should take some comfort in the fact that only 170,000 people signed the petition, and not a single one of them could justify that signature with a coherent rationale.
Out Of Our Heads: Why you are not your brain, and other lessons from the biology of consciousness by Alva Noe, PhD (2009, Hill and Wang, New York)
I routinely teach a course formerly called The Psychobiology of Consciousness and currently called The Mind-Body Problem. Although I am not a consciousness researcher per se, I was drawn into the field of physiological psychology because of my fascination with this topic. Like many introspective people, I “discovered” John Locke’s inverted spectrum problem long before I’d ever heard of John Locke: if you and I are both looking at a red apple, how do I know that your experience of red is the same as mine? You might see it the way I see the blue sky, or a yellow dandelion; yet having learned the term “red” for that experience – the experience of looking at such an apple – you call it red, and beyond that verbal agreement, neither of us has direct access to the other’s subjective phenomenology. Later, as a graduate student, I learned that there was such a thing as blindsight – a neuropsychological syndrome usually caused by damage to primary visual cortex in which a person becomes blind – yet can paradoxically recognize objects by sight if forced to guess at their identity.
These examples convinced me that the best way to understand the mind-body problem – the question of how a physical brain can create ineffable subjective experiences (“red”, “cold”, “sourness”) – would be to become a sensory neurobiologist. Furthermore I began to study the taste system – because of all the sensory systems, that was the one that seemed to have the most circumscribed phenomenological experiences. Tastes were sweet, or sour, or bitter, or salty, and that was about it. (Yes strong and weak, and yes umami or oleogustic, but nonetheless, a more manageable range than millions of colors or thousands of auditory pitches.) Furthermore I styled myself as a researcher in “taste quality coding”, which is to say, I was interested in understanding the patterns of neural activity correlated with those particular experiences. In that respect my work was in the tradition of Francis Crick and Christof Koch’s suggestion that people interested in consciousness should begin to search for the neurobiological correlates of consciousness – brain activity associated with a particular feature of conscious experience.
Even at the beginning, though, I think I knew there was something wrong with this approach. There’s a danger in taking the word “coding” too seriously. When we taste something, some of our taste buds detect the molecules of our food, and cause electrical signals to stream towards our brains. Eating a sweet apple versus a salty pretzel both cause this electrical activity, but presumably the activity is different in some way for the apple than it is for the pretzel – hence we can tell the difference, and hence we experience sweetness in one case and saltiness in another. Whatever that difference is we might call the code for taste quality. Like a code, the meaning (“sweetness”) is in a different “language” (a barrage of electrical impulses). However, a code implies decoding – someone or something will translate the message and experience the sweetness as a result. But is this really what happens? There’s no little guy inside of our brains that decodes the message. Our brains operate on the language of electrical impulses: there’s no need for a decoding at all. This was a thought illusion one of my scientific heroes, Robert Erickson, tried (mostly in vain) to disabuse his colleagues of. One colleague who was sympathetic was Bruce Halpern, whose article “Sensory coding, decoding, and representations: Unnecessary and troublesome constructs?” must have pleased Erickson when it was presented at a festschrift in his honor.
Regardless of concerns about decoding, there is still the question of where our subjective experiences come from. The working assumption of Crick and Koch, obviously, is that they come from brain activity. Most people believe that only organisms with brains are conscious – I am conscious, the rock is not. My dog is conscious, my tomato plant is not. But if this is right (and when I get around to talking about Alva Noe, I will point out that he does not think this is right – or rather, that this is not the whole story) – then there is an interesting problem. Our brains are made up of 80 billion neurons (and hundreds of billions of glial cells) which are not in physical contact with one another, yet we seem to have only one unified consciousness. How is such a thing possible? (And Noe would chime in here: and why is the skull a magical barrier?)
Imagine we were to remove one of these 80 billion neurons. Or a million. Or a billion. Such things happen all the time of course, as a result of aging, neurodegenerative disorders, strokes, head injury. These events may change someone’s behavior, but they do not eliminate consciousness. But how far could we go? How many could we eliminate? (One could ask the reverse question: when does consciousness emerge in embryological development?) There’s really no principled way to give an answer to this question. I think, in fact, that it was because of this problem that the renowned philosopher David Chalmers proposed a radical solution. Unable to draw a line, Chalmers proposes that no, we’re wrong, the tomato plant is conscious too. And so is the rock. Chalmers proposes that consciousness is a fundamental property of the universe, like mass, and that (somehow) the magnitude of the consciousness is proportional to the amount of information involved. If this sounds loopy to you, I agree. If I get around to reviewing one of his books, I’ll say more.
Alva Noe (remember him? This essay is about him!) has a very different answer to this conundrum. Noe believes the mistake is to start with the premise that consciousness occurs inside of us, inside of our brains. He doesn’t believe the neurobiological correlates of consciousness will reveal anything about the mind-body problem. Instead of going inward, more and more restrictively (as Penrose and Hameroff do, with their idea of consciousness as a product of the quantum states of microtubules – an idea even loopier than Chalmers’ panpsychism), Noe goes more expansive. Noe suggests that consciousness is not something in us but something we do – and that it encompasses (is encompassed by?) our interactions with the world (including all that we are perceiving at the moment and all that we are acting upon). We should be looking not for consciousness in our brains, or even worse, in some small part of our brains (the microtubules of Penrose or the dynamic core of Gerald Edelman and Giulio Tononi), but rather in the dynamic interactions of a situated agent in its locally-accessible environment.
This may also sound like a loopy idea, but I don’t think so. Consider the following exercise I have my students try in the first week of class. Take a pencil and close your eyes. Now draw a tree on a piece of paper. As you move the pencil, ask yourself the following question: as you are guiding the pencil, do you in some sense “feel” the paper through the tip of your pencil? Most people do. (And the golfer “feels” the ball hitting the club, the blind man “feels” the grass with his cane, the gardener “feels” the roots of the bush with her rake.) Of course what’s really happening is the pencil, or the golf club, or the cane, or the rake, is vibrating against our hand and fingers in a way that we’ve learned to ascribe to that other feeling. Except that’s not quite right either, since if we are our brains, what’s really really happening is that the vibrations against our hands and fingers are causing neural activity in the hand region of primary somatosensory cortex (or somewhere “beyond” that in the neural circuitry). Or maybe the first description is right after all. Noe would argue for that more expansive view of our bodies as extended. The voice from across the room is experienced as being across the room, not in our auditory cortices and not in our eardrums.
These kinds of examples are discussed in Noe’s Chapter 4 (Wide Minds), where he also reviews some of my favorite studies from my class. There is the rubber hand illusion, in which an experimenter touches a fake hand which is visible to the subject while simultaneously touching (in the same relative location) the subject’s actual hand (hidden from view). Over time, the subject comes to experience that rubber hand as part of his or her own body, and to feel the touch as coming from the rubber hand itself. (If you watch the video linked here, be warned that the explanation provided for the effect falls into the usual trap that Noe objects to in his book.)
Noe addresses related experiments in his Chapter 3 (The Dynamics of Consciousness), which is the chapter where his book really begins to gather momentum. Here, he addresses the rewired ferret experiments of Mriganka Sur. These technically arduous and brilliant experiments (with one outstanding flaw, in my opinion, which maybe I will write about another time) essentially produced ferrets in which information from the eyes was redirected to primary auditory cortex. These ferrets behaved as though they still experienced vision despite this redirection, and features of the auditory cortex developed a visual-cortex-like character. In the battle, in other words, between the brain (I’m auditory cortex, therefore you shall hear) and the dynamic interactions of a situated agent in its locally-accessible environment (to coin a phrase), the latter wins. The sensory-motor contingencies were visual, so the experience was visual, despite the identity of the brain region.
Relatedly, Noe also describes another favorite of my Mind-Body class: sensory substitution, especially the work of Paul Bach-y-Rita. Bach-y-Rita was interested in developing a technology that might help the visually impaired. In the original incarnation, blind subjects were seated in front of a large TV camera, which they could direct at an object. The camera’s view would then be translated as little electrical tingles on the subject’s back, isomorphic to the scene. So if the camera was pointing at the letter X drawn on a chalkboard, the subject would feel an X-shaped set of tingles on his or her back. The technology improved over time, so that now the camera can be placed in a pair of sunglasses, and the electrode array is placed on a small pad worn on the tongue. Although Noe oversells the phenomenon a bit in his description, Bach-y-Rita describes the experience as visual or quasi-visual – at least, it is unlike touch. This phenomenology emerges once the subjects have some experience with the system, and is much more powerful when the subjects are in control of the camera. That is, pointing the camera at a stationary X is much less useful than panning the camera (now, by moving the head back and forth) – a behavior that is also very visual in nature. Even more exciting, users can duck to avoid objects or, alternatively, catch them. When visual objects approach us, they “loom” – they grow bigger. This does not occur (in nearly the same way) with somatosensory stimuli – so experienced users of this system immediately equate a spreading of the electrical tingles with an approaching object. They also quickly learn how to move their heads to get more information about an object, again, not a natural somatosensory behavior. Again, we have a case where the sensory-motor contingencies seem to specify the conscious experience rather than the brain area activated (here, the tongue region of somatosensory cortex).
The dynamic interactions of a situated agent in its locally-accessible environment are, once again, explanatory. (See also a recent exciting paper by Jamie Ward and Peter Meijer.)
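The core of the sensory substitution device is simply an isomorphic mapping: each camera pixel drives one stimulator, preserving spatial layout. The sketch below is my own toy illustration of that mapping, not Bach-y-Rita’s actual hardware or signal processing; the grid size and intensity scaling are invented for the example.

```python
# Toy sketch of sensory substitution's isomorphic mapping: each pixel
# of the camera image maps to one tactile stimulator at the same grid
# position. A bright pixel becomes a strong "tingle" at that location.

def image_to_stimulation(image):
    """Map a 2D grid of pixel brightnesses (0-255) to stimulator
    intensities (0.0-1.0), preserving the spatial layout."""
    return [[pixel / 255 for pixel in row] for row in image]

# A crude 5x5 letter "X" as seen by the camera:
X = [
    [255,   0,   0,   0, 255],
    [  0, 255,   0, 255,   0],
    [  0,   0, 255,   0,   0],
    [  0, 255,   0, 255,   0],
    [255,   0,   0,   0, 255],
]

tingles = image_to_stimulation(X)
print(tingles[0])  # -> [1.0, 0.0, 0.0, 0.0, 1.0]
```

Because the mapping preserves geometry, panning the camera shifts the whole tingle pattern across the skin the way eye movements shift an image across the retina – which is exactly why the interesting phenomenology only emerges once the subject controls the camera.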
There are problems with Noe’s ideas too. Phantom limb pain is a difficult condition faced by many amputees in which they continue to feel their non-existent limb – and often it feels excruciating. The usual explanation is that the lack of neural inputs from the hand to the somatosensory cortex produces a change in the brain so that this area is now dominated by inputs from other places – such as the face. Touch to the face is then felt in the hand (a case of the brain region winning over sensorimotor contingencies). There are also dreams and hallucinations – where sensorimotor contingencies would seem to have no explanatory power for a phenomenological experience – where the only thing that seems to be happening (correlated with the experience) is neural activity. To his credit, Noe takes on these situations. In some cases I found his explanations compelling (as with dreams) but in other cases less-so (as with phantom limbs).
Noe’s Chapter 6 is titled The Grand Illusion, which is how I first came to know of Noe’s thinking (he authored a paper called “Is The Visual World A Grand Illusion?” which I have used for many years in class). His answer to this question, by the way, is essentially “no”. Since this is probably most people’s answer to the question, one would wonder why such a paper would need to be written, which means I must do some explaining. Consider the examples I gave at the start of this essay. When we hear the sound of a distant voice, we experience the voice as coming from far away. In a sense, this is an illusion: the only reason we can detect the voice is that air molecules (set in motion by the speaker’s vocal cords) cause our ear drums to vibrate. Nothing about the way they vibrate indicates the origin of the voice that set them in motion. Likewise, we see the world in 3 dimensions: my coffee cup I see as being at arm’s length, my door is several feet away. But this too is something of an illusion. The only way I see these objects at all is that the reflection of light from them falls on my 2-dimensional retina. The brain, it would seem, creates the illusion of 3-dimensions. (Obviously a useful illusion, as it proves to be accurate when I reach for the coffee cup.)
But furthermore – so the story goes – we experience our visual worlds as being all in focus, and we experience ourselves as being able to easily detect changes in our environment. But a moment’s experimentation should prove that very little of the world is in focus: concentrate on any word on the screen of this essay, attempt to keep your eyes still, and notice that only that word is in focus. Also consider that magicians can easily fool audiences with sleight of hand tricks in which we fail miserably at detecting changes in our environment when we are distracted. (This is related to the psychological phenomenon of change blindness.) Noe describes the fact that many philosophers and psychologists have made much hay of these phenomena: that we have a false belief about the completeness of our perceptual worlds – and that this is the grand illusion. Noe argues that we do not in fact have false beliefs – or at least, that our behavior belies this. We are constantly shifting our eyes, tilting our heads. The artist does not look once at his or her portrait subject and draw from memory; the artist is constantly studying and restudying the subject throughout the sitting. We do not act as though we build up a representation of the world in our heads for constant consultation – we do not have to. The detail is not in our heads, it is in the world. Our feeling of the completeness of our perceptual experience is not, Noe would say, an illusion of the completeness of an internal representation of the world but rather an awareness – based on a lifetime of experience – that we have access to all that rich detail by employing the right, basic skills – eye movements, head movements, body movements. Again, Noe is reinforcing the point that consciousness is not in us but rather consists of what we do – the skills that we use to interact with the world.
For the neuroscientist – and for the taste quality coding theorists of the world – this hits home. Much of the program of sensory neuroscience has been based on understanding how stimulus features are represented in neural activity. In Chapter 7 (“Voyages of Discovery”), Noe takes on the giants of my field – David Hubel and Torsten Wiesel – Nobel Prize winning neurophysiologists. (Theoretical critiques aside, Hubel & Wiesel’s contributions to neuroscience are unassailable.) Noe notes that their discovery of the responses of visual cortex neurons – in anesthetized animals – was responsible for decades of research and thinking in neuroscience focused on understanding feature representation (which reached its most Baroque form with the probably misguided work of the genius David Marr.) The kind of reification of the duties of neurons or brain areas, and the eventual (also misguided) “modular brain” theories of cognitive science, are a long way from the warnings of Erickson and Halpern, cited earlier, that representations and internal models may not be necessary to explain behavior. (Mental representations or fuzzy modularity may still have some utility – but Noe would probably disagree.) Noe’s critique of Hubel and Wiesel was certainly the boldest part of the book, and for that reason, one of the most important.
In the end, then, I found Alva Noe’s book full of important ideas. He reviewed a number of key phenomena in psychology and neuroscience. He called out the hidden dualism of active programs in neuroscience. As effective and as thought-provoking as the book was, though, it still didn’t help me understand why that apple was red, and why it tastes sweet. The how of the mind-body problem still nags, but in part thanks to Noe’s writings, I am excited that we may have a better idea of the where.
An acquaintance of mine, on a message board, recently played the Frankenstein gambit in a discussion about the politicization of science. Here’s his quote (modified slightly to improve readability):
You seem to be ignoring the Frankenstein aspect of genetically modified crops in that genes are being inserted that are entirely alien to the organism…the kind of mutation that would never occur in a natural environment. Yes, it’s the point of GMO, a pretty powerful technology that has been harnessed to this point successfully…what might a failure in this technology look like?
At some point before or after this comment, my acquaintance expressed a preference for hybridization to generate new crop varieties, rather than transgenic technology, and also argued that “tampering with nature can be very wrong.” This appeal to nature, essentially the converse of the Frankenstein gambit, seems to be a powerful (if fallacious) argument that can be applied to any new technology (including, at one time, hybridization). Indeed, this appeal is so powerful that many pseudoscientific websites adopt appeals to nature in their very URL (such as Natural News, RawForBeauty, and so on). It’s also why companies are falling all over themselves to get the word “natural” into their product names.
In any event, my main objection to the Frankenstein gambit is not so much the appeal to nature (grating as that is), but rather its reliance on a very superficial understanding of genes. That superficiality is betrayed by the comment that transgenic technology requires the insertion of genes which are “entirely alien to the organism”. Unfortunately, scientists compound this problem.
Genes Make Proteins, Not Organisms
Consider golden rice. This remarkable transgenic crop could save millions of lives by providing a staple food with beta carotene, a nutrient our bodies convert into vital Vitamin A. The original incarnation of golden rice borrowed a gene from the daffodil and a gene from a bacterium in order to alter the nutritive characteristics of rice.
An unfortunate and misleading way of describing this would be to say that scientists put a bacteria gene and a daffodil gene into rice (reread my original description and note the different semantics). This way of phrasing invites the Frankenstein image: a monstrous rice, cobbled together from bits of bacteria and bits of daffodil. Who wants to eat bacteria? (Never mind that we shovel trillions of bacteria down our throats every day.) Who wants to eat daffodil? Yuck!
By calling a gene a daffodil gene, we imply that the gene’s job is to make a daffodil. We imply that the rice now has daffodil-like qualities. But that’s not at all what happens.
Consider an analogy. Jack is a contractor who’s been hired to build an elementary school. Naturally, this requires purchasing a lot of raw material – bricks, wood, drywall, insulation, pipes, paint, wire, fixtures, tile, shingles, nails, screws, and so on. After the job is done, he has a few hundred bricks left over. He is next contracted to build a private residence for Tom, a home-owner, and he says to Tom, “You’re in luck, I happen to still have a few hundred bricks from the school project, so I’m willing to offer you a discount.” Tom, revolted, complains, “Those are school bricks, Jack. I’m asking you to build me a house. I would go crazy living in a school.”
Genes don’t make organisms, any more than bricks are limited to making a particular kind of structure. Genes make proteins. In fact, they don’t make proteins in isolation; they contain a recipe for making a protein (in the form of the genetic code) in the presence of cellular machinery capable of converting the gene’s instructions into a working protein. This requires the participation of enzymes for transcribing the gene into messenger RNA, organelles like ribosomes for anchoring the growing protein, nucleic acids (in the form of transfer RNAs) to deliver the amino acids to the protein, and any number of additional enzymes for facilitating the process and shaping the final result. Amazingly, a human cell is capable of faithfully (or nearly faithfully) reading the instructions of genes borrowed from virtually any other life form on earth. The implication of this astonishing fact is that genes aren’t proprietary to particular organisms – we share far, far more in common with the source of any “alien” DNA than most people realize.
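To make the “recipe” idea concrete, here is a minimal sketch in Python of what that cellular machinery accomplishes: read a DNA sequence three letters (one codon) at a time and assemble the corresponding amino acids. The codon assignments below are a tiny excerpt of the real, nearly universal genetic code, but the mini-gene sequence itself is made up purely for illustration.

```python
# A tiny excerpt of the (nearly universal) genetic code:
# each three-letter DNA codon maps to one amino acid.
CODON_TABLE = {
    "ATG": "Met",  # Met also serves as the start codon
    "TTT": "Phe", "TAT": "Tyr", "GGC": "Gly",
    "AAA": "Lys", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA sequence three bases at a time, as the cell does."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":  # stop codon ends the protein
            break
        protein.append(amino_acid)
    return protein

# A made-up mini-gene: because the code is shared across life, the
# same sequence yields the same peptide no matter whose cell reads it.
print(translate("ATGTTTTATGGCAAATGA"))  # → ['Met', 'Phe', 'Tyr', 'Gly', 'Lys']
```

The punchline is in the comment: since virtually every cell interprets this table the same way, a gene is no more “proprietary” to its source organism than a recipe is proprietary to one kitchen.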
Rather than refer to one of golden rice’s transgenes as a daffodil gene, it would be more precise to refer to it as a phytoene synthase gene.
The gene doesn’t make daffodils, or the mysterious essence of daffodils: it makes (in the right cellular environment) a protein called phytoene synthase. The protein gets its name from its ability to synthesize phytoene from precursor molecules. In the right environment, phytoene will be further processed into beta carotene. Amazingly, the rice grain is just such an environment. Even though the rice grain lacks phytoene synthase, it contains all of the other enzymes required to make beta carotene (except phytoene desaturase, which is why two transgenes are required for golden rice).
Is there any justification for calling this a daffodil gene rather than a phytoene synthase gene? It wouldn’t seem so. Bricks aren’t just used to build schools, and neither is phytoene synthase just used to build daffodils. This enzyme is found in myriad organisms; this table shows a partial list.
In fact, today’s golden rice doesn’t make use of the gene from the daffodil. Instead it uses a gene from corn. Both daffodil and corn make beta carotene. Vitamin A is Vitamin A, whether it comes from eating daffodil (not recommended), corn, golden rice, or regular rice sprinkled with vitamin A from a multivitamin pill. The gene from the corn causes rice to make more than 20 times as much beta carotene as the variety using the gene from the daffodil, which is why it was used. How can this be? I presume the genes differ slightly: even though both proteins enable the synthesis of phytoene, the corn’s protein must work a bit differently, such that the reaction occurs faster with the corn enzyme than with the daffodil’s.
Don’t let this fact trouble you. The bricks Jack ordered for the school might differ slightly from the bricks he would have ordered for a private home, but that doesn’t necessarily make them incompatible. It might even make them slightly better, like when they overbook your flight and offer to let you sit in First Class rather than Coach. In any event this subtle variation in gene products brings up another important issue: how evolution works.
Evolution Isn’t Directed To A Purpose
My correspondent quoted above makes another interesting assertion. He says that a gene dropped into rice from a daffodil (for example) is “the kind of mutation that would never occur in a natural environment.” On one hand he may simply be saying that daffodils and rice can’t have sex with one another to “naturally” mix their DNA. But if we read his statement literally, he seems to be suggesting that daffodils and rice are so dissimilar – so alien to each other – that you’d never have rice arise with the ability to make beta carotene through natural means.
But that’s just wrong. In fact, rice does make beta carotene. It just doesn’t do so in the grain part of the plant that we eat. Furthermore, rice does make other enzymes required to synthesize beta carotene in the grain, suggesting that a relatively minor mutation could reinstate the synthetic pathway. It may even be that minor mutations caused the rice to lose the ability it once had.
Humans, for example, don’t make the enzymes required to make Vitamin C, though some fairly closely related organisms can (see figure; black lines are Vitamin C-producing animals and gray lines are Vitamin C-requiring animals). Given that Vitamin C is absolutely vital to survival (it is estimated that about 66 times as many British naval personnel died in the Seven Years War from Vitamin C deficiency – scurvy – than died in battle), it is difficult to see that loss as being adaptive. Ask anybody at risk of dying of scurvy if they wouldn’t like to borrow a gene from a rat or a lemur or a cat or a rabbit that permits their body to synthesize this life-saving substance. The fact that human ancestors had this ability, but modern humans don’t, is an accident of evolution that was possible only because humans evolved the ability to eat a diet rich in Vitamin C. This is probably the only reason that the failure to make endogenous Vitamin C doesn’t affect one’s genetic fitness – or rather didn’t, until man invented sailing ships that could stay at sea for weeks at a time.
Thus, although evolution generally enhances the genetic fitness of a species over generations, it does not always provide individuals with the ideal genetic complement. And even if it does, changes in the environment or in the milieu of a species can undermine that adaptability. Thus, when young men started sailing during the age of exploration, a new vulnerability – scurvy – became apparent. And when mankind became agrarian and discovered that rice was the easiest, least expensive, most reliable crop in certain parts of the world, Vitamin A deficiency suddenly became a killer. Whereas my friend views the current human genome as something sacrosanct, not to be tampered with, I view it as a halfway decent compromise, generated by a trial-and-error mechanism which, while a solid foundation, could stand quite a bit of improvement.
How does evolution work? Evolution is a 3-step process. First, you must have variation. Variation includes both different alleles (versions) of the same gene across members of a population, and also the emergence of spontaneous mutations through copying errors in the DNA. Next, you must have selection. In the natural world, selection is, essentially, early death: those best-adapted to an environment are more likely to survive than those poorly adapted. Third, you must have inheritance: children must resemble their parents. Thus, the survivors pass on their genes to the next generation, whereas those who experienced an early death do not.
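The three-step loop – variation, selection, inheritance – is simple enough to demonstrate in a toy simulation. In this sketch, every detail is invented for illustration: a “genome” is just 10 bits, fitness is the count of 1s, the fitter half of the population survives each generation, and children are slightly mutated copies of the survivors.

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

def mutate(genome, rate=0.05):
    # Variation: copying errors occasionally flip a bit.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=30, genome_len=10, generations=40):
    # Start from a random population of bit-string "genomes".
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives; the rest "die early".
        population.sort(key=sum, reverse=True)
        survivors = population[:pop_size // 2]
        # Inheritance: children are (slightly mutated) copies of parents.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(sum(g) for g in population)

print(evolve())  # fitness climbs toward the maximum of 10
```

Note what the simulation does not do: it never “aims” at the all-1s genome. Selection just works, generation after generation, on whatever variants happen to exist, which is exactly the point of the section that follows.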
Evolution doesn’t just work in the natural world. It works everywhere those 3 factors exist. Why were there no reality TV shows 30 years ago, whereas now every other show is a reality show? Because there was variation – in this case, innovation – reality shows (probably starting with MTV’s The Real World) began to be added to the variety of TV shows available. Then there was selection – the reality shows attracted viewers, made money, and thrived. Then there was inheritance – the reality shows were renewed for additional seasons, and produced spin-offs and copies – and you had an evolution of TV programs. One can tell a similar story about how SUVs suddenly came to dominate American roadways, how certain funny videos go viral, or how certain breeds of dog become popular.
However – and this is a key point – selection can only work on the varieties currently in existence. Consider another analogy. Jenny and I are coauthoring a magazine article. We decide that Jenny will write the first draft, and I will edit her draft and add my own comments to it. This could well result in a very different article than if I wrote the first draft and she edited it. My job will be to select and shape her ideas into something better – but this is a very different process than producing a first draft myself. Evolution works in this way – selection is a wonderful mechanism for increasing the fitness of organisms, but it can only work on the genes currently in the genome. This is why I say I view the human genome as having been arrived at through a bit of a slapdash process. We happen to have these particular genes – I suppose we might call them “human genes” – but I don’t attach too much importance to that. Our lack of certain genes was, in some cases, merely an accident of the way things played out. In addition: 1) many other organisms have many of those same genes, and 2) we can use other organisms’ genes just fine, as they can use ours. I elaborate briefly on each of these points below.
Human Genes (If You Must Call Them That) Are Found Naturally In Other Species As Well
Earlier I pointed out that a good number of plant species have the capacity to produce phytoene synthase, so much so that it was a bit pointless to call the phytoene synthase gene from the daffodil a “daffodil gene”. The same can be said for the human genome as well. Because of evolution, useful variations which cropped up in our ancestors hundreds of millions of years ago are still with us today – and still with many of our evolutionary cousins such as lemurs, or foxes, or pigeons, or squid. Because genes are composed of hundreds or thousands of codons (sets of base pairs that specify amino acid constituents of proteins), the genes are often slightly different in one species or another, but that’s also true comparing one human to another.
But we have so much in common with other animals (indeed, this is why neuroscientists often study other animals – not to learn about monkeys or rats or worms or fruit flies, but to learn about ourselves). GABA, for example, is one of the most important neurotransmitters in the human brain – but you’ll also find it in the nervous systems of most animals. Why? Because many species make the enzyme that converts glutamate into GABA – because we all tend to have the gene that specifies the recipe for a protein that will serve this function. In fact, any time you hear about some molecule that’s found in both humans and animals, it is very likely that some gene is responsible for making that molecule, or for making an enzyme that catalyzes its synthesis.
Thus it is possible to say that we share 98% of our genes with chimpanzees, 85% of our genes with the zebra fish, and 21% of our genes with roundworms.
Consider the discovery that insulin injections can alleviate the symptoms of diabetes. Insulin is a very large molecule and thus not practical to synthesize in a laboratory, so originally the insulin was harvested from the pancreas of cows and pigs. The reason this is even possible is that cows and pigs make insulin too (as do any number of species), and although there may be slight differences among the cow, pig, and human genes, and therefore among the cow, pig, and human insulin peptides, our cells respond to this animal insulin well enough that this “alien” insulin saved people’s lives.
I imagine there are any number of people who are squeamish at the thought of genetic modification but are perfectly okay with the idea of injecting cow insulin to treat diabetes. It is difficult to understand why. Surely grinding up cow pancreas, purifying the juice, and injecting it into a human is “unnatural”. Yet we recognize that many useful drugs can be derived from not only animals but plants, and we are used to the idea of putting these foreign substances in our bodies. Why play the Frankenstein gambit when it comes to GMOs?
We Have The Ability To Use The Genes Of Other Species (And Vice Versa)
But that’s not the best part of the insulin story. The best part of the insulin story is that we no longer have to grind up the pancreas of farm animals to supply the diabetics of the world with insulin. If you think carefully about where that insulin is coming from, the solution is obvious. A cow’s pancreas (like ours) contains cells whose job it is to release insulin when the organism eats a caloric meal. (The insulin is a signal to cells all over the body to prepare for the coming rush of energy.) Because the pancreas has to release insulin every meal (3 or more times a day in us, and an awful lot more than that in a grazing herbivore like the cow), those cells have to keep making insulin or they will run out. Each time the cells make insulin, the insulin gene in the cow’s DNA is copied to a messenger RNA strand, and that messenger RNA’s genetic code for the insulin peptide is “read”, codon by codon, by a ribosome, with transfer RNA molecules delivering the amino acids that build the growing peptide.
Point is, you don’t need a whole cow to make insulin. You just need certain cells in its pancreas. But actually, you don’t even need that. You just need the gene, plus the cellular machinery for making proteins. Once you have that, you have an insulin factory. Because all organisms share so much in common, the cellular machinery for making proteins exists in pretty much the same form in pretty much any living cell on the planet.
The solution? Rather than kill a bunch of cows, take a copy of the human version of the gene for making insulin, and place it into a yeast cell or an E. coli bacterium. Not only have you made an insulin factory, but you’ve made one that self-replicates. Just feed it, come back tomorrow, and you’ve got thousands of insulin factories. Not only that, but they’re cranking out pure insulin, reducing the possibility of impurities (from the cow or pig) causing allergic reactions in diabetic patients.
There’s another name for this insulin factory. It’s a genetically modified organism. (Shhh!)
The Last Question
The only thing I haven’t responded to in my acquaintance’s post is “What might a failure in this technology look like?” My response to this question would be to point out the kinds of failures that won’t happen – failures imagined by the people who don’t understand the technology:
- You won’t be altered in any odd way by eating a genetically modified crop. Although I’ve emphasized the remarkable fact that E. coli can read our genes, and therefore that we can read the genes of, say, some herbicide-tolerant ear of corn, that only occurs if you put the genes into the nucleus of some cell. Everything we eat contains DNA (except salt), but none of that DNA – “natural” or “modified” – can be incorporated into our cells. We enzymatically break down proteins and nucleic acids before we absorb them. In some cases that’s unfortunate – if someone eating golden rice could suddenly gain the ability to make their own Vitamin A, that wouldn’t be a bad thing!
- You won’t gain some strange allergy to the genetically modified food you eat – though I wouldn’t be so sure about new varieties that appear from hybridization. Genetically modified crops undergo safety testing – something not required of more “natural” means of creating new varieties. Yet there is no principled reason to require it in one case but not the other.
- We won’t experience some collapse of the food supply because GM crops become so popular we end up with a monoculture problem. There’s nothing inherently different about GM crops from other specialty crops, so such concerns aren’t unique to GM. And in many cases GM crops are adding needed variety – as, for example, in the case of the banana.
- We won’t unleash some kind of superplant or superfish that destroys the ecosystem. The Frankenstein gambit also invites the notion that we are creating Wrath of Khan-type super-beings that can out-compete their natural rivals. But any farmer can tell you how difficult it is to maintain a thriving farm – our crops survive not because they’ve been bred to be evolutionary winners; they survive because farmers provide them with round-the-clock tender loving care. Not to mention water, nutrients, pesticides, and, I suppose, scarecrows. Varieties selected or modified to better tolerate farm conditions are unlikely to be especially suited for spreading outside the farm.
- We won’t destroy natural plants through inadvertent cross-breeding with modified plants. Surely some plants will escape the farm, but a GM crop is so minutely different from the standard crops (one or a few genes’ difference) that you wouldn’t be able to distinguish an all-natural variety from a cross-breed. Nor is it likely that the new genes will significantly spread through the native populations.
- We won’t unleash environmental destruction by creating crops that can withstand pesticide application or which endogenously produce pesticides. On the whole, GM crops significantly improve the environment. Crops that make their own pesticides don’t have to be sprayed, minimizing leakage of pesticides off-farm. Decreased use of pesticides will also minimize crop dusting and use of farm machinery, and therefore usage of fossil fuels. Because these crops are better able to tolerate the farm environment, more crop can be grown on less farmland, minimizing the encroachment of farmland into native forests. All of these benefits have already been documented.
What fears are left? I don’t know; I can’t get into that head-space very easily. The most plausible failures of the technology are basically the same kinds of problems that occur with all modern agriculture. Growing lots of food puts a stress on the environment. On the whole, modern ag is a godsend to human comfort and longevity, but as with most things that bring major benefits, the industry comes with major trade-offs. The point is that GM doesn’t add anything new to those trade-offs and, if anything, begins to minimize some of the problems that already exist.
It’s been a long time since I read Frankenstein, but wasn’t one of the morals to that story that the monster wasn’t so bad – it was the people’s reaction to the monster that was regrettable? Well, if not, that’s how I would have written it.
P.S. Bonus points to those of you who know that The Last Question is this post’s Asimov Easter Egg. Double bonus points if you read this whole damn article.
Mindless Eating by Brian Wansink, PhD (2006, Bantam)
My review (out of 5 stars):
I don’t own too many books whose cover features a rave review from O: The Oprah Magazine. I also don’t own any other book that might remotely fit in the category of “diet advice books”. Most of that genre, in my opinion, ranges from despicable to utterly useless. But this book is different.
It is also a book I know well. The occasion of my writing this review is having completed reading the book cover to cover for the fourth time. Three of these read-throughs have been in conjunction with a class of senior psychology students as part of a course called The Psychology of Eating and Drinking Behavior.
Mindless Eating was written by Brian Wansink, a decorated scientist who holds an Endowed Chair at Cornell University. While Wansink’s book is full of practical advice for people trying to control their weight, his suggestions are evidence-based. He is a research scientist with a gift for designing revealing experiments of impressive creativity. While one could probably lose weight by applying Wansink’s many suggestions, for me the joy of the book is as a fellow research scientist appreciating the cleverness of the studies he has designed and carried out.
But even as a practical book on dieting, this one is different. Many popular diets are focused on physiology – and often on unproven physiological theories, or theories generalized far beyond their evidentiary support. Thus we get low-fat diets, low-carb diets, protein diets, restriction diets, advice to avoid processed foods, or to eat what our caveman ancestors ate – any number of theories borne of the belief that what we eat is crucial, and that what we have been eating isn’t good for us.
Wansink makes no such suggestions about what we should or shouldn’t eat. Wansink is a “calories in – calories out” theoretician – he believes that what we eat is not nearly so important as how much we eat. This shifts the attention away from understanding physiology and toward understanding behavior. The issue for him isn’t what happens to the food as it goes from our intestines into our bodies, but rather as it goes from the package or the plate or the buffet line into our mouths.
Another refreshing perspective in Wansink’s book is that he doesn’t really have any villains, other than that old classic – human nature. He doesn’t blame fast food companies, or modern farming practices, or marketers, or any of the other favored targets of our modern ills. To be sure, all of these are important aspects of our food environment which collectively make gaining weight easy, but the solution to gaining weight isn’t to battle these behemoths – it is to control our own local environments.
Enter the clever experiments. A movie popcorn bag adorns the cover of my paperback version of the book, alluding to an experiment in which he gave free popcorn to movie-goers in exchange for patrons filling out a short survey at the end of the movie. The popcorn was purposefully made stale (popped several days in advance), and patrons were either given a large container or a medium container (the medium still so large that no one could eat all of it). The result? Patrons given the large container ate considerably more than patrons given the medium container. Again, since no one even came close to finishing the medium container, the implication is that the size of the package – more than the taste of the food or the filling of our stomachs – suggests how much we should eat at a sitting. Consider this the next time you open a huge bag of potato chips that’s mostly air – the bigness of the container may cause you to overeat.
His book covers many replications and extensions of this package size effect. In one case, invitees to an ice cream social put away more ice cream when given a larger bowl and/or a larger scoop to serve themselves with. In another, Wansink invented a bottomless soup bowl, in which research subjects ate tomato soup out of a bowl that was unnoticeably being refilled from below. Again, while even the control group (eating from normal bowls) didn’t finish all of their soup, the bottomless group ate almost twice as much – favoring the opinion of their eyes over the opinion of their stomachs.
Wansink and others have also studied whether this prejudice of the eyes can be used to someone’s advantage. He mentions the work of Barbara Rolls, who made smoothies for two groups of subjects. In one group, she whipped the smoothies longer, which adds more air to the mixture, fluffing up the smoothie. Thus one group was eating a smoothie with half as many calories as the other, but in both cases the smoothie was one full glass. Each group was equally satisfied, and the amount they ate at a later meal depended not on how many calories were in the previous smoothie, but rather on how filling the previous smoothie looked to the eye.
In Wansink’s version, students attended a Super Bowl watch party with an all-you-can-eat chicken wing buffet. For half the tables, servers removed the bones as the chicken wings were consumed; for the other half, the wing bones were allowed to pile up. The students who could see the evidence of their consumption – the bones piling up – ate less than those whose tables were frequently bussed. Both groups could have relied on their stomachs, but didn’t. Informally, Wansink says he leaves empty wine and beer bottles out at parties he throws, to reduce the alcohol consumption of his guests.
Besides visibility, relatively minor increases in the amount of effort required to get access to food can have surprisingly large effects. Moving a candy dish across the room – still visible, but now requiring an office worker to get up and take a few steps to obtain it – decreased eating behavior substantially. Wansink suggests making food just slightly more difficult to obtain as a general strategy. Plate your food in the kitchen rather than at the table, so getting a second helping requires returning to the kitchen. Pour potato chips into a bowl, so getting more requires getting the bag out of the pantry again. Move fruit and vegetables out of the crisper drawer and onto the middle shelf of the refrigerator to increase the probability of selecting a healthier snack. In school lunch rooms, he suggests moving healthy snacks next to the register into “impulse buy” position.
Another set of studies examined the effect our expectations have on our appreciation of food. Subjects given strawberry yogurt to eat in the dark believed it was chocolate yogurt simply because they were told so. Subjects given a brownie on a paper napkin rated it as less tasty, and were less likely to buy more, than subjects given the same brownie on fancy china. Patrons in a test restaurant presented with a free bottle of North Dakota wine spent less time eating their food and rated their meal as less appealing than patrons eating the same meal with a free bottle of California wine. In reality, the wines were identical; only the labels were changed.
Similar effects were seen in a Hardees restaurant in which one room was temporarily given a makeover with calming music, tablecloths, and table service. The same fast food was rated as tasting better, and patrons spent an extra 10 minutes enjoying their meal during a busy lunch hour. Back in the test restaurant, half of the diners ordered from menus with plain food descriptions, and the other half ordered the same food from menus loaded with adjectives. The adjective-menu diners ate more, rated the experience more highly, and hung around longer. Think about that the next time you’re deciding between the “value-menu hamburger” and the “Ciabatta bacon cheeseburger” at Wendy’s.
Expectations can also create what Wansink calls health halos, which can actually be quite dangerous to someone watching their weight. For example, most people consider Subway to be one of the healthier fast food restaurants, but Wansink’s research shows that this knowledge can lead people to presume that everything at Subway is healthy and also that this gives them permission to overindulge there. Subway’s sandwiches may in fact be a healthier option than, say, a Big Mac, but if one then orders double meat, mayonnaise, a large soda, and a cookie, the advantage may be gone. Similarly, Wansink found that if people were given a bag full of granola that was (misleadingly) labeled low-fat or low-calorie, people would eat far more from that bag than other people given an unlabeled bag. Low calorie doesn’t mean no calorie – if you eat more of a low-calorie food because it’s “healthy”, then at some point you’ll eat more total calories than if you just stuck with the regular version. Even products labeled “heart-healthy” or “full of vitamins and minerals” – products making no overt claims about their calorie content – will be overconsumed on the mindless assumption that healthy in one respect means healthy in all respects.
The implications of Wansink’s work are simultaneously depressing and inspiring. It is depressing to realize that so much of our food behavior is mindless and cognitively impenetrable – that is, no matter how much education we have on these topics, we will still succumb to these environmental cues. (In one study Wansink trained students on the effect of package size on intake and even his educated students failed to moderate their selection of foods from larger bowls when put to the test.) But on the other hand, these cues can be tremendously effective at reducing intake when arranged to our advantage. Buying smaller plates, buying tall-and-thin glasses instead of short-and-fat glasses, plating food in the kitchen, increasing the apparent size of food by spreading it out or fluffing it up with air or low-calorie ingredients like lettuce on a burger, placing healthy items on the most visible shelves, placing candy dishes across the room (and in opaque containers) – simple environmental engineering can have a large impact over time. Wansink points out that all many of us have to do is eat 100 calories too little rather than 100 calories too much and, over time, we manage our weight.
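That last point is worth a back-of-the-envelope calculation. The sketch below uses the common (and admittedly rough) rule of thumb that a pound of body fat stores about 3,500 kcal; real metabolism is more complicated than this static model, so treat the numbers as illustrative only.

```python
# Back-of-the-envelope sketch of the "100 calories" point.
# Assumes the rough rule of thumb that ~3,500 kcal ≈ 1 lb of body fat.
KCAL_PER_POUND = 3500

def pounds_per_year(daily_kcal_surplus):
    """Weight change implied by a constant daily calorie surplus (or deficit)."""
    return daily_kcal_surplus * 365 / KCAL_PER_POUND

# A mere 100 kcal/day too much or too little adds up over a year:
print(round(pounds_per_year(100), 1))   # → 10.4 (pounds gained per year)
print(round(pounds_per_year(-100), 1))  # → -10.4 (pounds lost per year)
```

In other words, the difference between mindlessly overeating by 100 calories and mindlessly undereating by 100 calories is a swing of roughly twenty pounds a year, which is why small environmental tweaks can matter so much.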
This concept of environmental engineering needs to become better known among our nutritional gatekeepers – the ones who buy and prepare our food. Often this is mom or dad, but it also includes our politicians, now that obesity has become a cause célèbre and given that public school children get a large portion of their weekly calories at school. So much effort is now being wasted worrying about what we feed our kids at lunch, and too little attention has been given to the environments in which we feed kids lunch. Too much effort is spent on educating people about their food choices, and not enough on creating an environment in which people make the right choices whether educated or not.
Wansink’s book is about food, true, but it’s more about human behavior and decision making. I therefore recommend the book to anyone who is a human who eats food.