
I'll be watching the Olympics again this year whenever I can. I'll watch for the explosive performances and tight finishes in track and field. I'll look for the photo-finish touch in the pool. I'll be tuning in for gymnastics' superhuman displays of power and balance.

And I'll be watching a new Olympic pastime: seeing who gets caught using performance-enhancing drugs, or PEDs. "Citius, Altius, Fortius"--the Olympic motto--translates to Faster, Higher, Stronger. Does that sentiment drive some athletes to seek enhanced results?

This post marks the beginning of a blog series on PEDs and how they have affected the Olympic Games and their athletes. PEDs have profoundly changed modern sporting contests, and the media coverage of them, for decades. Beyond the Major League Baseball steroid scandals and July's renewed accusations of blood doping against Lance Armstrong, there have been instances and investigations of doped Olympians dating back to the East German and Soviet-era athletes of more than 40 years ago.

Only more recently have governing bodies attempted to test for and regulate PED use and abuse--most notably the World Anti-Doping Agency, or WADA, which oversees testing at international contests such as London's 2012 Olympics. Along with competitive advantages, PEDs carry severe health risks. They can even kill.

Dr. William Mitchell is an orthopedic surgeon and an expert on this subject, having worked with professional and amateur athletes in greater Boston for more than 25 years and having served as contributing editor to The Encyclopedia of Sports Medicine. Of the many drugs on the black market, Mitchell spotlights "the big three": erythropoietin, or EPO; human growth hormone, also known as hGH; and synthetic testosterone. All are prominently featured on WADA's nine-page list of prohibited substances, which went into effect on January 1 of this year. The list sets the standard for prohibited chemicals among international athletes and will guide official Olympic testing in the coming weeks.

Dr. Mitchell walked me through the first section of the WADA list, which covers anabolic agents: synthetic versions of testosterone. At the top of the long list are the lab-made hormones androstenediol and androstenedione. Like many similar anabolics, they mimic the body's main strength-building hormone, testosterone.

Both men and women produce testosterone naturally. It drives the development of male sex characteristics, like a deep voice, facial hair and the sexual organs. In both sexes, it helps convert amino acids from the diet into the proteins that make up muscle fibers. The more testosterone in the body, therefore, the more muscle-building potential.

Athletes take these synthetic forms of testosterone hoping to gain strength and muscle density, decrease recovery time after training, and reduce the incidence of injury during intense workouts.

But the side effects of these drugs go beyond the simple risk of getting caught. Rage, depression, severe acne and baldness, in both sexes, may be the best-known. Less widely known are the more severe repercussions of chronic use: liver abnormalities and tumors, heart and circulatory impairment, cholesterol problems, and the added danger of contracting infectious diseases, like HIV or hepatitis, from shared needles. Every one of these can be life-threatening. Because of these dangers, and because anabolic agents are used by so many athletes across multiple sports, WADA ranked them at the top of its prohibited list, said Mitchell.

Scrolling down the document, you don't have to go far to find the second and third of Mitchell's "big three," hGH and EPO.

Like the "andro-" drugs, hGH is an anabolic hormone closely related to testosterone. Naturally secreted by the pituitary gland in both sexes, it too increases muscle mass. Mitchell likens it to testosterone: hGH builds proteins from the food we eat so that bone and muscle can grow denser. Olympians in strength and speed events--sprinting, power lifting, swimming, boxing--may be competing against athletes who have used it.

Technically, hGH is available only by doctor's prescription, and it is typically used to help young children whose hormone deficiencies inhibit growth. Though it hasn't been studied as a performance-enhancer--the ethical implications of such a study are troubling--baseball fans allegedly saw it in action in Barry Bonds. HGH increased Bonds' muscle mass along with his shoe size and even his skull size, said Mitchell.

"That's human growth hormone," he said. "That's what it does."

HGH brings its own set of risks. Topping the list is cardiomyopathy, an enlargement and thickening of the heart muscle that weakens heart function over time. HGH can also impair glucose regulation, leading to type 2 diabetes. With prolonged use, joints, tendons, ligaments and muscles can deteriorate, leaving the aging hGH athlete ironically lacking in strength.

This brings us to the last of the "big three," EPO. The drug, epoetin alfa, is a laboratory version of erythropoietin, a naturally occurring hormone, produced by the kidneys and liver, that stimulates red blood cell production in the bone marrow. By increasing the number of red blood cells--which contain the hemoglobin molecules that carry oxygen from the lungs to the muscles--EPO boosts the amount of fuel muscles can burn for energy. In medicine, it has been used to treat anemia, a shortage of oxygen-carrying red blood cells, in patients with severely impaired kidney function and in those with diseases like AIDS. It can also be given before major surgery, like open-heart procedures, to counter anticipated blood loss.

But EPO has been implicated in the deaths of at least 18 cyclists during a period of alleged heavy use in the 1990s. These cyclists were victims of clotting events: strokes, heart attacks, and blood clots in the lungs known as pulmonary embolisms.

With so many grave risks, are the perceived benefits worth it? In fact, none of the touted performance benefits of PEDs, at the high doses acquired on the unregulated black market and over prolonged use or abuse, has ever been proven. After all, giving athletes high doses of dangerous drugs for research purposes would be highly unethical.

Mitchell agrees: "Doping increases health risks when doses and amounts of hormonal use is not regulated and can lead to overdosing and catastrophic health risks including death."

I had been working at NOVA for only a short time when I got the assignment to interview Sally Ride. I was working on a film called "Space Women" about the first three American women in space: Ride, Judy Resnik, and Kathryn Sullivan. Sally Ride had become the first American woman in space as a crew member on Space Shuttle Challenger's STS-7 mission, launched June 18, 1983, and there was an outpouring of national pride and excitement surrounding her success.

Watch footage from Melanie Wallace's 1984 interview with Sally Ride.

When I contacted NASA to set up my interview, Sally Ride was deep in training for her second mission, which took off on October 5, 1984 and landed on October 13. But a mere two weeks after Sally touched down at Kennedy Space Center, there she was to meet me.

Meeting Sally Ride was a lifetime thrill. Though I was a nervous wreck, she was relaxed and immediately put me and the crew at ease. She was articulate, funny, honest, and eager to share her enthusiasm for her job. Her passion for her profession and the camaraderie she shared with her crewmates were contagious.

Yet what has stayed with me all these years is how she continually described working in space as fun--lots of fun. She said it was fun to be weightless and fun to go through reentry. Ride was an outrageously accomplished woman--in addition to being the first American woman in space, she was the youngest American to fly in space at the time, held a PhD in physics, and was a champion tennis player--yet she embraced her career as an astronaut with sheer, playful delight.

When I asked Ride about her long-term goals--where did she see herself in five years?--I was surprised by her answer. She said she was not a very goal-oriented person and did not plan five years out. She was happy being an astronaut and planned on staying one for as long as NASA would have her. And I think she would have had a much longer career if not for the Challenger accident in 1986. She served on the Presidential Commission investigating the tragedy and resigned from NASA in 1987.

But NASA's loss was a gain for education, particularly science, technology, engineering, and math (STEM) education. In 2001 she founded her own company, Sally Ride Science, dedicated to her long-time passion: motivating young girls and boys to follow their love of science and consider careers in STEM fields. The company creates innovative classroom materials, classroom programs, and professional development training for teachers. Sally also initiated and directed NASA-funded education projects designed to fuel middle school students' fascination with science, including EarthKAM and GRAIL MoonKAM, and she co-wrote seven science books for children.

The critical importance of science education resonates deeply with all of us at NOVA. Over our forty years on the air, we have had the privilege to witness pivotal moments in the history of science and exploration and to share those moments with our audience. It is part of our mission to preserve these stories for the future.

The inspirational legacy Sally Ride leaves behind is vast and multifaceted. Without seeking it out, she embraced her public persona as a role model for young girls and women who have a passion to succeed in science. When I heard last night that she had passed away after a battle with pancreatic cancer, my heart skipped a beat. Although I had only met her once, I felt as if I had lost a friend. Yet I know her legacy will continue through her organization and through all those who have been inspired by her.

This is the final post in a series on evolutionary medicine, the application of the principles of evolution to the understanding of health and disease. Read the previous entries here and here.

Pain, fever, diarrhea, coughing, vomiting--these are all conditions most of us wish did not exist. We go to the doctor to get relief. So why does evolution keep them around? These miseries may actually help us survive by protecting our bodies from the damage of infection, injury, and toxins.

No one wants to feel pain, yet pain helps keep us alive. Individuals with a rare condition called congenital insensitivity to pain often injure themselves unintentionally, sometimes with devastating consequences, such as bone infections or destruction of tissues and joints.

Fever is similar to pain in that it makes us feel terrible, but can be beneficial. It provides us with a defense against infection by boosting the immune system and fighting off heat-sensitive pathogens. Given all the good that seems to come from fever, Dr. Matthew Kluger of the College of Health and Human Services at George Mason University suggests fever is most likely an adaptive response.

Kluger's studies show that animals that experience lower or no fever with infection fare worse than those whose temperatures shoot up. When he infected lizards with heavy doses of live bacteria, those that developed a fever survived, while those that couldn't raise their body temperature died. Other studies, compiled by Dr. Sally Eyers and colleagues at the Medical Research Institute of New Zealand, found that the risk of death was higher in animals given fever-lowering medication.

While we have a fever, our bodies strategically deploy a host of other tools to fight infection and get us healthy. "An infected animal loses food appetite, does not want to interact with anyone else, increases his body temperature, and fights infection," notes Kluger. "Then when infection is fought off, you see a change in behavior." In his fever studies, "before even looking at the temperature recorder," he explains, "we could see when fever broke."

Even the least glamorous symptoms can have a silver lining. Studies have shown, for instance, that individuals infected with bacteria that cause diarrhea actually stay sick longer when they take anti-diarrhea medications than when they let nature take its course without meds. The same can be true for coughing: In one study, elderly patients with a less-sensitive cough reflex were more likely to get pneumonia than their coughing cohorts.

The argument extends to vomiting, too, particularly during pregnancy. Some researchers argue that morning sickness is an evolved defense that protects a pregnant woman and her fetus from dangerous food-borne toxins. Across the world, nearly 70% of women experience nausea and vomiting during pregnancy. Many foods, especially meats, may contain viruses, bacteria, or fungi that can be dangerous to anyone, but some people are more vulnerable than others. Dr. Paul Sherman of Cornell University argues that the developing embryo and its mother are especially susceptible to these pathogens because of their weakened immune systems.

A pregnant woman's immune system is suppressed during pregnancy to keep her body from rejecting the fetus. The fetus is especially vulnerable during the early stages of pregnancy, when it is growing and developing most rapidly. If a woman became ill from food-borne toxins, especially in her first trimester, the illness could result in birth defects or miscarriage. Compiling nine different studies, Sherman found that women who experienced nausea and vomiting during pregnancy were less likely to miscarry than those without those symptoms. Though much more research needs to be done, it seems that morning sickness may be a defense evolved to protect the pregnant mother and her growing fetus.

Does this mean that we should all rid our medicine cabinets of anti-nausea pills, painkillers, fever-lowering medications, and cough suppressants? Dr. Randolph Nesse of the University of Michigan, one of the founders of the field of evolutionary medicine, suggests that in many cases it could still be safe to turn off (or tone down) the body's more disagreeable defenses. What we have to do is better understand the system so that we know when it is not safe to do so.

For Nesse, the body's defenses are akin to a smoke detector. "The system is set to go off like a smoke detector very often when there's no fire," Nesse explains. A smoke detector will alarm when it senses fire or smoke, just as the body's defenses kick in when they sense a danger to the body. Sometimes the detector will get it right, but often it will go off when there's no real threat. As the saying goes, nothing in life comes free: there is a cost to the defenses your body mounts against a sensed threat. But that cost is relatively small compared to the cost of not defending the body when something actually is wrong. Pain is uncomfortable and costs energy, but if you did not feel pain when you broke your leg, you would be in even bigger trouble.
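The logic can be put in back-of-the-envelope terms (the numbers here are mine, for illustration only, not Nesse's): it pays to trigger a defense whenever the probability of danger, multiplied by the damage the danger would do, exceeds the cost of the defense. If mounting a fever costs the body the equivalent of 100 units and an unchecked infection would cost 100,000, fever is worth triggering whenever the odds of a real infection exceed 100/100,000, or 0.1%. A system tuned that way will sound as many as 999 false alarms for every real fire, and still be doing exactly what it should.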

Nevertheless, most of the time, defenses kick in when they are not needed. "Many communities prohibit parking adjacent to fire hydrants," Nesse points out, "although the chance that a fire truck will use that hydrant on a given day is less than one in 100,000." Similarly, "birds flee from backyard feeders when any shadow passes overhead," even though most of these shadows pose no real threat. The frequency of false alarms is why you can block pain or fever most of the time and not see really bad things happen. "If [medical professionals] have an understanding of the smoke detector principle," Nesse explains, "they can begin to decide when it's safe to block defenses and when it's not."


Hunting for Higgses

Another version of this post appears on Cosmic Variance, where night owls will also be able to follow Sean Carroll's liveblogging of the 3 am ET July 4 announcement from CERN.

Greetings from Geneva, where I'm visiting CERN to attend the much-anticipated Higgs update seminars on Wednesday, July 4. We're all wondering whether the physicists from the Large Hadron Collider will say the magic words "We've discovered the Higgs," but there's more detailed information to watch out for. I've been hard at work on a book on the subject, entitled The Particle at the End of the Universe, so I'm hoping for some big and exciting news, but not so big that I have to rewrite the whole thing. (Note that I'm a theoretical physicist, so I personally am not hunting for Higgses, any more than someone who orders catfish at a seafood restaurant has "gone fishing." The real hunters are the experimenters, and this is their moment to shine.)

If at all possible, I'll try to live-blog here at CV during the seminars. They will start at 9 am Geneva time, a slot chosen to enable a simulcast in Melbourne for people attending the ICHEP conference. For folks in the U.S., it's not so convenient: 3 am Eastern time, midnight (July 3/4) Pacific time. Here is the seminar announcement, and of course CERN will have a live webcast. Or will try to, anyway; last time something like this was arranged, back in December, the live feed collapsed pretty quickly under the load. I'm sure I won't be the only one live-blogging: here's Aidan Randle-Conde and Tommaso Dorigo.

So what are we looking for?

This post is the second in a series on evolutionary medicine, the application of the principles of evolution to the understanding of health and disease. Read the previous entry here.

It's a basic tenet of biology that natural selection picks the most advantageous traits and passes them on to the next generation. Why, then, do people still suffer from debilitating genetic diseases? Shouldn't the genes that code for these diseases be removed from the population over time? How did they manage to keep themselves around during the course of human evolution? It turns out that there may be a reason that genes for harmful diseases survive evolutionary selection and pass from generation to generation.

One disease that has stood the test of time is sickle cell anemia. Sickle cell disease is a nightmare for the millions living with its symptoms, yet the gene that causes it may be a blessing for many others. The sickle cell gene has the potential to cause intense pain, delayed growth, and even organ damage or stroke. But it can also provide a measure of protection against an entirely different illness: malaria, a potentially fatal blood infection. How can the same gene have such different effects in different people? The answer is all in the genes.

Sickle cell disease traces to the gene for hemoglobin, the protein in red blood cells that carries oxygen throughout the body. Instead of inheriting a normal hemoglobin gene, babies with sickle cell inherit a mutated version. The abnormal hemoglobin molecules clump together, causing red blood cells, which are normally round, to become crescent-shaped.

A baby normally inherits two copies of a gene, one from each parent. In the case of sickle cell, inheriting two abnormal copies of the hemoglobin gene causes symptoms of the disease to appear. But babies who inherit only one copy of the sickle cell hemoglobin gene and a copy of the normal hemoglobin gene do not show sickle cell anemia symptoms.

Unlike kids with two copies of normal hemoglobin, though, kids with just one copy of the sickle cell gene are protected against the worst symptoms of malaria. Since malaria infection can cause flu-like symptoms, bleeding problems, shock, or even death, being able to diminish those effects has obvious advantages. In areas of the world where malaria infection rates are high, like Africa, South Asia, and Central and South America, this protection becomes even more important.
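This one-copy advantage is what geneticists call heterozygote advantage, and a toy simulation shows why it keeps a harmful gene in circulation. The sketch below uses invented fitness values purely for illustration (real values vary by region and malaria intensity): carriers out-survive both other genotypes, so the sickle allele settles at a stable intermediate frequency instead of disappearing.

    # Toy model of heterozygote advantage (invented fitness values).
    # A = normal hemoglobin allele, S = sickle cell allele.
    W_AA = 0.8   # two normal copies: vulnerable to severe malaria
    W_AS = 1.0   # carriers: malaria protection, no sickle cell disease
    W_SS = 0.2   # two sickle copies: sickle cell disease

    q = 0.01     # starting frequency of the S allele
    for generation in range(200):
        p = 1.0 - q
        # mean fitness under random mating (Hardy-Weinberg proportions)
        w_bar = p * p * W_AA + 2 * p * q * W_AS + q * q * W_SS
        # standard one-generation selection update for the S allele
        q = (p * q * W_AS + q * q * W_SS) / w_bar

    print(round(q, 3))   # settles at 0.2 = (1 - W_AA) / ((1 - W_AA) + (1 - W_SS))

Set W_AA to 1.0, removing the malaria penalty, and the same loop drives the sickle allele toward zero, which is consistent with the gene being rare in malaria-free parts of the world.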

Dr. Anthony Allison was the first to discover that the sickle cell trait provides protection against malaria. In 1954, Allison collected blood samples from children in Uganda in order to compare hemoglobin types and rates of infection with the malaria-inducing parasite Plasmodium falciparum. He found that individuals with the sickle cell trait--that is, just one copy of the mutated gene--had lower P. falciparum parasite counts than those with normal hemoglobin. They were also less likely to die of malaria.

Allison also discovered that the sickle cell hemoglobin gene is most common in parts of the world where infection with the P. falciparum parasite is very high. Since Allison's initial work, further research has supported his notion that the sickle cell gene, although the cause of fatal disease in young children, has stuck around because of the survival advantage it provides for malaria.

How exactly does sickle cell save many around the world from the deadly effects of malaria? Dr. Rick Fairhurst of the National Institute of Allergy and Infectious Diseases suggests that it may be related to how the malaria parasite affects red blood cells. In normal red blood cells, P. falciparum parasites leave a sticky protein on the surface of the cell. The sticky protein causes blood cells to adhere to each other and to the walls of the blood vessel, leading to a buildup that blocks blood flow and inflames the vessel. The stickiness also keeps the parasites from being flushed out of the bloodstream.

Kids who have the sickle cell trait, however, see a different end to this story: It's sickle cell to the rescue. Their infected red blood cells are not as "sticky" as infected normal red blood cells, which allows blood to flow more freely and quells inflammation.

Sickle cell hemoglobin is not the only type of hemoglobin that shields against malaria. "Alpha-thalassemia, HbE, and HbS are all different mutations, but they're doing the same thing," Fairhurst says. These hemoglobin gene mutations yield abnormal red blood cells and cripple the health of many kids. But, just like sickle cell hemoglobin, they also decrease the severity of malaria after parasite infection.

Mother Nature, explains Fairhurst, found ways to change hemoglobin to weaken the stickiness that parasites cause in red blood cells. He hopes to use this knowledge to develop new medicines to treat--or even prevent--malaria. "If the strength of binding is what's killing you, develop therapies that can weaken that binding," Fairhurst explains.

Natural selection seems to have had a hand in preventing serious malaria infection by maintaining the sickle cell and other abnormal hemoglobin genes, despite their potentially deleterious ramifications. It can help explain why other debilitating diseases are still around, too. Huntington's chorea is a genetic disease that causes degeneration of neurons in the brain, eventually resulting in the inability to carry out many everyday tasks, like walking, swallowing, and speaking. Unlike sickle cell disease, which requires you to inherit two copies of an abnormal gene to develop the disease, Huntington's requires only one copy of the defective gene. In other words, as long as you receive a copy of the Huntington's gene from at least one of your parents, you will show symptoms of the disease. Huntington's is also fairly common--about one in 20,000 people worldwide has the disease. Why does this disease, which has such devastating neurological consequences, still affect so many people?

Huntington's can escape natural selection because it does not appear until later in life--typically between ages 40 and 50. Because this is after reproductive years are over, there is no evolutionary drive to weed it out. Natural selection optimizes reproductive success, not health as an end in itself. Huntington's is therefore able to survive generation after generation because it is invisible to natural selection. Pretty sneaky.

But this may not be the whole answer to why Huntington's is still around. Surprisingly, the prevalence of Huntington's has actually increased over the years. Evolutionary "invisibility" cannot explain this increase. It turns out that the gene for Huntington's might actually provide a health advantage during reproductive years by protecting against an entirely different disease: cancer.

A recent study at the Centre for Primary Health Care Research at Sweden's Lund University found that individuals with Huntington's disease and other genetically similar diseases exhibited lower-than-average incidences of cancer. The decreased cancer risk was even greater for younger patients, suggesting that the greatest health benefit of Huntington's disease occurs right around the reproductive years.

Thus even Huntington's disease, which kept itself around by cheating natural selection, may be a double-edged sword. In one way or another, natural selection seems to maintain and even favor some truly dreadful diseases. By understanding both the good and bad, we may gain insights into how to treat--or even prevent--disease.

This post is the first in a series on evolutionary medicine, the application of the principles of evolution to the understanding of health and disease.

It's a nice sunny day out in the wild, where a hunter-gatherer man is enjoying his dinner of minimally processed plants and meat. Surrounding the hunter-gatherer is a horde of worms, parasites, and bacteria--organisms with which he has shared a home since birth. This is the environment, many scientists now argue, to which the modern human is adapted.

If you fast forward to the 21st century, however, our Paleolithic bodies are living in a very different modern world. Gone are the days of hunting and eating game animals and large amounts of wild plants. In the West especially, urbanization and increased standards of hygiene have depleted our environment of the microbes that the human immune system once needed to learn to tolerate. The hunter-gatherer diet has been overwhelmingly replaced by large helpings of grains, refined sugars, vegetable oil, dairy products, cereals, and other processed foods.

Our world has changed quickly. Our genes, not so much. Could the discrepancy between the environment to which humans are adapted and the one in which we now live be making us sick?

According to the World Health Organization, chronic diseases--like heart disease, respiratory disease, diabetes, obesity and stroke--are responsible for 63% of deaths worldwide. Some scientists link this rise in chronic disease to the change from a hunter-gatherer diet to the modern "Western" diet.

Hunter-gatherers took in about one-third of their daily calories from animal meat, usually lean meats and fish, and the remaining two-thirds from fruits, vegetables, nuts, and other plant foods. Today, however, about 70% of the human diet is made up of foods the hunter-gatherer would rarely, if ever, have eaten: cereals and refined grains, milks, cheeses, syrups and refined sugars, cooking oils, salad dressings, and so on.

The hunter-gatherer diet provides very different nutrients from those in the modern diet. Hunter-gatherers took in considerably higher levels of fiber and various vitamins and minerals, including vitamin C, vitamin B-6 and -12, calcium, and zinc. They also took in significantly less sodium. While hunter-gatherers probably ate less than 1,000 milligrams of sodium each day, the average American consumes three times that amount.

Why should it matter that we eat differently than humans did tens of thousands of years ago? Were hunter-gatherers actually healthier than we are now? S. Boyd Eaton, M.D., of Emory University in Atlanta, Georgia, believes so. All of these dietary differences may adversely affect health. High sodium intake, for example, can contribute to hypertension or osteoporosis, while lower protein consumption could contribute to stroke or weight gain. "Human ancestors had almost no heart disease" and no obesity, Eaton argues.

Yet critics point out that our ancestors simply did not live long enough to experience chronic disease. Since there is little age-related information about very early humans, their life expectancy is estimated using statistics from the 18th century and from current hunter-gatherers. This evidence suggests that the average life expectancy of pre-industrial humans was probably 30-35, whereas now, the average human life expectancy is about 68 years.

Eaton, however, argues that the physical signs of cardiovascular disease can be detected much earlier in life. For instance, when Dr. Abraham Joseph and his colleagues at the University of Louisville looked at trauma victims aged 14 through 35, they found that about three-quarters of them already had evidence of cardiovascular disease.

And it's not just our diet that is changing faster than our genes can keep up. As cities were built and hygiene standards rose, the critters alongside which we evolved were wiped from our environments. Some researchers believe our bodies miss these worms and bugs, and that disorders like allergies, asthma, and inflammatory bowel disease could be the result.

"All of these organisms that you pick up everyday have to be tolerated," explains Dr. Graham Rook of University College London. When parasites are ubiquitous, as in Paleolithic times, they "start to be relied on to regulate the immune system." When we don't encounter these critters, Rook hypothesizes, it upsets a delicate balance between immune cells in the human body.

In fact, epidemiological studies have shown a correlation between hygiene and the prevalence of inflammatory and immune diseases. In less-developed countries, where the worms and bugs with which humans evolved are still abundant, inflammatory conditions are less common, says Dr. Joel Weinstock, chief of gastroenterology and hepatology at Tufts Medical Center. Weinstock is currently investigating how parasitic worms interact with the immune systems of their human hosts.

Some doctors are now hoping these "old friends" could inform new treatments for allergies and other immune disorders. Two such researchers, Dr. Jorge Correale and Dr. Mauricio Farez, of the Raúl Carrea Institute for Neurological Research in Argentina, performed a study in 2007 in which they infected multiple sclerosis (MS) patients with parasites. The researchers hoped to determine whether parasite infection could reduce symptoms of MS, an autoimmune disease, which include numbness, muscle weakness and spasms, vision loss, problems walking, and speech problems.

Patients in the study were randomly assigned to receive treatment with one of four different species of worms. Uninfected MS patients and healthy individuals served as controls. The researchers found that during the almost five-year follow-up period, MS patients infected with worms showed significantly fewer flare-ups than non-infected MS patients. Plus, infected patients produced more of the particular immune cells that regulate the immune system--the same cells that Rook and others believe have declined with the increased cleanliness of our living spaces.

Rook admits that our "unnaturally" hygienic environment is probably just one factor in the dramatic rise of autoimmune and inflammatory disease. Dr. Scott Weiss of Harvard Medical School has another culprit in mind, particularly when it comes to asthma: vitamin D deficiency. "If you really look at what has happened with autoimmune disease," he says, "they started to increase in the 1950s and even more dramatically in the '60s, '70s, and '80s. Now they have leveled off." This is the same period, Weiss explains, during which we started spending less time outdoors and more time inside, enjoying new inventions like television and air conditioning. Since sunlight triggers the body's natural production of vitamin D, less time outside in the sun means less vitamin D for the body.

Weiss and his colleagues found that women who took in more vitamin D when pregnant delivered babies who were less likely to have asthma-related symptoms, like wheezing, as toddlers. Our hunter-gatherer ancestors probably also saw less asthma because they spent most of their time outside, soaking up sunlight and producing plenty of vitamin D.

Is it time, then, to give up your television, air conditioning, processed food and Purell and head to the sunny outdoors to kill and scavenge your own meals, hunter-gatherer style? Well, maybe not yet--at least not completely. As Rook explains, relaxing hygiene now wouldn't help us get back our old friends, but instead would expose us to new dangers. While there may be benefits to our old hunter-gatherer diets, our current diet has its advantages, like the higher calorie content.

Nevertheless, we live in a world that is very different from the one in which our ancestors evolved. Our genes have not changed as quickly as our diet, physical environment and lifestyle. Until our genes can catch up to the world we've created, we may have to find ways to bring back pieces of our old world.

How do we know what the Milky Way actually looks like, when we're inside it? I asked Mark Reid, Senior Radio Astronomer at the Harvard-Smithsonian Center for Astrophysics.

For more with Mark Reid on the Milky Way, check out the Q&A below.

NOVA: Do we know what the Milky Way looks like?

Reid: The answer really is no. At this point, we do know it is a spiral galaxy, and we know it has spiral patterns, but we don't really know where these spiral arms lie. There's even debate among astronomers about whether there are two or four arms. I would say within the next few years, when we've been able to map the positions of the young stars that trace out these arms, we'll know exactly where the arms are and how many there are. We have good preliminary evidence to show there are four arms. And we know a little bit more about how tightly wound or open the arms are--there's been some debate about that--and it looks like it's a fairly loosely wound spiral, but to go beyond that will take a lot more observations and data.

NOVA: What are some of the challenges to studying the Milky Way?

Reid: The dust and gas in the Milky Way really makes it difficult for astronomers who use optical telescopes to see very far at all. And so the idea of measuring the distance to very distant stars with optical telescopes and making a map out of all that data won't work very well. You just can't see far enough. So you really have to use other techniques. You could use infrared light, or you could use radio light. What I use is radio light.

NOVA: Radio light?

Reid: Radio waves are light, they're not sound. They can be used to carry sound if you want, as people do with radio stations, but radio waves are still light--your eyes just can't see it. Radio telescopes collect radio light in much the same way that optical telescopes collect optical light. The ones I'm using are called the Very Long Baseline Array; there are 10 of them, all across the U.S. from New Hampshire to Hawaii to the Virgin Islands, and a lot of them in between. By a technique called very long baseline interferometry, I'm able to make an image of what the sky would look like if your eyes could see radio waves and were as big as the Earth.

NOVA: Why use radio light?

Reid: The one advantage we have with radio light--with all these telescopes all across the earth--is that it gives us incredible resolving power. We can measure very, very small shifts in angle as the Earth goes around the sun, and from that we can calculate the distance to the star. That's a very powerful technique which you can't really do at any other wavelength at the moment. It's directly analogous to a surveyor measuring out a plot of land. A surveyor will look at an object against a background, then move to a different position and measure the angle again. If you know the baseline and the angles, you can calculate the distance. We've just extended that technique by a factor of a billion or more.

We can not only measure where things are, we can measure how fast they're moving. So for every star-forming region we observe in the Milky Way, we look at how it's moving, and that tells us how the Milky Way rotates. From our observations we can tell how fast the Milky Way spins, and that in turn tells us how much mass is in the Milky Way. We found that it spins about 15% faster than previously thought, which means the mass of the galaxy is about 50% more than earlier estimates.
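To make Reid's surveyor analogy concrete, here is a minimal sketch of the parallax arithmetic (my illustration, not Reid's actual pipeline; the 100-microarcsecond parallax below is an invented example, not a real measurement). With the Earth-Sun distance as the baseline, a source's distance in parsecs is simply one divided by its parallax angle in arcseconds.

    import math

    def distance_pc(parallax_arcsec):
        """Trigonometric parallax with a 1 AU baseline:
        d [parsecs] = 1 / p [arcseconds] (small-angle approximation)."""
        return 1.0 / parallax_arcsec

    # The VLBA can detect angular shifts of order 10 microarcseconds.
    # An invented parallax of 100 microarcseconds:
    p = 100e-6                            # arcseconds
    print(distance_pc(p))                 # 10,000 pc, i.e. 10 kiloparsecs

    # The same answer, surveyor-style, from the baseline and the angle:
    AU = 1.496e11                         # baseline: Earth-Sun distance, meters
    PARSEC = 3.086e16                     # meters
    angle = math.radians(p / 3600.0)      # arcsec -> degrees -> radians
    print(AU / math.tan(angle) / PARSEC)  # ~10,000 pc again

The second calculation is just the surveyor's triangle written out; the parsec is defined so that the two give the same answer.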

NOVA: What do you look for?

Reid: We look at very young stars. You can't detect the stars themselves with radio waves, but you can detect the clouds of gas around them, because they just formed. Most of the gas is hydrogen, but there are trace amounts of water. The water molecules can act like a maser, which is a radio-wavelength version of laser light, and that lets us measure position very well. Now, there are probably only 500-1,000 of these very bright, very young stars in the Milky Way at any one time, but the nice thing is they're very bright and they trace out the spiral arms very nicely, much like in other galaxies.


Living Elements: Calcium

This post is the third in a three-part series on how living creatures use the elements of the periodic table. Read earlier posts here and here, and learn more about the elements on NOVA's two-hour special, "Hunting The Elements."

Calcium (Ca) is all around us and even within us: in rocks, shells, pearls, antacids, bones, teeth (in the form of a mineral appropriately named apatite), nerves, and our beating hearts. Simply reading and thinking about this post requires calcium ions to shuttle through special channels in our cells, driving cardiac muscle contractions and the release of neurotransmitters.

Yet one organism plays a large role in removing calcium from watery environments and trapping it in the form of calcite, a rock-forming mineral. This organism is called a coccolithophore: a photosynthesizing, single-celled marine alga.

For the past 230 million years, coccolithophores have been protecting themselves with calcium armor. This armor is made up of hubcap-shaped structures called coccoliths, which are composed of calcite (CaCO₃): one calcium atom bound to one carbon atom and three oxygen atoms. Each coccolith is just a fraction of a millimeter across, so coccolithophores combine dozens of them to create protective scales, as shown in the image below.

Coccolithus pelagicus. Credit: Richard Lampitt, Jeremy Young, The Natural History Museum, London. Via Wikimedia Commons and PlanktonNet.

Over time, these scales flake off into the water; all told, coccolithophores shed as much as 1.5 million tons of calcite each year. This makes them the largest calcite producers in the world's oceans.

Coccolithophores have a complex effect on the biosphere. In the short term, they photosynthesize, taking in carbon dioxide from the atmosphere and emitting oxygen. When they create their coccoliths, they take carbon, oxygen, and calcium from the water, locking away a carbon atom that could otherwise have become carbon dioxide. Yet in the process of making their coccoliths, they also emit carbon dioxide.
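The standard calcification reaction (textbook seawater chemistry, not something unique to coccolithophores) shows how both things happen at once. Calcite is built largely from calcium ions and dissolved bicarbonate:

Ca²⁺ + 2 HCO₃⁻ → CaCO₃ + CO₂ + H₂O

One carbon atom is locked into the solid calcite while a second escapes as carbon dioxide, which is why building coccoliths both stores carbon and releases it.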

A large-scale effect of coccolithophores is that the carbon used to create the coccoliths is trapped as calcite. When the scales fall off of the organism, the calcite sinks to the ocean floor, where it mixes with silt and clay to form chalk. Over time, the deposits of coccolithophores can accumulate and create geological wonders like the White Cliffs of Dover.

The White Cliffs of Dover. Credit: Remi Jouan, via Wikimedia Commons.

When coccolithophores find the right mixture of sunlight and nutrients, they quickly proliferate, leading to large "blooms." These blooms are so large and dramatic that they can even temporarily change the color of the local seawater. In the image below, taken from space, the calcite scales turn the water brighter and more turquoise, causing it to reflect more sunlight.

A phytoplankton bloom in the Barents Sea. Credit: NASA image courtesy Jeff Schmaltz, MODIS Rapid Response Team at NASA GSFC.

So the next time you doodle on a blackboard or draw some sidewalk art, think about the millions of coccoliths crammed into that piece of chalk.

Living Elements: Iron

This post is the second in a three-part series on how living creatures use the elements of the periodic table. Read the first post here, and learn more about the elements on NOVA's two-hour special, "Hunting The Elements."

Iron (Fe) is the most common element by mass on Earth. But it isn't just the stuff of pots, pans, and fences: It is also one of life's essential nutrients. The iron-containing hemoglobin in our red blood cells carries oxygen, and iron helps plants create chlorophyll. But when iron combines with oxygen to make the magnetic compound magnetite, it becomes a built-in compass for living creatures.

Magnetite has been found in the brains of termites, bees, fish, birds, dolphins and even humans. Creatures like bats, sea turtles, pigeons, and salmon are able to sense the planet's weak magnetic field, which helps guide them on their migrations. Even tiny bacteria (called magnetotactic bacteria) use the Earth's magnetic field to orient themselves.

Magnetotactic bacteria were first discovered in 1975, when a researcher noticed that the bacteria he was studying kept swimming north. Curious, he put a magnet near them and, voilà, they aligned themselves with the magnet, just like in this video.

When a magnetotactic bacterium develops, it grabs three iron atoms and four oxygen atoms from its environment and uses them to make crystals of magnetite (Fe₃O₄). Once the bacterium has created enough magnetite crystals, it links them together to create a magnetosome. This turns the cell into a sensitive living compass. But why would a bacterium need a compass in the first place?

Magnetotactic bacteria are very picky about where they live. They prefer deep water, where there is little oxygen but plenty of the ions they need for their metabolism. Furthermore, there are two varieties of magnetotactic bacteria--one that points north and one that points south--corresponding to the hemisphere they live in. By following Earth's magnetic field lines, they find their way to the deep water in which they thrive.

The magnetosome automatically directs the movement of the bacteria, even after the bacteria are dead. When magnetotactic bacteria die, their denser-than-water magnetosomes cause them to sink to the seafloor, where they become embedded in marine sediments. Their fossils remain oriented with the magnetic field, leaving a historical record of the Earth's changing magnetic field.

This is just one of many ways that organisms transform the elements, literally bringing chemistry and physics to life. If only we had a large, relatively powerful magnet permanently within us--we'd probably never have to ask for directions again.


Living Elements: Silicon

This post is the first in a three-part series on how living creatures use the elements of the periodic table. Learn more about the elements on NOVA's two-hour special, "Hunting The Elements."

When you think of silicon (Si), you may think about Silicon Valley and the fact that modern computing would not be possible without this element in computer circuits. You may also think about how silicon is the second most abundant element in the Earth's crust, after oxygen. You even see it (or don't see it) every time you look through a piece of glass.

What you probably won't think about are diatoms: tiny photosynthesizing algae that are the primary movers of silicon in the world's oceans. Diatoms depend on silicon, flocking to wherever it is available and, in the process, generating enormous blue-green blooms.

A bird's-eye view of diatom blooms in the ocean. Credit: Norman Kuring/NASA Ocean Color Group, via NASA Earth Observatory.


Every year, diatoms use almost seven trillion kilograms of silicon. They collect it in the form of silicic acid--silicon bonded to four hydrogen and four oxygen atoms--and convert it into silica, the primary constituent of glass. What could algae possibly be doing with all of this amorphous, natural glass?
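(For the chemistry-minded, the conversion is a simple condensation, shown here in simplified textbook form: Si(OH)₄ → SiO₂ + 2 H₂O. Silicic acid sheds two water molecules, leaving behind solid, glassy silica.)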

Diatoms use silica to create protective cell walls, called frustules, that are strong and chemically inert. Essentially, this is glass armor--cheap glass armor. Making its shell costs a diatom only about two percent of the total energy it needs for growth.

Their beautifully geometric, symmetric shells are tiny marvels of natural art. Furthermore, each of the estimated 100,000 species of diatoms has its own unique glass shell design. Diatoms are extremely efficient micro- and nano-architects, consistently creating the same 3D structure over and over again.

A collection of various diatom species. Credit: Wipeter, via Wikimedia Commons.

The intricate armor protects the diatoms and also acts as a chamber that helps them photosynthesize by boosting their surface area, facilitating the exchange of gases between the air, seawater, and organism. The silica in the shell also speeds up the conversion of ocean bicarbonate into carbon dioxide by changing the acidity of the surrounding water, which facilitates the exchange of protons in vital chemical reactions. The diatom then uses this carbon dioxide for photosynthesis. Thanks to silicon, diatoms have carved out a unique biological niche in Earth's oceans.
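In rough chemical terms (the standard carbonate equilibrium, simplified here for illustration), the reaction being nudged along is HCO₃⁻ + H⁺ → CO₂ + H₂O. By buffering the acidity near the shell, the silica helps supply the protons that push dissolved bicarbonate toward carbon dioxide, the form of carbon that photosynthesis can actually use.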

With such efficient photosynthesis, thanks to their miniature greenhouses of silica, diatoms produce about one quarter of the world's oxygen--almost as much as all tropical forests combined. This process also removes great amounts of carbon dioxide from the atmosphere, potentially helping combat global warming. When diatoms die, their heavy shells carry them down to the depths of the ocean, taking that carbon with them. So not only do they play a vital role in the global silicon cycle, they are also responsible for removing carbon dioxide and producing the oxygen that we breathe.

Having used silicon this way for more than 110 million years, diatoms are a vital part of Earth's biogeochemical silicon cycle. They are the world's way of moving silicon between rocks, water, and life--a form of life that changes the chemistry of the world's oceans.

In a beautiful cycle, the tiny glass masterpieces created by living diatoms sink to watery depths when the diatoms die. Many of the silica shells become part of the rock record when they settle on the seabed. Over time, these rocks dissolve into orthosilicic acid, which is reused by new diatoms. The rocks also erode, and the old silica shells wash up on shores as part of beach sand. For 5,500 years, humans have been making glass from sand, some of which is the legacy of diatoms.
