
Hey Mars, What's For Dinner?

As I prepare for my cross-country road trip from Boston to California, one thing I'll have to pack is plenty of food. Of course, my trip won't require too much planning in that area, because I'll be staying on this planet. But when it comes to sending humans to Mars, a Ziploc of trail mix and a few tubes of Pringles won't suffice--after all, astronauts sailing through space on their way to Mars won't be able to pull over at Burger King and grab a medium number one, no pickles.

So, how exactly are astronauts supposed to maintain a healthy diet on such a long journey? And what will they eat once they settle down on the Red Planet? Food in space has come a long way since John Glenn, of Project Mercury, tucked into his small aluminum packages of foods and liquids. Today, astronauts on the International Space Station enjoy a variety of canned, packaged, and freeze-dried goods, condiments for extra flavor, and drinks ranging from juice to coffee.

Dinner is served at the Johnson Space Center food lab. Image courtesy NASA.

Adapting food for a space flight was one challenge. But if astronauts are to stay on Mars for an extended period of time, they won't be able to bring enough food with them. They'll need to grow it. On Mars.

Mars' environment is starkly different from Earth's. The surface pressure on Mars is roughly 1/100th of that on Earth, and temperatures range from -200 to 80 degrees Fahrenheit. Greenhouses--known as "food production units" at NASA--have to be carefully designed to fend off Mars' harsh conditions while still nurturing healthy plants. Martian farmers will also need to imitate the wide variety of growing environments found on Earth: certain crops may need a high-humidity environment, while others, such as squash or tomatoes, may not.

To test the techniques future explorers might use to farm on Mars, scientists at the Kennedy Space Center are experimenting with growing lettuce and peas in low-pressure greenhouses. Why low pressure? As Robert Ferl, director of the Interdisciplinary Center for Biotechnology Research at the University of Florida, explains, low-pressure units are easier to transport to and build on Mars. Plus, at low pressure, plants soak up and release water faster, allowing for speedy recycling of the limited water available within the dome. Low pressure also inhibits hormones such as ethylene, which can cause fruits and vegetables to ripen and rot quickly.

Private companies are investigating Mars agriculture, too. Mars One, which has set the ambitious goal of settling humans on Mars by 2023, intends to feed its colonists using Plant Production Units that control temperature, light, and humidity, allowing plants to thrive despite the harsh conditions outside. They are essentially closed rooms with beds of plants situated beneath rows of colored LED lights, explains Gertjan Meeuws, a managing partner at Plant Lab, the company that manufactures the units. The LEDs generate light in red, blue, and a nearly invisible color called far-red, a spectrum that is optimal for photosynthesis. The units also control the air temperature and disperse water on a timer.

These units weren't originally designed for Mars, but were created to feed Earth's growing population. Because they can be placed directly in cities, says Meeuws, they cut out many of the traditional steps of food production and transportation, qualities that will also be important on Mars.

No matter what kind of structure Mars crops are grown in, though, they will need water. Mars is drier than the Sahara, so according to Ferl, the key to maintaining a strong water supply will be recycling. Astronauts managing the growth operations must make sure all water that escapes into the atmosphere is later condensed so that it can be reused, either by the astronauts themselves or to water plants in the closed greenhouse system.

Yet Mars is not totally dry. Subsurface water ice could be mined for drinking and for watering crops. Or, if humans settle far from these natural water sources, they might be able to synthesize water using hydrogen and oxygen locked up in nearby rocks.

Growing crops on an alien planet is just one of the challenges. Preparing those crops is another. Food scientists from Cornell University are working to create menus for HI-SEAS, a simulated Mars mission slated to begin in early 2013. Set in a remote lava field 8,500 feet up a mountain in Hawaii, a stand-in for Mars, the 120-day simulated mission will focus on how to feed human settlers with the limited resources they will have. Space flight experts recognize the psychological importance of a diverse meal schedule, explains Cornell biological engineer Jean Hunter, so one of the main goals of the simulation is to create an outer space menu that will prevent "menu fatigue."

So on my way to California, as my fingers brush the bottom of that bag of trail mix and I start hankering for a bag of Cheetos, I will keep in mind that my menu fatigue is nothing compared to what future Mars astronauts will have to manage. All I know is that when I get to California I am there to stay, and I hope I survive being away from Red Sox fans and jaywalkers. As I stop at the Grand Canyon and look up at the sky, I can rest assured that my fellow humans who choose to venture into our vast galaxy will be well prepared and well fed as they embark on the adventure of expanding human life to another planet.

Learn more about Mars on NOVA's Ultimate Mars Challenge, premiering Wednesday, November 14 at 9 pm ET on most PBS stations. Please check your local listings to confirm when it will air near you.


Poring Over Einstein's Brain

Einstein playing the violin

Albert Einstein died in 1955, but not before leaving science one more gift—his brain. Pathologist Thomas Stoltz Harvey removed and preserved Einstein's brain after conducting an autopsy, and researchers have been studying it ever since, hoping to discover anything that may have led to the physicist's genius.

Unfortunately for neuroscientists, Einstein's brain hasn't been in one piece since shortly after the autopsy. Harvey sectioned it into 240 pieces, many of which are missing. Luckily, before dicing up the organ, he also photographed it, though like the brain sections, many of those photos have been lost. But now a cache of Harvey's personal effects has been unearthed, including a number of photographs that are new to science. From the images, researchers have produced the most thorough description to date of Einstein's brain.

"Thrilling, in a word." says Dean Falk, a senior scholar at the School for Advanced Research in Santa Fe and a professor of anthropology at Florida State University, of studying the newly discovered images. Falk authored a paper detailing the findings, which will appear in the journal Brain, along with Fred Lepore, a neuro-ophthalmologist at Robert Wood Johnson Medical School in New Jersey, and Adrianne Noe, director of the National Museum of Health and Medicine, the institution that now has the photographs.

"It's a very comprehensive description of the outward appearance of Einstein's brain," says Sandra Witelson, a professor at the Michael G. DeGroote School of Medicine at McMaster University in Canada who has also published research on photographs of Einstein's brain. "On [Harvey's] return to New Jersey in the 1990's, he had hoped to compile an atlas of all the photographs and slides of Einstein's brain. The current report by Dr. Dean Falk and colleagues partly fulfills this plan."

Falk and Lepore had inklings that these photographs existed, but hadn't been able to get their hands on them until they were donated by the Harvey family to the National Museum of Health and Medicine. Noe made the photographs and other materials from the Harvey cache available to the two scientists for eight hours one day in the middle of September. "That was the breakthrough," Lepore says. "We got a lot of photographs."

After Falk and Lepore photographed the originals, they pored over the two-dimensional images for months, tracing every gyrus in an attempt to coax new information from the folds of Einstein's brain. Falk was well-suited to the task of analyzing a brain in such an abstract state—as an anthropologist, she frequently analyzes skulls for clues about the organs they once contained.

"Because they are two-dimensional photographs, I had to do a lot of mental rotation to be sure a feature that I saw in one view, when I saw that feature in another view, did that identification still make sense?" she recounts. Perhaps ironically, the spatial reasoning skills Falk relied on so heavily in her study were the very same at which Einstein excelled, a fact which may have been influenced by his unusual parietal lobes. "He would have been much better at studying his brain than I was," Falk jokes.

Falk and Lepore compared Einstein's brain to 85 other brains well-known in scientific circles. They mapped every portion they could, identifying characteristics which stood out in comparison to the other specimens. The primary somatosensory and motor cortices in his left hemisphere—which are responsible for the sense of touch and the planning and execution of motion, respectively—were much bigger than was expected.

But that wasn't all that was unusual. "Einstein's brain has an extraordinary prefrontal cortex, which may have contributed to the neurological substrates for some of his remarkable cognitive abilities," they wrote. In particular, he had four gyri in that region where most of us only have three.

The prefrontal cortex is involved in higher cognitive functions, "including remembering things, keeping them online, and my favorite, daydreaming and planning the future," Falk says in an interview with NOVA scienceNOW. "It's perhaps appropriate because Einstein was famous for his thought experiments." But as to whether the presence of a fourth gyrus had any effect on that, she says, "We can only speculate."

This isn't the first discovery Falk has made regarding Einstein's brain. In 2009, she published a paper showing the physicist also shared a feature common among certain musicians—a knob-shaped fold in the motor cortex, the region that controls motion. Specifically, people who learn to play stringed instruments in childhood tend to develop a knob in the area that controls finger movements. Einstein, as it turns out, was a lifelong violinist.

Still, there is only so much that can be gleaned from photographs of the brain's surface. "When we look at photos, we are literally just scratching the surface because that is all we're seeing," Falk says. But that's not to say such studies are fruitless. "There's been this revolution in the contemporary neurosciences where now we have more information about what's going on underneath that surface, which enables us to perhaps better interpret what functional correlates of that surface may be."

Falk, Lepore, and Noe hope this paper represents more than just another scholarly publication. They hope it represents the start of a new chapter in the study of Einstein's brain. "As far as we know, that set of photographs had not been viewed by the scientific community since the mid-50s," Lepore says. Both Lepore and Falk credit the Harvey family, Noe, and the museum for making the photographs available to scientists and the public. "That should have been done in 1955," Falk says. "It's turning things around."

Source:

Falk, Dean, Frederick E. Lepore, and Adrianne Noe. 2012. "The cerebral cortex of Albert Einstein: a description and preliminary analysis of unpublished photographs." Brain. DOI: 10.1093/brain/aws295


A House Made of Garbage

Brighton Waste House. Courtesy of BBM Sustainable Design Ltd. All rights reserved.

In our throwaway society, many things that we discard still have value. An architect and a reuse and recycling advocate in Brighton, England, plan to demonstrate just how useful our trash can be. On November 19, 2012, they will begin building a house from discarded construction material, paper, videocassettes, even toothbrushes.

Have a listen:

While researching the NOVA scienceNOW segment on "augmented reality," premiering this Wednesday at 10 pm as part of NOVA scienceNOW's "What Will the Future Be Like," I came across some fascinating research on the new science of haptics. Haptics actually enables us to "touch" objects in a virtual world. As Katherine J. Kuchenbecker, Assistant Professor in the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania, describes it, haptics is "the science of improving human interaction in an augmented world through the power of touch." Imagine what online shopping will be like in the future, when, thanks to the power of haptics, you will be able to feel a pair of corduroys before you buy them!

But Kuchenbecker thinks haptics will help us with more than just online shopping. She wants to use it to save lives, starting with a robotic surgical system called da Vinci. This groundbreaking technology gives surgeons the ability to operate less invasively with the help of robotic arms. Surgeons currently operate the da Vinci by looking through a tiny camera while using game-like joysticks to manipulate the robotic arms. The system allows surgeons to cut with incredible precision, but there's a problem: in the process, they lose their sense of touch.

"Robotic surgery systems enable the doctor to operate on a patient through tiny incisions, which is a great benefit for recovery," says Kuchenbecker, "but current systems don't let the doctor feel what the tools are touching."

Kuchenbecker has found a clever way around this, by enhancing da Vinci with haptics. In the video below, watch as David Pogue takes it for a test drive. After viewing Kuchenbecker's da Vinci in action, read on for more on the science behind how it works.

After sensors record the vibrations of the surgical instruments, tiny motors called actuators reproduce them at the surgeon's hand. "Every time the robotic instruments touch each other," says Kuchenbecker, "or touch something in the operative field inside the patient, those vibrations travel up the tool and we measure them with our sensors and we immediately recreate them at the surgeon's hand so they can feel it almost as though they were holding the tools themselves."
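The signal path is simple in concept, even if the engineering is not: sense, scale, reproduce, with as little delay as possible. Here is a minimal sketch of that loop in Python. The sensor and actuator functions are hypothetical stand-ins, not Kuchenbecker's actual interfaces, and the signal is synthetic.

```python
import math

GAIN = 1.0  # scale between sensed and reproduced vibration (assumed)

def read_tool_vibration(t: float) -> float:
    """Stand-in for the tool-mounted accelerometer: here, a synthetic
    200 Hz buzz like the one two instruments clicking together might produce."""
    return 0.5 * math.sin(2 * math.pi * 200 * t)

def drive_hand_actuator(amplitude: float) -> None:
    """Stand-in for the small motor (actuator) at the surgeon's grip."""
    print(f"actuator command: {amplitude:+.3f}")

# Sample at 1 kHz for 10 ms: sense, scale, and immediately reproduce.
# Low latency is the whole game -- delay would make the touch feel wrong.
for i in range(10):
    drive_hand_actuator(GAIN * read_tool_vibration(i / 1000.0))
```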

When Hurricane Sandy ripped through the Eastern United States, it took down power lines, sent sea water gushing into substations, and knocked out connections to power plants. Millions of people were without electricity, but more important, dozens of hospitals lost power from both the grid and their secondary and tertiary backup systems. Cleaning up the mess is the first priority, but a close second will be evaluating how the grid could better cope with disasters of this magnitude.

That question comes at an opportune time. We're in the midst of a lengthy and expensive overhaul of our nation's electrical grid, one that heralds a new, "smarter" future. Power generation and delivery haven't changed much since the days of Edison and Tesla, but a new wave of technologies, known collectively as the smart grid, will modernize the industry. Some utility companies have already started down this road, installing smart meters that communicate demand with operators. But could smart grid technologies have helped during Hurricane Sandy, or any other large natural disaster, for that matter? The answer is yes and no, and which part of that answer is right depends on how you define the smart grid.

The smart grid isn't just one technology, but a whole host of new systems which, hopefully, will combine to make our electrical distribution system more robust and efficient. It involves everything from intelligent washing machines, which run only when electricity demand is low, to dynamic power plants, which can quickly spool up in response to spikes in demand.
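The appliance side of that vision is easy to caricature in code. Below is a toy demand-response loop for a "smart" washer; the hourly prices are invented, standing in for the live price or demand signal a smart meter would actually supply.

```python
# Toy demand-response loop: the washer waits for a cheap, low-demand window.
hourly_price = [0.18, 0.17, 0.15, 0.09, 0.07, 0.08, 0.12, 0.20]  # $/kWh (invented)

PRICE_THRESHOLD = 0.10  # run only when electricity is cheap (assumed)

for hour, price in enumerate(hourly_price):
    if price <= PRICE_THRESHOLD:
        print(f"hour {hour}: ${price:.2f}/kWh -- starting wash cycle")
        break
    print(f"hour {hour}: ${price:.2f}/kWh -- waiting for cheaper power")
```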

Much of the smart grid, though, still relies on the same physical network we have today. The distribution system may become more responsive, but physically it won't change much. That means when a substation is flooded or a tree knocks down a power line, the juice will stop flowing, just as it does now. And when that happens on a large scale, as it did during Hurricane Sandy, millions of people will still lose power. There's not much an intelligent system can do to guard against physical damage.

And when there is widespread physical destruction of the grid, "There's a limited amount the smart grid can do," says Mark McGranaghan, vice president of power delivery and utilization at the Electric Power Research Institute. During smaller disasters, a smart grid could more deftly reroute power around downed lines than a traditional grid, ensuring customers who needn't lose power don't. But that would only work if the alternate routes are still functioning. If they are damaged, you're still out of luck and out of power. The smart grid, McGranaghan says, is no substitute for system hardening.
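McGranaghan's caveat is easy to see in graph terms: rerouting just means finding another live path from generation to load, and no amount of intelligence helps when none exists. A small sketch, using an invented four-node network and a plain breadth-first search:

```python
from collections import deque

# Toy distribution network: nodes are substations, edges are lines.
# Topology and outage scenarios are invented for illustration.
lines = {
    ("plant", "sub_a"), ("plant", "sub_b"),
    ("sub_a", "neighborhood"), ("sub_b", "neighborhood"),
}

def has_power(downed: set) -> bool:
    """Can the neighborhood still be reached from the plant, avoiding downed lines?"""
    graph = {}
    for u, v in lines - downed:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)
    seen, queue = {"plant"}, deque(["plant"])
    while queue:
        node = queue.popleft()
        if node == "neighborhood":
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(has_power({("sub_a", "neighborhood")}))   # True: reroute via sub_b
print(has_power({("sub_a", "neighborhood"),
                 ("sub_b", "neighborhood")}))    # False: no alternate path left
```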

System hardening means beefing up infrastructure to prevent damage from weather and other disasters. It can include measures like using concrete utility poles instead of wooden ones, burying cables underground, or raising substation equipment above the level of flood waters. System hardening is not entirely distinct from smart grid approaches--information relayed by smart technologies can guide hardening efforts--but it can be done independently of "smart" updates.

That's not to say the smart grid won't be useful in the case of disasters. Vermont, for example, has widely deployed smart grid technologies, including smart meters and grid sensors. "When that last hurricane went through the Northeast, they had an easier time getting power restored in Vermont because they could spot the shortages more easily," says Maggie Koerth-Baker, author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. "They were able to actually spot the downed wires through the system." That allowed crews to focus on repairing downed lines rather than searching for them. The same happened after storms swept through the Southeast earlier this year, McGranaghan says. Crews in Chattanooga were able to repair the system in much less time thanks to smart grid technologies.

Many smart grid technologies are better suited to helping a system recover from disaster, but to keep the power flowing during an event, experts are bullish on microgrids. Also considered a member of the smart grid pantheon, microgrids can function autonomously if the larger grid fails, says Alexis Kwasinski, a professor at the University of Texas. They derive their power from a variety of sources, including diesel generators, natural gas-powered microturbines, photovoltaics, and small wind turbines. Microgrids are expensive, though, so they are most commonly used where a continuous power supply is deemed worth the added cost, such as hospitals, telecommunications equipment, and computer server farms, Kwasinski says. (Incidentally, Thomas Edison's first power plant in Manhattan, Pearl Street Station, is considered a microgrid, since it served electricity to only a small section of the city.)

While many smart grid technologies are still being rolled out, microgrids already have a good track record when disasters strike. The continuity of cell phone service is perhaps the most conspicuous example. After the earthquake off the coast of Sendai, Japan, Kwasinski says a microgrid operated by NTT kept power flowing long after the main grid had failed, allowing people to stay in touch. Another in Garden City, New York, operated well after Hurricane Irene in 2011, he adds. And during Sandy, widely deployed microgrids may have helped cell service remain operational long after the grid went down.

Still, even microgrids may not survive powerful or widespread disasters. "We have to look at the capability of the infrastructure to withstand these events," McGranaghan says. During disasters, the smart grid's virtues may not be advantageous because the system is built atop the same, fragile grid as before. System hardening would change that, but like smart grid enhancements, it is not an inexpensive proposition. Fortunately, the smart grid can inform where engineers should focus on hardening the grid. "If we know we can use the smart grid to respond better, maybe that will influence those decisions," he says.

Note: We will be launching a new NOVA Lab on energy, renewables, and the smart grid in the coming weeks. Check the NOVA Labs site soon.

Intelligence tests have had many uses throughout their history--as tools to sort both schoolchildren and army recruits, and, most frighteningly, as methods to determine who is fit to pass on their genes. But as intelligence testing has made its mark on the educational and cultural landscape of the Western world, the concept of intelligence itself has remained murky. The idea that an exam can capture such an abstract characteristic has been questioned, but never rigorously tested--perhaps because conceiving of a method to measure the validity of a test that evaluates a trait no one fully understands is impossible. IQ tests have proved versatile, but are they legitimate? To what extent do intelligence tests actually measure intelligence?

Reporter and philosopher Walter Lippmann, who published a series of essays in the 1920s criticizing the Stanford-Binet test, wrote that it tests "an unanalyzed mixture of native capacity, acquired habits and stored-up knowledge, and no tester knows at any moment which factor he is testing. He is testing the complex result of a long and unknown history, and the assumption that his questions and his puzzles can in 50 minutes isolate abstract intelligence is, therefore, vanity."

Lippmann criticized the tests over 80 years ago, but already he recognized that people needed to approach their results with caution--advice now made more salient by a number of studies revealing interesting phenomena that validate his and other test skeptics' opinions.

As it turns out, a number of variables, none of which have to do with brainpower, can influence test scores.

For example, many researchers have discovered that people from minority racial groups often perform worse on intelligence tests than their white counterparts, despite a lack of evidence that they are actually less intelligent.

Another study has shown that intelligence is not fixed throughout one's lifetime: Teens' IQs changed by as much as 20 points over four years, raising questions about some modern IQ test uses. Many gifted programs, for example, use IQ tests to select student participants. But what does it mean if, over their school careers, students who fell below the cut-off grew more "gifted" than those in the program? Should they have been denied the enrichment opportunity, even though they later were revealed to be just as intellectually capable as the students who were allowed to enroll?

External cues that influence one's self-perception--such as reporting one's race or gender--also influence how one performs on intelligence tests. In a blog post for Scientific American, writer Maria Konnikova explains, "Asian women perform better on math tests when their Asian identity is made salient--and worse when their female identity is. White men perform worse on athletic tasks when they think performance is based on natural ability--and black men, when they are told it is based on athletic intelligence. In other words, how we think others see us influences how we subsequently perform."

If one's performance on IQ tests is subject to so many variables outside of natural ability, then how can such tests measure intelligence accurately? And does one's level of innate intelligence even matter? Is it correlated with success?

In one study, Robert Sternberg, a psychologist at Tufts University, found that in the Western world, high scores on intelligence tests correlated with later career success, but the people he tested live in a culture that places enormous emphasis on achievement on such tests.

Imagine a student with great SAT scores who later goes on to excel in her career. One could say that the student was very smart, and her intelligence led her both to succeed on the test and in her career. But one could also say the student was particularly skilled at test-taking, and since she lived in a society that valued high test scores, her test-taking ability opened up the door to a great college education. That in turn gifted her with the skills and connections she needed to succeed in her chosen field.

Both of these scenarios are over-simplifications. Intelligence--however murky it may be and however many forms it may come in--is undoubtedly a real trait. And intelligence tests have persisted because they do provide a general way to compare people's aptitude, especially in academic settings. But from their invention, intelligence tests have been plagued by misinterpretation. They have been haunted by the false notion that the number they produce represents the pure and absolute capacity of someone's mind, when in reality studies have shown that many other factors are at play.

Donna Ford, a professor of Education and Human Development at Vanderbilt, writes, "Selecting, interpreting and using tests are complicated endeavors. When one adds student differences, including cultural diversity, to the situation, the complexity increases... Tests in and of themselves are harmless; they become harmful when misunderstood and misused. Historically, diverse students have been harmed educationally by test misuse."

If current intelligence tests are subject to factors outside of intelligence, can a new assessment be developed that produces a "pure" measure of innate intelligence? Scientists are starting to examine biology, rather than behavior, to gain a new perspective on the mind's ability. A team of researchers recently set out to understand the genetic roots of intelligence. Their study revealed that hundreds or thousands of genes may be involved. "It is the first to show biologically and unequivocally that human intelligence is highly polygenic and that purely genetic (SNP) information can be used to predict intelligence," they wrote.

But even if researchers discover which genes are related to intelligence, a future in which IQ tests are as simple as cheek swabs seems unlikely. Most contemporary intelligence researchers estimate that genetics accounts for only around 50 percent of one's cognitive ability; the environment in which one learns and grows determines the other half. One study found that biological siblings, when raised together, had IQ scores that were 47 percent correlated. When raised apart, that figure dipped to 24 percent.

And, though world knowledge and problem-solving skills are commonly tested, the scientific world has yet to reach a consensus on a precise definition of intelligence. Perhaps such a definition is impossible.

So perhaps Galileo would have stolen Mozart's prestigious preschool seat. But that preschool admissions officer would likely whack herself in the head a year or two later as she heard of the prodigious star wowing audiences with his musical compositions.

Discarding intelligence tests altogether might be too harsh a reaction to their flaws. After all, they are useful when interpreted correctly in the right settings. Instead, we need to understand the tests' limitations and their potential for misuse to avoid determining a person's worth--or future trajectory--with a single number.

Who was smarter--Galileo or Mozart?

Answering that question seems impossible. After all, the former was an astronomer, the latter, a composer. Asking which of the two was smarter seems akin to forcing someone to objectively determine if apples are better than oranges. But while humans have not yet invented a scale to measure the value of fruit, we do have one that measures brainpower: the IQ test. And according to a book by Stanford psychologist Catherine Cox Miles, Galileo's IQ was around 20 points higher than Mozart's. The number may seem trivial, but if both of them were three years old today, competing for a slot at a private New York City preschool, Galileo would likely edge out Mozart. But should he? Is there such a thing as a true measure of something as intangible as intelligence?

The notion of assigning a numerical value to intelligence dates back to the early 20th century, when French psychologist Alfred Binet created a series of tests to help Parisian public schools identify "mentally defective" children. Between 1904 and 1911, Binet and his colleague Theodore Simon observed the skills of "average" French schoolchildren, then created a series of tests for students between the ages of three and 12 designed to assess whether their abilities were above or below the norm.

To calculate a student's "intelligence quotient," Binet and Simon simply took his mental age, divided it by his actual age, and then multiplied by 100. For example, if a seven-year-old could perform the tasks required of a nine-year-old, his IQ would be (9 ÷ 7) × 100, or about 129.
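For the curious, the arithmetic is a one-liner; here it is as a quick sketch using the example above.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Binet and Simon's original 'ratio IQ': mental age over actual age, times 100."""
    return mental_age / chronological_age * 100

print(f"{ratio_iq(9, 7):.1f}")  # 128.6 -- the seven-year-old example above
```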

Intelligence testing reached the United States in 1916, when psychologist Lewis Terman created a new, refined intelligence scale based on the abilities of thousands of students--significantly more than the fifty or so Binet studied. Today, psychologists use a revised version of Terman's scale to evaluate children in five categories: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing and working memory. Big differences in a student's scores across these categories can help psychologists diagnose learning disabilities.

Try to answer some questions from a real IQ test yourself. Below are some examples, one from each of the five categories.

Fluid Reasoning:
1. "I knew my bag was going to be in the last place I looked, so I looked there first." What is silly or impossible about that?

Knowledge:
2. What does cryptic mean?

Quantitative Reasoning:
3. Given the numbers 3, 6, 9, 12, what number would come next?

Visual-spatial Processing:
4. Suppose that you are going east, then turn right, then turn right again, then turn left. In what direction are you facing now?

Working Memory:
5. Repeat a series of digits (forward or backward) after hearing them once.

Source: Introduction to Psychology by Dennis Coon and John O. Mitterer

Just a few years after Terman brought the IQ test to the United States, it left the classroom--and entered the military. During World War I, the number of army recruits exploded from around 200,000 in March of 1917 to over 3.5 million in November of 1918. As the military grew, so too did the need for trained officers; the most intelligent recruits needed to be identified early so they could enter officer training programs.

Thus Harvard psychologist Robert Yerkes developed the Army Alpha and Beta tests. Modeled after the Stanford-Binet scale, the tests were designed to give commanders a sense of the intelligence of the men they were leading and to screen soldiers for officer potential. Unlike Terman's IQ test, the army exams could be administered to recruits en masse and the results could be summed and interpreted without the expertise of a psychologist. During WWI, over 1.7 million men took the intelligence tests.

Think you have what it takes to be an officer? Try the questions below--they appeared on real Army Alpha tests.

1. If you saw a train approaching a broken track you should:
A. telephone for an ambulance
B. signal the engineer to stop the train
C. look for a piece of rail to fit in

2. Why is beef better food than cabbage? Because:
A. it tastes better
B. it is more nourishing
C. it is harder to obtain

3. Why do some men who could afford to own a house live in a rented one? Because:
A. they don't have to pay taxes
B. they don't have to buy a rented house
C. they can make more by investing the money the house would cost

4. A dealer bought some mules for $1,200. He sold them for $1,500, making $50 on each mule. How many mules were there?

5. Unscramble the words to form a sentence. Then indicate if the sentence is true or false.
a. happy is man sick always a
b. day it snow does every not

Answers: 1.) B 2.) B 3.) C 4.) 6 ($1,500 - $1,200 = $300 profit; $300 ÷ $50 per mule = 6 mules) 5a.) False - "A sick man is always happy." 5b.) True - "It does not snow every day."

Sources: historymatters.com and official-asvab.com

While the tests helped educators and administrators in the early 20th century understand more about their students and recruits, they had already begun to stray from Binet's original intention. People began to use them as indicators of general aptitude, removing them from the classroom context for which they were intended. Suddenly, an absolute measure existed for a trait that had never been absolute--adding fuel to the fire of the growing eugenics movement. Rather than simply suggesting that one would do well in grade school, higher scores on Binet's test started to mean that one was more fit for breeding. The Advanced Learning Institute reported that between 1907 and 1965, thousands of people were sterilized on the basis of low scores on intelligence tests that characterized them as "feeble-minded."

In 1924, 18-year-old Carrie Buck became the first person subjected to Virginia's Eugenical Sterilization Act. She was classified as "feeble-minded" after a version of the Stanford-Binet test revealed that she had a mental age of nine. Carrie resided in the State Colony for Epileptics and Feeble-Minded in Virginia, the superintendent of which decided that she would be the first person subjected to the new law.

According to Paul Lombardo, a professor at the University of Virginia and an expert on Buck's history, others arranged a trial for Carrie to challenge the new law. Carrie was unable to convince the court of her mental capacity, but her lawyer appealed the court's decision, arguing the new law was discriminatory. The case, Buck v. Bell, went all the way to the Supreme Court, but ultimately, in 1927, the court deemed that there was nothing unconstitutional about Virginia's new law. Carrie, along with around 8,300 other Virginians, was sterilized.

It took the rise of Nazi Germany for people in the United States to recognize the horrific consequences of eugenics. But, chillingly, though the sterilization of individuals in mental institutions came to a halt in the 1970s, the Buck v. Bell decision has never officially been overruled.

Editor's note: NOVA scienceNOW explores the science of intelligence on "How Smart Can We Get," premiering Wednesday, October 24 at 10 pm ET on most PBS stations.


Cryptography: Encryptions Present

Ever since writing has existed, people have wanted to send secret messages to one another--and others have wanted to intercept and read them. This is the third installment of the blog series taking you through the history of cryptography, its present, and future possibilities of unbreakable codes. Follow the links to read the first and second parts of the series.

Last time I talked about complex polyalphabetic ciphers--techniques of encoding information that make it nearly impossible to reveal the message without access to a secret key. Because the encryption processes themselves are so intricate, the main security challenge becomes keeping that key private. In order to tell a secret, you already have to share a secret--the key--and your encryption will be worthless if an eavesdropper finds a way to get ahold of it.

But in 1976, Whitfield Diffie and Martin Hellman proposed a method that eliminated the need to give away the key at all, and a year later Ron Rivest, Adi Shamir, and Leonard Adleman turned the idea into a practical cryptosystem. Their method, public key cryptography, is still used today to make secure transmissions of sensitive data and to prove the identities of people online. Public key cryptography turns traditional cryptography on its head: instead of keeping the key a secret, every receiver creates and broadcasts his own individualized key for everyone to see, and anybody who wants to send him a message will use that key to encode it. Because of the way the key was created, he will be the only one who can decode it.

The trick to creating this kind of public key is to use what mathematicians call a one-way function: a type of math operation that's easy to do in one direction but nearly impossible to undo without additional information. One example of a one-way function is multiplying two super-big prime numbers. The multiplication part is easy, but to undo that multiplication, you need to know what the original prime numbers were. And as mathematicians can attest, factoring a number into primes is a ton of work--pick a number big enough, and it'll take all the computers in the world longer than the age of the universe to find the factors.
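You can feel this asymmetry even at toy scale. The sketch below multiplies two modest, well-known primes (the 1,000,000th and 2,000,000th primes, tiny by cryptographic standards) in an instant, then recovers one by brute-force trial division, which already takes visibly longer; at real key sizes the gap grows to astronomical proportions.

```python
import time

# The "easy" direction: multiplying two primes is a single operation.
p, q = 15_485_863, 32_452_843  # the 1,000,000th and 2,000,000th primes
n = p * q

# The "hard" direction: recovering p from n by trial division.
def smallest_factor(n: int) -> int:
    d = 3  # n is odd here, so only odd candidates need checking
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

start = time.perf_counter()
print(smallest_factor(n), f"recovered in {time.perf_counter() - start:.1f} s")
# Even at this toy size, undoing the multiplication takes seconds, not
# microseconds; at real key sizes it becomes effectively impossible.
```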

The RSA public-key cryptosystem (named after Rivest, Shamir, and Adleman, of course) is still in use today, and it works along exactly these lines. Each person's public key is a version of a large number built from two primes, and only someone with knowledge of the number's factors--the private key--can decode something encoded using the public key.
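Here are those lines in miniature, using the standard textbook example with absurdly small primes. Everything below is ordinary arithmetic, with Python's built-in pow doing the modular heavy lifting; real keys use primes hundreds of digits long.

```python
# Toy RSA, end to end, with textbook-small primes.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # 2753: private exponent, e's inverse mod phi

message = 65                       # any number less than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (n, e)
decrypted = pow(ciphertext, d, n)  # only the holder of d can reverse it

print(ciphertext, decrypted)  # 2790 65
```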

The other popular public key cryptosystems today work similarly. They each use a mathematically "hard problem" to create keys so that anyone can encode messages to specific people but only the intended recipients will have the extra information needed to reverse it.

Now, if people wanted to stop at this level of security, it would be perfectly understandable. With the computers we have now, public key cryptography is certainly secure enough--so secure, in fact, that it has prompted the governments of several countries to put limits on key size, and even to restrict the export of strong encryption. After all, governments want to be able to read everybody's mail--it wouldn't do for foreign states to have better encryption systems. Public key cryptography is the system that makes e-commerce possible, and it is a standard for high-importance confidential messages. But there is always a chance that someone will find a way to beat the system and recover the private key from the public one.

Enter the next big step, at least in theory--the quantum computer.

More on that next time.

Over the years I've had the unfortunate experience of leaving great stories on the "cutting room floor." One of them is the story of painter Anne Adams and composer Maurice Ravel, two people who lived in different countries almost a century apart yet had an extraordinary connection. Researching a documentary, especially one that explores a scientific mystery, is a process full of twists and turns. Though it didn't make it into the final cut of NOVA scienceNOW's "How Smart Can We Get," the story of Adams and Ravel is one twist that has stayed with me.

Editor Jedd Ehrmann and I spent weeks looking for a way to integrate this story into the program, but alas, we failed! Thanks to the internet, we have the opportunity to share it with you here. I hope you'll take a few minutes to watch it, then read on for more.

I came across the story while investigating a rare neurological disorder called acquired savant syndrome for NOVA scienceNOW. I knew what savant syndrome was from watching the movie "Rain Man." Dustin Hoffman plays the part of Raymond Babbitt, a savant with the uncanny ability to remember everything he's ever read. Savants have skills they never learned. Some, like the famous Kim Peek, have extraordinary memories; others, like blind, autistic savant Leslie Lemke, are natural born musicians. Leslie couldn't stand until he was 12 and didn't walk until he was 15, but at 16 he sat down at the piano and played Tchaikovsky.

Cases like these are very rare, but cases of "acquired" savant syndrome are even scarcer. Only a few dozen cases have been found to date. They were discovered by the man who gave the syndrome its name, psychiatrist Darold Treffert.

Treffert lives in Fond du Lac, in the middle of the Wisconsin countryside. He started the Savant Institute in his home office--a tiny room in his basement. The space is filled with file cabinets stuffed with documents he's collected over more than 40 years, describing the cases of over 300 savants. While collecting these stories, he came across a few dozen cases of people who suddenly developed savant abilities after a head injury: acquired savants.

People like acquired savant Derek Amato. Amato, who is featured in the program, felt a sudden, compulsive desire to play piano--and, to his own surprise, found that he knew how to do so--after a concussion. Jon Sarkin became a painter after a stroke. After a brutal mugging, Jason Padgett began drawing extraordinary images based on mathematical equations.

What does Anne Adams have to do with these cases? Although Anne was not an acquired savant, she did experience a sudden burst of creativity late in life, after she was diagnosed with a rare form of dementia. MRI scans revealed what was happening in her brain as Anne's dementia progressed and her artistic ability flourished. (More about this can be seen in the program.) Treffert believes her case gives us a one-of-a-kind glimpse into how sudden abilities emerge in the injured brain.

After learning about Anne I tracked down her family in Vancouver, Canada. Her husband, Professor Robert Adams, and son Alex shared stories of how art slowly took over Anne's life. They have preserved hundreds, possibly thousands, of her paintings and drawings, which fill the walls of Alex's home. But one of their favorites was given to Dr. Bruce Miller, director of the Memory and Aging Center at the University of California, San Francisco.

Dr. Miller is the neurologist who diagnosed Anne. Over the years he has discovered a handful of patients like Anne who experienced a sudden burst of creativity during the course of the same form of dementia.

Miller was especially taken with Anne's case, and her artwork. In fact, the painting Anne's family gave him hangs in his office; it's called "Unraveling Bolero."

And this is the part of the story that hit the cutting room floor--the part that has nothing to do with acquired savant syndrome but is a fascinating tale on its own. The story of how two people who lived worlds apart are connected through the neurological changes that were taking place in their brains. Miller found their connection so fascinating that he and his colleagues wrote a paper about it. Though it didn't make it into the broadcast, I'm glad to have the opportunity to share it with you here.

Editor's note: "How Smart Can We Get?" premieres Wednesday, October 24 at 10 pm ET on most PBS stations.


Teaching Robots How to Walk and Dance

Over the weekend, a new video on the internet took the world by storm. Not exactly news in itself, but who—or rather what—starred in the video makes it noteworthy.

Like most viral diversions, this latest video was a riff on another campy sensation—Gangnam Style, the K-pop music video featuring a slick-haired, doughy rapper who rides an imaginary horse across all manner of over-saturated backdrops. The new video starts with Gangnam Style's familiar bass beat, but instead of the Korean sensation Psy bouncing around the screen, there's a white clad, black-visored robot waving its arms and banging its head.

CHARLI, the robot in the video, isn't nearly as fluid as Psy, and his leg lifts are restrained compared with Psy's manic prancing. But amongst robots, CHARLI is a bona fide Michael Jackson.

Dennis Hong and his Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech built CHARLI to study bipedalism in robots. "CHARLI's groovy dance moves were just done for fun in the lab during our 'free time,' " Hong says. In addition to dancing, the robot competes in the vaunted RoboCup, a soccer league where roboticists test the speed and agility of their creations.

To get the robot to move to the beat, Hong and his team scripted the entire dance. It was programmed frame by frame on a computer, not constructed by recording the captured motion of a human dancer. "If you simply do a 'motion capture' of a person dancing and 'playing that motion back' on a robot—which is often done in generating the motions for characters in video games or movies using computer graphics—it does not work. The robot will fall," Hong points out. That's because a robot's center of gravity, and the center of mass in each of its body parts, is different from a human being's, he says. A human moving his head, for example, will compensate differently from a robot doing the same thing.
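Hong's point about mismatched mass can be sketched with grade-school physics: the combined center of mass has to stay over the feet, and the very same pose shifts it differently when the mass distribution differs. The numbers below are invented for illustration (a one-dimensional, side-to-side simplification, not RoMeLa's actual controller):

```python
# Why playing back a human's motion can topple a robot: the combined center
# of mass (CoM) must stay over the support region, and the same pose moves
# it differently when the masses differ. All numbers are invented.

def center_of_mass(links):
    """links: list of (mass_kg, lateral_position_m) for each body part."""
    total_mass = sum(m for m, _ in links)
    return sum(m * x for m, x in links) / total_mass

SUPPORT_HALF_WIDTH = 0.10  # support region: +/- 10 cm around the stance foot (assumed)

def balanced(links) -> bool:
    return abs(center_of_mass(links)) <= SUPPORT_HALF_WIDTH

# Same pose (same positions), different mass distributions:
human = [(40, 0.0), (8, 0.1), (4, 0.5)]  # torso, arms, head (kg, m)
robot = [(10, 0.0), (4, 0.1), (3, 0.5)]  # head and arms are a bigger share of mass

print(balanced(human))  # True  -- CoM sits about 5 cm off center
print(balanced(robot))  # False -- CoM drifts past 11 cm; the legs must compensate
```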

CHARLI isn't dancing on its own yet, but the performance is still a tour de force of flexibility and dynamism. At five feet tall, CHARLI is not a small robot, and such size complicates matters greatly. For example, to flail its arms, CHARLI's actuators must be powerful enough to quickly overcome the limbs' inertia. Balance is another challenge--all that mass moving around so rapidly could easily upset a less sophisticated robot, even one that isn't following a motion-captured human dancer.

"Balance is difficult, especially if it is moving its limbs around in high speed," Hong says. "Normally the inertial forces created by the upper body motion is considered as disturbances by the lower body, and without coordination, the robot will fall. The lower body needs to compensate for the forces created by the upper body, and vice versa." Compensating for such upper body motions is state of the art, meaning CHARLI won't be jumping around like Psy—a hallmark of the Gangnam Style video—anytime soon. But don't count out future generations of robots.

Ultimately, Hong would love to have a robot that could not only jump around, but learn to dance on its own. "For the robot to really dance—besides its capability to be able to 'enjoy' it—requires many things besides the hardware design," Hong says. He lists the challenges: It must listen to the music, track the beat, and "understand" the musical style enough to construct an appropriate dance (something even many humans can't do). Then the robot must remain balanced throughout all the motions. Finally, Hong says, "trying to create a robot that can actually 'enjoy' the dance itself, that would be the most challenging of all."

Hong, CHARLI, and some of RoMeLa's other robots will be featured in the November 14 episode of NOVA scienceNOW. Watch a sneak peek of the episode in which CHARLI scores a goal in robot soccer, another challenging feat roboticists are striving to perfect.
