
Writing Can Do Wonders

You have no doubt heard that getting a problem "off your chest" will make you feel better. You may even have experienced the cathartic effect of talking about your worries. As it happens, disclosing does more than just make you feel better; it can actually change the workings of your brain when the pressure is on. The end result can be better performance under stress--on tests, in job interviews, and even on the playing field.

Take research conducted by one of my graduate students, Gerardo, at the University of Chicago. A few years back, Gerardo asked a group of students to take a difficult math test while we ratcheted up the stress. We used several different techniques for putting the pressure on--including offering students $20 for stellar performance and reminding them that, if they performed poorly, they would jeopardize the chances of a partner who also wanted to win money. We also videotaped students and told them that math teachers and professors would be watching the tapes to see how they performed.

Immediately after hearing what was on the line, we asked some students to write for about 10 minutes about their thoughts and feelings concerning the test they were about to take. We wanted the students to get their feelings about the pressure off their chests, so we told them that their writing couldn't be linked to them by name and that they should feel free to write openly about their worries. Other students were not given the opportunity to write, but just sat patiently for about ten minutes while the experimenter got all the testing materials together.

What we found was quite amazing. Those students who wrote for ten minutes about their worries before the math test performed roughly 15% better than the students who sat and did nothing before the exam. Keep in mind that this difference doesn't just reflect variation in math ability across our writing and no-writing groups. We know this because everyone took a practice math test before the experiment got started and there was no difference in performance between the two groups. Those students given the opportunity to write about their worries before the math test improved, while those who didn't write choked under the pressure.

These results have also been extended to high-school classrooms. In another study, ninth graders were randomly assigned to an expressive writing condition (writing about their worries about the upcoming test) or a control condition (thinking about items that would not be on the upcoming test) for 10 minutes immediately prior to the first final exam (biology) of their high-school career. Both students and teachers were blind to the particulars of the study and the condition students were in. Those students who expressively wrote outperformed controls. This was especially true for students who have a tendency to worry on tests (i.e., students high in test anxiety). Highly test-anxious students who wrote down their thoughts beforehand received an average grade of B+, compared with those who didn't write, who received an average grade of B-.

Why does such a simple writing exercise have such a big impact? The answer has to do with the content of the writing itself. Writing reduces people's tendency to ruminate because it provides them with an opportunity to express their concerns. Expressing concerns gives people some insight into the source of their stress, allowing them to reexamine the situation such that the tendency to worry during the actual pressure-filled situation decreases.

Worries are problematic because they deplete a part of the brain's processing power known as working memory, which is critical to successfully computing answers to difficult test questions. Working memory is lodged in the prefrontal cortex (at the very front of our heads, sitting just above the eyes) and is a sort of mental scratch pad that allows people to "work" with whatever information is held in consciousness, usually information relevant to the task at hand. When worries creep up, the working memory people normally use to succeed becomes overburdened. People lose the brain power necessary to excel.

For several decades, psychologists have been extolling the virtues of writing about personally traumatic events in your life, such as the death of a close family member or a difficult breakup. Time and time again, psychologists have found that, after several weeks of writing about a life stressor, people have fewer illness-related symptoms and even show a reduction in doctors' visits.

Expressing your thoughts and feelings about an upsetting event--whether a trauma in the past or a pressure-filled test coming up in the future--is similar to "flooding therapy," which is often used to treat phobias and posttraumatic stress disorder. When a person repeatedly confronts, describes, and relives thoughts and feelings about his or her negative experiences, the very act of disclosure lessens these thoughts. This is good for the body because the chronic stress that often accompanies worrying is a catalyst for health problems.

Disclosure seems to be good for the body and for the mind. When university freshmen, for example, are asked to write about the stress of leaving home for the first time and going off to college, they report a decrease in their worries and intrusive thoughts. Interestingly, writing about their worries also leads to improved working memory over the course of the school year. Expressive writing reduces negative thinking in stressful situations, freeing up brainpower to tackle what comes your way.

This essay was excerpted and modified from "Choke: What The Secrets Of The Brain Reveal About Getting It Right When You Have To" © 2011 by Sian Beilock. Published by Free Press, a Division of Simon & Schuster, Inc. Excerpted with permission from the author. All Rights Reserved.

Joe Palca, writing for NPR:

Scientists working on NASA's six-wheeled rover on Mars have a problem. But it's a good problem.

They have some exciting new results from one of the rover's instruments. On the one hand, they'd like to tell everybody what they found, but on the other, they have to wait because they want to make sure their results are not just some fluke or error in their instrument.

Curiosity's scientists aren't alone—the Higgs folks went through the same thing, double- and triple-checking their data (and probably more) before they were confident what they were seeing was a real result. That's just science. It pays to be careful. But given the leak, it's a good bet that the Curiosity scientists believe the result is bona fide and not just a fluke.

The NASA team has a lot on the line. The launch, transit, and landing were so flawless that expectations are high for the remainder of the mission. There's also a human dimension to the suspense—many scientists on the team have been dreaming of a day like this since they were kids.

Here's to hoping they really have found something amazing.


Encryption's Future: Quantum Cryptography

For as long as writing has existed, people have wanted to send secret messages to one another--and others have wanted to intercept and read them. This is the fourth installment of a blog series taking you through cryptography's history, its present, and the future possibility of unbreakable codes. Follow the links to read the first, second, and third parts of the series.

In the last post I talked about Public Key Cryptography, a system that derives its security from "hard" math problems like finding the factors of large numbers. But increases in computing power have made once-impenetrable codes solvable in just a few months. Keeping ahead of technological development by increasing the length of the secret number used to encode information--the key--is always an option, but there is still a chance that someone will find a tricky way to calculate your private key, letting them read all your messages.
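To get a feel for why longer keys help, here is a minimal sketch in Python, with naive trial division standing in for a real attack (the function name is mine, and actual codebreakers use far faster methods than this):

```python
def smallest_factor(n: int) -> int:
    """Find the smallest prime factor of n by naive trial division."""
    f = 2
    while f * f <= n:        # only need to search up to sqrt(n)
        if n % f == 0:
            return f
        f += 1
    return n                 # no factor found: n itself is prime

print(smallest_factor(3 * 5))              # returns 3 instantly
print(smallest_factor(999983 * 1000003))   # about a million loop steps already
```

Real attacks use far cleverer mathematics than this loop, but every one of them is a race against the key length--and there are tricks that could change the race entirely.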

One of those tricks could be quantum computing. Scientists have shown that a quantum computer could use some clever properties of quantum mechanics to help codebreakers solve many of the math problems at the heart of public key cryptography.

How? A quantum computer encodes "bits" of information in the properties of subatomic particles. And here is where things get strange: Because unobserved particles--according to quantum mechanics--exist in a combination of all possible states, a quantum computer is able to store and operate on multiple numbers at once. Suddenly, "impossible" tasks like factoring large numbers can be done in the same time it would take to multiply them.

Although quantum computers are still in their infancy--nothing with enough complexity to run that sort of program can be built yet--codemakers can see the demise of public key cryptography on the horizon. It's not like everyone would have a quantum computer, but chances are that someday in the future, if the government or a big business wants to read your encrypted mail, they'll have access to a computer that can do so.

But in 1984, Charles Bennett and Gilles Brassard found a way to transmit a key that is provably secure, even in the face of quantum computers. Their method, which is still in use today, exploits properties of quantum mechanics to transmit a random key--and any eavesdropping party would not only get useless information, but would also alert the sender and receiver to the attack. Pretty cool stuff.

First, some background: The One-Time Pad, which I discussed in a previous article, encodes data by combining it with a randomly generated key as long as the intended message. It's impossible to break the code and read the message without the key.
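To make that concrete, here is a minimal one-time pad sketch in Python (XOR is the standard way to combine message and key; the function names are my own):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same operation encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"MEET AT DAWN"
key = secrets.token_bytes(len(message))   # truly random, as long as the message
ciphertext = xor_bytes(message, key)

assert xor_bytes(ciphertext, key) == message   # the key recovers the plaintext
```

Without the key, every possible plaintext of the same length is equally consistent with the ciphertext--that's what makes the scheme unbreakable. The catch is that sender and receiver must somehow share that random key without anyone else seeing it.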

Bennett and Brassard's quantum key distribution protocol, called BB84, acts as a sort of high-tech One-Time Pad: A random key is generated using quantum mechanics and shared securely between two people, who can then use it to encode and send unbreakable messages however they want. It eliminates the problem of transferring a key securely. The process of generating the key takes advantage of the quantum mechanical property that measuring something can change it.

Here are the basics: photons are prepared by one person and measured by the other, and the cases where the two parties' filter choices match up become the basis for a random key.

The sender and receiver each have two kinds of polarized filters: one that only lets in horizontally or vertically oriented photons, and one that only lets in the diagonals. They agree that photons that are vertically or forward-diagonally polarized will represent binary 1, and photons that are horizontally or backward-diagonally polarized will be 0.

The two possible filters

The sender generates a photon and prepares it with one of the two random filters before sending it along an optical cable to the receiver. Once it's there, the receiver measures the photon with his own randomly chosen filter.

Sender and receiver both choose a horizontal/vertical filter; both measure 0.

Here's the tricky part. If the receiver measures the photon with the same filter as the sender, he'll get the same result. But if he uses the wrong filter, there's no such guarantee: If he's been sent a vertically polarized photon and he measures it with a diagonal filter, he has a 50% chance of getting each of the diagonals as his result.

Sender randomly chooses horizontal/vertical, but receiver randomly chooses diagonal. Chaos ensues!

The sender and receiver go through a long string of photons in this way, recording the bit values and which filter they used for each. Afterwards the receiver uses another line of communication--it doesn't have to be secret--to tell the sender which filters he used for each (without giving away the results of each measurement). The sender reveals which filters she used, and they agree to only count the photons where they used the same filters. That way, they know that they've measured each photon the same way, so they'll have the same values.

In this way they build up a secret, random string of numbers using the photons that they both measured in the same way. This string of numbers will be their key.
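A toy simulation makes the whole sifting dance visible (this is classical Python pretending to be photons, not real quantum hardware, and all the names are mine):

```python
import secrets

def rand_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sift(n_photons=1000):
    sender_bits = rand_bits(n_photons)      # the values she encodes
    sender_bases = rand_bits(n_photons)     # 0 = horizontal/vertical, 1 = diagonal
    receiver_bases = rand_bits(n_photons)   # his independent random choices

    # Measuring with the same filter reproduces the bit;
    # the wrong filter gives a 50/50 coin flip.
    received = [bit if sb == rb else secrets.randbelow(2)
                for bit, sb, rb in zip(sender_bits, sender_bases, receiver_bases)]

    # They publicly compare filter choices (never the bits themselves)
    # and keep only the positions where the filters matched.
    sender_key = [b for b, sb, rb in
                  zip(sender_bits, sender_bases, receiver_bases) if sb == rb]
    receiver_key = [b for b, sb, rb in
                    zip(received, sender_bases, receiver_bases) if sb == rb]
    assert sender_key == receiver_key   # no eavesdropper, so the keys agree
    return sender_key

print(len(bb84_sift()))   # about 500: half the photons survive sifting
```

Run it and roughly half the photons make it into the key; the rest are discarded because the filter choices disagreed.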

Now imagine that you are an eavesdropper. If you are able to intercept a photon between the users, you won't know how the sender prepared it, so you have a 50% chance of using the wrong filter. That means that, not only might you get the wrong answer, you might also mess up the value for the receiver because of that quantum mechanical property I mentioned where measuring the photon can change its value. In fact, if the sender and receiver compare the values of a few of their photons and find any disagreements, they can tell that somebody's been trying to read their photons and discard the suspect values from the key.

Eavesdropper tries to measure a photon but uses a different filter than the sender and receiver, messing up the measurements.
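The same toy model makes the detection guarantee concrete. This sketch adds an intercept-and-resend eavesdropper (again a classical simulation with names of my own choosing; real attacks can be subtler):

```python
import secrets

def measure(bit, prepared_basis, chosen_basis):
    # Right filter: faithful result. Wrong filter: coin flip, and the
    # photon leaves the measurement re-polarized in the chosen basis.
    return bit if prepared_basis == chosen_basis else secrets.randbelow(2)

def sifted_error_rate(n_photons=100_000):
    errors = sifted = 0
    for _ in range(n_photons):
        bit = secrets.randbelow(2)
        sender_basis, eve_basis, receiver_basis = (
            secrets.randbelow(2) for _ in range(3))
        eve_bit = measure(bit, sender_basis, eve_basis)        # Eve intercepts...
        receiver_bit = measure(eve_bit, eve_basis, receiver_basis)  # ...and resends
        if sender_basis == receiver_basis:   # only these positions enter the key
            sifted += 1
            errors += (receiver_bit != bit)
    return errors / sifted

print(sifted_error_rate())   # ~0.25: a quarter of the sifted bits disagree
```

By sacrificing and comparing even a modest sample of their sifted bits, the sender and receiver would spot that telltale 25% disagreement and know someone was listening.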

Oh, and by the way: Once the sender and receiver start revealing the filters they used, it's far too late for you to use those filters yourself. The photons are long gone.

There are a few commercial systems that implement quantum key distribution today, including ID Quantique, a spinoff from a University of Geneva experimental physics group. The technology has already been used in highly secure transmissions, from Swiss ballots to World Bank transactions.

But quantum key distribution hasn't become mainstream quite yet, mostly due to a few basic issues. For one, the machines are all handmade by physicists, so they are expensive and inconvenient to commission. Another issue is that the system requires dedicated optical cables to send the photons, whereas almost all existing fiber-optic infrastructure relies on sending multiple signals over the same cables. And finally, there's the issue of scalability. Right now quantum key communications must be cabled directly from one user to another--like from a bank to a single high-powered client--but a vast new infrastructure would be needed to connect a large network of users.

Richard Hughes, a quantum researcher at Los Alamos National Laboratory, is working on answers to these design problems. In the future, he expects quantum cryptography to be used in smart grid applications and to eventually extend to everything from smartphone and tablet security to securing data in the virtual cloud. He says that quantum cryptography is much further along than we realize: The technology that exists today can already be used reliably in optical fiber networks for systems of medium scale, and experiments suggest that through line-of-sight delivery of secure keys--that is, sending the photons through open air--it could be possible to generate keys with satellites in orbit.

Although the key creation is perfectly secure, there still may be ways to outwit the system. For instance, in April 2010 researchers at the Norwegian University of Science and Technology found a way to trick a commercial system into revealing its secrets by shining a laser into the receiver's filter, blinding it while they read the photons themselves. The team was kind enough to warn companies using quantum technology before publishing their results so the security hole could be fixed.

Certainly this won't be the only flaw that researchers--and hackers--discover. After all, you can have the strongest, most well-secured door in the world, but the room's only safe for the time it takes to blast through the wall. As time goes on, security flaws will be found and repaired, approaching the perfectly secure system promised by quantum physics, and maybe even revealing more about how our universe works.

The fight between codemakers and codebreakers has driven technological and mathematical advances through history--from frequency analysis to the mechanical bombes of World War II to computers to quantum programs. Quantum key distribution promises an unbreakable one-time cipher that companies, governments, and even individuals will be able to use to send and store information with perfect security. So--at least for now--the secret keepers have won. What will we do with that power?


Hey Mars, What's For Dinner?

As I prepare for my cross-country road trip from Boston to California, one thing I'll have to pack is plenty of food. Of course, my trip won't require too much planning in that area, because I'll be staying on this planet. But when it comes to sending humans to Mars, a Ziploc of trail mix and a few tubes of Pringles won't suffice--after all, astronauts sailing through space on their way to Mars won't be able to pull over at Burger King and grab a medium number one, no pickles.

So, how exactly are astronauts supposed to maintain a healthy diet while traveling such a long journey? And what will they eat once they settle down on the Red Planet? Food in space has come a long way since John Glenn, of Project Mercury, tucked into his small aluminum packages of foods and liquids. Today, astronauts on the International Space Station enjoy a variety of canned, packaged, and freeze-dried goods, various condiments for enhanced flavor, and drinks from juice to coffee.

Dinner is served at the Johnson Space Center food lab. Image courtesy NASA.

Adapting food for a space flight was one challenge. But if astronauts are to stay on Mars for an extended period of time, they won't be able to bring enough food with them. They'll need to grow it. On Mars.

Mars' environment is starkly different from Earth's. The surface pressure on Mars is roughly 1/100th of that on Earth. Temperatures range from -200 to 80 degrees Fahrenheit. Greenhouses--known as "food production units" at NASA--have to be carefully designed to fend off Mars's harsh conditions while still nurturing healthy plants. Martian farmers will also need to imitate the wide variety of growing environments found on Earth. Certain crops may need a high-humidity environment, while vegetables such as squash or tomatoes may not.

To test out the techniques future explorers might use to farm on Mars, scientists at the Kennedy Space Center are experimenting with growing lettuce and peas in low-pressure greenhouses. Why low pressure? As Robert Ferl, director of the Interdisciplinary Center for Biotechnology Research at the University of Florida, explains, low-pressure units are easier to transport to and build on Mars. Plus, at low pressure, plants soak up and release water faster, allowing for speedy recycling of the limited water available within the dome, while also inhibiting hormones such as ethylene, which can cause fruits and vegetables to quickly ripen and rot.

Private companies are investigating Mars agriculture, too. Mars One, which has set out the ambitious goal of settling humans on Mars by 2023, intends to feed the colonists using Plant Production Units that control temperature, light, and humidity, allowing plants to thrive despite the harsh conditions outside. They are essentially closed rooms with beds of plants situated beneath rows of colored LED lights, explains Gertjan Meeuws, a managing partner at Plant Lab, the company that manufactures the units. The LEDs generate light in red, blue, and a nearly-invisible color called far-red, a spectrum which is optimal for photosynthesis. The units also control the air temperature and disperse water on a timer.

These units weren't originally designed for Mars, but were created to feed Earth's growing population. Because they can be placed directly in cities, says Meeuws, they cut out many of the traditional steps of food production and transportation, qualities that will also be important on Mars.

No matter what kind of structure Mars crops are grown in, though, they will need water. Mars is drier than the Sahara, so according to Ferl, the key to maintaining a strong water supply will be recycling. Astronauts managing the growth operations must make sure all water that escapes into the atmosphere is later condensed so that it can be reused, either by the astronauts themselves or to water plants in the closed greenhouse system.

Yet Mars is not totally dry. Subsurface water ice could be mined for drinking and for watering crops. Or, if humans settle far from these natural water sources, they might be able to synthesize water using hydrogen and oxygen locked up in nearby rocks.

Growing crops on an alien planet is just one of the challenges. Preparing these crops is another. Food scientists from Cornell University are working to create menus for HI-SEAS, a simulated Mars mission slated to begin in early 2013. Using a remote lava field 8,500 feet up a mountain in Hawaii as a stand-in for Mars, the 120-day simulated mission will focus on how to feed human settlers using the limited resources they have. Space flight experts recognize the psychological importance of a diverse meal schedule, explains Cornell biological engineer Jean Hunter, so one of the main goals of the simulation is to create an outer space menu that will prevent "menu fatigue."

So on my way to California, as my fingers brush the bottom of that bag of trail mix and I start hankering for a bag of Cheetos, I will keep in mind that my menu fatigue is nothing compared to what future Mars astronauts will have to manage. All I know is that when I get to California I am there to stay, and I hope I survive being away from Red Sox fans and jaywalkers. As I stop at the Grand Canyon and look up to the sky, I can now be assured that my fellow humans who choose to delve into our vast galaxy will be well prepared and well fed as they embark on the adventure of expanding human life on another planet.

Learn more about Mars on NOVA's Ultimate Mars Challenge, premiering Wednesday, November 14 at 9 pm ET on most PBS stations. Please check your local listings to confirm when it will air near you.


Poring Over Einstein's Brain

Einstein playing the violin

Albert Einstein died in 1955, but not before leaving science one more gift—his brain. Pathologist Thomas Stoltz Harvey removed and preserved Einstein's brain after conducting an autopsy, and researchers have been studying it ever since, hoping to discover anything that may have led to the physicist's genius.

Unfortunately for neuroscientists, Einstein's brain hasn't been in one piece since shortly after the autopsy. Harvey sectioned it into 240 pieces, many of which are missing. Luckily, before dicing up the organ, he also photographed it, though like the brain sections, many of those photos have been lost. But now a cache of Harvey's personal effects has been unearthed, including a number of photographs that are new to science. From the images, researchers have produced the most thorough description to date of Einstein's brain.

"Thrilling, in a word." says Dean Falk, a senior scholar at the School for Advanced Research in Santa Fe and a professor of anthropology at Florida State University, of studying the newly discovered images. Falk authored a paper detailing the findings, which will appear in the journal Brain, along with Fred Lepore, a neuro-ophthalmologist at Robert Wood Johnson Medical School in New Jersey, and Adrianne Noe, director of the National Museum of Health and Medicine, the institution that now has the photographs.

"It's a very comprehensive description of the outward appearance of Einstein's brain," says Sandra Witelson, a professor at the Michael G. DeGroote School of Medicine at McMaster University in Canada who has also published research on photographs of Einstein's brain. "On [Harvey's] return to New Jersey in the 1990's, he had hoped to compile an atlas of all the photographs and slides of Einstein's brain. The current report by Dr. Dean Falk and colleagues partly fulfills this plan."

Falk and Lepore had inklings that these photographs existed, but hadn't been able to get their hands on them until they were donated by the Harvey family to the National Museum of Health and Medicine. Noe made the photographs and other materials from the Harvey cache available to the two scientists for eight hours one day in the middle of September. "That was the breakthrough," Lepore says. "We got a lot of photographs."

After Falk and Lepore photographed the originals, they pored over the two-dimensional images for months, tracing every gyrus in an attempt to coax new information from the folds of Einstein's brain. Falk was well-suited to the task of analyzing a brain in such an abstract state—as an anthropologist, she frequently analyzes skulls for clues about the organs they once contained.

"Because they are two-dimensional photographs, I had to do a lot of mental rotation to be sure a feature that I saw in one view, when I saw that feature in another view, did that identification still make sense?" she recounts. Perhaps ironically, the spatial reasoning skills Falk relied on so heavily in her study were the very same at which Einstein excelled, a fact which may have been influenced by his unusual parietal lobes. "He would have been much better at studying his brain than I was," Falk jokes.

Falk and Lepore compared Einstein's brain to 85 other brains well-known in scientific circles. They mapped every portion they could, identifying characteristics which stood out in comparison to the other specimens. The primary somatosensory and motor cortices in his left hemisphere—which are responsible for the sense of touch and the planning and execution of motion, respectively—were much bigger than was expected.

But that wasn't all that was unusual. "Einstein's brain has an extraordinary prefrontal cortex, which may have contributed to the neurological substrates for some of his remarkable cognitive abilities," they wrote. In particular, he had four gyri in that region where most of us only have three.

The prefrontal cortex is involved in higher cognitive functions, "including remembering things, keeping them online, and my favorite, daydreaming and planning the future," Falk says in an interview with NOVA scienceNOW. "It's perhaps appropriate because Einstein was famous for his thought experiments." But as to whether the presence of a fourth gyrus had any effect on that, she says, "We can only speculate."

This isn't the first discovery Falk has made regarding Einstein's brain. In 2009, she published a paper showing the physicist also shared a feature common among certain musicians—a knob-shaped fold in the motor cortex, the region that controls motion. Specifically, people who learn to play stringed instruments in childhood tend to develop a knob in the area that controls finger movements. Einstein, as it turns out, was a lifelong violinist.

Still, there is only so much that can be gleaned from photographs of the brain's surface. "When we look at photos, we are literally just scratching the surface because that is all we're seeing," Falk says. But that's not to say such studies are fruitless. "There's been this revolution in the contemporary neurosciences where now we have more information about what's going on underneath that surface, which enables us to perhaps better interpret what functional correlates of that surface may be."

Falk, Lepore, and Noe hope this paper represents more than just another scholarly publication. They hope it represents the start of a new chapter in the study of Einstein's brain. "As far as we know, that set of photographs had not been viewed by the scientific community since the mid-50s," Lepore says. Both Lepore and Falk credit the Harvey family, Noe, and the museum for making the photographs available to scientists and the public. "That should have been done in 1955," Falk says. "It's turning things around."

Source:

Falk, Dean, Frederick E. Lepore, and Adrianne Noe. 2012. "The cerebral cortex of Albert Einstein: a description and preliminary analysis of unpublished photographs." Brain. DOI: 10.1093/brain/aws295


A House Made of Garbage

Brighton Waste House. Courtesy of BBM Sustainable Design Ltd. All Rights Reserved.

In our throwaway society, many things that we discard still have value. An architect and a reuse and recycling advocate in Brighton, England, plan to demonstrate just how useful our trash can be. On November 19, 2012, they will begin building a house from discarded construction material, paper, videocassettes, even toothbrushes.

Have a listen:

While researching the NOVA scienceNOW segment on "augmented reality," premiering this Wednesday at 10 pm as part of NOVA scienceNOW's "What Will the Future Be Like," I came across some fascinating research on the new science of haptics. Haptics actually enables us to "touch" objects in a virtual world. As Katherine J. Kuchenbecker, Assistant Professor in the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania, describes it, haptics is "the science of improving human interaction in an augmented world through the power of touch." Imagine what online shopping will be like in the future, when, thanks to the power of haptics, you will be able to feel a pair of corduroys before you buy them!

But Kuchenbecker thinks haptics will help us with more than just online shopping. She wants to use it to save lives, starting with a robotic surgical system called da Vinci. This groundbreaking technology gives surgeons the ability to operate less invasively with the help of robotic arms. Surgeons currently operate the da Vinci by viewing the operative field through a tiny camera while using game-like joysticks to manipulate the robotic arms. The system allows surgeons to cut with incredible precision, but there's a problem: in the process, they lose their sense of touch.

"Robotic surgery systems enable the doctor to operate on a patient through tiny incisions, which is a great benefit for recovery," says Kuchenbecker, "but current systems don't let the doctor feel what the tools are touching."

Kuchenbecker has found a clever way around this, by enhancing da Vinci with haptics. In the video below, watch as David Pogue takes it for a test drive. After viewing Kuchenbecker's da Vinci in action, read on for more on the science behind how it works.

After sensors record the vibrations of the surgical instruments, tiny motors called actuators reproduce them at the surgeon's hand. "Every time the robotic instruments touch each other," says Kuchenbecker, "or touch something in the operative field inside the patient, those vibrations travel up the tool and we measure them with our sensors and we immediately recreate them at the surgeon's hand so they can feel it almost as though they were holding the tools themselves."
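In skeleton form, that pipeline is just a fast measure-and-replay loop. Here is a heavily simplified Python sketch; read_vibration and drive_actuator are hypothetical stand-ins for whatever the real sensor and actuator hardware expose, not the actual da Vinci interface:

```python
import itertools
import math
import time

def haptic_loop(read_vibration, drive_actuator, sample_hz=1000, duration_s=0.01):
    """Sample the tool's vibration and immediately reproduce it at the hand."""
    period = 1.0 / sample_hz
    for _ in range(int(sample_hz * duration_s)):
        drive_actuator(read_vibration())   # measure, then replay with minimal delay
        time.sleep(period)

# Stand-ins for hardware: a synthetic 200 Hz buzz and a print() "actuator."
ticks = itertools.count()
fake_sensor = lambda: math.sin(2 * math.pi * 200 * next(ticks) / 1000)
haptic_loop(fake_sensor, print)
```

The entire effect rests on keeping the delay between measurement and replay imperceptibly small, so the surgeon experiences the vibration as if it came through the tool itself.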

When Hurricane Sandy ripped through the Eastern United States, it took down power lines, sent sea water gushing into substations, and knocked out connections to power plants. Millions of people were without electricity, but more important, dozens of hospitals lost power from both the grid and their secondary and tertiary backup systems. Cleaning up the mess is the first priority, but a close second will be evaluating how the grid could better cope with disasters of this magnitude.

That question comes at an opportune time. We're in the midst of a lengthy and expensive overhaul of our nation's electrical grid, one that heralds a new, "smarter" future. Power generation and delivery haven't changed much since the days of Edison and Tesla, but a new wave of technologies, known collectively as the smart grid, will modernize the industry. Some utility companies have already started down this road, installing smart meters that communicate demand with operators. But could smart grid technologies have helped during Hurricane Sandy, or any other large natural disaster, for that matter? The answer is yes and no, and which part of that answer is right depends on how you define the smart grid.

The smart grid isn't just one technology, but a whole host of new systems which, hopefully, will combine to make our electrical distribution system more robust and efficient. It involves everything from intelligent washing machines, which run only when electricity demand is low, to dynamic power plants, which can quickly spool up in response to spikes in demand.

Much of the smart grid, though, still relies on the same grid we have today. The distribution system may become more responsive, but physically, it won't be much different. That means when a substation is flooded or a tree knocks down a power line, the juice will stop flowing, just as it does now. And when that happens on a large scale, as it did during Hurricane Sandy, millions of people will still lose power. There's not a lot an intelligent system can do to guard against physical damage.

And when there is widespread physical destruction of the grid, "There's a limited amount the smart grid can do," says Mark McGranaghan, vice president of power delivery and utilization at the Electric Power Research Institute. During smaller disasters, a smart grid could more deftly reroute power around downed lines than a traditional grid, ensuring customers who needn't lose power don't. But that would only work if the alternate routes are still functioning. If they are damaged, you're still out of luck and out of power. The smart grid, McGranaghan says, is no substitute for system hardening.
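Under the hood, that rerouting is a pathfinding problem on the network graph. Here is a minimal sketch in Python--a toy breadth-first search over a made-up grid, not any utility's actual control software:

```python
from collections import deque

def find_route(grid, source, target, downed_lines):
    """Breadth-first search for a live path, skipping damaged lines."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in grid[path[-1]]:
            line = frozenset((path[-1], neighbor))
            if neighbor not in seen and line not in downed_lines:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # every route is damaged; no amount of smarts restores power

# Toy network: the substation can reach the customer two ways.
grid = {
    "substation": ["a", "b"],
    "a": ["substation", "customer"],
    "b": ["substation", "customer"],
    "customer": ["a", "b"],
}
print(find_route(grid, "substation", "customer",
                 {frozenset(("a", "customer"))}))          # reroutes via "b"
print(find_route(grid, "substation", "customer",
                 {frozenset(("a", "customer")),
                  frozenset(("b", "customer"))}))          # None: out of luck
```

When every alternate route is damaged, the search comes back empty, which is exactly McGranaghan's point.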

System hardening means beefing up infrastructure to prevent damage from weather or other disasters. It can include things like using concrete for telephone poles instead of wood, burying cables underground, or raising substation equipment above the level of flood waters. System hardening is not entirely distinct from smart grid approaches--information relayed by smart technologies can guide hardening efforts--but it can be done independently of "smart" updates.

That's not to say the smart grid won't be useful in the case of disasters. Vermont, for example, has widely deployed smart grid technologies, including smart meters and grid sensors. "When that last hurricane went through the Northeast, they had an easier time getting power restored in Vermont because they could spot the shortages more easily," says Maggie Koerth-Baker, author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. "They were able to actually spot the downed wires through the system." That allowed crews to focus on repairing downed lines rather than searching for them. The same happened after storms swept through the Southeast earlier this year, McGranaghan says. Crews in Chattanooga were able to repair the system in much less time thanks to smart grid technologies.

Many smart grid technologies are better suited to helping a system recover from disaster, but to keep the power flowing during an event, experts are bullish on microgrids. Also considered a member of the smart grid pantheon, microgrids can function autonomously if the larger grid fails, says Alexis Kwasinski, a professor at the University of Texas. They derive their power from a variety of sources, including diesel generators, natural gas-powered microturbines, photovoltaics, and small wind turbines. Microgrids are expensive, though, so they are most commonly used where a continuous power supply is deemed worth the added cost, such as hospitals, telecommunications equipment, and computer server farms, Kwasinski says. (Incidentally, Thomas Edison's first power plant in Manhattan, Pearl Street Station, is considered a microgrid, since it served electricity to only a small section of the city.)

While many smart grid technologies are still being rolled out, microgrids already have a good track record when disasters strike. The continuity of cell phone service is perhaps the most conspicuous example. After the earthquake off the coast of Sendai, Japan, Kwasinski says a microgrid operated by NTT kept power flowing long after the main grid had failed, allowing people to stay in touch. Another in Garden City, New York, operated well after Hurricane Irene in 2011, he adds. And during Sandy, widely deployed microgrids may have helped cell service remain operational long after the grid went down.

Still, even microgrids may not survive powerful or widespread disasters. "We have to look at the capability of the infrastructure to withstand these events," McGranaghan says. During disasters, the smart grid's virtues may not be advantageous because the system is built atop the same, fragile grid as before. System hardening would change that, but like smart grid enhancements, it is not an inexpensive proposition. Fortunately, the smart grid can inform where engineers should focus on hardening the grid. "If we know we can use the smart grid to respond better, maybe that will influence those decisions," he says.

Note: We will be launching a new NOVA Lab on energy, renewables, and the smart grid in the coming weeks. Check the NOVA Labs site soon.

Intelligence tests have had many uses throughout their history--as tools to sort both schoolchildren and army recruits, and, most frighteningly, as methods to determine who is fit to pass on their genes. But as intelligence testing has made its mark on the educational and cultural landscape of the Western world, the concept of intelligence itself has remained murky. The idea that an exam can capture such an abstract characteristic has been questioned, but never rigorously tested--perhaps because it is impossible to devise a method for measuring the validity of a test that evaluates a trait no one fully understands. IQ tests have proved versatile, but are they legitimate? To what extent do intelligence tests actually measure intelligence?

Reporter and philosopher Walter Lippmann, who published a series of essays in the 1920s criticizing the Stanford-Binet test, wrote that it tests "an unanalyzed mixture of native capacity, acquired habits and stored-up knowledge, and no tester knows at any moment which factor he is testing. He is testing the complex result of a long and unknown history, and the assumption that his questions and his puzzles can in 50 minutes isolate abstract intelligence is, therefore, vanity."

Lippmann criticized the tests over 80 years ago, but already he recognized that people needed to approach their results with caution--advice now made more salient by a number of studies revealing interesting phenomena that validate his and other test skeptics' opinions.

As it turns out, a number of variables, none of which have to do with brainpower, can influence test scores.

For example, many researchers have discovered that people from minority racial groups often perform worse on intelligence tests than their white counterparts, despite a lack of evidence that they are actually less intelligent.

Another study has shown that intelligence is not fixed throughout one's lifetime: Teens' IQs changed by as much as 20 points over four years, raising questions about some modern IQ test uses. Many gifted programs, for example, use IQ tests to select student participants. But what does it mean if, over their school careers, students who fell below the cut-off grew more "gifted" than those in the program? Should they have been denied the enrichment opportunity, even though they later were revealed to be just as intellectually capable as the students who were allowed to enroll?

External cues that influence one's self-perception--such as reporting one's race or gender--also influence how one performs on intelligence tests. In a blog post for Scientific American, writer Maria Konnikova explains, "Asian women perform better on math tests when their Asian identity is made salient--and worse when their female identity is. White men perform worse on athletic tasks when they think performance is based on natural ability--and black men, when they are told it is based on athletic intelligence. In other words, how we think others see us influences how we subsequently perform."

If one's performance on IQ tests is subject to so many variables outside of natural ability, then how can such tests measure intelligence accurately? And does one's level of innate intelligence even matter? Is it correlated with success?

In one study, Robert Sternberg, a psychologist at Tufts University, found that in the Western world high scores on intelligence tests correlated with later career success, but the people he tested live in a culture that places enormous emphasis on achievement on such tests.

Imagine a student with great SAT scores who later goes on to excel in her career. One could say that the student was very smart, and her intelligence led her both to succeed on the test and in her career. But one could also say the student was particularly skilled at test-taking, and since she lived in a society that valued high test scores, her test-taking ability opened up the door to a great college education. That in turn gifted her with the skills and connections she needed to succeed in her chosen field.

Both of these scenarios are over-simplifications. Intelligence--however murky it may be and however many forms it may come in--is undoubtedly a real trait. And intelligence tests have persisted because they do provide a general way to compare people's aptitude, especially in academic settings. But from their invention, intelligence tests have been plagued by misinterpretation. They have been haunted by the false notion that the number they produce represents the pure and absolute capacity of someone's mind, when in reality studies have shown that many other factors are at play.

Donna Ford, a professor of Education and Human Development at Vanderbilt, writes, "Selecting, interpreting and using tests are complicated endeavors. When one adds student differences, including cultural diversity, to the situation, the complexity increases... Tests in and of themselves are harmless; they become harmful when misunderstood and misused. Historically, diverse students have been harmed educationally by test misuse."

If current intelligence tests are subject to factors outside of intelligence, can a new assessment be developed that produces a "pure" measure of innate intelligence? Scientists are starting to examine biology, rather than behavior, to gain a new perspective on the mind's ability. A team of researchers recently set out to understand the genetic roots of intelligence. Their study revealed that hundreds or thousands of genes may be involved. "It is the first to show biologically and unequivocally that human intelligence is highly polygenic and that purely genetic (SNP) information can be used to predict intelligence," they wrote.

But even if researchers discover which genes are related to intelligence, a future in which IQ tests are as simple as cheek swabs seems unlikely. Most contemporary intelligence researchers estimate that genetics only account for around 50 percent of one's cognitive ability; the environment in which one learns and grows determines the other half. One study found that biological siblings, when raised together, had IQ scores that were 47 percent correlated. When raised apart, that percentage dipped to 24.

And, though world knowledge and problem-solving skills are commonly tested, the scientific world has yet to come to a consensus on a precise definition of intelligence. Perhaps such a definition is impossible.

So perhaps Galileo would have stolen Mozart's prestigious preschool seat. But that preschool admissions officer would likely whack herself in the head a year or two later as she heard of the prodigious star wowing audiences with his musical compositions.

Discarding intelligence tests altogether might be too harsh a reaction to their flaws. After all, they are useful when interpreted correctly in the right settings. Instead, we need to understand the tests' limitations and their potential for misuse to avoid determining a person's worth--or future trajectory--with a single number.
