How the Parker probe was built to survive close encounters with the sun

NASA has a mantra for preparing spacecraft to launch: “Test as you fly.” The idea is to test the entire spacecraft, fully assembled, in the same environment and configuration that it will see in orbit.

But the Parker Solar Probe, set to launch August 11, is no ordinary spacecraft (SN Online: 7/5/18). And it’s headed to no ordinary environment. Parker will sweep through the sun’s scorching hot atmosphere for humankind’s first close encounter with the star at the center of the solar system.
“Solar Probe is a little bit special,” says space plasma physicist Stuart Bale of the University of California, Berkeley. Getting the whole kit and caboodle into a setting that simulated the sun’s energetic particles, intense light and searing heat “was deemed impossible.” Scientists had to get creative to test the technology that will touch the sun, using everything from huge mirrors to dust tunnels to reams of paper.

Taking the heat
The first order of business was to find materials that can stand the heat. The sun’s outer atmosphere, or corona, sizzles at millions of degrees Celsius — but it is so diffuse that it doesn’t pose much threat to the spacecraft (SN Online: 8/20/17). Direct sunlight, however, can heat exposed components to around 1370° Celsius. Two of the spacecraft’s scientific instruments, plus parts of its solar panels and its revolutionary heat shield, will be exposed to that searing sunlight at all times.

“Normal things … would melt,” says solar physicist Kelly Korreck of the Smithsonian Astrophysical Observatory in Cambridge, Mass.
Korreck works on the Solar Wind Electrons Alphas and Protons instrument, known by its acronym SWEAP, which will catch the charged particles of the solar wind with a sensor called a Faraday cup (SN Online: 8/18/17). “It actually sticks around the heat shield and will be able to touch the sun,” Korreck says. “That cup is special.”
To build the cup and other instruments that will see the sun directly, engineers settled on three main materials — a niobium alloy called C103 that is used in rocket engines, an alloy of titanium, zirconium and molybdenum called TZM, and tungsten. Some cables carrying power to the SWEAP cup are also lined with sapphire, a good insulator at high temperatures. And the probe’s heat shield is made of two kinds of carbon-based materials.

Figuring out how each of these materials would behave in space was tricky. Engineers couldn’t simply heat the metals in an oven, because hot metal can react with oxygen and corrode. Carbon, too, can combust in the presence of oxygen. So the team had to test pieces of the instruments in airless vacuum chambers.

“Getting things hot on Earth is easier than you would think it is,” says Elizabeth Congdon of Johns Hopkins Applied Physics Laboratory in Laurel, Md., the lead engineer for the heat shield. “Getting things hot on Earth in vacuum is difficult.”

One way the Parker team mimicked the sun’s heat was using actual sunlight. Engineers took material samples to the world’s largest solar furnace, the PROMES facility in Odeillo, France. A series of 63 mirrors built on a hillside redirect sunlight onto an enormous concave mirror on the side of an eight-story building. That mirror then focuses the sunlight into a beam no more than 80 centimeters wide that heats materials to 3000° C inside a small vacuum chamber in a laboratory on stilts.

The beam is so hot, “you can take a two-by-four and swing it through the beam, and it burns right off,” Bale says. “Just a flash of smoke and it’s gone.” Bale leads another of the probe’s experiments, called FIELDS, that also needed heat testing. FIELDS comprises five long antennas, four of which will be exposed to the sun, that will measure electric and magnetic fields in the corona.

The SWEAP team needed an even more realistic simulator, one that would deliver intense sunlight at the angles that Parker will experience. They found an unlikely solution in IMAX film projectors, which emit light in a similar range of wavelengths to the sun.

“It took a completely custom test facility to do it,” says astrophysicist Anthony Case of the Smithsonian Astrophysical Observatory, who also works on the SWEAP instrument. He and his colleagues turned four IMAX projectors around so the lamps focused light into a small vacuum chamber, rather than spreading it across a huge screen. That gave the team the right light intensity and angles to test their instrument.
Biting the dust
Solar heat isn’t the only threat to the Parker Solar Probe. The region around the sun is also expected to be full of dust, left over from the formation of the planets. Scientists don’t know exactly how much dust to expect, but it’s likely to be moving almost as fast as the spacecraft, about 170 kilometers per second.

That’s a big worry for Parker’s twin telescopes, together called the Wide-field Imager for Solar Probe, or WISPR. One of the telescopes will be facing the direction that Parker is traveling, so it will be heading directly into the dust storm. “It can’t be protected,” says astrophysicist Russell Howard of the U.S. Naval Research Laboratory in Washington, D.C.

Dust particles hitting the telescope’s lens leave it pocked with little craters. Only 0.6 percent of the lens should be pitted by the end of Parker’s seven-year mission, according to computer models of dust in the inner solar system. But even a few pits can skew the data, so the team wanted to minimize the damage by choosing the right glass.

Howard and colleagues tested three possible materials for the lens in a dust acceleration tunnel at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. The tunnel accelerated charged iron particles, ranging from half a micrometer to 3 micrometers wide, to speeds between half a kilometer per second and 8 kilometers per second — fast enough for the scientists to extrapolate up to the dust speeds Parker might experience.

Sapphire withstood the barrage best, but it was unclear how it would behave as a lens. The team also rejected diamond-coated BK7 glass, commonly used for space telescopes, after the coating separated from the glass and left an extra ring around the impact spot. Regular, uncoated BK7 was the winner.

What’s what
The Parker Solar Probe will use four sets of scientific instruments plus innovative self-protective measures to explore the environment near the sun.

Swinging the temperatures
Most of the spacecraft won’t have to worry about the dust or the sun’s extreme heat. Aside from SWEAP and FIELDS, almost everything is tucked behind the all-important heat shield.

That 2.5-meter-wide heat shield is made of carbon foam sandwiched between two carbon sheets. The whole thing is just 11.5 centimeters thick, and is coated on the sun-facing side with white ceramic paint to reflect as much sunlight as possible. Even then, that side could get as hot as 1370° C. But behind it, the bulk of the spacecraft will chill at an average of just 30° C (about 85° Fahrenheit).
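A back-of-envelope check makes that 1370° figure plausible. Treating the shield’s sunward face as a flat absorber in radiative equilibrium at Parker’s closest approach (about 9.86 solar radii from the sun’s center), the two idealized limits below bracket the article’s number. All values here are rough illustrative assumptions, not mission engineering figures.

```python
import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
R_SUN = 6.957e8       # solar radius, m

d = 9.86 * R_SUN                      # closest-approach distance from the sun's center, m
flux = L_SUN / (4 * math.pi * d**2)   # sunlight intensity hitting the shield, W/m^2

# Equilibrium temperature if the shield re-radiates from both faces
# (absorptivity taken equal to emissivity)
t_two_sided = (flux / (2 * SIGMA)) ** 0.25
# ... and if it radiates only from the sunward face (back fully insulated)
t_one_sided = (flux / SIGMA) ** 0.25

print(f"flux ~ {flux / 1e3:.0f} kW/m^2")
print(f"shield ~ {t_two_sided - 273:.0f} to {t_one_sided - 273:.0f} deg C")
```

With these inputs the flux comes out near 650 kilowatts per square meter, and the two limits land roughly between 1270° and 1570° C, neatly bracketing the reported 1370° for a shield whose back side is partly insulated by carbon foam.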

“We hide in the shadows,” says solar physicist Eric Christian of NASA’s Goddard Space Flight Center in Greenbelt, Md. He’s the deputy principal investigator of the Integrated Science Investigation of the Sun experiment, which will measure solar particles across a wide range of energies. His team was able to build with ordinary materials and skip the rigorous heat testing. “We’re the lucky ones.”
But Parker won’t always be near the sun. The spacecraft’s orbit will bring it as far from the sun as Venus, where temperatures are around –270° C. At that distance, the spacecraft that will touch the sun needs onboard heaters to keep it at 20° C. So Parker needed to be tested for cold and extreme temperature changes, too.
“We’re not just worried about hot cycles, we’re worried about hot then cold then hot then cold,” says Congdon.

In January 2018, the entire spacecraft was lowered into a thermal vacuum chamber at NASA Goddard for two months of testing. The chamber, a cylinder standing 12 meters tall and 8 meters wide, was cooled to –190° C. A radiator glowing at about 315° C represented the heat coming from the back of the heat shield — but most of that heat never reached the scientific instruments since a titanium truss holds the heat shield at a safe distance from the spacecraft’s main body. The team cycled through hot and cold several times to simulate what Parker will experience.

Another challenge was keeping the probe’s solar panels cool. “You think, obviously, you’re going to the sun, solar power makes the most sense,” Congdon says. “But solar panels don’t like to get hot.” So the panels are threaded with veins that carry water to cool them off. The water absorbs heat from the panels and carries it to radiators that release the heat into space.

The solar panels are also on a shoulder joint, so they can tuck behind the heat shield at Parker’s closest approaches to the sun. Only the last row of cells will see the sun then. “That single row of cells can produce the same amount of power as the full wing can when we’re by the Earth,” says solar physicist Nicola Fox of Johns Hopkins Applied Physics Laboratory, the probe’s project scientist.
Up and away
Before Parker can peer into the sun’s secrets, though, it must survive the trip to space.

The violent shaking during a spacecraft’s launch makes it a tense time for scientists, even if they’ve tested all of the parts in an acoustic vibration chamber. Watching SWEAP’s vibration test “made me swear,” Korreck says. “It’s very scary to watch this thing you’ve spent 10 years on flop around as it keeps shaking more and more.”

Her team faced an unusual challenge in making Parker ready to rattle. It could not glue screws in place to prevent them from shaking loose, because epoxies would melt in the sunlight. So the SWEAP team twisted thin niobium wire by hand to tie hundreds of screws together in such a way that, if one comes loose, the others hold it in.

Launch can be a high-pressure time for the spacecraft, too — literally. Engineers initially thought Parker’s launch aboard a powerful Delta IV Heavy rocket would subject the heat shield to a force 20 times that of Earth’s gravity, though the team later realized the launch force wouldn’t be so severe. To make sure the 72.5-kilogram shield wouldn’t bend or break, the team stacked 1,360 kilograms of paper on top of it.

Once it’s passed the final test of launching and deploying, Parker’s first scientific data should start trickling back to Earth in December. These missives will let scientists take the first step to unlocking the secrets of the sun’s superheated atmosphere and its energetic winds.

“It’s like being a proud parent. I worry that something could happen, but I don’t worry that we didn’t prepare or test her well,” Fox says. “I just hope she writes home every day with beautiful data.”

Editor’s note: This story was updated on August 27, 2018 to correct the cooling temperature in the thermal vacuum testing chamber at NASA Goddard.

In a first, physicists accelerate atoms in the Large Hadron Collider

Not content with protons and atomic nuclei, physicists took a new kind of particle for a spin around the world’s most powerful particle accelerator.

On July 25, the Large Hadron Collider, located at the CERN laboratory near Geneva, accelerated ionized lead atoms, each containing a single electron buddied up with a lead nucleus. Each lead atom normally has 82 electrons, but researchers stripped away all but one in the experiment, giving the particles an electric charge. Previously, the LHC had accelerated only protons and the nuclei of atoms, without any electron hangers-on.

Scientists hope the successful test means that the LHC could one day be used as a gamma-ray factory. Gamma rays, a type of high-energy light, could be produced by zapping beams of ionized atoms with laser light. That light would jostle the atoms’ electrons into higher energy states, and the accelerated atoms would emit gamma rays when the electrons later returned to lower energy states. Existing facilities make gamma rays from beams of electrons, but the LHC might be able to produce gamma rays at greater intensities.

More powerful beams of gamma rays would be useful for various scientific purposes, including searching for certain types of dark matter — mysterious particles that scientists believe exist in the universe but have yet to detect (SN: 11/12/16, p. 14). The gamma rays could also be used to produce beams of other particles, such as heavy, electron-like particles called muons, for use in new kinds of experiments.

Zika may harm nearly 1 in 7 babies exposed to the virus in the womb

Babies exposed to a Zika infection while in the womb are not out of the woods even if they look healthy at birth.

Nearly 1 in 10 of 1,450 babies examined developed neurological or developmental problems, such as seizures, hearing loss, impaired vision or difficulty crawling, a study from the U.S. Centers for Disease Control and Prevention finds. It’s the first tally of the health of children at least 1 year old who were born in Puerto Rico and other U.S. territories and exposed to Zika in utero.
Overall, 14 percent of children exposed to Zika in the womb — about 1 in 7 — were harmed in some way by the virus, the researchers report online August 7 in Morbidity and Mortality Weekly Report. These babies were either born with a birth defect such as microcephaly — a condition in which a baby’s head is significantly smaller than it should be — or developed neurological symptoms that may be related to Zika, or both.

“Congenital Zika virus infection is quite serious, even beyond just the microcephaly,” says Peter Hotez, a pediatrician and microbiologist at Baylor College of Medicine in Houston, who was not involved in the report. “We’re still getting our arms around the full neurologic spectrum of illness” that is related to Zika.

The report also found that 6 percent of babies in the study had at least one birth defect caused by the virus, such as defects of the eye or brain or microcephaly (SN: 10/29/16, p. 14).

That’s fairly consistent with what’s seen in other countries hit by the Zika virus, Margaret Honein, director of CDC’s Division of Congenital and Developmental Disorders, said at a news conference. While a 2016 study suggested higher rates of birth defects in Brazil, “we think there isn’t a geographic difference” but more of a difference in how Zika-related birth defects are defined, she said.
The data come from the U.S. Zika Pregnancy and Infant Registry, set up to monitor pregnant women with Zika virus infection and the health of their babies. The study focuses on those pregnancies reported from Puerto Rico, the U.S. Virgin Islands, American Samoa, the Federated States of Micronesia and the Marshall Islands. The children, all at least a year old, had received some follow-up medical care, such as brain imaging, hearing tests, eye exams or developmental screening. A report on pregnancies from the mainland United States is expected later this year.

Zika ravaged Brazil, Colombia and other countries of the Americas in 2015 and 2016. By 2017, the spread of the virus had slowed to a crawl (SN: 11/11/17, p. 12). But experts expect to see future outbreaks (SN: 12/23/17, p. 30).

“What makes this report unique is that we’re looking at the health of these babies beyond what was observed at birth,” Honein said. “This is really providing us with the first clues about how common some of these neurodevelopmental disabilities might be.”

Researchers suspect that health issues will continue to emerge for children exposed to Zika in the womb as they grow older. “This is why it is so absolutely critical that these babies receive care to identify issues as soon as possible,” Honein said, and that children continue to be monitored over time.

Nasty stomach viruses can travel in packs

Conventional wisdom states that viruses work as lone soldiers. Scientists now report that some viruses also clump together in vesicles, or membrane-bound sacs, before an invasion. Compared with solo viruses, these viral “Trojan horses” caused more severe infections in mice, researchers report August 8 in Cell Host & Microbe.

Cell biologist Nihal Altan-Bonnet had been involved in discovering in 2015 that polioviruses can cluster together to invade cells in a petri dish. In the new study, Altan-Bonnet and a different group of colleagues find that transmission via virus clumps also occurs naturally with both rotavirus and norovirus, which can cause gastrointestinal illness.
The scientists first identified norovirus cluster vesicles in patients’ stool samples, which was “eye-opening,” says Altan-Bonnet, who works at the National Institutes of Health in Bethesda, Md. “We can see these vesicles everywhere.”

Altan-Bonnet and her team infected live mice with either vesicle-packaged rotavirus or equal amounts of single virus particles. Vesicles were not only more successful in causing infections, they also caused infections that were more severe, the researchers found. In the mice, it took five times the amount of single virus particles to cause the same severity of infection as caused by the clustered viruses. It also took the mice two to four days longer to fight off the cluster-caused infections.
While the mice were sick, the researchers found viral clumps in their feces, showing that the vesicles were able to survive the harsh environment of the GI system unscathed. It’s still unclear, however, if the viruses remain inside the vesicles to invade cells, and if so, how.
The clusters act like a Trojan horse, Altan-Bonnet suggests. “The wooden horse would be the vesicle, and inside it you have all the soldiers.” She has several hypotheses for why viruses behave this way. Vesicles may help the viruses evade the immune system or replicate faster inside cells. “We really have to rethink the way we think about viruses,” she says.

Norovirus and rotavirus, which can be dangerous for children and the elderly, kill a combined total of about 265,000 children each year worldwide, mostly in developing countries (SN: 8/8/15, p. 5). The researchers hope the discovery of vesicle transmission will lead to better prevention methods and treatments, for example, by targeting the membranes containing the virus clusters.

Because the long-standing “dogma in the field” suggested viruses were transmitted individually, it’s not surprising that these vesicles were missed in earlier virus research, says Craig Wilen, a physician at the Yale School of Medicine who recently discovered what cells norovirus targets in mice (SN: 5/12/18, p. 14). “It’s probably been seen and just dismissed.”

Wilen says that there are still questions about viral clusters that need to be answered. For example, he says, “how does the virus escape the vesicle?” Other questions that remain include how the vesicle latches on to a cell’s surface, and what advantage the viruses actually get from packaging themselves together.

Children may be especially vulnerable to peer pressure from robots

Peer pressure can be tough for kids to resist, even if it comes from robots.

School-aged children tend to echo the incorrect but unanimous responses of a group of robots to a simple visual task, a new study finds. In contrast, adults who often go along with the errant judgments of human peers resist such social pressure applied by robots, researchers report August 15 in Science Robotics.

“Rather than seeing a robot as a machine, children may see it as a social character,” says psychologist Anna-Lisa Vollmer of Bielefeld University, Germany. “This might explain why they succumb to peer pressure [applied] by robots.”
Little is known about how either adults or children respond to the behavior of lifelike robots designed to interact with people, for example, as museum tour guides, child-care assistants and teaching aids.

In a preliminary examination of the influence of social robots, Vollmer’s group adapted a 1950s social psychology experiment in which most adults agreed with groups of peers who had been coached to say that lines of different lengths were in fact the same length (SN Online: 5/15/18).

Vollmer’s team observed comparable social conformity in a study of 60 British adults, ages 18 to 69, who judged line lengths after hearing the opinions of three peers who were working with the researchers. Participants usually endorsed peers’ unanimous, inaccurate judgments. Conformity vanished, however, when volunteers performed the task while sitting with three robots that, on some trials, agreed on an incorrect answer.
Each robot was programmed to make periodic movements, such as blinking its eyes and briefly gazing at others. Robots spoke with distinctive, individualized voice pitches when making line judgments.
When children sat with the robots, though, the kids frequently went all-in. The study’s 43 participating British grade-schoolers, aged 7 to 9, agreed with three-quarters of the robots’ unanimous, inaccurate answers. The kids did not participate in conformity experiments with trios of same-age human peers, given the difficulty of getting youngsters to act convincingly according to researchers’ directions.

Still, larger samples of volunteers are needed to confirm that kids usually cave to social pressure from robots. Cultural factors, such as being raised in a society that emphasizes individualism or group values, also may influence how people of all ages perceive and react to social robots.

Three unresolved issues in particular stand out, says psychologist and child development researcher Paul Harris of Harvard University. First, it’s unclear whether some robot behaviors, but not others, triggered conformity in children. A bot’s periodic head turns toward a child, for example, might sway that youngster’s choice more than the same robot’s eye blinks or finger movements. It’s also unclear why adults who bent to human peer pressure reversed course with robots.

Finally, Harris asks, “Would fine-tuning of the robots’ repertoire [of movements and vocalizations] eventually elicit deference even from adults?”

Cancer drugs may help the liver recover from common painkiller overdoses

Experimental anticancer drugs may help protect against liver damage caused by acetaminophen overdoses.

In mice poisoned with the common painkiller, the drugs prevented liver cells from entering a sort of pre-death state known as senescence. The drugs also widened the treatment window: Mice need to get the drug doctors currently use to counteract an overdose within four hours or they will die, but the experimental drugs worked even 12 hours later, researchers report August 15 in Science Translational Medicine.
If the liver-rescuing results are verified in clinical trials, this therapy may buy time for people who accidentally or intentionally overdose on Tylenol or other medications containing the painkiller acetaminophen. In the United States, such overdoses occur more than 100,000 times a year and are the leading cause of acute liver failure. Many people get treatment on time or recover on their own, but some require emergency liver transplants. And 150 people on average die of acetaminophen poisoning each year.

Currently, doctors treat such overdoses with N-acetylcysteine, an antidote that must be given within eight hours of ingesting a potentially fatal dose. Some people don’t make it to a doctor in time, and will die or need transplants.

In the study, untreated mice died within 18 hours. But mice given the new drugs survived at least a week until researchers sacrificed the rodents to examine their livers.

The anticancer drugs work by blocking a signal from a tumor growth-stimulating protein called TGF-beta, which is activated by inflammation provoked by the overdose. When unchecked, TGF-beta sends a stress signal that puts liver cells in senescence, liver specialist Thomas Bird of Cancer Research UK Beatson Institute in Glasgow and colleagues report.

More than 2 billion people lack safe drinking water. That number will only grow.

Freshwater is crucial for drinking, washing, growing food, producing energy and just about every other aspect of modern life. Yet more than 2 billion of Earth’s 7.6 billion inhabitants lack clean drinking water at home, available on demand.

A major United Nations report, released in June, shows that the world is not on track to meet a U.N. goal: to bring safe water and sanitation to everyone by 2030. And by 2050, half the world’s population may no longer have safe water.

Will people have enough water to live?

Two main factors are pushing the planet toward a thirstier future: population growth and climate change. For the first, the question is how to balance more people against the finite amount of water available.
India has improved water access in rural areas, but remains at the top of the list for sheer number of people (163 million) lacking water services. Ethiopia, second on the list with 61 million people lacking clean water, has improved substantially since the last measurement in 2000, but still has a high percentage of total residents without access.

Short of any major but unlikely breakthroughs, such as new techniques to desalinate immense amounts of seawater (SN: 8/20/16, p. 22), humankind will have to make do with whatever freshwater already exists.

Most of the world’s freshwater goes to agriculture, mainly to irrigating crops but also to raising livestock and farming aquatic organisms, such as fish and plants. As the global population rises, agricultural production must rise to meet demand for more varied diets. In recent decades, the growth in water withdrawals from the ground, lakes and rivers, whether for agriculture, industry or municipalities, has slowed, but withdrawals have still outpaced population growth since 1940.
That means every drop is increasingly precious — and tough choices must be made. Plant your fields with sugarcane to make ethanol for fuel, and you can’t raise crops to feed your family. Dam a river to produce electricity, and people downstream can no longer fish. Pump groundwater out for yourself, and your neighbor might just want to fight over it. Researchers call this the food-water-energy nexus and say it is one of the biggest challenges facing our increasingly industrialized, globalized and thirsty world.

“There just isn’t enough water to meet all our needs,” says Paolo D’Odorico, an environmental scientist at the University of California, Berkeley whose team analyzed the food-water-energy nexus in a paper published online April 20 in Reviews of Geophysics.

Overall, the energy sector is expected to consume more and more water in decades to come. And sometimes what sounds like a good idea — such as switching to renewable energy sources to reduce carbon emissions — might help in one area but hurt in another. For example, it can take more water to grow biofuel crops than to extract and burn fossil fuels.
Note: Water consumption is defined as water that is used and not returned to its source. These projections are based on nations’ stated commitments to phase out fossil fuel subsidies and reduce emissions of greenhouse gases.

Source: World Energy Outlook 2016 Special Report: Water-Energy Nexus/IEA

Then there’s climate change. As greenhouse gases build up in Earth’s atmosphere, trapping heat and altering the planet’s weather and climate, water will become more precious. Rising global temperatures alter weather patterns and change how water cycles between the ground and the atmosphere. Freshwater stores can shrink. Extreme events, such as flooding and drought, are becoming more common on our warming planet (SN: 1/20/18, p. 6). That means more water in places where people don’t need it, and less water where they do.

The map below shows how water stress — the ratio of water use to water supply — is expected to look by the year 2040. It assumes a “business-as-usual” scenario in which carbon emissions rise steadily. The highest stress is expected in areas where water supply is vulnerable because of already arid climates and growing populations.
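The metric behind such maps is simple to compute: water stress is just withdrawals divided by renewable supply, then binned into categories. The sketch below illustrates the idea; the threshold bands and the sample figures are illustrative assumptions in the spirit of commonly used stress indices, not values from the report.

```python
def water_stress_category(withdrawals_km3, supply_km3):
    """Classify water stress as the ratio of water use to water supply.

    The cutoff bands below are illustrative; real indices differ
    in their exact thresholds and definitions.
    """
    ratio = withdrawals_km3 / supply_km3
    if ratio < 0.10:
        return ratio, "low"
    elif ratio < 0.20:
        return ratio, "low to medium"
    elif ratio < 0.40:
        return ratio, "medium to high"
    elif ratio < 0.80:
        return ratio, "high"
    return ratio, "extremely high"

# Hypothetical region: 45 km^3 withdrawn per year against 90 km^3 of renewable supply
ratio, band = water_stress_category(45, 90)
print(f"{ratio:.0%} of supply used -> {band} stress")
```

Under a business-as-usual scenario, stress rises either because the numerator (use) grows with population or because the denominator (supply) shrinks with a drying climate — or both at once, as projected for already arid regions.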
Cities will bear the brunt of future water shortages. Early this year, it looked as if the more than 4 million people living in Cape Town, South Africa, were going to run out of water. Officials calculated a “Day Zero” in April when the taps would run dry. Only through belated and desperate conservation measures, such as slashing the amount of water for irrigating crops, did city residents squeak through until the rainy season began in May. The Cape Town crisis is almost certainly the first of many.

By 2050, some 3.5 billion to 4.4 billion people around the world will live with limited access to water, more than 1 billion of them in cities. Among 482 cities, more than a quarter will face demands that outpace supply, according to a study that analyzed water sources and demands. In general, urban growth is the main driver of cities’ future water deficits. Los Angeles tops the list because its population is expected to boom even as climate change dries up its water sources. Cities will be worse off if other sectors get priority for water access.
In the face of such inexorable changes, it’s easy to despair. But science offers hope, in the form of alternative paths forward. Computer modelers at MIT, for example, find that policies to fight climate change, such as the 2015 Paris agreement that the United States announced its intention to pull out of last year (SN Online: 6/1/17), can reduce the severity of future water shortages. If nations follow commitments similar to those in the agreement, 60 million people across Asia could avoid dire water scarcity by 2050, the team wrote in June in Environmental Research Letters.

But the Paris agreement is not enough. As research increasingly makes clear, there are trade-offs and decisions to be made. Cape Town’s experience shows how governments need to better prepare for the competing demands on water supplies. Municipalities may need to raise the cost of water to the point where people value it enough to conserve it.

“We can address the problem by thinking about technological solutions, but we also have to think about changing our behavior,” says Martina Flörke, a hydrologist and environmental scientist at the University of Kassel in Germany. “If we can make clear … that water has value, that it’s an ecosystem service that we use and have to take care of — then we are really thinking about how to adapt.”

Here’s how to bend spaghetti to your will

Here’s good news for anyone who’s had to sweep up pasta shards after snapping dry spaghetti and thought, “There’s got to be a better way.”

There is.

Simply bending a stick of spaghetti in half typically shatters it into three or more fragments. That’s because when the stick breaks, vibrations wrack the remaining halves, causing smaller pieces to splinter off (SN: 11/12/05, p. 315). To avoid this problem, give the spaghetti stick a twist before bending it, researchers report online August 13 in Proceedings of the National Academy of Sciences.
Vishal Patil, a mathematician at MIT, and colleagues discovered this technique by breaking hundreds of pieces of pasta with a custom-made spaghetti-snapping device. These observations, along with computer simulations of the system, reveal that when a spaghetti stick is twisted, it doesn’t bend as far before breaking. As a result, the vibrations that rattle the spaghetti halves post-snap aren’t strong enough to cause further fracturing.

The exact amount of twist required to give pasta a clean break depends on the length of the rod, but for a typical stick 24 centimeters long to crack neatly in two, it’s at least 250 degrees.

This strategy may not be much practical help in the kitchen; Patil and colleagues aren’t selling their spaghetti snapper for $19.95 — and even if they were, meticulously twisting and bending pieces of pasta one by one is hardly efficient meal prep. Still, the discovery of the bend-and-twist technique may lend new insight into controlling the breakage of all kinds of brittle rods, from vaulting poles to nanotubes.

5 decades after his death, George Gamow’s contributions to science survive

Half a century ago, if you asked any teenage science fan to name the best popular science writers, you’d get two names: Isaac Asimov and George Gamow.

Asimov was prominent not only for his nonfiction science books but also for his science fiction. Gamow not only wrote popular science but was also a prominent scientist who had made important contributions to both physics and biology.

Fifty years ago this month, Gamow’s career ended when he died at the age of 64. His books and scientific papers survive him, leaving plenty of science and science writing worth celebrating. Nuclear physics, astrophysics, modern cosmology and molecular biology all benefited from Gamow’s fertile intellect.
Like Asimov, Gamow was born in Russia (Odessa). But while Asimov came to the United States as a child, Gamow grew up in Russia, went to college first in Odessa (studying math) and then to the university in Petrograd (soon to become Leningrad), where he became a physicist. At Leningrad he attended lectures by the mathematician Alexander Friedmann. Friedmann was the first to fully realize that Einstein’s new general theory of relativity implied a dynamic universe — one that would expand or contract — rather than the static never-changing cosmos that most experts (including Einstein) believed in at the time.

Gamow planned to pursue a career in relativity under Friedmann’s direction. But Friedmann died young, in 1925. So Gamow fell in with a group of students more interested in quantum physics than relativity. “We spent all our time following the new [quantum] publications and trying to understand them,” Gamow wrote in his autobiography.

While a visitor at one of Europe’s top centers for quantum theory — the University of Göttingen in Germany — he solved a mystery about radioactive decay by identifying one of the quantum world’s most important phenomena: tunneling. In one form of radioactive decay, an atomic nucleus emits alpha particles that are moving too slowly to have overcome an energy “barrier” supposedly preventing their escape. (The analogy is a hill too steep for a slow-moving ball to reach the top without rolling back down.) Gamow showed that the wave mechanics version of quantum physics permitted the alpha particle to “tunnel” through the energy-barrier hill. Quantum tunneling turned out to be important for many other features of nature, such as how the sun shines, how many chemical reactions proceed and maybe even how the universe began.
His work on tunneling impressed Niels Bohr, the leading quantum physicist in the world, earning Gamow a fellowship for study at Bohr’s Institute for Theoretical Physics in Copenhagen. During time there and at Cambridge University, Gamow became one of the world’s leading experts on nuclear physics theory. He also became well-known for his humor and irreverence, including a “relentless mockery of science’s solemnity,” as one biographical account put it.
Returning to the Soviet Union in 1929, Gamow found the political atmosphere for continuing his work unfavorable. He eventually managed to emigrate to the United States, where he obtained a position at George Washington University in Washington, D.C., in 1934. There he studied the evolution and energy production of stars, producing fruitful insights into the stellar explosions known as supernovas. Later, he turned his attention to the universe at large, developing early versions of what became the Big Bang theory (Gamow didn’t like the name) of the origin and evolution of the universe. As the historian Helge Kragh wrote, by 1942 “Gamow clearly endorsed a big-bang picture and suggested that the gross material of the present world is the result of what happened some two billion years ago in a highly compressed primeval state.” Gamow’s timing was off (it was nearly 14 billion years ago), but his basic idea was right.
After World War II, Gamow found new fun with the “physics of biology.” He wondered, for instance, about the physical processes allowing cells to make proteins. Inspired by Watson and Crick’s 1953 paper on the structure of DNA — the molecule that makes genes — Gamow speculated that some sort of code could be translated from DNA to build the long chains of amino acids that constitute proteins. Nature provides merely 20 such amino acids for constructing thousands of distinct proteins.

Gamow realized that DNA’s four subcomponent “bases” could be thought of as numbers that could be translated into “words” specifying a chain of amino acids, linked in a specific order, chosen from their 20-letter “alphabet.” He saw that if you chose three DNA bases at a time, ignoring their order, there were exactly 20 possible combinations, suggesting that each three-base “triplet” might correspond to an amino acid. He couldn’t crack the code for which base combinations went with which amino acids, though, even with help from some U.S. Navy cryptologists. But Gamow had more or less the right idea, although he didn’t recognize at first that an intermediate molecule, RNA, had to “read” the DNA code before transferring the information to the cell’s protein-making apparatus.
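The counting behind Gamow's guess is a combinations-with-repetition problem: choosing three bases from an alphabet of four, when order doesn't matter and repeats are allowed, gives C(4 + 3 − 1, 3) = 20 possibilities. A minimal sketch in Python (the variable names are mine, not Gamow's):

```python
from math import comb
from itertools import combinations_with_replacement

BASES = "ACGT"  # DNA's four bases

# Unordered triplets with repetition: C(4 + 3 - 1, 3) = C(6, 3)
unordered = comb(len(BASES) + 3 - 1, 3)

# Enumerate them directly to double-check the formula
triplets = list(combinations_with_replacement(BASES, 3))

print(unordered)      # 20, matching the 20 amino acids
print(len(triplets))  # 20
print(4 ** 3)         # 64 ordered triplets
```

The actual genetic code turned out to use ordered triplets, 64 codons with plenty of redundancy, but the coincidence of 20 unordered triplets and 20 amino acids is what caught Gamow's eye.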

Throughout his career, Gamow desired to share his enthusiasm for the science he investigated, not only with fellow scientists but with people in general. Today, it is fairly common for prominent scientists to write popular books. But it was not that way in the 1930s, when Gamow first tried to explain relativity and quantum physics through the eyes of his fictional character, Mr. Tompkins. Mr. Tompkins lived in worlds where the speed of light was small or Planck’s constant was large, allowing Gamow to illustrate the strangeness of the new physics in an entertaining and intuitively accessible way. To learn about Heisenberg’s uncertainty principle, for instance, Mr. Tompkins visited a billiard parlor where a professor placed a ball inside a wooden triangle. The ball began to move rapidly at varying speeds within the triangle, because restricting its position to the triangular space increased the uncertainty about its velocity. (And then the ball escaped the triangle, not by jumping over its wooden wall, but by “leaking” through it. Tunneling.)

After many rejections, Mr. Tompkins in Wonderland appeared in 1940, followed by Mr. Tompkins Explores the Atom in 1944. Later Gamow produced other more straightforward accounts of the frontiers of physics, and science more generally, in such books as One Two Three … Infinity and Matter, Earth, and Sky.

Gamow moved to the University of Colorado in 1956, focusing on his popular books as his prominence in science diminished. His nonconformity and irreverent attitude, along with his emphasis on popularization, did not play well with many of his peers. And he was a heavy drinker, impairing his ability to engage with other physicists and possibly contributing to his death.

Still, his science was substantial. And even if it hadn’t been, his writing contributed to the scientific enterprise via another important avenue — by opening the wonders of the world of science to a great many teenagers who are scientists, or science writers, today.

Huge ‘word gap’ holding back low-income children may not exist after all

A scientific takedown of a famous finding known as the 30-million-word gap may upend popular notions of how kids learn vocabulary.

Research conducted more than 20 years ago concluded that by age 4, poor children hear an average of 30 million fewer words than their well-off peers. Since then, many researchers have accepted the reported word gap as a driver of later reading and writing problems among low-income youngsters. A Providence, R.I., program inspired by the study, for example, now teaches poor parents how to talk more with their kids.
But here’s the rap on the word gap: It doesn’t exist, says a team led by psychologist Douglas Sperry of Saint Mary-of-the-Woods College in Indiana. In a redo of the original study, virtually no class differences appeared in the number of words addressed to young children by a primary caregiver, Sperry and colleagues report in a study to be published in Child Development.

What’s more, after adding in speech directed at children by various caretakers, as well as family members’ conversations that the youngsters could easily overhear, kids in some poor and working-class communities heard more words on average than middle-class youngsters, the scientists say. Within each of those communities, some children heard many more words than others did despite belonging to the same social class, Sperry’s team adds.

“It’s time to turn a skeptical eye to the word-gap claim,” Sperry says.

Researchers usually treat word learning as a product of one or both parents regularly talking to a child. But different, equally effective ways exist for children to learn vocabulary, Sperry contends. Depending on culture and community, word learning depends to varying extents on a main caretaker talking to a child, many caretakers talking to a child and youngsters overhearing family members talking, he says (SN: 2/17/18, p. 22).
The original word-gap study included 42 children in Kansas from one of four communities — poor, working class, middle class or wealthy professional. Sperry’s group analyzed data on word use collected during home observations of 42 children in five communities — poor whites in South Baltimore, poor blacks in Alabama, working-class (largely blue-collar) whites in Indiana and Chicago, and middle-class (largely white-collar) whites in Chicago.
Videotaped home observations began when children were 18 to 30 months old. Intermittent observations continued until kids reached ages 32 to 48 months. Most primary caregivers were children’s mothers.

Primary caregivers in poor, black Alabama families directed an average of 1,838 words per hour to their children, close to the corresponding figure of 2,153 words per hour for high-income, white caregivers in Kansas in the original word-gap study. The earlier study reported that primary caregivers on welfare in Kansas spoke an average of 616 words per hour to their children, about one-third the total spoken to poor, black children in the new study. Primary caregivers from working-class and middle-class families in the new study uttered an average of 1,048 to 1,491 words per hour to youngsters.

Taking multiple caregivers into account, average hourly words spoken to children in each community increased by 17 percent or more. An increase of 58 percent occurred in Alabama’s poor, black households. In addition, kids in poor families overheard an average of 3,203 words per hour. Eavesdropping figures reached no higher than about 2,500 words per hour in the other households. Greater numbers of older siblings in the poor, black families contributed to that disparity, the researchers suspect.

The new study convincingly rejects claims of a word gap for poor children, says cultural anthropologist Jennifer Keys Adair of the University of Texas at Austin.

White, middle-class parents and many educators wrongly assume that vocabulary learning always proceeds best via one-on-one interactions of parents with children, or teachers with grade-school students, Adair says. That assumption may not apply to kids from other cultural backgrounds. Adair has found, for instance, that first-graders from Latin American immigrant families — who were allowed to devise classroom projects, collaborate with one another and ask questions without raising their hands — did especially well three years later on state English assessments.

But some child researchers say the new study falls short of showing that poor kids are generally exposed to as much language as better-off peers.

Sperry’s group, for example, did not study children in upper-class, professional households, as researchers did in the 1990s. And other studies of early word learning point to a need for programs that help low-income parents engage their children in language-boosting conversations, conclude psychologist Roberta Golinkoff of the University of Delaware in Newark and colleagues in a comment that will appear in the same journal.

“Overhearing language about death and taxes — topics of interest to adults — can never be as effective for language learning as participating in conversations about what matters to children,” Golinkoff and her colleagues write in their comment.

Kids frequently eavesdrop, Sperry responds. Ongoing research shows that “young children are very interested in talk that occurs around them, particularly when parents or siblings are talking about the child.”

While that may be so, little is known about the role of overheard speech and social context in language learning. Sperry and his colleagues plan to take a closer look at the difficult-to-study issue of how eavesdropping on family members influences later reading and writing skills.