AMERICAN SCIENTIST
January–February 2015 • www.americanscientist.org • $5.95

FIRST LOOK AT PLUTO
New Horizons eyes the next frontier of exploration

Listening in on the porpoise mind • The real risk of arsenic • Blue-sky science of aerogels
Sigma Xi thanks these exhibitors for their support of our 2014 conference!
SAVE THE DATE
Sigma Xi's 2015 Annual Meeting and Student Research Conference
October 22–25, 2015 • Sheraton Kansas City Hotel at Crown Center • Kansas City, Missouri
AMERICAN SCIENTIST
Volume 103, Number 1 • January–February 2015

Departments
2 From the Editors
3 Letters to the Editors
6 Spotlight: Fungus threatens salamanders • A fundamental unit of space • Briefings
10 Computing Science: Cultures of code. Brian Hayes
14 Engineering: Can an engineer appreciate art? Henry Petroski
18 Perspective: The many guises of aromaticity. Roald Hoffmann
23 Sightings: Fly-by forestry takes off
26 Ethics: What everyone should know about statistical correlation. Vladica M. Veličković
30 Technologue: Each blade a single crystal. Lee S. Langston

Feature Articles
34 Arsenic, the "King of Poisons," in Food and Water. Levels in common foods can exceed US standards for drinking water. Andrew Yosim, Kathryn Bailey, and Rebecca C. Fry
42 Journey to the Solar System's Third Zone. The New Horizons spacecraft will soon image Pluto for the first time. S. Alan Stern
46 The Acoustic World of Harbor Porpoises. Captive studies show how these animals perceive their underwater environment. Magnus Wahlberg, Meike Linnenschmidt, Peter T. Madsen, Danuta M. Wisniewska, and Lee A. Miller
54 When the Cause of Stroke Is Cryptic. Mathematics can help uncover unrecognized reasons for this ailment. David M. Kent and David E. Thaler
60 Like Holding a Piece of Sky. Light, airy aerogels are complex, strong, and exceptionally insulating. Mark Miodownik

Scientists' Nightstand
Reviving the dead • Umami • Oldest living things • Pretty molecules

From Sigma Xi
73 Distinguished Lectureships, 2014–2015
75 Sigma Xi Today: Student Research Conference winners • Annual Meeting recap • Chapter award winners
The Cover. Visualizing the upcoming passage of the New Horizons spacecraft past Pluto and its large satellite Charon necessarily involved a lot of educated guesswork. Pluto is so small (about 2,360 kilometers in diameter) and so distant (currently 4.9 billion kilometers away) that it is impossible to image directly from Earth. Artist Steve Gribben at Johns Hopkins University had to rely on sparse visual data from the Hubble Space Telescope and consultations with members of the New Horizons team to guide the colors and surface features seen on the cover. As mission scientist S. Alan Stern explains in "Journey to the Solar System's Third Zone" (pages 42–45), the Pluto flyby will offer a first look not just at this one unexplored world but at an entire class of intriguing, poorly understood bodies orbiting the Sun beyond Neptune. (Image courtesy of NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Steve Gribben.)
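A rough calculation shows why a resolved photograph is out of reach. The sketch below, in Python, uses only the diameter and distance quoted above; the Hubble resolution value is an added assumption for comparison, not a figure from the magazine.

    import math

    PLUTO_DIAMETER_KM = 2360.0       # from the cover note
    PLUTO_DISTANCE_KM = 4.9e9        # from the cover note
    ARCSEC_PER_RADIAN = math.degrees(1) * 3600.0

    angular_size_arcsec = (PLUTO_DIAMETER_KM / PLUTO_DISTANCE_KM) * ARCSEC_PER_RADIAN
    HUBBLE_RESOLUTION_ARCSEC = 0.05  # approximate visible-light resolution (assumed)

    print(f"Pluto's apparent diameter: {angular_size_arcsec:.2f} arcseconds")
    print(f"Resolution elements across the disk: {angular_size_arcsec / HUBBLE_RESOLUTION_ARCSEC:.0f}")
    # About 0.1 arcsecond across -- only a couple of resolution elements, even for Hubble.

On that rough accounting, Pluto's whole disk spans only a pixel or two in the best Earth-orbit images, which is why the artist had to work from sparse data and the mission team's guidance.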
FROM THE EDITORS
Time Brings Change
Time does indeed fly. Some days it soars, some days it rockets. Its incessant progression motivates us to manage it. We share our time, track our time, and schedule our time. No matter how hard we try to slow it down, there's no stopping it. Meanwhile, each passing moment brings a new experience that deserves our attention. That is, until the next moment displaces it. And so it goes.

We have now reached the year 2015. This presents the opportunity to look back, take stock of our accomplishments, and prepare for what is to come. After covering vast scientific territory last year, from the diversity of fossils observed in avian evolution to fluctuations in Earth's magnetic field, we're excited about the science we'll be bringing you in the new year.

With this issue, the element of time is crucial to many of the articles. In the photo essay "Journey to the Solar System's Third Zone" (pages 42–45), Alan Stern describes the long-anticipated arrival of the New Horizons spacecraft at its destination, Pluto, following 42 years of planning and a 9-year journey across the Solar System. Roald Hoffmann connects the dots on 150 years of research on aromatic compounds in the Perspective column, "The Many Guises of Aromaticity" (pages 18–22). And in the Technologue column, "Each Blade a Single Crystal" (pages 30–33), Lee Langston describes the 75-year evolution of jet engines, from the first jet-powered flight to our modern-day air fleet enhanced by advanced high-temperature turbines.

Sometimes the problem is clear, but it takes time to implement a solution. In "Arsenic, the 'King of Poisons,' in Food and Water" (pages 34–41), Andrew Yosim, Kathryn Bailey, and Rebecca Fry explain how 40 years of research on the health effects of arsenic exposure may finally result in a new standard for consumption of rice and rice-based products.

Just as these achievements emerged over time, new discoveries are currently unfolding at their own pace. There is one certainty: The passage of time ensures that American Scientist's journey, like those other long journeys, is leading to exciting new places.
As American Scientist charts a course to the future, we expect transitional moments. This marks one of those junctures. For nearly 25 years, we have published the work of Nobel laureate Roald Hoffmann. Roald has been an indefatigable writer, producing more than 50 American Scientist columns that together comprise an unparalleled anthology of chemistry concepts. His flair for interweaving storytelling and science has garnered praise and recognition, including a place in The Best American Science Writing 2003. We are forever indebted to him for his contributions. However, the moment has arrived for us to establish a new relationship. Roald will be working more on other projects, and we will be bringing some new voices and new concepts into American Scientist. But there are some things time cannot change. There are many more great discoveries to be made and countless inspirational stories yet to be told. His voice will always be welcome here. —Jamie L. Vernon (@JLVernonPhD)
AMERICAN SCIENTIST • www.americanscientist.org
VOLUME 103, NUMBER 1

Editor-in-Chief Jamie L. Vernon
Senior Consulting Editor Corey S. Powell
Managing Editor Fenella Saunders
Senior Editor Sandra J. Ackerman
Associate Editor Katie L. Burke
Senior Writer Brian Hayes
Contributing Editors Marla Broadfoot, Catherine Clabby, Laura Poole, Anna Lena Phillips, David Schoonmaker, Michael Szpir
Editorial Associate Mia Evans
Art Director Barbara Aulicino
Contributing Art Director Tom Dunne
Scientists' Nightstand Editor Dianne Timblin
American Scientist Online Managing Editor Katie-Leigh Corder
Interim Publisher David Moran
Advertising Sales: [email protected]
Editorial and Subscription Correspondence: American Scientist, P.O. Box 13975, Research Triangle Park, NC 27709; [email protected][email protected]

Published by Sigma Xi, The Scientific Research Society
President George Atkinson; Treasurer Ronald Millard; President-Elect Mark Peeples; Immediate Past President Linda Meadows; Interim Co-Executive Directors Jasmine Shah, Jamie L. Vernon
Committee on Communications and Publications: James Baur, Marc Brodsky, Thomas Kvale, Dennis Meredith (chair), Antonio Pita, and Andrew Velkey

American Scientist gratefully acknowledges support for "Engineering" through the Leroy Record Fund. Sigma Xi, The Scientific Research Society was founded in 1886 as an honor society for scientists and engineers. The goals of the Society are to foster interaction among science, technology, and society; to encourage appreciation and support of original work in science and technology; and to honor scientific research accomplishments. Printed in USA.
LETTERS
Megafauna Extinction To the Editors: In the Spotlight report “New Information from Ancient Genomes” by Sandra J. Ackerman (September–October), I question the statement on page 327 that Eske Willerslev’s data “constitute a strong case that climate change—not hunting—caused the last great extinction.” Others argue that the extinction of the megafauna caused changes in the vegetation, not the other way around. Richard Gillespie, PhD University of Wollongong Wollongong, Australia Editors’ Note: We asked Dr. Willerslev to comment on this question, and he said: “In the scientific community the processes causing the extinction of the Late Quaternary megafauna of the Northern Hemisphere, such as the woolly mammoth and woolly rhinoceros, are heavily debated. The statement that these extinctions were due to climatic changes that caused shifts in vegetation is based on two recent papers from my groups: Lorenzen et al. (Nature 2011) and Willerslev et al. (Nature 2014). In my view the former, which is the largest scale population genetic study to
date on Late Quaternary megafauna, clearly shows climate as the main driver of population size changes in these animals. The latter demonstrates that climate change during the Late Quaternary resulted in severe vegetation changes and the loss of key food sources for the megafauna.”
Living Fossil Statistics

To the Editors: I found the article "The Evolutionary Truth About Living Fossils" by Alexander Werth and William Shear (November–December) informative. On page 440, the authors mention "the role that stochastic contingencies play in evolutionary history" and state that "survival of a living fossil [may depend on] chance events in history." If it is correct that evolution proceeds because a few of the many random mutations occurring provide competitive advantages for organisms in certain environments, then it seems not only possible but likely that a few of the species that have arisen on Earth would persist relatively unchanged over long time scales. Maybe there is an opportunity for a statistician to be tasked with an analysis of species longevity. I suspect that the existence of living fossils—the black swans of evolution—would not turn out to be surprising at all.

Stephen L. Brown
Alameda, CA

Drs. Werth and Shear reply: Many paleontologists, particularly the late Jack Sepkoski, have performed statistical analyses on species turnover and longevity. These studies have led to important findings, including the detection of patterns of mass extinction events in the fossil record. Other statistical studies include investigation of what percentage of dinosaur species or genera have been discovered.
Take the Heat Off

To the Editors: In the Computing Science column "Clarity in Climate Modeling" (November–December), Brian Hayes shows the following equation as the energy balance at the surface of the Earth: Q(1 − α) = σT⁴. He says that the heat radiated into space has to equal the heat absorbed from solar radiation to keep the average temperature of Earth's surface relatively constant.
American Scientist (ISSN 0003-0996) is published bimonthly by Sigma Xi, The Scientific Research Society, P.O. Box 13975, Research Triangle Park, NC 27709 (919-549-0097). Newsstand single copy $5.95. Back issues $7.95 per copy for 1st class mailing. U.S. subscriptions: one year $30, two years $54, three years $80. Canadian subscriptions: one year $38; other foreign subscriptions: one year $46. U.S. institutional rate: $75; Canadian $83; other foreign $91. Copyright © 2015 by Sigma Xi, The Scientific Research Society, Inc. All rights reserved. No part of this publication may be reproduced by any mechanical, photographic or electronic process, nor may it be stored in a retrieval system, transmitted or otherwise copied, with the exception of one-time noncommercial, personal use, without written permission of the publisher. Second-class postage paid at Durham, NC, and additional mailing office. Postmaster: Send change of address form 3579 to Sigma Xi, P.O. Box 13975, Research Triangle Park, NC 27709. Canadian publications mail agreement no. 40040263. Return undeliverable Canadian addresses to P. O. Box 503, RPO West Beaver Creek, Richmond Hill, Ontario L4B 4R6.
Discover
KAMCHATKA & Lake Baikal!
Including the Trans-Siberian Express
We invite you to travel the world with Sigma Xi! Explore the two finest natural areas in Russia, the Kamchatka Peninsula and Lake Baikal, and take the Trans-Siberian Express from the Russian Far East across the vast taiga of Russia to Irkutsk and Lake Baikal. Baikal is the richest single location in Russia for endemism, a fabulous reservoir of unique flora and fauna. It is the oldest and deepest lake in the world, storing nearly 20% of the freshwater on Earth! Betchart Expeditions Inc., 17050 Montebello Rd, Cupertino, CA 95014-5435. Phone: (800) 252-4910. Fax: (408) 252-1444. Email: [email protected]. On the web: betchartexpeditions.com
SIGMA XI Expeditions • THE SCIENTIFIC RESEARCH SOCIETY
ONLINE @ americanscientist.org

Porpoise Echolocation
Read "The Acoustic World of Harbor Porpoises" (pages 46–49) and then watch this animation explaining how porpoise echolocation works: http://bit.ly/1xJdsbH

From Dung Beetles to War Ships
Watch University of Montana biologist Doug Emlen explain how the evolution of animal weapons parallels military arms races: http://bit.ly/1uurxdb

Through the Theoretical Glass
Listen to this podcast interview with Duke University chemist Patrick Charbonneau about how his research on the glass transition at higher dimensions has caused a paradigm shift in the study of glass problems: http://bit.ly/1wzUl0N

Medicinal Chemistry Leader
In this podcast interview, F. Ivy Carroll of the Research Triangle Institute explains how he came to develop compounds with potential for treating substance abuse and diagnosing Parkinson's: http://bit.ly/1u2SD68

Find American Scientist on Facebook: facebook.com/AmericanScientist
Follow us on Twitter: twitter.com/AmSciMag
Follow us on Google Plus: plus.google.com/+AmericanscientistOrg/about
Pin us on Pinterest: http://www.pinterest.com/amscimag
Join us on LinkedIn: https://www.linkedin.com/company/american-scientist
Read American Scientist using the iPad App, available through the Apple Store and iTunes Newsstand.

Illustration Credits
Computing Science: Page 11, Brian Hayes
Perspective: Pages 19–22, Tom Dunne
Ethics: Page 27, Tom Dunne
Arsenic, the "King of Poisons," in Food and Water: Pages 36, 38–41, Stephanie Freese
Journey to the Solar System's Third Zone: Pages 42, 44 (bottom, right), Steve Gribben; Page 44 (center), Alex Parker; Page 45 (top, center), Barbara Aulicino
The Acoustic World of Harbor Porpoises: Pages 48–53, Tom Dunne
When the Cause of Stroke Is Cryptic: Pages 55, 57, 58, Tom Dunne
This calculation is not exactly true, because Hayes is ignoring what is called the geothermal heat flux, the heat conducted from the molten core of the Earth to the surface, which is also radiated into space. An article on Wikipedia says the estimated total heat loss from the Earth is about 4.42 × 10¹³ watts, which is about 0.03 percent of solar power absorbed by the Earth. For the Earth's surface temperature to stay relatively constant, the heat radiated must be about 1.0003 times the amount absorbed. This number may not seem like a lot, but another article, "Climate and Earth's Energy Budget," from the NASA Earth Observatory website, estimates that the heat imbalance caused by excess carbon dioxide is about 0.8 watts per square meter. The Wikipedia article says the geothermal heat flux is about 0.087 watts per square meter. I do not think it is justifiable to ignore an input that is on the order of 10 percent of the variable one is trying to predict.

W. C. Rust
Wallace, ID
Mr. Hayes responds: It's true that the Earth's surface is warmed from below as well as above, but the magnitudes of these heat flows are very different: We get roughly 4,000 times as much energy from the Sun as from the Earth's interior. In a model that simply calculates the planet's overall energy budget, the geothermal contribution is too small to have any noticeable effect. It's less than the variation in solar output associated with the sunspot cycle. Furthermore, the geothermal flow is constant on a human time scale, and so it is an unlikely contributor to climate change. On the other hand, one of the important findings of modern climate studies is that small perturbations can have large consequences, especially when they are amplified by feedback effects or other nonlinearities. Accordingly, geothermal flux is taken into account in the detailed computational models that yield quantitative predictions of future climate. For example, the Community Earth System Model includes geothermal flows into the atmosphere, the oceans, and ice on land. Finally, one detail in Mr. Rust's letter requires correction. The equation Q(1 − α) = σT⁴ defines the energy balance at the top of the atmosphere, not at the surface of the Earth. The surface is warmer by more than 30 degrees Celsius because of the greenhouse effect.
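To make the scale of the two heat sources concrete, here is a minimal back-of-the-envelope sketch in Python. The solar flux and albedo are round-number assumptions added for illustration; only the 0.087-watt-per-square-meter geothermal figure comes from the letters above.

    SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
    Q = 340.0             # mean incoming solar flux, W m^-2 (assumed round number)
    ALBEDO = 0.3          # planetary albedo (assumed round number)
    GEOTHERMAL = 0.087    # geothermal heat flux cited in the letter, W m^-2

    def equilibrium_temperature(absorbed_flux):
        """Return the temperature T (kelvins) satisfying absorbed_flux = SIGMA * T**4."""
        return (absorbed_flux / SIGMA) ** 0.25

    t_solar = equilibrium_temperature(Q * (1 - ALBEDO))
    t_total = equilibrium_temperature(Q * (1 - ALBEDO) + GEOTHERMAL)
    print(f"Solar only:         {t_solar:.2f} K")
    print(f"Solar + geothermal: {t_total:.2f} K")
    print(f"Difference:         {t_total - t_solar:.4f} K")  # a few hundredths of a kelvin

On this rough accounting, the geothermal term shifts the equilibrium temperature by only a few hundredths of a degree, which is the sense in which a whole-planet energy budget can treat it as negligible.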
Rembrandt’s Grasp To the Editors: In the article “What’s in a Grasp?” (September–October), David A. Rosenbaum, Oliver Herbort, Robrecht van der Wel, and Daniel J. Weiss begin with an example of a Rembrandt painting of Dutch militia members to submit that studying action planning and control “may inform the design and skill-training systems and safer and more efficient setups for work.” However, Dutch military leaders accomplished just this with a new drill system before 1600. By the 1607 publication of the illustrated drill manual by Jacob de Gheyn, Wapenhandlinghe van Roers Musquetten ende Spiessen, Dutch soldiers were so well drilled that they came close to automatic handling of their firearms. I attest to the drill’s efficiency through my participation in living history reenactments. I can use proficiently
reproductions of the firearms and accoutrements that Rembrandt painted in Officers and Other Militiamen from Amsterdam's Second District, led by Captain Frans Banning Coq and Lieutenant Willem van Ruytenburch, a daytime scene long misconstrued as The Night Watch. Rosenbaum and his coauthors erred when they said the soldier in red is handling his ramrod (scouring stick); he is not. Just before the moment Rembrandt depicted him, this soldier used his right thumb-up grasp to remove a cover from a wooden charger; Rembrandt painted the subsequent motion with the soldier's right thumb down, emptying gunpowder into the barrel. The chargers are shown suspended from a bandolier across his left shoulder. After the powder was poured, a bullet would be taken from a pouch on the bandolier, inserted into the barrel, and only then would the scouring stick be withdrawn to push the bullet down the barrel.

Rembrandt likely knew the de Gheyn illustrations—the painting depicts several steps from the manual. He also had many opportunities to observe militia drill gatherings, which had become social events as well as military preparedness exercises. This painting was commissioned circa 1639 to hang in the great hall of the arquebusiers' guild. The static poses used by other portrait artists were eclipsed by Rembrandt's action composition of men handling their weapons. The portrait was immediately acclaimed for its dynamic realism. Contemporary scientists must consider the social milieu, symbolism, and technology of our ancestors before forming conclusions about them. Portrait painting provided Rembrandt with much of his livelihood; he knew his mind and world better than we do today.

Neal L. Trubowitz, PhD
Andover, MA

Drs. Rosenbaum, Herbort, van der Wel, and Weiss respond: We appreciate Dr. Trubowitz's comments about 17th-century Dutch musketry and defer to his expertise on this subject. Our main point remains unaffected by his critique of our opening example: Grasps reflect planning, as revealed by behavioral experiments, neuroscience, and computational modeling.

Errata
In "New Information from Ancient Genomes" (Spotlight, September–October), the last glacial maximum was equated with the Little Ice Age. The two are distinct. In "New Twists in Earth's Radiation Belts" (September–October), the illustration on page 380 shows the maximum value for AE8MAX Integral Flux as 11 MeV when in fact it is 1 MeV. The credits for two images in the November–December issue were confused: The bottom right image on page 436 is by Mark V. Erdmann; the top right image on page 438 is by Doug Perrine. We have corrected these errors online.
How to Write to American Scientist
Brief letters commenting on articles appearing in the magazine are welcomed. The editors reserve the right to edit submissions. Please include an email address if possible. Address: Letters to the Editors, P.O. Box 13975, Research Triangle Park, NC 27709 or [email protected].
Own a Genuine Piece of NASA Space History
This is a rare opportunity to own a certified, flown spacecraft shingle fragment from NASA's Mercury Atlas-1. Professionally framed and mounted, the relic of René alloy is an actual piece of an afterbody shingle recovered from the MA-1 spacecraft. Includes certificate of authenticity.
Mercury Atlas-1 Framed Relic #3155019…..$199.95
ScientificsOnline.com
Spotlight
New Disease Emerges as Threat to Salamanders The invasive chytrid fungus is spreading in Europe; new policy could prevent its introduction in the United States.
The death knell for the fire salamanders of the Netherlands offers a sharp warning about the ecological risks of pets as invasive disease vectors. A study published in Science in October shows that the salamanders succumbed to a type of chytrid fungus that originated in Asia and causes a disease known as chytridiomycosis; the pathogen, called Batrachochytrium salamandrivorans, can be transmitted by some Asian salamanders and kill susceptible species within two weeks. More than 2.3 million Chinese fire belly newts, one potential reservoir for the disease, were imported into the United States between 2001 and 2009. The disease is lethal to 12 of the 17 non-Asian salamander species tested. The pathogenic chytrid fungi are disquieting examples of how quickly an invasive disease can decimate an entire class of the tree of life. More than 200 amphibian species have gone extinct
already due to a related pathogen that emerged in the 1990s, B. dendrobatidis. The new invasive chytrid showed up in the Netherlands in 2010 when volunteers in an amphibian monitoring program looked for fire salamanders in the country’s three wild populations and returned with barely any: The population had experienced a 96 percent decline, and no one knew why. Because it was so sudden, ecologist Annemarieke Spitzen-van der Sluijs of the NGO Reptile, Amphibian, and Fish Conservation the Netherlands knew that she had either a chemical spill or disease outbreak on her hands. She sought the help of wildlife disease veterinarians An Martel and Frank Pasmans of Ghent University, Belgium, who tested the specimens for all known causes of such dramatic amphibian declines and came up with null results. The remaining 150 or so Dutchorigin fire salamanders were kept in
[Photograph by Edward Kabay. Mortality of a common salamander in the eastern United States, the red-spotted newt, was 100 percent after animals were exposed to a new invasive pathogen, Batrachochytrium salamandrivorans. Researchers recommend preventing the disease's introduction to the United States from Europe or Asia through regulations on the pet trade.]
captivity, but they also began dying. As Martel performed necropsies, she noticed microscopic fungal growth in skin lesions typical of the known chytrid fungus. “Because the fungal organisms looked very similar to B. dendrobatidis but the tests for it were negative, we knew that this was a new species,” she says. She and her colleagues published these results in Proceedings of the National Academy of Sciences of the U.S.A. last year. Their new study in Science, led by Martel and Pasmans, reveals the origin and current distribution of the new chytrid fungus B. salamandrivorans, as well as the susceptible and reservoir species among the more than 5,000 amphibian individuals from around the world that they tested. The new chytrid disease has not been detected in the Americas yet. Karen Lips of University of Maryland, who is a coauthor on the Science paper and an expert on chytrid fungus, wants to keep it that way. She, along with several US-based conservation NGOs, is pushing for tighter controls on the pet trade to prevent the otherwise inevitable spread of the disease to the United States. According to this new study, two common salamanders of North America, the red-spotted newt in the east and the rough-skinned newt in the west, are susceptible to the new chytrid disease. “All evidence says if it gets here and escapes into the wild, it’s going to spread all over North America,” says Lips. “Even though most of our North American salamanders were not tested, if you look at the phylogenetic tree, it suggests that at least some of them are going to be susceptible to this disease.” North America has the highest diversity of salamanders in the world, with hotspots in the southern Appalachian Mountains. Lips says that about 50 percent of North America’s salamanders are already threatened. This new disease could push many of these species toward or over the brink of extinction. “In this day and age, our biggest threats are emerging infectious diseases. But, officially, there’s no way the United States has to require anybody to show that their imports of live animals are clean, and they have no legal tool to prevent that introduction,” declares Lips. “We’re not only blindfolded, but
our hands are tied as well. It’s only a matter of time before one of those infected animals is imported, and either escapes or is released.” The US Fish and Wildlife Service (FWS) holds the key to regulating live animal imports that could affect wildlife, but Lips says current legislation needs updating to deal with emerging infectious diseases. The Invasive Fish and Wildlife Prevention Act, a proposed bill awaiting congressional approval, would give the FWS more control over monitoring and regulating invasive fish and wildlife and their diseases. Another amendment could also address the threat of B. salamandrivorans: Under the Lacey Act, a law regulating the trade of live organisms, FWS can regulate the importation of foreign amphibians, if they are deemed “injurious to human beings, to the interests of agriculture, horticulture, forestry, or to wildlife.” Lips and many researchers and conservationists would like the FWS to allow the import of salamanders only if they are shown to be free of such lethal invasive disease. Under Title 50 of the Lacey Act, the FWS has passed such measures for imported trout and salmon after several viruses posed a threat to domestic fish. Similar legislation to prevent spread of the first chytrid disease was proposed by the Defenders of
Wildlife, a conservation-oriented NGO, in a petition in 2009, but the legislation has not moved forward. In response to an emailed inquiry about Lips’s points, FWS public affairs specialist Laury Parramore stated: “The Service is very concerned about the potential mortality the salamander chytrid could cause. The United States has more salamander species than any other country, and many are already federally listed as endangered or threatened. The Service takes this issue seriously and is looking at various options under our authorities, but we have not yet completed our review.” To prevent the introduction of this invasive disease, Lips and those at FWS think that policy and science need to be proactive. Meanwhile, the new salamander disease has already spread from the southern Netherlands to Belgium. Martel explains, “The first outbreak was discovered in the Netherlands in 2010 and lasted until 2012. Then, at the end of December 2013, we had the first outbreak in Eupen, Belgium, which is approximately 30 kilometers south from the first outbreak. Then, in April this year, we had the third outbreak in Robertville, Belgium, which is again about 30 kilometers south. If the disease continues to progress at the same rate, which we expect because
First Person: Aaron Chou

Is there a fundamental unit of space, and hence a baseline graininess to the universe? If so, that limit caps the total possible amount of information the universe can store. It also has a weirder implication: The three-dimensional reality we perceive might be an illusion—a projection of space similar to a hologram, which is actually encoded in two dimensions. As bizarre as those ideas may sound, they are actually testable. Aaron Chou, a physicist at the Fermi National Accelerator Laboratory in Batavia, Illinois, is the lead scientist and project manager for the Holographic Interferometer, or Holometer for short. Chou explained to Managing Editor Fenella Saunders how the instrument may help answer some of these questions.

What is a holographic interferometer and what are you trying to find?
It's two 40-foot-long laser interferometers, which make extremely precise measurements of the relative positions of different objects, in particular the relative positions of the mirrors inside these devices. Interferometers provide the best resolution of any instrument because they use billions and billions of photons,
so you can make that measurement over and over again. We're using about 10²² photons per second. We're resolving positions to about 1,000 times smaller than the size of an atomic nucleus. We are trying to see if there is any ultimate limit in the precision that one can possibly make in this kind or any other kind of measurement. There are some ideas originating from gravita-
there are no natural barriers in Europe to prevent it, then in about 25 to 50 years, all the salamanders in Europe will be affected.” The remaining 150 Dutch fire salamanders are surviving in captivity, but there currently is no funding for a captive breeding program. The researchers do not want to release them back into the environment until the disease is gone from the outbreak areas, a situation they are monitoring now. According to Martel and Spitzen-van der Sluijs, they do not know yet how to slow the spread of the disease, but preventing further introductions through the pet trade will keep the pathogen from spreading even more quickly. Although some treatments have been proposed for the first chytrid fungus, B. dendrobatidis, such as immunization protocols and probiotics, none have been sufficiently tested in the field. Perhaps insights gained from the first chytrid disease will accelerate research on the second, but at present very little is known about the basic biology of B. salamandrivorans or how it compares to B. dendrobatidis. Without fast and strategic action, the new chytrid disease could exacerbate worldwide amphibian declines and extinctions, especially for salamanders.—Katie L. Burke tional physics and quantum mechanics that imply that the information storage capacity of the space-time itself that we live in is finite, that it’s just like in a hard drive or a memory stick, there’s a maximum amount of information that you could possibly pack in. If you try to measure it better than that precision, you won’t be able to because space doesn’t have any more digits to give you. Can you describe how this limit relates to the 3D nature of the universe? The information content of all the matter that you throw into a black hole would end up being stored on the surface of the black hole. The bizarre thing about black hole physics is the information content is not proportional to the volume of the black hole, but rather just the surface area. This is called the holographic principle; it’s an analogy to a hologram in which you store an apparently 3D image on a 2D surface. But if you scatter light off the surface in a particular way, the pattern of the scattered light apparently reconstructs the 2015
three-dimensional image. This is similar to what’s believed to happen when objects fall into black holes, that somehow the three-dimensional information of the object that fell in gets transcribed and encoded on the two-dimensional surface of the black hole.
How does that affect the information storage limit of the universe?
Say you start with a situation where you don't have any black holes at all but you have a bunch of memory sticks, and you say well, gee, I don't have enough storage here in the memory sticks in my backpack, and I need more storage capacity. I'm going to go buy a bunch more memory sticks, cram them all in my backpack, and then I'll have more storage capacity. Eventually what happens is, if you're super strong, like Hulk strong, you cram all those memory sticks in, and the density of all that matter inside your backpack gets so large that you form a black hole. You might say, well, that wasn't very good, but no matter, I'll go buy some more memory sticks. But then you see to your dismay that instead of being able to cram more information into this backpack-shaped black hole, the black hole grows, so the actual density of information storage you have in your backpack stays the same. That's kind of what we mean when we say that it could be that space-time itself has some maximum information storage capacity. Once you reach the black hole limit, if you try to cram in more information, it just takes up more space.

[Photographs courtesy of Fermilab. An overhead view shows the two vacuum vessels that house the beam splitters for the Holometer's two interferometers. The beam splitters ensure that an identical laser signal is sent to each interferometer, so their measurements are equivalent. Aaron Chou and his team spend a lot of their time making hands-on adjustments to the instruments. "I find it personally much more fun to be working with the equipment rather than sitting at a desk and dealing with processed data later on," Chou says.]

How would this limit be reflected in the Holometer measurements?
There's a prediction that if space runs out of digits, it gives you the same kind of error in the measurement of the two devices that are situated close to each other. Your measurements start being correlated at that point for no reason.

If you do find this limit, does that mean that 3D is a construct?
If it turns out to be true that the information is stored on two-dimensional surfaces like in a hologram, rather than in three-dimensional volume, I think that it would be a very interesting curiosity, and maybe it would lead us down to other paths of thought and study. I don't think it really affects our everyday three-dimensional lives. One could ask if you find it disturbing that all the instructions for constructing a person or an animal or a plant could actually be encoded in one dimension using an alphabet based on four different letters.
What’s your timeline for figuring out whether there is a limit? We have recently commissioned our detectors to be operating at full sensitivity, so we are beginning to collect data. We’re expecting to have reportable results on a one-year time scale. Any time you operate an instrument at greater sensitivity than you have ever done before, you’re going to find all sorts of problems, so at that point you enter in a long debugging phase to make sure that if you do see something that looks a little
bit odd, that you really figure out what it is, so if we do see this unexpected limit on the precision of our measurements, that we can confidently draw some sound scientific conclusions from it. How do you handle the pressure that you might not find anything? One of my favorite professors from graduate school described basic research to us as it’s like going out for a midnight swim in the ocean. Maybe by chance you’re going to bump into something floating out in the water, but most likely it’s just going to be perfectly clear waters. But that doesn’t prevent you from wanting to go out for a midnight swim. A lot of the fun and the excitement is in the process itself. I personally think of each experiment I work on as a lottery ticket. The probability that I’m actually going to find anything in any particular experiment is pretty small. But on the other hand, it’s not vanishing. All of these experiments I work on have a very good theoretical motivation. So if we do find something, life is really good.
Briefings

In this roundup, associate editor Katie Burke summarizes notable recent developments in scientific research, selected from reports compiled in the free electronic newsletter Sigma Xi SmartBrief. Online: https://www.smartbrief.com/sigmaxi/index.jsp

Humans Made Art Earlier
New dating of cave paintings in Indonesia reveals that they are more than 40,000 years old, casting doubt on theories of art in human prehistory. These paintings are among the earliest ever found, and their location is a surprise to archaeologists. Other contemporary cave art has been found only in Europe, and archaeologists thought that the practice of cave painting originated there. The revised age measurements, combined with previous findings that some carved patterns in Africa are 50,000 years old, suggest that humans may have developed artistic proclivities before their migration out of Africa, beginning around 75,000 years ago. Alternatively, artistic abilities may have arisen independently in different societies. (Image: Kinez Riza)
Aubert, M., et al. Pleistocene cave art from Sulawesi, Indonesia. Nature 514:223 (Published online October 8)

Stem Cell Therapies for Blindness
A new stem cell treatment for two types of macular degeneration shows promise in initial human trials. Researchers led by Robert Lanza of Advanced Cell Technology treated embryonic stem cells in the lab to induce their development into retinal cells and then injected them behind the affected retinas in 18 patients. After three years, more than half the test subjects experienced improved vision, and none experienced complications associated with immune system rejection of the new cells. In a separate study, researchers from University of Southampton discovered stem cells in the human eye that develop into light-sensitive cells. These cells hold promise for reversing some types of blindness, such as macular degeneration, through regenerative therapy. So far, the therapy has been researched only in the lab; clinical trials are expected to be under way in five years.
Chen, X., et al. Adult limbal neurosphere cells: A potential autologous cell resource for retinal cell generation. PLoS ONE doi:10.1371/journal.pone.0108418 (October 1)
Schwartz, S. D., et al. Human embryonic stem cell-derived retinal pigment epithelium in patients with age-related macular degeneration and Stargardt's macular dystrophy: Follow-up of two open-label phase 1/2 studies. Lancet doi:10.1016/S0140-6736(14)61376-3 (Published online October 15)

Neural Tract Lost for 100 Years
A neural pathway was discovered but forgotten for more than 100 years, potentially because of a dispute between a student and his mentor. The vertical occipital fasciculus (VOF) connects parts of the brain important for perception. Jason Yeatman, now of University of Washington, rediscovered the structure a few years ago while working on his PhD at Stanford, but could not find mention of it in the scientific literature until a colleague alerted him to an 1881 brain atlas by German neurologist Carl Wernicke. The neurologist's mentor was an anatomist named Theodor Meynert, who had proposed that all neural pathways connect front to back in the brain, an idea generally accepted among his contemporaries. But the VOF was a vertical connection, and it is mysteriously missing from Meynert's publications following its discovery. Study of the VOF will help neuroscientists understand how the brain and vision system work together to perceive categories, such as reading words or recognizing a face. (Image: Wikimedia Commons)
Yeatman, J. D., et al. The vertical occipital fasciculus: A century of controversy resolved by in vivo measurements. Proceedings of the National Academy of Sciences of the U.S.A. doi:10.1073/pnas.1418503111 (Published online November 17)

Long-Armed Dinosaur Resolved
An incomplete fossil of a dinosaur with the longest forelimbs of any bipedal animal was first discovered in Mongolia in 1965. With arms longer than two meters, Deinocheirus mirificus was a puzzle to paleontologists, who were not sure where to put it on the dinosaur family tree. Two new, almost complete fossil skeletons of the creature have been unearthed in Mongolia in the past decade, resolving this half-century-long mystery. The fossils showed that this ancient beast is the largest known ornithomimosaur, a group of ostrichlike dinosaurs. Although these dinosaurs are known for being fast runners, the massive hindlimbs and heavy body of D. mirificus indicate that it was a slow mover. Stomach contents showed that it ate fish and was probably an omnivore that lived in a wet environment. (Image: Yuong-Nam Lee/Korea Institute of Geoscience and Mineral Resources)
Lee, Y.-N., et al. Resolving the long-standing enigmas of a giant ornithomimosaur Deinocheirus mirificus. Nature 515:257 (November 13)

Odd Methane Source Feeds Fires
In a place called Yanartas, meaning "flaming stone," in Turkey, there are fires that have been aflame for millennia. The source of the methane lighting the fires has been a mystery: It is not produced biologically, but abiotic reactions forming methane were thought to occur only at temperatures above those experienced at Yanartas. A new study by Giuseppe Etiope of the National Institute of Geophysics and Volcanology in Italy showed that a rare metal called ruthenium found in rock underneath the site can catalyze a reaction forming methane in the lab at temperatures below 100 degrees Celsius—within the range of the area's climate.
Etiope, G., and A. Ionescu. Low-temperature catalytic CO2 hydrogenation with geological quantities of ruthenium: A possible abiotic CH4 source in chromitite-rich serpentinized rocks. Geofluids doi:10.1111/gfl.12106 (Published online September 18)
Computing Science
Cultures of Code Three communities in the world of computation are bound together by common interests but set apart by distinctly different aims and agendas. Brian Hayes
Brian Hayes is senior writer for American Scientist. Additional material related to the Computing Science column can be found online at http://bit-player.org. E-mail: [email protected]

Kim studies parallel algorithms, designed for computers with thousands of processors. Chris builds computer simulations of fluids in motion, such as ocean currents. Dana creates software for visualizing geographic data.

These three people have much in common. Computing is an essential part of their professional lives; they all spend time writing, testing, and debugging computer programs. They probably rely on many of the same tools, such as software for editing program text. If you were to look over their shoulders as they worked on their code, you might not be able to tell who was who.

Despite the similarities, however, Kim, Chris, and Dana were trained in different disciplines, and they belong to different intellectual traditions and communities. Kim, the parallel algorithms specialist, is a professor in a university department of computer science. Chris, the fluids modeler, also lives in the academic world, but she is a physicist by training; sometimes she describes herself as a computational scientist (which is not the same thing as a computer scientist). Dana has been programming since junior high school but didn't study computing in college; at the startup company where he works, his title is software developer.

These factional divisions run deeper than mere specializations. Kim, Chris, and Dana belong to different professional societies, go to different conferences, read different publications; their paths seldom cross. They represent
different cultures. The resulting Balkanization of computing seems unwise and unhealthy, a recipe for reinventing wheels and making the same mistake three times over. Calls for unification go back at least 45 years, but the estrangement continues. As a student and admirer of all three fields, I find the standoff deeply frustrating. Certain areas of computation are going through a period of extraordinary vigor and innovation. Machine learning, data analysis, and programming for the web have all made huge strides. Problems that stumped earlier generations, such as image recognition, finally seem to be yielding to new efforts. The successes have drawn more young people into the field; suddenly, everyone is “learning to code.” I am cheered by (and I cheer for) all these events, but I also want to whisper a question: Will the wave of excitement ever reach other corners of the computing universe? Setting Agendas What’s the difference between computer science, computational science, and software development? When Kim the computer scientist writes a program, her aim is to learn something about the underlying algorithm. The object of study in computer science is the computing process itself, detached from any particular hardware or software. When Kim publishes her conclusions, they will be formulated in terms of an idealized, abstract computing machine. Indeed, the more theoretical aspects of her work could be done without any access to actual computers. When Chris the computational scientist writes a program, the goal is to simulate the behavior of some physical system. For her, the computer is
not an object of study but a scientific instrument, a device for answering questions about the natural world. Running a program is directly analogous to conducting an experiment, and the output of the program is the result of the experiment. When Dana the developer writes a program, the program itself is the product of his labors. The software he creates is meant to be a useful tool for colleagues or customers—an artifact of tangible value. Dana’s programming is not science but art or craft or engineering. It is all about making things, not answering questions. Should these three activities be treated as separate fields of endeavor, or are they really just subdivisions of a single computing enterprise? The historian Michael Mahoney, an astute observer of computing communities, suggested that a key concept for addressing such questions is the “agenda.” The agenda of a field consists of what its practitioners agree ought to be done, a consensus concerning the problems of the field, their order of importance or priority, the means of solving them (the tools of the trade), and perhaps most importantly, what constitutes a solution…. The standing of the field may be measured by its capacity to set its own agenda. New disciplines emerge by acquiring that autonomy. Conflicts within a discipline often come down to disagreements over the agenda: what are the really important problems? The issue, then, is whether Kim, Chris, and Dana set their own agendas, or whether each of them has merely chosen to concentrate on se-
Computer science: understand the nature of computation.
Determine what can be computed. Determine what can be computed with finite resources. Determine what can be computed efficiently. Compare models of computation (e.g., classical and quantum). Organize data for efficient storage and retrieval. Ensure the correctness of concurrent operations. Define the syntax and semantics of programming languages.

Computational science: use computation to understand nature.
Map natural processes onto computational ones. Simulate the behavior of physical, biological, and social systems. Capture and manage large volumes of data. Minimize numerical errors. Solve large systems of linear equations. Approximate the solutions of differential equations. Encode continuous quantities in discrete form. Devise ways to visualize spatial and temporal patterns, such as vector fields.

Software development: write useful programs.
Learn to manage complexity. Map abstract concepts onto concrete program structures. Provide tools for debugging. Provide tools for collaborative work and code sharing. Manage versions and variants of code. Define mechanisms and standards for exchanging data between programs. Improve the human interface with computers. Learn what factors influence programmer productivity. Learn what features make programming languages more expressive.
Communities that share an interest in computing but have distinct goals can be distinguished by their agendas: the lists of problems to solve and tasks to accomplish that members of each community agree on. The idea of defining a community by its agenda was introduced by the historian Michael Mahoney. Shown here are some possible to-do items for computer science, computational science, and software development.
lected parts of a shared agenda. There are certainly questions that would interest all three of them. A prominent example is “What can be computed efficiently?” Theoretical computer science seizes on this question as one of its most central, existential concerns, but the answer also matters to those who write and run programs for practical purposes. Thus the three groups might seem to stand on common ground. The trouble is, a theorist’s answer to the question may not be much use to a practical programmer. Knowing that the worst-case running time grows as some polynomial function of the problem size doesn’t actually tell you whether a specific computation will take seconds or centuries. The issue here is not that all computer scientists are otherworldly theorists. Sometimes the theoretical challenges arise elsewhere. As a fluid dynamicist, Chris has on her agenda the tricky theoretical problem of partitioning a continuous fluid into discrete parcels suitable for processing by a digital computer. Solutions to such problems have come mainly from mathematicians, engineers, and physicists rather than computer scientists. One of the glories of computer science in its early years was a deep analysis of programming languages. Everyone who does computing would seem to have a stake in this work. In an interesting collaboration between computer scientists, mathematicians, and linguists, the languages were classified www.americanscientist.org
according to their expressive power. The next project was to devise algorithms for parsing programs—breaking statements down into their basic grammatical units—and then assigning meaning to the statements. Most of this work was completed by the 1970s. Programmers today are intensely partisan in their choices of programming languages, yet interest in the underlying principles seems to have waned. Two years ago I attended a lunch-table talk by a young graduate student who had turned away from humanities and business studies to take up a new life designing software. She had fallen in love with coding, and she spoke eloquently of its attractions and rewards. But she also took a swipe at the traditional computer science curriculum. “No one cares much about LR(1) parsers anymore,” she said, referring to one of the classic tools of language processing. The remark saddened me because the theory of parsing is a thing of beauty. At the very least it is a historical landmark that no one should pass by without stopping to read the plaque. But, as Edith Wharton wrote, “Life has a way of overgrowing its achievements as well as its ruins.” Roots of Computing Schisms in the computing community can be traced back all the way to the beginning of the digital electronic era, circa 1950. The designers of the early machines, such as the ENIAC in the United States and the EDSAC in Brit-
ain, wove together ideas from sources that must have seemed unlikely bedfellows. Basic notions of how to “mechanize thought” came from mathematical logic, including the 19th-century work of George Boole on a form of algebra in which the elements are not numbers but the values true and false. Electrical engineering, and in particular switching theory, provided circuits that implement Boolean operations in hardware. Mathematical logic became one of the seed pearls on which theoretical computer science grew. Circuit theory also remains a core component of computer science and engineering. Indeed, the design and manufacture of hardware represents yet another independent computing culture. Alongside mathematical logic and electrical circuits, there was a third tradition present at the birth of modern computing. The users of those first high-speed computing machines came mainly from applied mathematics and closely allied areas such as physics. Prominent among the users were tablemakers, who compiled tables of logarithms, trigonometric functions, and all sorts of other quantitative information. (The ostensible reason for building the ENIAC was to compile ballistic tables for artillery.) Another important constituency among the users were the numerical analysts, who devise schemes for finding approximate solutions to equations that cannot be solved exactly. Most of the interesting problems in the sciences fit this description. For 2015
For the tablemakers and the numerical analysts, the electronic computer was a problem-solving or question-answering tool; the heirs of these pioneers are today’s computational scientists. Notably absent from the planning for early computer projects was any serious discussion of programming. For each problem to be solved, a mathematician or other professional was expected to design the scheme of computation, perhaps in the form of a flow chart annotated with equations. Translating this plan into instructions suitable for the machine was viewed as a routine clerical task, requiring no intellectual engagement with the underlying ideas. In the case of the ENIAC, six women were recruited as “coders” to do this work. Three of the six had majored in mathematics in college, and all of them were absurdly overqualified for clerical
duties. As it turned out, their qualifications were put to the test, because the work of preparing programs for the machine was anything but routine. The discovery that programming presents serious intellectual challenges apparently came as a surprise to the early leaders of the field. Maurice V. Wilkes, the principal architect of the EDSAC, had an epiphany while writing the first substantial program for that machine in 1949: The EDSAC was on the top floor of the building and the tape-punching and editing equipment one floor below.… It was on one of my journeys between the EDSAC room and the punching equipment that … the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs. Recognizing that programming requires skill and ingenuity elevated the status of the occupation. Unfortunately, not everyone benefited from this upgrade. Women who had pioneered in the art of programming were largely displaced by men—an issue the profession is still dealing with 60 years later.
Silver Bullets
By the time computers were being manufactured for commercial use, programming was recognized as a costly bottleneck. The work was tedious and slow; people good at it were hard to find; even the most talented and dedicated programmers made mistakes. Programming projects became notorious for running over budget and behind schedule. Progress in computing was threatened by a “software crisis.” The subsequent history of programming methodology can be read as an extended campaign to slay this dragon. Higher-level programming languages—closer to the vocabulary of
the problem domain, further from the minutiae of the hardware—were the first weapon, and the most effective one. A regimen called structured programming tried to untangle the logic of programs by allowing only a few kinds of loops and branches. Under the banner of modularity, programs were to be assembled out of pretested, reusable units. Another movement called for formal proofs of program correctness. More slogans paraded by: abstraction, encapsulation, declarative programming, functional programming, object-oriented programming, design patterns, test-driven development, agile development. The sheer variety of these remedies is a hint that no one of them was a cure-all. Even now the software crisis is still with us: Witness the debacle of the Healthcare.gov website in 2013. The debate over software quality has included repeated calls to make programming a proper engineering discipline, with recognized standards of proficiency and perhaps requirements for certification or licensing. A related trend imposed more manage-
ment structure on the programming process. In the software shops of the 1960s and ’70s, the way to get ahead was to rise above the actual writing of code and become a system analyst or architect. At the same time, however, another strand of computing culture—or counterculture—was moving in the opposite direction. The enthusiasts who called themselves hackers, most famously situated at MIT among members of the Tech Model Railroad Club, saw computer programming as a puzzle to be solved, a world to explore, a medium of self-expression. They saw it as fun. They resisted the idea that only an elite with engineering credentials would be allowed access to the machinery. The notion that programming could be regulated or restricted was further undermined when personal computers became widely available and affordable in the 1980s.
Coding Is Cool Again
Another wave of irrepressible hacker enthusiasm is washing over us now, as a new generation discovers that coding is cool. Introductory programming courses, which had disappeared from many college curricula, now attract hundreds of students. At Harvard, for example, a hands-on programming course called CS50 has an enrollment of almost 900, the largest in the entire university. Online courses engage millions more. And a group called code.org is working to revive the study of computing in elementary and secondary schools. Why this sudden infatuation with the nerdy side of life? Fad and fashion doubtless play a part. So does the prospect of creating the next billion-dollar app. And there’s always excitement in joining your generation’s mission to change the world. Beyond all that, I would cite one more factor. Within the past five years, programming tools have crossed a threshold of accessibility and power. It’s not that we have finally found the magic elixir that makes programming easy and error-free. The learning curve is still steep. But the view from the top of the hill is spectacular. The same investment of effort that once printed the words “Hello, world” on the computer screen now brings the world itself to that screen. In 1984 I saw a demo of a mapping program created by Michael Lesk and his colleagues at AT&T Bell Labs. The
graphics were crude by modern standards, but the program could answer geographic queries and recommend routes from point to point in the New York area. I was wowed. A key innovation in Lesk’s program was storing the terrain map in small square tiles that could be loaded into memory as needed. Twenty years later, Google Maps employed the same principle (with better graphics) to create the illusion that the computer screen is a window onto a vast unfurling map of the whole planet. New tiles are fetched over the network whenever you move the window or zoom in and out. I was wowed again. Google Maps was state-of-the-art wizardry in 2005; in 2015 anyone can do it. With a dozen lines of code—plus an open-source library called Leaflet and a free web service that supplies the map tiles—you can create your own mapping program, offering the viewer the same breathtaking window-on-the-world experience. The grizzled curmudgeon in me wants to object that this instant cartography is not real programming, it’s just a “mashup” of prefabricated program modules and Internet resources. But building atop the achievements of others is exactly how science and engineering are supposed to advance. Still, a worry remains. How will the members of this exuberant new cohort distribute themselves over the three continents of computer science, computational science, and software development? What tasks will they put on their agendas? At the moment, most of the energy flows into the culture of software development or programming. The excitement is about applying computational methods, not inventing new ones or investigating their properties. In the long run, though, someone needs to care about LR(1) parsers. Guy Lewis Steele, Jr., one of the original MIT hackers, worried in the 1980s that hackerdom might be killed off “as programming education became more formalized.” The present predicament is just the opposite. Everyone wants to pick up the knack of coding, but the more abstract and mathematical concepts at the core of computer science attract a smaller audience. The big enrollments are in courses on Python, Ruby, and JavaScript, not automata theory or denotational semantics.
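As a concrete rendering of the dozen-line mapping mashup described above, the sketch below uses the open-source Leaflet library and OpenStreetMap’s public tile server. The CDN paths, the element id, the coordinates, and the marker text are arbitrary choices for this illustration, not code from the column.

    <!DOCTYPE html>
    <html>
    <head>
      <link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css" />
      <script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
      <style> #map { height: 100vh; } </style>
    </head>
    <body>
      <div id="map"></div>
      <script>
        // Open a window onto the world, centered on lower Manhattan at zoom level 13.
        const map = L.map("map").setView([40.72, -74.00], 13);

        // Each pan or zoom fetches only the 256-pixel tiles needed to fill the window.
        L.tileLayer("https://tile.openstreetmap.org/{z}/{x}/{y}.png", {
          maxZoom: 19,
          attribution: "&copy; OpenStreetMap contributors"
        }).addTo(map);

        L.marker([40.7484, -73.9857]).bindPopup("Hello, world").addTo(map);
      </script>
    </body>
    </html>

The {z}/{x}/{y} placeholders in the tile URL are the modern descendants of Lesk’s square tiles: at zoom level z the map of the planet is cut into 2^z by 2^z squares, and the library works out which of them are visible and fetches just those.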
I would not contend that mastery of the more theoretical topics is a prerequisite to becoming a good programmer. There’s abundant evidence to the contrary. But it is a necessary step in absorbing the culture of computer science. I am sentimental enough to believe that an interdisciplinary and intergenerational conversation would enrich both sides, and help in knitting together the communities.
Bibliography
Baldwin, Douglas. 2011. Is computer science a relevant academic discipline for the 21st century? IEEE Computer 44(12):81–83.
Denning, Peter. 1985. What is computer science? American Scientist 73:16–19.
Felleisen, Matthias, and Shriram Krishnamurthi. 2009. Viewpoint: Why computer science doesn’t matter. Communications of the ACM 52(7):37–40.
Fritz, W. Barkley. 1996. The women of ENIAC. IEEE Annals of the History of Computing 18(3):13–28.
Gramelsberger, Gabriele (ed.). 2011. From Science to Computational Sciences: Studies in the History of Computing and Its Influence on Today’s Sciences. Zürich: Diaphanes.
Mahoney, Michael Sean. 2011. Histories of Computing. Cambridge, MA: Harvard University Press.
Wegner, Peter. 1970. Three computer cultures: Computer technology, computer mathematics, and computer science. Advances in Computers 10:7–78.
Engineering
Can an Engineer Appreciate Art? Physical rigor may seem at odds with playful whimsy, but the tension between the two has produced some of the greatest public works. Henry Petroski
Last spring a magazine cover caught my eye and caused me to reflect on the seemingly divergent creative disciplines of engineering and art. The cover’s artwork showed a realistically rendered International Space Station sprawled against a background of dark, starry space. This part of the cover was all understandable in the context of science: orbital mechanics, astronomy, cosmology, and the like. What started me thinking was the object in the lower right corner of the cover. It was a terrestrial bicycle of no uncommon design being pedaled toward the station by an apparently conscientious young man wearing a helmet and a highway-orange-colored safety vest. By his upright posture, the bicyclist looked neither exhausted nor in a particular hurry. In keeping with the artist’s attention to detail on the space station, the bicycle and its rider were realistically depicted. However, as an engineer, I couldn’t stop myself from wondering how he got into space and how his bicycle was being propelled. Should not an artist consider such matters in creating a piece of art otherwise so meticulously rendered? I know that a magazine cover—or any piece of artwork—is not necessarily meant to be taken literally. Regular readers of The New Yorker, which is known for its absurdist humor, understand that its featured artwork usually refers to some current event,
Henry Petroski is the Aleksandar S. Vesic Professor of Civil Engineering and a professor of history at Duke. His most recently published book is The House with Sixteen Handmade Doors: A Tale of Architectural Choice and Craftsmanship (W. W. Norton, 2014). Address: Box 90287, Durham, NC 27708.
fad, or fashion, often in the context of something relevant to New Yorkers. The June 2014 space-station cover fit into this genre perfectly, alluding to the Russian reaction to Western sanctions imposed in the wake of conflicts in Ukraine. The Russians had recently threatened to cease ferrying astronauts and supplies to the International Space Station, something they had been doing since the curtailment of the US space shuttle program in 2011. Still, what about the incongruity of a bicycle being ridden in space? A bicycle works when its rear tire pushes backward against a road surface. According to Newton’s law of action and reaction, the bicycle is then effectively pushed forward by the road. In open space there is no physical road, so what gives with the cover drawing? Once I noticed this flouting of physical reality it distracted me from simply appreciating the magazine cover as a work of illustrative art. I sought to rationalize the scene.
A License for Illogic
One explanation might be that the editors of The New Yorker are excellent grammarians who know not to deliberately split an infinitive but do not feel that they or their artists need to worry about being consistent with the laws that govern the physical interaction of everything with everything else throughout the universe. After all, physical laws are frequently suspended in works of science fiction, and poets are routinely granted artistic license. Artistic license is like a free pass. For centuries it has excused John Keats of misidentifying the discoverer of the Pacific Ocean in one of his poems. To “correct” the poem by replacing “Cortez” with “Balboa” would ruin the me-
ter, and so the almost 200-year-old sonnet has stood in anthologies as written, perhaps with a footnote identifying the historical error as a curious piece of literary trivia. Altering the magazine cover by putting a road beneath the bicycle would be equally ruinous of the artistic achievement. The bicycle rider on the magazine cover is the kind of delivery boy familiar on the streets of Manhattan: His hands do the double duty of steering the bike and securing the plastic bags of food suspended from the handlebars. The drawing is titled “Free Delivery,” perhaps a jab at the hefty price NASA has had to pay to take American astronauts to and from the space station. The Russians have been charging $70 million per trip. The cover artist, Bruce McCall, is a self-taught illustrator who, among other things, worked early on for the humor magazine National Lampoon. He has also been drawing and painting magazine covers for decades. During this period he has produced at least 40 cover drawings in a genre he calls “urban absurdism,” which “makes life in New York look even weirder than it is.” But only one or two of his previous covers—such as the one in which automobiles are being driven through thin air over the roof of a toll plaza and its EZ-Pass lanes to use skyward ones marked E-ZR-Pass—have depicted things that were physically impossible. Whatever “serious nonsense” McCall draws, he has claimed always to use “a technique of painstaking realism of editorial illustration” and confesses to being “a bug for authenticity.” In a 2008 TED talk, he showed a detailed drawing of a car fitted with a propeller and wings and about to be catapulted over the rooftops of nearby
homes. For the basic automobile he used a 1953 Henry J, which he drew accurately right down to its emerging rear fender fins and cursive trunk emblem. (This short-lived model compact vehicle was introduced in 1950 by the Kaiser-Frazer Corporation. To keep the price down, early versions were made without a trunk lid, the space being accessed through the rear seat. Presumably McCall knew whether the 1953 model he depicted should or should not have had the lid.) But for all of his attention to fine detail in depicting cars and planes, McCall has been cavalier about how he represents them in their interaction with the physical world. The juxtaposition of an allusion to an international political squabble and a New York delivery boy may make for a good chuckle in certain literary circles, but it is an invitation to cognitive dissonance among those of us who prefer our art realistic and our engineering grounded in reality. McCall’s work can seem conflicted in its obsessive detail and its casual treatment of physical law. It leaves a lot of room for alternative interpretations. McCall himself has provided a simple explanation for his choice of subject: “I’ve been trying forever to find a way to honor the food-delivery guys. I wanted to show what heroes they are; they’re intrepid, pedaling along at any time of night to deliver food to New Yorkers.” The meaning may be no more complicated than that. In his passion for honoring the intrepid bicycle riders, the artist probably couldn’t care less about what supports the bike and propels it toward the space station. However, I repeat, I am an engineer, and in their work engineers must conform to the constrictions of natural law and the constraints of reality. They could, akin to what McCall does, put down on paper fanciful skyscrapers without ground floors or graceful bridges that have neither foundations nor piers. Playful blueprints for such structures might even be drafted as fanciful excursions into the imaginable but un-constructible, but they cannot be serious plans for a real building or bridge that could be erected. Architects, coming more from the artistic side, sometimes engage in such flights of fancy; the Soviet “paper architects” known as Brodsky and Utkin are a delightful example. Even engineers enjoy a good laugh now and then—as long
The cover of The New Yorker, June 2, 2014: An act of clever art or sloppy engineering?
as they can still tell the difference between a joke and a fantasy. The public expects the bridges and skyscrapers engineers design to withstand the tests of nature, whether they be administered as earthquakes or hurricanes or other disruptive forces. When something built does succumb to a natural onslaught it should not be because the engineers did not imagine that it could occur but rather because they did not expect it would occur so powerfully and so soon—or because social and financial constraints such as building codes and budgets allowed (if not encouraged) the thing to be built to minimum standards. Engineers can always make something stronger, but strength comes at a cost. On the other hand, public engineering works are also works of public art
and so must be treated as such. A large building or bridge can add to or detract from the built environment. So what about creative embellishment in engineering? What about frills that add to the bottom line? Engineers work with architects on some structures but not on others. A skyscraper such as the Empire State Building is as much a marvel of proportion and form as it is of strength. In addition to those qualities, the Chrysler Building is decorated with hood-ornament gargoyles and hub-cap medallions. In the context of the design integrity of the whole building, these details are neither absurdist nor nonsense. They are apt decorations for an automaker’s office building. In contrast to buildings, bridges are often offered as examples of pure engineering, in that their structure is out
in the open for all to see—although there are exceptions. London’s famous Tower Bridge, built in the late 19th century, is essentially a steel structure, but its proximity to the Tower of London led to prominent parts of the bridge’s then-technologically advanced steel frame being enclosed in stone. The intent was both to conceal a material that had not yet received the imprimatur of architects and to harmonize better with the neighborhood
tourist attraction. Similarly, in the early 20th century, the new technology of reinforced concrete was disguised by the cladding of many a concrete bridge with traditional-looking stone. These are not structurally honest pieces of infrastructure but, like a magazine cover, may work in their context. A well-proportioned bridge does not need an architectural facade or treatment, but many a bridge has one anyway. The Golden Gate Bridge is widely admired for more than its function as a utilitarian structure between San Francisco and Marin County. It is also lauded as a large-scale piece of Art Deco structural sculpture. The faceted fascia on the towers’ horizontal braces and the corbelled brackets on which they rest are not structurally necessary. Like the bridge’s distinctive paint color, they were the thoughtful contributions of a consulting architect, Irving Morrow.
Suspension bridges are awe inspiring in their own right, given that the roadway can stretch for thousands of feet between the towers and beneath the gracefully arcing cables. These bridges work structurally because of the massive anchorages that hold back the cables and keep them from sagging below the curve of their design. Typically the anchorages are made of concrete, but they have to be so large
volumetrically that they compete aesthetically with the relatively slender steel members that typically constitute the bridge between them. Nineteenth-century engineers such as Thomas Telford and John Roebling, who had an artist’s sense of weight and proportion, were able to design an entire bridge structure by themselves. Telford’s Menai Strait and Roebling’s Brooklyn Bridge stand as outstanding architectural as well as engineering achievements. The Brooklyn’s genuine stone towers and anchorages have a proper visual relationship with each other and with the metal structure between them. However, stonework such as that in the Brooklyn Bridge takes considerable skill and time to execute, and so is seldom done nowadays. In the 20th century, suspension bridges equal to and larger than the Brooklyn began to be built with steel towers, not all aesthetically successful. Hence the cladding on the Golden Gate towers. The anchorages were a different matter. Physics, economics, and aesthetics argued against their being made of steel, and so concrete became the material of choice. In the case of the Golden Gate, the anchorages are largely inconspicuous, the Marin County one being nestled beside a prominent headland and the San Francisco one
being tucked beneath approach spans, where the steel arch over the historic fort and the fort itself distract the eye from the massive anchorage. Sometimes anchorages are given architectural—that is, artistic—treatments that can distract from the bridge structure. To my eye, the Benjamin Franklin Bridge across the Delaware River at Philadelphia falls into this category. Its Beaux Arts-style, stone-clad anchorages, designed by the architect Paul Cret, compete with the 20th-century steel structure of the bridge proper, which was the work of chief engineer Ralph Modjeski. The anchorages of New York’s Bronx-Whitestone Bridge were also given an architectural treatment, but of sculpted concrete that was left unclad. The shape and texture of the anchorages are due to consulting architect Aymar Embury II, who made it clear in his writings that the bridge’s chief engineer, Othmar Ammann, had the final word on all things, both structural and visual. I find this bridge’s anchorages to be more in keeping with the scale and style of the bridge itself.
Finding Aesthetic Balance
But back to fine and not-so-fine art. Just as engineers are not averse to having architectural features temper their hard steel and concrete structures, so they do not demand that physically impossible depictions be banned from works of art. Bruce McCall’s homage to the delivery boy and other examples of his serious nonsense are certainly not the only drawings or paintings that run afoul of the laws and facts of nature. We find violations in all sorts of creative media, from political cartoons that caricature well-known individuals with outlandishly prominent ears or chins to movies that are awash with out-of-this-world special effects. There is room in art of all kinds for metaphor and symbolism—and for just pure imagination. The drawings of M. C. Escher often flout Euclidean geometry, but they are universally admired for their impeccable draftsmanship, creative perspective, and challenge to the imagination. Who has not imagined climbing an arrangement of Escher stairways and ending up torn between admiration for their possibility and confusion over their irrationality? The playfulness of the drawings justifies our admiration of them. Perhaps that should be the way an engineer approaches a McCall New
Above: The fantastical bicycle-flight scene in the movie E.T. resonates equally with engineers and artists. Opposite: Modern skyscrapers blend rigorous design with the artistic creativity demanded of high-profile public works. From left to right, they are the Burj Khalifa in Dubai; the Petronas Towers in Malaysia; and Taipei 101 in Taiwan. Each achieves essential functional goals while reflecting a distinctive local aesthetic.
Yorker cover. Could or should “Free Delivery” have been drawn differently to satisfy everyone from artists to editors to everyday readers to engineers who are not purist New Yorkers? Perhaps having the delivery boy riding a rocket-propelled bicycle or motorcycle might have helped. But should he also have been outfitted with a space suit so that he might not have succumbed to the vacuum of space before he reached his destination? Creating an increasingly detailed technical narrative about how this person got where he is depicted complicates the simple message of the absurdist drawing and opens up more and more questions. For example, from where was the delivery boy’s bicycle launched? Was it an Italian restaurant on the Upper West Side or a Chinese restaurant on the Lower East Side? Did he have to coordinate his launch time with the orbital position of the space station? Was the bike fitted with wings or fins that were jettisoned? How did the delivery boy hold onto the handlebars (and the plastic bags) while experiencing the g-forces needed to propel him to the escape velocity needed to reach orbit? Did the food in the containers or wrappers in the plastic bags remain warm throughout the journey? If so, how? This is the way an engineer thinks.
Engineers are always full of questions relating to details of fact and narrative. The challenge to an engineer, whether working on the design of a tall building or of a mission to outer space, is the same: Think of everything that could go wrong with the design and make sure it does not happen. Such a constraint may seem to suppress creativity. That it does not is exhibited by such marvels of engineering as the Burj Khalifa in Dubai—at 2,717 feet high currently the tallest building in the world—and the Apollo 11 mission that landed astronauts on the Moon and brought them safely back to Earth. A look at other modern skyscrapers, such as Taipei 101 in Taiwan or the Petronas Twin Towers in Malaysia, reveals a rich diversity of ways to satisfy the engineer’s rigorous concerns in a gratifying—indeed, artistic—way. In the issue carrying the delivery boy drawing, a page identifying contributors notes that Bruce McCall “is working on a book on creativity.” He certainly is creative, as a review of his magazine covers attests, and his ability to work with artistic license rather than technical correctness gives him wide berth for expression. Engineers typically eschew metaphor and stay on the right side of the laws of nature. But they are creative, too. Perhaps the most memorable scene in the movie E. T. the Extra-Terrestrial is the one in which the children on bicycles are spiriting the alien visitor away to safety. To escape the grasp of the adult authorities, the children pedal their bikes off the ground and over the police cars, and then ride through the sky in front of the moon. Like McCall’s delivery boy, they take off without rocket assistance and ride on no road, but in the context of the film’s posited powers of E. T., the scene works. Even engineers can put aside the gravity of earthly constraint to enjoy a happy ending. To the best of my knowledge no one, engineer or not, has called for Steven Spielberg to reshoot the scene.
Bibliography
Kaneko, Mina, and Francoise Mouly. 2014. Cover story: Bruce McCall’s “Free Delivery.” May 26. http://www.newyorker.com/online/blogs/culture/2014/05/cover-story-bruce-mccalls-free-delivery.html
McCall, Bruce. 2008. Transcript of TED talk. http://www.ted.com/talks/bruce_mccall_s_faux_nostalgia#t-77451
McCall, Bruce. 2014. Free Delivery. The New Yorker, June 2, front cover.
Perspective
The Many Guises of Aromaticity Is hype debasing a core chemical concept? Roald Hoffmann
Aromaticity is one of the core concepts of organic chemistry. The idea began as a descriptor of the special stability of the ring of six carbons, benzene, C6H6. And importantly, of the ability of that ring to be transformed by chemically substituting the hydrogens attached to it. The reactions involved were relatively easy, the products often stable and useful. Aspirin, TNT, mescalin, vanillin, and serotonin all contain an aromatic, benzenoid core. As chemistry evolved, it was natural to seek other gauges of stability and the capacity to change while retaining the C6 core. Clothed in various measures, wonderfully expanding in scope, mental catnip for my fellow theoreticians, aromaticity, the concept, flourished. I will sketch its flowering, from the seminal paper of German chemist August Kekulé that proposed the correct structure of benzene 150 years ago, to those new measures. Today, an inflation of hype threatens this beautiful concept. Molecules constructed in silico are extolled as possessing surfeits of aromaticity—“doubly aromatic” is a favorite descriptor. Yet the molecules so dubbed have precious little chance of being made in bulk in the laboratory. One can smile at the hype, a gas of sorts, were it not for its volume. A century and a half after the remarkable suggestion of the cyclic structure of benzene, the conceptual value of aromaticity—so useful, so chemical—is in a way dissolving in that hype. Or so it seems to me.
Roald Hoffmann is Frank H. T. Rhodes Professor of Humane Letters, Emeritus, at Cornell University. Address: Baker Laboratory, Cornell University, Ithaca, NY 14853-1301. Email: [email protected]
Kekulé
It is the sesquicentennial of Kekulé’s proposal of the cyclic structure of
benzene. The C6H6 molecule was first isolated from compressed illuminating gas by Michael Faraday in 1825. Many organics, not just benzene, have distinct olfactory characteristics, ranging from pleasant to downright foul-smelling. But already by Kekulé’s time, the adjective “aromatic” was associated with the group of molecules related to benzene. The structure of benzene remained a stumbling block to mid-19th century chemists. The road to the structure we now know was not simple—nothing in the real world is; Kekulé himself went through four distinct graphic representations in the seven seminal years of formulating benzene’s structure. Let me show you two of these. Kekulé’s first formulas didn’t look at all like the representation we use today. From his first 1865 paper comes a “sausage” (Wurst) formula, shown at the top of the bottom figure on page 19. The arrows in the image, his very first published representation, are Kekulé’s way of communicating, within the constraints of a linear representation, that the left end of the molecule is connected to the right. In the middle of the same figure is a roughly contemporary image of a physical model of the structure, possibly made by Kekulé himself in Ghent, Belgium. The ovals in the top figure and the four black balls welded together in the middle represent the four valences (bonding capabilities) of the carbon atoms. The solid model develops a more direct representation of that connection. Within a few years of this proposal, the structures were rewritten in a graphic form close to the contemporary one, as in the “Kekulé structures” shown at the bottom of the figure on page 19, here taken from an 1872 paper by him. The two structures just differ in the placement of the double bonds in the molecule.
Kekulé’s two structures, which we now would call cyclohexatrienes, posed an immediate problem to him and to other chemists—how to reconcile the equivalence, at every level (physical, chemical) of all six carbons of benzene with the existence, on paper, of two cyclohexatrienes. Kekulé proposed a microscopically detailed (and erroneous) theory of bonding forces in atoms, involving their oscillations around their equilibrium positions, resulting in timed collisions with other atoms. His ad hoc hypothesis saved the day. As Yale University’s Jerome Berson wrote in his 2003 book, in a most perceptive analysis of Kekulé’s ideas: The history of organic chemistry shows that even though this theory was not really understood by most organic chemists of the 19th century, it was applied nearly everywhere. Chemists of the time quickly suppressed any remaining distaste, swallowed this awkward bolus, and pressed ahead. Their subsequent achievements under the aegis of the theory vindicated their action. The tremendous flowering of synthesis and the discovery of an abundance of new reactions and structures during that time all took place in an atmosphere of growing conviction that, for whatever reason, the C1–C2 and C1–C6 bonds of benzene were structurally equivalent, as Kekulé had said. Benzene derivatives were and are ubiquitous in chemistry. Aside from the selection of compounds I mentioned, several of the amino acids and all nucleic acids contain a benzenoid entity. Why so many benzenes? It’s not just the inherent stability of the six-membered ring, but I think two other matters of architecture and reactivity. First, the flat skeleton of ben-
zene allows us to “disperse” chemical functionality (through substituents) in a plane, radiating out of the ring. And second, while the molecule is relatively stable to decomposition, it is also moderately reactive. That reactivity, with acids, bases, and radicals, permits one to substitute a subset of the hydrogens by other groupings of atoms—CH3, Cl, OH, NH2, NO2. Function comes from the properties of these substituents—benzene is like a mug that allows one to detach and attach a number of handles to it, each with its own chemical capabilities. In 1890, long after the events, Kekulé describes his famous dream (here translated from German by Alan J. Rocke): I was sitting there, working on my textbook, but it was not going well; my mind was on other things. I turned my chair toward the fireplace and sank into half-sleep. Again the atoms fluttered before my eyes. This time smaller groups remained modestly in the background. My mind’s eye, sharpened by repeated visions of a similar kind, now distinguished larger forms in a variety of shapes. Long lines, often combined more densely; everything in motion, twisting and turning like snakes. But look, what was that? One of the snakes had seized its own tail, and the figure whirled mockingly before my eyes. I awoke in a flash, and this time, too, I spent the rest of the night working out the consequences of the hypothesis. Kekulé’s ouroboric vision did not hurt the molecule’s popularity.
German chemist August Kekulé traces his breakthrough insight that the structure of benzene was a ring to a dream of a serpent eating its tail. This notion laid the groundwork for the concept of aromaticity, the special stability of this carbon ring, which forms the base of a variety of organic molecules.
Stability
Aromatic compounds, meaning compounds containing benzene rings, became common. So were they particularly stable? Yes and no. Let’s take benzene itself. Here is one measure, an energetic one, of what is special about it. A common reaction is hydrogenation, the addition of an H2 to a C–C double bond, as in cyclohexene going to cyclohexane, shown at the top of page 20. The experimental heat of the reaction shown, at room temperature, is –118 kilojoules per mole. That’s a lot of heat emitted—it would heat a liter of water from 0 to 28 degrees Celsius. If benzene were cyclohexatriene, with its three double bonds, the heat of
the triple hydrogenation (cyclohexatriene to cyclohexane) should be thrice that of cyclohexene, or around –354 kilojoules per mole. Experimentally, the energy produced when three molecules of hydrogen are added to benzene is much less, –206 kilojoules per mole. To put it another way, benzene is more stable than a hypothetical cyclohexatriene by about 150 kilojoules per mole. That was actually the first simplistic estimate of the extra stabilization of benzene, called its resonance energy. With a more careful definition, the stabilization is seen to be even larger. The word “resonance” came from Linus Pauling, the premier American theoretical chemist and structural chemist of the mid-20th century. He used a mechanical metaphor of resonance—the seeming ambiguity of two Kekulé structures was
The progression of Kekulé’s understanding of aromatic benzene is shown at left, from top to bottom: In an 1865 paper, he proposed the “sausage” formula, with arrows indicating the left and right ends are connected, the ovals the bonding capabilities of the carbon atoms, and the dots hydrogens. Later, he built a physical model, with the black balls also representing these bonding capabilities and the white ones hydrogens. In 1872, he published the two structures at bottom. These would today be called cyclohexatrienes, and immediately posed a conundrum—how could the functional equivalence of all the carbon atoms of the ring be reconciled with the apparent existence of two forms?
transformed by Pauling into an extraordinary stabilization. A nomenclature not introduced by Pauling but roughly contemporaneous—the circle in a hexagon symbol, shown at left (hydrogens understood, in the typical way of organic chemistry)—took over. The circle represents the symmetric, stabilized disposition of the six highest energy electrons of the benzene molecule, the ones the Kekulé structures wrote as double bonds.
A common reaction is hydrogenation, the addition of an H2 to a C–C double bond, as in cyclohexene (left) going to cyclohexane (right); at room temperature the reaction releases 118 kilojoules of heat per mole (a heat of reaction of –118 kilojoules per mole).
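The arithmetic behind the resonance-energy estimate in the text can be laid out in three lines; these simply restate the numbers Hoffmann quotes:

    \[ \Delta H(\text{hypothetical cyclohexatriene} + 3\,\mathrm{H_2}) \approx 3 \times (-118) = -354 \text{ kJ/mol} \]
    \[ \Delta H(\text{benzene} + 3\,\mathrm{H_2}) = -206 \text{ kJ/mol} \]
    \[ \text{extra stabilization} \approx 354 - 206 = 148 \approx 150 \text{ kJ/mol} \]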
At the same time that Pauling’s ideas flourished, Erich Hückel in Germany came up with a molecular orbital (MO) theory of the stability of benzene. Each carbon has six electrons—two are in a 1s orbital (orbitals are quantum mechanical places for electrons to move, wave functions that are solutions of Schrödinger’s equation), too low in energy to bond. Three more electrons per carbon are in orbitals (2s and 2p) that form single bonds to the other carbons and hydrogens. There remains on each carbon one orbital perpendicular to the plane of the ring, a 2p orbital, and one electron. These form what is called the π-system of benzene. From the six atomic orbitals, Hückel constructed six molecular orbitals, combinations of atomic orbitals, shown at the top of page 21. Three of them were at low energy, and with two electrons per molecular orbital, they had room for precisely six electrons. The π electrons in the ring were delocalized, no longer associated with any carbon, but shared equally among them. And six was the magic number. Here, in the language of quantum mechanics, was the aromatic sextet, the special feature of benzene.
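For readers who want the numbers behind the sextet, the standard Hückel formula for a ring of N carbons (a textbook result, not spelled out in the article) gives the π molecular-orbital energies as

    \[ E_k = \alpha + 2\beta \cos\frac{2\pi k}{N}, \qquad k = 0, \pm 1, \pm 2, 3 \text{ for } N = 6, \]

so that benzene’s six levels come out as

    \[ \alpha + 2\beta, \quad \alpha + \beta \ (\text{twice}), \quad \alpha - \beta \ (\text{twice}), \quad \alpha - 2\beta . \]

Here α is the energy of an electron in an isolated carbon 2p orbital and β, a negative quantity, measures the coupling between neighboring orbitals; the first three levels are therefore the low-energy, bonding orbitals, and with two electrons apiece they hold exactly six, Hückel’s aromatic sextet.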
The Hückel model was initially neglected (the story of why that happened is well told by Berson), but got a second life in the 1950s. In the hands of physical organic chemists, who could see in it pointers not just to the aromatic sextet but also to stable π-electron systems with two and ten electrons, Hückel’s theory blossomed. I remember the excitement (it was when I was starting out in chemistry); a nexus between quantum mechanics and organic chemistry took shape. With time, the delocalization of electrons, part classical yet governed by quantum mechanical phase relationships that Hückel uncovered, became the distinguishing feature of aromaticity.
The First Inflation
Mid-20th century, a time in which I studied, in retrospect was also the time of the first inflation of the concept of aromaticity. Guided either by a simplistic idea of “the more resonance structures, the better,” or by seeing aromatic sextets of electrons in too many places, much sweat in organic synthesis was spent in chasing down molecular phantoms. No matter, some fascinating molecules were made, and we gained a better understanding of the factors governing molecular stability. All along there was a thermochemical corrective on our romance with benzene, and I don’t mean its carcinogenicity (which eventually imposed controls on the laboratory use of this ubiquitous solvent). As stable as benzene is, its heat of formation, that is, the heat of the reaction 6C (graphite) and 3H2 (a gas) to form benzene (a liquid) is +49 kilojoules per mole. The elements are more stable than benzene. And benzene is quite flammable.
New Criteria for Aromaticity
One hundred years ago, during World War I, chemical crystallography was born. Advancing slowly at first (the details of benzene’s structure did not emerge until the 1930s), the field exploded with the advent of computers—today from the diffraction of x-rays by crystals we know the metrics of over 700,000 organic compounds. You can bet that there are tens of thousands of benzene rings in molecules in this cornucopia. All have C–C distances close to 1.39 angstroms, and quite far from the extremes of a localized double and single bond alternating (1.34 and 1.48 angstroms, respectively) in a hypothetical cyclohexatriene, one of the two Kekulé structures. So came about another measure of aromaticity, bond equalization. There are many molecules part-way aromatic, and for those the disparity (or lack thereof) of bond lengths, expressed in a variety of ways, is a good measure of aromaticity. For instance, in the partially aromatic C4OH4 molecule of furan, shown at right, one has a sextet of electrons, but the bonds are, as shown, closer to being localized (double bonds shorter than single bond) than completely delocalized (all bonds approximately equal). Nuclear magnetic resonance (NMR) provides another, most useful gauge of aromaticity. A magnetic field is applied to a molecule; the molecule’s electrons respond by moving to counteract the applied field (see the figure on page 22). The field actually experienced by a nucleus in the molecule (say that of hydrogen) is the sum of the external magnetic field plus the induced one. That local field is different at every distinct hydrogen; this difference, a chemical shift, allows one to identify hydrogens in an organic molecule. This is what has made NMR the prime analytic tool of modern organic chemistry. And, as magnetic resonance imaging (MRI), a common if expensive medical diagnostic procedure. In benzene’s delocalized electrons, a ring current of some magnitude is induced, substantially bigger than in a “saturated” organic analogue, such as cyclohexane. As the figure on page 22 shows, the ring current (producing a net magnetic field opposite to the induced one) runs around the molecule in
The 2p-π atomic orbitals in benzene (top) are used to derive its six molecular orbitals (bottom); the three at low energy are occupied by the six π electrons of benzene.
such a way that at the periphery of the benzene molecule, which is just where the six hydrogens reside, the external magnetic field is actually augmented. The chemical shift of aromatic protons is identifiably different from normal protons. This became a hallmark of aromaticity, and not just in benzene. A related theoretical indicator, called the nucleus independent chemical shift (NICS), introduced by the University of Georgia’s Paul von Ragué Schleyer, has been cited as a measure of aromaticity (or lack thereof) in more than 4,000 papers. We had bond length variations from the 1930s, NMR chemical shifts from the 1960s. Both became ways to measure the extent of aromaticity.
Bench-Stable, Bottleable
Computers made the determination of the structure of molecules in crystals easy—what took half a year in 1960 takes less than an hour today. They also made computations of the stability of molecules facile. Whoa! What do you mean by stability? Usually what’s computed is stability with respect to decomposition to atoms. But that is pretty meaningless;
for instance, of the four homonuclear diatomic molecules (composed of identical atoms) that are most stable with respect to atomization, N2, C2, O2, and P2, two (C2 and P2) are not persistent. You will never see a bottle of them. Nor the tiniest crystal. They are reactive, very much so. In chemistry it’s the barriers to reaction that provide the opportunity to isolate a macroscopic amount of a compound. Ergo the neologism, “bench-stable.” “Bottleable” is another word for the idea. A lifetime of a day at room temperature allows a competent graduate student at the proverbial bench to do a crystal structure and take an NMR scan of a newly made compound. Or put it into a bottle and keep it there for a day, not worrying that it will turn into brown gunk. Of course, one can also observe molecules in noble gas matrixes at 10 kelvin, or flying down a molecular beam, a stream of relatively few molecules. And one can obtain proof of their existence from a variety of spectroscopic techniques. Such molecules are very real; a molecule is a molecule, no matter how long it lives. But if one allows such fleeting molecules to approach each other, or approach acids or bases (water is both), they react, going away in a jiffy.
Hype
Here is where hype comes in—not of advertisers, where we expect it, but of scientists. People calculate a new molecule, estimate its energy, find that it will not fall apart. To me the existence of such molecules, if attested to by spectroscopy, suffices. They are real. But there is a natural human tendency to want our molecular children to be exceptional. So, like the parents of the kids in Lake Wobegon or on City X’s West Side, the conceivers of such new molecules look for something special. Could it be that the molecule is “aromatic?” Aromaticity is good, it has been good for 150 years. Perhaps the molecule is more aromatic, or maybe it is endowed with aromaticity of a different kind? There it is! Suppose the σ orbitals of the molecule (the orbitals in the plane of a molecule) also complete a shell, a group of occupied orbitals. Then the molecule is σ-aromatic. That’s surely better than just plain old π-aromatic. But, very often, the molecules conceived in a computer have minimal kinetic persistence. They would never survive aggregation at ambient laboratory conditions, or an encounter with the chemical killers in the air—water, oxygen. Let me show two molecules for which such claims of extra aromaticity have been made. Both are illustrated in the figure at right: The top one is Al4^2–, a square planar molecule; the bottom one is PtZnH5^–. These molecules are beyond a doubt real, their structures established spectroscopically, and with reliable theory. What grates on me are claims of aromaticity, single or multiple, accompanying the fine experimental and theoretical work on these molecules. I would be willing to bet a good bottle of NY State Riesling (I will leave a sum for the eventuality in my will) that salts of these will not be made in milligram quantities in my lifetime or yours. Or take C6, known for decades. The molecule exists in cyclic (benzene denuded of hydrogens) and linear forms. C6 is observed in the interstellar medium and in the laboratory in a molecular
This schematic diagram of applied and induced currents in benzene in a strong magnetic field shows how nuclear magnetic resonance works. The applied field is indicated by the arrow on the left. In response to it, the π electrons of the benzene ring in blue and red orbitals move in a direction marked by two red circles to create an induced magnetic field (whose lines of force are represented by the blue lines) that opposes in direction the applied field. The hydrogen nuclei feel the sum of the applied field and the induced one. At the positions of the hydrogens in benzene, the two fields reinforce, and the hydrogen nuclei are said to be deshielded.
beam. Not one of the good spectroscopists working on these carbon “clusters” (there are other Cn species) has made a claim of multiple aromaticity for them. Yet there are theoretical papers claiming just such double aromaticity for C6. Carbon clusters, of which C6 is a small example, are patently reactive in a laboratory flask, moving on with a vengeance to graphite or, if oxygen is present, to CO2. Until n=60, when one reaches a really persistent molecule. The beautiful construct of aromaticity, sailing along for 150 years, enriched by new measures (bond equalization, ring currents) is vitiated by the idea being used as a marker of supposed singularity in a reaching for approbation. What bothers me is not the hype—I and most scientists have a finely tuned sensor for it. But I am pained by the damage done to this beautiful, eminently chemical idea that I can trace back a century and a half.
Will Aromaticity Survive?
Oh, it will. It is in the nature of humans to both create a great idea, a new way of seeing one piece of the chemical universe, a way that lets us see similarities and differences. And then, equally human, to weaken the exemplary construction by bringing under its roof ideas or molecules that do not belong. There are reasonable extensions of aromaticity—to three dimensions, to different topologies of orbital interactions, such as those in Möbius strip arrangements of orbitals. But to me the labeling of the molecules cited in the last section as aromatic (and of other hypothetical molecules in too many papers I have seen) appears to be less motivated by an intellectual desire to probe what aromatic means than by a reaching for distinction. Aromaticity will survive the current wave of cheapening. The concept will survive because in its strong form—in the shape of benzene and other smaller ring systems with delocalized bonds—it singles out a group of molecules whose kinetic persistence and thermochemical stability go hand in hand. That’s a correlation worth thinking about. Another reason the concept will survive comes from its inherently chemical and, therefore I would argue, changeable nature. Here is what Paul Schleyer, an organic and computational chemist who has contributed immensely to the field, writes:
Historically, aromaticity has been a time-dependent phenomenon. Aromatic implies various features, properties, or behaviors to chemists with different backgrounds. While “benzene-like” still suffices for some, the “cyclic delocalization of mobile electrons” description now seems paramount. Its general implication for energies and structures, both geometrical and electronic, as well as magnetic and other properties, necessarily results in an ever increasing widening of the 19th-century aromaticity concept.
And Henning Hopf, a lover of aromaticity in all its guises, writes:
150 years after Kekulé’s benzene dream, aromatic chemistry has reached a cultural richness and variety which the originator of the hexagonal benzene structure could not have imagined in his wildest fantasies.
Human beings need reasons for doing things. Aromaticity provides at least three motivations—first a search to better define the concept, hoping against hope that there is a unique measure of this elusive property. Second, one wants to explore all of its experimental manifestations. This has changed with time, as our tools have—we could not measure internuclear separations and chemical shifts in 1900. Third, and this is well said by Hopf, people have been inspired by aromaticity as a design principle to make ever more interesting molecules. The hype is seen through, the molecules, fleeting or persistent, remain real and beautiful.
Acknowledgments
I am grateful to Paul Schleyer for his critical comments and many suggestions. Even as in the end he would not have been happy with what I have written (sadly, he passed away as this article went to press). Thanks to Martin Rahm for discussions and providing me with some illustrations.
Bibliography
Balaban, A. T., P. v. R. Schleyer, and H. S. Rzepa. 2005. Crocker, not Armit and Robinson, begat the six aromatic electrons. Chemical Reviews 105:3436–3447.
Berson, J. 2003. Chemical Discovery and the Logicians’ Program. Weinheim: Wiley-VCH.
Boldyrev, A. I., and L.-S. Wang. 2005. All-metal aromaticity and antiaromaticity. Chemical Reviews 105:3716–3757.
Hopf, H. 2014. My favorite aromatic compounds—a tribute to Friedrich August Kekulé. Chemical Record 14:979–1000.
Rocke, A. J. 1985. Hypothesis and experiment in the early development of Kekulé’s benzene theory. Annals of Science 42:255–381.
Rocke, A. J. 2014. It began with a daydream: The 150th anniversary of the Kekulé benzene structure. Angewandte Chemie, International Edition 53:2–7. doi: 10.1002/anie.201408034.
Schleyer, P. v. R. 2001. Introduction: Aromaticity. Chemical Reviews 202:1115–1117.
Sightings
Fly-By Forestry Takes Off Remote laser imaging can measure the health and density of forests, allowing scientists to observe large swaths of vital ecosystems all at once. Catherine Clabby
Despite some scattered recent gains, the world's forests are in trouble. From 2000 to 2012, the planet lost a net total of 1.5 million square kilometers of forestland, according to a 2013 survey based on NASA satellite data. Much of the decline was due to deforestation in Brazil, Indonesia, and other tropical countries, but there have been many other setbacks as well. In the western United States, for instance, trees face an onslaught of wildfires, insect infestations, and drought. The assaults persist despite a growing awareness of the ecological value of forests, particularly their ability to absorb large amounts of carbon dioxide and sequester carbon. As they formulate ways to protect endangered woodlands and
rehabilitate ones already lost, scientists and governments need detailed information on the structures and vulnerabilities of forests around the world. Traditional ground-based surveys lack sufficient scope, so scientists are turning to another way to take the measure of the trees: light detection and ranging, or LiDAR, remote-imaging technology. Airplane-borne LiDAR scanners shoot 100,000 pulses of laser light per second to record the distance to the ground. From those data, researchers can measure the shape, type, and density of forest cover over tens of thousands of square kilometers. "That is the real power of LiDAR," says Van Kane, an ecologist at the University of Washington who uses the technique extensively. "We can build tremendously large databases."
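The arithmetic behind those laser pulses is simple ranging: each pulse's round-trip travel time gives a distance to whatever reflected it, and subtracting the canopy-top range from the ground range yields vegetation height. The short Python sketch below illustrates that calculation on invented return times; the numbers and the simple two-return model are illustrative assumptions, not data from Kane's surveys.

# Illustrative sketch (not survey data): convert LiDAR round-trip pulse times
# to ranges, then estimate canopy height as the difference between the
# ground (last) return and the canopy-top (first) return.

C = 299_792_458.0  # speed of light, meters per second

def pulse_range(round_trip_seconds):
    # Light travels out to the target and back, so divide by two.
    return C * round_trip_seconds / 2.0

# Hypothetical round-trip times (seconds) for one pulse footprint:
t_first_return = 6.53e-6  # echo from the top of the canopy
t_last_return = 6.67e-6   # echo from the ground beneath it

range_canopy = pulse_range(t_first_return)
range_ground = pulse_range(t_last_return)
print(f"canopy top at {range_canopy:.1f} m, ground at {range_ground:.1f} m")
print(f"estimated canopy height: {range_ground - range_canopy:.1f} m")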
In one notable recent study, Kane and his colleagues used LiDAR to observe how fires of various intensities affect the forests in Yosemite National Park. Some fires are known to help keep forests healthy by creating gaps in their canopies that enable new growth. Kane's LiDAR-based studies show more specifically that low-severity fires produce favorable density changes in areas dominated by red fir forests, but fires of moderate severity are needed to improve areas dominated

Airborne LiDAR can measure detailed structure from the tops of trees to the ground only in open forests such as those in the Sierra Nevada mountains. (Image courtesy of Van Kane of the University of Washington and Robert McGaughey of the US Forest Service.)
LiDAR scans merged with burn estimates from satellite data pinpoint changes to forest structure from fires of different intensities at Yosemite National Park (left). The image above is from still-developing research using LiDAR to try to identify individual trees from canopy scans. (Image courtesy of Van Kane of the University of Washington and Robert McGaughey of the US Forest Service.)
by ponderosa and white fir–sugar pine trees. Kane has also combined airborne LiDAR with satellite vegetation data to study how natural fires alter tree density of Yosemite forests. They do so in more irregular ways than was previously known, creating variable mosaics of tree clumps. Those studies will aid forest managers designing controlled burns or mechanical thinning to mimic natural fire's positive effects.

Now the drive is on to make LiDAR even more useful. For instance, airborne LiDAR discerns only modest amounts of detail below the outer canopy in dense forests, so researchers are trying to fill the gap by adding measurements made with ground-based LiDAR. David Kelbe, a doctoral student in imaging science at Rochester Institute of Technology, recently adapted an industrial LiDAR device to create a portable scanner that can be carried into the woods. There, it can be used to acquire diameter data along the full length of tree trunks with enough detail to model three-dimensional trees. Such data could be useful for commercial forest inventories and for habitat studies, and also for calibrating across the different types of LiDAR studies. "We could take advantage of the fine-scale resolution by linking it to the large
geographic coverage by an airborne or space-borne platform," Kelbe says.

Carnegie Airborne Observatory earth scientist Greg Asner merges up-close and remote observations to get as near as possible to ground truth in tropical forests. He creates carbon maps, geographically accurate models depicting the density of vegetation in the forests; the more abundant the vegetation, the more carbon is sequestered in its roots, stems, and leaves. To build these maps, Asner combines airborne LiDAR data with non-LiDAR research plot observations, rainfall records, and space-based measurements. By developing algorithms to extract high-resolution vegetation maps from archival data taken by the Landsat satellite, he quickly acquired a vast—and free—satellite data set. Following this approach, Asner has mapped large swaths of the Amazon River basin, Peru, Panama, and Hawaii to pinpoint where carbon sinks most urgently need protecting. He feels the urgency of his work: It can take decades to rebuild a damaged forest into a carbon sink, but almost no time at all to cut or burn a forest down. "So we integrate satellite data with the airborne LiDAR in order to scale up," Asner says. "This helps to greatly reduce cost and improve our speed."
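Asner's maps rest on converting a LiDAR-derived structural metric, such as mean canopy height, into aboveground carbon density through a calibrated relationship. The Python sketch below shows only the general shape of such a conversion; the power-law coefficients and height values are hypothetical placeholders invented for illustration, not the Carnegie Airborne Observatory's actual calibration.

import numpy as np

# Hypothetical power-law allometry: aboveground carbon density (Mg C per
# hectare) as a function of mean canopy height (m). The coefficients a and b
# are placeholders for illustration, not fitted values from Asner's work.
a, b = 1.2, 1.6

def carbon_density(mean_canopy_height_m):
    return a * mean_canopy_height_m ** b

heights = np.array([5.0, 18.0, 32.0])  # invented mean canopy heights for three map pixels
print(np.round(carbon_density(heights), 1))  # illustrative Mg C per hectare values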
Terrestrial LiDAR units collect more data on tree structure near the ground than airborne instruments can. These scans were made at Harvard Forest in Massachusetts. (Image courtesy of David Kelbe at Rochester Institute of Technology.)
On the million-hectare Island of Hawaii, this Carnegie Airborne Observatory map shows variation in the geographic distribution of carbon-rich vegetation. Red designates the most dense plant biomass per hectare; dark blue indicates no vegetation is present. (Image courtesy of Greg Asner.)
Ethics
What Everyone Should Know about Statistical Correlation A common analytical error hinders biomedical research and misleads the public. Vladica M. Veličković
Vladica M. Veličković is a Doctor of Medicine, a PhD student in public health, and a full-time teaching assistant at the Public Health Department, Faculty of Medicine, University of Niš, Serbia. His research interests are in the use of computational and mathematical models for public health insight. E-mail: [email protected]

In 2012, the New England Journal of Medicine published a paper claiming that chocolate consumption could enhance cognitive function. The basis for this conclusion was that the number of Nobel Prize laureates in each country was strongly correlated with the per capita consumption of chocolate in that country. When I read this paper I was surprised that it made it through peer review, because it was clear to me that the authors had committed two common mistakes I see in the biomedical literature when researchers perform a correlation analysis.

Correlation describes the strength of the linear relationship between two observed phenomena (to keep matters simple, I focus on the most commonly used linear relationship, or Pearson's correlation, here). For example, an increase in the value of one variable, such as chocolate consumption, may be followed by an increase in the value of the other one, such as Nobel laureates. Or the correlation can be negative: The increase in the value of one variable may be followed by a decrease in the value of the other. Because it is possible to correlate two variables whose values cannot be expressed in the same units—for example, per capita income and cholera incidence—their relationship is measured by calculating a unitless number,
the correlation coefficient. The correlation coefficient ranges in value from –1 to +1. The closer the magnitude is to 1, the stronger the relationship. The stark simplicity of a correlation coefficient hides the considerable complexity in interpreting its meaning.

One error in the New England Journal of Medicine paper is that the authors fell into an ecological fallacy, in which a conclusion about individuals is reached based on group-level data. In this case, the authors calculated the correlation coefficient at the aggregate level (the country), but then erroneously used that value to reach a conclusion about the individual level (eating chocolate enhances cognitive function). Accurate data at the individual level were completely unknown: No one had collected data on how much chocolate the Nobel laureates consumed, or even if they consumed any at all.

I was not the only one to notice this error. Many other scientists wrote about this case of erroneous analysis. Chemist Ashutosh Jogalekar wrote a thorough critique on his Scientific American blog The Curious Wavefunction, and Beatrice A. Golomb of the University of California, San Diego, even tested the hypothesis with a team of coauthors, finding no link. Despite the scientific community's criticism of this paper, many news agencies reported on the article's results. The paper was never retracted, and to date has been cited 23 times. Even when erroneous papers are retracted, news reports about them remain on the Internet and can continue to spread misinformation. If these faulty conclusions reflecting statistical
misconceptions can appear even in the New England Journal of Medicine, I wondered, how often are they appearing in the biomedical literature generally?

The example of chocolate consumption and Nobel Prize winners brings me to another, even more common misinterpretation of correlation analysis: the idea that correlation implies causality. Calculating a correlation coefficient does not explain the nature of a quantitative agreement; it only assesses the intensity of that agreement. The two factors may show a relationship not because they are influenced by each other but because they are both influenced by the same hidden factor—in this case, perhaps a country's affluence affects access to chocolate and the availability of higher education. Correlation can certainly point to a possible existence of causality, but it is not sufficient to prove it.

An eminent statistician, George E. P. Box, wrote in his book Empirical Model-Building and Response Surfaces: "Essentially, all [statistical] models are wrong, but some are useful." All statistical models are a description of a real-world phenomenon using mathematical concepts; as such, they are just a simplification of reality. If statistical analyses are carefully designed, in accordance with current good practice guidelines and a thorough understanding of the limitations of the methods used, they can be very useful. But if models are not designed in accordance with the previous two principles, they can be not only inaccurate and completely useless but also potentially dangerous—misleading medical practitioners and the public.
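To see how a lurking common cause can manufacture a strong aggregate-level correlation, consider a toy simulation in Python: a single hidden "affluence" factor drives two otherwise unconnected country-level quantities, and Pearson's r between them still comes out large. The data and variable names below are synthetic inventions for illustration, not the chocolate and Nobel Prize figures themselves.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic aggregate data: a hidden factor drives two unrelated measures.
n_countries = 40
affluence = rng.normal(size=n_countries)                    # hidden common cause
measure_a = 2.0 * affluence + rng.normal(size=n_countries)  # e.g., a consumption proxy
measure_b = 1.5 * affluence + rng.normal(size=n_countries)  # e.g., a prize-count proxy

r = np.corrcoef(measure_a, measure_b)[0, 1]
print(f"Pearson correlation: r = {r:.2f}")
# r typically lands around 0.7 here even though measure_a and measure_b are
# linked only through the hidden factor, not through each other.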
I often use and design mathematical models to gain insight into public health problems, especially in health technology assessment. For this purpose I use data from already published studies. Uncritical use of published data for designing these models would lead to inaccurate, completely useless—or worse, unsafe—conclusions about public health.

Getting to Know the Data

In well-designed experiments, correlation can confirm the existence of causality. Before causal inferences can be derived from nonexperimental data, however, careful statistical modeling must be used. For example, a randomized controlled trial published by epidemiologist Stephen Hulley of the University of California, San Francisco, and colleagues determined that hormone replacement therapy caused increased risk of coronary heart disease, even though previously published nonexperimental studies concluded that the therapy lowered its risk. The well-designed experiment showed that the lower-than-average incidence of coronary heart disease in the nonexperimental studies was caused by the benefits associated with a higher
average socioeconomic status of those using the hormone treatment, not by the therapy itself. Re-analyses of nonexperimental studies, including the effect of socioeconomic status on outcome, showed the same findings as the randomized controlled trial. But the damage was done: The US Food and Drug Administration Advisory Committee had already approved a label change for hormone replacement therapy that permitted prevention of heart disease to be included as an indication, almost a decade before the experiment mentioned above.

Even though scientists are well aware of the mantra "correlation does not equal causation," studies conflating correlation and causation are all too common in leading journals. A widely discussed 1999 article in Nature found a strong association between myopia and night-time ambient light exposure during sleep in children under two years of age. However, another study published a year later—also in Nature—refuted these findings and reported that the cause of child myopia is genetic, not environmental. This new study found a strong link between parental myopia and the development of child myopia, noting that
myopic parents were also incidentally more likely to leave a light on in their children's bedroom. In this example, the authors came to a conclusion based on a spurious correlation, without checking for other likely explanations. But as shown in the figure below, completely, laughably unrelated phenomena can be correlated.

Along with the mistaken idea that correlation implies causation, I also see examples of a third, opposite type of correlation error: the belief that a correlation of zero implies independence. If two variables are independent of one another—for example, the number of calories I ate for breakfast over the past month and the temperature of the Moon's surface over the same period—then I would expect the linear correlation coefficient between them to be zero. The reverse is not always the case, however. A linear correlation coefficient of zero does not necessarily mean that the two variables are independent. Although this principle can be applied in many cases, there are still nonmonotonic relationships (think of a line graph that goes up and down) in which a correlation coefficient of zero does not imply independence. To better envision this abstract
[Chart: US per capita consumption of mozzarella cheese (pounds) plotted against US civil engineering doctorates awarded, 2000–2009; correlation: 96%. Sources: USDA and National Science Foundation; tylervigen.com]
All sorts of unrelated phenomena can be correlated, including the per capita consumption of mozzarella cheese and the number of civil engineering doctorates awarded in the United States. Misinterpretations of spurious correlations make it past peer review all too often.
concept, imagine flipping a fair coin to determine the amount of a bet, using the following rule: When heads is flipped first and then tails, you lose $10; if tails comes up first and then heads, you win $20. If we define X as the amount of the bet and Y as the net winning, X and Y may have zero correlation, but they will not be independent—indeed, if you know the value of X, then you know the value of Y. Nevertheless, the relationship between the two variables may be nonlinear, and thus not detected by a linear correlation test.

Ideally, a scientist would plot the data first to make sure it is monotonic (steadily increasing or decreasing), but judging from the examples I see in the biomedical literature, some people are cutting corners. A U-shaped relationship between two variables may have a linear correlation coefficient of zero, but in that case it does not imply that the variables are independent. In 1973, Frank Anscombe, a statistician from England, developed idealized data sets to graphically demonstrate this misconception. Called Anscombe's quartet, this representation shows four data sets that have very similar statistical properties, each with a correlation coefficient of 0.816.
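A minimal numerical illustration of the same pitfall, under an assumed U-shaped relationship: in the Python sketch below, y is completely determined by x, yet the linear correlation coefficient is essentially zero. The specific values are invented; any symmetric, nonmonotonic relationship behaves the same way.

import numpy as np

# y is a deterministic function of x, so the two variables are fully
# dependent, but the U shape makes the *linear* correlation essentially zero.
x = np.linspace(-5, 5, 101)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")  # prints a value indistinguishable from zero
# Plotting x against y immediately reveals the parabola that this single
# linear coefficient misses.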
Anscombe's quartet is a set of four plots that show data resulting in strong correlation coefficients, in this case of 0.816. Although that statistic appears to indicate a strong linear relationship, such a conclusion would only be appropriate for the top left graph. The other three violate assumptions of the statistical analysis, emphasizing the importance of plotting data first to choose a suitable analysis.
At first blush, the variables in each case appear to be strongly correlated. However, it is enough just to observe the plots of these four data sets to realize that such a conclusion is wrong (see the accompanying figure). Only the first graph clearly shows a linear relationship where the interpretation of a very strong correlation would be appropriate. The second and the fourth graphs show that the relationship between the two variables is not linear, and so the correlation coefficient of 0.816 would not be relevant. The third graph depicts an almost perfect relationship in which the linear correlation coefficient value should be almost 1, but a single outlier decreases the linear correlation coefficient value to 0.816.

Such misconceptions can have major impacts on human health and policy. When testing the safety of a new substance, toxicologists often assume that high-dose tests will reveal low-dose effects more quickly and with less ambiguity than long-period, low-dose testing. But Anderson Andrade of the Charité University Medical School and his colleagues showed otherwise. They tested the effect of a plastic ingredient and endocrine disruptor called DEHP (di-(2-ethylhexyl)-phthalate) on rats at two widely different levels of exposure; in the experiment, the researchers monitored the activity of a key enzyme called aromatase, which induces masculinization in the brain. They showed that lower doses of DEHP suppress aromatase, but higher doses actually increase the enzyme's activity. In Andrade's study, this dose-response curve follows a nonmonotonic pattern, and the usual high-dose tests would not predict these low-dose effects. In 2010, the US Consumer Product Safety Commission announced that products containing DEHP may be considered toxic and hazardous. Studies such as this one have led to the questioning of basic assumptions used to design toxicological tests of hormonally active compounds, and this example again confirms that sloppy analysis, or poor and superficial interpretation of data, certainly is not a benign phenomenon.

Avoiding Errors

All three misinterpretations of correlation can be avoided. Epidemiologist and statistician Austin Bradford Hill suggested in 1965 certain criteria that must be met to justify concluding causal
associations. Those criteria are still valid, but newer methods for drawing causal inference from observational data have also been developed. Others are still in development—for example, Judea Pearl and James Robins independently introduced a new framework for drawing causal inference from nonexperimental studies. Robins figured out a statistical solution that can convert nonexperimental data into data like those resulting from a randomized controlled trial.

To avoid an ecological inference fallacy, Hill suggests that researchers who lack data at the individual level should perform careful multilevel modeling. This kind of fallacy is often made in epidemiological studies when researchers only have access to aggregate data. In his 1997 book A Solution to the Ecological Inference Problem, Gary King of Harvard University describes the statistical difficulties that lead to such errors. As King explains, data used for ecological inferences tend to have massive levels of heteroskedasticity, meaning that the variability within different parts of a data set fluctuates widely across the range of values. Aggregate data are often easier to obtain than data on individuals and may offer valuable clues about individual behavior when analyzed correctly, but that requires individual-level data. Then, modeling at the individual level must be performed in an attempt to determine the connection between individual and aggregate levels. Only then is it possible to conclude whether the correlation at the aggregate level applies to the individual level. Ecologic data alone do not allow one to determine whether ecologic bias is likely to be present for this type of data set; the only solution is to supplement the ecologic data with individual-level data. This type of modeling usually involves mixed or multilevel statistical models, which allow for individuals to be nested into aggregates.

To avoid assuming two variables are independent because their correlation equals zero, the data must be plotted to make sure it is monotonic. If not, one or both variables can be transformed to make them so. In a transformation, all values of a variable are recalculated using the same equation, so that the relationship between the variables is maintained but their distribution is changed. Different
types of transformations are used for different distributions; for example, the logarithmic transformation compresses the spacing between large values and stretches out the spacing between small values, which is appropriate when groups of values with larger means also have larger variance. Without access to the original data, it is impossible to know whether this error has been committed.

Correlation errors are as old as statistics itself, but as the number of published papers and new journals continues to increase, errors multiply as well. Although it is not realistic to expect all researchers to have an in-depth knowledge of statistical methods, they must continuously monitor and extend their basic methodological knowledge. Ignorance or uncritical assessment of the adequacy and limitations of the statistical methods used is often the source of errors in academic papers. Involvement of biostatisticians and mathematicians in a research team is no longer an advantage
but a necessity. Some universities offer the option for researchers to check their analysis with their statistics department before sending the article to review with a publication. Although this solution could work for some researchers, it provides little incentive for the researcher to take this extra time. The process of scientific research requires adequate knowledge of biostatistics, a constantly changing field. To that end, biostatisticians should be involved in the research from the very beginning, not after the measurement, observations, or experiments are completed. On the other hand, basic knowledge of biostatistics is essential in the critical appraisal of published scientific papers. A critical approach must exist regardless of the journal in which the paper is published. A more careful use of statistics in biology can also help set more rigorous standards for other fields.
To avoid these problems, scientists must clearly show that they understand the assumptions behind a statistical analysis and explain in their methods what they have done to make sure their data set meets those assumptions. A paper should not make it through review if these best practices are not followed. To make it possible for reviewers to test and replicate analyses, the following three principles must become mandatory for all authors intending to publish results: publishing data sets as supplementary information alongside articles, giving reviewers full access to the software code used for the analysis, and registering the study in a publicly available database online with clearly stated study objectives before the beginning of research, with mandatory submission of summary results to avoid publication bias toward positive results. These steps could speed up the process of detecting errors even when reviewers miss them, provide increased transparency to bolster confidence in science, and, most important, avoid damage to public health caused by unintentional errors.

Bibliography
Aldrich, J. 1995. Correlations genuine and spurious in Pearson and Yule. Statistical Science 10:364–376.
Andrade, A. J. M., S. W. Grande, C. E. Talsness, K. Grote, and I. Chahoud. 2006. A dose-response study following in utero and lactational exposure to di-(2-ethylhexyl)-phthalate (DEHP): Non-monotonic dose-response and low dose effects on rat brain aromatase activity. Toxicology 227:185–192.
Anscombe, F. J. 1973. Graphs in statistical analysis. American Statistician 27:17–21.
David, H. A. 2009. A historical note on zero correlation and independence. American Statistician 63:185–186.
Hill, A. B. 1965. The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine 58:295–300.
King, G. 1997. A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data. Princeton, NJ: Princeton University Press.
Lemmens, P. 2010. U-shaped curve. In N. Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: SAGE Publications. pp. 1587–1589. doi: 10.4135/9781412961288.n485.
Pearl, J. 2009. Causal inference in statistics: An overview. Statistics Surveys 3:96–146.
Wakefield, J. 2009. Multi-level modelling, the ecologic fallacy, and hybrid study designs. International Journal of Epidemiology 38:330–336. doi: 10.1093/ije/dyp179.
Zadnik, K., et al. 2000. Myopia and ambient night-time lighting. Nature 404:143–144.
Technologue
Each Blade a Single Crystal Clever casting techniques produce jet engines that can withstand 2,000-degree temperatures, allowing unprecedented efficiency. Lee S. Langston
Lee S. Langston is a professor emeritus of mechanical engineering at the University of Connecticut. He received a PhD in 1964 from Stanford University. He was a research engineer with Pratt & Whitney Aircraft working on fuel cells, heat pipes, and jet engines from 1964 to 1977. For the past 10 years he has written an annual review of the gas turbine industry for Mechanical Engineering magazine. Email: [email protected]

The very first flight powered by a jet engine took place in Germany on August 27, 1939. Now most of the 19,400 airplanes in the global air-transportation fleet are jets, with about 5 million passengers boarding them every day. On heavily traveled North Atlantic routes between North America and Europe, there are about 800 flights daily; it is possible for a passenger to reach almost any part of the planet within a day. Yet the jet engine remains largely unsung as both a masterpiece of energy conversion and a means of modern transit.

Since it was invented, the aviation version of the gas turbine (a common workhorse for the generation of electricity) has been continuously upgraded by legions of engineers. Following the laws of thermodynamics, one of the most fruitful paths toward better performance has been finding ways to increase thermal efficiency—the fraction of the fuel's energy that becomes useful output rather than waste heat—by raising the temperature at which the jet engine operates. Creating turbine parts that can survive extreme heat has been a major engineering challenge. Meeting it has required fundamentally rethinking the material structure of the turbine blades, making metals do things that they do not normally do in nature. The result has been a largely invisible revolution,
but one that is responsible for much of the ongoing success of the jet age.

Superalloys Beat the Heat

All turbines operate on similar principles: A gas or other fluid turns a rotor, which can do useful work. In a jet engine, air is taken in and compressed, then fuel is added and combusted to heat the air, which then turns the rotor blades of a turbine. The hot exhaust is then expelled through a nozzle to create thrust. (See "The Adaptable Gas Turbine," Technologue, July–August 2013.)

Gas turbine thermal efficiency increases with greater temperatures of gas flow exiting the combustor and entering the turbine. In modern, high-performance jet engines, the temperature of this gas can exceed 1,650 degrees Celsius (nonaviation gas turbines operate at 1,500 degrees or lower, whereas military jet engines can reach 2,000 degrees, approaching the boiling point of silver).

Since the 1950s, in high-temperature regions of the turbine, special blades and vanes have been made from a combination of metals based on high-melting-point nickel. This material is called a "superalloy" because it retains strength and resists oxidation at extreme temperatures. The nickel in this superalloy has a crystal structure called a face-centered cubic, meaning it's a cube with an atom at each corner and one at the center of each side. Other metallic elements are alloyed with nickel to produce a microstructure with two variant types, or phases, of crystals, one of which contains different elements at specific locations in the cubic crystal. When this phase is equally distributed in the larger nickel alloy, it helps stabilize the microstructure at elevated temperatures, resulting in high strength and corrosion resistance.
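One way to make the efficiency-temperature link concrete is with the most basic thermodynamic bound. The Python sketch below compares the Carnot ceiling on thermal efficiency at several turbine inlet temperatures taken from the figures above; it is only an idealized upper limit under an assumed ambient temperature, not a model of any real engine cycle.

# Idealized sketch: the Carnot bound 1 - T_cold/T_hot (in kelvin) is only a
# thermodynamic ceiling, not a real jet engine's efficiency, but it shows why
# pushing turbine inlet temperature upward raises the attainable efficiency.

T_AMBIENT_K = 288.0  # assumed intake air at about 15 degrees Celsius

def carnot_limit(turbine_inlet_celsius):
    t_hot_k = turbine_inlet_celsius + 273.15
    return 1.0 - T_AMBIENT_K / t_hot_k

for t_inlet in (1250, 1650, 2000):  # degrees Celsius, spanning the ranges cited
    print(f"{t_inlet} C inlet -> Carnot ceiling {carnot_limit(t_inlet):.0%}")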
Such superalloys, when they are cast using conventional methods in a vacuum furnace to prevent oxidation, soften and melt at temperatures between 1,250 and 1,400 degrees. This temperature limit means blades and vanes closest to the engine combustor may be operating in gas path temperatures far exceeding their melting point, and thus must be cooled to typically eight- to nine-tenths of the melting temperature to maintain integrity. To maintain these temperatures, turbine airfoils subjected to the hottest gas flows must be cast with intricate internal passages and surface hole patterns needed to channel and direct cooling air (bled from the compressor) within and over their exterior surfaces. After casting, the working surface can be sprayed with ceramic thermal barrier coatings to increase life and act as a thermal insulator (allowing inlet temperatures a few hundred degrees higher).

To create blades that can endure these extreme conditions, engineers began digging deeper into the structure of the blades themselves starting in the 1960s. Conventionally cast turbine airfoils are polycrystalline, consisting of a three-dimensional mosaic of small metallic crystals, or grains, formed during solidification in the casting mold. Each grain has a different orientation of its crystal lattice from its neighbors'. The interfaces between these crystals are most often not aligned along the crystals' axes, resulting in what are called grain boundaries. Untoward events happen at grain boundaries, such as increased chemical activity, slippage under stress loading, and the formation of voids. Among other problems, these conditions can lead to creep, an insidious life limiter: the tendency of blade material to deform at a temperature-dependent rate under
stresses well below the yield strength of the material. Corrosion and cracks also start at grain boundaries. Thus grain boundaries greatly shorten turbine vane and blade life, and require lowered turbine temperatures with a concurrent decrease in engine performance.

One can try to gain sufficient understanding of grain boundary phenomena so as to control them. But in the early 1960s, researchers at jet engine manufacturer Pratt & Whitney Aircraft (now called simply Pratt & Whitney, and owned by United Technologies Corporation) set out to deal with the problem by eliminating grain boundaries from turbine airfoils altogether. Its researchers invented techniques to cast single-crystal turbine blades and vanes, and designed alloys to be used exclusively in single-crystal form.

All Going the Same Direction

As part of that effort, mechanical engineer Maurice (Bud) Shank left the faculty of the Massachusetts Institute of Technology to form the Advanced Materials Research and Development Laboratory (AMRDL) in North Haven (then later in Middletown), Connecticut, for Pratt & Whitney. Over its subsequent 10-year life, AMRDL pioneered single-crystal superalloy technology. AMRDL was an excellent example of industry using fundamental and applied research to create and bring to market a superior product within a decade. At its peak the staff numbered more than 200 scientists, engineers, and technicians, conducting research and development on all aspects of single-crystal technology, including casting, alloy development, coatings, joining, and repair.

I developed a picture of AMRDL's early days from discussions with Maury Gell and Tony Giamei, both retired Pratt & Whitney researchers and managers. As they tell it, one of Shank's first acts was to hire Frank VerSnyder from jet engine manufacturer General Electric. VerSnyder had developed a concept that was a step toward single crystals, because it eliminated grain boundaries in blades in what's called the spanwise direction, from root to tip, during casting. (General Electric did not realize the potential of VerSnyder's concept, so had been reluctant to exploit or patent it.) VerSnyder's first invention and patent for Pratt & Whitney, developed in 1966, was a turbine blade that contained only columnar grains, which form along the length of the blades.
These turbine blades have had their surfaces etched with acid to reveal their inner structure. The pair at left are single crystals, whereas the pair in the middle are directionally solidified, with all the crystal boundaries going in one direction. The pair at right are made up of small crystal grains, with numerous boundaries. Blades of single crystal have significantly increased life spans under extreme temperature conditions. (Image courtesy of Alcoa Howmet.)
He accomplished this formation with a process called directional solidification, which is carried out in a vacuum chamber furnace, and involves pouring molten superalloy metal into a vertically mounted ceramic mold heated to metal melt temperatures, and filling it from root to tip. The bottom of the mold is formed by a water-cooled copper chill plate, with a knurled surface exposed to the molten metal. The chilled knurls cause crystals to form from the liquid superalloy, and the solid interface advances. A temperature-controlled enclosure surrounds the mold, and maintains a temperature distribution on the outside surfaces of the mold so that the latent heat of solidification is removed by conducting it through the solidified superalloy to the chill plate. As the solidification front advances from root to tip, the mold is slowly lowered out of the temperature-controlled enclosure. After molding, these blades are then cleaned and machined to be mounted in an engine.

The final result is a turbine airfoil composed of columnar crystals or grains running in a spanwise direction. In the case of a rotating turbine blade, where spanwise centrifugal forces along the blade are characterized by accelerations on the order of 20,000 times the force of gravity, the columnar grains are thus now aligned along the major stress axis. Their alignment strengthens the blade
and effectively eliminates destructive crack initiation between grains in directions normal to blade span. In gas turbine operation, directionally solidified turbine blades have much improved ductility and thermal fatigue life. They also provide a greater tolerance to localized strains (such as at blade roots), and have allowed small increases in turbine temperature and performance. Once material properties were measured and the manufacturing technique perfected, directionally solidified turbine blades and vanes were ready for engine application. Their first use by Pratt & Whitney in a production engine was in 1969, to power the SR-71 Blackbird supersonic reconnaissance aircraft. Commercial jet engine use of these airfoils followed, starting in 1974. This success set the stage for the invention of single-crystal turbine airfoils, and with it much greater efficiency improvements.

Running the Pigtail

While casting directionally solidified crystals in the late 1960s, AMRDL researcher Barry Piearcey found that if a right-angle bend occurred in the casting mold, a short distance above the knurled chill plate surface (called the "starter" chamber), the number of columnar crystals exiting the bend would be reduced. Two such bends reduced the number even more. Later, while investigating the
properties of single-crystal springs, Giamei found that a helical channel with smooth continuous turning was a natural substructure filter, admitting columnar crystals from the starter and emitting one single crystal to start the single-crystal structure of the turbine blade. This single-crystal selector was dubbed the "pigtail"; mastering it proved challenging, however. As the single-crystal structure forms, one-dimensional heat conduction must be maintained as the mold is withdrawn from the temperature-controlled enclosure. Any heat conducted to the mold's lateral surfaces can cause localized crystallization, which disrupts the single-crystal structure with secondary grains. Pratt & Whitney then refined techniques to manufacture single-crystal turbine airfoils and overcome casting defects such as secondary grains or recrystallized regions. This early pioneering work has been carried on by other manufacturers and improved on over
A mathematical modeling image illustrates how a helical formation selects out a single crystal from a solidifying metal alloy. Each color represents a different crystal grain. (Image courtesy of Charles-André Gandin, CNRS.)
A jet engine operates by first taking air in and compressing it. Fuel is added and combusted to heat the air, which then turns the rotor blades of a turbine. The hot exhaust is expelled through a nozzle to create thrust. (Image: Wikimedia Commons.)
the past 40 years. Yields greater than 95 percent are now commonly achieved in the casting of single-crystal turbine airfoils for aviation gas turbines, which minimizes the higher cost of single-crystal casting compared to conventionally cast blades.

According to Gell, the first single-crystal castings were made from existing polycrystalline alloys. These alloys all contained carbon, boron, and zirconium, three elements that preferentially segregate themselves to grain boundaries, which provides high-temperature grain boundary strength and ductility for creep resistance. But in the single-crystal castings, which have longer solidification times and no grain boundaries, these three elements produced compounds with carbon, resulting in poor high- and low-cycle fatigue properties. In the early 1970s, alloys specifically for single crystals were developed that eliminated carbon, boron, and zirconium, resulting in higher melting points, higher creep strength, and greatly improved high- and low-cycle fatigue resistance in the final blades and vanes.

An alloy dubbed PWA 1484, which Pratt & Whitney developed in the early 1980s, consists (by weight) of nickel (59 percent), cobalt (10 percent), tantalum (9 percent), aluminum (6 percent), tungsten (6 percent), and a few other elements (10 percent). One of the others is rhenium (3 percent), which provides a significantly higher metal temperature capability. Gell notes that rhenium is a "by-product of a by-product," derived from specific copper-molybdenum ores, and a very costly element in limited supply. Before committing to the use of PWA 1484, Pratt & Whitney manage-
ment had to be assured that rhenium could be obtained over time at a known, acceptable price. The novel solution was that the company entered into a long-term contract with a Chilean mining company to provide the material.

The first real engine tests of single-crystal turbine blades were carried out in 1967 and 1968 at test facilities in Florida, on the SR-71 Blackbird engine. However, the tests on this supersonic power plant showed that the technology was not ready. Later, in the 1970s, with more mature technology, single-crystal turbine airfoils were installed in P&W F100 production engines, to power the F-15 and F-16 jet fighters. The first commercial aviation use was in the JT9D-7R4 jet engine, which received flight certification in 1982, powering the Boeing 767 and Airbus A310. In 1986, Pratt & Whitney received the ASM International Engineering Materials Achievement Award for the development of single-crystal turbine blades.

Technology history shows that a game changer such as single-crystal turbine blades usually entails a long-term process, typically 30 years or more. Pratt's AMRDL group did it in less than 10 years, from concept to a marketed product. This targeted group success is worthy of study in itself, something that has been undertaken by Samant Chandrashekar and his colleagues at the National Institute of Advanced Studies in Bangalore, India. The story of the creation of these gems of gas turbine efficiency is an exemplar for others to follow. Rolls-Royce, one of Pratt & Whitney's competitors, considers such turbine blades one of their very
high-value-added manufacturing core competencies. Computer programs can use a type of mathematical modeling called finite element analysis to further refine the single-crystal solidification process.

In jet engine use, single-crystal turbine airfoils have proven to have as much as nine times more relative life in terms of creep strength and thermal fatigue resistance, and over three times more relative life for corrosion resistance, when compared to small-grained crystal counterparts. Modern high-temperature turbine jet engines with long life (that is, on the order of 25,000 hours of operation between overhauls) would not be possible without the use of single-crystal turbine airfoils. By eliminating grain boundaries, single-crystal airfoils have longer thermal and fatigue life, are more corrosion resistant, can be cast with thinner walls—meaning less material and less weight—and have a higher melting point temperature. These improvements all contribute to higher gas turbine thermal efficiencies.

Engines on the Ground

The newest chapter of the single-crystal story concerns their introduction in large gas turbines used in electric power plants. These units—producing as much as 500 megawatts of electricity, enough to power several hundred thousand homes—are using supersized single-crystal blades and vanes for both corrosion resistance and increased temperature capability, which add to efficiency. Their first use was for corrosion resistance in a 163-megawatt electric power gas turbine produced by Siemens, introduced to the market in 1995. In more recent years, to increase thermal efficiency, the inlet turbine temperatures of electric power gas turbines have been increased to aviation levels, and so single-crystal airfoils with higher temperature capacity are now needed for long life.

General Electric's 9H turbine, a 50-hertz combined-cycle gas turbine (meaning it uses its waste heat to produce additional power in a steam cycle), is one of the world's largest. The first model went into service in 2003 at Baglan Bay on the south coast of Wales, feeding as much as 530 megawatts of electricity into the United Kingdom's electric grid at a combined-cycle thermal efficiency of just under 60 percent. The 9H, weighing 367,900 kilograms, uses single-crystal turbine vanes and blades with lengths of about 30 to 45 centimeters (the blade lengths in Pratt & Whitney's aircraft engines are about 8 centimeters). Each finished casting weighs about 15 kilograms, and each is a single-crystal airfoil.

Recently, to bring myself up to date on single-crystal casting technology, I visited a foundry where the latest, very large combined-cycle blades are cast. Located in Hampton, Virginia, Alcoa Howmet is a foundry where gas turbine manufacturers such as General Electric, Siemens, Alstom, and Mitsubishi contract turbine blade casting. They can choose directional solidification (expensive), single-crystal (more expensive), or single-crystal with exact lattice orientation specified (most expensive). Because single-crystal properties such as elastic modulus (the tendency of the material to deform along a specific axis) vary with lattice angular orientation, the optimization of this property can improve specific problem areas of blade design, such as creep life or critical vibration modes.

Howmet's vacuum furnaces for casting single-crystal blades are huge. Each is about two stories high, with a lower chamber where the investment casting ceramic mold (which can have multiple cavities to cast a number of blades at once) is positioned for preheating. Then the mold is raised to an upper chamber where pouring of the molten superalloys occurs, under single-crystal conditions. The mold is then lowered at a controlled rate into the lower vacuum furnace chamber to yield single-crystal solidification. The glowing mold is then cooled and broken apart, freeing the blades to be cleaned and treated for final inspection. All in all, my visit to Howmet showed how much single-crystal technology has advanced since its invention.

A worker prepares to remove a glowing-hot mold from a furnace after casting. (Image courtesy of Alcoa Howmet.)

As more manufacturers start casting single-crystal blades for such expanded use in power generating turbines, the technology is likely to become less expensive, which means that more widespread power plants may start to use these durable blades. With recent decreases in the price of natural gas, the use of gas in power generation is likely to increase, leading to a more urgent need for reduction in greenhouse gas emissions. The long life of single-crystal blades can help these plants work at higher temperatures and thus maintain efficiency, consequently reducing emissions, for the long haul.

Bibliography
Carter, P., D. C. Cox, C. A. Gandin, and R. C. Reed. 2000. Process modeling of grain selection during solidification of single crystal superalloy castings. Materials Science and Engineering A280:233–246.
Chandrashekar, S., R. Nagappa, L. Sundaresan, and N. Ramani. 2011. Technology & Innovation in China: A Case Study of Single Crystal Superalloy Development for Aircraft Turbine Blades, R4–11. ISSSP National Institute of Advanced Studies, Bangalore. http://isssp.in/wp-content/uploads/2013/01/Technology-and-Innovation-in-China-A-case-Study-of-Single-Crystal4.pdf
Copley, S. M., A. F. Giamei, M. F. Hornbecker, and B. H. Kear. 1971. Apparatus and Method for Single Crystal Casting. United States Patent Office, December 7, Patent No. 3,625,275.
Gell, M., D. N. Duhl, D. K. Gupta, and K. D. Sheffler. 1987. Advanced superalloy airfoils. Journal of Metals 39:11–15.
Giamei, A. F. 2013. Development of single crystal superalloys: A brief history. Advanced Materials and Processes 171(9):26–30.
Langston, L. S. 2006. Crown jewels. Mechanical Engineering Magazine 128(2):31–33.
Langston, L. S. 2013. The adaptable gas turbine. American Scientist 101:264–267.
Piearcey, B. J. 1970. Single Crystal Metallic Part. United States Patent Office, February 10, Patent No. 3,494,709.
Shank, M. E. 1991. Francis Louis VerSnyder. Memorial Tributes: National Academy of Engineering, Vol. 4. Washington, DC: The National Academies Press, pp. 323–326.
Sharke, P. 2000. Lost and foundry. Mechanical Engineering Magazine 122(9):62–67.
Smil, V. 2010. Prime Movers of Globalization. Cambridge: MIT Press.
FEATURE ARTICLE
Arsenic, the “King of Poisons,” in Food and Water Levels of this poisonous element can far exceed the US Environmental Protection Agency's water standards in common foods such as rice. Andrew Yosim, Kathryn Bailey, and Rebecca C. Fry
Andrew Yosim is a graduate student at the UNC Gillings School of Global Public Health. He studies the epigenetic mechanisms underlying the health consequences associated with exposure to toxic metals using systems biology approaches. Kathryn Bailey is a toxicologist at Syngenta Crop Protection. She has years of experience studying the mechanisms of arsenic-associated toxicity, with particular interest in the latent health effects associated with arsenic exposure in early life. Rebecca Fry is an associate professor of Environmental Sciences and Engineering at the UNC Gillings School of Global Public Health. She is the Deputy Director of the UNC Superfund Research Program. Fry is an expert on epigenetic mechanisms underlying metals-induced toxicities, particularly related to prenatal and early life exposures. E-mail for Fry: [email protected]

Arsenic is contaminating water in regions around the world, including the United States. For example, our recent work has highlighted that arsenic is contaminating the water of residents of North Carolina and Mexico with detrimental health impacts, particularly in children. The World Health Organization has called arsenic contamination in Bangladesh, with pervasive arsenic-rich drinking water resulting from tube well establishment, "the largest mass poisoning of a population in history." At present, it is estimated that more than 100 million individuals worldwide are chronically exposed to levels of inorganic arsenic in drinking water that can pose a significant threat to human health. Developing fetuses and children are particularly
susceptible to arsenic; exposures have been linked to a number of health outcomes, including increased morbidity, mortality, changes in the immune system, and increased risk for cancers and chronic diseases later in life.

Drinking water tends to be the largest source of arsenic exposure worldwide. Based on a growing body of research, in 2001 the US Environmental Protection Agency (EPA) established a maximum contaminant level of 10 parts per billion in public drinking water. Although the EPA's regulatory power has helped reduce arsenic exposure from public drinking water, recent results suggest that consumers may be unknowingly exposed to arsenic from a currently unregulated source, and one of the most humble of culprits.

In November 2012, Consumer Reports released a report of inorganic arsenic testing performed on one of the most common food staples worldwide: rice. The results were striking; among the 223 rice and rice products tested, most exceeded the EPA's limit on inorganic arsenic in drinking water of 10 parts per billion. Many of the samples contained arsenic levels significantly higher than this limit; the highest sampled product exceeded 270 parts per billion. Around the same time, the US Food and Drug Administration (FDA) released its own results of almost 200 rice products available for purchase in the United States and found similarly high levels of inorganic arsenic. The national media quickly reported the findings, but many were left to
wonder if this were simply another case of a mass media–fueled panic. Unfortunately, the health implications of such reports would not be so easily dismissed. Instead, they provide a new insight into a potentially significant source of chronic arsenic exposure in the United States.

The acute toxicity of arsenic has been recognized since antiquity. Known as both the "king of poisons" and the "poison of kings," arsenic gained infamy during the Middle Ages as an almost untraceable means of murder. Nearly odorless and tasteless, arsenic could be discreetly slipped into food or drink and would all but guarantee the victim's untimely departure, masked by the similar symptoms of food poisoning. As a result, the sudden death or illness of nobility was often accompanied by suspicion of assassination via the toxicant. For example, the Italian House of Borgia acquired considerable wealth and power through their use of arsenic-tainted wines to assassinate influential popes and cardinals during the 15th and 16th centuries. Eventually arsenic's use as a poison diminished once sensitive tests were developed to detect it, starting in the mid-19th century. However, its infamy is not forgotten, and modern epidemiology has shown that arsenic does not need to fall into the hands of a killer to be deadly.

This metalloid—an element with metallic and nonmetallic properties—is present in every region of the globe. Arsenic is found in soil, due to its natural distribution throughout the Earth's crust, and also as a by-product of current and historical industrial or
agricultural processes such as pesticide use, smelting, and wood preservation. Given the ubiquity of arsenic in the environment, it is interesting that reports detailing the high inorganic arsenic content of certain rice samples have been published only recently. To understand the findings, it is important to recognize arsenic's underlying chemistry. This element may be present in either organic or inorganic forms, and inorganic arsenic is far more toxic than most organic forms found in the environment and food. In the human body, a series of successive reduction and oxidation reactions biotransforms inorganic arsenic to yield different arsenic metabolites, some of which may be more toxic than inorganic arsenic. The toxicity of each metabolite is reflective of its valence
Because of natural and anthropogenic factors, rice fields in Bangladesh have high levels of arsenic, some of which is taken up by rice plants and passed on to the consumer. (Photograph from Kai Udert/Nature Geoscience.)
state, which determines its reactivity, half-life, and distribution in the body. As a result, each arsenic form can have different effects on the body, giving rise to large differences in toxicity. Until recently, the most common and cost-effective method of measuring arsenic content did not differentiate between organic and inorganic forms in food, and it was thus predicted that the accumulation of arsenic in different foods was predominantly organic (arguably the less toxic form). However, advances in technologies have allowed researchers to conduct specialized testing of different products, which, in the case of rice, has
revealed substantial accumulation of inorganic arsenic.

Consequences of Arsenic Exposure
In the past 40 years, researchers have uncovered a range of health conditions in adults associated with exposure to inorganic arsenic, including diabetes, blood diseases, cardiovascular disease, and various cancers such as those of the lung, urinary bladder, skin, and kidneys. Due to the body of evidence concerning its links to cancer, the International Agency for Research on Cancer has listed inorganic arsenic as a Group 1 carcinogen, meaning it is a known carcinogen in humans.
Approximate inorganic arsenic content of common foods (parts per billion): apple juice, 5; pork, 1; rice, 100; chicken, 1; shrimp, 1; fresh fish, 4.
As a result of its toxicity and potential for human exposure, the US Agency for Toxic Substances and Disease Registry has ranked inorganic arsenic as the highest health hazard on its substance priority list. Children and infants are suspected to be particularly vulnerable to the harmful effects of inorganic arsenic. In cases of in utero exposure, arsenic has been shown to readily cross the placenta. As a result, the embryo experiences exposure at levels similar to those of the pregnant mother. This is important to consider as the fetus progresses through critical windows of developmental susceptibility that can influence birth outcomes, as well as the child's health later in life.
Although the inorganic arsenic content of some other foods has been of concern, most of these foods are below the threshold for inorganic arsenic in drinking water set by the EPA. By contrast, rice far outstrips these other foods in inorganic arsenic content. (All arsenic levels are approximate and based on: FDA. 2011. http://1.usa.gov/1ybRXBx; R. A. Schoof, et al. Food and Chemical Toxicology 37:839; FDA, 2013; K. E. Nachman, et al. Environmental Health Perspectives 121:818; J. J. Sloth, K. Julshamin, and A. K. Lundebye. Aquaculture Nutrition 11:61; Fontcuberta, M. Journal of Agricultural and Food Chemistry 59:10013.)
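Concentrations only become meaningful once they are weighed by how much of each food or drink a person actually takes in. The short Python sketch below makes that conversion explicit. The 100-parts-per-billion figure for rice, the 10-parts-per-billion drinking-water limit, and the 25 pounds of rice eaten per year by the average American come from this article; the assumption of 2 liters of drinking water per day is an illustrative value, not something the article states.

# Back-of-the-envelope estimate of daily inorganic arsenic intake.
# Parts per billion are treated as micrograms per kilogram of food
# or micrograms per liter of water.

RICE_PPB = 100          # approximate level in rice cited in the article
WATER_LIMIT_PPB = 10    # EPA maximum contaminant level for drinking water

# Consumption figures: the 25 pounds of rice per year is from the text;
# the 2 liters of water per day is an assumed, illustrative intake.
rice_kg_per_day = 25 * 0.4536 / 365   # roughly 0.031 kg of dry rice per day
water_l_per_day = 2.0

arsenic_from_rice = rice_kg_per_day * RICE_PPB          # micrograms per day
arsenic_from_water = water_l_per_day * WATER_LIMIT_PPB  # micrograms per day

print(f"Rice (~{rice_kg_per_day * 1000:.0f} g/day at {RICE_PPB} ppb): "
      f"{arsenic_from_rice:.1f} micrograms per day")
print(f"Water (2 L/day at the {WATER_LIMIT_PPB} ppb limit): "
      f"{arsenic_from_water:.1f} micrograms per day")

On these assumptions, average rice consumption adds a few micrograms of inorganic arsenic per day, and a diet built around several rice-based servings daily can approach the intake implied by drinking water at the regulatory limit.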
Prenatal arsenic exposure has been linked to increased mortality, neurotoxicity, and impaired growth, as well as changes to the peripheral and central nervous systems. In addition to health outcomes affecting the newborn, exposure to inorganic arsenic in utero has also been associated with adverse health effects during childhood and later in adulthood, such as the development of various cancers. These health endpoints are particularly worrisome in light of the recent rice testing and recent studies showing increased levels of inorganic arsenic in the cord blood of newborns whose mothers consumed contaminated rice compared to those whose mothers did not. In addition to infants, older children are also at increased risk from arsenic exposure. As the brain and nervous system develop, children are particularly vulnerable to the effects of environmental toxicants. A child who experiences the same or a similar level of exposure as an adult may be at increased risk because the exposure may be greater in proportion to their body weight and the enzymes needed to detoxify an agent may not be as abundant or active in a child as in an adult. There is also the potential for increased exposure levels in children. For example, a 2009 report by the European Food Safety Authority found that the diets of children under the age of three contained up to three times as much inorganic arsenic as the diet of an adult. In addition, researchers using data from the National Health and Nutrition Examination Survey found that children who consumed rice had higher levels of total urinary arsenic than children who did not. Taken together, these exposure levels are particularly worrisome, because elevated arsenic exposure during childhood has been linked to a range of negative health outcomes
in adulthood, including cardiovascular disease, lung disease, and a wide range of cancers.

How Arsenic Induces Disease
Although inorganic arsenic exposure is associated with a multitude of health effects, the precise manner by which it induces toxic effects is not known. Some of the strongest experimental evidence indicates that mechanisms such as enzyme inhibition, disruption of the endocrine system, altered DNA repair, the generation of oxidative stress, and epigenetic modifications may all contribute to arsenic's toxicity. Although the extent and interplay of these mechanisms are still not fully understood, it has been proposed that the metalloid may aberrantly turn "on" or "off" critical genes, such as those encoding proteins that check for errors during DNA replication and repair, as well as those that control metabolism and fetal growth. Such changes in gene expression may reflect changes in the epigenome, a layer of biological information that is not contained within the DNA sequence itself but that influences how DNA is transcribed and translated. The new and promising field of epigenetics is enabling researchers to study the effects of arsenic on the epigenome. Unlike some chemicals that have the potential to directly modify DNA bases, arsenic can alter the function of the genome through epigenetic mechanisms such as DNA methylation, histone modification, and changes in microRNA expression. For example, in the case of DNA methylation, many environmental toxicants can induce the addition or deletion of epigenetic "marks" or "tags" on the DNA, which can activate or silence particular genes. Researchers studying epigenetics are beginning to unravel how and to what extent the environment, including environmental contaminants such as arsenic, can modify the epigenome. Importantly, alterations to the epigenome during critical periods of fetal development have been proposed as a plausible link between environmental toxicant exposure and health complications later in life. Inorganic arsenic exposure has been linked to numerous epigenetic alterations, across the genome and in specific genes. Many of these genes are implicated in disease development,
In the United States, areas with high levels of rice production also happen to have some of the highest levels of arsenic in their soils. (Maps courtesy of US Geological Survey and US Department of Agriculture, National Agricultural Statistics Service.)
(Soil arsenic concentrations on the map range from less than 0.6 to 166 milligrams per kilogram, grouped by percentile.)
including those with the potential to cause or prevent cancers. These disease-associated genes have a variety of functions, such as making sure the cell can utilize nutrients, performing DNA repair, or triggering programmed cell death (a natural defense mechanism important in preventing the formation of cancer). The effect of inorganic arsenic on the epigenetic machinery is yet another possible mechanism underlying its potency as a disease-causing agent and may explain how chronic exposure to the toxicant is associated with a variety of negative health outcomes.

How Arsenic Contaminates Rice
Arsenic accumulates in food such as rice and is therefore a potential source of children's exposure. Rice has been described as a natural "sponge" for metallic compounds, and it can incorporate a variety of heavy metals present in soil or water, including arsenic, cadmium, and mercury. Unlike most other grains, rice plants transport silicon from the soil to fortify and protect their stalks and hulls. The same
mechanisms that sequester silicon in rice can also transport and incorporate arsenic into the plant because the metalloid readily accumulates when rice is grown in arsenic-rich water or soil. The forms of arsenic that rice will absorb are affected by water and soil chemistry as well as the variety of rice being grown. As a result, certain rice species have a greater affinity than others for the accumulation of inorganic or organic forms of arsenic. Most soil naturally contains arsenic levels of about 1 to 10 parts per billion, but in many areas where rice is grown, arsenic may be present at much higher concentrations. This
may be due to natural variations in arsenic distribution, the use of arsenic-containing fertilizer or pesticide, or runoff from industrial operations. Scientists have long recognized the high arsenic content of rice grown in parts of the world with very high levels of arsenic in groundwater, such as Bangladesh. However, even in the United States, the soil on many farms used to grow rice contains high levels of arsenic, which results in the production of arsenic-rich rice. Although the soil and groundwater used to irrigate many US rice farms have substantially lower inorganic arsenic content compared to "hot spots" of arsenic contamination such as Bangladesh, the resulting rice crops can accumulate substantial levels of the toxicant that exceed the EPA's limit for inorganic arsenic in water. Although people may be exposed to the metalloid through the air or soil, drinking water and food tend to be the largest sources of exposure in the United States. Rice is far from the only potential source of food-based exposure. For many decades researchers were concerned about the arsenic content of shellfish. Similar to the manner in which large, fatty fish such as tuna can accumulate mercury, scientists observed that shellfish, which
Countries with some of the highest levels of arsenic contamination are shown below. Because levels of testing for arsenic vary, the data shown are neither comprehensive, nor do they reflect the true scale of the problem. Arsenic concentrations and estimates of exposure are reflective of specific study areas and underestimate the extent of each country's burden of exposure and contamination. (Data from M. F. Naujokas, et al. Environmental Health Perspectives 121:295.)

country | estimated exposed population (millions) | arsenic concentration in drinking water (micrograms per liter)
Argentina | 2.0 | <1 to 7,550
Bangladesh | 35 to 77 | <10 to >2,500
Chile | 0.4 | 600 to 800
China | 0.5 to 2.0 | <50 to 4,400
Ghana | <0.1 | <2 to 175
India | >1.0 | <10 to >800
Mexico | 0.4 | 5 to 43
Taiwan | not available | <1 to >3,000
United States | >3.0 | <1 to >3,100
Vietnam | >3.0 | <0.1 to 810
Early life health effects: low birth weight, preterm birth, neurotoxicity, increased susceptibility to infections.
Later life health effects: cardiovascular disease, lung disease, and cancers of the lung, liver, bladder, kidney, and skin.
When a developing fetus or young child is chronically exposed to arsenic, some health effects may develop quickly, but others may not show up until later in life.
sift through large volumes of water as filter feeders, could accumulate high levels of arsenic. However, further testing on shellfish has revealed that although the invertebrates have high levels of total arsenic, most of the arsenic is present in relatively nontoxic organic forms. In addition to rice, arsenic may enter our diet through a number of other crops. Many fruit orchards, such as apple and pear, are grown in soil with high levels of arsenic. Although the fruit produced by such trees can have elevated levels of arsenic, FDA testing of apple and pear juice and juice concentrates found most commercial apple juices were well below the EPA’s limit of inorganic arsenic in water. Recently, many have called for the reduction of arsenic-based poultry feed and arsenic-based antibiotics, after samples
of chicken and pork were found to contain high levels of inorganic arsenic. In response, some companies actively suspended use of arsenic-based drugs in poultry and swine feed, and the FDA initiated a ban on several of these products. Unlike the previous examples, the arsenic content in rice and rice-based foods could be up to 100 times greater than the arsenic present in fruit, shellfish, or meat. In September 2013, the FDA released the results of a comprehensive study of more than 1,300 rice and rice-based products. As with the testing they had performed a year before on approximately 200 rice samples, the results were surprising. Of the 486 tested samples of rice available for purchase in the United States, all were above the EPA’s limit for water (10 parts per billion), with the highest level
reaching almost 25 times that amount (249 parts per billion). The arsenic content of brown rice was greater than that of white rice and almost twice that of basmati rice. This discrepancy between varieties was to be expected, because arsenic disproportionately accumulates in the rice bran and husk, which are polished off in almost all commercial white rice production. Equally alarming were the results for rice-based products. The average rice-based snack far exceeded the current limit set by the World Health Organization for inorganic arsenic in water, and in many cases was higher than the average arsenic content of rice. Individuals who frequently consume contaminated rice may be at increased risk for health effects from chronic arsenic exposure. Although the average American consumes 25 pounds of rice a year, certain populations may consume many times this amount. The results from Consumer Reports and FDA testing revealed that rice-based products, ranging from noodles to snacks such as rice cakes, contained arsenic at levels comparable to those of rice itself. As a result, individuals with dietary restrictions—such as those on gluten-free diets, those who consume rice-based products to reduce cholesterol, or individuals following diets that encourage the consumption of complex carbohydrates like brown rice—may be at additional risk. Researchers from the Dartmouth Toxic Metals Superfund Research Program recently reported on the high
arsenic content of organic brown rice syrup, a high-fructose corn syrup alternative found in many health-conscious products, including cereal bars, milk formula, and energy bars or shots. Individuals who consume products with organic brown rice syrup may also be at increased risk for arsenic-associated health effects. Even more worrisome, the researchers found many rice-based infant formulas and first foods had similarly high levels of inorganic arsenic, exceeding the EPA's limit for inorganic arsenic in water.

What to Do About Arsenic in Rice?
The FDA, tasked with protecting the safety of the nation's food supply, is currently investigating the potential long-term health consequences related to chronic rice consumption. The agency reports that it has reinitiated a risk assessment, a systematic approach to quantifying the added risk to an individual's health from exposure to a particular agent. As part of the process, the agency will convene a panel including toxicologists, epidemiologists, and nutritionists to determine possible adverse effects following an individual's chronic consumption of rice. This process will lead to the accumulation of quantifiable data on the range of exposures an individual may encounter, as well as the health effects associated with such exposures. In cases where dose-response or health-effect data are scarce, the risk-assessment process includes standard uncertainty values used to err on the side of caution. This same review process was recently carried out concerning the arsenic content of apple juice. Similar to the unfolding story of rice, the FDA began testing the arsenic content of commercially available apple juices after stories about arsenic in juice gained national attention. Children are perhaps the largest consumers of apple juice, and as a result, many organizations were concerned that apple juice might be a source of chronic exposure in children. Following this risk assessment, in 2013 a new action level for the arsenic content of apple juice of 10 parts per billion (the same limit the FDA set for bottled water) was introduced. Although the new action level on apple juice is meant to ensure safety, it should be noted that the arsenic levels of most of the tested apple juice products were already lower than 10 parts per billion and thus in
As rice grows, the roots take up silicon and phosphorus, which contribute to the structure and health of the plant. These same mechanisms transport arsenic present in the water and soil, which explains why rice readily accumulates arsenic in comparison to other food plants.
Inorganic arsenic especially accumulates in the rice grain's outer husk and bran layer. Through an elemental and isotopic imaging technique called laser ablation inductively coupled plasma mass spectrometry, the image on the left shows arsenic content in a sliced cross-section of an immature grain of rice (indicated by red-dotted line on right). The image on the right shows the way different parts of the rice grain are used. Rice products that retain the nutrient-rich outer layers of rice also tend to have higher inorganic arsenic content. (Image on left from A.-M. Carey, et al. Analytical and Bioanalytical Chemistry 402:3275.)
Antibiotics and fertilizers: Drugs containing arsenic are used in livestock production for a variety of reasons. Fertilizer produced from these animals' manure contains concentrated amounts of arsenic, and is redistributed to crops.
Growing conditions: American rice grows in flooded fields, allowing arsenic uptake from soil and water.
Pesticides: Many fields in the south central US, where 75% of US rice is grown, have high levels of arsenic due to historical pesticide use. Today, some arsenical herbicides are still used.
Genes: Certain rice varieties absorb more arsenic.
Water: Water used to flood rice fields can have high levels of arsenic.
Natural occurrence: Some soils and water sources naturally have higher arsenic levels than others.
A variety of factors contribute to the high arsenic content of rice.
compliance with the new rule. Rice, on the other hand, may contain arsenic levels up to 25 times higher than apple juice products. Given this proportion, as well as the lifetime frequency and ubiquity of rice consumption, it stands to reason that an action level for rice (and other inorganic arsenic–rich food) would be warranted. At present, there exists no US regulatory limit on the amount of arsenic in any solid food product. In 2002, in an effort to reduce exposure to the toxicant and its associated burden of disease, the drinking water standard was lowered to 10 parts per billion from the previous limit of 50 parts per billion. At that time, water was considered a greater source of exposure than food, because most food was believed to contain only small concentrations of inorganic arsenic and was thus considered to be relatively safe. The recent results of testing performed on rice by both the FDA and Consumer Reports have forced a reassessment of some of these earlier assumptions. Certain manufacturers of rice and rice products have already begun
taking steps to ensure public health in light of the recent reports. For example, Nature's One, a manufacturer of organic brown rice syrup, has developed filters to reduce the inorganic arsenic content of its product below the current EPA limit for drinking water. Certain governmental bodies have begun reviews of their own, such as the World Health Organization's Codex Alimentarius Commission, which approved new research into setting limits on the levels of arsenic in food. Other countries such as China, which recently set an inorganic arsenic limit for food, already lead the United States in responding to the problem of arsenic-contaminated rice. Without a definitive maximum for arsenic in rice set by the FDA, many have been left to wonder about the relative risk posed by continued consumption of rice and rice-based products. Although any concrete serving recommendations are beyond the scope of our expertise, in light of the rice-testing results, the FDA released a statement recommending a "well-balanced diet for good nutrition and
to minimize potential adverse consequences from consuming an excess of any one food." A similar recommendation was put forth by the American Academy of Pediatrics to eat "a wide variety of food" to minimize arsenic exposure. The UK Food Standards Agency recently recommended that infants and young children limit the amount of rice milk and other rice-based beverages they consume to minimize exposure to inorganic arsenic. Given the health effects associated with arsenic exposure, particularly during times of developmental susceptibility, it would seem prudent to follow such recommendations. Unfortunately, even if the FDA deems a new standard necessary to minimize dietary inorganic arsenic exposure, such limits will potentially do little to combat the global health issue. Millions around the world continue to drink arsenic-tainted water and eat arsenic-rich rice and vegetables far beyond any possible exposure faced within the United States. Closer to home, millions of US families on unregulated private wells may face
Potential policy measures: set a new federal limit for inorganic arsenic in rice; ban the use of drugs containing arsenic in animal production; phase out the use of herbicides containing arsenic; restrict the use of manure containing arsenic for fertilizers and animal feed; establish agricultural practices that reduce arsenic uptake in rice.
What you can do: rinse raw rice before cooking; prepare rice with excess water and then drain it; reduce the amount of rice and rice products your family eats; diversify the grains in your diet.
A variety of solutions could help reduce arsenic levels in the rice people eat, through policy and through individual actions.
similar exposures from contaminated water in addition to their rice-based arsenic exposures. For example, we have identified well water levels reaching upwards of 800 parts per billion in North Carolina. The ubiquity with which water, and now the top global food staple rice, can be contaminated with inorganic arsenic demands increased attention. Ultimately, as researchers and policy makers seek to understand and limit exposure, arsenic continues, as it has for hundreds of years, to reign as the "King of Poisons."

Bibliography
Abedin, M. J., M. S. Cresser, A. A. Meharg, J. Feldmann, and J. Cotter-Howells. 2002. Arsenic accumulation and metabolism in rice (Oryza sativa L.). Environmental Science and Technology 36:962–968.
Consumer Reports. 2012. Arsenic in your food. http://www.consumerreports.org/cro/magazine/2012/11/arsenic-in-your-food/index.htm.
Food and Drug Administration (FDA). 2013. Arsenic in rice and rice products. http://1.usa.gov/1xXf5oS.
Jackson, B. P., V. F. Taylor, M. R. Karagas, T. Punshon, and K. L. Cottingham. 2012. Arsenic, organic foods, and brown rice syrup. Environmental Health Perspectives 120:623–626.
Jackson, B. P., V. F. Taylor, T. Punshon, and K. L. Cottingham. 2012. Arsenic concentration and speciation in infant formulas and first foods. Pure and Applied Chemistry 84:215–223.
Juhasz, A. L., et al. 2006. In vivo assessment of arsenic bioavailability in rice and its significance for human health risk assessment. Environmental Health Perspectives 114:1826–1831.
Laparra, J. M., D. Velez, R. Barbera, R. Farre, and R. Montoro. 2005. Bioavailability of inorganic arsenic in cooked rice: Practical aspects for human health risk assessments. Journal of Agricultural and Food Chemistry 53:8829–8833.
Meharg, A. A. 2004. Arsenic in rice—understanding a new disaster for South-East Asia. Trends in Plant Science 9:415–417.
Meharg, A. A., et al. 2008. Speciation and localization of arsenic in white and brown rice grains. Environmental Science and Technology 42:1051–1057.
Meharg, A. A., and M. M. Rahman. 2003. Arsenic contamination of Bangladesh paddy field soils: Implications for rice contribution to arsenic consumption. Environmental Science and Technology 37:229–234.
Meharg, A. A., and F.-J. Zhao. 2012. Arsenic & Rice. New York: Springer.
Ren, X., et al. 2011. An emerging role for epigenetic dysregulation in arsenic toxicity and carcinogenesis. Environmental Health Perspectives 119:11–19.
Smith, A. H., E. O. Lingas, and M. Rahman. 2000. Contamination of drinking-water by arsenic in Bangladesh: A public health emergency. Bulletin of the World Health Organization 78:1093–1103.
Williams, P. N., et al. 2005. Variation in arsenic speciation and concentration in paddy rice related to dietary exposure. Environmental Science and Technology 39:5531–5540.
Zavala, Y. J., R. Gerads, H. Gorleyok, and J. M. Duxbury. 2008. Arsenic in rice: II. Arsenic speciation in USA grain and implications for human health. Environmental Science and Technology 42:3861–3866.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/ issues/id.112/past.aspx
Journey to the Solar System's Third Zone

When New Horizons reaches Pluto in July, it will close one era of space exploration and open an exciting new one.

S. Alan Stern
This July, NASA's New Horizons spacecraft will complete a 9-year, 5-billion-kilometer journey from Earth to the frontier of the Solar System, where it will undertake the first close study of Pluto and its astonishingly diverse system of satellites. It will be a raw act of exploration unparalleled since NASA's Voyager missions to the giant planets in the late 1980s. Nothing quite like it has occurred in decades, and nothing like it is set to happen again in our lifetimes. When most of us were taught basic astronomy in grade school, we learned that the Solar System consists of four inner rocky planets (the "terrestrials"), four outer giant, gaseous planets ("the Jovians"), and one small misfit: Pluto. But that was old-school science, limited by mid-20th century technologies that prevented us from seeing the cosmos as it truly is. Beginning in the 1990s, planetary scientists—by then armed with large telescopes, high-sensitivity digital cameras, and fast computers—discovered that Pluto is no misfit at all. It is simply the brightest member of a vast population of objects orbiting beyond the Jovians: an entire third zone of the Solar System. This region, first hypothesized in the 1940s by Gerard Kuiper, is now called the Kuiper Belt. It is littered with a diverse array of comets and small planets, of widely varying sizes. Pluto is both the largest (2,350 kilometers wide) of them and the first discovered, decades before the rest. The Kuiper Belt is, in turn, by far the largest zone of our planetary system. New Horizons has flown for more than nine years to reach this distant shore. In the months around its closest approach on July 14 of this year, the probe will conduct a detailed survey of Pluto, its array of moons, and its surroundings. In doing so it will also perform the first exploration of the Kuiper Belt—the opening of an entirely new astronomical frontier. Right now we know ridiculously little about Pluto. We know it has an atmosphere consisting largely of nitrogen, like Earth's, though drastically less dense. It has an ultra-cold crust covered with ices of nitrogen, carbon monoxide, and methane. It has at least five moons, polar caps, and an interior that is primarily composed—surprisingly—of rock. Most important, we know that Pluto is the archetype for an entire class of planets that have never been explored. Beyond that, it is a mystery, a virgin world. Who knows what discoveries await? The great lesson of planetary exploration—from the 1960s flybys of Mars and Venus to the initial explorations of Mercury and Jupiter, Saturn, Uranus, and Neptune—is to expect the unexpected. No one expected dry riverbeds on Mars. No one expected Mercury to be an exposed planetary core with its mantle stripped away, or to find volcanoes and geysers on the moons of giant planets. No one expected oceans inside Jupiter's moon Europa, or ice in the clouds of Venus. All of these surprising truths emerged from the early reconnaissance missions. As my team and I prepare for New Horizons's encounter with Pluto, we are preparing to be surprised yet again by the richness of nature and the grandeur of seeing a new, faraway planet for the first time. New Horizons is a small spacecraft. It is dwarfed by Voyager 1 and 2, which preceded it to open up the exploration of giant worlds, and it costs barely one fifth as much as the Voyager project. Nevertheless, it carries much more powerful scientific instruments. By analogy with the computing revolution we've witnessed since the 1970s when the Voyagers were built, New Horizons is like a tablet computer compared to Voyager's mainframe, packing much greater capability into a much smaller volume, and at a much lower price. Beginning in May, New Horizons will deliver higher-resolution images of Pluto and its satellites than are possible from any telescope on Earth—even the Hubble Space Telescope. For 10 weeks before and after the day of encounter, it will "own" the system. At closest approach New Horizons will sample Pluto's atmosphere, search for new moons, look for possible rings, map the composition and temperature distribution across all the bodies in the Pluto system, and take images so good that if it were making an equivalent pass over New York City it could spot wharfs on the Hudson River. I have worked for 25 years to make the New Horizons mission happen, because the scientific promise is so great. The exploration of Pluto will mark both the opening of the exploration of the Solar System's third zone and the historic closing of the initial reconnaissance of our planetary system as a whole. Where will you be when humankind makes its farthest-ever landfall? What will you tell your children and grandchildren you learned about space because you were there with New Horizons, riding along virtually on television or the Internet? And what will you tell them you learned about ourselves, this wonderful species that seeks to know the universe from which it was born?

NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Steve Gribben
PREVIEW: Illustration of New Horizons's flight past Pluto and its largest moon, Charon, is guided by paltry Earth-based observations. Pluto has strong markings and a thin nitrogen atmosphere. Almost every other detail will be a surprise.
S. Alan Stern is a planetary scientist and the principal investigator of NASA’s New Horizons mission. He is former head of NASA’s space and Earth science program and is slated to fly to space in 2016 as a researcher on both Virgin Galactic and XCOR suborbital spacecraft.
BEGINNING: New Horizons team members performed a systems check on the 2.1-meter main antenna (far left) in February 2005, while the probe was under construction at Johns Hopkins University's Applied Physics Lab. Liftoff took place on January 19, 2006, from Cape Canaveral, Florida (left). Riding atop an Atlas V rocket, New Horizons became the fastest spacecraft ever launched; it passed the distance of the Moon in nine hours.
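A rough calculation, sketched below in Python, turns the journey figures quoted in the article (about 5 billion kilometers covered in roughly 9 years) into an average speed, and also shows how long a radio signal needs to cross that distance. The light-travel time is not discussed in the article; it simply follows from treating the journey length as the approximate Earth–spacecraft separation, which is an assumption made here for illustration.

# Average speed over the trip and one-way radio travel time, using only
# the approximate figures quoted in the text.
JOURNEY_KM = 5.0e9            # about 5 billion kilometers
JOURNEY_YEARS = 9
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SPEED_OF_LIGHT_KM_S = 299_792

average_speed = JOURNEY_KM / (JOURNEY_YEARS * SECONDS_PER_YEAR)   # km/s
one_way_light_time_h = JOURNEY_KM / SPEED_OF_LIGHT_KM_S / 3600    # hours

print(f"Average speed over the journey: about {average_speed:.1f} km/s")
print(f"One-way radio travel time at that distance: about {one_way_light_time_h:.1f} hours")

The averages come out to roughly 18 kilometers per second and about four and a half hours of signal delay, which is one reason the flyby itself must run autonomously rather than under real-time control.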
AT JUPITER: New Horizons observed the planet and its volcanic moon Io during a flyby in 2007. Images of the two bodies were obtained one day apart and combined into this montage. A large eruption plume is visible above Io's northern nightside.
NEW HORIZONS INSTRUMENTS
Ralph is the visible and infrared imager/spectrometer on New Horizons. It will provide color, composition, and thermal maps.
Alice is an ultraviolet imaging spectrometer. It will analyze the composition and structure of Pluto's atmosphere and look for atmospheres around Charon and any Kuiper Belt objects visited after the Pluto encounter.
REX (Radio Science Experiment) will measure atmospheric composition and temperature by detecting distortions to radio signals from Earth.
LORRI (Long Range Reconnaissance Imager) is a telescopic camera that will map Pluto's far side and provide high-resolution remote geologic data.
SWAP (Solar Wind Around Pluto) will measure the escape rate of Pluto's atmosphere and observe Pluto's interaction with the solar wind.
PEPSSI (Pluto Energetic Particle Spectrometer Science Investigation) will measure the composition and density of ions escaping from Pluto's atmosphere.
SDC (Student Dust Counter), built and operated by students, is measuring the space dust peppering New Horizons as it travels across the Solar System.
PLUTO SYSTEM ENCOUNTER, July 14, 2015: New Horizons will cross the Pluto system at 14 kilometers per second, passing about 10,000 kilometers from Pluto and 27,000 kilometers from Charon before flying through the shadows that both bodies cast away from the Sun. The small moons Styx, Nix, Kerberos, and Hydra orbit farther out.
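To see why a flyby at 10,000 kilometers is so much better than any telescope, compare apparent sizes. The short Python sketch below uses the closest-approach distance given above, the roughly 2,360-kilometer diameter of Pluto cited elsewhere in this package, and an Earth–Pluto distance of about 5 billion kilometers; the small-angle formula is the standard approximation, and all the distances are only rough values, not mission-precise numbers.

import math

PLUTO_DIAMETER_KM = 2_360
CLOSEST_APPROACH_KM = 10_000   # New Horizons's closest approach to Pluto
EARTH_DISTANCE_KM = 5.0e9      # approximate Earth-Pluto distance

def angular_size_arcsec(diameter_km, distance_km):
    # Apparent angular size of a body, in arcseconds (small-angle formula).
    radians = diameter_km / distance_km
    return math.degrees(radians) * 3600

from_flyby = angular_size_arcsec(PLUTO_DIAMETER_KM, CLOSEST_APPROACH_KM)
from_earth = angular_size_arcsec(PLUTO_DIAMETER_KM, EARTH_DISTANCE_KM)

print(f"Pluto seen from 10,000 km: about {from_flyby / 3600:.1f} degrees across")
print(f"Pluto seen from Earth: about {from_earth:.2f} arcseconds across")
print(f"Gain in apparent size: roughly {from_flyby / from_earth:,.0f} times")

From Earth, Pluto spans only about a tenth of an arcsecond, which is why even the Hubble views shown nearby are blurry maps of a few pixels; at closest approach the planet appears roughly half a million times larger.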
AT PLUTO: These blurry globes, painstakingly constructed using data from the Hubble Space Telescope, are the best views of Pluto—for now. Its markings have changed considerably since a decade earlier, indicating a dynamic surface. The bright area in the middle image is covered with carbon monoxide frost; it will be a high-priority target for New Horizons.
For scale: PT1 is about 40 kilometers in diameter, Charon about 1,210 kilometers, Pluto about 2,360 kilometers, and Earth 12,742 kilometers.
EXTENDED MISSION: A 40-kilometer-wide object known as PT1 could be New Horizons's next stop after Pluto (PT1 stands for "potential target 1"). It was discovered by the Hubble Space Telescope during a dedicated search for a follow-on destination; it was identified by its motion (red circles) relative to the stars. Pluto is the largest member of the Kuiper Belt, the zone of icy bodies beyond Neptune. Little PT1 is more typical of the myriad objects out there. It is probably unchanged since the birth of the Solar System.
RELATIVE SIZE: Pluto is small compared to Earth. Its size and location within the Kuiper Belt are why the International Astronomical Union reclassified it as a "dwarf planet," but Pluto is enormous compared to most other Kuiper Belt Objects, such as PT1. Pluto also has a unique relationship with Charon, which is by far the largest moon relative to its parent planet. Pluto is the key to understanding the outer Solar System and its connection to the evolution of Earth and the other planets.
The Acoustic World of Harbor Porpoises

New research gives a clearer picture of how this specialized mammal perceives its underwater environment.

Magnus Wahlberg, Meike Linnenschmidt, Peter T. Madsen, Danuta M. Wisniewska, and Lee A. Miller
The world as we experience it is a heavily filtered and modified version of the "real" world. For example, our hearing abilities are limited to a frequency range from about 20 hertz to 20 kilohertz, making infrasonic signals from elephants and ultrasonic echolocation signals from bats inaudible to us. Jakob von Uexküll, the 20th-century Estonian-German biologist and philosopher, introduced the concept of an animal's Umwelt: the world around the animal as experienced through its sense organs. One of the classic examples is von Uexküll's account of a tick's Umwelt, described in his book A Foray Into the Worlds of Animals and Humans. Ticks use chemical and temperature cues to "decide" whether they should grab onto a possible host. They have neither eyes nor pressure-sensitive ears. Even though they inhabit the same fields that we may enjoy walking in, their Umwelt is completely different from ours. Similarly, the Umwelt of ants, birds, frogs, and snakes living in the same field will all be very different from each other, as their sensory organs will pick up and process subsamples of environmental information in very different ways.
Magnus Wahlberg is an associate professor of biology at the University of Southern Denmark. Meike Linnenschmidt received her PhD from the University of Southern Denmark and is now a postdoctoral fellow in neurobiology at Ludwig-Maximilians-University Munich. Peter Madsen is a professor of zoophysiology at Aarhus University in Denmark. Danuta M. Wisniewska received her PhD from Aarhus University, where she is now a postdoctoral fellow in bioscience. Lee A. Miller is an emeritus associate professor of biology at the University of Southern Denmark. E-mail for Wahlberg: [email protected]
Take the Umwelt of whales in aquatic environments. The ancestors of whales were terrestrial until some 40 million years ago. Since that time whales have secondarily adapted to the aquatic environment in many physical and physiological ways. Among other traits, they have eyes and ears tuned to be functional under water, allowing for orientation and pursuit of prey such as krill, squid, fish, and in some cases, other marine mammal species. The Umwelt of humans in water is very different. Beyond the fact that we need special equipment to breathe, our own terrestrial adaptations reveal obvious disadvantages when we dive under water.

Hearing Under Water
To understand another animal's Umwelt, we first must understand our own—in this case, understanding what we do not perceive. Underwater, without a dive mask, a beautiful coral reef becomes blurred, and we lose our ability to tell the direction of sound sources such as the eerie calls of whales and fish. Weightlessness and difficulties in recognizing visual and acoustic landmarks lead to disorientation. These disadvantages for humans and other terrestrial mammals have been greatly reduced for whales through evolutionary time. There are many reasons our senses do not work properly in water. Let us take hearing as an example. Briefly, airborne sound is collected by the outer ear, called the pinna, and led into the ear canal. At the end of the canal, the tympanic membrane is set into vibration by the pressure fluctuations created by the impinging sound wave. The three middle ear bones transfer the vibrations from the tympanic membrane to the oval window of the cochlea, putting the fluid inside it, called endolymph, into motion. A complex process of mechanical movements in the cochlea stimulates the auditory hair cells to evoke nerve impulses in the eighth cranial nerve, which leads to the brain stem. From there, neural information is transmitted to the auditory cortex after passing through synaptic connections in several ganglia. Eventually, more than 50 milliseconds after the sound arrived at the outer ear, we consciously experience having heard something. Our ears are adapted for listening to airborne sound. The path from the outer to the inner ear only works properly in air, where the eardrum and middle ear bones efficiently transfer sound energy. Our directional hearing is achieved by the shadowing effect of the head, by the reflective properties of the pinna, and by the differences in time of arrival between the two ears. In water, sound can penetrate the body at many locations and be led directly to the inner ear through so-called bone conduction. Our directional hearing is seriously limited under water mainly because the mechanical and acoustical properties of water resemble those of the tissues of our bodies more closely than those of air. In water the speed of sound and the wavelength for any given frequency are about 4.4 times greater than in air. So for humans in water the directional cues for finding a sound source are greatly diminished. Hearing is the most important sensory modality for a whale because sound travels farther than light in water and can be used at night or in murky waters. But how have adaptations in whales overcome the difficulties of hearing in water? Obviously, whales do not have an outer ear, even though the remnants of the opening of the ear canal can be seen as a small dot on the skin in some species. Instead, in
toothed whales, sound is conducted to the middle and inner ears through specialized fat channels in the lower jaw. The structure (and probably the function) of the inner ear in toothed whales is similar to that of other mammals. For example, toothed whales and humans can pinpoint the direction of a sound source with similar precision, to a few degrees. Just how this directionality is achieved in toothed whales is not well understood, but their inner ears are, unlike in humans, located in bony capsules below and separate from the skull. This positioning limits bone conduction and thus makes it possible to keep the two ears acoustically isolated from each other, which can help better determine the direction to a sound source or echo-reflecting prey. In general the underwater hearing range of toothed whales is vastly expanded compared to a human’s in air. For example, the harbor porpoise (Phocoena phocoena) can hear frequencies from about 100 hertz to 150 kilohertz. Higher www.americanscientist.org
Harbor porpoises emit clicks for both echolocation and communication. Much about their physiological and social systems remains unknown. Studies of these porpoises at the Fjord&Bælt facility in Denmark are uncovering some clues. (Photograph by Peter Verhoog, Fjord&Bælt.)
frequencies mean shorter wavelengths, making it possible for them to echolocate smaller targets.

A Harbor Porpoise Facility
Although humans have limited senses in the underwater environment, we have the intriguing prospect of learning what that missing perception might be like by studying the creatures that have it. We have had the opportunity to study the acoustic Umwelt of the harbor porpoise at facilities in Kerteminde, Denmark: the Fjord&Bælt, an educational outreach facility with an outdoor enclosed water area, and the Marine Research Laboratory of the University of Southern Denmark. These facilities opened in 1997 when two 1- to 3-year-old harbor porpoises that had been by-caught in fishing nets were introduced to the outdoor pool of the Fjord&Bælt. At the ends of the pool
are nets separating it from the narrow Kerteminde harbor, which is connected to the Great Belt at one end and to a shallow fjord at the other end. Tides, wind, and current cause large water movements in and out of the harbor, keeping the water in the facility fresh. In 2004 a third harbor porpoise was added to the Fjord&Bælt. A female calf was born in 2007; this event marked the first successful birth and maturation of a harbor porpoise in captivity, offering a unique opportunity to study its development. Head trainer Jakob Kristensen and his team at Fjord&Bælt train the porpoises there using operant conditioning with positive reinforcement. In colloquial terms, this technique is called "clicker training" by, for example, dog and horse trainers. The trainer makes a movement signifying the porpoise should perform a certain behavior. After the animal accomplishes this task, the
Audiograms for human hearing in air (blue) and harbor porpoise hearing in water (green) show how much more sensitive porpoise hearing is than human hearing, and how much broader the frequency range is over which porpoises can pick up sound. The sound intensity is in decibels relative to the human hearing threshold intensity in air at 1 to 2 kilohertz. (Data adapted from Kastelein et al. 2010 and the American National Standards Institute Audiometer Standard 3.6 (1969).)
trainer gives what’s called a bridge signal (usually a whistle) indicating that the response was correct, and the porpoise returns to receive its reward (a fish). The harbor porpoise is one of the smallest whales. It has been described as a whale “in the fast lane” by marine biologists Andrew Read (of Duke University) and Aleta Hohn (of the National Oceanic and Atmospheric Administration) because it breeds at yearly intervals, and it needs to eat nearly constantly because of its small size and its habitat in the temperate to cold waters of the Northern Hemisphere. Adult harbor porpoise females weigh about 60 kilograms and are about 170 centimeters in
“phonic lips”
blowhole
length whereas adult males are about 45 kilograms and 150 centimeters long. The details of their social system are not fully known. There seems to be no stable family structure, such as is found in many dolphin species. Instead, animals loosely aggregate when foraging, but otherwise seem to live solitary lives, except for the strong mother–calf bond that lasts for at least 8 months. Observations of animals in human care indicate that females also can have strong bonds to other females. Porpoises feed on small fish and squid and perform up to 10-minutelong dives down to a maximum of a few hundred meters, but most dives are short and shallow.
Harbor porpoises use structures called phonic lips to create clicks. The fatty, rounded tissue called the melon serves to focus the porpoise's signal into a narrow beam. The returned echoes are funneled through specialized fat channels to the middle and inner ears. The latter are located in bony capsules separate from the skull, to aid in locating the direction of sounds.
Many people are more familiar with dolphins than porpoises, but there are some significant differences between these two closely related families of animals. There are about 40 species of dolphins, but only 6 of porpoises. The most consistent physiological difference, across all these species, is the teeth: Porpoises have rounded ones; dolphins have pointed ones. Among other traits, porpoises commonly have less diverse vocalizations, and they tend to have rounded noses, unlike the beaked faces of dolphins. And dolphins tend to have about twice the life span of porpoises (about 40 versus 20 years, respectively).

Acoustic Signals
Two decades of research on porpoises in our facility have given us many insights into their acoustic Umwelt. To a large degree, they rely on acoustics to find their prey and navigate underwater. Almost constantly they emit short and powerful clicks of extremely high frequency, which were first described in 1971 by Nikolai Dubrovskiy and his colleagues, now of the Russian Academy of Sciences, and in 1973 by Søren Andersen and Bertel Møhl, both at the University of Copenhagen at the time. The clicks are only 50 to 100 microseconds long and have a frequency centered around 130 kilohertz, making them some of the most high-pitched signals produced by any animal. Naturally the clicks are inaudible to humans, but they are still of extremely high intensity: If we could hear these frequencies well under water, their most powerful clicks repeated at a high rate could actually cause hearing damage in humans, even at several meters' distance. When the clicks bounce off a fish or another item in the water, a faint echo returns. If the echo is audible to the porpoise, the delay time from the emitted click to the returning echo tells the porpoise the distance to the fish and, with its sensitive hearing, the porpoise can also determine the direction to the prey. Thus, the porpoise has a built-in echo sounder it can use for echolocating prey and for orientation. We call this biological sonar, or biosonar for short, and toothed whales share this trait with bats, the only other animals that use biosonar while capturing prey. When swimming and searching for prey, harbor porpoises emit clicks about 20 times a second. When homing in on prey, the click rate increases and ends at several hundred clicks per second in what's called a
terminal buzz when the prey is captured. The pattern of emitted signals during prey capture is very similar among harbor porpoises and other toothed whales, as well as in insectivorous bats. Besides echolocation, porpoises also use their high-pitched clicks for communication. In fact, these are the only signals heard from harbor porpoises. Most dolphins, unlike porpoises, use a wide range of whistles and clicks for communication. By varying the repetition rate of clicks, porpoises can express various types of signals, but the meaning of these click patterns is still largely unknown. However, work by one of our former Master's students, Karin Clausen, suggests that a signal with a very high repetition rate indicates aggression, whereas an upsweep in repetition rate seems to be used as a contact call. We were fortunate to follow the development of biosonar in a newborn calf at the Fjord&Bælt. Just after birth, the calf started to emit relatively low-pitched signals, audible to humans. Within an hour, it started to produce clicks with high frequencies centered around the main frequency of the adult clicks (see the figure below). After a few days, the young porpoise's biosonar seemed fully functional, but it did not catch and eat fish until it was weaned (after more than 8 months). Harbor porpoises make their click sounds with a pair of special organs called the phonic lips, located in the nasal air passage just below the blowhole. The blowhole is really the fused nostrils of the whale, having migrated upwards during evolutionary time to facilitate breathing. The porpoise forces air through the phonic lips, which causes these structures to vibrate and produce clicks. We have shown that they use primarily only the right pair of phonic lips for click generation, as do other toothed whales, for reasons that remain unknown. The sound leaves the head through the fatty roundish tissue between the rostrum and the blowhole, called the melon. (See the anatomical figure above.) Properties of the melon and structures around it cause the sound to be emitted in a narrow beam, about 12 degrees wide. This beam, together with the high frequencies, enables the porpoise to focus sound on the target while reducing echoes from nearby objects. Also, high frequencies will in general improve the resolution of the biosonar system, making it possible for the porpoise to obtain information about small objects and prey.
A harbor porpoise calf born in captivity at Fjord&Bælt (top) provided the opportunity to record and study the animal’s acoustic systems from birth. The calf’s clicks (red curve, middle) differ from an adult’s (black curve). Spectra of the two clicks are shown in the bottom panel. (Photograph courtesy of Fjord&Bælt.)
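A rough calculation, ours rather than the authors', puts numbers on the beam and frequency figures just discussed: the 130-kilohertz clicks correspond to a centimeter-scale wavelength (which is what allows small prey to be resolved), and a roughly 12-degree conical beam ensonifies only a narrow patch at typical hunting ranges. Both estimates assume a sound speed of about 1,500 meters per second:

```python
import math

SOUND_SPEED = 1500.0  # m/s, approximate for seawater

def wavelength(frequency_hz, sound_speed=SOUND_SPEED):
    """Acoustic wavelength, which roughly sets the smallest resolvable detail."""
    return sound_speed / frequency_hz

def beam_footprint(range_m, beam_angle_deg=12.0):
    """Approximate diameter of the ensonified patch for a conical beam."""
    half_angle = math.radians(beam_angle_deg / 2.0)
    return 2.0 * range_m * math.tan(half_angle)

print(round(wavelength(130_000), 4))   # ~0.0115 m: about a centimeter
print(round(beam_footprint(10.0), 2))  # ~2.1 m wide patch at 10 meters' range
```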
Getting Signals Back

Another important aspect of an animal's Umwelt is sound reception. Biosonar would be of little use without pairing the outgoing signals with the harbor porpoise's excellent hearing.
Ronald A. Kastelein and his colleagues at the Sea Mammal Research Company in the Netherlands found the best sensitivity at about 125 kilohertz, with an extremely low auditory threshold (see figure on page 48). The auditory sensitivity of the harbor porpoise is about the same as that of the most sensitive bat, Megaderma lyra.

Hearing can be studied using psychophysics, in which the animal is trained to report whether or not it can hear a sound presented to it.
This process is very similar to how hearing is normally measured in humans. Such experiments, however, require intensive training of the animal and consequently a very long time. Therefore scientists increasingly rely on direct electrophysiological measurements of the neural response to sound. The deflection of the hair cells in the cochlea causes neurons in the eighth cranial nerve to generate action potentials. The auditory ganglia in the brain stem contain numerous large neurons that produce big electrical responses and rapidly conduct neural signals to the auditory cortex. This neural activity can readily be recorded via electrodes attached to the skin near the source; the result is the auditory brain stem response (ABR) to sound stimuli. The same technique is used to measure the hearing abilities of newborn babies. When working with harbor porpoises, we attach suction-cup electrodes to the skin of the head and back regions and, with the appropriate recording equipment, we can measure the ABR to their echolocation clicks as well as to the echoes from objects they ensonify (see figure above).

Recently we used combined methods to study echolocation and hearing in harbor porpoises at the same time. First, we designed a psychophysical experiment around echolocation. The porpoise was sent down to a hoop, where it remained stationed in front of an opaque screen and an acoustic screen. In some trials, a target (a hollow, 18-centimeter-long cylinder) was lowered one meter down on the other side of the opaque and acoustic screens, at a distance of 2, 4, or 8 meters from the porpoise. When the acoustic screen was removed, the animal could echolocate through the opaque screen but could not see the target.
Measuring a porpoise's auditory brain stem response (ABR) while it is echolocating is a way to quantify hearing. The animal, equipped with electrodes, is trained to swim down to a hoop (top), where a screen blocks vision but not acoustic signals. A target is placed at varying distances, and the porpoise's returned echolocation signals are recorded (bottom). The animal swims to a response paddle if it detects the target. (Adapted from Linnenschmidt et al., 2012.)
A hydrophone on the acoustic axis recorded the level of the emitted clicks. If the target was present, the animal was to leave the hoop and indicate that it could detect the cylinder by touching the tip of its rostrum to a response paddle at the surface. If the target was not there, the animal was to stay at the station for a certain time. If the animal made the correct choice, the trainer reinforced the porpoise by blowing a whistle and giving it a fish reward. While the animal performed this behavioral experiment, ABR electrodes in suction cups remained attached, so we could also obtain information on its hearing. With this method we can record the level of the emitted clicks as well as the ABR both to the sounds made by the porpoise and to the echoes returning from the target.
This experimental setup is similar to one used on dolphins at the Hawaii Institute of Marine Biology, spearheaded by Paul Nachtigall, Alexander Supin, and their colleagues.

As the target range increases, the sound level of the echo decreases, so the ABR elicited by the echo should also decrease. But the surprising result in our studies, and in those on dolphins too, was that the ABR generated by the echo remained nearly unchanged, independent of target distance. It seems that the animal can adjust the perceived sound to a convenient level and thus compensate for the attenuation caused by changes in distance to the target. This ability may help the animal detect, localize, and classify targets.

Several mechanisms could explain this behavior. First, the emitted sound level is reduced as the target range decreases. Second, the powerful outgoing signal affects the reception of the echo, a well-described phenomenon not only in whales but also in other mammals. Our harbor porpoise study suggests that, besides these two mechanisms, a third one is in play: It seems to reduce the hearing sensitivity in a way not explained by the other two.
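The gain-control result can be pictured with a back-of-the-envelope sonar calculation. The sketch below is our own illustration, not the authors' analysis: it assumes simple spherical spreading, so the round-trip transmission loss is roughly 40 × log10(range), and it uses a hypothetical click source level and an assumed target strength of −39 decibels. If the porpoise lowers its output by the same number of decibels that the shrinking range gives back, the echo level it receives stays constant:

```python
import math

def round_trip_loss_db(range_m):
    """Two-way spreading loss under a simple spherical-spreading assumption."""
    return 40.0 * math.log10(range_m)

def received_echo_level(source_level_db, target_strength_db, range_m):
    """Echo level back at the animal, per a basic form of the sonar equation."""
    return source_level_db + target_strength_db - round_trip_loss_db(range_m)

# Hypothetical numbers for illustration only: a 190-dB click and a -39-dB
# target. If the animal drops its output by the change in round-trip loss
# as the target closes from 8 m to 2 m, the received echo is unchanged.
drop = round_trip_loss_db(8.0) - round_trip_loss_db(2.0)   # ~24 dB
far = received_echo_level(190.0, -39.0, 8.0)
near = received_echo_level(190.0 - drop, -39.0, 2.0)
print(round(far, 1), round(near, 1))  # identical echo levels at both ranges
```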
Sensing the Nets

Combining these data about the harbor porpoise's sensory abilities with a view of its Umwelt can help us understand, and hopefully solve, a threat to these animals. Danish porpoises live in shallow waters, mostly less than 50 meters deep, and make minute-long dives to hunt for their favorite prey, a variety of small fish and squid. Harbor porpoises are accidentally caught as by-catch in fishermen's gill nets, where they can drown. The properties of porpoise biosonar indicate that they should be able to detect gill nets at distances of 5 to 10 meters. In fact, it seems porpoises can detect nets at much greater distances.

Porpoises are found very close to the coast at our Northern Funen field site. There our former Master's student Torben Nielsen tracked porpoises from land while, at intervals, he deployed a fishing net in the area. When he compared the distribution of porpoises, he could see that they reacted to the fishing net at very long distances, beyond 50 meters and perhaps close to 100 meters. These are much longer detection ranges than previously assumed. The discrepancy in detection distance might be explained by our discovery of the porpoises' ability to regulate their auditory perception.

If porpoises can detect fishing nets at such great distances, then why do they get by-caught? One obvious possibility is that they are directing their biosonar somewhere other than at the nets. Porpoises like to feed on bottom-dwelling fish, and in that case they would not detect a gill net because they would be directing their biosonar downward instead of forward.

One technique that has proven useful for reducing by-catch is placing acoustic alarms on fishing nets.
Such alarms either scare porpoises away or draw their attention to the net, hopefully causing the porpoise to avoid it. Studies by Finn Larsen at the Technical University of Denmark and others show that acoustic alarms on nets do indeed reduce porpoise by-catch.

The adult porpoises at Fjord&Bælt react promptly when we throw a dead fish into the enclosure. It does not take many seconds for a porpoise to localize, approach, and capture the fish.
This behavior has been studied in great detail. The animals are trained to wear suction cups over their eyes, occluding visual cues, but they can still find and capture the fish by emitting clicks and listening for the returning echoes. This experiment has taught us much about how porpoises use their biosonar. First, just like other echolocators, porpoises avoid "stepping on their own echoes": After emitting a click, they wait for all relevant echoes to return before emitting the next one. While searching for prey, porpoises usually emit clicks at rather long interclick intervals (about 50 milliseconds), allowing echoes to return from ranges of up to a few tens of meters before the next signal is emitted.
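The 50-millisecond interclick interval translates directly into a maximum unambiguous range: the porpoise must give the most distant echo of interest time to return before it clicks again. Here is that arithmetic as a short sketch of our own, again assuming a sound speed of about 1,500 meters per second:

```python
SOUND_SPEED = 1500.0  # m/s, approximate for seawater

def max_unambiguous_range(interclick_interval_s, sound_speed=SOUND_SPEED):
    """Farthest range whose echo returns before the next click is emitted."""
    return sound_speed * interclick_interval_s / 2.0

print(max_unambiguous_range(0.050))    # 37.5 m for a 50-millisecond interval
print(max_unambiguous_range(1 / 500))  # 1.5 m during a 500-clicks-per-second buzz
```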
The animal constantly moves its head, and thus its sound beam, from side to side and up and down while searching for interesting echoes from different directions. Once the direction to the prey item has been determined, the animal keeps its sound beam on the prey while approaching. During this phase, both the intervals between clicks and the intensity of the clicks progressively decrease. At very close range the porpoise switches to the so-called buzz phase, in which it can emit up to 500 clicks per second during prey capture (see figure below). With the progressive decrease in the intensity of the emitted clicks as the porpoise approaches the prey, and with the auditory adjustment described above, the perceived levels of the echoes from the prey remain relatively constant, independent of the distance to the prey.

Even though we have a good understanding of these various parts of the chase in terms of echolocation behavior, our understanding of how the porpoise perceives the situation is less clear.
A porpoise wears eye covers and a digital tag during an experiment to record the stream of clicks it produces when catching prey. At b and c, the porpoise emits a terminal buzz, a series of about 500 clicks per second, to home in on prey. (Adapted from Wisniewska et al., 2012.)
To picture the acoustic world of the porpoise during the search, we can imagine using a stroboscopic light while walking through a dark forest. We get a visual snapshot of our surroundings with each flash, and if the trees are far apart, we can navigate with a low flash rate to save batteries. If we want to pick an apple off a branch, we have to increase the flash rate to localize the apple precisely. However, light is processed in parallel, whereas sound is processed sequentially; this means that the "picture" obtained by echolocation is probably not three-dimensional, as it is with vision, but rather one- or two-dimensional.
Moreover, light arrives essentially without delay, whereas each echo returns only after its round-trip travel time. Thus the harbor porpoise's acoustic Umwelt is updated in a discrete fashion for every new click, making it much more limited than our visual Umwelt.
What is the acoustic Umwelt of the porpoise like during the chase? How does the animal decide, within a split second, whether a detected echo is an edible item worth pursuing, an inedible item it should ignore, or perhaps a predator it should actively avoid? What does the auditory world look like to the porpoise when it finds a fish and goes into the final stage of capture, emitting clicks at a rate of up to 500 per second? Would it appear as a motion picture, as it would to us if we increased the flash rate above our flicker-fusion frequency of about 50 flashes per second?

Some of these questions are well suited to study with trained porpoises. By using external video recordings and an archival digital recording tag (fitted with two hydrophones and inertial sensors) built by Mark Johnson at St. Andrews University, we have studied the detailed searching behavior of porpoises choosing between two spherical targets of various materials, each fitted with a tiny hydrophone to measure the impinging porpoise signal. There are small, subtle differences in the echoes from balls made of different materials, and the porpoise can use these cues to discriminate between the objects. It first sends a biosonar click toward one of the balls, then directs its "acoustic gaze" toward the other. By shifting its attention back and forth between the targets, the porpoise can build up a comparison between them and make the correct decision in most cases.

From each set of two balls, the porpoises were supposed to always choose the aluminum ball (the standard) over the comparison ball of a different material. The porpoises made the most mistakes when trying to discriminate between the aluminum sphere and the steel sphere, likely because the echoes from these two spheres are quite similar. They had no problem discriminating between the aluminum and Plexiglas spheres (see figure on the opposite page).

A blindfolded porpoise (top) has been trained to look for an aluminum sphere (here, to the right). It emits hundreds of echolocation clicks when distinguishing this sphere from the Plexiglas one at left. Responses from different porpoises to aluminum, Plexiglas, and steel balls (bottom) show that echoes from Plexiglas (in colors) are sufficiently different from those from aluminum that porpoises never confuse the two (gray bars at left; numbers indicate sample size). However, steel is similar enough to aluminum that porpoises can't always tell them apart (black bars). (Top photograph by Solvin Zankl, Fjord&Bælt; data adapted from Wisniewska et al., 2012.)
The above example illustrates an important difference between the porpoise's acoustic Umwelt and our visual Umwelt: Had the aluminum and Plexiglas spheres been painted black, the porpoise could still hear the difference between the materials, but we could not see it. Also, the porpoise can receive indirect echoes reflected from objects outside of its acoustic beam, such as echoes from the water surface (see figure above), whereas visual cues outside our field of view are not detectable to us. These features make the porpoise's acoustic Umwelt potentially more complex than otherwise expected.

Porpoises naturally also have passive hearing, as we do. However, we know little about how sounds—such as pile driving during offshore wind farm construction, seismic explosions used for oil and gas exploration, and ship noise—influence their behavior. We are now using digital recording tags on wild harbor porpoises to provide some answers to these questions.

Toothed whales and bats have a very special way of perceiving their surroundings through sound. By combining knowledge gained from tagged wild harbor porpoises with the results of experiments performed on trained animals, we are improving our knowledge of these elusive small whales, so that we can better understand their acoustic Umwelt and in this way refine our ways of protecting them and the environment in which they live.
A digital tag on a porpoise recorded this echogram as the animal swam toward a target, with colors indicating the intensity of returning echoes. The red area at the bottom indicates the clicks the porpoise produced while approaching the target. Echoes from the water surface are seen throughout. Around 0 seconds, the terminal buzz gives multiple clicks and echoes. This image illustrates the numerous acoustic cues available to the porpoise and contributing to its Umwelt. (Adapted from Wisniewska et al., 2012.)
Bibliography
Dubrovskij, N. A., P. S. Krasnoff, and A. A. Titov. 1971. On the emission of echo-location signals by the Azov Sea harbor porpoise. Soviet Physics—Acoustics 16:444–448.
Johnson, M., and P. L. Tyack. 2003. A digital acoustic recording tag for measuring the response of wild marine mammals to sound. IEEE Journal of Oceanic Engineering 28:3–12.
Kastelein, R. A., L. Hoek, C. A. F. de Jong, and P. J. Wensveen. 2010. The effect of signal duration on the underwater detection thresholds of a harbor porpoise (Phocoena phocoena) for single frequency-modulated tonal signals between 0.25 and 160 kHz. Journal of the Acoustical Society of America 128:3211–3222.
Kyhn, L., J. Tougaard, K. Beedholm, F. H. Jensen, E. Ashe, R. Williams, and P. T. Madsen. 2013. Clicking in a Killer Whale habitat: Narrowband, high-frequency biosonar clicks of harbour porpoise (Phocoena phocoena) and Dall's porpoise (Phocoenoides dalli). PLoS ONE 8:1–12.
Larsen, F., C. Krog, and O. R. Eigaard. 2013. Determining optimal pinger spacing for harbour porpoise bycatch mitigation. Endangered Species Research 20:147–152.
Linnenschmidt, M., K. Beedholm, M. Wahlberg, J. H. Kristensen, and P. E. Nachtigall. 2012. Keeping returns optimal: Gain control elicited by dynamic hearing thresholds in a harbour porpoise. Proceedings of the Royal Society B 279:2237–2245.
Madsen, P. T., D. Wisniewska, and K. Beedholm. 2010. Single source sound production and dynamic beam formation in echolocating harbour porpoises (Phocoena phocoena). Journal of Experimental Biology 213:3106–3110.
Miller, L. A. 2010. Prey capture by harbor porpoises (Phocoena phocoena): A comparison between echolocators in the field and in captivity. Journal of the Marine Acoustics Society of Japan 37:156–168.
Møhl, B., and S. Andersen. 1973. Echolocation: High-frequency component in the click of the Harbor Porpoise. Journal of the Acoustical Society of America 54:1368–1372.
Nielsen, T. P., M. Wahlberg, M. Hiekkilä, P. Sabinsky, and T. Dabelsteen. 2012. Swimming patterns of wild harbour porpoises Phocoena phocoena show detection and avoidance of gillnets at very long ranges. Marine Ecology Progress Series 453:241–248.
Read, A. J., and A. A. Hohn. 1995. Life in the fast lane: The life history of harbor porpoises from the Gulf of Maine. Marine Mammal Science 11:423–440.
Uexküll, J. von. 1934. Streifzüge durch die Umwelten von Tieren und Menschen. Reprinted in English as A Foray Into the Worlds of Animals and Humans. 2010. Minneapolis: University of Minnesota Press.
Verfuss, U. K., L. A. Miller, P. K. D. Pilz, and H. U. Schnitzler. 2009. Echolocation by two foraging harbour porpoises (Phocoena phocoena). Journal of Experimental Biology 212:823–834.
Villadsgaard, A., M. Wahlberg, and J. Tougaard. 2007. Echolocation signals of wild harbour porpoises, Phocoena phocoena. Journal of Experimental Biology 210:56–64.
Wisniewska, D. M., M. Johnson, K. Beedholm, M. Wahlberg, and P. T. Madsen. 2012. Acoustic gaze adjustments during active target selection in echolocating porpoises. Journal of Experimental Biology 215:4358–4373.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/issues/id.112/past.aspx
When the Cause of Stroke Is Cryptic

Identifying the reason for a stroke can help doctors avert future ones; if the cause remains unknown, mathematics may point toward a probabilistic answer.

David M. Kent and David E. Thaler
Sudden weakness on one side of the body, slurred or incoherent speech, a droop in the muscles of the face—these are all possible signs of a stroke, an interruption in the circulation of blood in the brain. According to figures from the American Stroke Association, on average someone in this country suffers a stroke every 40 seconds—a rate that amounts to nearly 800,000 a year. Somewhere between 4 and 8 percent of these strokes are fatal; the rest cover a spectrum from clinically trivial to catastrophic, depending on the size and anatomic location of the brain area affected. A stroke can produce temporary or long-lasting visual field defects, loss of sensation, paralysis, loss of the ability to speak or to understand language, or any of a wide variety of other debilitating and sometimes curious neurological symptoms and syndromes.

Just as the consequences of a stroke can vary, so too can the causes. These causes divide strokes into two broad categories: ischemic strokes, in which a cerebral blood vessel becomes blocked, and the less common hemorrhagic strokes, in which a vessel ruptures or leaks blood into the surrounding tissue.
David M. Kent is professor of medicine, neurology, and clinical and translational science at Tufts University School of Medicine. He directs the Predictive Analytics and Comparative Effectiveness (PACE) Center, where large databases and predictive modeling methods are used to explore how treatment effects vary across patients with different characteristics. David E. Thaler is chair of neurology and associate professor of medicine at Tufts University School of Medicine. He served on the steering committee of the RESPECT Trial, the largest trial examining PFO closure in cryptogenic stroke patients, and is director of the Comprehensive Stroke Center at Tufts Medical Center. E-mail: dkent1@tuftsmedicalcenter.org
In either case prompt treatment is essential, because brain tissue dies off quickly without a continuous supply of nutrients and fresh oxygen to the neurons that control our muscles, our perceptions and thoughts, and even our ability to breathe.

Following a stroke, clinicians typically run tests to learn as much as possible about its cause. For example, in the case of an ischemic stroke, did the problem begin with the buildup of plaque in a large vessel, or with a fragment of plaque that broke off and migrated to a smaller vessel? Did a blood clot originate in the left atrium of the heart, owing to stasis from an abnormal rhythm, or somewhere else in the circulatory system—perhaps at a site quite distant from the brain? Such distinctions may provide crucial information that can guide therapy to help the patient avert another stroke in the future.

A Hole in the Heart

In approximately 30 percent of stroke cases, the cause cannot be identified even after an extensive workup. Such cases are known as cryptogenic strokes, which is simply to say that the cause remains unknown. Cryptogenic stroke thus represents a "wastebasket" category of strokes with many different etiologies.

Fairly often, patients with cryptogenic stroke are found to have a patent foramen ovale (PFO), a tunnel-like structure in the septum that separates the right and left atria of the fetal heart. During gestation, the foramen ovale allows the circulation to bypass the dormant fetal lungs, because blood is oxygenated in the placenta.
Usually the foramen ovale seals shortly after birth, when the infant's first breath creates pressure that brings together two flaps of tissue to form an impenetrable septum. Sometimes, however, the two flaps fail to come together completely, leaving a PFO in about 25 percent of adults. Most often, this incomplete closure apparently poses no threat throughout the individual's lifetime. However, by providing a small channel from the right side to the left side of the heart, a PFO may, in theory, permit venous blood clots to pass into the arterial circulation, creating the conditions for an ischemic stroke from a paradoxical embolism. (This process is illustrated on the facing page.) What makes such an event paradoxical is that the contents of a vein could never travel directly into an artery under normal conditions.

As mentioned earlier, figuring out the probable cause of a stroke may help guide therapy to prevent the next one. After a stroke from atherosclerotic disease, most physicians recommend antiplatelet therapy and statins (as well as antihypertensive drugs, in cases of high blood pressure). For prevention of a second stroke from atrial fibrillation (caused by clotting of stagnant blood in the left atrium that then embolizes to the brain), the appropriate treatment is usually some form of anticoagulant, or blood thinner. For strokes believed to be caused by paradoxical embolism, the intuitively appealing approach is to close the PFO with a device such as the one that appears on page 56. The device is usually placed in the heart by a catheter in a minimally invasive procedure. However, because PFO is so common and is usually regarded as benign, its presence in a patient with cryptogenic stroke may be a completely incidental finding—and the stroke may have been caused by another occult mechanism altogether.
In such cases, even the relatively small risks associated with the procedure and the permanent cardiac device seem poorly justified.

Although PFO closure is widely if haphazardly employed, clinical proof of its benefit has been hard to find. Three recent trials, comprising more than 2,000 patients, were unable to find a statistically significant benefit of mechanical closure, compared with clot-preventing medication alone, in preventing stroke recurrence. Nevertheless, the trials did not put the question to rest, because they showed some tantalizing (yet inconclusive) signals suggestive of benefit. Many experts think larger beneficial effects might be found if we were better able to identify the patients whose strokes are most likely to have been caused by paradoxical embolism.

For the past five years, we have led an international collaboration called the Risk of Paradoxical Embolism (RoPE) Study. One of us is a neurologist with a special interest in the role of PFO in stroke; the other is a medical doctor and research methodologist with a long-standing interest in how mathematical risk models might be applied to disaggregate the overall results of trials and provide more detailed evidence for the care of patients based on their specific characteristics. (For a discussion of how averages tend to hide individual differences in clinical trials, see the article by Kent and Hayward in the January–February 2007 issue of American Scientist.) The premise of the RoPE Study is that the benefits of PFO closure in a patient are dependent on the joint probability of two distinct dimensions of risk: the probability that the first, or index, cryptogenic stroke was attributable to the PFO (in other words, that the PFO was not an incidental finding) and the probability that the stroke will recur. We call this joint probability the attributable recurrence risk.
Normally, oxygen-poor blood from the veins passes from the right atrium and right ventricle of the heart into the lungs to pick up fresh oxygen, then to the left atrium and left ventricle, and finally through the arteries to the rest of the body, including the brain. Following this path, a clot that forms in the veins of the leg would get trapped in the lung, causing a pulmonary embolism; if it were very small, such a clot might be of little clinical consequence. In cases of paradoxical embolism, however, a small opening in the walls of the heart allows the clot to pass directly from the right atrium into the left atrium and then into the systemic arterial circulation, where it can cause a stroke if it disrupts blood flow in the brain.
If the septum between the right and left atria of the heart fails to seal completely at birth, the result is a flap-like opening called a patent foramen ovale, or PFO. Doctors can repair a PFO with this device resembling a tiny umbrella, which can be inserted through a catheter in a minimally invasive procedure.
What's the Evidence?

In an individual case, it is typically not possible to determine with certainty whether a PFO discovered in the setting of a cryptogenic stroke was the cause of the stroke or just an incidental finding. However, evidence from epidemiologic studies shows that PFO is a much more common finding in patients with cryptogenic stroke than in the general population or in patients with strokes whose cause is known. Based on studies that compare the prevalence of PFO in cases of cryptogenic stroke against its prevalence in a control group (either from the general population or from patients with a stroke of known cause), it is possible to determine the fraction of cryptogenic strokes that can be attributed to the presence of a PFO; the mathematical logic is illustrated by the patients of "Stroke Ward XYZ," on page 57. For this example, let us posit a PFO prevalence of 40 percent among those with cryptogenic stroke and 25 percent
among a control population. If we assume the prevalence of PFO in patients whose cryptogenic stroke is unrelated to PFO to be equal to that in the general population—that is, 25 percent—these figures suggest that approximately 50 percent of PFOs discovered in the setting of cryptogenic stroke would be merely incidental.

In essence, the role of PFO in cryptogenic stroke is a question of probability. Using Bayes's theorem to solve for the probability that a PFO is incidental yields the following equation:

probability that a PFO is incidental in cryptogenic stroke (CS) cases =
[prevalence of PFO in controls × (1 − prevalence of PFO in CS cases)] ÷ [prevalence of PFO in CS cases × (1 − prevalence of PFO in controls)]

Thus, as PFO prevalence in cryptogenic stroke patients decreases, the probability that a discovered PFO is merely incidental increases. If the prevalence of PFO in cryptogenic stroke patients were equivalent to that in a control population (approximately 25 percent), the probability of a discovered PFO being merely incidental would be 100 percent—which is to say, this would be the expected rate if PFO were not a risk factor for cryptogenic stroke at all. Conversely, if we can find characteristics that identify a cryptogenic stroke population with an especially high prevalence of PFO, this would suggest a much higher probability that a stroke in this population was attributable to the presence of the PFO.
The equation has another interesting property as well: Its right-hand side is numerically equivalent to the inverse of the odds ratio in case-control studies. This feature makes it easy to convert the odds ratio from case-control studies (comparing the prevalence of PFO in cryptogenic stroke patients with the prevalence in control subjects) into what we really want to know: the probability that a discovered PFO is just an incidental finding, unrelated to the stroke.

When we applied this formula across 23 previously published case-control studies, we found that, on average, about one-third of the PFOs discovered in the setting of cryptogenic stroke are incidental; the others are considered pathogenic. But more important, we found tremendous variation across studies in these estimates; moreover, the variation seemed to correlate with the characteristics of the populations in the different studies. This provided us with an important clue as to how we might estimate the attributable risk for an individual.
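The arithmetic is easy to check. The short sketch below, ours rather than anything from the RoPE Study itself, applies the equation to the hypothetical Stroke Ward XYZ numbers and confirms that it equals the inverted case-control odds ratio:

```python
def prob_pfo_incidental(prev_cs, prev_controls):
    """Probability that a PFO found after a cryptogenic stroke (CS) is incidental,
    given PFO prevalence among CS cases and among controls."""
    return (prev_controls * (1 - prev_cs)) / (prev_cs * (1 - prev_controls))

def odds_ratio(prev_cs, prev_controls):
    """Case-control odds ratio for PFO in CS cases versus controls."""
    return (prev_cs / (1 - prev_cs)) / (prev_controls / (1 - prev_controls))

# Stroke Ward XYZ: 40 percent PFO prevalence in cryptogenic stroke cases,
# 25 percent in controls -> half of the discovered PFOs are incidental.
print(round(prob_pfo_incidental(prev_cs=0.40, prev_controls=0.25), 2))  # 0.5
print(round(1 / odds_ratio(0.40, 0.25), 2))                             # 0.5
```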
Paradoxical Data

Given that PFO is strongly associated with cryptogenic stroke, and that it can be eliminated with a minimally invasive procedure, why would a physician withhold this therapy and choose instead to wait for a second, possibly disabling, event? Indeed, in some centers, PFOs discovered in the setting of a cryptogenic stroke are routinely closed. Despite the compelling logic, however, the consistent association of PFO with cryptogenic stroke has been accompanied by a surprising yet equally consistent finding: PFO appears not to be a risk factor for recurrent stroke. Patients with a PFO have similar (or lower) stroke recurrence risks compared with other cryptogenic stroke patients. To some researchers, this finding offers an argument against closure: Why close a PFO if the condition apparently does not increase the patient's risk?
Even more surprising is the suggestion, from some studies, that small PFOs may be associated with a higher risk of recurrence than larger PFOs, leading some of us to propose (jokingly) that "high-risk" small PFOs should perhaps be dilated into "low-risk" large ones.

Although several hypotheses have been proposed to explain these counterintuitive findings, we (with our colleague Issa Dahabreh, now at Brown University) have described a fundamental bias that can account for these paradoxical observations. It affects all research on the causal mechanisms underlying the risk of recurrent events whenever the risk factors for the recurrent episode may be similar to the risk factors for the first event. We have named this phenomenon index event bias, because it arises in studies that select patients based on the occurrence of an index event. Selecting patients on this basis induces a negative correlation in the occurrence of the risk factors, and a study can then tend to underestimate their importance in determining a future event.

PFO in cryptogenic stroke provides an excellent illustration of index event bias. Because PFO is a congenital anomaly, essentially distributed randomly at birth, people with and without PFO are equally likely to have vascular risk factors for stroke, such as diabetes, hypertension, and so forth. Among patients with a cryptogenic stroke, however, those with a PFO tend to be younger than those without a PFO and also tend to have much lower rates of the conventional risk factors for stroke, such as diabetes, hypertension, hypercholesterolemia, or smoking. This negative correlation between PFO and other risk factors arises in those with cryptogenic stroke because both PFO and these risk factors contribute to the same outcome—cryptogenic stroke. Heuristically, the association may be considered to emerge because patients with PFO do not require the same burden of risk factors to fall victim to a stroke; in certain circumstances, having a PFO may be sufficient. This skewing of risk factors explains why the presence of a PFO appears to "protect" cryptogenic stroke patients from the other risk factors (both known and unknown) for cryptogenic stroke—and why such patients might be at similar or lower risk of recurrence relative to those without a PFO, whose cryptogenic strokes are caused by other mechanisms.
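Index event bias is easy to demonstrate with a small simulation, a toy model of our own construction rather than the RoPE Study's analysis. Two hypothetical risk factors are assigned independently, and each raises the chance of a first stroke; once we keep only the people who had a stroke, the two factors become negatively correlated, just as PFO and the conventional vascular risk factors do in cryptogenic stroke patients:

```python
import random

random.seed(1)

def simulate(n=200_000):
    """Toy cohort: PFO and a conventional risk factor, assigned independently,
    each raise the probability of a first stroke (illustrative numbers only)."""
    cohort = []
    for _ in range(n):
        pfo = random.random() < 0.25          # assumed 25 percent prevalence
        risk_factor = random.random() < 0.30  # hypothetical conventional factor
        p_stroke = 0.01 + 0.04 * pfo + 0.04 * risk_factor
        cohort.append((pfo, risk_factor, random.random() < p_stroke))
    return cohort

def correlation(pairs):
    """Pearson correlation between two 0/1 variables."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

cohort = simulate()
everyone = [(p, r) for p, r, _ in cohort]
stroke_patients = [(p, r) for p, r, s in cohort if s]
print(round(correlation(everyone), 3))         # ~0: independent in the population
print(round(correlation(stroke_patients), 3))  # negative: the index event bias
```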
In this hypothetical example, each of the 200 patients in Stroke Ward XYZ has experienced a cryptogenic stroke. Cardiac imaging reveals a PFO in 40 percent of them, or 80 patients; the other 120 patients (60 percent) have no PFO. Factoring in the prevalence of PFO in the general population, which is 25 percent, yields the probability that in 50 percent of this subgroup, or 40 patients in all, the stroke can be attributed to their PFO; in the other 50 percent, the PFO is merely incidental.
The finding that stroke appears to have an equal or greater risk of recurrence in cases of small PFOs than of large ones may be similarly explained by index event bias: It may simply be a sign that small PFOs are more likely to be incidental, coinciding with other risk factors and with mechanisms unrelated to PFO that may confer more risk than paradoxical embolism does.

The large and consistent differences in the characteristics of cryptogenic stroke patients with and without PFO hold several important implications for the prevention of future strokes. First, they add to the indirect evidence that PFO is pathogenically important in the first cryptogenic stroke; otherwise, cryptogenic stroke patients with or without this anatomical variant would be expected to have a similar burden of stroke risk factors—just as in the general population. Second, the finding suggests that PFO may be an important risk factor for stroke recurrence, because it "compensates" for the shortfall in other risk factors.
Most significant, the differences in patient profiles between patients with and without PFO give us a means of predicting the probable presence or absence of a PFO in a patient with a cryptogenic stroke, even before we perform any imaging of his or her heart. Although in the general population it is essentially unpredictable who might have a PFO, in cryptogenic stroke patients we can use the presence or absence of vascular risk factors and other characteristics in mathematical models that predict the probability of finding a PFO. We call this individualized estimate PFO propensity—that is, the probability that a cryptogenic stroke patient has a PFO, as calculated on the basis of other characteristics. Among cryptogenic stroke patients, this propensity is increased by lower age and by the absence of conventional stroke risk factors.

The equation on page 56 relates PFO prevalence to the probability that a PFO is incidental.
The RoPE (risk of paradoxical embolism) calculator is a way to estimate the likelihood that a patient’s cryptogenic stroke can be attributed to an embolism that reached the brain by the “paradoxical” route illustrated on page 55. The patient “Krassen,” whose stroke came at a relatively early age and who has no other known risk factors, has a high RoPE score, meaning that his stroke can likely be attributed to PFO. By contrast, “Marco,” who has several risk factors—including greater age, hypertension, diabetes, and a history of smoking—has a low RoPE score that suggests his stroke was not caused by PFO.
The same equation can be used in an individual case to estimate the likelihood that a discovered PFO is incidental or pathogenic; the calculations here would be based on the prevalence of PFO in cryptogenic stroke patients with similar characteristics.
Calculating the RoPE Score

To get a robust estimate of these probabilities, we needed a much larger database of patients with cryptogenic stroke investigated for PFO than had previously been assembled. We therefore came up with a euphonious acronym (the RoPE Study) and formed an international collaboration to pool data from 12 different centers, including more than 3,000 cryptogenic stroke patients.
When cryptogenic stroke patients with PFO are stratified by their RoPE score, an inverse relation between two trends emerges clearly: The higher a patient's RoPE score, the more likely it is that the stroke is attributable to the PFO. This likelihood follows directly from the PFO prevalence found in cryptogenic stroke patients within that stratum, according to Bayes's theorem. A higher RoPE score, however, also corresponds with a much lower estimated risk of recurrent stroke or TIA within two years.
M q M q
M q
M q MQmags q
After more than a year of work to harmonize the data across these different studies, we performed analyses that found the odds of detecting a PFO to be diminished by older age; by the presence of diabetes, coronary artery disease, hypertension, or hypercholesterolemia; and by current smoking or a history of stroke or TIA (a transient ischemic attack, or "ministroke"). If medical imaging revealed a superficial stroke (in the periphery of the cerebral cortex rather than deep within the brain), this also increased the probability of finding a PFO. These results were consistent across all 12 databases.

On the basis of our mathematical model, we developed a point score, assigning a single point for the absence of each of three vascular risk factors (diabetes, hypertension, smoking), a point for the absence of a prior stroke or TIA, and a point for the presence of a cortical stroke on brain imaging. For age, we assigned a point for each decade of age below 70, ranging from one (for patients in their 60s) up to five (for patients in their 20s). This system yields a 10-point score of risk. A score of 10 represents the highest level of PFO-attributable risk; at the other end of the scale, a 0 or 1 represents the lowest level of PFO-attributable risk. Thus a young and apparently healthy patient who has suffered a cryptogenic stroke, such as the individual whose chart appears at the top of page 58 under the name "Krassen," ends up with a RoPE score of 8, whereas an older patient whose risk factors include hypertension and diabetes (and whose chart bears the name "Marco") receives a RoPE score of only 2.
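The point system translates directly into a few lines of code. The sketch below is our own rendering of the scoring rule as described in the text; the function name is ours, and the ages and imaging findings assigned to the two example patients are assumptions, since the article's figure does not give them exactly:

```python
def rope_score(age, diabetes, hypertension, smoker,
               prior_stroke_or_tia, cortical_stroke_on_imaging):
    """RoPE score as described in the text: one point for the absence of each
    vascular risk factor and of prior stroke/TIA, one point for a cortical
    stroke on imaging, plus up to five points for younger age."""
    score = 0
    score += 0 if diabetes else 1
    score += 0 if hypertension else 1
    score += 0 if smoker else 1
    score += 0 if prior_stroke_or_tia else 1
    score += 1 if cortical_stroke_on_imaging else 0
    # Age points: none for patients 70 or older, rising by one point per
    # decade to a maximum of five for patients in their 20s (or younger).
    if age < 70:
        score += min(5, (69 - int(age)) // 10 + 1)
    return score

# Hypothetical patients echoing "Krassen" and "Marco" from the figure.
print(rope_score(45, False, False, False, False, True))  # 8, like Krassen
print(rope_score(72, True, True, True, False, True))     # 2, like Marco
```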
Given the strength and consistency of the effects that have already been found, the presence or absence of these features should allow clinicians to identify sizable groups of cryptogenic stroke patients with very different prevalences of PFO, ranging from approximately 20 to 80 percent. This range in turn suggests clinically important variation among patients in the probability that a PFO is pathogenic (likely to have caused the stroke) rather than incidental, as estimated on the basis of easily observed characteristics. Patients with a low RoPE score (say, 0 to 3 points) have a near-zero probability that their stroke was caused by their PFO, whereas for patients with a high RoPE score (9 or 10), the likelihood that their stroke was caused by the PFO is very high, near 90 percent. Even among the relatively well-selected patients who meet the conventional criteria for entry into the major randomized trials, calculating the RoPE score can uncover variations that may hold clinical significance.

A major premise of this calculation, of course, is that in a control group (a population without PFO-attributable stroke), the prevalence of PFO is totally uncorrelated with the RoPE score. Such a dissociation—a correlation of PFO with risk factors in patients with cryptogenic stroke but not in other populations—is exactly what would be expected if the apparent correlation arises from the index event bias we have described. We checked this assumption in a sample of patients who had experienced strokes of known cause and found that, as predicted, the PFO prevalence was about 25 percent, regardless of their RoPE score.

Yet Another Wrinkle

With such strong, consistent, clinically intuitive and meaningful results, one might anticipate that the problem of patient selection for mechanical closure is solved. But our RoPE analysis revealed yet another wrinkle: Recurrence risk appears to be considerably lower in patients most likely to have had a PFO-attributable stroke (that is, those with a high RoPE score) than in cryptogenic stroke patients whose PFO is probably just incidental (those with a low RoPE score).
The bottom figure on page 58 shows the two-year recurrence rates of stroke or TIA in patients with PFO, stratified by their RoPE score. The red bars in the figure show that recurrence rates decrease dramatically as the RoPE score increases, suggesting that the patients whose index events are most likely to be PFO-attributable are the ones least likely to experience recurrent ischemic events.

These results provide useful clinical insights into the widespread and growing disease of stroke. At the same time, they underscore the challenges of selecting the appropriate patients for PFO closure and the methodological hurdle inherent in PFO closure trials: If the RoPE score is used to select patients with a high fraction of strokes attributable to PFO, the number of recurrent events may be too low to provide adequate power for a trial to find any clinical benefit. Put another way, it is hard to show that PFO closure prevents stroke when the vast majority of patients remain stroke-free even without closure. Other characteristics, such as anatomical features of the PFO itself, may need to be added to select high-risk patients from among those with a high RoPE score and so identify the patients most likely to benefit.

Nevertheless, the RoPE score can be useful for doctors when we counsel patients, even while the debate over mechanical closure continues. We can tell "Marco" and other patients with a low RoPE score that there is a high probability that their PFO is an innocent bystander. For such patients, aspirin and statins would be indicated; these have been shown to lower recurrence rates of stroke generally, and they should be especially helpful for those with atherosclerotic risk factors. As for "Krassen" and other patients with a high RoPE score, we can reassure them that although the PFO is likely to have been involved in their stroke, they are at relatively low risk for a PFO-related recurrence, even with medical therapy alone. Whether PFO closure might further reduce their risk remains controversial.

More generally, the RoPE project provides a new way to think about patients with cryptogenic stroke: Even when the stroke's cause cannot be determined with certainty, there might be indirect ways to estimate it probabilistically. The study also provides one more example of how overall results from clinical trials may be difficult to apply at the individual patient level—and how bringing together clinical, statistical, and epidemiological reasoning can ultimately help us deliver the right treatment to the right patient.
Bibliography
Alsheikh-Ali, A. A., D. E. Thaler, and D. M. Kent. 2009. Patent foramen ovale in cryptogenic stroke: Incidental or pathogenic? Stroke 40(7):2349–2355. doi: 10.1161/STROKEAHA.109.547828.
Carroll, J. D., et al. 2013. Closure of patent foramen ovale versus medical therapy after cryptogenic stroke. New England Journal of Medicine 368(12):1092–1100.
Dahabreh, I. J., and D. M. Kent. 2011. Index event bias as an explanation for the paradoxes of recurrence risk research. Journal of the American Medical Association 305(8):822–823. doi: 10.1001/jama.2011.163.
Kent, D. M., and R. Hayward. 2007. When averages hide individual differences in clinical trials: Analyzing the results of clinical trials to expose individual patients' risks might help doctors make better treatment decisions. American Scientist 95:60–66.
Kent, D. M., D. E. Thaler, and RoPE Study Investigators. 2011. The risk of paradoxical embolism (RoPE) study: Developing risk models for application to ongoing randomized trials of percutaneous patent foramen ovale closure for cryptogenic stroke. Trials 12:185. doi: 10.1186/1745-6215-12-185.
Kent, D. M., et al. 2013. An index to identify stroke-related vs. incidental patent foramen ovale in cryptogenic stroke. Neurology 81(7):619–625. doi: 10.1212/WNL.0b013e3182a08d59.
Kitsios, G. D., D. E. Thaler, and D. M. Kent. 2013. Potentially large yet uncertain benefits: A meta-analysis of patent foramen ovale closure trials. Stroke 44(9):2640–2643. doi: 10.1161/STROKEAHA.113.001773.
Thaler, D. E., et al. 2013. The risk of paradoxical embolism (RoPE) study: Initial description of the completed database. International Journal of Stroke 8(8):612–619. doi: 10.1111/j.1747-4949.2012.00843.x.
Thaler, D. E., et al. 2014. Recurrent stroke predictors differ in medically treated patients with pathogenic vs. other PFOs. Neurology 83(3):221–226.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/issues/id.112/past.aspx
Like Holding a Piece of Sky

Aerogels are the lightest solids in existence. Despite that, their complex internal structure makes them strong and exceptional insulators.

Mark Miodownik
One day in 1998 I walked into the lab just as one of the technicians was taking a piece of material out of the microscope. "I'm not sure if you're allowed to see this," he said, "so we'd better be on the safe side, otherwise I'm going to have to fill in a load of paperwork." He quickly covered up the material.

I was working for the US government at the time, in a nuclear weapons laboratory in the desert of New Mexico. Being a British citizen, I had only the basic security clearance, and so there were areas in the laboratory complex where I couldn't go. Most areas, in fact. But this was our lab, so the technician's behavior was definitely odd. I knew better than to ask him more. This was the late 1990s, a time when Chinese espionage in US national laboratories was a very sensitive issue.

The material was extraordinary, and although I only saw a small fragment of it for a mere second, I found it impossible to forget. Day after day, the thought of the mystery material would pop into my head and I would wonder what on Earth it could be. The fact that I couldn't talk to anyone about it made it all the more difficult to forget. I remembered it as being transparent, yet strangely opalescent—like a hologram of a jewel: a ghost material. I had definitely seen nothing like it before. Had it, I wildly speculated, been salvaged from some alien spacecraft?

I didn't see it again until a few years later. I was back in the United Kingdom, having taken a job as the head of the Material Research group at King's College London.

Mark Miodownik is a professor of materials science and society, and director of the Institute of Making, at University College London. Excerpted from Stuff Matters by Mark Miodownik. Reprinted by permission of Viking Penguin and Houghton Mifflin Harcourt Publishing Company. Copyright © 2013/2014 by Mark Miodownik. All rights reserved.
One afternoon they announced on the TV news that on January 2, 2004, the NASA mission to capture stardust had successfully engaged with the comet Wild 2. The news program then showed a picture of my material. Well, obviously not my material, but the material I desperately wanted to be mine. "So it was alien!" I said triumphantly to my empty flat, as I scrambled to my computer to find out more. "They are harvesting it from space," I thought. Wrongly.

The material turned out to be a substance known as aerogel. I had got the wrong end of the stick from the news report: It was the aerogel that was being used to collect the stardust. I didn't really stop to think about this but plowed on, collecting information about aerogels and their history. Aerogels were not of alien origin, I found out, but they nevertheless had a very strange backstory: They were invented in the 1930s by a man called Samuel Kistler, an American farmer turned chemist, who conjured them into existence solely to satisfy his curiosity about jelly. Jelly?

The Gelatin Skeleton

What was jelly? he asked. He knew that it wasn't a liquid, but it wasn't really a solid either. It was, he decided, a liquid trapped in a solid prison, but one in which the prison bars were like an invisibly thin mesh. In the case of edible jelly, the mesh is made from long molecules of gelatin, which is derived from collagen, the protein that makes up most connective tissues, such as tendons, skin, and cartilage. When added to water, these gelatin molecules unravel and connect with one another to form a mesh that traps the liquid within it and prevents it from flowing. Jelly is basically like a water balloon, but instead of an outer skin holding the water in, the gelatin mesh inhabits the water from the inside. The water is held inside the mesh by a force known as surface tension, the same force that makes water feel wet and form drops, and causes it to stick to things.
The surface tension forces inside the mesh are strong enough that the water cannot escape the jelly but weak enough that it can slosh around, which is why jelly wobbles. It's also why jelly feels so amazing when you eat it: It's almost 100 percent water, and with a melting point of 35 degrees Celsius, the internal gelatin network promptly melts, freeing the water to burst in your mouth.

The simple explanation—a liquid trapped by a solid internal mesh—was not enough for Samuel Kistler. He wanted to know whether the invisible gelatin mesh within a jelly was all of a piece. In other words, was it a coherent, independent internal skeleton, such that if you could find a way to remove all of the liquid, the mesh could stand on its own? To answer the question he conducted a series of experiments, the results of which he published in a letter to the scientific journal Nature in 1931 (volume 127, page 741). The letter is entitled "Coherent Expanded Aerogels and Jellies," and here is how he introduced the report:

The continuity of the liquid permeating jellies is demonstrated by diffusion, syneresis, and ultrafiltration, and the fact that the liquid may be replaced by other liquids of very diverse character indicates clearly that the gel structure may be independent of the liquid in which it is bathed.

What Kistler is saying in this opening paragraph is that various experiments have shown that the liquid in a jelly is connected throughout, rather than being compartmentalized, and can be replaced by other liquids. This demonstrates, in his opinion, that the solid internal skeleton may indeed be independent of the liquid in the jelly.
independent of the liquid in the jelly. And in using the word “gel,” as a more general word for jelly, he is saying that this is true of a whole range of jellylike materials that span the gap between being truly solid and truly liquid, from hair gel, to solid chicken stock, to setting cement (where the internal mesh is formed by calcium silicate fibrils). He goes on to point out that no one had yet managed to separate the liquid of a jelly from its internal skeleton: Hitherto the attempt to remove the liquid by evaporation has resulted in shrinkage so great that the effect upon the structure may be profound.
Aerogels are 99.8 percent air, yet they are exceptional insulators because they prevent air molecules from transferring thermal energy. The thin aerogel shown here is completely shielding a flower from the direct flame of a Bunsen burner. (All images courtesy of NASA.)
In other words, those in the past who have tried to remove the liquid by evaporation have found that the internal skeleton simply collapses. He then goes on to say triumphantly that he and his collaborators have found a way to do it: Mr. Charles Learned and I, with the kindly assistance and advice of Prof. J. W. McBain, undertook to test the hypothesis that the liquid in a jelly can be replaced by a gas with little or no shrinkage. Our efforts have met with complete success.
Their cunning idea was to replace the liquid with a gas while it was still inside the jelly, and so use the pressure of the gas to keep the skeleton from collapsing. First, though, they found a way to replace the water in the jelly with a liquid solvent (they used alcohol), which would be easier to manipulate. The danger of using a liquid solvent was that it too would evaporate, but they found a way to stop it: Mere evaporation would inevitably cause shrinkage. However, the jelly
is placed in a closed autoclave with an excess of liquid and the temperature is raised above the critical temperature of the liquid, while the pressure is maintained at all times at or above the vapor pressure, so that no evaporation of liquid can occur and consequently no contraction of the gel can be brought about by capillary forces at its surface. An autoclave is simply a pressure tank that can be heated. By increasing the pressure in the autoclave, the liquid inside the jelly is prevented from evaporating, even when the temperature is increased beyond its boiling point. The capillary forces he talks about, meanwhile, are caused by the surface tension of the liquid. Kistler speculates that when the liquid is
gradually removed through evaporation, these same forces that hold the jelly together are responsible for tearing it apart. But when he raises the temperature of the whole jelly above the “critical temperature”—the point at which there is no difference between a gas and a liquid because both have the same density and structure—the whole liquid becomes a gas without going through the destructive process of evaporation. He says, When the critical temperature is passed, the liquid has been converted directly into a permanent gas without discontinuity. The jelly has had no way of “knowing” that the liquid within its meshes has become a gas. This is a stroke of genius: Under the pressure from the autoclave, the newly created gas cannot escape from the jelly and so the internal skeleton stays intact. “All that remains is to allow the gas to escape, and there is left behind a coherent aerogel of unchanged volume,” he continues. Only now does he let the gas escape slowly, leaving the internal skeleton of the jelly completely intact and mechanically sound, thus proving his hypothesis. It must have been a very satisfying moment. But he didn’t stop there. These internal skeletons of jelly were incredibly light, fragile things, comprising mostly air. They were, in fact, foams. Perhaps he could make them stronger, he thought, by making a jelly not out of gelatin but out of something more rigid. So it was that he engineered a jelly in which the internal skeleton was made of the mineral silicon dioxide, the main constituent of glass. Using exactly the same process described above, he then created from this jelly a “silica aerogel,” the lightest solid in the world. This was the material I had seen for a split second all those years ago in a laboratory in the desert.
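To put rough numbers on the capillary forces that defeat simple evaporation, here is a back-of-the-envelope sketch using typical textbook values rather than anything from Kistler’s letter (the pore radius and solvent properties are assumptions for illustration). A receding liquid meniscus in a pore exerts a Laplace pressure of

\[
\Delta P = \frac{2\gamma}{r} \approx \frac{2 \times 0.022\ \mathrm{N/m}}{10 \times 10^{-9}\ \mathrm{m}} \approx 4 \times 10^{6}\ \mathrm{Pa} \approx 40\ \mathrm{atm}
\]

for alcohol (surface tension roughly 0.022 newtons per meter) draining from pores about 10 nanometers in radius, ample to crush a gossamer gel skeleton. Taking the solvent above its critical point instead (for ethanol, roughly 240 degrees Celsius and 60 atmospheres) removes the liquid–gas interface, so the surface tension, and with it this crushing pressure, never appears.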
A microscopic view of an aerogel shows its complex skeleton structure, which gives the material its strong insulating properties.
Not content with this achievement, Kistler went on to make other aerogels, and he lists them in the paper: So far, we have prepared silica, alumina, nickel tartrate, stannic oxide, tungstic oxide, gelatine, agar, nitrocellulose, cellulose, and egg albumin aerogels and see no reason why this list may not be extended indefinitely. Note that despite his triumph with silica aerogel he couldn’t resist making an aerogel from egg albumin—that’s egg white. So while the rest of the world was using egg whites to cook light, fluffy omelets and bake cakes, Kistler did a different type of cooking, using an autoclave to create egg aerogel: the lightest meringue in the world.

Billions of Surfaces
Silica aerogel looks extremely odd. Put it against a light background and it disappears almost entirely. In this sense, it
is harder to see than normal glass, despite being less transparent. When light passes through glass, its path is distorted slightly—it is refracted—and the degree of distortion is known as glass’s refractive index. In the case of aerogel, because there is simply less of the stuff, light’s path is hardly distorted at all. For this same reason, there is no hint of reflection on its surfaces, and because of its ultra-low density it appears to have no distinct edges, to not be fully solid at all. Which of course it isn’t. The internal skeleton of a jelly has a structure not unlike that of bubble bath foam, with one main difference, which is that all of the holes link up. Silica aerogel is so full of holes that it is typically 99.8 percent air and has a density only three times greater than air, which means that it has practically no weight at all. At the same time, when placed against a dark background silica aerogel is undoubtedly blue. And yet, because it is made from clear glass, it ought to have no color at all. For many years, scientists wondered why this might be. The answer, when it came, was rather satisfyingly odd. When light from the Sun enters the Earth’s atmosphere, it hits all sorts of molecules (mostly nitrogen and oxygen) on its way down and bounces off them like a pinball. This is called scattering, which means that on a clear day, if you look at any part of the sky, the light you see has been bouncing around the atmosphere before coming into your eye. If all light were scattered equally, the sky would look white. But it doesn’t. The reason is that the shorter wavelengths of light are more likely to be scattered than the longer ones, which means that blues get bounced around the sky more than reds and yellows. So instead of seeing a white sky when we look up, we see a blue one. This Rayleigh scattering, as it is called, is very slight indeed, so you need an enormous volume of gas molecules to see it: The sky works but a room full of air doesn’t. Put another way, any one bit of the sky doesn’t look blue but the whole atmosphere does. But if a small amount of air happens to be encapsulated in a transparent material that happens to contain billions and billions of tiny internal surfaces, then there will be sufficient Rayleigh scattering off these surfaces to change the color of any light that passes through it. Silica aerogel has exactly this structure, and this is where its blue hue comes from.
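The strength of that wavelength bias can be put in rough numbers (a standard textbook relation, not a figure from the article). For scatterers much smaller than the wavelength, Rayleigh scattering intensity goes as the inverse fourth power of wavelength,

\[
I \propto \frac{1}{\lambda^{4}}, \qquad \left(\frac{650\ \mathrm{nm}}{450\ \mathrm{nm}}\right)^{4} \approx 4.3,
\]

so blue light at about 450 nanometers is scattered roughly four times more strongly than red light at about 650 nanometers, whether the scatterers are air molecules in the atmosphere or the nanometer-scale silica strands inside an aerogel.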
When you hold a piece of aerogel in your hand, it is, in a very real way, like holding a piece of sky. Aerogel foams have other interesting properties, the most remarkable of which is their thermal insulation—their ability to act as a barrier against heat. They are so good at this that you can put the flame of a Bunsen burner on one side of a piece of aerogel and a flower on the other and still have a flower to sniff a few minutes later. Because it is a foam, aerogel has within it the equivalent of a billion billion layers of glass and air between one side of the material and the other. This is what makes it such a superb thermal insulator. Having discovered this and other remarkable properties, Kistler reported them in the final sentence of his paper as follows: Apart from the scientific significance of these observations, the new physical properties developed in the materials are of unusual interest. Unusual interest, indeed. He had discovered the best insulator in the world.

A Forgotten Wonder
The scientific community applauded briefly, but then promptly forgot all about aerogels. It was the 1930s and they had other fish to fry; it was hard to know what would shape the future and what would be forgotten. The world of materials was exploding, and materials scientists would soon deliver nylon, aluminum alloys, silicon chips, fiberglass, and many other revolutionary materials. Somehow in all the excitement aerogels got lost and everyone forgot about them. Everyone except one man, Kistler himself. He decided that the beauty and thermal insulation properties of these jelly skeletons were so extraordinary that they should and must have a future. Although silica aerogel is as fragile and brittle as glass, for its weight (which is minuscule) it has good strength—certainly enough to make it industrially useful. So he patented it and sold the license to manufacture it to a chemical company called Monsanto Corporation. By 1948 Monsanto was making a product called Santocel, a powdered form of silica aerogel. Santocel seemed to have a bright future as the best thermal insulator in the world, but alas the time was not right for it. Energy was getting cheaper and cheaper, not more expensive, and there was no awareness of the problem of global warming. An expensive thermal insulator such as aerogel just didn’t make economic sense. Having failed to find a market in thermal insulators, Monsanto rather bizarrely found applications for it in various inks and paints, its role being to flatten them optically by scattering light, creating a matte finish. Aerogel finally ended up being used ignominiously, as a thickening agent in screw-worm salves for sheep and in the jelly used to create napalm for bombs. In the 1960s and 1970s, cheaper alternatives usurped aerogel even from this rather limited repertoire of applications, and finally Monsanto gave up making it altogether. Kistler died in 1975, having never seen his most wonderful material find a place in the world.
Aerogels found a niche as collectors of comet particles on NASA's Stardust mission. Here Mike Zolensky (left), Stardust curator and co-investigator, and Donald Brownlee (right), principal investigator with the University of Washington, study material in the detector after its return to Earth.
The revival of aerogels came not as a result of any commercial application but because their unique properties attracted the attention of some particle physicists at CERN studying something called Cherenkov radiation. This is the radiation given off by a subatomic particle when it travels through a material faster than light can travel through it. Detecting and analyzing this radiation gives clues to the nature of the particle and so provides a very exotic means of identifying which of the many invisible particles the scientists are dealing with. Aerogel is extremely useful for this purpose—providing a material through which the particle can travel—as it is, effectively, a solid version of a gas, and it continues to be used for this today, helping physicists unravel the mysteries of the subatomic world. Once aerogels found their way into physicists’ labs, with their sophisticated equipment, esoteric aims, and big budgets, the material’s reputation started to grow again. At that time in the early 1980s, aerogels were so expensive to make that they could only live in labs where money was no object. CERN was one such lab, but soon NASA followed. The first applications of silica aerogels in space exploration were to insulate equipment from extreme temperatures. Aerogels are particularly suitable for this application because not only are they the best insulators in the world, but they are also extremely light, and when you’re launching spacecraft out of the gravitational pull of the Earth, reducing weight matters rather a lot. Aerogel was used first in 1997 on the
Mars Pathfinder mission and has been used as an insulator on spacecraft ever since. But once the scientists at NASA found that aerogel could cope with space travel, they realized that the material had another possible use. If you look up into the sky on a clear night you might see a shooting star, which appears as a bright trail of light crossing the sky. For a long time it has been known that these are meteors that enter the Earth’s atmosphere at high speeds and burn brightly as they heat up. It is thought that most of these are space dust, which is leftover material from the creation of the Solar System 4.5 billion years ago, along with comets and asteroids. Determining exactly what materials these heavenly bodies are made from has been of interest for many years, because this information could help us understand how the Solar System was formed and may also account for the chemical composition of the Earth. Analyzing the composition of meteorites has given us some tantalizing clues, but the problem with these specimens is that they have all been heated to extremely high temperatures by their passage through the atmosphere. Wouldn’t it be nice, the people at NASA thought, if they could capture some of these objects out in space and bring them back to Earth in a pristine state? The first problem with this idea is that objects in space tend to be traveling rather fast. Space dust is often going at speeds of 5 kilometers per second, or 18,000 kilometers per hour, a lot faster than a bullet. Catching an object like that is not easy. Think of stopping a bullet with, say, your body: Either the force of the bullet exceeds the rupture pressure of your skin, meaning it goes through you, or you employ a bulletproof vest made of a high-rupture-strength material, such as Kevlar, which results in a compressed and deformed bullet. Either way, it’s a risky business. However, in principle, it is quite possible—just as when catching a cricket ball or baseball with “soft” hands, the trick is to spread and dissipate the ball’s energy rather than bracing yourself for a single, high-pressure impact. What NASA needed, then, was a way to slow the dust down from 18,000 kilometers per hour to zero without damaging the dust or the spacecraft—ideally a material with a very low density, so that the dust particles would be slowed gently without being damaged;
Particles from the comet Wild 2 left these centimeter-long tracks in Stardust's aerogel.
ideally one that could do so within the space of a few millimeters; and ideally one that would be transparent, so that scientists could find the tiny specks of dust once they were buried in it.

Catching Stardust
That such a material existed was a minor miracle. That NASA had already used it in space flights was extraordinary. It was, of course, silica aerogel. The mechanism by which aerogel pulls off this feat is the same as the one used to protect stunt actors in movies when they fall off tall buildings: a mountain of cardboard boxes breaks the fall, each box absorbing some of the energy of the impact as it collapses beneath the actor’s weight, and the more boxes, the better. In the same way, each foam wall within aerogel absorbs a tiny amount of energy when it is struck by the dust particle, but because there are billions of them per cubic centimeter, there are enough of them to bring it to a halt relatively unharmed.
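A rough sense of the violence being tamed, using round illustrative numbers rather than mission figures: a grain arriving at about 5 kilometers per second and brought to rest over roughly a centimeter of aerogel decelerates at

\[
a = \frac{v^{2}}{2d} \approx \frac{(5\,000\ \mathrm{m/s})^{2}}{2 \times 0.01\ \mathrm{m}} \approx 1.3 \times 10^{9}\ \mathrm{m/s^{2}},
\]

something like a hundred million times the acceleration of gravity, yet spread over billions of tiny, collapsing cell walls rather than delivered in one abrupt impact.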
NASA built an entire space mission around the ability of aerogel to gently collect stardust. On February 7, 1999, the Stardust spacecraft was launched, containing all of the equipment necessary to take a trip through the Solar System, while also being programmed to fly past a comet called Wild 2. The idea was that it would collect interstellar dust from deep space as well as the dust being ejected from a comet, allowing NASA to study the material composition of both. In order to do this, they developed a tool that resembled a giant tennis racket, but instead of holes between the strings there was aerogel. During the summer and autumn of 2002, while millions of kilometers from any planet, the Stardust spacecraft opened a hatch and poked out its giant tennis racket fitted with aerogel. It had no opponent in this game of interstellar tennis and the balls it was looking for were microscopically small: the remains of other stars long gone, the leftover ingredients of our own Solar System still flying around. The Stardust spacecraft couldn’t hang around in deep space too long because it had an appointment to keep with the comet Wild 2, now hurtling in from the outer reaches of the Solar System toward the Sun, as it does every 6.5 years. Having withdrawn its aerogel tennis racket, the spacecraft sped off for its meeting. It took just over a year to get to the right position, but on January 2, 2004, the spacecraft found itself closing in on the comet, which was 5 kilometers in diameter and speeding around the Sun. Once it had maneuvered itself into the slipstream of the comet, 237 kilometers behind it, the spacecraft opened its hatch and once again poked out its aerogel tennis racket, this time using the B-side, and started to collect, for the first time in human history, virgin comet dust. After collecting the comet dust, the Stardust spacecraft returned to Earth, arriving back two years later. As it approached the Earth it veered away, jettisoning a small capsule, which fell under Earth’s gravity, entering the atmosphere at a speed of 12.9 kilometers per second, the fastest re-entry speed ever recorded for a human-made object, and so becoming for a while a shooting star itself. After 15 seconds of free fall, and having reached red-hot temperatures, the capsule deployed a drogue parachute to slow
down the rate of descent. A few minutes later, at a height of 10,000 feet above the Utah desert, the capsule jettisoned the drogue chute and deployed the main parachute. At this point the recovery crews on the ground had a good idea of where the capsule was going to land and headed out into the desert to welcome it back from its 7-year, 4-billion-kilometer round trip. The capsule hit the sand of the Utah desert at 10:12 GMT on Sunday, January 15, 2006. Until they opened the capsule and started examining the aerogel samples, scientists had no idea whether they held any answers to anything. Perhaps the space dust would have passed straight through the aerogel. Or perhaps the violence and deceleration of re-entry would have disintegrated the aerogel into meaningless powder. Or perhaps there would be no dust at all. They need not have worried. Once they got the capsule back to the NASA laboratories and opened it up, they found that the aerogel was fully intact and almost completely perfect. There were minuscule puncture marks in the surface, and it was these that were subsequently shown to be the entry points for the space dust. Aerogel had done the job that no other material could do: It had brought back pristine samples of dust from a comet formed before the Earth even existed. Since the return of the aerogel capsule, it has taken NASA’s scientists many years to find the tiny pieces of dust embedded within the aerogel, and the work continues to this day. The dust they are looking for is invisible to the naked eye, and so it must be found by painstaking microscopic examination of the samples. The project is so massive that NASA has enlisted the public to help with the search. The scheme, Stardust@home, trains volunteers to use their home computers to look through thousands of microscopic images of the aerogel samples and try to spot the signs that a piece of space dust is present. The work so far has thrown up a number of interesting results, the most surprising of which is that most of the dust from the comet Wild 2 shows the presence of aluminum-rich melt droplets. It’s very hard to understand how these compounds could have formed in a comet that had only ever experienced the icy conditions of space, because they require temperatures of more than 1,200 degrees Celsius to do so.
Because comets are thought to be frozen rocks that date back to the birth of the Solar System, this has come as a bit of a surprise, to say the least. The results seem to indicate either that the standard model of comet formation is wrong or that there is a lot more we don’t understand about how our Solar System formed.

Blue-Sky Material
Now that the Stardust mission is over, will this be the fate of aerogel too, to end in obscurity? It is all too possible. Although aerogels are the best insulators we have, they are very expensive, and it is not clear that even now we care about energy conservation enough to value aerogels economically. There are several companies selling aerogel for such thermal insulation
applications, but at the moment the main markets are extreme environments such as drilling operations. Research on developing new aerogels has been advancing at an increasingly rapid pace. There are now a number of aerogel technologies that result in a material that is not rigid and brittle, as silica aerogels are, but flexible and bendy. These so-called x-aerogels are made flexible by a neat piece of chemistry that detaches the rigid foam walls of an aerogel from one another and inserts between them polymer molecules that act like hinges within the material. These x-aerogels can be made into flexible materials such as textiles and could be used to make the warmest but lightest blankets in the world, potentially replacing duvets, sleeping bags, and the like. Because they are so light, they would also be perfect for outdoor clothes and boots designed for extreme environments. They could even replace the foam soles in sports shoes that make that type of footwear so springy. Recently, carbon aerogels that conduct electricity have been created, as well as super-absorbent aerogels that can suck up toxic waste and gases. Aerogels may yet be part of our everyday lives, the answer perhaps to living in a more extreme and volatile climate. But although as a materials scientist it’s good to know that we are likely to have the right materials to offer the world in the event that global warming is not averted, this is not the kind of future I want for my children. In a world where we have industrialized so many materials (including those we used to hold sacred, such as gold and diamond), I like to think there may again be a place for a material valued solely for its beauty and significance. Most people will never hold a piece of aerogel in their hand, but those who do never forget it. It is a unique experience. There is no weight to it that you can perceive, and its edges fade away so imperceptibly that it is impossible to see where the material stops and the air begins. Add to this its ghostly blue color and it really is like holding a piece of sky. Aerogels have the ability to compel you to search your brain for some excuse to be involved with them. Like an enigmatic party guest, you just want to be near them, even if you can’t think of anything to say. These materials deserve a different future, not of oblivion or embedment in a particle accelerator, but to be valued for themselves. Aerogels were created out of pure curiosity, ingenuity, and wonder. In a world where we say we value such creativity, and give out medals to reward its success, it’s odd that we still use gold, silver, and bronze to do so. If ever there was a material that represented humankind’s ability to look up to the sky and wonder who we are, if ever there was a material that represented our ability to turn a rocky planet into a bountiful and marvelous place, if ever there was a material that represented our ability to explore the vastness of the Solar System while at the same time speaking of the fragility of human existence, if ever there was a blue-sky material—it is aerogel.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/issues/id.112/past.aspx
Scientists’ Nightstand
The Scientists’ Nightstand, American Scientist’s book-review section, offers brief reviews and other books-related content. Please see also our Scientists’ Nightstand e-newsletter, which notes books coverage and news from the world of science publishing: http://amsci.org/nightstand-news
ALSO IN THIS ISSUE
UMAMI: Unlocking the Secrets of the Fifth Taste. By Ole G. Mouritsen and Klavs Styrbaek.
THE OLDEST LIVING THINGS IN THE WORLD. By Rachel Sussman.
MOLECULES: The Elements and the Architecture of Everything. By Theodore Gray.
Diagram of a taste bud, from Umami
Journeys to the Brink
SHOCKED: Adventures in Bringing Back the Recently Dead. David Casarett. x + 262 pp. Current, 2014. $27.95.
Whether they’re prime-time dramas like House or Grey’s Anatomy, comedies like Scrubs or MASH, or afternoon soaps like General Hospital, medical shows hinge on a particular drama riveting to viewers: the heroic resuscitation of patients. The typical revival occurs in a busy emergency room, and the patient’s passage from death back to life is characteristically swift and decisive. Author David Casarett, a practicing hospice physician, explains in his book Shocked: Adventures in Bringing Back the Recently Dead that his own early notions of medicine were shaped by such scenes. As a youth, having watched his share of medical drama cliffhangers, he believed physicians could usually revive patients in the span of a single commercial break, a conviction he dubs the “Big Mac rule of resuscitation”: “A perceptive watcher of these shows would conclude that the fate of a newly dead person is determined in the span of time that it takes to learn about the merits of cookies made by Keebler Elves or a sing-along of the McDonald’s Big Mac jingle. . . . By then, your victim is probably wide-awake and hugging the rescuers.” With this early impression as his starting point, Casarett embarks on a journey to examine the boundaries of resuscitation in real life. His encounters along the way range from the clinical to the bizarre, and in the process he examines the historical, scientific, pseudoscientific, medical, and
social aspects of reviving the dead or nearly dead—a transition zone that, contrary to popular belief, remains ill defined. Combining the insights of a physician with an accessible writing style, Shocked raises important social and ethical issues, particularly around resuscitating very old, profoundly ill patients in view of the high costs and the potential for patient harm. Casarett’s personal evolution guides the book’s narrative. Initially he had planned to be an emergency room physician. As a student he was captivated by the story of a revived toddler whose heart had stopped after she had lain submerged for over an hour in a stream brimming with snowmelt. Over the years, however, following training in medical anthropology and ethics, he changed his focus to palliative care. Casarett covers territory familiar to him throughout the book, and he proves an apt, reassuring guide; he presents his findings as a kind of travelogue, narrating with wit and aplomb visits to historical sites and cutting-edge medical labs. In the process he also weaves in interviews with patients, witnesses, and practitioners from the spectrum of biomedical sciences. The journey begins in Amsterdam, the city of canals, where drowning was once common—hence the city’s emergence as the earliest recorded venue for resuscitation research (dating back to 1767) with the formation of the curiously named Society in Favor of Drowned Persons, a group of volunteers who gathered information from rescues, aiming to develop better resuscitation treatments. Various methods emerged for reviving the near-dead. Among the techniques purported to have saved lives were rubbing the body with liniments and brandy, blowing tobacco smoke into the rectum, and tickling the back of the
throat with a feather. In London, under the aegis of what later became the Royal Humane Society, similar methods were being developed contemporaneously to save drowning victims in the Hyde Park Serpentine, an elongated lake with an adjacent receiving house (essentially an emergency room that doubled as a research laboratory). The receiving house served as the proving ground for many novel methods of resuscitation. Some of the techniques developed there, although largely useless by modern standards, were precursors of ones used today, such as warming some victims who suffered severe hypothermia and chilling others for complicated surgeries. From these 18th-century origin stories, Shocked moves on to notable recent successes, taking stock of research milestones along the way. Casarett discusses the much publicized resuscitation in 1999 of Anna Bagenholm, a young Swedish medical student who took a spill while skiing, plunging head-first into a frozen streambed. For hours she remained submerged in the cold water under the ice, suspended upside down by her skis. The story of her remarkable rescue and recovery after being “dead” for over 5 hours sets the next signpost for the author’s journey. He turns to the history of cardiac resuscitation, specifically the use of electricity for defibrillation, a tale that begins in 1774 and leads to the breakthrough discovery in 1947 that made the technique common. Yet “it’s one thing to restart a heart,” Casarett explains. “Protecting a brain and other organs until the heart starts beating again is a whole different problem.” Because of the loss of blood flow to the brain, the benefits of cardiac resuscitation are typically time limited; he attributes Bagenholm’s survival, as well as that of the toddler who inspired his career, to the brain being chilled. The hypothermia thus induced greatly slowed the damage compared to what happens at a normal body temperature. Inspired by these case studies, medical researchers are actively exploring thermal manipulation to protect and preserve brain health in the cardiac clinic. Casarett examines in detail how techniques for sharply reducing the body temperature—by chilling the body, stopping the shivering response pharmacologically, or diluting and cooling the blood using a heart-lung machine—have permitted extraordinary surgeries, such as the repair of an aortic aneurysm
that required stopping the heart for a longer period than normally permissible. From there he launches into a fascinating discussion of the biochemistry of hibernation (where the metabolic rate drops precipitously) among amphibians and mammals, especially for nonhuman primates like lemurs. Researchers in this field are currently attempting to locate key hibernation response–inducing biomolecules, such as D-alanine-D-leucine-enkephalin, ghrelin, or adenosine monophosphate (AMP). These molecules could potentially induce a condition in humans somewhat akin to states of suspended animation portrayed in science fiction movies to accommodate space travel. In fact, in anticipation of manned missions to Mars, NASA is currently funding research on the effects of inducing a state of deep sleep, called torpor, by chilling the body to 5 degrees Fahrenheit below normal and chemically suppressing the shiver response.
What NASA is contemplating for its astronauts pales in comparison to the extreme measures already being embraced by “cryonauts,” a group whose members want to have their bodies frozen at death. They hope that future doctors will revive and repair them once the cure for their cause of death has been discovered—hundreds, perhaps thousands of years from now. Casarett’s account of the Alcor Life Extension Foundation’s 40th anniversary conference, a gathering of the cryonauts, is at once intriguing, miserable, comical, and bizarre. It provides examples that powerfully testify to humanity’s craving for immortality at any cost and against all odds, and its hubris in the face of death. Attempts at cryonic preservation began in the late 1960s. More recently, a biotech company, Suspended Animation, has partnered with Alcor to freeze the bodies of a few hopeful humans. Interestingly, some
have chosen to freeze only their heads, anticipating that by the time the cryonauts can be revived, heads will be readily transplantable to donor bodies. The bodies of some amphibians have adapted to survive cycles of freezing and thawing, as Casarett explains, which may account for some of the optimism. Unfortunately, there is not a single case of a human head or whole body being frozen and then revived functionally. The cryonauts’ futuristic fantasy is further complicated by logistical and bureaucratic obstacles: recovering the body after death in a timely manner, obtaining necessary medical clearances for preparing the body and freezing it, and ensuring the legal and technological safekeeping of the body far into the future. Despite the difficulties and long odds of success, there are enthusiastic takers willing to pay huge sums. While companies like Alcor garner outsize media attention, Casarett notes that most people fail to appreciate how thoroughly the real forms of resuscitation technology have infiltrated Western society, even far outside of the hospital setting. Consider the proliferation of CPR classes; the Red Cross offers a wealth of courses, many so carefully tailored that babysitters and construction workers may find classes designed to suit their needs. Consider the number of airports, airplanes, shopping malls, and other public spaces where automatic external defibrillators are readily available, their robotic voices programmed to guide bystanders as they attempt to revive the stricken. In some cases the technology can now do the work entirely on its own, as with implantable cardiac defibrillators that protect patients from sudden cardiac arrhythmia and death, constantly monitoring the heart rhythm and delivering shocks as needed. In the last stop of the book’s fascinating journey, the author tags along with paramedics to get a firsthand look at resuscitation technology as it is used to provide emergency treatment for aged, often feeble patients. Here the discussion turns reflective, raising questions about the value of resuscitation technologies for very aged patients with multiple chronic ailments—particularly in the face of an expensive, overburdened health care system and the incessant suffering patients may endure following extensive revival efforts.
Even if a patient has no wish for resuscitation, a crisis event typically results in a 911 call followed by the rapid appearance of paramedics whose training and medical ethics prepare them to save lives at all costs. Often unaware of patients’ wishes or written directives to the contrary—or disregarding them and acceding to pleas of the close family—emergency responders resuscitate and transport aged, profoundly ill patients, many of whom end up in the hospital ICU. Frequently they die there, away from home and hooked up to medical paraphernalia. Casarett does not provide any easy answers. Instead, he leaves the reader to reflect on the thorny socio-ethical issues surrounding the end of life. Shocked is sobering and information-laden, but Casarett eases the reader through with abundant good humor and an affable style. Enjoyable for the scientist and nonscientist alike, this tour of life and death should not be missed. Ram Ramabhadran, PhD, is a pharmaceutical/biotechnology consultant working with biotech companies and nonprofit organizations in the global health sector. He holds an adjunct faculty position in the curriculum in toxicology at the University of North Carolina, Chapel Hill, and resides in Cary, North Carolina.
Taste Masters
UMAMI: Unlocking the Secrets of the Fifth Taste. Ole G. Mouritsen and Klavs Styrbaek. xvi + 264 pp. Columbia University Press, 2014. $34.95.

For generations, people at work in Western kitchens have tended to sort the tremendous variety of tastes available to us into four categories: sweet, sour, salty, and bitter. The notion of four basic tastes goes back to the ancient Greek philosophers; it became an established principle of sensory science in 1901, when German researcher David P. Hänig published a paper describing the subjective impression of distinct tastes on different parts of the tongue. That work inspired the theory that our favorite dishes and drinks—from a sweet, smoky barbecue sauce to a tangy margarita in a salt-rimmed glass—all derive their appeal from ingenious combinations of these basic tastes. The theory has turned out to be incomplete, however. At almost the same time that Hänig published his work, Japanese chemist Kikunae Ikeda demonstrated the existence of a fifth taste, which he named umami (from the term umai, for “delicious”). Ikeda identified one part of the chemical basis for umami as glutamic acid and its salts (glutamate), but Western scientists were slow to agree that this substance produces a distinct taste on the tongue. Only in 2000, when researchers found receptor molecules in taste buds that responded specifically to glutamate, did umami gain worldwide recognition. Ironically, this fifth taste has played a vivid role in Western food all along—we just didn’t have a word for it. In Umami: Unlocking the Secrets of the Fifth Taste, Ole G. Mouritsen, a biophysicist, and Klavs Styrbaek, an experienced chef, have given us a book that not only defines umami but explains how best to bring out this taste in a wide variety of dishes. Most important, they reveal that the full power of umami comes from the synergy between glutamic acid and a ribonucleotide (a chemical compound containing ribose plus one base and one or more phosphate groups). As it turns out, the food additive monosodium glutamate, or MSG, once considered all-powerful in Western renditions of Asian cuisine, is almost flavorless in its pure granular form, because the glutamate ions released in food produce very little taste on their own. The fullest effects of umami come from combining the glutamic ingredient with a ribonucleotide like the inosinate in beef soup, the guanylate in dried shiitake mushrooms, or the adenylate in scallops. Mouritsen and Styrbaek have each published previous books on food, and their enthusiasm for the fifth taste comes through vividly in this new volume. (Umami is Mouritsen’s third book on the science of cooking. His Seaweeds: Edible, Available, and Sustainable was excerpted in the November–December 2013 issue of American Scientist.) Together, the scientist and the chef aim to raise awareness of the sensory possibilities of umami by “taking advantage of the gustatory synergy produced by combining different ingredients.”
Maccha (top of page), made of powdered green tea whipped up in warm water, delivers umami directly from the unroasted tea leaves. Aged cheese, slow-leavened bread, and wheat beer (bottom of page) are just a few of the familiar Western foods that gain their robust umami flavors through the process of fermentation. From Umami.
Those of us with less adventurous palates (a.k.a. “picky eaters”) may wonder whether the synergistic principle of umami, so deeply rooted in Japanese culture, can really apply to ingredients found in other parts of the world. Mouritsen and Styrbaek show that it can, transplanting the concept of umami to the bracing environment of new Nordic cuisine by creating recipes based on the food resources of mainland Denmark and the Faroe Islands, Iceland, and Greenland: a rich array of fish and shellfish, seaweeds, mushrooms, smoked meats, and dairy products. The authors draw on gastronomic and scientific literature to explore the neural basis for the complicated set of perceptions that we call taste, using crisp illustrations and medical images to explain the action of umami on receptor cells in taste buds located all across the tongue. Then comes a smorgasbord of tips on the preparation, uses, and even the folklore of umami-rich foods, distributed into three chapters according to their sources: umami from the sea (found in seaweeds, fish, and shellfish), from the land (fungi and plants), and from land animals (meat, eggs, and dairy). Interspersed throughout the book are short sections that provide interesting sidelights, for example, one demystifying the medical condition known as “Chinese restaurant syndrome,” and another deconstructing the powerful umami kick of the British sandwich spreads Bovril and Marmite and their Australian cousin, Vegemite. There is plenty of material here for readers who may approach this book from a perspective more culinary than chemical. Abundant photography by Mouritsen’s son, Jonas Drotner Mouritsen, shows preparations of food in ways that are beautiful, informative, or appetizing—and often all three at once. The book also contains 39 umami-themed recipes, which run the gamut from basic (potato water dashi, the traditional Japanese soup stock) to elaborate (white chocolate cream, black sesame seeds, Roquefort, and brioche with nutritional yeast), crisscrossing categories of traditional Western cuisine without missing a beat. An engaging read, lucidly translated and adapted from the original Danish edition by Mariela Johansen, Umami is at once a scientific treatise, cultural history, unique collection of recipes, and www.americanscientist.org
handsome coffee-table—or, for that matter, kitchen-table—book. Sandra J. Ackerman is senior editor of American Scientist. Her most recent book, Hard Science, Hard Choices: Fact, Ethics, and Policies Guiding Brain Science Today, was published in 2006 by Dana Press.
The World’s Survivors
THE OLDEST LIVING THINGS IN THE WORLD. Rachel Sussman. xxxiv + 270 pp. University of Chicago Press, 2014. $45.00.
Photographer Rachel Sussman spent 10 years visiting every continent to find and document the Methuselahs of life on Earth, those aged at least 2,000 years. Primarily a visual artist, she describes her work as “outside of conventional scientific methodologies.” Nevertheless, there is no doubt that this tome is a scientifically important compilation of these often elusive and remote organisms—one that illustrates the importance of conserving them to better understand the
mysteries of longevity and protect their diverse forms. As art historian Hans Ulrich Obrist points out in his opening essay, focusing on the organisms that survive through multiple millennia can offer a message of hope in a time when mass extinction is a persistent worry. The study of longevity remains a constantly evolving field, and the advent of better dating techniques using radioisotopes and genetics means that the ages of some of these organisms were discovered or updated as Sussman worked on this photodocumentary project—a circumstance that further emphasizes the wealth of information that can be lost when they are neither appreciated nor protected. In her quest to catalog this ragtag group of creatures, Sussman presents more than 120 photographs in The Oldest Living Things in the World, accompanied by her travel and research anecdotes and introduced in two opening essays by Obrist and Carl Zimmer. Despite their capacity for survival, some of these ancients are threatened with extinction. As Sussman notes, “Extreme longevity can lull us into a false sense of permanence.” In fact, some are the last living individuals of their species—for example, the 43,600-year-old king’s holly (Lomatia tasmanica) shrub in Australia. Such astonishing
Rachel Sussman’s handwritten photo labels contribute to the book’s field-guide aesthetic. This lovely image of Posidonia sea grasses off the coast of Ibiza counterbalances others that capture the grasses’ seed pods in all their “hairball” glory. From The Oldest Living Things in the World.
life spans prompt the question of how these organisms outlive everyone else. “The durable mystery of longevity makes the species in this book all the more precious, and all the more worthy of being preserved,” Zimmer writes. Although humans may consider the week-long life span of a gastrotrich to be a mere blink, he says, our own life span similarly diminishes compared to that of a 13,000-year-old Palmer’s oak. Why organisms have different life spans remains unknown, but there are some lessons in the book on how they manage it. “The fast and furious,” as Sussman puts it, are not the survivors, and efficiency is a key survival strategy. For example, she notes, the bristlecone pine, which can live upward of 5,000 years, may rely on a single live branch when the rest of the tree looks dead and can retain each needle for as long as 40 years. Some organisms survive by being unappealing to predators and people. Hollow trees, such as the 3,000-year-old olive tree that is the pride of Crete, may survive because they have been spared the ax. Although some of the world’s oldest organisms, such as the giant sequoias, are impressive in size or appearance, most are decidedly more humble: Sussman does not shy from the scrubby and scrappy life that is more fascinating than photogenic. For example, her photograph of the box huckleberry (Gaylussacia brachycera) of Perry County, Pennsylvania, depicts a plant so unremarkable one would overlook it completely on a hike, but it turns out to be somewhere between 8,000 and 13,000 years old. Others on the list are downright ugly, and Sussman retains a deep if bemusing fascination with these. When she first encounters the Posidonia sea grass in a local Spanish newspaper, she says the article includes a “tantalizing photo of a tangle of dead grass and seedpods that looked like the world’s largest hairball.” Let’s just say that her photographs do the plant complete justice. In the midst of these massive, scrubby, and homely compadres, another kind of survivor emerges. Evidenced by the fairy rings of the 2,400-year-old honey mushroom in Oregon and the seedlings of the 3,000-year-old Llareta in Chile, circular growth patterns indicate an organism’s ability to make many identical copies of itself.
The Bonds That Make Everything
MOLECULES: The Elements and the Architecture of Everything. Theodore Gray, with photographs by Nick Mann. 240 pp. Black Dog and Leventhal Publishers, 2014. $29.95. E-book app for iPhone and iPad, Touch Press, 2014. $13.99.
In his book The Elements, Theodore Gray, a chemist and cofounder of Wolfram Research, depicted the most basic of building blocks in gorgeous detail. Cast from the same aesthetic mold, his follow-up work, Molecules, reveals the next level of complexity with similarly
breathtaking results. Gray provides a primer on atoms, the forces that hold them together into compounds, and the basics of chemical diagrams. Some typical ball-and-stick depictions of molecules are included to ground the reader in the composition of various substances (these
diagrams are much more lively in the iPad version), but the focus of the book is to paint a picture of what molecules really look like on a macro scale. The result is both beautiful and informative. Gray spends some time visually unpacking how naming conventions for compounds arose historically. He discusses the difference between organic and inorganic compounds. From there, he delves into some specific substances, such as soaps, ores, painkillers, sugars, and polymers, and then opens up the debate about
The closing page of Molecules’ first chapter showcases the remarkable diversity of chemical compounds in our lives. Nonetheless, Gray posits that “it’s astonishing how much of chemistry involves only about half a dozen elements.” From Molecules.
the merits of natural versus artificial compounds. He goes on to detail what gives molecules certain properties, such as scent or color. A separately available interactive iPad version of the book allows a more personalized connection to the material. It features video clips and click-to-rotate functionality, but the star of the show is the molecule gallery. Readers can spin molecules and may bend them if the substance’s molecular
structure is “floppy” enough. The molecules vibrate too, their tempo set by a slider that indicates temperature. Gray gives fair treatment to the full range of substances, safe and dangerous, beloved and despised. As he notes, “Just as every living creature has a place and a role (even mosquitoes), so too every compound wants to be known and appreciated for what it contributes to the richness of the natural world.” —Fenella Saunders
A clonal ring of Yucca schidigera endures despite harsh desert conditions—or perhaps because of them. In tracking down Earth’s oldest survivors, Sussman has learned “that extreme conditions can foster uniquely adapted life.” From The Oldest Living Things in the World.
The very oldest organisms in Sussman’s book are clonal: the 400,000- to 600,000-year-old Actinobacteria of Siberia, the 100,000-year-old Posidonia sea grass, and the 80,000-year-old stand of 47,000 quaking aspen trees in Utah known as Pando. Sussman’s account of her worldwide travels to document these epic survivors is deeply personal. She does not censor some of her failures and challenges: the organisms she could not document because of one impracticality or another; a clumsy injury that tested the limits of Sri Lankan health care; getting foolishly lost in a remote part of Greenland; and awkward meetings with her boyfriend, one of which ultimately ends in a breakup. Although these accounts often ramble, the book’s format, reminiscent of a field notebook, fits Sussman’s conversational style and leaves room for nuggets of intimate reflection. A favorite example: “There is a special kind of cognitive dissonance to being featured in the pages of the Wall Street Journal while simultaneously unable to pay one’s rent.” This unique, ambitious book sets out to review the interdisciplinary and fragmented work of those who have discovered by accident or pure perseverance an incredibly ancient living thing. In addition to the photography and writing, Sussman includes several helpful summary figures: a timeline, map, phylogeny, and graph of growth strategies. She admits the work is far from over, because more than half of her subjects were discovered in the past 30 years. For some organisms, she just couldn’t get there: “Sometimes a girl really does need a submarine.” Whether or not a sub becomes available, one hopes a second album of ancients will find its way to readers eventually. Until then, Sussman’s quirky catalog of travel and photodocumentation is both artistically and scientifically one of a kind. Katie L. Burke is an associate editor of American Scientist. She received her PhD in biology from the University of Virginia in 2011. She blogs about ecology at http://the-understory.com.
Statement of ownership, management and circulation (required by 39 U.S.C. 3685). 1. Publication title: American Scientist. 2. Publication number: 2324-0. 3. Filing date: October 1, 2014. 4. Issue frequency: Bimonthly. 5. No. of issues published annually: 6. Annual subscription price: $30. 7. Complete mailing address of known office of publication: P.O. Box 13975, Research Triangle Park, NC 27709-3975. 8. Complete mailing address of headquarters or general business office of publisher: P.O. Box 13975, Research Triangle Park, NC 27709-3975. 9. Full names and complete mailing addresses of publisher, editor, and managing editor: David Moran, publisher, P.O. Box 13975, Research Triangle Park, NC 27709-3975; Jamie Vernon, editor, P.O. Box 13975, Research Triangle Park, NC 27709-3975; Fenella Saunders, managing editor, P.O. Box 13975, Research Triangle Park, NC 277093975. 10. Owner: Sigma Xi, The Scientific Research Society, P.O. Box 13975, Research Triangle Park, NC 27709-3975. 11. Known bondholders, mortgagees, and other security holders owning or holding 1 percent or more of total amount of bonds, mortgages, or other securities: None. 12. The purpose, function, and nonprofit status of this organization and the exempt status for Federal income tax purposes: Has not changed during preceding 12 months. 13. Publication Title: American Scientist. 14. Issue Date for Circulation Data: Sept–Oct 2013–July–August 2014. 15. Extent and nature of circulation: science. A. Total no. copies: Average no. copies each issue
during preceding 12 months, 47,138; no. copies of single issue published nearest to filing date, 44,880. B. Paid circulation: B1. Mailed outside-county paid subscriptions stated on PS Form 3541: average no. copies each issue during preceding 12 months, 17,303; no. copies of single issue published nearest to filing date, 15,011. B2. Mailed in-county paid subscriptions: average no. copies each issue during preceding 12 months, 0; actual no. copies of single issue published nearest to filing date, 0. B3. Paid distribution outside the mails including sales through dealers and carriers, street vendors, counter sales, and other paid distribution outside USPS: average no. copies each issue during preceding 12 months, 9,431; no. copies of single issue published nearest to filing date, 8,671. B4. Paid distribution by other classes of mail through the USPS: average no. copies each issue during preceding 12 months, 0; no. copies of single issue published nearest to filing date, 0. C. Total paid distribution: average no. copies each issue during preceding 12 months, 26,734; no. copies of single issue published nearest to filing date, 23,682. D. Free or nominal rate distribution: D1. Free or nominal rate outside-county copies as stated on PS Form 3541: average no. copies each issue during preceding 12 months, 384; no. copies of single issue published nearest to filing date, 378. D2. Free or nominal rate in-county copies as stated on PS Form 3541: average no. copies each issue during preceding 12 months, 0; actual no. copies of single issue pub-
ultimately ends in a breakup. Although these accounts often ramble, the book’s format, reminiscent of a field notebook, fits Sussman’s conversational style and leaves room for nuggets of intimate reflection. A favorite example: “There is a special kind of cognitive dissonance to being featured in the pages of the Wall Street Journal while simultaneously unable to pay one’s rent.” This unique, ambitious book sets out to review the interdisciplinary and fragmented work of those who have discovered by accident or pure perseverance an incredibly ancient living thing. In addition to the photography and writing, Sussman includes several helpful summary figures: a timeline, map, phylogeny, and graph of growth strategies. She admits the work is far from over, because more than half of her subjects were discovered in the past 30 years. For some organisms, she just couldn’t get there: “Sometimes a girl really does need a submarine.” Whether or not a sub becomes available, one hopes a second album of ancients will find its way to readers eventually. Until then, Sussman’s quirky catalog of travel and photodocumentation is both artistically and scientifically one of a kind. Katie L. Burke is an associate editor of American Scientist. She received her PhD in biology from the University of Virginia in 2011. She blogs about ecology at http://the-understory.com.
lished nearest to filing date, 0. D3. Free or nominal rate copies mailed at other classes mailed through the USPS: average no. copies each issue during preceding 12 months, 0; actual no. copies of single issue published nearest to filing date, 0. D4. Free or nominal rate outside the mail: average no. copies each issue during preceding 12 months, 868; no. copies of single issue published nearest to filing date, 835. E. Total free or nominal rate distribution: average no. copies each issue during preceding 12 months, 1,252; no. copies of single issue published nearest to filing date, 1,213. F. Total distribution: average no. copies each issue during preceding 12 months, 27,986; no. copies of single issue published nearest to filing date, 24,895. G. Copies not distributed: average no. copies each issue during preceding 12 months, 19,152; no. copies of single issue published nearest to filing date, 19,985. H. Total: average no. copies each issue during preceding 12 months, 47,138; no. copies of single issue published nearest to filing date, 44,880. I. Percent paid: average no. copies each issue during preceding 12 months, 96 percent; no. copies of single issue published nearest to filing date, 95 percent. 16. Total circulation includes electronic copies: no. copies of single issue published nearest to filing date: a. paid electronic copies, 15,421. B. Total paid print copies + paid electronic copies, 42,155. C. Total print distribution + paid electronic copies, 43,407. D. Percent paid: 97%. 50% of all distributed copies (electronic & print) are paid above a nominal price.
American Scientist, Volume 103
Sigma Xi Distinguished Lecturers 2015–2016

For the 77th consecutive year, Sigma Xi presents its panel of Distinguished Lecturers as an opportunity for chapters to host visits from outstanding individuals who are at the leading edge of science. These visitors communicate their insights and excitement on a broad range of topics. The Distinguished Lecturers are available from July 1, 2015, to June 30, 2016. Each speaker has consented to a modest honorarium together with full payment of travel costs and subsistence. Local chapters may apply for subsidies to support expenses related to hosting a Distinguished Lecturer. Applications must be submitted online by March 1 for funds to be available the next fiscal year. Additional support for the program comes from the American Meteorological Society and the National Cancer Institute. Lecturer biographies, contact information, and additional details can be found online under the Lectureship Program link at www.sigmaxi.org or by email to [email protected].

Judith Herzfeld, Chair, Committee on Distinguished Lectureships
Application Deadline: March 1, 2015
Lee Dugatkin, Professor of Biology, University of Louisville
The Evolution of Goodness (P,G) • Mr. Jefferson and the Giant Moose: When Natural History and History Collide (P,G,S) • Genes, Culture, and Behavior: Mate-Choice Copying in Humans and Nonhumans (P,G,S) • Altruism Writ Small: Why Microbes Protect One Another from Antibiotics (G,S)
Debra Fischer, Professor of Astronomy, Yale University
The Search for Habitable Worlds (P) • Putting Our Solar System in Context: Comparative Planetology (G) • How Will We Find Earths? (G)
Krishna L. Foster, Professor of Chemistry, California State University, Los Angeles
Oxygenated-PAH as a Source of Singlet Oxygen in the Lower Atmosphere (S) • Elucidating the Role of Aged Particulate Matter on Air Quality (G) • Investigating the Role of Fine Particles on Air Quality (P)
Andrea Bertozzi, Betsy Wood Knapp Chair for Innovation and Creativity and Professor of Mathematics, University of California at Los Angeles
Mathematics of Crime (P,G,S) • Image Processing and Large Data Analysis (G,S) • Swarming by Nature and by Design (G,S) • Mathematics in the Real and Imaginary World (P)
Edward J. Hackett, Professor, Arizona State University
From Salomon's House to Synthesis Centers (G,S) • The Cultural Contradictions of Science: Jeopardy and Opportunity (P,G,S) • Making Science Useful, Making Useful Science (G,S) • Peer Review and the Conduct of Science (G,S)

Richard Canary, Professor of Mathematics, University of Michigan
Non-Euclidean Sports and the Geometry of Surfaces (P,G) • Hyperbolic Sports and the Geometrization of 3-Dimensional Spaces (S)

Sandra L. Hanson, Professor of Sociology, Catholic University
Girls in Science: Why So Few? (P,G) • Science for All? Diversity in Science in a Global Economy (P,G) • Swimming Against the Tide: African American Girls and Science Education (S)

P (Public), G (General), S (Specialized)
John G. Hildebrand, Regents Professor, University of Arizona
Learning from Insect Brains: Explorations of a Simple Olfactory System (P,G) • The Most Dangerous Animals in the World: Arthropod Vectors of Disease (P,G) • Neural Processing of Behaviorally Significant "Odor Objects" in an Insect's Brain (G,S)
Nicholas Hud, Professor of Chemistry and Biochemistry, Georgia Institute of Technology
The Scientific Quest for the Origin of Life (P,G) • Experimental Investigations of the Origin and Early Evolution of Life (P,G,S) • A Self-Assembly Approach to the Origin of RNA (G,S)
Omowunmi (Wunmi) Sadik, Professor of Chemistry and Director, Center for Advanced Sensors and Environmental Systems, State University of New York at Binghamton
Biochemical Sensor: An Objective Approach for Pain Measurement (P,G,S) • Can Your iPhone Tell You What's in Your Food? Nanosensors for Rapid Detection of Food Pathogens (P,G) • A New Class of Conducting Polymers Based on Flexible Poly(amic Acid) Membranes (S)

American Meteorological Society
Mark Serreze, Research Professor of Geography, University of Colorado Boulder
The New Arctic (P,G) • Communicating Climate Change: Lessons Learned at the National Snow and Ice Data Center (P,G)
Nola M. Hylton, Professor of Radiology and Biomedical Imaging, University of California, San Francisco
Functional MRI Techniques for Assessing Breast Cancer Response to Treatment (P,G,S) • MRI for Breast Cancer Screening and Risk Assessment (P,G,S)

Michael Spencer, Professor, Department of Electrical Engineering, Cornell University
Van der Waals Epitaxy of Graphene and 2D Materials • III-Nitride Devices for High Voltage Grid Applications • Radio Isotope Nuclear Batteries for Medical Implants and Sensors • The Promise of 2D Materials
Bryant C. Nelson, Staff Research Chemist, National Institute of Standards and Technology
Fundamental Interactions of Engineered Nanoparticles and Nanomaterials with DNA (P,G,S) • Inhibition of DNA Repair Protein Activity by Gold [Au55] Nanoclusters (G,S) • The Inhibition of Free Radical Induced DNA Damage by Both Single and Multiwall Carbon Nanotubes (G,S)

Herman O. Sintim, Associate Professor, University of Maryland, College Park
Bacterial Conversation: What Is It and How Can It Be Stopped to Prevent Infections from Spreading? (P,G) • New Strategies to Curb Bacterial Infections via the Disruption of Quorum Sensing and/or Cyclic Dinucleotide Signaling (S) • Personalized Medicine and the Role of Simple Diagnostic Platforms (G,S) • The Bacterial Resistance Problem and Why We Should Care (P,G)

Karen K. Oates, Professor of Biochemistry and the Dean of Arts and Sciences at Worcester Polytechnic Institute
Bringing Civic Engagement into the Science Classroom (G,S) • How People Learn and Creativity of Science (P,G) • Boosting Innovation: An Ecological Approach for Scientists and Engineers (G,S) • The Obligation of Knowledge (P,G)

Todd Surovell, Associate Professor, University of Wyoming
Using the Present to Figure out the Past: An Ethno-Archaeological Study of Mongolia's Reindeer Herders (P,G,S) • What Caused the Extinction of North America's Mammoths? (P,G,S) • Ice Age Hunter-Gatherers of the Rocky Mountains (P,G,S)

P (Public), G (General), S (Specialized)
Details available at http://www.sigmaxi.org/programs/lectureships/index.shtml
January–February 2015, Volume 24, Number 1
Sigma Xi Today: A Newsletter of Sigma Xi, The Scientific Research Society
Superior Student Presenters
2014 International Research Conference

High School Division
Natural and Social Sciences: Karthik Raju, Mira Loma High School
Physical Sciences and Engineering: Thorsen Wehr, Odessa High School

Undergraduate Division
Behavioral and Social Sciences: Camden MacDowell, Emory University; Bryan Nelson, New York University
Cell Biology, Biochemistry, Physiology, and Immunology: Steven Romanelli, Fordham University; Erin Feeney, Oakland University; Michelle Oberoi, University of California, Irvine; Eden Barragan, University of California, Irvine; Nicholas Farrar, The Ohio State University
Chemistry: Christina Owens, University of California, Irvine
Ecology and Environmental Sciences: Charis Royal, Arizona State University; Claudia Mazur, Mount Holyoke College
Engineering: Taylor Gambon, Clemson University; Iwnetim Abate, Minnesota State University, Moorhead
Math and Computer Sciences: Katherine Marszalkowski, East Carolina University
Physics and Astronomy: Carl Fields, Arizona State University; Todd Hodges, Arizona State University

Graduate Division
Behavioral and Social Sciences: Avery Russell, University of Arizona
Cell Biology, Biochemistry, Physiology: Manindra Singh, Ohio University
Chemistry: Suntara Fueangfung, Michigan Technological University
Ecology, Environmental, and Geosciences: Adrienne Godschalx, Portland State University; Patricio Becerra, University of Arizona
Engineering, Math, and Computer Sciences: Bich Nguyen, Mercer University
Physics and Astronomy: Amanpreet Kaur, Clemson University
From the President
An Energizing Annual Meeting

Sigma Xi President George Atkinson, on left, presented a medal to Steven Romanelli of Fordham University November 8 for his Superior presentation in Sigma Xi's 2014 International Research Conference. Other top presenters are listed on the left side of this page. More details about the conference are on page 76. (Photo by Katie-Leigh Corder/Sigma Xi.)

As Sigma Xi president, I am pleased to report that the Annual Meeting and International Research Conference held recently in Arizona were remarkable successes on many different levels. Perhaps the most exciting events involved the young researchers who presented their posters and engaged extensively with delegates and attendees from throughout the Society. These collegiate and high school students (all remarkably accomplished!) were not only enthusiastic about their own scientific research projects, but were also eager to learn about the work of others. Renewing and expanding the Society's commitment to ensure that these young researchers become and remain active Sigma Xi members is vital to the future of the Society.

The Society also had the privilege of recognizing several individuals who have made significant contributions to scientific research and how science is effectively communicated to the public. These individuals have helped define the importance of scientific research over many decades and have pioneered new ways to convey the significance of science to the public using the still emerging technologies of social media. Detailed information on all these Sigma Xi awards is available at https://www.sigmaxi.org/programs/prizes-awards.

Finally, there was important progress made on formalizing a variety of new programs and procedures concerning how the mission of Sigma Xi can become more relevant to the interests of its members and the public in general. The Board of Directors reached decisions that are designed to strengthen the financial well-being of the Society. As we collectively move forward, I encourage all Sigma Xi members to be actively engaged with your respective chapters and with the national headquarters. In the meantime, I look forward to joining you at the next Annual Meeting in Kansas City in October 2015.
George H. Atkinson
ANNUAL MEETING 2014
Sigma Xi Meets in Arizona for Annual Meeting and International Research Conference

Sigma Xi members and students gathered in Glendale, Arizona, November 6–9 for the Society's Annual Meeting and International Research Conference. In addition to undertaking governance tasks for the 115th Assembly of Delegates, participants attended sessions on chapter best practices and effective science communication, and were treated to research presentations from this year's award winners. They also attended two debates about critical issues in water and food. These debates were demonstrations of events Sigma Xi will co-host in 2015 with the Institute on Science for Global Policy to promote the public's understanding of science.

More than 100 high school, undergraduate, and graduate students came from across the country, as well as from other countries, to present posters about their research in the International Research Conference. Sigma Xi members judged the students' posters and ranked 24 as Superior presenters. These students were awarded medals and received an offer of having their initiation fee and first year's dues paid if they accepted an invitation to join Sigma Xi. Nominations for Sigma Xi membership were extended to 93 students who qualified.

The 2015 meeting will be held October 22–25 in Kansas City, Missouri.
See more photos at http://bit.ly/1zWlr8K
Photos (clockwise from top): Grant-in-Aid of Research recipient Amy Katz, William Procter Prize for Scientific Achievement winner Jenny Glusker, and Young Investigator Award winner Thomas H. Epps III at the Sigma Xi Annual Meeting. Top student presenters in the International Research Conference received medals. Members of the Columbia-Willamette Chapter presented a chapter poster. Honorary Member Derek Muller, second from right, is host of the YouTube channel Veritasium. Sigma Xi President George Atkinson moderated debates on water and food. Rachel Merz, vice president of the Swarthmore College Chapter, talks with other chapter leaders during a workshop about keeping members engaged. (Photos by Cristina Gouin-Paul, Katie-Leigh Corder, and Heather Thorstensen.)
AWARDS AND RECOGNITION
Chapter Award Winners

Chapter of Excellence Awards have been bestowed on the following chapters for exceptional chapter activity, innovative programming, and true community leadership during 2013–2014. Nominees were chosen by the regional and constituency directors based on chapter annual reports, and winners were selected by the Committee on Qualifications and Membership.

Alfred University
Cornell University
Northwestern Pennsylvania
Swarthmore College
Woods Hole
Charleston (Honorable Mention)*
Pace University (Honorable Mention)*

*Chapters that received a Chapter of Excellence Award are not eligible to win an award again for three years. However, many of these chapters continue their meritorious work in subsequent years. The Committee on Qualifications and Membership provides such chapters with an honorable mention.

Below: The first students from Tougaloo College in Tougaloo, Mississippi, to join Sigma Xi were inducted in 2014. They join as members of Sigma Xi's Brown University Chapter. (Submitted photo.)
Chapter Program Awards have been bestowed on the following chapters for organizing and/or hosting a single outstanding program during 2013–2014. Nominees were chosen by the regional and constituency directors based on chapter annual reports, and winners were selected by the Committee on Qualifications and Membership.
Charleston Chapter for Piccolo Darwin Week.
Kansas State University for chapter-sponsored Grants-in-Aid of Research Awards.
Louisiana Tech University for a reactivation luncheon.
University of Colorado for chapter-sponsored Undergraduate Research Awards.
Top 15 Electing Chapters
The following chapters are recognized for initiating the most new members in 2013–2014 (chapter name, initiate count):

Brown University: 229
Swarthmore College: 139
Washington University: 110
Princeton: 102
Smith College: 94
Amherst College: 71
Claremont Colleges: 56
Oberlin College: 53
Mercer University: 47
Worcester Polytechnic Institute: 44
Denison University: 44
Fordham University: 42
Williams College: 42
Union College: 42
Georgetown University: 41
WALSTON CHUBB AWARD FOR INNOVATION
Adding Emotion to Computing

Rosalind W. Picard took a risk when she started telling colleagues that technology needed to be able to use and interpret emotions. "'Emotion' was not a word or topic anybody serious wanted to be associated with back when I first encountered its important role," she said. She was trying to develop computers that could see and hear like humans when she realized that emotions influence what people see and hear or how they choose language or action—and that was being left out of computing. For being a pioneer in affective computing, Professor Picard of the Massachusetts Institute of Technology Media Lab received Sigma Xi's 2014 Walston Chubb Award for Innovation. Heather Thorstensen, Sigma Xi's manager of communications, spoke with Picard about her work. Watch the full interview at https://www.sigmaxi.org/news/article/picard

Rosalind W. Picard, who joined Sigma Xi in 1986, spoke about her research on emotion-sensing technology at Sigma Xi's Annual Meeting November 7 in Glendale, Arizona. (Photo by Cristina Gouin-Paul.)

Affective computing sets out to improve interactions between people and technology. It sounds like it also applies to technology that can improve interactions between people.

Ultimately it's about technology that helps us better understand and communicate emotion, whether it's in a human–computer interaction or a human–human interaction or a human–human mediated by a computer. If we can use the technology to help us better understand that emotion and better communicate it and be better understood, then that's a great value.

Why do we want computers to have emotional intelligence?
If it's interacting with people face-to-face in any kind of way that involves language or interaction that could be perceived as social, then it's got to have emotional intelligence. People expect that the system will see things—like are things going well, are things not going well—and learn from that.

What do you think about the balance between improving technology and the consequence that we might interact more with technology than we do with other people?

We really need to think about what is human well-being and how do we build technology that serves that? … One of the things that we've worked on is technology that helps people better understand the emotions of themselves and of others so that they can learn how to better succeed in face-to-face human relationships.

You're the co-founder of Affectiva and that has a tool that marketers can use to measure facial expressions.

The product is called Affdex; people can go online with a webcam and try it … We decided we were going to build a better system that could work in face-to-face interaction for people on the autism spectrum. In order to get the technology to work really accurately, we needed more data … By going into marketing, we actually got customers to pay for people to go online and give us their data. If you go use this application online, you can choose whether you want to share your data. If you share it, you might show up in a computer vision or machine learning publication where we're actually using your data to build a more accurate machine learning system to help not just market researchers but a lot of people out there who have difficulty reading facial expressions.

Let's talk about Empatica, of which you are the co-founder and chief scientist. It sells wearable sensors that monitor people's bodies [the sensors use photoplethysmography to monitor heart rate and stress, measure electrodermal activity to track emotions such as excitement, record surrounding temperature to provide context of when a person moves from one area to another, and contain accelerometers to measure activity]. What are the key uses for this technology?

It's a product designed for researchers and for some sophisticated users to learn about themselves … We take all that data and we use machine learning pattern analysis to help interpret it in different situations with a focus today on several medical situations, especially epilepsy, depression, PTSD, and sleep … Empatica is planning to come out with a device that will allow people to get their seizures detected.
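The answer above describes a common pattern in wearable sensing: sample several signals continuously, summarize them over short windows, and let a pattern-recognition step flag windows of interest, such as possible seizures. The sketch below illustrates only that general pattern; the sampling rate, window length, features, and thresholds are invented for the example and are not Empatica's actual detection algorithm.

```python
# Illustrative sketch (not Empatica's algorithm): windowed features from
# wrist-worn accelerometer and electrodermal activity (EDA) signals,
# followed by a simple rule-based flag for "possible event" windows.
# Sampling rate, window size, and thresholds are assumptions.

import numpy as np

FS = 32        # assumed sampling rate in Hz
WINDOW_S = 10  # assumed window length in seconds

def window_features(acc_xyz, eda):
    """Split synchronized signals into windows and compute simple features.

    acc_xyz: (n_samples, 3) accelerometer array in g
    eda:     (n_samples,) electrodermal activity in microsiemens
    """
    win = FS * WINDOW_S
    n_windows = len(eda) // win
    features = []
    for i in range(n_windows):
        sl = slice(i * win, (i + 1) * win)
        acc_mag = np.sqrt((acc_xyz[sl] ** 2).sum(axis=1))  # movement intensity
        features.append({
            "acc_std": float(acc_mag.std()),              # rhythmic shaking raises variance
            "eda_rise": float(eda[sl][-1] - eda[sl][0]),  # sympathetic surge raises EDA
        })
    return features

def flag_windows(features, acc_thresh=0.5, eda_thresh=0.3):
    """Flag windows where both movement variance and EDA rise are high.
    Thresholds are placeholders; a real system would use a trained classifier."""
    return [f["acc_std"] > acc_thresh and f["eda_rise"] > eda_thresh for f in features]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = FS * 60  # one minute of synthetic data
    acc = rng.normal(0, 0.1, size=(n, 3))
    eda = np.cumsum(rng.normal(0, 0.001, size=n)) + 0.5
    print(flag_windows(window_features(acc, eda)))
```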
What is your favorite part of affective computing?

One of my absolute favorites has been working with people on the autism spectrum … they're just so honest and forthright. [One said] "Roz, I think affective computing is a bad idea." And I said, "Really? Tell me more. Why do you think it's a bad idea?" And she said, "Well, you know, I don't like looking at faces. Faces and eyeballs freak me out. And if you're going to make computers more emotional, more like people, then I'm not going to like them as much…" And I'm like, "Well, that's really important feedback." … We could have a face-to-face interaction, mediated by computer, where you see the parts you want to see and I see the parts I want to see … so the computer could start to be smart about making that interaction easier.
FELLOW COMPANIONS
Meet Your Fellow Companion: Niraj Lodhi

Sigma Xi's motto is the Greek "Spoudon Xynones," or "Companions in Zealous Research." With that thought in mind, we like to highlight Sigma Xi members to learn more about their work. Niraj Lodhi is a postdoctoral associate at Weill Cornell Medical College in New York. He has studied a protein to determine its function and role in controlling cancer cell growth and hopes to develop an inhibitor to eliminate cancer cells.

Please explain your research.
After completing my PhD in plant molecular biology from the National Botanical Research Institute in Lucknow, India, I started my research work on cancer. At the Fox Chase Cancer Center in Philadelphia, I studied the role and function of poly(ADP-ribose) polymerase-1 (PARP-1) in mitosis. PARP-1 is a protein that adds poly(ADP-ribose) residues to nuclear proteins by utilizing the coenzyme NAD+ and regulates DNA damage repair, chromatin remodeling, and transcription. This function of PARP-1 is called poly(ADP-ribosyl)ation. In an earlier study, we found that during mitosis PARP-1 remains associated with chromatin, a highly condensed complex that allows DNA to fit inside a nucleus. My research work reveals PARP-1 acts as an epigenetic factor that binds to chromatin during mitosis and memorizes genes for transcription activation after mitosis. [Transcription is the first step of gene expression.] PARP-1 bookmarks the specific genes in chromatin during mitosis, and maintains the precise inheritance of gene expression in daughter cells. Upon completion of mitosis, PARP-1 and other proteins loosen the chromatin and reinitiate the transcription of PARP-1-dependent genes responsible for cell–cell communication, cell adhesion, and cell maintenance. We called them cell identity genes.

PARP-1 also bookmarks proto-oncogenes and tumor suppressor genes throughout the genome during mitosis. A proto-oncogene is a normal gene that becomes an oncogene due to mutations or abnormally increased expression. Oncogenes have the potential to cause cancer by transforming a normal cell into a cancer cell. Tumor suppressor genes act opposite of oncogenes. Mis-regulated expression of proto-oncogenes and tumor suppressor genes in different cancer cells can be associated with PARP-1 bookmarking. Therefore, this study can be used to control over-expression of proto-oncogenes to prevent these genes from becoming oncogenes or to design specific PARP-1 inhibitors. This work was published in 2014 in Nucleic Acids Research, Volume 42, Number 11.

All presently available PARP-1 inhibitors function as competitive inhibitors to NAD+. Because NAD+ is involved in most metabolic pathways, use of these inhibitors affects physiological activities of cells. Based on our research, I am working to develop a specific inhibitor of PARP-1, independent of NAD+. We tested this inhibitor on a human prostate cancer cell line and mouse model and the results are very exciting and promising. For further study, we collaborated with another group to conduct clinical trials.

Recently, I moved to the Weill Cornell Medical College in New York and started work on the role of DNA methylation on non-genetic inheritance of behavior, specifically anxiety, from one generation to the next. Our lab identified the differentially methylated regions (DMRs) in the brain, especially the hippocampus, and I am focusing on three-dimensional interactions of DMRs with genomes and how they affect gene expression from generation to generation.
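For readers unfamiliar with the term, a differentially methylated region is a stretch of the genome whose average CpG methylation differs consistently between two groups of samples. The sketch below is a minimal, illustrative way to flag such regions from per-CpG methylation fractions; the window size, CpG-count filter, and effect-size cutoff are assumptions for illustration, not the lab's published pipeline.

```python
# Illustrative sketch (not the published pipeline): call candidate
# differentially methylated regions (DMRs) by comparing mean CpG
# methylation fractions between two groups over fixed genomic windows.
# Window size and cutoffs are assumptions for illustration only.

import numpy as np

def candidate_dmrs(positions, meth_group_a, meth_group_b,
                   window=1000, min_cpgs=5, min_diff=0.2):
    """positions: sorted 1D array of CpG coordinates on one chromosome
    meth_group_a/b: per-CpG methylation fractions (0..1), averaged per group
    Returns a list of (window_start, window_end, mean_difference) tuples."""
    dmrs = []
    start, stop = positions.min(), positions.max()
    for w0 in range(int(start), int(stop), window):
        mask = (positions >= w0) & (positions < w0 + window)
        if mask.sum() < min_cpgs:
            continue  # require enough CpGs for a stable estimate
        diff = meth_group_a[mask].mean() - meth_group_b[mask].mean()
        if abs(diff) >= min_diff:
            dmrs.append((w0, w0 + window, float(diff)))
    return dmrs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = np.sort(rng.integers(0, 50_000, size=400))
    a = rng.uniform(0.4, 0.6, size=400)
    b = a.copy()
    b[(pos > 20_000) & (pos < 25_000)] += 0.3  # simulate hypermethylation in group B
    print(candidate_dmrs(pos, a, np.clip(b, 0, 1)))
```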
What do you hope to accomplish with the PARP-1 inhibitor?

Recent research shows PARP-1 inhibitors control the growth of breast cancer cells; however, after some generations the cells become resistant to inhibitors. The exact mechanism is not known other than PARP-1's role in DNA damage repair. My plan is to continue to explore PARP-1's epigenetic functions and utilize all recent research information to develop a specific inhibitor that can effectively overcome resistance from cancer cells and eliminate them.

What do you enjoy about being a Sigma Xi member?
Since I joined Sigma Xi in 2012, I have taken advantage of all the benefits of membership. The best benefit for me is getting scientific news from all possible sources, including email, social media, and the Sigma Xi online community. The most important thing Sigma Xi provides is the views of fellow scientists for career development. I strongly support Sigma Xi membership to develop my career, and I nominated one of my colleagues, who is benefiting from membership. Other members may contact me at [email protected] or on Twitter at @nirajlodhi.
FELLOW COMPANIONS
Meet Your Fellow Companion: Prasad Bichu

Prasad Bichu, a Sigma Xi member from the University of Missouri Chapter, researches aspects of kidney dysfunction and disease. He is also a medical doctor, treating patients with kidney issues, sometimes printing research papers to educate patients. As director of the Pediatric Dialysis Division of Nephrology at the University of Missouri's Children's Hospital, he is looking for research collaborators to move the science forward.

What are the research projects that you're doing?
My research projects are related with some of the most common risk factors that cause kidney dysfunction. And the most important one that we are seeing nowadays is obesity. Obesity seems to cause a lot of things including high blood pressure, insulin resistance, as well as progressive kidney dysfunction. Conventionally, we have been looking for protein in the urine and worsening kidney function in the form of a rise in creatinine, a waste product that goes up whenever kidneys fail. But we are trying to find new modalities in our test to early detect this dysfunction due to obesity. We are hoping that with this early detection we will be able to intervene at an earlier stage so that we can
take care of the problem better. One of the other research projects that I’m doing is related with one of the most dreadful diseases in childhood of less than five years of age, which is hemolytic uremic syndrome. So far we haven’t had any luck in terms of a medication which will cure this disease but recently we’ve got a drug which may be used in certain situations [to reverse the disease process]. And we’re trying to figure out which kids can use this drug with the least complications. How does hemolytic uremic syndrome affect kids?
Hemolytic uremic syndrome is a disease which is [the most common cause of kidney failure] in the age group of zero to five years of age. It is usually caused by a bug called E. coli O157:H7, which is usually spread through contaminated food or raw food—unpasteurized milk … This disease can affect the patient's brain, the heart, liver, pancreas, and lastly the kidneys. It is a very serious disorder. What it really means is that it causes a lot of small clots in the body which can obstruct the blood flow to various parts of our organs.
What are some of the challenges that you’re having with this research project?
Some of the challenges that we have with this research project is, number one: the actual cost of this medicine to begin with because this medicine is extremely expensive, it is very difficult to obtain, and not covered by insurance companies. Number two: We are fighting against the unknowns. We are not aware of the long term complications of this medicine. This medicine is actually called Eculizumab, it is a complement C5b inhibitor … And number three: … Inhibiting the complement [a part of the immune system] seems to help to decrease the clot formation in this situation. The hurdle is trying to connect the two sites and figuring out why exactly this happens. Is this something that would have a lot of potential for collaboration with other researchers?
Yes, absolutely. Hemolytic uremic syndrome is more kind of a sporadic disease where you tend to get about 15 to 20 cases in a year in a center so it's really difficult to get the numbers needed to do a large study. It would definitely help to have different centers collaborate. Number two: It would definitely help to collaborate with somebody who does research in complements and complement inhibition. We are also looking at novel ideas for research in terms of obesity associated with renal dysfunction. So just like cystatin C, we are also looking at other proteins which we could test in these kids to early detect the dysfunction. So we would like to collaborate with anybody who is doing a similar project.

Listen to the full interview with Prasad Bichu at www.sigmaxi.org. Go to "News" then "Meet Your Fellow Companions." He can be reached by potential collaborators at [email protected].
Sigma Xi Today is edited by Heather Thorstensen and designed by Spring Davis.
Birds of a feather save together. As a member of Sigma Xi you could save even more on your car insurance with a special discount. Join your fellow members who are already saving with GEICO.
geico.com/sigmaxi | 1-800-368-2734
Some discounts, coverages, payment plans and features are not available in all states or all GEICO companies. Discount amount varies in some states. Discount is not available in all states or in all GEICO companies. One group discount applicable per policy. Coverage is individual. In New York a premium reduction may be available. GEICO is a registered service mark of Government Employees Insurance Company, Washington, D.C. 20076; a Berkshire Hathaway Inc. subsidiary. © 2014 GEICO
Big landscapes Inspire big thinking
VISIT AUSTRALIA.COM/BUSINESSEVENTS/ASSOCIATIONS FOR EVERYTHING YOU NEED TO PLAN YOUR AUSTRALIAN EVENT.