American Scientist
September–October 2012
www.americanscientist.org

Speedy Carbon Circuits • Chickadees Calling • The Artful Slicer • Calder's Education
American Scientist
Volume 100 • Number 5 • September–October 2012

Departments
354 From the Editor
355 Letters to the Editors
358 Macroscope: The survival of the fittists (Howard Wainer)
362 Computing Science: Alice and Bob in cipherspace (Brian Hayes)
368 Engineering: A portrait of the artist as a young engineer (Henry Petroski)
374 Marginalia: Bonding to Hydrogen (Roald Hoffmann)
379 Science Observer: Cracking with electricity • In the news
382 The Big Picture: More examples of the magazine's finest artwork
416 Sightings: Forest elephant chronicles
418 Scientists' Bookshelf: The nature of computation • Plant senses • Economic genius • A nation of agrarians
429 Sigma Xi Today: Procter Prize winner • Young Investigator Award • Meet your fellow companion

Feature Articles
388 Graphene in High-Frequency Electronics: This two-dimensional form of carbon has unique properties (Keith A. Jenkins)
398 The Complex Call of the Carolina Chickadee: Can the chick-a-dee call provide lessons about language? (Todd M. Freeberg, Jeffrey R. Lucas, Indriķis Krams)
408 Slicing a Cone for Art and Science: Albrecht Dürer searched for beauty with mathematics (Daniel S. Silver)
The Cover
Transmission electron microscopy (TEM) images are taken by transmitting a beam of electrons through an ultra-thin sample. Interactions between the sample and the electrons, such as absorption or complex wave interference, create the contrast in the image. Electrons can provide much higher resolution images than light, allowing atomic-level detail. On the cover, a grid used to support the thin samples is shown in red with its one-micrometer-diameter holes in blue, and a small flake of graphene is imaged in green. As Keith A. Jenkins explains in "Graphene in High-Frequency Electronics" (pages 388–397), this single-atom-thick form of carbon has great potential for use in circuits, but scaling up the pieces to usable size has taken some work. Jenkins and his colleagues have created the first electronic device made with graphene, a type of component essential to wireless communication networks. (Image courtesy of Zettl Research Group, Lawrence Berkeley National Laboratory and University of California at Berkeley.)
From the Editor
Hooked on Conics
About seven weeks before each issue's deadline, the American Scientist staff gathers for what we call an issue-planning meeting. In truth, by that point the editors have decided the contents of the upcoming issue and who will be responsible for what. So it's really a meeting to describe each column and article to the assembly—to murmurs (we hope) of approval. Often the theme of this essay becomes apparent at that meeting. The June 19 rendition of this six-times-a-year ritual turned up cones.

First off was the description of Henry Petroski's Engineering column, "A Portrait of the Artist as a Young Engineer" (pages 368–373). Actually, it's not primarily about cones—it looks deeply into the engineering education of sculptor Alexander Calder—but they do figure in a section of the piece. Henry describes the difficulties budding engineers face in visualizing and drawing how planes intersect cones. In particular, he describes the travails of imagining what will happen when a hexagonal pencil is worked in a conical pencil sharpener, noting that this is frequently drawn incorrectly. This mention got the full attention of the associate art director, a Pratt Institute graduate with extensive background in mechanical drawing. In preparing the figure you see on page 371, Tom confirmed that "about half the time" illustrators get it wrong.

About then, our managing editor piped up with a summary of the feature article by Daniel S. Silver, "Slicing a Cone for Art and Science" (pages 408–415). Albrecht Dürer didn't think of himself as a mathematician. Instead, he viewed mathematics—and geometry in particular—as a tool to assist in accurately depicting the beauty of the world. In the Painter's Manual he described, among other things, the many shapes formed when planes intersect cones—including parabolas, hyperbolas and, significantly, ellipses. At risk of spoiling the story, I'll just say that the correspondence of astronomer Johannes Kepler specifically mentions Dürer.
A close runner-up for the issue's theme goes to comparatively simple elements turning out to be a bit more complex than anticipated. What could be simpler than hydrogen? Its molecular form, H2, was an early candidate for study by this issue's Marginalist, Roald Hoffmann. He made some with his hobby chemistry set as a youth, revisited it in high school with similar incendiary results and, to his great surprise, returned to dihydrogen much later in his career as a theoretical chemist. In "Bonding to H2" (pages 374–378), he describes some mighty peculiar behavior attributable to this fundamental molecule.

Later on Keith A. Jenkins reveals some surprising traits exhibited by a one-atom-thick layer of carbon—graphene. In "Graphene in High-Frequency Electronics" (pages 388–397), he tells us why this material, first extracted only eight years ago, could be the basis for some higher-speed circuitry, as semiconductors near their limits of scalability.

Finally, be sure to review this issue's Classic, "The Big Picture" (pages 382–387), wherein we review some of the magazine's finest illustrations over the years. I may be stretching it a bit to liken this to entries on the periodic table, but it certainly contains the visual rendition of Strunk and White's elements of style.—David Schoonmaker
www.americanscientist.org
VOLUME 100, NUMBER 5

Editor: David Schoonmaker
Managing Editor: Fenella Saunders
Senior Editor: Anna Lena Phillips
Contributing Editors: Catherine Clabby, Laura Poole
Editorial Associate: Mia Evans
Art Director: Barbara J. Aulicino
Contributing Art Director: Tom Dunne
Senior Writer: Brian Hayes

Scientists' Bookshelf Editor: Anna Lena Phillips
American Scientist Online Managing Editor: Greg Ross

Publisher: Jerome F. Baker
Associate Publisher: Katie Lord
Advertising Manager: Kate Miller ([email protected])

Editorial and subscription correspondence: American Scientist, P.O. Box 13975, Research Triangle Park, NC 27709; [email protected]; [email protected]

Published by Sigma Xi, The Scientific Research Society
President: Kelly O. Sullivan
Treasurer: Ronald Millard
President-Elect: Joseph A. Whittaker
Immediate Past President: Michael Crosby
Executive Director: Jerome F. Baker
Publications Committee Chair: Paul C. Kettler. Members: Jerome F. Baker, Marc Brodsky, Beronda Montgomery, David Schoonmaker

American Scientist gratefully acknowledges support for "Engineering" through the Leroy Record Fund.

Sigma Xi, The Scientific Research Society was founded in 1886 as an honor society for scientists and engineers. The goals of the Society are to foster interaction among science, technology and society; to encourage appreciation and support of original work in science and technology; and to honor scientific research accomplishments.

Printed in USA
Letters to the Editors

Infrared Dating
To the Editors: In "Herschel and the Puzzle of Infrared" (May–June 2012), Jack White mentions that it is not known who coined the term "infrared." This mystery caught my attention. A Google Books search for "infra-red" finds two articles published in April 1874, both of which use the term in the context of Edmond Becquerel's treatise on light. In that work, La Lumière (1867, vol. 1, p. 141), the French infra-rouge is used. One of the articles appeared in The Photographic News for Amateur Photographers (18:176), and is by M. de St. Florent; the other is uncredited but appeared in The British Journal of Photography (21:160) and is attributed to de St. Florent elsewhere in the volume. I have not been able to trace de St. Florent's full name, but he published contemporaneously in Bulletin de la Société française de photographie. This author appears to be the coiner of "infra-red," having translated it from French.

There are two curious sidelights to this story: Becquerel was the father of Henri Becquerel, for whom the unit of radioactivity was named; and the term "ultraviolet" was coined by William Herschel's son John Herschel in 1840.

Gary Rosenberg
Academy of Natural Sciences
Drexel University
Philadelphia, PA

Mr. White responds: Gary Rosenberg did some very nice Internet detective work pushing the earliest known usage of the word "infrared"—or the French "infra-rouge"—back at least into the 1860s. Becquerel uses "infra-rouge" casually and frequently, which likely indicates the term was in common usage at that time. I would consider "infrared," "infra-rouge," and the German "infra-rot" as equivalent and give credit to the first usage in any of those forms.
My mention of the 1880s in the article came from E. S. Barr's 1960 article in the American Journal of Physics, in which he wrote:

    In an 1873 paper, Abney refers to "ultra red," but in another in 1881 he used the term "infra-red." It is a matter of personal vexation that I have not been able to determine the exact origin of the modern term!

This article is well researched and I recommend it highly, but it was written in the dark ages before the Internet. Rosenberg's find is a reminder of the Internet's amazing, growing power to search original works in different languages.

Lowering Limits
To the Editors: Brian Hayes's May–June 2012 Computing Science column is excellent as always. "Computation and the Human Predicament" is a very good assessment of the controversial but timely issue of limits to growth. In the comparison between the standard run and the doubled-resources run, however, I would like to see a third run. If the uncertainty in the parameter resources allows so big a leap, why not halve it and test how the model behaves? If this run gave more or less the same result as the doubled-resources run, I would doubt the model's efficacy a little more.

Héctor Osvaldo Mato
Boulogne, Argentina

Mr. Hayes responds: The comparison that Mr. Osvaldo Mato asks about can be made with the Web version of the World3 model available at http://bit-player.org/limits/. One element of the model's control panel allows the initial-resources multiplier to be set to various values between 1/8 and 32. Readers are invited to try the experiment for themselves. In general, reducing the initial stock of resources hastens the collapse of the modeled society; increasing the initial stock delays the collapse but makes it more severe when it comes.

Science's 99 Percent
To the Editors: I agree completely with Roald Hoffmann's observation in Marginalia (March–April 2012) that change in chemistry and in human affairs "will come about through many small actions by individuals." Textbooks attribute progress to single personalities (perhaps this tendency is mainly due to space limitations). But the real contribution is the accumulation of those many small actions by individuals—for instance, the 10ⁿ lab techs working feverishly in labs around the world. Despite the fact that they are not often hailed in publications, they can be called the "99 percent" of scientific progress.

Ronald F. Smith
St. Paul's College
University of Manitoba

Those Puppy-dog Eyes
To the Editors: Pat Shipman, in her Marginalia "Do the Eyes Have It?" (May–June 2012), hypothesizes that domesticated dogs would have proffered significant advantages in population growth to modern humans competing with Neandertals. I have worked with coastal Australian Aboriginal communities for a number of years and adopted a dog from one of their camps [see photo on page 356]. I feel strongly that Shipman's hypothesis will prove to be accurate. Dingos (Australian wild dogs) are an indispensable part of Aboriginal community life. I have seen camp dogs used in the hunting of monitor lizards and even in rounding up schools of fish in a retreating tide. Perhaps most important, they functioned as a community warning and protection system, specifically around women's camps.

In competition between Neanderthals and modern humans, the defense of women and, by association, small children could have been a game changer. Improved hunting success may have been a factor in modern humans' success, but to my mind, the community-protection services provided by dogs was likely far more valuable. Shipman's article brings to mind the title anthropologist Deborah Bird Rose used for her book on Yarralin community life: Dingo Makes Us Human. Dogs have certainly made me more human.

Philippe Max Rouja
Principal Scientist, Marine Heritage and Ocean Human Health
Department of Conservation Services
Government of Bermuda

American Scientist (ISSN 0003-0996) is published bimonthly by Sigma Xi, The Scientific Research Society, P.O. Box 13975, Research Triangle Park, NC 27709 (919-549-0097). Newsstand single copy $4.95. Back issues $6.95 per copy for 1st class mailing. U.S. subscriptions: one year $28, two years $50, three years $70. Canadian subscriptions: one year $36; other foreign subscriptions: one year $43. U.S. institutional rate: $70; Canadian $78; other foreign $85. Copyright © 2012 by Sigma Xi, The Scientific Research Society, Inc. All rights reserved. No part of this publication may be reproduced by any mechanical, photographic or electronic process, nor may it be stored in a retrieval system, transmitted or otherwise copied, with the exception of one-time noncommercial, personal use, without written permission of the publisher. Second-class postage paid at Durham, NC, and additional mailing office. Postmaster: Send change of address form 3579 to Sigma Xi, P.O. Box 13975, Research Triangle Park, NC 27709. Canadian publications mail agreement no. 40040263. Return undeliverable Canadian addresses to P. O. Box 503, RPO West Beaver Creek, Richmond Hill, Ontario L4B 4R6.

online @ AmericanScientist.org

American Scientist at 100: The year 2012 marks 100 years of the publication of this magazine. Please join us as we look back at the past century of American Scientist: http://amsci.org/100thanniversary

The First 75 Reasons: The 1986 article "75 Reasons to Become a Scientist," mentioned in our July–August issue, is now available online: http://amsci.org/75-reasons

Find American Scientist on Facebook: facebook.com/AmericanScientist
And follow us on Twitter: twitter.com/AmSciMag

To the Editors: I enjoyed Pat Shipman's May–June Marginalia on the coevolution of humans and dogs. Shipman reviews nicely several of the benefits that have led to selection for that coevolution. I especially like the hypothesis she discusses about the evolution of white sclerae and nonconcealing eyelids in
humans for communication with dogs. Dogs also have white sclerae and eyes large enough for their sclerae to show, and my own observations suggest that they use their eyes to communicate with humans as well as with other dogs. Although my dogs often move their heads to direct my attention toward a distant object, they frequently move their eyes only, thereby using their white sclerae to show the direction of gaze. Given the extensive coevolution between humans and dogs, I am amazed that some people are uncomfortable around dogs. Could avoiding dogs have a selective advantage as well?

Roger A. Powell
Department of Biology
North Carolina State University

Dr. Shipman responds: Regarding Dr. Powell's question, it seems likely to me that the diversity of human responses to animals (not only dogs) is partly based on learned experience and probably partly genetic. That is, a child who has never been encouraged to observe or interact with an animal probably won't develop much sensitivity to the "language" of that animal. During the process of socialization, children develop a theory of mind—they learn that other people have feelings and ideas that might be different from theirs, and they learn how to deal with those differences. Such learning can occur with animals as well.

A genetic component likely exists as well. Just as some people are very
Illustration Credits
Macroscope: Page 361, Tom Dunne
Computing Science: Pages 362–366, Brian Hayes
Engineering: Page 371, Tom Dunne
Graphene in High-Frequency Electronics: Figures 2, 4–9, Tom Dunne
The Complex Call of the Carolina Chickadee: Figures 1, 8 and page 403, Emma Skurnick; Figures 3, 4, Barbara Aulicino
Slicing a Cone for Art and Science: Figure 3, Barbara Aulicino
good at communicating with others and understanding or intuiting their feelings, some are very poor. This ability is a particular challenge for people with autism. Despite that, Temple Grandin, for instance, who has autism, is brilliant at understanding animals and is also high-functioning. Whatever genetic basis there might be for understanding animals is likely to involve multiple genes and to require training. Genetic variability could involve some other useful trait that research hasn't uncovered yet, or it could result from simple variability.

Questions and Answers
To the Editors: I found it interesting that in one issue of American Scientist (March–April 2012), Colin Allen's review of the book Mindreading Animals explored the question, "Can a chimpanzee understand what another sees?" and an In the News item helped answer it. The latter summarized an article from Current Biology in which observational evidence from Uganda suggested that chimpanzees recognize and try to combat ignorance among their companions. I was glad to see a question about theory of mind answered by field biology. Thanks for editing a great magazine!

Andrew Durso
Ph.D. Student, Department of Biology
Utah State University

How to Write to American Scientist
Brief letters commenting on articles appearing in the magazine are welcomed. The editors reserve the right to edit submissions. Please include an e-mail address if possible. Address: Letters to the Editors, P.O. Box 13975, Research Triangle Park, NC 27709 or [email protected].
Erratum The cover of the July–August 2012 issue mistakenly featured an image that was an example of electrostatic induction discharge (where charge is created by an electrically charged object placed near a conductive object) rather than a triboelectric discharge (where electricity is created when materials are brought into contact and then separated).
Macroscope
The Survival of the Fittists
Howard Wainer
Understanding the role of replication in research is crucial for the interpretation of scientific advances.

The concept of replicability in scientific research was laid out by Francis Bacon in the early 17th century, and it remains the principal epistemological tenet of modern science. Replicability begins with the idea that science is not private; researchers who make claims must allow others to test those claims. Over time, the scientific community has recognized that, because initial investigations are almost always done on a small scale, they exhibit the variability inherent in small studies. Inevitably, as a consequence, some results will be reported that are epiphenomenal—false positives, for example. When novel findings appear in the scientific literature, other investigators rush to replicate. If attempts to reproduce them don't pan out, the initial results are brushed aside as the statistical anomalies they were, and science moves on.

Scientific tradition sets an initial acceptance criterion for much research that tolerates a fair number of false positives (typically 1 out of 20). There are two reasons for this initial leniency: First, it is not practical to do preliminary research on any topic on a large enough scale to diminish the likelihood of statistical artifacts to truly tiny levels. And second, it is more difficult to rediscover a true result that was previously dismissed because it failed to reach some stringent level of acceptability than it is to reject a false positive after subsequent work fails to replicate it. This approach has meant that the scientific literature is littered with an embarrassing number of remarkable results that were later shown to be anomalous.

Howard Wainer is Distinguished Research Scientist at the National Board of Medical Examiners and an adjunct professor of statistics at the Wharton School of the University of Pennsylvania. He has published 20 books, including Uneducated Guesses (Princeton University Press, 2011). Address: National Board of Medical Examiners, 3750 Market St., Philadelphia, PA 19105. E-mail: [email protected]

ESP in North Carolina
A wonderful example of this effect originated in the 1930s at Duke University. J. B. Rhine, a botanist turned parapsychologist, designed studies that he hoped would discover people with extrasensory perception (ESP). He thought he had found one in Adam Linzmayer, an economics undergraduate at Duke. In spring 1931, as a volunteer in one of Rhine's experiments, Linzmayer performed far better than chance suggested he should. In subsequent experiments his performance retreated back to chance. Rather than dismiss the initial finding, Rhine concluded that Linzmayer's "extra sensory perception has gone through a marked decline." But Rhine kept searching for people with ESP talent until he encountered another experimental subject, Hubert Pearce, who had a remarkable run of successes before he too suffered the loss of his psychic gift.

This spotty record did not deter the energetic Rhine. The University of Chicago researcher Harold Gulliksen wrote a scathing review of Rhine's 1934 opus Extra-Sensory Perception, suggesting that although the statistical methods Rhine used were seriously flawed, he would not discuss them for fear that he would distract attention from the monumental errors in Rhine's experimental design. (For example, if you looked carefully at the cards he used to test subjects, you could see an outline of their patterns from the reverse side. Such flaws are often overlooked by scientists inexperienced in magic. Stanford statistician and magician Persi Diaconis spent a fair amount of time debunking claims of ESP made by Uri Geller and others. Diaconis proposed that he was uniquely qualified for such a task; magicians couldn't do it because they didn't understand experimental design, and psychologists couldn't do it because they didn't know magic. His claim has subsequently been borne out by evidence.)

Rhine's reaction to and interpretation of normal stochastic variation provides an object lesson in how humans, even scientists, allow what they want to be true to overwhelm objective good sense. Nobel Prize Laureate Daniel Kahneman spends the 500 pages of his recent book, Thinking, Fast and Slow, laying out how and why humans behave this way. It is left to scientists to remember this tendency as we do our work.

Shrinking Effects in Science
Alas, the shrinking size of scientific results is not a phenomenon confined to scientific exotica like ESP. It manifests itself everywhere and often leads the general public to wonder how much of the scientific literature can be believed. In 2005 John Ioannidis, a prominent epidemiologist, published a paper provocatively titled “Why Most Published Research Findings Are False.” Ioannidis provides a thoughtful explanation of why research results are often not as dramatic as they were first thought to be. He then elaborates the characteristics of studies that control the extent to which their results shrink upon replication.
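The shrinkage Ioannidis describes can be made concrete with a small simulation. The sketch below is my own illustration, not from his paper, and its parameters (a modest true effect of 0.2, 25 observations per study, a one-sided significance cutoff) are invented for the demonstration: when only studies that clear a significance filter are kept, the surviving estimates systematically overstate the true effect, so faithful replications will appear to "shrink."

```python
import random
import statistics

def simulate_shrinkage(true_effect=0.2, n=25, studies=20000,
                       z_crit=1.96, seed=42):
    """Simulate many small studies of a modest true effect.

    Each study estimates the effect as the mean of n unit-variance
    observations; it is "published" only if its z statistic exceeds
    z_crit.  Returns (mean estimate among published studies, true effect).
    """
    rng = random.Random(seed)
    se = 1 / n ** 0.5                  # standard error of the mean
    published = []
    for _ in range(studies):
        estimate = rng.gauss(true_effect, se)
        if estimate / se > z_crit:     # significance filter for publication
            published.append(estimate)
    return statistics.mean(published), true_effect

mean_published, truth = simulate_shrinkage()
print(f"true effect: {truth}, mean published estimate: {mean_published:.2f}")
```

With these made-up numbers the published studies report an average effect more than twice the true one, purely because the filter discards the unlucky draws; a replication of any published study then regresses toward the truth.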
Repeated testing of any phenomenon will result in a range of results, some negative and some positive. As a result, initial studies sometimes suggest positive results, as was the case in parapsychologist J. B. Rhine’s experiments on extrasensory perception (ESP) in the 1930s. Perhaps not surprisingly, Rhine’s findings were not supported by further research. Above he is shown testing subjects using Zener cards, which were designed for that purpose by Rhine’s colleague, psychologist Karl Zener. (Photo courtesy of Duke University Archives.)
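What "far better than chance" means for Zener-card guessing can be computed directly. A standard Zener deck has 25 cards bearing 5 symbols, so random guessing averages 5 hits; the chance of beating that follows a binomial distribution. The short calculation below is my own illustration (the deck composition is standard, but the example score of 10 is an assumption, not a figure from the column):

```python
from math import comb

def p_at_least(k, n=25, p=0.2):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct
    guesses in n Zener-card trials when guessing at random among 5 symbols."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance alone averages 5 hits in 25; scoring 10 or more is already rare.
print(f"P(>= 10 correct by chance): {p_at_least(10):.4f}")
```

A score of 10 out of 25 happens by luck alone in only a couple of runs per hundred, which is exactly why a few lucky subjects, among many tested, will look psychic until they are retested.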
None of Ioannidis' explanations came as a surprise to those familiar with statistics, which is, after all, the science of uncertainty. Larger studies with bigger sample sizes have more stable results; studies in which there are great financial consequences may more often yield biases; when study designs are flexible, results vary more. The publication policies of scientific journals can also be a prominent source of bias.

Let me illustrate with a hypothetical example. Assume that we are doing a trial for some sort of medical treatment. Furthermore, suppose that although the treatment has no effect (perhaps it is the medical equivalent of an ESP study) it seems on its face to be a really good idea. To make this more concrete, imagine that modern scientific methods were available and trusted in the 19th century, and someone decided to use them to test the efficacy of using leeches to draw blood (which was once believed to balance the bodily humors and thence cure fevers).

If a single study was done, the odds are it would find no effect. If, over a long period of time, many such studies were done, we might find that most would find no effect, a fair number would show a small negative effect and an equal number a small positive effect—all quite by chance. But chance being what it is, if enough studies were done, a few would show a substantial positive effect—and be balanced by a similar number that showed a complementary negative effect (see the figure on page 360). Of course, if we were privy to such a big-picture summary, we could see immediately that the treatment has no efficacy and is merely showing random variation. But such a comprehensive view has not been possible in the past (although there is currently a push to build a database that would produce such plots for all treatments being studied—the Cochrane Collaboration). Instead what happens is that researchers who do a study and find no significant effect cannot publish it; editors want to save the scarce room in their journals for research that finds
something. Thus studies with null, or small, estimates of treatment effects are either thrown away or placed in a metaphorical file drawer. But if someone gets lucky and does a study whose results, quite by chance, fall further out in the tail of the normal curve, they let out a whoop of success, write it up and get it published in some A-list journal—perhaps the Journal of the American Medical Association, perhaps the New England Journal of Medicine. We'll call this the alpha study. A publication in such a prestigious journal garners an increase in the visibility of both the research and the researcher—a win-win. The attention generated by such a publication naturally leads to attempts to replicate; sometimes these replication studies turn out to have been done before the alpha study, lending support to the hypothesis that the alpha study might be anomalous. Typically these studies do not show an effect as large as that seen in the alpha study. Moreover, the replication studies are not published in the A-list journals, for they are not pathbreaking. They appear in more minor outlets—if they are accepted for publication at all.

So a pattern emerges. A startling and wonderful result appears in the most prestigious of journals, and news of the finding is trumpeted in the media. Subsequently, independent studies also appear, but few are seen by a significant number of readers, and fewer still are picked up by the media to diminish the impression of a breakthrough generated by the alpha study. Sometimes, though, news of diminished efficacy percolates out to the field and perhaps even the public at large. Then we start to worry, "Does any treatment really work?"

One version of this effect, delineated in a 1995 paper by Geneviève Grégoire and her colleagues at the Hôtel-Dieu de Montréal in Quebec, has come to be called the Tower of Babel bias. The authors considered meta-analyses published in eight English-language medical journals over a period of two years. The advantage of a meta-analysis is that it combines the findings of many other studies in an effort to establish a more rigorous conclusion based on the totality of what has been done. More than just a research review, it allows each study to be weighted proportional to its validity. Grégoire and her colleagues found that a majority of the analyses excluded some studies based on language of publication, and that the analyses' results might have been altered had they included studies published in languages other than English. More generally, it is almost a truism by now that studies whose results either do not achieve statistical significance or show only a small effect are published in local journals or not at all. Thus international estimates of treatment effects tend to have a positive bias.

[Figure omitted: a bell-shaped curve of probability of occurrence against experimental effect size, running from big negative through zero to big positive, with the alpha study marked in the positive tail.] It's become a truism that initial studies with more positive results are more likely to be published, and published in prestigious journals. Efforts to replicate a study will produce a range of findings from negative to positive, including a number that indicate the subject has no effect at all. Performing a larger number of studies helps clarify which effects are significant.

[Figure omitted: two forest plots of odds ratios, one for MTHFR C677T gene polymorphism and coronary heart disease and one for GSTM1 gene deletion and lung cancer, with studies grouped as Chinese (PubMed-indexed), Chinese (not indexed in PubMed), non-Chinese Asian, and non-Chinese, non-Asian.] Zhenglun Pan and a team of coauthors performed meta-analyses of studies that explored gene–disease associations for six diseases common in China. Shown above are the results for two of these, which considered the association of specific genes with coronary heart disease and lung cancer, respectively. Each horizontal line represents a single study on the subject. The position of the square on each line indicates its odds ratio—the measure of the strength of association between two values—at a confidence interval of 95 percent. An odds ratio of 1 indicates that the study found no genetic effect, an odds ratio greater than 1 indicates genetic predisposition and an odds ratio of less than 1 indicates genetic protection against the condition in question. Studies by Chinese authors had a much greater likelihood of finding positive associations between the genetic factor they studied and disease. (Figure adapted from Pan, Z., et al. 2010. PLoS Medicine 7:e334.)

An Exception to the Rule

The stage is now set for us to shift our gaze to research done in the East. Chinese medical research, for example, is almost invisible to Western scientists, but the reverse is not true: Chinese researchers seem well aware of major findings in the West, although they are probably less familiar with the more minor publications. Keeping in mind the phenomenon of shrinking effect sizes, if we looked carefully at the findings of Chinese medical researchers as they strive to replicate Western medical findings, we would expect to find the same shrinkage as is the rule in the West. Is this what happens? Zhenglun Pan, of Shandong Provincial Hospital in Shandong, China, and
a team of international scholars did a large meta-analysis of dozens of studies done in China that were meant to be replications of earlier studies. They then redid the same meta-analysis with studies from other Asian (but non-Chinese) researchers, as well as non-Asian, non-Chinese researchers. The studies they considered, in the field of genetic epidemiology, seemed to find effect sizes at or surpassing those found in the alpha study. The authors call this a "reverse Tower of Babel" bias. Although the bias was greatest in Chinese studies, it was also found, to a lesser extent, in non-Chinese Asian research. Replication studies on the same subject by non-Chinese, non-Asian researchers found the smallest effect sizes of all. (Summaries of two of the meta-analyses by Pan and colleagues are shown in the accompanying figure.) Several speculative reasons for this effect come to mind—perhaps it is a matter of cultural norms; perhaps there is an interaction between treatment and ethnicity. For now we must await further research to determine its sources.

What Have We Learned?
Science is designed to be self-correcting. Attempts to replicate provide evidence of when it has gone astray. Or at least that's the theory. The real world, filled with fallible people and institutions, practically guarantees that the path toward progress meanders, sometimes massively. But this is not all bad. As physicist David Deutsch has emphasized, the evolution of a scientific idea is different from the Darwinian evolution of an organism in at least two important ways. First, ideas evolve in ways that are directed by the intelligence of the investigators. In contrast, biological evolution has no goal other than maximizing the likelihood that a particular mixture of genes will spread. Second, if a particular phenotype emerges that cannot survive, it becomes an evolutionary dead end, but an idea that is a failure can still have parts that can be retrieved and used subsequently.

The story told here provides some compelling examples, if any were needed, that the road to improvement is fraught with potholes of misinformation and twists of political intrigue. But as long as we maintain a healthy skepticism and remain free to publicly question the status quo, we
will continue to advance—at least for those disciplines for which the direction of an advance is known.

In a 1966 American Scientist article, Princeton University psychologist Julian Jaynes offered an evocative metaphor to delineate what he saw as the differences between psychology and physics. I would apply Jaynes's words more broadly to the differences between the hard sciences and the humanities—and, to some extent, the social sciences:

    Physics is like climbing a mountain: roped together by a common asceticism of mathematical method, the upward direction, through blizzard, mist, or searing sun, is always certain, though the paths are not. . . . The disorder is on the ledges, never in the direction. . . . [Psychology] is less like a mountain than a huge entangled forest in full shining summer, so easy to walk through on certain levels, that anyone can and everyone does. The student's problem is a frantic one: he must shift for himself. It is directions he is looking for, not height. . . . Multitudes cross each other's paths in opposite directions with generous confidence and happy chaos. The bright past and the dark present ring with diverging cries and discrepant echoes of "here is the way!" from one vale to another.

The pitons and cleats so critical for ascending a mountain, Jaynes continues, are replaced with blinders and earplugs as people wander the forest. This passage may help explain why the scientific method, so powerful in the hard sciences, fails when applied to subjects in which there is no broad consensus of what constitutes an advance.

In the world we inhabit, the rules of science interact with the foibles of scientists. What we see should not be taken at face value—even "objective science." Every scientific study carries along with its results some sense of its own credibility. Studies with larger sample sizes are more credible than those that are smaller, ceteris paribus.
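The point about chance and sample size can be made concrete with a small simulation (my own sketch, not from the column): generate many studies of a treatment that has no effect at all, and see what the luckiest of them report.

```python
import random
import statistics

# Simulate many studies of a treatment with NO true effect. Each study
# averages n unit-variance measurements; with |z| > 1.96 as the
# significance criterion, about 5 percent of null studies look
# "significant" by chance alone.
random.seed(1)

def simulate(n_per_study, n_studies=500):
    """Estimated effect size from each of n_studies null studies."""
    return [statistics.fmean(random.gauss(0, 1) for _ in range(n_per_study))
            for _ in range(n_studies)]

for n in (10, 1000):
    effects = simulate(n)
    se = 1 / n ** 0.5  # standard error of the mean
    hits = sum(abs(m) > 1.96 * se for m in effects)
    print(f"n = {n:4d}: {hits}/500 'significant' by chance; "
          f"largest effect seen {max(effects):+.2f}")
```

Roughly 5 percent of the null studies clear the significance bar at either sample size, but the spurious "effects" reported by the small studies are an order of magnitude larger than those from the big ones—just the sort of result an alpha study trumpets.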
If those doing the study have a great deal riding on the result, credibility suffers. We must always be vigilant. But the current, flawed system, in which independent studies are used to test results obtained by someone else, is the best available. As flaws are detected,
we can institute reforms. Such reforms should always move in the direction of greater openness and greater accessibility to the raw data from which the conclusions are drawn. The success of the scientific method relies on the continued existence and prosperity of researchers who relentlessly fit experimental data to theory. The validity of science depends on the survival of the fittists.

Acknowledgment
I am thankful to David Donoho who, over dinner one evening, told me about the results of a meta-analysis of Chinese medical research, thus instigating the writing of this essay.

Bibliography

Deutsch, D. 2011. The Beginning of Infinity: Explanations that Transform the World. New York: Viking.
Grégoire, G., F. Derderian and J. Le Lorier. 1995. Selecting the language of the publications included in a meta-analysis: Is there a Tower of Babel bias? Journal of Clinical Epidemiology 48:159–163.
Gulliksen, H. O. 1938. Extrasensory perception: What is it? American Journal of Sociology 43:623–634.
Ioannidis, J. P. 2005. Why most published research findings are false. PLoS Medicine 2:e124.
Jaynes, J. 1966. The routes of science. American Scientist 54:94–102.
Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus & Giroux.
Pan, Z., et al. 2010. Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature. PLoS Medicine 7:e334.
Rhine, J. B. 1934. Extra-Sensory Perception. Boston, MA: Bruce Humphries.
Computing Science
Alice and Bob in Cipherspace

Brian Hayes
A new form of encryption allows you to compute with data you cannot read

Alice hands Bob a locked suitcase and asks him to count the money inside. "Sure," Bob says. "Give me the key." Alice shakes her head; she has known Bob for many years, but she's just not a trusting person. Bob lifts the suitcase to judge its weight, rocks it back and forth and listens as the contents shift inside; but all this reveals very little. "It can't be done," he says. "I can't count what I can't see."

Alice and Bob, fondly known as the first couple of cryptography, are really more interested in computational suitcases than physical ones. Suppose Alice gives Bob a securely encrypted computer file and asks him to sum a list of numbers she has put inside. Without the decryption key, this task also seems impossible. The encrypted file is just as opaque and impenetrable as the locked suitcase. "Can't be done," Bob concludes again.

But Bob is wrong. Because Alice has chosen a very special encryption scheme, Bob can carry out her request. He can compute with data he can't inspect. The numbers in the file remain encrypted at all times, so Bob cannot learn anything about them. Nevertheless, he can run computer programs on the encrypted data, performing operations such as summation. The output of the programs is also encrypted; Bob can't read it. But when he gives the results back to Alice, she can extract the answer with her decryption key.

The technique that makes this magic trick possible is called fully homomorphic encryption, or FHE. It's not exactly a new idea, but for many years it was viewed as a fantasy that would never come true. That changed in 2009, with a breakthrough discovery by Craig Gentry, who was then a graduate student at Stanford University. (He is now at IBM Research.) Since then, further refinements and more new ideas have been coming at a rapid pace.

Homomorphic encryption is not quite ready for everyday use. The methods have been shown to work in principle, but they still impose a heavy penalty of inefficiency. If the system can be made more practical, however, there are applications ready and waiting for it. Many organizations are eager to outsource computation: Instead of maintaining their own hardware and software, they would like to run programs on servers "in the cloud," a phrase meant to suggest that physical location is unimportant. But letting sensitive data float around in the cloud raises concerns about security and privacy. Practical homomorphic encryption would address those worries, protecting the data against eavesdroppers and intruders and even hiding it from the operators of the cloud service.

Brian Hayes is senior writer for American Scientist. Additional material related to the Computing Science column appears at http://bit-player.org. Address: 11 Chandler St. #2, Somerville, MA 02144. E-mail: [email protected]

Three's a Crowd
In the early days of their relationship, Alice and Bob kept no secrets from each other; it was the rest of the world they wanted to shut out. Their main problem was how to communicate privately over a public channel, where nosy third parties—such as Eve the eavesdropper—might be listening in.
To solve this problem, Alice and Bob devised a variety of cryptographic schemes. Before sending a message to Bob, Alice would encrypt it with a secret key, turning plaintext into ciphertext; even if Eve intercepted the ciphertext, she could make no sense of it. But Bob had the decryption key, so he could recover the plaintext.

For some cryptosystems, Alice and Bob must each hold a copy of the same key, which both encrypts and decrypts. But then they face the thorny issue of how to transmit the key itself, without having it fall into Eve's hands. A particularly clever solution, called public-key cryptography, splits the key into two parts. Alice and Bob each publish a public encryption key, which allows anyone to send them an encrypted message. But they keep secret the corresponding decryption keys, so that only they can read the messages they receive.

Another innovation that helped Alice and Bob keep their private conversations out of the tabloid press was probabilistic cryptography, introduced in the early 1980s by Shafi Goldwasser and Silvio Micali of MIT. Earlier systems were deterministic: The same plaintext always produced the same ciphertext. But determinism is dangerous in public-key cryptography. Eve can try guessing the content of a message; then she encrypts the guess with the public key and checks to see if it matches an intercepted ciphertext. With a probabilistic scheme, every plaintext message has a multitude of possible encodings, and the system chooses randomly among them. Even if you correctly guess the plaintext, there's almost no chance of matching the random encryption. On decryption, however, all of the alternatives collapse to the same plaintext.

Cryptographic technology of this kind has become a routine part of life on the Internet—so routine that it often
goes unnoticed. When you check your bank balance on the Web, or make an online purchase, you rely on a secure version of the hypertext transfer protocol (https rather than plain http), which provides a layer of encryption behind the scenes. Even Google searches are encrypted.

These measures are meant to protect your messages while they are in transit. Encrypted communication shuts out Eve, who is sitting at the next table in Starbucks, tapping into your wifi connection. On the other hand, the cryptographic protocols conceal nothing from the recipients of your messages, who have the keys to decipher them. Usually, that's just fine, because the intended recipient is a trusted party. Homomorphic encryption is the tool for those occasions when you don't trust anyone, not even Bob.

A Parallel Universe
Over the years, Alice and Bob have gone their separate ways. Alice now works as the research director of a cryptographic software company; Bob has gone into hardware, running a cloud computing service. As they have drifted apart, their security and privacy needs have changed somewhat. When Alice talks to Bob, she still needs to guard against Eve's snooping. But, in addition, Alice's company now has proprietary information that she must not disclose to Bob. Complicating her predicament, she wants to use Bob's computers for tasks that involve the secret data.

Ordinary cryptography is no help in this situation. Alice can encrypt the data when she sends it to Bob, but he can do nothing with it unless he can decrypt it. That is exactly what Alice seeks to avoid. They are at an impasse, which homomorphic encryption is designed to surmount.

Before trying to explain how homomorphic encryption works, I should try to explain the word homomorphic. The Greek roots translate as same shape or same form, and the underlying idea is that of a transformation that has the same effect on two different sets of objects. The concept comes from the esoteric world of abstract algebra, but I can offer a more homely example, where the two sets of objects are the positive real numbers on the one hand and their logarithms on the other. Then multiplication of real numbers and addition of logarithms are homomorphic operations. For any positive real numbers x, y and z, if x · y = z, then log(x) + log(y) = log(z). This homomorphism offers two alternative routes to the same destination. If we are given x and y, we can multiply them directly; or we can take their logarithms, then add, and finally take the antilog of the result. In either case, we wind up with z.

Homomorphic cryptography offers a similar pair of pathways. We can do arithmetic directly on the plaintext inputs x and y. Or we can encrypt x and y, apply a series of operations to the ciphertext values, then decrypt the result to arrive at the same final answer. The two routes pass through parallel universes: plainspace and cipherspace.

[Figure omitted: side-by-side diagrams of conventional encryption and fully homomorphic encryption, tracing data from Alice's computer through encrypt, a circuit on Bob's computer, and decrypt.] Alice has confidential data she wants to process on Bob's computer, which is a server "in the cloud." But she wants to make sure no one else gains access to the data—not even Bob. Conventional encryption (left) protects her information while it is in transit but not while the computation is underway on Bob's computer (red portion of pathway). Homomorphic encryption (right) offers security from the moment the data stream leaves Alice's computer until it returns. The strategy requires that all the arithmetical and logical operations needed in the computation (symbolized here by a circuit of Boolean gates) be applied to the encrypted form of the data. In this diagram the distinction between encrypted and unencrypted data—between ciphertext and plaintext—is suggested by a typographic convention: The ciphertext is shown in numerals of the Devanagari alphabet.

Arithmetic in plainspace is familiar to everyone. A number is conveniently represented as a sequence of bits (binary digits 0 and 1) and algorithms act
[Figure omitted: parallel diagrams of plainspace and cipherspace linked by encrypt(x) = 2x and decrypt(x) = x/2. In plainspace, 3 + 5 = 8 and 3 × 5 = 15; in cipherspace, 6 + 10 = 16 and (6 × 10)/2 = 30, which decrypt to 8 and 15.] The concept of homomorphism describes a parallel linkage between operations on two sets of objects. In this toy example the sets of objects are the set of all integers (lower panel) and the set of even integers (upper panel). The operations on the objects are addition and multiplication. Going back and forth between the two sets is just a matter of doubling or halving a number. Addition works the same way in both sets. In the case of multiplication, an adjustment is needed: For even numbers, the product of x and y is defined as (x · y)/2. These sets and operations can be pressed into service as a rudimentary homomorphic cryptosystem. Plaintext integers are encrypted by doubling; then any sequence of additions and multiplications can be carried out; finally the result is decrypted by halving.
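The doubling scheme summarized in the caption above takes only a few lines of code (a sketch in Python; the function names are mine):

```python
# A rudimentary homomorphic cryptosystem: encrypt by doubling, decrypt
# by halving. Addition of ciphertexts is ordinary addition; the product
# of ciphertexts must be adjusted to (x * y) / 2.

def encrypt(x):
    return 2 * x

def decrypt(c):
    return c // 2

def add_cipher(c1, c2):
    return c1 + c2

def mul_cipher(c1, c2):
    return (c1 * c2) // 2

# 3 + 5 computed in cipherspace: 6 + 10 = 16, which decrypts to 8.
assert decrypt(add_cipher(encrypt(3), encrypt(5))) == 3 + 5

# 3 * 5 computed in cipherspace: (6 * 10) / 2 = 30, which decrypts to 15.
assert decrypt(mul_cipher(encrypt(3), encrypt(5))) == 3 * 5
```

Shifting every bit one place to the left is no way to keep a secret, but the round trip through cipherspace does come back with the right answers.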
on the bits according to rules of logic and arithmetic. Among the many operations on numbers we might consider, it turns out that adding and multiplying are all we really need to do; other computations can be expressed in terms of these primitives.

Doing mathematics in cipherspace is much stranger. Indeed, the task seems all but impossible. Encryption is a process that thoroughly scrambles the bits of a number, whereas algorithms for arithmetic are extremely finicky and give correct results only if all the bits are in the right places. Nevertheless, it can be done.

As a proof of concept, I offer an extremely simple homomorphic cryptosystem. Assume the plaintext consists of integers. To encrypt a number, double it; to decrypt, divide by 2. With this scheme we can do addition on enciphered data as well as a slightly nonstandard version of multiplication. Given plaintext inputs x and y, we can encrypt each of them separately, add the ciphertexts, then decrypt the result. This roundabout calculation gives the correct answer because 2x + 2y = 2(x + y). To make multiplication come out right, we have to define the product of ciphertexts as (x · y)/2, whereas plaintexts are multiplied by the usual formula x · y. With this rule it's easy to verify that the three-step sequence encrypt-multiply-decrypt yields the same result as simply multiplying the plaintexts. (Fiddling with definitions in order to get the right answer may seem like cheating, but many mathematical objects come with their own idiosyncratic rules for multiplication. Two examples are matrices and complex numbers.)

As cryptosystems go, the doubling scheme is certainly simple, and it's fully homomorphic. We can do all the arithmetic we want on ciphertexts. On the other hand, the system is not recommended if you actually want to keep secrets. Doubling a number does not thoroughly scramble the bits; it merely shifts them left by one position. Devising a secure fully homomorphic cryptosystem is much harder. That's what Gentry accomplished in 2009. Making the system efficient enough for practical applications is yet another challenge, still being addressed.

First Date

The idea of computing with encrypted data was first proposed in 1978 by Ron Rivest, Len Adleman and Michael L. Dertouzos, who were all then at MIT. Just a few months before, Rivest and Adleman, along with Adi Shamir, had introduced the first implementation of a public-key cryptosystem, which came to be known as RSA after their initials. (The RSA paper, by the way, also introduced Alice and Bob in their debut performance as celebrity cryptographers.)

The basic RSA scheme is partially homomorphic: It allows multiplication of ciphertexts but not addition. Rivest, Adleman and Dertouzos pointed out this fact and also mentioned a few other ways to achieve partial "privacy homomorphisms." They asked whether it would be possible to construct a secure scheme capable of general computation on ciphertexts.

In the next 30 years there were occasional advances on this front. For example, in 2005 Dan Boneh, Eu-Jin Goh and Kobbi Nissim devised a homomorphic system that allowed an unlimited number of additions on the ciphertext, followed by a single multiplication. (Boneh, by the way, was Gentry's thesis advisor.) In spite of such incremental progress, however, Gentry's announcement of a fully homomorphic scheme came as a total surprise in 2009.

Noisy Arithmetic
In broad outline, here is Gentry’s FHE construction kit. He creates a cryptosystem with the usual encrypt and decrypt functions, which convert bits from plaintext to ciphertext and back. He also builds an evaluate function that accepts a description of a computation to be performed on the ciphertext. The computation is specified not as a sequential program but as a circuit or network, where input signals pass through a cascade of logic gates. Such circuits are most often assembled from Boolean gates (and, or, not, etc.), but they can also be specified in terms of addition and multiplication steps. The evaluate function amounts to a complete computer embedded in the cryptosystem. In principle, it can calculate any computable function, provided that the circuit representing the function is allowed to extend to arbitrary depth. The depth of a circuit is the number of gates on the longest path from input to output. A full-powered computer must be able to handle circuits of arbitrary depth. Here the homomorphic system runs into a barrier. The problem is that ciphertext data are contaminated with numerical “noise”—slight discrepancies from their ideal values. Every arithmetic operation amplifies the noise, until eventually it overwhelms the signal. The origin of the noise lies in the probabilistic encryption process. Think of each ciphertext value as a point in
space. The probabilistic encrypt function injects a smidgen of randomness into each of the point's coordinates, displacing it slightly from the position it would occupy in a deterministic cryptosystem. The decrypt function filters out the noise by treating each point as if it were located at the nearest unperturbed position. When the noise is amplified by homomorphic computations, however, the point wanders farther from its correct position, until finally the decrypt function will associate it with an incorrect plaintext value.

Roughly speaking, each homomorphic addition doubles the noise, and each multiplication squares it. Hence the number of operations must be limited or errors will accumulate. Because of the limit on circuit depth, this version of the cryptosystem cannot be called fully homomorphic but only "somewhat homomorphic."

The depth limit could be evaded in the following way: Whenever the noise begins to approach the critical threshold, decrypt the data and then re-encrypt it, thereby resetting the noise to its original low level. The trouble is, decryption requires the secret key, and the whole point of FHE is to allow computation in a context where that key is unavailable.

The Pause That Refreshes
This is where the story gets wacky and wonderful. The evaluate function built into the cryptosystem is capable of performing any computation, provided it does not exceed the noise limit on circuit depth. So we can ask evaluate to run the decrypt function. Evaluate is designed to work with encrypted data, so the secret key supplied to it in this circumstance is an encrypted version of the normal key; specifically, the secret key supplied to decrypt running within evaluate is the ciphertext produced when encrypt is applied to the plaintext of the secret key. When decrypt is run with this enciphered key, the result is not plaintext but a new encryption of the ciphertext, with reduced noise.

In effect, Alice is giving Bob a copy of the key needed to unlock the data, but the key is inside a securely locked box and can only be used within that box. As a matter of fact, the box is locked with the very key that is locked inside the box! (Gentry discusses an even more elaborate version of this dizzying metaphor, in which Alice manufactures jewelry in the locked boxes.)
The pause to re-encrypt and refresh the noisy ciphertext can be repeated as needed. In this way the computer can handle a circuit of any finite depth, and the system becomes fully homomorphic. It can carry out arbitrarily complex computations on encrypted data.

An essential assumption in this scheme is that the decrypt circuit is itself shallow enough to run without exceeding the noise threshold. Indeed, its depth needs to be a little less than the limit, or else the computer will spend all its time refreshing the data and will never accomplish any useful work. When Gentry first formulated his FHE scheme, he found that this condition was not met. The evaluate function could not run the decrypt routine without accumulating excessive noise. The remedy was a technique for "squashing" decrypt, at the cost of making the key larger and more complicated. With this last innovation, the problem was solved.

Hard Problems
Gentry described his FHE system in his doctoral dissertation and in a paper at the Symposium on the Theory of Computing in 2009. In the three years since then, dozens of variations, elaborations and alternative schemes have been published, along with at least three attempts to implement homomorphic encryption in a working computer program. Most of the systems share the same overall architecture, with a somewhat homomorphic scheme that gets promoted to full homomorphism. Where the ideas differ is in the underlying cryptographic mechanism—the way that bits are twiddled and secrecy is achieved.
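None of those mechanisms fits in a few lines, but the flavor of a noisy somewhat homomorphic scheme does. The sketch below is my own toy, not any published system, with parameters chosen for readability rather than security: the secret key is an odd integer p, and a bit b is encrypted as qp + 2r + b for a random multiplier q and a small random noise term r.

```python
import random

# A toy "noisy" somewhat homomorphic scheme over the integers (insecure
# illustration only). Decryption takes (c mod p) mod 2, which recovers
# the bit as long as the accumulated noise stays well below p.
p = 10007  # secret key: an odd number (a real key would be astronomically larger)

def encrypt(bit):
    q = random.randrange(1, 1000)
    r = random.randrange(1, 10)  # small noise
    return q * p + 2 * r + bit

def decrypt(c):
    return (c % p) % 2

# Adding ciphertexts XORs the hidden bits; multiplying them ANDs the bits.
for x in (0, 1):
    for y in (0, 1):
        cx, cy = encrypt(x), encrypt(y)
        assert decrypt(cx + cy) == x ^ y
        assert decrypt(cx * cy) == x & y

# Addition adds the noise terms and multiplication multiplies them, so
# only circuits of limited depth decrypt correctly. Two encryptions of 1
# whose noises (101 and 103) multiply past p give the wrong answer:
c1 = p + 101  # encrypts 1 with noise 101
c2 = p + 103  # encrypts 1 with noise 103
assert decrypt(c1 * c2) == 0  # noise overflow: the AND of 1 and 1 is lost
```

The last assertion is the depth limit in miniature: one multiplication too many, and the ciphertext wanders past the point where decryption can recover the bit.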
[Figure omitted: encrypted data shown as points displaced from a lattice, passing through encryption, homomorphic operations and decryption.] Random "noise" in a secure cryptosystem is the principal impediment to homomorphic operation. Encrypted data can be envisioned as points that are given small random displacements from a finite set of lattice points. On decryption, each point is attracted to the nearest lattice point. Homomorphic operations amplify the random displacements. If the noise level exceeds a threshold, some of the points gravitate to the wrong lattice point, leading to an incorrect decryption. Without some means of noise control, the system can support only a limited number of homomorphic operations.

[Figure omitted: diagram contrasting normal decryption, in which a ciphertext and the secret key yield plaintext, with re-encryption to reduce noise, in which a ciphertext and an encryption of the secret key yield a refreshed ciphertext.] A noise-abatement mechanism was invented by Craig Gentry in 2009. Gentry observed that if a noisy ciphertext could be decrypted and then re-encrypted, it would be "refreshed," with reduced noise. But decryption requires a secret key, which is not available. The solution is to run the ciphertext through the decryption algorithm, but with an encrypted version of the decryption key. The result is a new ciphertext, just as secure as the original but with lower noise.
Every cryptosystem is based on a problem that’s believed to be hard in general (so that Eve can’t solve it) but easy if you know a shortcut (so that Alice and Bob can decrypt messages efficiently). RSA’s hard problem is the factoring of large integers; the shortcut is knowledge of the factors. Gentry’s 2009 algorithm relies on a problem from the theory of integer lattices—sets of discrete points arranged like the atoms of a crystal in a high-dimensional space. Lattices give rise to an abundance of computationally difficult problems. For example, from a random position in space it is hard to find the closest lattice point unless you happen to know a specific set of coordinates that serve as a geometric guidebook to the lattice.

In 2010 another homomorphic cryptosystem was invented by Marten van Dijk of MIT, Gentry, Shai Halevi of IBM and Vinod Vaikuntanathan, now at the University of Toronto. In this case the hard problem comes from number theory; it’s called approximate GCD. The exact GCD, or greatest common divisor, is easy to calculate; Euclid gave an efficient (and famous) algorithm. A “noisy” version of the problem seems to be much harder. If two large numbers have the GCD p, and you alter those numbers by adding or subtracting small random quantities, it becomes difficult to find p. In the cryptosystem, p is the secret key.

A problem called learning with errors forms the basis of a third FHE system, introduced by Zvika Brakerski of Stanford and Vaikuntanathan. Here the task is to solve a system of simultaneous equations where each equation has some small probability of being false. As with GCD, this is an easy problem in the exact case, where there are no errors, but searching for a subset of correct equations is laborious.

More recently, Brakerski, Vaikuntanathan and Gentry have developed a variant of the learning-with-errors system that takes a different approach to noise management. Instead of stopping the computation at intervals to re-encrypt the data, they incrementally adjust parameters of the system after every computational step in a way that prevents the noise level from ever approaching the limit.

Working Code
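Before turning to the published implementations, it may help to see how little code a toy homomorphic scheme requires. The sketch below follows the spirit of the integers-based scheme of van Dijk and colleagues, in its simplest symmetric form: a bit b is hidden as pq + 2r + b, where the secret key p is a large odd integer and r is small random noise. Every parameter here is invented for illustration and offers no security whatsoever.

```python
# Toy symmetric homomorphic scheme over the integers, in the spirit of
# van Dijk, Gentry, Halevi and Vaikuntanathan (2010). Parameters are
# illustrative only and hopelessly insecure.
import random

p = 1000003                         # secret key: a large odd integer

def encrypt(bit):
    q = random.randrange(1, 10**6)  # random multiplier
    r = random.randrange(1, 50)     # small random noise
    return p * q + 2 * r + bit      # ciphertext

def decrypt(c):
    return (c % p) % 2              # correct while the noise term stays below p

for b1 in (0, 1):
    for b2 in (0, 1):
        c1, c2 = encrypt(b1), encrypt(b2)
        assert decrypt(c1 + c2) == b1 ^ b2   # ciphertext addition acts as XOR
        assert decrypt(c1 * c2) == b1 & b2   # ciphertext multiplication acts as AND
```

Each multiplication roughly multiplies the noise terms together, so only a limited depth of circuit can be evaluated before the noise outgrows p: the “somewhat homomorphic” ceiling that Gentry’s re-encryption trick lifts. And recovering p from many such near-multiples of it is exactly the approximate-GCD problem described above.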
Computing in cipherspace is a cute theoretical novelty, but can it ever become a practical technology? Questions of computational efficiency and overhead are more challenging in FHE than in other kinds of cryptography. When encryption is used only to create a secure communications channel, it has no direct effect on the efficiency of computations done at either end of the connection. Homomorphic encryption is different: The cryptosystem becomes the computing platform, and any inefficiency slows the entire process.

Many homomorphic schemes exact a high price for security. During encryption, data undergo a kind of cosmic inflation: A single bit of plaintext may blow up to become thousands or even millions of bits of ciphertext. The encryption key can also become huge—from megabytes to gigabytes. Merely transmitting such bulky items would be costly; computing with the inflated ciphertext makes matters worse. Whereas adding or multiplying a few bits of plaintext can be done with a single machine instruction, performing the same operation on the inflated ciphertext requires elaborate software for high-precision arithmetic.

Much current work is directed toward mitigating these problems. For example, instead of encrypting each plaintext bit separately, multiple bits can be packed together, thereby “amortizing” the encryption effort and reducing overhead.

The ultimate test of practicality is to create a working implementation. Nigel P. Smart of the University of Bristol and Frederik Vercauteren of the Catholic University of Leuven were the first to try this. They built a somewhat homomorphic system, but could not extend it to full homomorphism; the bottleneck was an unwieldy process for generating huge encryption keys. Gentry and Halevi, working with a somewhat different variant of the lattice-based algorithm, did manage to get a full system running. And they didn’t need to build it on IBM’s Blue Gene supercomputer, as they had initially planned; a desktop workstation was adequate. Nevertheless, the public key ballooned to 2.3 gigabytes, and generating it took two hours.
The noise-abating re-encryptions took 30 minutes each.

In another implementation effort, Kristin Lauter of Microsoft Research, Michael Naehrig of the Eindhoven University of Technology and Vaikuntanathan show that large gains in efficiency are possible if you are willing to compromise on the requirement of full homomorphism. They do not promise to evaluate circuits of unbounded depth, but instead commit only to some small, fixed number of multiplications, along with unlimited additions. They have working code based on the learning-with-errors paradigm. Except at the highest security levels, key sizes are roughly a megabyte. Homomorphic addition takes milliseconds, multiplication generally less than a second. These timings are a vast improvement over earlier efforts, but it’s sobering to reflect that they are still an order of magnitude slower than the performance of the ENIAC in 1946.

Putting Code to Work
Lauter, Naehrig and Vaikuntanathan also discuss some of the ways we might use homomorphic computing. Ensuring the privacy of online medical records is one application. The patient would grant doctors access to selected records by sharing a secret key.

Wall Street is another potential customer for homomorphic services. The “quants” who base investment decisions on computational analysis have a strong proprietary interest not only in their data but also in their algorithms. With FHE both can be protected by the same mechanism.

A third idea is to build a cryptographic privacy fence between online advertisers and consumers. Advertisers, eager to reach individuals with specific interests or habits, gather and cross-index data on people’s activities on the Internet and elsewhere. A service based on homomorphic encryption could match ads to targeted consumers while ensuring that advertisers learn nothing about the people selected.

When I asked Vaikuntanathan what application he thought might be deployed first, he had another suggestion: spam filtering. If you publish a public key and invite correspondents to send you encrypted email, a spammer can take advantage of the key to encrypt advertisements and the other effluvia that fill our mailboxes. Spam-filtering services cannot read and reject the encrypted spam unless you are willing to share your decryption key; homomorphic encryption could solve that problem.

My own fantasy application is an offshore bank called the Homomorphic Trust Company. The online interface might look much the same as any other bank’s, with the usual cryptographic safeguards against intruders. But at this bank, even the bankers could not know the details of your transactions. I think Alice might be interested; she could get rid of that suitcase full of uncountable cash.

Bibliography
Boneh, D., E.-J. Goh and K. Nissim. 2005. Evaluating 2-DNF formulas on ciphertexts. In Proceedings of Theory of Cryptography, pp. 325–341.
Brakerski, Z., C. Gentry and V. Vaikuntanathan. 2011. (Leveled) fully homomorphic encryption without bootstrapping. In Proceedings of the Third Innovations in Theoretical Computer Science Conference, pp. 309–325.
Brakerski, Z., and V. Vaikuntanathan. 2011. Fully homomorphic encryption from ring-LWE and security for key dependent messages. In Proceedings of the 31st Annual Cryptology Conference, pp. 505–524.
Coron, J., D. Naccache and M. Tibouchi. 2012. Public key compression and modulus switching for fully homomorphic encryption over the integers. In Proceedings of Eurocrypt 2012, pp. 446–464.
Gentry, C. 2009. A fully homomorphic encryption scheme. Ph.D. dissertation, Stanford University. Available at http://crypto.stanford.edu/craig.
Gentry, C. 2009. Fully homomorphic encryption using ideal lattices. In Proceedings of the 41st ACM Symposium on Theory of Computing, pp. 169–178.
Gentry, C. 2010. Computing arbitrary functions of encrypted data. Communications of the ACM 53(3):97–105.
Gentry, C., and S. Halevi. 2011. Implementing Gentry’s fully-homomorphic encryption scheme. In Proceedings of Eurocrypt 2011, pp. 129–148.
Gentry, C., S. Halevi and N. P. Smart. 2012. Fully homomorphic encryption with polylog overhead. In Proceedings of Eurocrypt 2012, pp. 465–482.
Goldwasser, S., and S. Micali. 1982. Probabilistic encryption and how to play mental poker keeping secret all partial information. In Proceedings of the 14th ACM Symposium on Theory of Computing, pp. 365–377.
Gordon, J. 1984. The story of Alice and Bob. http://www.johngordonsweb.co.uk/concept/alicebob.html.
Lauter, K., M. Naehrig and V. Vaikuntanathan. 2011. Can homomorphic encryption be practical? In Proceedings of the Third ACM Workshop on Cloud Computing Security, pp. 113–124.
Rivest, R. L., L. Adleman and M. L. Dertouzos. 1978. On data banks and privacy homomorphisms. In Foundations of Secure Computation (New York: Academic Press), pp. 169–180.
Smart, N. P., and F. Vercauteren. 2010. Fully homomorphic encryption with relatively small key and ciphertext sizes. In Proceedings of the Conference on Practice and Theory in Public Key Cryptography, pp. 420–443.
van Dijk, M., C. Gentry, S. Halevi and V. Vaikuntanathan. 2010. Fully homomorphic encryption over the integers. In Proceedings of Eurocrypt 2010, pp. 24–43.
Engineering
Portrait of the Artist as a Young Engineer

Henry Petroski
Alexander Calder’s years at Stevens Institute of Technology likely influenced his later artistic paths

When I last wrote about the engineering background of Alexander Calder (in the July–August 2009 issue of this magazine), I did not have the benefit of his transcript from his beloved alma mater, Stevens Institute of Technology, which was identified during Calder’s time there as “a College of Mechanical Engineering.” This document, in conjunction with information in the institution’s catalog for the 1918–1919 academic year, has provided further insight into and confirmation of the hypothesis that his engineering education had a profound influence on his art, especially as manifested in his signature compositions known as mobiles and stabiles.

Stevens Institute of Technology is located on Castle Point, a promontory on the Hudson River, and the highest elevation of land in Hoboken, New Jersey, which is just across the river from midtown Manhattan. The land was once the estate of Col. John Stevens III, the inventor and engineer of early steamboats. In 1802 he built a boat driven by a screw propeller, well before that means of propulsion was commonly accepted as more effective in open water than paddlewheels. In 1809, Stevens’s steamboat Phoenix, in sailing from Hoboken to Philadelphia, became the first to navigate the open ocean. In 1811, his steamboat Juliana became the first to provide ferry service between Hoboken and New York City. Stevens went on to design and build some of the earliest locomotives for the developing railroad.

The will of Col. Stevens’s son, Edwin Augustus Stevens, who died in 1868, established the namesake “institution of learning” by bequeathing land, a $150,000 building fund and a $500,000 endowment fund. (A collateral inheritance tax diminished the bequest by about $45,000.) The institution was realized in 1870 as Stevens Institute of Technology, which offered but the single course of study leading to the degree of Mechanical Engineer (M.E.). After 1910, the institute included the mansion built by Edwin in 1853 known as Stevens Castle. This 40-room Victorian structure would serve as an administration and residential building until 1959, when it was torn down to make way for a high-rise building for the campus.

Henry Petroski is Aleksandar S. Vesic Professor of Civil Engineering and a professor of history at Duke University. Address: Box 90287, Durham, NC 27708-0287.

A Practical Education
Sandy Calder, as the future artist was known to his family, friends and classmates, matriculated at Stevens in September 1915, just a month after his 17th birthday. Tuition during the time he attended was $225 per year, an amount that was said to cover only about half of the cost of the education received. When laboratory fees, textbooks, drafting instruments and the requisite military uniform were taken into account, the total annual cost was approximately $300 per student, exclusive of room and board. Board was about $6.25 per week. Charges for dormitory accommodations in the castle ranged from $85 to $190 per school year, depending on which of the 54 available rooms was involved. Calder had the good fortune of being assigned a room in the tower of the castle. In his autobiography, he described what had been Edwin Stevens’s drafting room to be “a wonderful room, with windows looking up and down the river, and across—it was all windows.” Decades later, when the artist Calder would build his own studio in Roxbury, Connecticut, it too would be virtually “all windows.”

According to Calder’s college transcript, his New York City home address while at school was 27 Waverly Place, which is on the fringe of Greenwich Village, by then established as a magnet for artists, poets and the avant-garde generally. Sandy’s father, the sculptor Alexander S. Calder, had relocated the family from California to New York because he had a commission to sculpt images of George Washington for the monumental arch located in Washington Square Park, for which a stretch of Waverly Place forms the northern border. The Calder family residence was located just one block east of the park and two blocks from the arch.

The younger Calder’s transcript also indicates that prior to entering Stevens Institute he had graduated from San Francisco’s Lowell High School, where he took courses that prepared him well to enter an engineering curriculum: two units of algebra; one each of chemistry, physics and plane geometry; and one-half unit each of solid geometry and trigonometry. In addition, he had taken three units each of English and Latin, two of German, and one each of biology, ancient history and U.S. history. It was not much different from my own college entrance record 44 years later. However, unlike the typical 1960s engineering curriculum, which leaned heavily toward a mathematical and theoretical approach, Calder followed one that included numerous hands-on courses involving drawing, drafting, surveying and shop practice. Time spent in such courses was a large part of the engineering program that Calder experienced.

During his four years at Stevens, he took at least 14 distinct courses each year, most of which—especially in the first two years—were continued over two semesters. Hours per week, instead of the now more common semester hours, were used to convey the intensity of class and laboratory time. A lecture course might meet nominally for three or four hour-long sessions a week, and a mechanical drawing course might meet for six hours throughout the week. In his freshman year, Calder took nine hours of shop practice in the first 14-week term (semester) and six hours in the second term. In this hands-on course he would have gained experience using standard machine tools and become familiar with methods used in machine shops, forge shops, wood shops and foundries—experience that would be invaluable for the artist he would become. In a supplementary term, there was more shop practice, as well as surveying.

Shop practice had largely disappeared from the engineering curriculum I experienced in the 1960s, but surveying had yet to do so. I and most of my contemporaries went away for a summer session of two weeks at surveying camp, where we learned to distinguish between precision and accuracy by studying in class and going out in the field with surveying instruments that included chains, rods, transits and levels.

A Heavy Load
Calder took no fewer than 29 hours per week of course, shop and laboratory work during regular terms of his four years at Stevens, with some of the latter terms, which are even today notoriously laboratory intensive, having as many as 34 hours of class and laboratory time. (A typical engineering curriculum today requires about 128 semester hours, which amounts to 16 credit hours per semester. Of course, a typical four credit-hour engineering course might involve three hours of lecture and three hours of laboratory work per week, but even with a full load, today’s engineering student seldom exceeds 20 or so hours per week in the classroom and laboratory. Neither in Calder’s time nor today does this reflect the time the student is expected to spend out of the classroom or laboratory studying and working on homework problems.)
[Photograph: CNAC/MNAM/Dist. Réunion des Musées Nationaux/Art Resource] Sculptor Alexander Calder trained as a mechanical engineer, graduating from Stevens Institute of Technology in 1919. The author argues that Calder’s studies in mechanics, draftsmanship and other engineering skills had a significant influence on his development as an artist. This photograph of Calder with his model circus rigging was made in 1929.
The Calder-era Stevens catalog includes photographs of some of the laboratories in which he would have conducted experiments on and gained experience with a wide variety of engines, generators, pumps, and associated mechanical equipment, apparatus, devices and instruments. The catalog photographs and prose descriptions show machinery being driven by systems of shafts, pulleys and belts, as was then still the case in older mills and factories. Contemporary photos of engineering laboratories at other schools show similar installations. One particularly striking photo of students working in an engineering laboratory in 1920 at the University of Nevada—which appears to have a remarkably rich archival photograph collection—shows an especially dense system of pulleys and belts that exudes motion even in the static tableau that the camera captured. The idea and reality of motion would have swirled all around the student Sandy Calder, and it must have left a lasting impression on him and greatly informed his art.

[Photograph: This dormitory room in the tower of the Castle at Stevens was likely that of Calder for at least part of his stay at the institution. He remarked about the wonderful views of the Hudson River out of the large expanses of glass. (Photograph courtesy of the Stevens Institute of Technology.)]

What is perhaps most interesting about Calder’s transcript is the amount of time he spent taking course and laboratory work in mechanical drawing and related subjects, which was consistent with the fact that at the time the department of descriptive geometry and mechanical drawing, with its three professors plus three instructors, was the largest of the institution’s 15 departments. Mechanical drawing introduces students to seeing things from an engineering perspective, and the subject was once believed to be fundamental to developing an engineer’s eye and hand for seeing, sketching, drawing and drafting. Another photo from the University of Nevada archives shows a mechanical drawing class from 1912, and it is likely that Calder took a similar class at Stevens. The students are dressed in full military uniform and are sitting—each with a drafting board on his lap and a pencil in his hand—facing a belt-driven pulley that they have clearly been charged with sketching, as if it were an artist’s model posing in a life-drawing class. Some of the students are bent over their boards sketching; others are staring intently at the subject, perhaps planning their next line; one student is even taking a break from the intensity of the task, his board at ease and his eyes contemplating the photographer. A similar picture taken at Stevens might have captured Sandy Calder looking away from the machinery for a moment, thinking not about it but about art.

In addition to sketching machinery, the engineering student of the time was introduced to the convention of orthogonal projection, which taught engineers-to-be to think of a three-dimensional object in terms of two-dimensional views—or projections—of the object. Such a convention is not entirely unique to engineers, for architects employ a version of it in the form of plans and elevations of houses and other buildings. Everyone who reads a roadmap interprets the routes across the three-dimensional topography of a hilly region in terms of their two-dimensional projection onto a flat surface, whether it be a sheet of paper or, today, a GPS screen. But what makes orthogonal projection different for engineers is the way it treats what other forms of projection generally ignore. In mechanical drawing classes, engineers learn that the “hidden line” is as important as the visible line. That is, they learn to look through a solid object, as if it were made of transparent Lucite, and see all the edges that would manifest themselves in such a replica of the thing. The edges on the far side of the object are conventionally rendered as broken or dashed lines, thereby signaling to the viewer that they would be “hidden” were the object made of an opaque material like wood, concrete or
steel. Calder, in his studies for stabiles, for example, used such dashed lines to clarify the three-dimensional nature of the object he was rendering in two dimensions. Seeing through solid objects with an engineer’s eye might also have helped Calder develop his concept of wire portraits, in which three-dimensional forms were rendered effectively in a two-dimensional space, albeit one that might be a bit curved.

Also at Stevens, Calder took two semesters of descriptive geometry, a notoriously difficult course for engineering students who did not have an eye for three-dimensional interactions between objects ranging from simple straight lines and flat planes to complex intersections of cones and cylinders. An example of the latter is a sharpener forming a wooden-pencil point, which geometrically is effectively a right circular cone cutting into a hexagonal cylinder. The line where these two common geometric figures meet is a complicated one of convoluted scallops, and the difficulty of seeing and drawing it correctly is evident in many a newspaper cartoon or magazine advertisement where an improperly rendered sharpened wooden pencil is used to illustrate a point, whether literally or metaphorically. Another example of a common situation that might be the subject of a descriptive geometry problem would be the true shape of the intersection of a plane and a torus—the surface(s) created when a knife slices through a bagel at an oblique angle, leaving not two nearly equal halves on which to spread butter or cream cheese but two badly mismatched parts of irregular shape.

Statics, Kinematics and Kinetics
Among other courses that may have had a profound influence on Calder’s later work as an artist must surely have been those in mechanics, which began in the sophomore year. The texts for the mechanics courses that Calder took were written by Stevens faculty member Louis Adolphe Martin Jr., M.E., A.M., who was the institute’s professor of mechanics. The first volume of his Text-Book of Mechanics was published in 1906 by the venerable publishing house of John Wiley & Sons, which traced its roots to a printing shop established in 1807 in lower Manhattan. Beginning as a publisher of law books, the firm soon was publishing writers, including James Fenimore Cooper, Washington Irving, Herman Melville and Edgar Allan Poe, whose works became American classics. Around mid-century, contemporaneously with the flourishing of the industrial revolution, Wiley began publishing books on science and technology, soon expanding very aggressively into those fields. It is unlikely that any engineering student, in Calder’s time or mine, did not pore over the explications, diagrams and formulas contained in at least one textbook published by Wiley, which after 195 years in New York moved its headquarters to Hoboken in 2002.

[Photograph: Belts were still the predominant method of transferring power in the mid-teens of the 20th century. For engineers, at least as important as understanding how the machinery worked was the ability to project its three-dimensional details onto a flat sheet of paper. (Photograph courtesy of the Stevens Institute of Technology.)]

Martin’s Text-Book of Mechanics was issued in six volumes, treating respectively the following subjects: Statics, which deals with the nature and balance of forces acting on objects of all kinds; Kinematics and Kinetics, which deal with the nature and geometry of motion and the effects of forces on it; Mechanics of Materials, which deals with the strength of materials and the internal forces that cause things to break; Applied Statics, which deals with the response of structures to the forces applied to them; Hydraulics, which deals with the behavior and motion of water and other essentially
incompressible fluids; and Thermodynamics, which concerns itself with the nature of heat and other forms of energy. Calder would have used all six volumes of the Text-Book in courses taken in his sophomore and junior years, and especially the first two of the courses no doubt had a profound influence on the future artist and his art. Indeed, the contents of the courses and their texts may have served as inspiration for his signature pieces of art in motion—his mobiles.

In these texts Calder would have been introduced to the technically precise meanings and implications of words and terms like volume, vector and density. It was evident that his engineering education had made a strong impression on him when he titled his first major exhibit, held in Paris in 1931, Volumes-Vecteurs-Densités: Dessins-Portraits. The show demonstrated Calder’s talent for creating portraits out of a single piece of wire, a medium he had favored since childhood, when he made jewelry for his sister and her dolls. His technique of using little more than his hands and a pair of pliers to bend the wire into curves ranging from tight and intricate to open and sweeping produced astonishing likenesses of people and animals. Calder became famous for always having a pair of pliers in his pocket, at the ready to doodle in wire. That he did this from childhood might have played a role in steering him into engineering school, just as tinkering with bicycles and automobiles at midcentury would for so many engineers-to-be, and as building, taking apart and playing with computers would later in the century.

The dessins of the Paris show’s subtitle evoked a concept that Calder would have heard repeatedly in his time at Stevens, not only explicitly in a course like Machine Design, but also implicitly in virtually every course in the curriculum and especially in his senior year, when everything that was studied in the earlier years was brought together and applied. In the Structural Engineering course, for example, a student like Calder was expected to use what he had learned in mathematics and theoretical and applied mechanics courses “in the design of structures of steel, wood, and masonry.” This tradition of simulating real-world problem solving in the classroom continues in engineering schools today, when the culmination of all that came before is a so-called senior capstone design course.

[Figure: Descriptive geometry is the engineering course that taught students how to project correctly complex forms, including the sharpened point of a hexagonal pencil, a task that still flummoxes many artists.]

In the Paris show, the concept of design was realized in Calder’s mechanized sculptures consisting of various objects and shapes (volumes and densities) connected by wires (vectors) and strings to a motor that put the whole thing in motion. It was for this kind of artwork that the artist Marcel Duchamp suggested the term mobile. (Calder’s structural
engineering course would certainly have helped him design his large-scale stabiles not only to stand stably but also to support their own massive weight without the relatively thin two-dimensional components bending or buckling. The ribs that Calder incorporated into his largest works were in fact an engineer’s preventive measure against failure.)
Getting to the Fulcrum

The mobiles for which Calder would really become famous did not rely on a motor to impart motion to them. Rather, they were composed of abstract shapes (volumes), sometimes of different materials (densities), so delicately balanced at the ends of wires (vectors) that the whole system was moved by the drafts issuing from air vents or the breeze created by an open door or a passing art patron.

Alexander Calder's college transcript provides strong evidence of the rigors of training in mechanical engineering at the time. His course loads would be considered unusually heavy today. (Photograph courtesy of the Stevens Institute of Technology.)

To an engineer, Calder's mobiles look like a series of interconnected levers. Indeed, in his textbook on statics, Calder would have read that "the lever is a rod or bar, either straight or curved, supported at one point. This point is called the fulcrum." The definition was followed by an example, which made it clear that balance about the fulcrum depended not only on the weights that were hung from the ends of a rod but also on the rod's weight and its distribution along the rod itself. A figure accompanied the example, and the drawing contains virtually all the essential features of the elementary components of a Calder mobile. By understanding the simple problem of balancing each component about its fulcrum, the solution to which followed the technical statement of the problem, an understanding of the mechanics of any Calder mobile can be had.

The existence of textbook examples like this does not answer the question of whether they later inspired Calder the artist to create mobiles or whether they explained technically to him what he had achieved empirically, perhaps even before opening an engineering textbook. However, the presence of such examples and illustrations in his texts does provide evidence that his art and his engineering were intertwined.

In the chapter on applications of statics to simple machines, there is an exercise in which the student is asked to find the relation between two forces acting on a combination of four levers. The lengths of the levers and the position of each fulcrum are given in the diagram accompanying the exercise. This is a fairly representative problem that an engineering student is expected to be able to work after having been introduced in the preceding paragraphs to the basic principles. A course instructor will typically assign a number of such problems as a homework assignment due in a class period or two. Collectively, the problems are called a problem set, and engineering students often struggle with and grouse over such assignments and equate them with what is considered the onerous workload associated with the engineering curriculum. In fact, the working of such problems inculcates into students a facility for working problems generally, a process whereby the students are expected to gain insight and feel for the subject. The combination of levers illustrated in the exercise, when considered by the kind of imaginative and creative individual that Calder obviously was, can suggest other combinations and variations on the theme, which in turn can suggest analogous arrangements of weights, rods and fulcrums. Although the engineering student was expected to derive a formula of sorts to relate the force applied to one end of one lever in the arrangement to the weight hanging from the end of another lever in the arrangement, in time the student could be expected to see the system as a whole and grasp in an instant the way it works and know intuitively that such a relationship does indeed exist.

As the end of an engineering student's course of study approached in senior year, he was expected in courses like Structural Engineering to go beyond the exercises of the kind that make up problem sets, for which there is a single relation to be derived or a single number to be determined. Here, the student is expected to be able to use all the analytical tools and techniques that he had learned in prior years as aids in synthesizing something new. Rather than seeking a single answer to a single well-defined problem, the student is expected to find an acceptable and perhaps economical solution to a poorly defined problem, such as "design and build a combination of levers that will balance in the still air, move gently in a mild breeze, and is affordable within a predetermined budget." Increasingly in engineering education today, such a problem might also be given to freshmen, who are expected to determine what analytical tools they might need as they progress toward a solution.

After his mobiles had gained wide recognition as a new art form that employed motion rather than more traditional compositional elements such as color and line, Calder began to be asked by what he considered "some of the lesser lights" in the art community,
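The four-lever exercise described above has a compact general structure: for ideal, weightless levers in series, moments balance about each fulcrum, so the force at one end relates to the weight at the other through a product of lever-arm ratios. A minimal sketch; the arm lengths are invented for illustration (the textbook's actual diagram and numbers are not reproduced in the article):

```python
def balancing_force(weight, levers):
    """Force needed at the input of a chain of ideal, weightless levers
    to balance `weight` at the output.

    `levers` is a list of (input_arm, output_arm) lengths, each measured
    from that lever's fulcrum. For one lever, moments balance when
    F_in * input_arm = F_out * output_arm, so each lever scales the
    required force by output_arm / input_arm.
    """
    force = weight
    for input_arm, output_arm in levers:
        force *= output_arm / input_arm
    return force

# A hypothetical combination of four levers (arm lengths in arbitrary units):
levers = [(4.0, 1.0), (3.0, 1.0), (2.0, 1.0), (5.0, 1.0)]
print(balancing_force(120.0, levers))  # 120 / (4*3*2*5) = 1.0
```

A real textbook problem, as the lever example quoted above emphasizes, also folds in each rod's own weight and its distribution; this sketch ignores that.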
American Scientist, Volume 100
Calder was known for using a pair of pliers to fashion three-dimensional objects out of wire almost as if doodling. This image was made in 1967 at his studio near Saché, France. (Photograph by Pierre Vauthey/Corbis.)
"What formula do you use?" Of course, he did not need a formula per se, for his engineering background and insight had given him a keen understanding of the nature of a system of forces, levers and fulcrums. When he set out to create a new mobile, he did not need or follow a formula, for he was engaged in the creative process of design rather than the analytical one of problem solving. In fact, writing in his autobiography about his use of motion as a compositional element, he asserted that "this I consider a rather natural turn for me, for I was once an engineer, and am a graduate of Stevens Institute of Technology."

Calder may have considered himself to have once been an engineer, but once an engineer, always an engineer. Still, without question he also became an artist. Having been a member of both camps, he understood their similarities and their differences. He understood that there were strong similarities between the acts of an engineer designing a combination of levers to accomplish a relation between an applied force and a supported weight and an artist creating a mobile expressing an aesthetic of color, extension and motion. Calder also understood that there were differences between the two creative acts: "To an engineer, good enough means perfect. With an artist, there's no such thing as perfect." That is why Calder made so
many mobiles, each of which may have been good enough for the engineer in him, but none of which was perfect, at least to the artist himself.

Acknowledgments
I was prompted to look more deeply into Alexander Calder's engineering education when I was asked to deliver the Annual Semans Lecture at the Nasher Museum of Art at Duke University. The lecture was given on March 15, 2012, in conjunction with the exhibition Alexander Calder and Contemporary Art: Form, Balance, Joy, which was sponsored in part by Sigma Xi and American Scientist. I am also grateful to Adam Winger, Head of Special Collections and Digital Initiatives Librarian in the S.C. Williams Library at Stevens Institute of Technology, for providing me with copies of the school's 1918–1919 catalog, Calder's transcript and photographs.

Bibliography

Calder, Alexander. 1977. Calder: An Autobiography with Pictures. New York: Pantheon.

Martin, Louis A., Jr. 1906. Text-Book of Mechanics. Six volumes. New York: John Wiley & Sons.

Stevens Institute of Technology. 1918–1919. Annual Catalogue. Hoboken, N.J.: Stevens.
Marginalia
Bonding to Hydrogen

Roald Hoffmann
The simplest molecule, made for connections

Roald Hoffmann is Frank H. T. Rhodes Professor of Humane Letters, Emeritus, at Cornell University. Address: Baker Laboratory, Cornell University, Ithaca, NY 14853-1301. E-mail: [email protected]

My first encounter with H2 was typical for a boy in the age of chemistry sets that had some zing to them. My set, made by A. C. Gilbert Co., contained some powdered zinc. It had no acids, but it taught you to generate them from chemicals it included (for instance, HCl from NaHSO4 and NH4Cl), or—the manual said—you could buy a small quantity from your local apothecary. Perhaps I got it there, asking politely for the acid in my best accented English a year or so after coming to Brooklyn from Europe. I poured some of the dilute acid on the zinc in a test tube, watched it bubble away, lit (with some fear) a match and heard that distinct pop.

Flammable Air

Next, I encountered the gas, Henry Cavendish's inflammable air, in a high school electrolysis experiment. We ran a current through water with a little salt dissolved in it, and collected the unequal volumes of gases formed, each trapped in an inverted tube. Both gases gave small pyrotechnic pleasures—one, hydrogen, with that satisfying pop when a newly extinguished splint came near it; the other, oxygen, revived exuberantly the flame of the same splint.

Primo Levi, in an early chapter of his marvelous The Periodic Table, describes an initiation into chemistry that features the same experiment, with more fearsome results:

    I carefully lifted the cathode jar and holding it with its open end down, lit a match and brought it close. There was an explosion, small but sharp and angry, the jar burst into splinters (luckily, I was holding it level with my chest and not higher) and there remained in my hand, as a sarcastic symbol, the glass ring of the bottom.… It was indeed hydrogen, therefore: the same element that burns in the sun and stars, and from whose condensations the universes are formed in eternal silence.

In my high school lab I had no idea that I was reliving, with different methods, part of the experiment Antoine Laurent Lavoisier thought important enough, over two days in February 1785, to invite a select group of luminaries of French science to witness. In a tour de force of the big science of his day, using some remarkable instruments he had constructed at his own expense, he decomposed water into its constituent hydrogen and oxygen, and followed that by a recombination of the elemental gases thus generated into water. Henry Cavendish had proved that water is formed in the combustion of hydrogen some years before; Lavoisier not only decomposed water, but determined that the cycle of its decomposition and reformation proceeded with conservation of mass. Not everyone was convinced—they should have been—yet with this experiment a new chemical age dawned. Water and air, those seemingly homogeneous elements of the Greeks, were shown to be a compound and a mixture, respectively.

Marie Anne Pierrette Paulze Lavoisier created this image in 1789 of the equipment Antoine Laurent Lavoisier used to decompose water into hydrogen and oxygen and then reconstitute them to water. The making of hydrogen proved an inspiration for the author's early forays into chemistry and would return to fascinate him decades later. (Image courtesy of the Division of Rare & Manuscript Collections, Cornell University Library.)

A Diatomic Molecule

Chemistry and I progressed; it took chemistry a good 75 years from Lavoisier's time to have the macroscopic compounds—there at the beginning, with us today—be joined by a realization of an underlying microscopic reality, imagined well before it was proven, that of molecules. And it took another 65 years (now we're circa 1925) for the new quantum mechanics to be created, explaining the why and wherefore of the molecules of dihydrogen (a nomenclature I will use when I need to distinguish hydrogen molecules from hydrogen atoms). In my education, I made that transition from compounds to molecules, much as chemistry did. Except I did it in three years instead of 140.

I encountered the molecule, more precisely the quantum mechanical treatment of H2, in a class George Fraenkel taught, and beautifully so, in my last year at Columbia College. Fraenkel took us through the first calculation on H2 by Heitler and London, in 1927, a calculation parlayed by Linus Pauling into a general theory of covalent bonding. By this time the dissociation energy of H2 (the strength of the bond, the energy needed to take it apart into two hydrogen atoms) was known. It was 4.48 electron volts (eV) per molecule, 104 kilocalories per mole (kcal/mol). If that doesn't touch you, let's begin with the fact that a mole of H2 (roughly 22 liters of it in gaseous form at room temperature) has a mass of 2.0 grams. Not much; that's why it was used in airships. Kcal/mol? To heat a liter (about a quart, 1.057 quarts to be exact) of water from room temperature to boiling (a real-life operation most of us, even men, have done) takes about 80 kcal. That should help—to knock 2 grams of hydrogen molecules into hydrogen atoms takes about the same energy as to heat one and a quarter liters of water to boiling. Except, don't try it on your stove—remember the Hindenburg airship.

The energy of the hydrogen molecule as a function of distance is described by a "potential energy curve," a graphical depiction of how the chemical potential energy of the molecule varies with the separation of the hydrogen atoms (actually their nuclei) in the molecule from each other. The depth of the well relative to the separated atoms is the dissociation energy I described above. But any molecule is a quantum mechanical entity; so the molecule, as a consequence of Heisenberg's uncertainty principle, does not sit still at the minimum of the potential energy curve. The molecule vibrates, the vibrations of the molecule are quantized—and in its lowest energy state the hydrogen nuclei retain some motion (in a way like a pendulum but less deterministically so) around the "equilibrium distance." Sometimes they are a little closer, sometimes a little farther apart; on the average they are ~0.74 × 10⁻⁸ centimeter, 0.74 Ångström (Å), from each other. We call that the bond distance.

The potential energy for a dihydrogen molecule varies as a function of the distance between the nuclei of the two atoms. In the case of dihydrogen this distance is on average about 0.74 Ångströms.

The bond distance in the H2 molecule and its dissociation energy were known by the time the new quantum mechanics came. Heitler and London got a dissociation energy of 3.14 eV, and an equilibrium distance of 0.87 Å. Not too great (compared with experiment) but a remarkable result: For the first time quantum mechanics "explained" the existence of a molecule. Which classical mechanics coupled with electrostatics, try as it might, couldn't.

One could not solve the Schrödinger equation, the wave equation that describes all matter, exactly for H2, but the path down a road of increasingly accurate approximations to the exact solution seemed beautifully logical and enticing to this young apprentice. Fraenkel took us through it first of all by another method, called the molecular orbital (MO) method, pioneered by Friedrich Hund and Robert S. Mulliken. A molecular orbital is a combination of atomic orbitals, an approximate way to describe the location of electrons in a molecule—I will show you one soon. This method eventually dominated chemical thinking from the 1950s through today, but initially gave a poorer description of the H-H bond in H2. Yet both the MO and the Heitler-London methods (expanded into Pauling's "valence bond" [VB] approach) could be systematically, logically improved. We followed and understood that path in our class, culminating in a remarkable 1933 calculation by H. M. James and A. S. Coolidge, using hand-cranked mechanical calculators, that matched experiment.

I would like to show you the molecular orbitals of H2, because (a) they're important, and (b) I can't escape them; they bring to me new chemistry at roughly 25-year intervals. The two 1s orbitals of the individual atoms combine in in-phase and out-of-phase fashion to give molecular orbitals called σg and σu*, shown in the accompanying figure. The σ and the subscripts and superscripts on it are labels, symmetry labels; what matters is that σg has no node between the nuclei, while σu* does. That puts σg low in energy, σu* high. And, importantly, σg is a "bonding" orbital: if occupied (as it is in H2), the electrons in it bring the atoms together, whereas σu* is an antibonding orbital, any electrons in it (there are none in an unperturbed H2 molecule, at least in the simplest analysis) pushing the nuclei apart. Interesting that the big guys, the massive nuclei, move where the small electrons tell them to move.

H2 has two molecular orbitals, σg and σu*. Black means one phase of the wave function, white another. The σg orbital has no node between the atoms, putting it at a lower energy than σu*, which has a node. In the ground state of H2, σg is occupied by two electrons.

Treat Me Right

I had a small, almost disastrous encounter with the molecule again, right after earning my Ph.D. Well, spiritually, not materially. An approximate molecular orbital method I and some fellow theoreticians working with W. N. Lipscomb had devised, called the "extended Hückel" theory, did well on some larger, organic molecules, giving reasonable geometries and relative energies. But when I tried it on dihydrogen, the molecule collapsed—the calculated internuclear distance going to zero. That was a shock. It took some courage to go on with a method that could not get right (for good reasons, as we found out) the simplest molecule in the world. Or, just maybe, this small apparent disaster helped. For it made me and my students rely less on numbers than on understanding. On we did go, and got a lot of chemistry with this deficient method.
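The unit bookkeeping in the "A Diatomic Molecule" section above is easy to check. A minimal sketch; the conversion factor (1 eV per molecule ≈ 23.06 kcal/mol) and the specific heat of water are standard values, not taken from the article:

```python
# H2 dissociation energy, as quoted in the article: 4.48 eV per molecule.
d_e_ev = 4.48

# Standard conversion: 1 eV per molecule is about 23.06 kcal per mole.
kcal_per_mol = d_e_ev * 23.06
print(round(kcal_per_mol))  # about 103, i.e., the article's ~104 kcal/mol

# Heating 1 liter (1 kg) of water from ~20 C to 100 C:
# 1 kcal warms 1 kg of water by 1 degree C, so 80 degrees takes ~80 kcal.
heat_one_liter = 80.0

# How many liters of water could that dissociation energy bring to a boil?
print(round(kcal_per_mol / heat_one_liter, 2))  # about 1.3 liters
```

That ratio of about 1.3 is the article's "one and a quarter liters," to within rounding.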
A Lousy Acid, a Lousy Base
Molecular hydrogen is pretty unreactive, as is methane. Hydrogen burns, of course (with a flame that is nearly colorless but very, very hot). But to get it to burn you need a match, even though the reaction to form water, Cavendish's and Lavoisier's reaction, gives off ~68 kcal/mol of dihydrogen burned. That's chemistry: Things that should spontaneously proceed by the dictates of thermodynamics (like hydrogen burning) actually encountering substantial barriers to doing so.

Chemical reactivity is predominantly that of acids and bases—that is why we spend so much time in introductory chemistry on this property of molecules. A base (ammonia, for example) is a good donor of electrons; in MO terms it has an energetically high-lying filled molecular orbital. An acid (the hydronium ion, the aquated proton, H3O+) is a good acceptor of electrons, as it has a low-energy empty MO. Hydrogen has an occupied MO, just one; you've seen it—it's the σg in the MO picture of the molecule (see figure at left top). That MO lies low in energy; H2's ionization potential, a measure of the energy of that MO, is large, 15.4 eV. And H2's lowest unoccupied MO, σu*, is relatively high lying—to promote an electron from the filled MO to the unfilled one takes ~11 eV. Put into plain English, the hydrogen molecule is a lousy base and a lousy acid. The molecule is then relatively unreactive, even as it burns giving off a good bit of heat. Other molecules lack a good handhold, so to speak, on H2.

Döbereiner's Feuerzeug
H2 (right) is normally quite unreactive, but the introduction of an appropriate transition metal-ligand fragment (left) can induce it to bind and form a new compound. In this case, the organometallic ML5 fragment contains an acid function with an empty metal-based orbital (dz2) and a base function with a filled orbital (dxz). The dz2 interacts with the σg molecular orbital of H2, its base function, and dxz interacts with the σu*, the acid function of H2. The dashed lines indicate the orbitals involved in stabilizing interactions or bond formations.
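The H2 orbitals that enter this diagram are the in-phase and out-of-phase combinations of the two 1s orbitals described earlier. In the simplest LCAO form (a standard textbook expression, not one spelled out in the article), with atomic 1s orbitals χA and χB and their overlap integral S:

```latex
\sigma_g = \frac{\chi_A + \chi_B}{\sqrt{2(1+S)}}\,, \qquad
\sigma_u^{*} = \frac{\chi_A - \chi_B}{\sqrt{2(1-S)}}\,, \qquad
S = \int \chi_A\,\chi_B \, d\tau
```

The sum has no node between the nuclei; the difference changes sign midway between them, which is the node that puts σu* above σg in energy.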
Yet hydrogen was known to react all along with some metal surfaces. In another column (American Scientist 86:326–329 [1998]) I recounted how Johann Wolfgang Döbereiner discovered in 1823 that hydrogen burned on a platinum surface. This is the first well-characterized catalytic reaction. Döbereiner did not know that there were molecules in his hydrogen gas (generated the same way I did as a boy, from Zn plus an acid, sulfuric acid in his case). And, of course, he did not know in atomistic detail how those H2 molecules fell apart on his Pt surface, and how they combined with oxygen from the atmosphere. Döbereiner made a Feuerzeug, a source of fire based on hydrogen, that became a household firelighting tool for half a century.
By the 1980s there emerged evidence for weak “complexation” (binding) of
hydrogen with various metal atoms. And surface scientists were piecing together the mechanism of Döbereiner's seeming magic. Elsewhere, organometallic chemists found some reactions in which hydrogen molecules added to a metal center, the two hydrogen atoms splitting apart in the process. Experimentalists and theorists began to view the seeming chemical inertness of dihydrogen as a challenge rather than dogma.

In 1984 Jean-Yves Saillard, a French postdoctoral associate (now at the University of Rennes), and I did a careful study of the interactions of hydrogen and methane with discrete transition metal centers with associated ligands. These MLn (M is a metal atom, L a ligand, say CO or PH3, n the variable number of such ligands) fragments, if carefully chosen to be good bases and acids at the same time, could, in our approximate calculations, bind dihydrogen. The molecular orbital essence of our argument is shown in the figure at lower left; a similar picture and interpretation is there in earlier work of three Alains—Dedieu, Strich and Sevin.

A small interlude here on so-called interaction diagrams, which is what you see in the figure at lower left on the previous page. These diagrams, my professional bread and butter, show the interaction of the important orbitals of two pieces of a molecule (when it can be taken apart into pieces). That's the way we build understanding, putting together, in LEGO style, the orbitals of a more complex molecule from simpler pieces. The L5M(H2) molecule in the middle (at that time unknown, at least to us) is built from two simpler pieces—an ML5 fragment at left, and my old friend H2 at right. The orbitals of H2 are easy—you've seen them above, the σg MO, with both of the 1s orbitals of the component H atoms in-phase, at low energy; the σu* MO, unfilled by electrons, at high energy. On the other side are orbitals of the ML5 fragment, mostly on the metal.
They are more complicated (the metal has important 3d orbitals), but the essential feature is that there are orbitals on the metal filled with electrons and some that are empty, and these match in symmetry and overlap reasonably well with the orbitals on the H2. The dashed lines in the figure guide us to just these stabilizing interactions. www.americanscientist.org
Greg Kubas and his colleagues at Los Alamos National Laboratory were the first to synthesize a dihydrogen complex. The atoms represented are hydrogen (green), tungsten (light brown), phosphorus (purple), oxygen (red) and carbon (silver). Some hydrogens bonded to the carbons in the molecule are omitted for clarity. (Image by Brian Scott and Josh Smith, courtesy of Los Alamos National Laboratory.)
Here's what happens in this theoretical analysis: The acid function of the ML5 fragment (its empty orbital, called dz2) interacts with σg, the base function of H2; the base function of ML5 (a filled dxz orbital) interacts with σu*, the acid function of H2. (Did I not say that there is a reason for all that seeming torture on acids and bases in first-year chemistry?)

Importantly, there are consequences for the strength and length of the H2 bond as a function of the interaction: As a result of the mixing of MOs of ML5 with those of H2, some electrons are transferred from the σg orbital of H2, depleting its bonding density. And some electrons are transferred in the opposite direction, from ML5 to the H2 σu* orbital. Both actions—decreasing bonding, increasing antibonding—will stretch the H-H distance, even as they overall bind H2 to MLn. The figure is for ML5, but the reasoning extends to other numbers of ligands bound to the metal.

Saillard and I made no prediction of specific molecules. What we did not know when we did our work is that the first such "complex" had just been made. Greg Kubas at Los Alamos had synthesized (and with no nuclear reactions involved) the molecule shown in the figure above. It was followed over the years by a significant group of dihydrogen complexes, even ones in which the metal held more than one hydrogen molecule. In time the H-H distance in these molecules was determined accurately
(one needs neutron diffraction for that; metric information also comes from nuclear magnetic resonance studies). Kubas understood very well what was going on—his qualitative thinking about what bound H2 in his molecules, quite independently conceived, was similar to ours. But what fun for us! A theoretical idea about how a molecule could bind—and not just any molecule, but normally inert hydrogen—translated into reality! We were happy. And Kubas deserves all the credit, because science is ultimately about the reality of a compound in hand—theories come and go, the molecule is there. The First Element under Pressure
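The stabilization marked by the dashed lines in interaction diagrams like the one described above is conventionally estimated with second-order perturbation theory. The expression below is that generic textbook estimate, not a formula quoted in the article: for a filled orbital φi of energy εi mixing with an empty orbital φj of energy εj (with εj > εi),

```latex
\Delta E \;\approx\; -\,\frac{\bigl|\langle \varphi_i \mid \hat{H} \mid \varphi_j \rangle\bigr|^{2}}{\varepsilon_j - \varepsilon_i} \;<\; 0
```

Both dashed-line pairings are of this filled-with-empty type: the filled σg of H2 with the empty metal dz2, and the filled metal dxz with the empty σu* of H2. Each mixing stabilizes the complex, and each also moves electron density in the direction that stretches the H-H bond.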
In the past few years, my colleague Neil Ashcroft and I have had a fruitful collaboration on the response of molecules and extended structures to extreme pressure. Three years ago we returned to a first love of Neil’s, hydrogen. In this we were joined by a talented French postdoc, Vanessa Labet. Experimentally, one can learn much about matter under pressure (see “The Squeeze Is On,” American Scientist 97:108 [2009]) from studies in diamond anvil cells, where in a small reaction volume, between two tough diamonds and enveloped by (one hopes) an unreactive metal, a sample of matter is compressed. At what pressure solid, cold hydrogen (yes, hydrogen freezes, at 14 degrees kelvin) metallizes is the subject 2012
of hot, current dispute. But some things people agree on—solid hydrogen retains molecular diatomic units up to pressures such as those at the center of the Earth (3.5 million atmospheres). And from a spectroscopic measurement one can even deduce the internuclear distance in the confined diatomic. As the pressure rises, the H-H equilibrium separation contracts a little, then begins to stretch. The magnitude of the excursion is small, less than 2 percent of the 0.74 Å separation.

When H2 is put under pressure, the intramolecular distance changes but does not necessarily decrease. The vertical lines mark changes in packing, where small discontinuities in structure occur. The "error bars" are actually spreads, showing a range of distinct distances in the structure. Depending on the interactions between orbitals and the exchange of electrons, that distance may actually increase.

There are places in physics and chemistry where theory can afford a clearer picture of a phenomenon, and matter at extreme conditions is one such place. If one can trust the theory.… Vanessa Labet had a numerical laboratory at her disposal of the best structures calculated for compressed H2 by Chris Pickard and Richard Needs. We used that laboratory to get physical insight, to reason out why hydrogen did what it did. The figure above shows the small dance the calculated shortest, intramolecular H-H distance does with pressure—it goes down a little, up for a while, down again, up, down. The discontinuities, the jags in the curve, are understandable—they are the consequence of abrupt changes from one preferred form to another, so-called phase transitions. The calculations matched experimental findings pretty well. But what was behind the small dance steps?

We first thought about the effect of confinement, one hydrogen molecule simply squeezed by other hydrogen molecules in that tense space. Now a model for that was already there in earlier work of Dudley Herschbach and Richard LeSar. They looked at the energy levels of H2 confined in a rigid spheroidal box, as the dimensions of the box decreased. As one might expect, the internuclear separation responded by decreasing. Labet probed confinement by a slightly softer box, a hydrogen molecule imprisoned between two helium atoms, the most ungiving chemical walls we could think of. The earlier results were confirmed—such confinement only made the H2 distance contract. What else could it do?

But that's not what our numerical laboratory and experiment showed; in a real and modeled crystal of H2, the hydrogen molecule shrank, expanded, expanded some more, shrank. By just a little. What could possibly make it grow longer? As it was squeezed? At this point I remembered Kubas's wonderful organometallic complexes. In them the coordinated hydrogen molecules expanded, to 0.82–0.89 Å in length. And from the work Saillard and I did, we knew why! The metal fragment provided electrons to populate hydrogen σu*, depopulate σg, both weakening the H-H bond.

In compressed hydrogen, at pressures approaching those at the center of the Earth, there were no metals in sight. But under these extreme conditions, could other hydrogen molecules around a given H2 possibly play that role? We looked at the population of the molecular orbitals of a given molecule, and sure enough the effect was there. Model calculations confirmed that the little dance of H-H separations with pressure that experiment and theory observe in dense, cold H2 was the outcome of two competing effects: simple physical confinement, and the chemical effect of the molecular orbitals of confined and confining molecules interacting, mixing, transferring electrons, stretching that bond. I love it—the same bonding that occurs in discrete transition metal organometallic molecules is there in a highly compressed crystal of pure H2.

One World
The first element, the simplest diatomic molecule there is—what could be simpler? Hold on—an H2 molecule in solid H2 under pressure, an H2 molecule approaching a Pt surface in Döbereiner's firelighter, the H2 bubbling out of the solution of a 13-year-old boy playing with slightly dangerous chemicals in a Brooklyn apartment, the H2 in transition metal complexes Greg Kubas saw for the first time in the world—of course, each is different, peculiar, set apart by its conditions of generation and preservation. But there can't be different rules of nature operating for one H2 and not the other. The joy is in seeing the connections.

Acknowledgment
I am grateful to Neil Ashcroft and Vanessa Labet for making me think about something I never thought I'd be working on again, or that I could imagine there was something left to learn about. As there is.

Bibliography

Kubas, G. J., R. R. Ryan, B. I. Swanson, P. J. Vergamini and H. J. Wasserman. 1984. Characterization of the first examples of isolable molecular hydrogen complexes, M(CO)3(PR3)2(H2) (M = molybdenum or tungsten; R = Cy or isopropyl). Evidence for a side-on bonded dihydrogen ligand. Journal of the American Chemical Society 106:451–452.

Labet, V., R. Hoffmann and N. W. Ashcroft. 2012. A fresh look at dense hydrogen under pressure: 3. Two competing effects and the resulting intramolecular H-H separation in solid hydrogen under pressure. Journal of Chemical Physics 136:074503.

Levi, P. 1984. The Periodic Table, trans. R. Rosenthal. New York: Schocken Books.

Saillard, J.-Y., and R. Hoffmann. 1984. C-H and H-H activation in transition metal complexes and on surfaces. Journal of the American Chemical Society 106:2006–2026.

Salem, L. 1987. Marvels of the Molecule (Molécule, la merveilleuse), trans. James D. Wuest. New York: VCH.
American Scientist, Volume 100
Science Observer
Cracking with Electricity

Faults seem to give off a warning signal before they slip

Troy Shinbrot is no stranger to research that defies standard beliefs. The Rutgers University biomedical engineer focuses on grains and powders, specifically how they mix and gain electric charges. A few years ago, this specialty led him to work on what’s called the Brazil nut effect: In a group of particles of different sizes (such as a container of mixed nuts), shaking makes the larger ones (the Brazil nuts) rise to the top. Common wisdom was that small particles could slip below larger ones, leaving the big ones on top with nowhere to go. Other researchers found that the grains would “convect,” rising in the center and sinking at the edges. Large particles would lift up with the whole bed and then get stranded on top, unable to fit into the narrow margins at the sides. Shinbrot and his colleagues, however, found that large, lightweight particles would sink instead of rising, a phenomenon dubbed the reverse Brazil nut effect—but only when the grains were vibrated above a certain frequency that makes the bed “fluidize.” Then the mixture behaves like a liquid with typical buoyancy characteristics, so light objects rise while heavy ones sink. The results were so counterintuitive that reviewers of the paper thought they were impossible. An editor of the journal Physical Review Letters tested it out for himself and confirmed Shinbrot’s findings, but was still nervous. “He called me up and wanted to make sure that I wasn’t playing a joke on him,” Shinbrot says. “He was a little anxious that there was something funny going on that would make him regret publishing the paper.” Two years ago, Shinbrot again succeeded in convincing a journal that his unexpected results in another study were not spurious. In sandstorms, volcanic ash plumes and dust clouds
in food, drug or coal processing, the grains spontaneously generate strong electrical charges and can sometimes emit flashes or even explode. But the grains themselves are inert, so it’s hard to understand how they could charge. Shinbrot and his colleagues proposed a mechanism whereby particles are initially polarized by an external electric field. When they collide in a cloud, the contacting sides cancel their charges, leaving one particle with an overall negative charge and the other with a positive charge. Once they separate, the external field polarizes the grains again, adding one unit of charge to each particle with each collision. But Shinbrot’s results went so against accepted theories of electrostatic charging that for several years he considered the results to be unpublishable. “Most physicists view these problems as solved; they figure the chapter is closed,” Shinbrot says. “They don’t
look at the history and recognize that there are still many open subjects.” Now Shinbrot has made a discovery that he freely admits is very strange and hard to understand, but he is pretty sure his results aren’t mistaken. The problem started with powders destined for pharmaceuticals, which electrostatically charge during processing and stick to surfaces. “So there are problems where you want to mix two powders, and one of them might charge and one of them might not, or they might charge differently,” Shinbrot says. “That can cause them to separate, which is a severe concern when you want to have known amounts of drugs in each tablet.” Because of his prior interest in dust storms, Shinbrot was aware that electrical discharges from moving powders were possible. He had also heard about visible flashes being reported at the time of earthquakes. “We had this instrumented tumbler, and these instruments for measuring charge,” he says. “To my knowledge nobody had put these two pieces of equipment together, and I just thought ‘well, I wonder what will happen.’”
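The polarize-collide-separate cycle Shinbrot's group proposed lends itself to a toy simulation. This is my own minimal sketch of the idea, not Shinbrot's model: each collision transfers one unit of charge between a randomly chosen pair of grains, so the cloud stays neutral overall while individual grains charge up.

```python
import random

# Toy model (not Shinbrot's code) of collisional charging: an external field
# polarizes each grain; on contact the touching faces neutralize, leaving the
# pair with equal and opposite net charges; re-polarization after separation
# repeats the cycle, adding roughly one unit of charge per grain per collision.

def charge_cloud(n_grains: int, n_collisions: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    charges = [0] * n_grains
    for _ in range(n_collisions):
        a, b = rng.sample(range(n_grains), 2)  # a random colliding pair
        charges[a] += 1   # one grain keeps the extra charge...
        charges[b] -= 1   # ...its partner is left with the opposite sign
    return charges

charges = charge_cloud(n_grains=100, n_collisions=10_000)
print(sum(charges))                    # prints 0: total charge is conserved
print(max(abs(q) for q in charges))    # yet individual grains charge up strongly
```

The point of the sketch is the asymmetry: the cloud as a whole never charges, but repeated collisions drive individual grains to ever larger charges of either sign, which is what makes discharges possible in a bed of nominally inert particles.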
As a bed of powder (shown in false color) slides side to side, cracks open and close at repeatable locations. A voltage probe moving with the chamber and focused on a single crack consistently recorded a negative voltage signal a few seconds before each opening of the crack. (Image courtesy of Troy Shinbrot/PNAS.)
And he was not expecting what did happen. “It seemed so utterly implausible,” Shinbrot recalls. He and his colleagues reported their results in the June 11 online edition of the Proceedings of the National Academy of Sciences of the U.S.A. As the powder tumbled, before a large slab of it fractured off in what is called a slip event, the material emitted a detectable negative voltage. In other words, an electric warning signal occurred before the slip happened. Although it’s not a surprise that cracks and slips occur in a tumbling tub of powder, could such signals predict an earthquake, where the soil is also essentially a compacted powder? “We realized there were conceivable connections with geologic events,” says Shinbrot. To firm up their results, Shinbrot and his colleagues used three different setups: the tumbler, a bed that simply tipped and one that slid from side to side (called a shear cell). In all cases, they tried to control for electrostatic charging in every way possible. “We increased the humidity, we used different materials, we used a static eliminator, we tried to measure in different regions,” he says. “We don’t know exactly how static charging would creep in, but I’m not going to rule it out completely.” However, he is pretty sure the electrical signal is coming from the crack itself, not from some kind of static discharge. One piece of evidence is that the type of bed used seems to affect how far in advance the electrical signal precedes the slip event. The signal seems to be emitted when a precursor defect opens within the material. “If you just take a bucket of powder and tip it, you’ll get a precursor of a half second or so,” Shinbrot says. “If you put it in a tumbler, you might get that same precursor, but it doesn’t work its way from deep in the bed up to the surface and produce a visible effect for five seconds or so.” Additionally, the location where the researchers placed the electric probe affected the amount of signal received.
“It seems like cracks start at a particular location in the bed, and that’s consistent with our speculation that maybe the cracks are what are producing the voltage,” Shinbrot explains. “But why the cracks start there we have absolutely no clue.” In the shear cell, which Shinbrot filled with ordinary flour, he and his
Seen end-on and in false color, a tumbler of powder rotates counterclockwise (circular arrow). A train of defects (black arrowheads) emanates from one point in the tumbler. A voltage signal is emitted shortly before the defects slip, and the signal is strongest if a probe is placed at the point from which the defects emanate (white dot). (Image courtesy of Troy Shinbrot/PNAS.)
colleagues could watch the electrical signal happen repeatedly as the cell slid from side to side and the same crack opened and closed. “But it still remains very strange that you can put a powder like flour into a container and get 200 volts out of it,” Shinbrot says. Although Shinbrot cannot explain why cracks produce voltages, he theorizes that the mechanism is related to the dilation of grains before a slip event. “If you think of a stack of marbles, they’re all sort of interlocked because they’re all sitting against one another,” he explains. “If you try to move one, you actually have to lift it up over a little hump before it can flow.” The effect, Shinbrot says, seems to be similar to other unusual behavior in everyday materials. It has been reported for some time that when transparent tape is peeled off its roll, it emits light at the point of separation. Biting a wintergreen Lifesaver also produces a flash, with some of the energy large enough to produce x rays. Shinbrot doesn’t yet know whether the electrical warning signal will be as clear if the grains are not all the same size. When the cracks are of a more jagged shape and less clearly defined, the signal is also affected. Both of these factors might impact the phenomenon’s usefulness for something like earthquake prediction. Shinbrot’s next move is to scale up the test bed to a meter or two, as a first step in determining whether the effect might happen at all on a geologic scale. It’s possible that an increased area could decrease the stress on the grains and drop the signal—or the opposite
could occur, and with more areas of contact to be broken the result could be magnified. “If the effect grows with size, then we’ll try to collaborate with geophysicists and look at larger scale systems,” he says. “If it decreases with size, then we’ll just say ‘well this was an interesting trip,’ and we’ll go on with something else.” Assuming there is a relationship between the group’s results and seismic events, Shinbrot hopes to coordinate the work with other indicators. For instance, earthquakes are known to emit acoustic signals, and Shinbrot plans to explore any possible connection between them and the electrical discharges. But he faces some unusual challenges in this area, for which his history of unconventional studies may have prepared him: “If you look up ‘earthquake lightning,’ you’ll find an equal number of websites that talk about it having something to do with UFOs or government conspiracies or whatnot, as authentic scientific research. We don’t want to be tarred by the same brush. But it made this topic a very interesting one to study. In the past 10 years there have been some serious scientific studies, and I think there is hope that in the next 10 years this may get on a firmer scientific footing.” It seems likely that no matter what, Shinbrot will persevere in finding the exceptions in physics that show that the field is still full of surprises. As he says, “That’s what makes it fun.” —Fenella Saunders
In the News
In this roundup, Fenella Saunders summarizes notable recent items about scientific research, selected from news reports compiled in the free electronic newsletter Sigma Xi SmartBrief. Online: https://www.smartbrief.com/sigmaxi/index.jsp
Riding Raindrops

A raindrop can weigh 50 times more and travel 10 times faster than a mosquito, but the insects can still fly through a downpour without damage. Researchers used high-speed video of captive mosquitoes subjected to a water jet to confirm this finding. A mosquito’s low mass and strong exoskeleton cause the drops to lose little momentum when they collide in midair, so the bugs receive small impact forces. The researchers also found that the insects go with the water flow, sticking to the front of the drop for up to 20 body lengths, but their long legs and wings provided enough drag for them to rotate free before hitting the ground. (Image courtesy of David Hu.) Dickerson, A. K., et al. Mosquitoes survive raindrop collisions by virtue of their low mass. Proceedings of the National Academy of Sciences of the U.S.A. 109:9822–9827 (June 19)

Cooler Hydrogen

Hydrogen is considered an alternative energy source, but one major means of obtaining it—splitting water molecules—requires a lot of electricity, which is often generated by fossil fuels. Researchers have developed a way to split water using heat and catalysts, but at relatively low temperatures and without corrosive intermediate products. They use manganese oxide and shuttle sodium ions in and out of it while heating, which drives off oxygen that bonds with the oxygen in added water. With the sodium, temperatures could be kept to around 850 degrees Celsius instead of over 1,000 degrees. The researchers found that the materials could be reused at least five times, but more surface area and higher proven reusability rates would be needed to scale up this process to an industrial level. They also hope to get the required temperatures down to the point where waste heat from steel mills or power plants could be used to run the process. Xu, B., et al. Low-temperature, manganese oxide-based, thermochemical water splitting cycle. Proceedings of the National Academy of Sciences of the U.S.A. 109:9260–9264 (June 12)

Artificial Rat-Jellyfish

Using a flower-shaped piece of silicone and muscle cells from a rat heart, researchers have built a synthetic jellyfish. When placed in an electric field, the petals convulse downward and the device pulsates forward, swimming much like its natural namesake. Researchers mapped the cells of juvenile moon jellies (Aurelia aurita) and found that electrical signals spread through the animals’ muscles in a smooth wave as they swam. They grew a single layer of rat heart muscle on a patterned polymer membrane to mimic the contraction pattern. The investigators built the device to better understand the fundamental workings of muscular pumps, and think it could be a platform to test medications that aim to improve heart-pumping activity. (Image courtesy of Harvard University and Caltech.) Nawroth, J. C., et al. A tissue-engineered jellyfish with biomimetic propulsion. Nature Biotechnology (published online July 22)

Do Not Pass

Planets outside of our solar system (dubbed exoplanets) are usually studied when they transit, passing in front of or behind their parent stars. One of the first exoplanets discovered, called Tau Boötis b, does not transit as viewed from Earth. Now high-resolution spectroscopy from a telescope in Chile has captured light from the exoplanet for the first time, and measurements of the planet’s carbon monoxide absorption have been made. From these readings, astronomers have calculated that the planet is orbiting at an inclination of about 44 degrees and has a mass about six times that of Jupiter. The technique could be used to detect atmospheres on other exoplanets that do not transit. (Image courtesy of ESO/L. Calçada.) Brogi, M., et al. The signature of orbital motion from the dayside of the planet Tau Boötis b. Nature 486:502–504 (June 28)

Toothsome Diets

A trip to the dentist usually involves getting rid of plaque from teeth, but such buildup on ancient human fossils has been a boon to researchers studying early hominin diets. Analysis of the tartar on teeth of a 2-million-year-old hominin called Australopithecus sediba shows that they ate leaves, fruits and bark, suggesting they lived in a woodland environment. Dental wear and carbon ratios in the tooth chemistry supported the findings. Previously described diets of other early hominins pointed to an open savannah habitat. This was the first time that tartar has been found in such an ancient hominin. It is possible that the individuals lived during a time of drought and were forced to eat such foods because of lack of other resources. (Image courtesy of Amanda Henry.) Henry, A. G., et al. The diet of Australopithecus sediba. Nature 487:90–93 (July 5)
American Scientist Centennial, 1913–2012
The Big Picture

This installment is the second in this year’s centennial celebrations to feature American Scientist’s illustrations, one of the magazine’s most defining features. Examples have been selected to highlight the range of illustrations created to accompany articles published within the past two decades, when the magazine’s visual tradition truly bloomed. Staff and freelance artists have created illustrations to communicate the detail of research in ways that words alone could not. Their handiwork presents science in all its complex and beautiful forms while allowing subjects to be displayed and understood in original and accessible formats.

“Prokaryotes,” November–December 1999, shows the levels of subcellular organization in a typical Escherichia coli cell. Artist David Goodsell, art director Linda Huff.
“Why Ravens Share,” July–August 1995, illustrates posture and feather configurations related to a bird’s status. Artist and art director Linda Huff.
“Group Theory in the Bedroom,” September–October 2005, detailed patterns for rotating car tires. Artist Brian Hayes, art director Barbara Aulicino.
“Virtual Fossils from 425 Million-year-old Volcanic Ash,” November–December 2008, reconstructed an ancient arthropod with digital imaging. Computer-generated images by Derek E. G. Briggs, Derek J. Siveter, David J. Siveter, Mark D. Sutton and assembled by Barbara Aulicino. Art director Barbara Aulicino.
“Secrets in the Shell,” September–October 2007, revealed the criss-crossing layers that give strength and flexibility to a conch shell. Artist Stephanie Freese, art director Barbara Aulicino.
“Fuel Efficiency and the Economy,” March–April 2005, shows solenoid-actuated valves that can adapt to changing engine conditions. Artist Tom Dunne, art director Barbara Aulicino.
“Tides and the Biosphere of Europa,” January–February 2002, imagined the biological niches created by cracks in the icy crust. Artist Barbara Aulicino, art director Tom Dunne.
“First Life,” January–February 2006, pictured bubbles around ancient thermal vents. Artist Tom Dunne, art director Barbara Aulicino.
“Perceptual Pleasure and the Brain,” May–June 2006, highlighted the overlap between visual and pleasure centers of the brain. Artist and art director Barbara Aulicino.
“How Gecko Toes Stick,” March–April 2006, caught geckoes recovering from a fall. Artist Tom Dunne, art director Barbara Aulicino.
“Group Decision Making in Honey Bee Swarms,” May–June 2006, visualizes a bee communicating via vibrations. Artist Stephanie Freese, art director Barbara Aulicino.
“Statins: From Fungus to Pharma,” September–October 2008, explains the inflammation process of atherosclerosis. Artist and art director Barbara Aulicino.
“Elasticity in Arteries,” November–December 1998, peels apart the structures that allow arteries to expand and contract. Artist Tom Dunne, art director Linda Huff.
“The Molecular Anatomy of an Ancient Adaptive Event,” January–February 1998, compares evolutionary changes in amino acids. Artist and art director Linda Huff.
“Bird Song and the Problem of Honest Communication,” March–April 2008, portrays a courting song sparrow. Artist and art director Barbara Aulicino.
“A New Urban Ecology,” September–October 2000, graphs the probability of urban development. Artist Barbara Aulicino, art director Tom Dunne.
[Chart: for each of nine regions (Northern America; Latin America; Western Europe; Eastern Europe; Northern Africa; Southern Africa; the Middle East; the Asiatic region; Oceania), the figure lists the member countries and plots the region’s world share of papers, averaged for 1991–1998, alongside its papers per 100,000 people for 1998.]
“Scientific Publication Trends and the Developing World,” November–December 2000, compares two measurements for scientific paper publication worldwide. Artist Barbara Aulicino, art director Tom Dunne.
[Diagram: carbon dioxide in the atmosphere is absorbed by the ocean; globally, phytoplankton produce 16 gigatons of carbon at the ocean surface; through grazing, feeding and decay, organic carbon sinks as marine snow, on which microbes feed; limited food supports diverse organisms on the seafloor, and globally only 3 percent of the carbon reaches the deep seafloor (depth scale, 500 to 4,500 meters).]
“An Empire Lacking Food,” November–December 2010, tracks how food from the sea surface reaches the ocean bottom. Artist Tom Dunne, art director Barbara Aulicino.
“Brain Plasticity and Recovery from Stroke,” September–October 2000, gave examples of finger manipulations used to assess neural activity in stroke patients. Artist and art director Tom Dunne.
“Amber’s Botanical Origins Revealed,” March–April 2007, displays all the reasons that trees produce resin. Artist Emma Skurnick, art director Barbara Aulicino.
“The Uniqueness of Human Recursive Thinking,” May–June 2007, depicts René Descartes as a mental time traveler. Artist Tom Dunne, art director Barbara Aulicino.
Graphene in High-Frequency Electronics

This two-dimensional form of carbon has properties not seen in any other substance

Keith A. Jenkins
Keith A. Jenkins is a research staff member at the IBM Thomas J. Watson Research Center. He received a Ph.D. in physics from Columbia University in 1978 for work in high-energy physics. He continued his research at the Rockefeller University before joining IBM, where he has investigated a variety of semiconductor device and circuit subjects, including high-frequency performance of devices, radiation-device interactions, low-temperature electronics and silicon-on-insulator self-heating effects. His current activities include investigating the use of graphene for high-frequency integrated circuits, and developing on-chip circuits for in situ measurement of timing jitter, power supply transients, device variability and circuit reliability. Address: IBM T. J. Watson Research Center, Yorktown Heights, NY 10598. E-mail: [email protected]

The Nobel Prize in Physics for 2010 was awarded jointly to Andre Geim and Konstantin Novoselov of the University of Manchester “for groundbreaking experiments regarding the two-dimensional material graphene.” Essentially it was awarded for the discovery in 2004 of the form of carbon known as graphene, which led to an explosion of experimental and theoretical work with the material around the world. It is remarkable not only that the Nobel Prize was given for the discovery of a material (rather than for the elucidation of some physical principle), but also that the material was already present in one of the most common substances in human history, and that the prize was awarded such a short time after the discovery. In part, the speed of this recognition is due to the amazing excitement created by Geim and Novoselov’s findings. According to Geim, several technical papers are published every day on the subject of graphene. This enthusiasm arose from the quick realization that graphene has many remarkable physical properties not seen in any other material.

What is graphene and what are its properties? It is a two-dimensional form of carbon, a single layer of carbon atoms, in which the atoms are arranged in a hexagonal chicken-wire or honeycomb configuration. It is found in nature and is simply one of the layers of the common substance graphite. Each layer of graphene stacked up to make graphite is only loosely bonded to the others. The layers adhere only by van der Waals forces, weak dipole-to-dipole attractions between adjacent molecules, rather than by stronger covalent bonds, in which the molecules share electrons. This loose bonding explains why graphite is a good lubricant. Technically, graphene had been known since X-ray crystallography uncovered graphite’s structure in the early 1900s, but it was not isolated into individual planes of graphene until 2004. And there was little interest in graphene by itself until Geim and Novoselov were the first to detail its properties.

Graphene is one of very few two-dimensional materials with a crystalline structure. Its physical layout and electronic properties result in many remarkable characteristics. It is an excellent heat conductor. It is flexible, yet 10 times stronger than any other measured material, at equivalent thicknesses. It is a good absorber of light over a wide spectrum, yet it is effectively transparent. (This dichotomy is not a paradox: A layer of graphene absorbs 2.3 percent of the light impinging on it, which is a lot for a single atomic layer, but still allows most light to pass through.) It is fairly chemically inert, yet when appropriately treated it is a sensitive electrical detector of very small concentrations of chemicals. And it has very high electron mobility, allowing the transit of electrical charges through the material at a rate that is at least 100 times greater, in its purest state, than in conventional semiconductors used in electronics, such as silicon or gallium arsenide.
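The "absorbing yet transparent" point is easy to check numerically. A back-of-envelope model (my simplification, ignoring reflection and interlayer optical effects) treats each layer as absorbing 2.3 percent of the light reaching it independently, so a stack of N layers transmits about (1 - 0.023)^N of the incident light.

```python
# Transmission through a stack of graphene layers, assuming each layer
# independently absorbs 2.3 percent of the light reaching it.
# A rough model: reflection and interlayer optical effects are ignored.

ABSORPTION_PER_LAYER = 0.023

def transmission(n_layers: int) -> float:
    return (1.0 - ABSORPTION_PER_LAYER) ** n_layers

for n in (1, 10, 100):
    print(n, round(transmission(n), 3))
# A single layer passes 97.7 percent of the light; even 10 layers pass
# about 79 percent; a 100-layer stack passes under 10 percent.
```

So a sheet you can nearly see through is, per atom, an extraordinarily strong absorber.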
Graphene in Transistors The property of very high electron mobility has excited the electronics community and led to countless predictions. Our apparently insatiable need for speedier computers and greater data throughput in wireless devices has resulted in a universal acceptance that electronic devices and circuits must be made to operate ever faster. The traditional way of achieving this result has been by reducing the size of transistors. Transistors are made with the class of materials called semiconductors, which conduct electricity under some conditions but prevent its flow in others. That property lets the transistor act as an on/ off switch for the binary signals used in computers. The quantum mechani-
American Scientist, Volume 100
American Scientist
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
M q MQmags q
THE WORLD’S NEWSSTAND®
American Scientist
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
M q M q
M q
M q MQmags q
THE WORLD’S NEWSSTAND®
Lawrence Berkeley National Laboratory/Science Photo Library
Figure 1. A transmission electron micrograph (TEM), made by shooting a beam of electrons through a material, shows the honeycomb arrangement of atoms in a flake of graphene. This one-atom-thick crystalline form of carbon is both flexible and strong, and conducts electricity faster than silicon and other conventional semiconductors. Scaling graphene sheets up to a usable size, as well as figuring out the engineering details required to make the material work in integrated circuits, has been an ongoing challenge that is now starting to show some results.
cal energy bands that form when atoms are bound together in a crystalline solid make it possible for currents in semiconductors to be carried by both positively charged and negatively charged particles. The negatively charged particles are electrons and the positively charged particles are called holes, implying a place where an electron is missing. Materials that conduct electrons are called n-type (for negative-type) and ones that conduct holes are called p-type (for positivetype). Both n-type and p-type materials can be used in transistors. The field-effect transistor (FET) is the most common type used in modern www.americanscientist.org
computers. A gate electrode at the center of the device is separated from the semiconductor surface by a thin insulating layer. A current of electrons or holes passing between the FET’s two terminals, called the source and the drain, is controlled by the amount of voltage on the gate. In an n-type FET, for instance, a positive voltage on the gate electrode forms a layer of electrons in the channel region, creating a conducting path in the semiconductor through which current flows. To speed up a FET, the concept is simple: shorten the distance—the channel length—between the source
and drain, and signals will pass between the terminals more quickly. This reduction of dimensions was dubbed scaling by IBM researcher Robert Dennard in the 1970s. (An important side benefit to scaling is that it leads to cramming more transistors into a given area, creating the potential for more functionality on a similarly sized semiconductor.) However, the difficulty of continuing to reduce the size of conventional silicon-based transistors has caused many to think that the era of scaling is near an end. How, then, can we make transistors pass signals more quickly?

Figure 2. A sheet of graphene (left), made up of a single layer of carbon atoms, has been incorporated into a field-effect transistor (FET) (right). Current flows from source to drain (red arrow) through the channel when the correct voltage is applied to the gate terminal (which is insulated from the device with a dielectric material). The channel length is one factor in determining the speed of the transistor. A layer of graphene increases the mobility, the speed at which electrical signals travel through the material when a voltage is applied.

The solution lies in the material of the transistor. Between the source and drain terminals of a transistor is the semiconductor. Although reducing the distance between source and drain increases speed, another way to achieve the same goal is to use a material that conducts electrical signals more rapidly. This is why graphene's mobility is important. Mobility is a measure of the speed with which electrical signals travel through a material when a voltage is applied. It is easy to see that a material with higher mobility transmits signals faster than one of lower mobility if the dimensions are the same. Compared to other semiconducting materials, silicon actually has a relatively low mobility, but its use is widespread because it has many other advantages, such as great mechanical strength and relative ease of manufacture. However, when transistors are destined for certain special applications, such as the high-frequency wireless transmitters and receivers used in cell phones, or specialized electronics such as military communications equipment, they are made from materials that have higher mobility than silicon, such as gallium arsenide (GaAs) and indium phosphide (InP). Graphene transistors may be expected to play a role in these applications because the highest mobility measured for graphene is greater than that of these compounds. Many scientists and engineers around the world have become excited about the possibility of replacing silicon with graphene to make faster transistors and circuits. Even before graphene's properties were fully measured or understood, my colleagues
and I at IBM, as well as other groups worldwide, believed we should jump right in and try to build transistors and circuits in a way that might lead to technology that could eventually be manufactured just as silicon is used today. The U.S. Defense Advanced Research Projects Agency also believed in this goal and supported this work. Not surprisingly, we encountered many difficulties, but the progress has been very rapid.

Making Graphene
Graphite is made of a huge number of sheets of graphene stacked on top of each other. If a single layer of graphite is peeled off, we have graphene. A light pencil mark on paper may actually leave traces of graphene, as pencil "lead" actually contains graphite. But peeling off a single layer of free-standing graphene without having it buckle and fold is not the same as taking a sheet off a stack of paper. Geim and Novoselov first obtained graphene by exfoliation using transparent tape: By repeatedly sticking the tape on graphite and removing it, they were able to sometimes transfer small flakes of graphene to another carrier material. This technique let them obtain enough graphene to do the seminal experiments that led to their Nobel Prize. This ease of producing graphene probably led to the intense and rapid interest in the material: Any university with a chunk of high-quality graphite, a supply of tape and patient graduate students could produce enough of the material to do some interesting experiments! However, the flakes of graphene made this way are tiny: tens of micrometers in width. Such small pieces
are enough for experiments but useless for the real electronics applications that our group had in mind. Modern electronic circuits are manufactured by wafer processing: Thin discs with a diameter of 200 or 300 millimeters, made of silicon or other materials, are treated with chemical and optical steps to make dozens or hundreds of identical circuits simultaneously. For graphene electronics to be practical, graphene has to be made in larger amounts and sizes, and processed into circuits in a way that can be mass-produced. At the moment there are two methods of producing graphene on this scale. Both involve processes to grow a single layer of atoms stretching across many centimeters—enormous for this material. Graphene is a picoscale (300 × 10⁻¹² meters) material in one dimension, but macroscopic in its other two dimensions. One growth method is the formation of an epitaxial layer (where the crystalline structure of the layer aligns to that of the substrate) of graphene on silicon carbide (SiC). At sufficiently high temperatures the silicon–carbon bonds will break and the silicon will evaporate from the surface, leaving exposed carbon atoms. If this event occurs with appropriate gases present, additional carbon atoms will adhere to those on the surface, forming the hexagonal pattern that makes graphene. This method has been used for a few years to produce 50-millimeter wafers with a silicon-carbide support structure covered completely by a layer of graphene, which can be patterned and formed into devices and circuits using many of the processing techniques found in conventional electronics fabrication. Wafers
100 millimeters in diameter are now becoming available, which makes this method even more attractive. Silicon carbide is particularly useful for radio frequency electronics: Unlike silicon, it is an insulator, so unwanted signals will not propagate from device to device through the substrate. Additionally, it is optically transparent. One drawback of epitaxial graphene is that one of its surfaces is fairly tightly bonded to the base, impacting its mobility and making for a transistor that would be slower than one made of free-standing graphene. Another method of producing wafer-scale graphene is growth through the chemical vapor deposition (CVD) of carbon on a catalyst material. Good-quality graphene has been produced using copper as the catalyst. If a wafer covered with copper is exposed to ethylene (C₂H₄) under appropriate conditions, a single layer of graphene forms on it. Of course, copper is a conducting material, and it connects all the graphene together, making it useless for circuits, so the graphene layer must be transferred to an insulating substrate before it can be put into production. The graphene is first coated with a polymer, then the copper on its underside is chemically etched away. The graphene/polymer sheet is placed on a carrier wafer, and the polymer is then removed, leaving a single sheet of carbon atoms. This technique can be used to cover 200- or 300-millimeter wafers with graphene, and any final substrate can be used. Therefore it's now possible to transfer the graphene onto already-fabricated circuits to make a hybrid technology. The drawback of the CVD method is that, at the moment, the mobility of the graphene is not as high as that formed by epitaxial growth, due to residues of the polymer adhering to the graphene, physical domains (places where the grid of carbon atoms doesn't align, creating boundaries) and even wrinkles in the graphene sheet.
But with the amount of effort being spent on generating graphene with CVD, this situation will probably improve.

Graphene Added
A transistor made with graphene looks very much like a conventional FET, in that it has a region covered by an insulator under a gate electrode that controls the flow of electrons or holes from source to drain. However, the physical properties of graphene are very different from other semiconductors, and the mechanism of current control is altered, so the electrical properties of a graphene FET diverge quite a lot from those of conventional FETs. The most important of these different properties is that graphene does not have a bandgap, an energy range in
most nonmetals where electron states cannot exist. In other words, for graphene, as its electrons travel in orbitals around their carbon atoms, there is no energy difference between the conduction band (where electrons are free to move around and form bonds) and the valence band (where electrons are
Figure 3. A TEM of a cross-section of a graphene FET shows the thin, white layer of graphene in the device (top). A 50-millimeter wafer of graphene on silicon carbide (bottom) shows the repeating patterns of graphene devices. The wafer and the graphene layer are transparent; the structures that are visible are the metal pads that connect to the transistors. (Unless otherwise indicated, all photographs are courtesy of the author.)
Figure 4. The direct-current electrical characteristics of a graphene FET (left) differ from those of a traditional silicon FET (right). The transfer characteristic (top row) relates output current through the drain to input voltage on the gate. This property shows how "strong" a device is: A stronger device delivers more current for a given input voltage than a weaker device. The output characteristic (bottom row) relates the drain current to the drain voltage, from which the output resistance can be computed (each curve is for a different gate voltage). The slope of the curves (dashed line) is proportional to the reciprocal of the output resistance. These properties of any transistor need to be known to design analog circuits.
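The contrast in Figure 4 can be caricatured with a toy model: a silicon-like FET has a threshold voltage below which it is essentially off, while a gapless graphene FET conducts at every gate voltage, with only a shallow, ambipolar minimum around its Dirac point. Every coefficient below is invented for illustration; this is a sketch of the qualitative behavior, not the article's measured data.

```python
# Toy transfer characteristics: a silicon-like FET turns off below threshold,
# while a gapless graphene FET only modulates its current (ambipolar V-shape).
# All coefficients are invented, illustrative numbers.

def silicon_id(vg, vt=0.5, k=0.4):
    """Square-law model: essentially zero current below threshold vt."""
    leakage = 1e-5                      # tiny off-state leakage (mA/um)
    return leakage if vg <= vt else k * (vg - vt) ** 2

def graphene_id(vg, v_dirac=0.0, k=0.2, i_min=0.05):
    """Gapless, ambipolar model: current is V-shaped around the Dirac
    point and never drops below a sizable minimum."""
    return i_min + k * abs(vg - v_dirac)

si_on, si_off = silicon_id(1.5), silicon_id(0.0)
gr_on, gr_off = graphene_id(1.5), graphene_id(0.0)

print(f"silicon  on/off ratio: {si_on / si_off:.0f}")   # on the order of 10^4
print(f"graphene on/off ratio: {gr_on / gr_off:.1f}")   # less than 10
```

The V-shape around the Dirac point also captures the ambipolar behavior discussed later: the model conducts equally well for gate voltages on either side of its minimum.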
tightly bound to their atoms). Because of this property, some call graphene a gapless semiconductor and some call it a semi-metal. In semiconductor FETs, the bandgap allows the channel to be turned on or off—preventing or allowing a flow of electricity—by applying a voltage to the gate. Without a bandgap, graphene always conducts, but the amount of conduction is controlled by the number of charge carriers in the material. Thus, the current can be modulated, but it cannot be completely stopped. In silicon FETs, the ratio of the on current to the off current might be 10⁴:1 or 10⁵:1, but the ratio for graphene FETs may be smaller than 10:1. Also, in silicon FETs, a state called pinch-off occurs when part of the channel is effectively off, leading to a high resistance to current. High output resistance is required for transistors to amplify the input signal at the gate, which is needed for almost all practical circuits. But the absence of a bandgap in graphene means there is no pinch-off in its channel region, so achieving high output resistance is difficult. Because silicon FETs can be turned off, they can be used for digital computers, for which there are only two states allowed: on or off, current flowing or current blocked. In standard silicon-manufacturing technology, most of the transistors are off most of the time. With millions of transistors in an integrated circuit, this means the current flow is controllable. If the off-state of such transistors resulted in only a small reduction of current, then the circuits would require enormous amounts of power to keep them going, and we would never have home computers or portable electronic devices, because they would heat up and deplete their batteries very quickly. However, analog and radio frequency circuits, which modulate the amplitude of signals rather than turn them on or off digitally, are essentially always conducting. Silicon FETs can be used for digital or analog circuits, but graphene FETs are better suited—at the moment—just for analog circuits. There are some other interesting differences between graphene and semiconductors. Because it is a purely two-dimensional material, it has no body,
Figure 5. A circuit diagram of a FET (left) shows a small alternating-current (AC) signal (ig) applied to the gate (G), after which the AC drain current (id) is measured. The current gain (right) decreases in proportion to the inverse of the frequency. The cutoff frequency is the point at which this value falls to one (which in this case happens at 100 gigahertz, shown at the red arrow), and is an indicator of how quickly a signal can travel from source to drain. (The two curves are for gate lengths of 240 and 550 nanometers.)
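The measurement sketched in Figure 5 can be mimicked numerically: for a well-behaved FET the short-circuit current gain falls as fT/f, so each (frequency, gain) point independently estimates the cutoff frequency as gain times frequency. The data points below are synthetic values chosen for illustration (not measurements from the article), as are the transconductance and gate-capacitance values in the second estimate.

```python
import math

# The cutoff-frequency extraction in miniature. For a well-behaved FET the
# short-circuit current gain falls as f_T / f, so gain * frequency is
# constant and equals f_T. The "measured" points are synthetic.

F_T_TRUE = 100.0  # gigahertz, used to synthesize the data points

points = [(f, F_T_TRUE / f) for f in (1, 2, 5, 10, 20, 50)]  # (freq GHz, gain)

estimates = [f * gain for f, gain in points]   # each point estimates f_T
f_t = sum(estimates) / len(estimates)
print(f"extracted cutoff frequency: {f_t:.0f} GHz")  # gain falls to 1 here

# Cutoff frequency is also proportional to transconductance divided by gate
# capacitance: f_T = gm / (2 * pi * Cg). The values below are assumptions.
gm = 1.0e-3       # transconductance, siemens
c_gate = 1.6e-15  # gate capacitance, farads
f_t_model = gm / (2 * math.pi * c_gate) / 1e9   # convert hertz to gigahertz
print(f"f_T from gm/(2*pi*Cg): about {f_t_model:.0f} GHz")
```

Because transconductance is loosely proportional to mobility, the second formula is where graphene's high mobility pays off directly.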
no bulk. All current flows on its surface. As a result, anything contacting graphene might affect its current flow by scattering it off the atoms of the contacting material. Additionally, in semiconductors the level of gate voltage required to turn the transistor on has been controlled by adding small amounts of other materials, in a process called doping. But doping in graphene is only possible through surface contact, which again is hard to control during manufacturing. Finally, current sometimes flows equally well in both directions through a graphene FET. In silicon and other semiconductors with a bandgap, current is carried predominantly by either electrons or holes, as determined by the fabrication of the transistor. But graphene allows almost equal conduction of electrons and holes, so it is an ambipolar device. (However, when the graphene is made from silicon carbide, it becomes unipolar.) So far, this property is regarded as a nuisance, but there might be some unique applications resulting from this quirk. There have been demonstrations of frequency multipliers and dual-mode amplifiers (both used with communications signals) that make use of this peculiar property. The result of these different behaviors can be seen in the traditional direct-current (DC) electrical characteristics of the graphene FETs (see Figure 4). The transfer characteristic relates the output current through the drain to the input voltage on the gate, from which transconductance is derived. Qualitatively this measurement shows how "strong" the device is: A stronger device delivers more current for a given input voltage than a weaker device. The output characteristic relates the drain current to the drain voltage, from which the output resistance is computed. Both the transconductance and the output resistance of any transistor need to be known to design analog circuits.

Yet in spite of the very different physics and resulting DC electrical behavior, it has been shown that graphene FETs behave very much like their semiconductor counterparts when they are operated at high frequency, under alternating-current (AC) conditions. This comparison is made by measuring each system's frequency response, its output signal strength when it is given a particular input signal.

Figure 6. Cutoff frequency in various graphene FETs is a function of gate length, and is always higher than that of silicon FETs (dashed line) at the same gate length. The projected performance of semiconductor FETs from a report called the International Technology Roadmap for Semiconductors (ITRS 2008) is given for comparison. The way that the graphene sheets are produced—grown in an epitaxial layer, created by chemical vapor deposition (CVD) or exfoliated from a larger piece of graphite—can alter their cutoff frequency.
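The inverse-gate-length trend in Figure 6 follows from a simple estimate, fT ≈ v/(2πL), where v is an effective carrier velocity under the gate. The velocity values below are illustrative assumptions chosen only to show the shape of the trend, not quantities taken from the figure.

```python
import math

# Simple cutoff-frequency estimate: f_T ~ v / (2 * pi * L), where v is an
# effective carrier velocity under the gate. The two velocities are
# illustrative assumptions meant to reproduce the inverse-length trend.

def cutoff_frequency_ghz(gate_length_nm, velocity_m_per_s):
    length = gate_length_nm * 1e-9              # nanometers to meters
    return velocity_m_per_s / (2 * math.pi * length) / 1e9

V_SILICON = 1.0e5    # assumed effective velocity in a silicon channel (m/s)
V_GRAPHENE = 4.0e5   # assumed higher effective velocity in graphene (m/s)

for L in (550, 240, 100):                       # gate lengths in nanometers
    f_si = cutoff_frequency_ghz(L, V_SILICON)
    f_gr = cutoff_frequency_ghz(L, V_GRAPHENE)
    print(f"L = {L:3d} nm: silicon ~{f_si:5.1f} GHz, graphene ~{f_gr:5.1f} GHz")
```

Halving the gate length doubles the estimate, and at any fixed length the higher-velocity (higher-mobility) channel wins, which is the pattern the measured points in Figure 6 trace out.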
Figure 7. An integrated circuit called a frequency mixer is made using many steps to deposit and pattern layers of materials to create the required components. This device has an area of less than 1 square millimeter. More than 60 processing steps were required to make the mixer, 5 to 10 times more than the number required to make a wafer full of single graphene transistors. The base of the device is a silicon carbide wafer covered with epitaxially grown graphene (black area). There are several patterned metal layers (labeled M1, M2 and M3); dashed lines represent contacts between the metal layers. This exploded-view diagram allows all the separate patterned layers to be viewed, but in the actual device, the layers are fabricated directly on top of each other so the resulting structure is almost planar. Frequency mixers are at the heart of almost every wireless communication circuit. They combine two signals of different frequencies and convert the signal to a frequency that is usable by the receiver.
The primary means of estimating frequency response is through the cutoff frequency. A universally used metric for describing the frequency response of FETs, cutoff frequency is a fundamental property indicating how quickly a signal can travel from the gate to the drain of the device, and all new device technologies are measured against both existing ones and projections of expected cutoff frequencies in the future. This metric is derived from other measurements that describe the frequency dependence of the small-signal AC current gain of a transistor when the output is short-circuited. An AC voltage is applied to the gate at various frequencies, resulting in an AC input current. The measured output current (also AC) is divided by the input current, as a function of frequency. In the case of FETs, this only makes sense for AC measurements, as no DC current can flow through the insulating gate. If a FET is well behaved, this current gain should decrease as the inverse of the frequency. The point at which it falls to a value of 1 is the cutoff frequency. (See Figure 5.) Some of our earliest work with graphene FETs, which were actually
made from tiny flakes of the material, showed the right AC behavior. The current gain drops with frequency, so "cutoff frequency" is a meaningful term. Additionally, cutoff frequency was shown to be proportional to the transconductance divided by gate capacitance, and cutoff frequency increased as the channel length was reduced. These three traits showed that in high-frequency operation, graphene FETs act a lot like semiconductor FETs, which nurtured the idea that graphene transistors could potentially be used in similar circuits. In a few short years, we have seen cutoff frequency increase from a few gigahertz to over 300 gigahertz, almost catching up with the most sophisticated semiconductor devices. Frequency response depends inversely on the length of the gate and proportionally on the transconductance, which is loosely proportional to mobility. Thus, to make faster devices, we can either reduce the gate length or use higher-mobility materials. The work with graphene has consistently shown the advantage of its mobility. Compared to silicon FETs, graphene FETs have always had higher cutoff
frequencies at the same gate length. (See Figure 6.)

Connecting Pieces
Transistors are one of the building blocks of electronic circuits, but unless they are connected, they don't have any useful function. Prior to the 1960s, transistors were connected to other electronic components—resistors, capacitors, other transistors and so on—by wires, then made into items such as radios and televisions. Then the era of modern electronics started, with the invention of the monolithic integrated circuit, in which all these components were fabricated together on a single substrate, eliminating bulky wiring, which led to miniaturization and a tremendous increase in function and variety. All of today's radios, phones and computers are made this way. With that in mind, our team at IBM decided to start work on an integrated graphene circuit, while also developing improved graphene transistors. We realized we shouldn't wait until the transistors were perfect before we tried to make a circuit with them. The transistor is fairly simple compared to a circuit with interconnections and other
components. We knew that in the process of developing an integrated circuit, we would uncover problems that were better faced as soon as possible, and we were eager to start. The idea of the integrated circuit is that most or all of the circuit is built on the same substrate, using many steps to deposit and pattern layers of materials to create the required components. These steps are repeated on many locations on a wafer, yielding a large number of identical circuits produced on a single substrate. In semiconductor technology, these processes are numerous and very sophisticated. However, applying these steps to a wafer covered with graphene turned out to be challenging for several reasons. First, graphene has poor adhesion with the metals and oxides used in integrated circuits. Because the graphene layer is only a single atom thick, it is vulnerable to damage by some of the common etching processes. In addition, the substrate materials and dielectrics employed in the experimental graphene transistors were not all suitable for wafer-scale fabrication. Finally, the integrated circuit required a mixture of thick and thin metal layers, which requires different processing.
Discovering and then solving these problems took almost a year. More than 60 processing steps were required to make this integrated circuit, 5 or 10 times more than the number required to make a wafer full of single graphene transistors. The circuit was made from a silicon carbide wafer covered with epitaxially grown graphene. The result was a frequency mixer. (See Figure 7.) The area of the circuit is less than 1 square millimeter. Although it uses only one FET, it is a sophisticated combination of devices, interconnectors, insulators and inductors. Frequency mixers are at the heart of almost every wireless communication circuit, and they are required for a signal-detection principle of radio tuning invented by Columbia University electrical engineer Edwin Armstrong decades before transistors were developed. If we imagine building wireless electronics from graphene, a mixer is a very important circuit to use as a starting point. Mixers combine two signals of different frequencies, the radio frequency (RF) signal that carries the information over a long distance, and the local oscillator (LO) signal, which is used to convert the signal to a frequency that is usable by the receiver.
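The arithmetic of mixing can be checked numerically. An ideal mixer multiplies its two input tones, and by the identity cos(a)cos(b) = ½[cos(a − b) + cos(a + b)] the product contains exactly a difference-frequency component and a sum-frequency component. The sketch below uses the article's example frequencies (3.8-gigahertz RF, 4.0-gigahertz LO); the pointwise multiplier is, of course, an idealization of the real circuit.

```python
import math

# An ideal mixer multiplies two input tones. By the identity
#   cos(a) * cos(b) = 0.5 * [cos(a - b) + cos(a + b)],
# the product contains only the difference and sum frequencies.

F_RF = 3.8e9  # radio frequency input (hertz)
F_LO = 4.0e9  # local oscillator input (hertz)

def mixer_output(t):
    return math.cos(2 * math.pi * F_RF * t) * math.cos(2 * math.pi * F_LO * t)

def two_tone_model(t, f_diff, f_sum):
    return 0.5 * (math.cos(2 * math.pi * f_diff * t)
                  + math.cos(2 * math.pi * f_sum * t))

f_diff, f_sum = abs(F_LO - F_RF), F_LO + F_RF
print(f"difference: {f_diff / 1e6:.0f} MHz, sum: {f_sum / 1e9:.1f} GHz")

# The product and the two-tone model agree at every instant.
for i in range(1000):
    t = i * 1e-12                      # 1-picosecond steps
    assert abs(mixer_output(t) - two_tone_model(t, f_diff, f_sum)) < 1e-9
```

The 200-megahertz difference tone and the 7.8-gigahertz sum tone computed here are exactly the two peaks that appear in the measured output spectrum of Figure 8.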
The mixer produces a signal composed of the difference in frequencies of these two inputs (it also produces a sum signal, but this is actually an undesired by-product). If the input frequencies are of close value, then the difference between the two is quite low and might, for example, be in the audio range, even if the input signals are orders of magnitude higher than the human hearing range. The successful operation of the graphene mixer is illustrated in Figure 8, which shows a frequency spectrum of the output signal. The two inputs, the RF signal and the LO signal, are seen at 3.8 gigahertz and 4.0 gigahertz, respectively; as required by mixing, the difference signal is seen with a value of 200 megahertz. The unwanted sum signal is also present, but its amplitude is much suppressed by the impedance of an inductor in the output path. We made other measurements to validate the correct operation, with the conclusion that this graphene mixer is quite successful. It has a rather large signal loss, which is undesirable, but our experiments have shown us what we need to do to improve this outcome. Its level of integration is actually greater than that of some conventional mixers operating at similar frequencies. Testing the mixer also gave us a pleasant surprise: It operates almost unchanged over a temperature range from 300 kelvin to 400 kelvin. Most conventional RF semiconductor circuits require additional feedback circuitry to enable operation over wide temperature ranges. In separate work done by our group at IBM, it has been found that graphene devices have a frequency response that is almost unchanged down to a very low temperature of 4.3 kelvin, so it appears that graphene electronics has the very attractive property of being temperature-independent over a huge range.

Figure 8. The output of the graphene frequency mixer shows a frequency spectrum. Mixers combine two inputs: a radio frequency signal (here at 3.8 gigahertz) that carries information over a long distance and a local oscillator signal (here at 4 gigahertz) that is used to convert the signal to one usable by the receiver. The mixer output consists of a desired signal that is the difference frequency of the two input signals (here at 200 megahertz) and a by-product signal that is the sum of the two inputs (here at 7.8 gigahertz). The amplitude of the sum signal is suppressed by the impedance of an inductor in the output path. Currently the graphene frequency mixer experiences a high level of signal loss, but further experiments have shown how to improve the outcome. An advantage is that the graphene mixer operates well over a wide temperature range.

Figure 9. Graphene FETs suffer from what are called parasitic resistances, which limit their ability to conduct electrical signals. The contacts between the metal source and drain electrodes and the graphene are a point of high resistance, because of the chemical and quantum mechanical properties of graphene. It's not yet known if there is a way to engineer around this problem. The second location of high resistance is called the access region, located between the gated graphene and the contacts. This resistance may be resolved by a manufacturing technique that automatically creates a gate that almost fills the access region.

Challenging Steps
Our graphene device has not been perfected. Some "amplifiers" actually attenuate signals instead, and the integrated mixer has a rather large signal loss. Although tremendous progress in exploiting graphene for RF circuits has been made in a very short time—just four years—there is still a lot to be done before it is ready to replace any existing technologies. One major challenge is to preserve graphene's mobility. Pure graphene has a mobility that is 10 or more times higher than that of silicon, yet the cutoff frequencies seem to be running at only two times the values of silicon FETs at the same dimensions. This discrepancy results from graphene being all surface. When it is suspended, free of contact on both sides, it has high mobility. When something touches it, its mobility usually drops considerably,
as the charge carriers (electrons and holes) scatter off the adjacent materials on their way through the graphene. Part of the problem comes from the substrate on which the graphene is grown or placed. Also, to make a gated device, there must be a dielectric (an insulator that can be polarized by an electric field) between the gate and the graphene. So a graphene FET inevitably has contact with materials that can rob it of some of its potential. There is considerable engineering effort to reduce the impact of contact by finding different materials and by altering the physical structure of the devices. If the mobility of free-standing graphene can be retained in a full transistor structure, it will be a tremendous leap ahead in electronics performance. The physical device structure is presently limiting performance as well. The issue is resistance: Any resistance in the path from source to drain causes a reduction in transconductance, even if the graphene mobility is high. There are two locations in graphene devices with high resistance, shown in Figure 9. One is at the contact between the metal electrodes and the graphene itself. Because of the chemical and quantum mechanical properties of graphene, this resistance is usually large, but there is work being done to see if it can be reduced. It's not known yet if this is a fundamental problem or "just" an engineering problem. The second source of resistance is the region between the gated graphene and the source and drain contacts,
called the access region. Even though this is a fairly small distance, ungated graphene has high resistance. It’s as if part of the gate is not doing its job of turning the channel on. This is an engineering problem. Using the technique called self-alignment, which automatically creates a gate that almost fills the region between source and drain, it should be possible to greatly reduce the length of the access region, thereby reducing this unwanted resistance.

Output conductance is also a problem and can be seen in the output characteristic. The ideal FET has an output that is flat above a certain drain voltage, which is equivalent to an infinite differential output resistance. A small change of the gate voltage of a transistor causes a change in its drain current. If the output resistance is infinite, this current will flow to the load attached to the transistor—such as an external resistor or another transistor—which results in voltage or power amplification of the gate voltage. This amplification, or gain, is required for almost all electronic components; otherwise, the signal applied to the circuit will be attenuated and eventually lost. If the transistor, however, has a finite output resistance, some of the current modulated by the gate will be dissipated in the transistor and hence not transferred to the load. If too much of the current is lost in the transistor, there will be loss instead of gain, and the device will be useless. Graphene FETs tend to have rather poor output resistance when the gate lengths are small, so this problem is serious for graphene. If the channel is long, drain current saturation is seen, which is equivalent to high output resistance, but a long-channel device has poor high-frequency performance, and therefore may not be practical. Getting short-channel graphene FETs to saturate is challenging. There has been some recent progress based on very thin gate dielectrics, which basically improves the transconductance.
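The arithmetic behind these losses is simple to sketch. The following Python snippet is our illustration, not the authors’ model; the component values (transconductance, load, parasitic resistances) are invented round numbers chosen only to show the trend:

```python
# Back-of-envelope FET gain model (illustrative values, not measured data).

def extrinsic_gm(gm_intrinsic, r_series):
    # Contact + access resistance r_series (ohms) in the source-drain path
    # degrades the transconductance seen at the device terminals.
    return gm_intrinsic / (1.0 + gm_intrinsic * r_series)

def voltage_gain(gm, r_out, r_load):
    # Small-signal gain magnitude: gm times (r_out in parallel with r_load).
    # With large r_out, nearly all gate-modulated current reaches the load.
    return gm * (r_out * r_load) / (r_out + r_load)

gm = 5e-3       # 5 millisiemens intrinsic transconductance (assumed)
r_load = 1e3    # 1 kilohm load (assumed)

print(voltage_gain(gm, 1e9, r_load))                       # near-ideal r_out: gain about 5.0
print(voltage_gain(gm, 2e3, r_load))                       # poor saturation: gain drops to about 3.3
print(voltage_gain(extrinsic_gm(gm, 200.0), 2e3, r_load))  # plus 200-ohm parasitics: about 1.7
```

A gain that falls below 1 means the stage attenuates rather than amplifies, which is exactly the failure mode some early graphene “amplifiers” exhibit.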
Using a thin dielectric can also improve the drain current saturation. There’s a lot left to be done, but some devices with gain have been demonstrated, and it looks like there is a path to improvement. Like any analog circuit, graphene FETs must contend with noise. All analog signals carry unwanted noise: If it is too large, it masks or corrupts the transmitted signal. If noise is present at the input to an amplifier
chain, it will be amplified along with the signal of interest; if there is noise added by the devices in the amplifier chain, there will be a net increase of noise at the output. At the moment we simply don’t know if graphene makes a low-noise transistor, but perhaps it has lower noise than conventional semiconductors. Based on its conduction mechanism, many people believe it will be superior, but the experiments just haven’t been done yet.

New Applications?
What if we can extend the electrical performance of graphene a bit more so it becomes as good as silicon, or even a little better? Then how might it be used? One interesting possibility is to combine graphene circuits with some other technology, such as conventional semiconductor circuitry, to exploit the best of each. A layer of graphene circuits might be placed on a patterned semiconductor wafer. The graphene layer can be used for analog and RF circuits, while the semiconductor part can be used for digital processing of the RF signals. A hybrid technology like this seems feasible now that we are able to transfer a single layer of CVD-grown graphene onto any substrate, including one that has been processed to contain circuits.

Looking further, we are considering some other attractive graphene properties. It is almost transparent, it absorbs light, it is flexible and strong, and it works over a wide temperature range. Circuits made from graphene might be used for invisible electronics, such as on windows or glasses; for flexible circuits, for example, sewn into clothing; or for circuits to be used in extreme temperature environments, such as space and underground exploration. Although separate electronics may already exist for these situations, perhaps only graphene can serve in these circumstances while adding the advantage of very high-frequency operation.

Bibliography
Avouris, P., Z. Chen and V. Perebeinos. 2007. Carbon-based electronics. Nature Nanotechnology 2:605–615.
Farmer, D. B., Y.-M. Lin and P. Avouris. 2010. Graphene field-effect transistors with self-aligned gates. Applied Physics Letters 97:013103.
Geim, A. K. and K. S. Novoselov. 2007. The rise of graphene. Nature Materials 6:183–191.
Han, S.-J., K. A. Jenkins, A. Valdes-Garcia, A. D. Franklin, A. A. Bol and W. Haensch. 2011. High-frequency graphene voltage amplifier. Nano Letters 11(9):3690–3693.
Lin, Y.-M., C. Dimitrakopoulos, K. A. Jenkins, D. B. Farmer, H.-Y. Chiu, A. Grill and P. Avouris. 2010. 100-GHz transistors from wafer-scale epitaxial graphene. Science 327(5966):662.
Lin, Y.-M., et al. 2011. Wafer-scale graphene integrated circuit. Science 332(6035):1294–1297.
Meric, I., C. Dean, A. Young, J. Hone, P. Kim and K. L. Shepard. 2010. Graphene field-effect transistors based on boron nitride gate dielectrics. 2010 IEEE International Electron Devices Meeting, December 6–8.
Moon, J. S., et al. 2009. Epitaxial-graphene RF field-effect transistors on Si-face 6H-SiC substrates. IEEE Electron Device Letters 30(6):650–652.
Wang, H., D. Nezich, J. Kong and T. Palacios. 2009. Graphene frequency multipliers. IEEE Electron Device Letters 30(5):547–549.
Wu, Y., et al. 2011. High-frequency, scaled graphene transistors on diamond-like carbon. Nature 472:74–78.
Yang, X., G. Liu, A. A. Balandin and K. Mohanram. 2010. Triple-mode single-transistor graphene amplifier and its applications. ACS Nano 4(10):5532–5538.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/IssueTOC/issue/1001
The Complex Call of the Carolina Chickadee
What can the chick-a-dee call teach us about communication and language?
Todd M. Freeberg, Jeffrey R. Lucas, and Indriķis Krams
If you live in North America, Europe or Asia near a forest, suburban open woodlands or even an urban city park, chances are you have heard a member of the avian family Paridae—the chickadees, tits and titmice. Birds use calls to communicate with their flockmates, and most parids share a unique call system, the chick-a-dee call. The call has multiple notes that are arranged in diverse ways. The resulting variation is extraordinary: The chick-a-dee call is one of the most complex signaling systems documented in nonhuman animal species. Much research on the chick-a-dee call has considered Carolina chickadees, Poecile carolinensis, a species common in the southeastern United States. We focus on this species here, but we also compare findings from other parids. We discuss how the production and reception of these calls may be shaped over individual development, and also how ecological and evolutionary processes may affect call use. Finally, we raise some key questions that must be addressed to unravel some of the complexities of this intriguing signaling system. Increased understanding of the processes and pressures affecting chick-a-dee calls might tell us something important about what drives signaling complexity in animals, and it may also help us understand the evolution of that most complex vocal system, human language.

Todd Freeberg is a comparative psychologist in the Department of Psychology and the Department of Ecology and Evolutionary Biology at the University of Tennessee–Knoxville. Jeffrey Lucas is a behavioral ecologist in the Department of Biological Sciences at Purdue University. Indriķis Krams is an ecologist at Tartu Ülikool in Estonia and at Daugavpils Universitāte in Latvia. Address for Freeberg: Department of Psychology, Austin Peay Building 301B, University of Tennessee, Knoxville, TN 37996. E-mail: [email protected].

Parids and Chick-a-dee Calls
Toward the end of summer, many songbirds in temperate regions of the Northern Hemisphere migrate south to overwinter in more favorable climates. But some species stay put. One of the most common groups of resident songbirds is the chickadees and titmice of North America and the tits of Europe and Asia. These small songbirds (they typically weigh less than 30 grams) live in a wide range of habitats, often in heterospecific flocks—mixed-species groups that include other songbird and woodpecker species. Conspecific flocks of parids—composed of a single species—are often territorial and are reported to range in size from two (as in oak titmice, Baeolophus inornatus, which occur only as female-male pairs) to dozens of individuals (as in great tits, Parus major, which form large assemblages with fluid membership). Parids that form flocks do so in the late summer months and often remain in them until the following spring, when female-male pairs establish breeding territories. Such a flock structure, with stable groups of unrelated individuals, is atypical for songbirds and, as we argue below, may be an evolutionary force affecting vocal complexity in these species.

Vocalizations in birds are often divided into two categories: songs and calls. Songs are typically given in the mating season and are directed toward mates or potential rivals. Calls are any other vocalization, and they fall into
functional categories, such as food calls, contact calls, mobbing calls or alarm calls. In almost all songbirds, songs are complex and calls are simple. Not so with parids: Many species have relatively simple songs (for example, the fee bee song of black-capped chickadees, Poecile atricapillus, and the peter peter song of tufted titmice, Baeolophus bicolor), but at least one very complex call system—the chick-a-dee call. The name “chickadee” for the North American Poecile group of parids is the onomatopoeic rendition of this call. Interestingly, it is labeled the si-tää call in willow tits, Poecile montanus, which are native to parts of Europe and Asia. When spoken in Swedish, Norwegian or Latvian, si-tää sounds quite similar to the birds’ call. In winter months in many regions, the only bird sounds you may consistently hear are chick-a-dee calls. The source of those calls is likely to be a group of parids interacting with one another and with any number of other species of birds. Parids are commonly the nuclear species—the core members—of mixed-species flocks; they are often joined for periods of time by satellite species such as nuthatches, kinglets, woodpeckers, goldcrests and treecreepers. The behavior of these nonparid species is affected by the presence or absence of parids and also by the parids’ chick-a-dee calls. As such, understanding social cohesion and group movement of these mixed-species flocks requires an understanding of parid signaling systems.

The Structure of the Call
Chick-a-dee calls across parids share a number of acoustic features, each
Figure 1. A Carolina chickadee (Poecile carolinensis) perches on a common serviceberry bush (Amelanchier arborea). Chickadees are members of the family Paridae, many of whose members share one of the most complex vocal systems among nonhuman animals: the chick-a-dee call. In the Carolina chickadee, this call is composed of up to six discrete, ordered note types. Variation in the call, the authors suggest, aids communication.
of which can be seen as somewhat analogous to aspects of human language. First, calls are composed of distinct note types. These note types have been categorized into acoustically distinct forms that can be distinguished by researchers with high reliability. In a 2012 study, two of us (Freeberg and Lucas) described six note types—A, E, B, C, Dh and D notes—in the calls of Carolina chickadees from an eastern Tennessee population (see Figure 3). These note categories do not correspond to human musical notation; they are arbitrary labels. Christopher Sturdy and his colleagues at the University of Alberta have described a similar set of notes in the calls of Carolina chickadees and other chickadee species. A, E and B notes are whistled and often show considerable frequency modulation. The C note is a noisy note type that generally increases in frequency over the course of the note. The D note, another noisy note type,
has minimal frequency modulation. It seems to be a complex combination of two tones, or fundamental frequencies, and their harmonics, tones whose frequency is an integer multiple of the fundamental—along with other tones resulting from these tones’ interaction. (The songbird syrinx, or vocal organ, vibrates in two locations, one in each bronchus. Thus it can create two different tones simultaneously.) The final note type we described, the Dh or hybrid D note, is rare in this population and appears to be an A or B note that transitions without a break in sound into a concluding D note. Each note type normally occupies a specific part of the call. The typical chick-a-dee call in this population has an average of two introductory notes (some combination of A, E or B notes), roughly one C note, and three concluding D notes. Thus, the chick-a-dee call is made up of note types with distinct sounds, similar to the way each human
language is made up of phonemes, or distinct sounds. (For example, the p and b sounds in English are distinct phonemes produced by the lips, called labial stop consonants; the difference between the two is that the b is voiced, or articulated by vibration of the vocal cords, and the p is not.)

Second, chick-a-dee calls are produced according to rules of note ordering. Roughly 99 percent of a sample of over 5,000 chick-a-dee calls followed the A–E–B–C–Dh–D ordering rule. Any note type can be repeated or left out of the sequence. So the chick-a-dee call has constraints on how the different sounds that make it up are combined to form calls, a phenomenon perhaps analogous to human-language constraints that govern how different phonemes are combined to form words.

A third commonality among chick-a-dee calls is that the call system is open-ended. The more chick-a-dee calls we record, the more calls with different
Figure 2. Carolina chickadees weigh 10 grams on average. The bird shown above, held by Todd Freeberg, is part of a wild population in east Tennessee. Carolina chickadees are native to the southeastern United States; their range extends to northern Ohio and New Jersey and west through central Texas. The species was named by John James Audubon, who, in his 1840 Birds of America, noted that he did so in part because the birds’ range included South Carolina and “partly because I was desirous of manifesting my gratitude towards the citizens of that state.” (Photograph courtesy of Todd M. Freeberg.)
note-type compositions are revealed. This variation is possible because notes can be repeated in calls, within the constraints of the note-ordering rules. We know this from analysis, based on information theory (the study of the quantification of information, begun in the 1940s), of calls recorded from the Tennessee population we have studied. The phenomenon is also supported by within-individual analysis of chick-a-dee call note types derived from large sets of calls of known individuals recorded over time. This open-ended quality is one of the major differences between the chick-a-dee call and the finite call and song repertoires of most songbird species. Open-endedness is one of the defining features of human languages.

A final common characteristic among chick-a-dee calls is that they contain a large amount of information. In information theory, this term refers to the amount of uncertainty in a signaling system. When a signaler produces a signal, the information in that signal reduces the overall uncertainty to the receiver about the context of the signal—in other words, the receiver knows more about the signaler or the signaler’s likely behavior than it did before the signal was produced. Signaling systems with a large amount of information therefore can conceivably transmit a wide variety of distinct messages. The greater information content in chick-a-dee calls stems from the enormous diversity in their note-type composition. A key assumption of the concept of information as it is typically used by parid researchers (and other bioacoustics researchers) is that diversity of note composition relates to distinct messages in signals. Evidence from different labs and from different chickadee species indicates that the variation in chick-a-dee call structure documented via information-based analyses does indeed correspond to functional variation. Certain note-composition variants in these calls seem to be messages, often to flockmates, about the social and physical environment or the behavioral tendencies of the signaler.

Changing Notes, Changing Messages
Individual parids are often out of sight of flockmates as they move through the environment, so a vocal signaling system that can convey messages related to predators, food or group movement seems crucial to obtaining the benefits of group living. Recent studies indicate that variation in Carolina chickadee chick-a-
dee calls is associated with these social and environmental contexts (see Figure 4). Chickadees and other parids have a number of distinct call types in their vocal repertoires, but our focus here is on chick-a-dee calls, so we use “calls” hereafter to refer to chick-a-dee calls. Most studies of these calls in the context of avian predators have used perched predators or models, as we along with Tatjana Krama and Cecilia Kullberg noted in a recently published review article. Christopher Zachau and Freeberg, in an article published this year, presented predator and control stimuli that “flew” in the area of Carolina chickadees visiting feeders. (See the sidebar on page 403 for more detail about the design of these experiments.) We used wooden models shaped like flying birds and painted to resemble either sharp-shinned hawks (Accipiter striatus, a threatening avian predator) or blue jays (Cyanocitta cristata, a nonthreatening avian control). The chickadees’ calls were recorded before and after the release and “flight” of the models down a zipline near the feeders. The calls produced varied with the presence of each model type, but the biggest effect we measured resulted from the flight of any model, irrespective of the species it mimicked. Calls produced after the model was released contained more A notes compared to calls produced prior to the release of the model. Greater production of A notes in the calls would seem to represent a message of alarm, as opposed to one of mobbing—behavior that is frequently linked to approaching and harassing predators—or of assembly. Tonal sounds that slowly increase in intensity and that are high frequency (such as the A note) are generally difficult for avian predators, and many other animals, to locate. In contrast, noisy sounds with rapid increases in intensity, like the D note, are easier to locate. 
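The note-ordering rule described in the previous section lends itself to a mechanical check. Here is a minimal sketch in Python; the regular expression is our own illustration of the A–E–B–C–Dh–D rule (not a tool the researchers describe), using the note labels from the Tennessee study:

```python
import re

# A chick-a-dee call is a sequence of note types; roughly 99 percent of
# recorded calls follow the A-E-B-C-Dh-D ordering rule, with any note
# type repeated or omitted. "Dh" is matched as a unit so the two-letter
# label is not misread as a D note followed by a stray h.
CALL_RULE = re.compile(r"A*E*B*C*(?:Dh)*D*")

def follows_ordering_rule(call):
    """True if a call transcription such as 'ABDDDDD' respects the rule."""
    return CALL_RULE.fullmatch(call) is not None

print(follows_ordering_rule("ABDDDDD"))  # True: A, then B, then D notes
print(follows_ordering_rule("EDhDDDD"))  # True: E, hybrid D, then D notes
print(follows_ordering_rule("DCA"))      # False: violates the ordering
```

All of the note compositions shown in Figure 3, from AAA to EBDDDDDDDDDDDD, satisfy this pattern.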
Thus, the production of more A notes in these calls when a flying predator is detected in the area seems adaptive, as it could alert flockmates to the predator’s presence but not give away the location of the signaler to the predator. Carolina chickadees produce more calls, and often more D notes in those calls, when they detect a perched avian-predator model than when no model is present. For example, in a 2009 study, Chad Soard and Gary Ritchison of Eastern Kentucky University placed six perched avian-predator models in the habitat of Carolina chickadees. The
models, all of which represented hawk and owl species, ranged in size and type from small, agile predators like Eastern screech owls (Megascops asio) and sharp-shinned hawks to large, relatively slow-moving predators like great horned owls (Bubo virginianus) and red-tailed hawks (Buteo jamaicensis). The former predators represent real threats to small songbird species, whereas the latter do not. Chickadees produced more D notes in their calls when smaller, more threatening avian predators were present (see Figure 5). Later the researchers played back chick-a-dee calls recorded in these different threat contexts to chickadees in their habitat. The authors found that chickadees were more likely to mob the playback speaker—to approach it closely in large numbers—when it was playing calls recorded when a small predator model was present than when
the speaker was playing calls recorded when a large predator model was present. This work suggests that easy-to-localize D notes are used more often in calls when those calls might serve a mobbing function—bringing flockmates to a particular location to drive a predator away. These findings make it clear that Carolina chickadees vary the note composition of their chick-a-dee calls in the high-arousal contexts of predator detection and mobbing. Ellen Mahurin and Freeberg found in a 2009 study that when individual chickadees from an eastern Tennessee population first detected food, the calls they produced contained a relatively large number of D notes (see Figure 6). Once at least one more chickadee arrived at a feeder, however, the first bird’s calls changed such that fewer D notes were produced. In a follow-up
[Figure 3 spectrograms: calls with note compositions AAA, ABDDDDD, CCDDDD, EDhDDDD, EECCCCC, ECCCCCDD, DDDDDDDD and EEEEE; each panel plots frequency (kilohertz) against time (seconds).]
study near feeders at several sites, we played back calls that contained either a large number of D notes (which previous findings suggested might be a signal to assemble) or a small number of D notes (as a control). Carolina chickadees flew to and took seed from the feeders more quickly in response to calls containing a large number of D notes, supporting the notion that increased production of D notes can help recruit other individuals to the signaler’s location. A naturalistic observation study conducted by Freeberg in 2008 suggests that chickadees use more C notes in their calls when they are in flight than when they are perched (see Figure 7). We have recently gained more experimental support for this suggestion: Chickadees flying to and from feeders produce calls with a greater number of C notes than they do when they are farther away
[Figure 3, continued: a call with note composition EBDDDDDDDDDDDD.]
Figure 3. The notes that make up the chick-a-dee call follow a set order, but within those constraints, extreme variation occurs. Notes (which were given arbitrary alphabetical names that do not correspond to Western musical notation) generally follow an A–E–B–C–Dh–D ordering rule, but any note can be left out or repeated. Shown above are sound spectrograms (visual representations of sound) generated from recordings of the chick-a-dee calls of Carolina chickadees. The x-axis shows time, in seconds, and the y-axis shows the frequency of the sound waves, in kilohertz. Each note type is rendered in a discrete color, and the note composition of each call is shown in the upper left corner of its spectrogram. (Spectrograms generated by the authors, using the Avisoft-SASLab Pro software application developed by Raimund Specht.)
[Figure 4 diagram: a spectrogram of a full call (frequency 2 to 10 kilohertz over 0.1 to 0.7 seconds) with each note type labeled, and the contexts that change each note's use:
A note: increase when signaler detects approach of possible predator; decrease in flight.
E note: increase when signaler is higher off ground.
C note: increase when signaler is in flight.
D note: increase when signaler detects perched predator or food; decrease in flight.
B note and Dh (hybrid D) note: unknown.]
Figure 4. The calls of Carolina chickadees vary with differing environmental contexts and motivational or behavioral factors. Within the constraints of the call’s note order, shown above, notes can be left out, or their repetition can increase or decrease. The C note, for instance, is used and repeated more when a chickadee is calling in flight. We lack conclusive information about what stimuli the B note and the hybrid D note might vary in response to. Other factors may influence variation of the notes for which we have data. In addition to variations within populations, the rate of use of some notes (in black boxes) varies between different populations of chickadees.
[Figure 5 scatterplot: average D notes per call (y-axis, 0 to 8) versus model body length (x-axis, 20 to 60 centimeters) for Eastern screech owl, sharp-shinned hawk, American kestrel, Cooper's hawk, great horned owl, red-tailed hawk and ruffed grouse models, with reference lines for the empty stand and the Freeberg (2008) naturalistic observation study.]
Figure 5. Chad Soard and Gary Ritchison, in a 2009 study, placed models of perched predators in Carolina chickadee habitat. They then recorded calls the birds made near the models. Smaller avian predators, such as Eastern screech owls and sharp-shinned hawks, are a greater threat to chickadees; larger birds, such as red-tailed hawks, prefer larger prey. When chickadees were near the smaller models, their calls contained more D notes than when the birds were near larger, less threatening predator models or the control model (a ruffed grouse). Circles represent the models: The x-axis shows the length of the model, and the y-axis indicates the average number of D notes per chick-a-dee call made in its presence. The horizontal dashed line shows the number of D notes produced when only the model stand (with no model on it) was presented. The solid horizontal line shows the average number of D notes per call from a naturalistic observational study of Carolina chickadees in eastern Tennessee (Freeberg 2008). (Figure adapted from C. M. Soard and G. Ritchison. 2009. Animal Behaviour 78:1447–1453. With permission from Elsevier.)
from feeders. In addition, chickadees released from capture produce calls with a greater number of C notes when they are in flight than they do once they are perched. So calls with a relatively large number of C notes might signal movement—and thus might be adaptive for maintaining group cohesion in space. In addition to these environmental and behavioral contexts, we have detected motivational influences on call production: Lucas, April Schraeder and Curt Jackson found in a 1999 study that chickadees increase rates of chick-a-dee calls when their energy stores decline. Additionally, there appear to be population-level “signatures” in the call that distinguish one population from another. There also appears to be marked variation at the individual level in call production. Evidence from Christopher Sturdy’s lab at the University of Alberta indicates that individual Carolina chickadees, as well as a number of other chickadee species, can often be statistically discriminated from one another by virtue of the acoustic characteristics of the note types of their calls. We thus have considerable evidence that the note composition of calls of Carolina chickadees is associated with detection of predators (both perched and flying), food detection, individual flight and motivation. The calls also vary in ways that may suggest markers for individual, flock, population or some combination of the three. Variation in the note types that make up the call corresponds to different contexts and to population-level characteristics.

Studies of call variation have also been carried out in other parid species. For example, as a 2012 review article by Krams and coauthors reveals, perched-predator contexts have been shown to have a similar effect on call note composition in black-capped chickadees, Mexican chickadees (Poecile sclateri) and willow tits. Call variation seems to be associated with food contexts in black-capped chickadees and with flight contexts in mountain chickadees (P.
gambeli). Krama, Krams and Kristine Igaune in 2008 documented variation in the comparable call system in crested tits (Lophophanes cristatus), based on whether individuals were close to the relative safety of vegetation or were exposed in open areas away from cover. Another interesting finding about this species is that dominant individuals use their calls differently than subordinate individuals, which suggests possible personality-like influences on call variation.
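The information-based analyses mentioned above boil down to a familiar quantity: the Shannon entropy of the distribution of note compositions a population produces. The sketch below uses invented counts, not the authors' data, purely to show how an open-ended repertoire of evenly used compositions carries more information than one dominated by a single call:

```python
from collections import Counter
from math import log2

# Shannon entropy (in bits) of the distribution of note-type compositions
# in a sample of calls. Higher entropy means a call can resolve more
# uncertainty for the receiver, i.e. the system can carry more information.
def shannon_entropy(calls):
    counts = Counter(calls)
    n = len(calls)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical samples of 100 calls each (invented compositions):
uniform = ["AADD", "BDD", "CDDD", "EEC"] * 25      # four equally common compositions
skewed  = ["AADD"] * 97 + ["BDD", "CDDD", "EEC"]   # one composition dominates

print(round(shannon_entropy(uniform), 2))  # 2.0 bits
print(round(shannon_entropy(skewed), 2))   # 0.24 bits
```

Because the chick-a-dee system is open-ended, larger samples keep revealing new compositions, which is precisely what pushes such entropy estimates upward.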
Wherefore Chick-a-dee?
Decades ago the Dutch ethologist Niko Tinbergen described four different “why” questions researchers could ask in trying to understand the behavior they observed in animals. Two of the questions entail proximate approaches that focus on the individual. One of these proximate approaches includes mechanistic questions—what is the neural and physiological basis of the behavior, and what stimuli in the environment elicit behavior? The other proximate approach covers developmental questions—what roles do growth and experience play in shaping and constraining behavior over an individual’s lifetime? The final two questions are ultimate approaches with a population- or species-level focus. These are ecological or functional questions about the adaptiveness of the behavior—what problem might it have evolved in response to?—and phylogenetic or deep-evolutionary questions—how might common ancestry shape and constrain behavior over the existence of a clade?

We can use these approaches to help understand the chick-a-dee call. At a proximate level of analysis, we know that certain environmental stimuli or motivational influences generate variation in calls. In addition, the complexity of social groups in Carolina chickadees can drive complexity in the note composition of calls. In a
Wooden Hawks and Plastic Owls: Experiment Design for Studying Chick-a-dee Calls
To discover whether chickadees change their calls in response to flying predators, Todd Freeberg and Christopher Zachau set up a zipline in the vicinity of a feeding station (above, left) in eastern Tennessee. The researchers waited in a camouflaged blind until chickadees had gathered at the feeder. Then one person walked slowly to a ladder at the tree with the zipline, climbed the ladder, and released a wooden model so that it “flew” past the birds at the feeding station. To discover whether chickadees change their calls in response to different kinds of birds, the researchers used models of a known chickadee predator, the sharp-shinned hawk (Accipiter striatus), and models of blue jays (Cyanocitta cristata), which are not a threat to chickadees. A microphone set up near the feeding station recorded the chickadees’ calls before and after the release of the model. The birds’ calls contained more A notes,
which other studies have found to be linked to alarm, after a model was released. Several studies, including one by Mark Nolen and Jeffrey Lucas, have measured chickadees’ responses to models of perched predators (above, right). Nolen and Lucas wired painted plastic models of the Eastern screech owl (Megascops asio) to trees in a reserve along the Wabash River in west central Indiana. They attached a speaker below the model and used it to play back calls made by chickadees exhibiting mobbing behavior. These calls are rapid and contain a high proportion of D notes. A microphone and recorder were placed nearby. When calls were played back, mixed-species groups, composed predominantly of chickadees but also including nuthatches and titmice, mobbed the model, flying toward it together. Results from multiple recordings revealed that species may interact during mobbing more than had previously been thought.
Figure 6. When the first chickadee to find food at a feeder produces chick-a-dee calls, those calls contain more D notes before the second chickadee arrives. This suggests that a larger number of D notes may serve a recruitment function, alerting other birds to the presence of the food resource. Each line in the graph at right represents the average number of D notes in calls of a single bird that arrived first at a feeder and produced chick-a-dee calls: The left end of the line shows the number of D notes before another chickadee arrived, and the right end shows the number of D notes after it arrived. (Photograph courtesy of Todd M. Freeberg. Graph data from E. J. Mahurin and T. M. Freeberg, 2009.)
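The paired, within-bird comparison behind Figure 6 is simple enough to sketch in a few lines of code. The counts below are invented for illustration (the actual measurements are in Mahurin and Freeberg's 2009 paper); the point is only the shape of the analysis, one before-and-after pair per bird:

```python
# Hypothetical per-bird average D notes per call, before and after a second
# chickadee arrived at the feeder. Values are invented for illustration.
before = [9.2, 7.5, 6.8, 8.1, 5.9]
after = [4.1, 3.8, 5.0, 2.9, 3.5]

# One paired difference per bird, like the connected line segments in Figure 6.
diffs = [b - a for b, a in zip(before, after)]
mean_drop = sum(diffs) / len(diffs)

print(f"mean drop in D notes per call after arrival: {mean_drop:.2f}")
# Every hypothetical bird used fewer D notes once a flockmate had arrived.
assert all(d > 0 for d in diffs)
```

A paired design like this controls for differences between individual birds, which is why the figure draws one line per bird rather than pooling all calls.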
2006 study by Freeberg, chickadees placed into large captive flocks used calls with greater information content compared to chickadees placed into small captive flocks, suggesting that the diversity of messages is greater in more complex social groups. These experimental changes to the social groups of chickadees must have generated neural and physiological changes in the individuals in the study, yet we know relatively little about this aspect of the call. Sturdy’s laboratory has carried out a number of exciting studies related to the perception and discrimination of calls in individuals. Female black-capped chickadees reared in isolation fail to develop the ability to perceive
relative pitch of males’ songs. However, we know relatively little about the ontogeny of call variation in young parids interacting with parents and, later, with nonrelated adults in their social groups. More work on proximate questions related to call variation is needed. At an ultimate level of analysis, we can infer that the call is homologous across many different parid species, suggesting a fundamentally comparable call system in common ancestors to today’s chickadees, tits and titmice. We know a fair amount about call variation in a few species, but the calls of most parid species have been little studied, let alone the question of whether call variation corresponds to different envi-
ronmental or behavioral contexts. As a result, we cannot yet answer many fairly basic questions about the evolution of call variation. At the functional level, we can infer that the call is adaptive in bringing about social cohesion in parid species, because variation in the call can recruit, alarm or potentially signal movement for members of both conspecific and heterospecific flocks. Whether variation in signaling with the call is related to differences in survival or reproduction is an open question. Nonetheless, a number of hypotheses have been proposed to explain the adaptive significance of call variation in parids. First, the complexity of the social group might influence vocal complex-
Figure 7. When a Carolina chickadee calls while in flight or just before taking flight, its calls contain more C notes than do the calls it produces in other contexts. This difference suggests that increased C notes in calls are related to signaler movement. The graph at right shows mean C notes per call when birds were not in flight (blue) and when they were flying (green). The error bars represent 95 percent confidence intervals. (Photograph courtesy of Amy O’Hatnick. Graph data from T. M. Freeberg, 2008.)
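The 95 percent confidence intervals shown as error bars in Figure 7 can be computed with the usual normal approximation. A minimal sketch, with invented C-note counts standing in for the real data from Freeberg's 2008 study:

```python
import math

# Hypothetical mean C notes per call for birds in nonflight and flight
# contexts. These numbers are invented; the real values are in Freeberg 2008.
nonflight = [0.2, 0.5, 0.3, 0.6, 0.4, 0.1, 0.5, 0.3]
flight = [1.1, 1.6, 1.3, 1.8, 1.2, 1.5, 1.7, 1.4]

def ci95(xs):
    """Normal-approximation 95 percent confidence interval for the mean."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)                  # 1.96 ~ z for 95 percent
    return mean - half, mean + half

lo_nf, hi_nf = ci95(nonflight)
lo_f, hi_f = ci95(flight)
print(f"nonflight: ({lo_nf:.2f}, {hi_nf:.2f})  flight: ({lo_f:.2f}, {hi_f:.2f})")
# Non-overlapping intervals: flight calls carry more C notes in this sketch.
assert hi_nf < lo_f
```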
Figure 8. Why is the chick-a-dee call so complex and varied? Researchers have proposed several hypotheses. The social complexity hypothesis (top left) suggests that animals in larger, more complex social groups will have greater variation in their vocalizations than will animals in smaller, less complex groups. The predation pressure hypothesis (top right) states that complex calls evolve in response to heightened presence of predators. According to the habitat complexity hypothesis (bottom), animals living in more complex physical environments have need of a wider repertoire of signals to communicate messages to group members. These three are not the only suggested sources of the chick-a-dee call’s complexity, and the call may have emerged as a result of some combination of factors. Further research should help elucidate which of these possibilities are valid.
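Studies such as Freeberg's 2006 experiment quantify the "information content" of calls with information-theoretic measures. A minimal sketch of one such measure, the Shannon entropy of note-type usage, using invented tallies (the real analyses, as in the Freeberg and Lucas 2012 paper cited in the bibliography, are considerably more involved):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy, in bits, of a distribution given raw counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# Hypothetical tallies of A, B, C and D notes recorded from a small flock and
# a large flock. The numbers are invented for illustration only.
small_flock = {"A": 5, "B": 2, "C": 1, "D": 40}
large_flock = {"A": 18, "B": 12, "C": 9, "D": 25}

h_small = shannon_entropy(small_flock)
h_large = shannon_entropy(large_flock)
print(f"entropy: small flock {h_small:.2f} bits, large flock {h_large:.2f} bits")
# More even use of the four note types carries more potential information.
assert h_large > h_small
```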
ity. This argument is known as the social complexity hypothesis for communicative complexity, and it is supported by findings from a range of mammals, birds and nonavian reptiles, and from auditory, chemical and visual modalities. For the chick-a-dee call, the social complexity hypothesis predicts that populations in which individuals occur in larger groups, or in groups with greater network complexity, will have more complex calling behavior than populations in which individuals occur in smaller groups or in groups with little network complexity. If future research supports this hypothesis, we will be able to infer that social pressures that arise from interacting with the same individuals over time, in both competitive and cooperative contexts, require a flexible and diverse repertoire of signals. If the complexity of an individual’s social group impacts the diversity of vocal signals used in social interaction, that social group can be seen as both a context for vocal development and a potential selective pressure that could impact vocal behavior.

Selection for increased signaling complexity in parids may also come from other species in mixed-species flocks. For example, Mark Nolen and Lucas found in a 2009 study that both white-breasted nuthatches (Sitta carolinensis) and tufted titmice interact vocally with Carolina chickadees when mobbing predators. The primary vocal signal used by chickadees under these conditions is the chick-a-dee call. Moreover, Chris Templeton and Erick Greene of the University of Montana in 2007 suggested that nuthatches can decode information about predation risk from calls, and recently Stacia Hetrick and Kathryn Sieving of the University of Florida found that chickadees can decode information about predation risk from the chick-a-dee calls of tufted titmice. These findings show that a complex call provides relatively fine-scale information about predation risk to conspecifics and heterospecifics. Both types of association have fitness consequences. The complexity of conspecific and mixed-species flocks may therefore drive the diversity and complexity of vocal signaling systems.

Another hypothesis proposed to explain call complexity is the predation pressure hypothesis, which has support from a number of studies in primate species. It predicts that populations facing intense predation pressure, or a variety of predator types, should have more complex calling behavior than populations facing relatively light predation pressure. This hypothesis, then, would predict that parid populations or species that face a large number of different predators have a more complex call than parid populations or species that occur in areas with few predators.

One more hypothesis to consider for call complexity relates to the physical environment in which individuals live. Parid populations or species living in complex physical environments, such as those containing a mix of open, closed and edge habitat, may require more complex calls to communicate effectively, compared to populations or species living in relatively simple physical habitats, such as exclusively coniferous forests. These three hypotheses (and there are others) may each explain the complexity and variation in chick-a-dee calls that we see. Perhaps our biggest need in answering this question is for large comparative data sets from multiple populations or multiple species, with which to test the various hypotheses.

Figure 9. The great tit, Parus major, is native to Europe, the Middle East and central and northern Asia, and it is among the most thoroughly studied of parid species. Unlike most parid species, great tit flocks have fluid social structures and are not highly territorial. The species could help researchers understand what relation might exist between social-group complexity and call complexity. Above, a flock of great tits congregates at a feeder. (Photograph courtesy of Jorma Tenovuo.)

Complexities upon Complexities

We have discussed sociality in parids in light of the benefits of grouping, but we would be remiss if we did not point out that grouping also brings costs. Foraging in a group reduces energetic costs—individuals have more time to find and process food because they can spend less time detecting predators. But flocking also results in increased competition for resources and may generate higher stress levels. It may also increase transmission of, and reduce resistance to, parasites and pathogens. More work on the costs of grouping in parids should shed considerable light on the pressures individuals and their signaling systems face in complex social groups.

The Paridae family seems ideal for testing hypotheses for communicative complexity. As Jan Ekman of Uppsala Universitet pointed out in a 1989 study, it has considerable variation across species in key social dimensions such as group size, presence and number of heterospecifics in mixed-species flocks, and presence or absence of winter territories. For example, flocks in great tits (Parus major) are reported to range from 2 to roughly 50 individuals (see Figure 9). It is hard to determine flock size in this species, however, because great tits do not have a stable flock structure over time (individuals often move in and out of groups) or space (their flocks, unlike those of many other parids, are not territorial). Recent advances in assessing social networks in animal groups should prove important to determining social complexity in this species. We believe great tits could be a key species for testing functional hypotheses regarding call complexity.

Does the variation in social complexity we have been describing here explain variation in the structure and use
of chick-a-dee calls? This straightforward question, like the questions raised by other hypotheses, remains unanswered simply because social and vocal behavioral data are needed for a greater number of parids than have been studied to date. For example, we know very little about the vocal behavior and social structure of African parids in the species-rich Melaniparus group, or of South and East Asian parids. Thus far, only one example has been documented of commonly occurring reversals of note-ordering rules (where, for example, calls have both a note type 1–note type 2 order and a note type 2–note type 1 order): In 1994, Jack Hailman of the University of Wisconsin documented this variation in the call of the black-lored tit, Parus xanthogenys, of India. The finding is an exciting and potentially important one: Vocal flexibility of this kind would greatly increase call complexity, and it has the potential to increase the variety of meaning receivers could obtain from calls. Such ability might also bring the call closer to the notion of syntax in human language—in which, for instance, “the child spoke to the toy” has a very different meaning than “the toy spoke to the child.” However, we can say very little about the potential pressures influencing the call system of the black-lored tit because so little is known about its social behavior or about closely related species in this geographical area. We hope that this article will inspire increased efforts at understanding the social and vocal behavior of parids—such understanding is needed to determine the evolution of signaling complexity in these species. Furthermore, greater knowledge of the pressures shaping the chick-a-dee call system just might tell us a little more about the pressures that shape and constrain our own complex vocal system.
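The note-ordering rules discussed above can be expressed compactly. In the chick-a-dee call as Hailman and colleagues described it, note types follow a fixed order (any A notes precede any B notes, which precede any C notes, which precede any D notes, with each type optional and repeatable), which corresponds to the regular expression A*B*C*D*. A sketch, with the reversal case illustrating the kind of exception Hailman documented in the black-lored tit:

```python
import re

# The canonical chick-a-dee ordering rule: A notes, then B, then C, then D,
# each type repeatable or absent entirely.
ORDER_RULE = re.compile(r"^A*B*C*D*$")

def follows_rule(call):
    """Return True if a call's note sequence obeys the A-B-C-D ordering."""
    return bool(ORDER_RULE.match(call))

print(follows_rule("AABDDD"))  # True: note types appear in canonical order
print(follows_rule("DDA"))     # False: a reversal of the ordering rule
```

A receiver that expects the A*B*C*D* pattern gets predictability; a species whose calls admit reversals, as reported for the black-lored tit, has a combinatorially larger space of distinct calls available.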
Acknowledgments

We thank Harriet Bowden, Sheri Browing, Gordon Burghardt, Esteban Fernandez-Juricic, Megan Gall, Jessica Owens, Kelly Ronald and Luke Tyrell for helpful comments on earlier drafts of this article. Todd Freeberg thanks the J. William Fulbright Scholarship Board for a teaching award in Latvia in the spring of 2012, which helped make the writing and preparation of this article possible.
Bibliography

Audubon, John James. 1840. Birds of America, first octavo edition. Online edition compiled by Richard Buonanno. http://web4.audubon.org/bird/boa/F10_G1c.html

Bloomfield, L. L., L. S. Phillmore, R. G. Weisman and C. B. Sturdy. 2005. Note types and coding in parid vocalizations. III: The chick-a-dee call of the Carolina chickadee (Poecile carolinensis). Canadian Journal of Zoology 83:820–833.

Ekman, J. 1989. Ecology of non-breeding social systems of Parus. Wilson Bulletin 101:263–288.

Freeberg, T. M. 2006. Social complexity can drive vocal complexity: Group size influences vocal information in Carolina chickadees. Psychological Science 17:557–561.

Freeberg, T. M. 2008. Complexity in the chick-a-dee call of Carolina chickadees (Poecile carolinensis): Associations of context and signaler behavior to call structure. Auk 125:896–907.

Freeberg, T. M. 2012. Geographic variation in note composition and use of chick-a-dee calls of Carolina chickadees (Poecile carolinensis). Ethology 118:555–565.

Freeberg, T. M., and J. R. Lucas. 2012. Information theoretical approaches to chick-a-dee calls of Carolina chickadees (Poecile carolinensis). Journal of Comparative Psychology 126:68–81.

Hailman, J. P. 1989. The organization of major vocalizations in the Paridae. Wilson Bulletin 101:305–343.

Hailman, J. P. 1994. Constrained permutation in “chick-a-dee”-like calls of a black-lored tit (Parus xanthogenys). Bioacoustics 6:33–50.

Hetrick, S. A., and K. E. Sieving. 2012. Antipredator calls of tufted titmice and interspecific transfer of encoded threat information. Behavioral Ecology 23:83–92.

Krama, T., I. Krams and K. Igaune. 2008. Effects of cover on loud trill-call and soft seet-call use in the crested tit Parus cristatus. Ethology 114:656–661.

Krams, I., T. Krama, T. M. Freeberg, C. Kullberg and J. R. Lucas. 2012. Linking social complexity and vocal complexity: A parid perspective. Philosophical Transactions of the Royal Society of London, B 367:1879–1891.

Lucas, J. R., A. Schraeder and C. Jackson. 1999. Carolina chickadee (Aves, Paridae, Poecile carolinensis) vocalization rates: Effects of body mass and food availability under aviary conditions. Ethology 105:503–520.

Mahurin, E. J., and T. M. Freeberg. 2009. Chick-a-dee call variation in Carolina chickadees and recruiting flockmates to food. Behavioral Ecology 20:111–116.

Mostrom, A. M., R. L. Curry and B. Lohr. 2002. Carolina chickadee (Poecile carolinensis). In The Birds of North America, No. 636 (A. Poole and F. Gill, eds.). Philadelphia, PA: The Birds of North America, Inc. pp. 1–28.

Nolen, M. T., and J. R. Lucas. 2009. Asymmetries in mobbing behaviour and correlated intensity during predator mobbing by nuthatches, chickadees and titmice. Animal Behaviour 77:1137–1146.

Soard, C. M., and G. Ritchison. 2009. “Chick-a-dee” calls of Carolina chickadees convey information about degree of threat posed by avian predators. Animal Behaviour 78:1447–1453.

Sturdy, C. B., L. L. Bloomfield, I. Charrier and T. T.-Y. Lee. 2007. Chickadee vocal production and perception: An integrative approach to understanding acoustic communication. In Ecology and Behavior of Chickadees and Titmice: An Integrated Approach (K. A. Otter, ed.). Oxford: Oxford University Press. pp. 153–166.

Templeton, C. N., and E. Greene. 2007. Nuthatches eavesdrop on variations in heterospecific chickadee mobbing alarm calls. Proceedings of the National Academy of Sciences of the U.S.A. 104:5479–5482.

Zachau, C. E., and T. M. Freeberg. 2012. Chick-a-dee call variation in the context of “flying” avian predator stimuli: A field study of Carolina chickadees (Poecile carolinensis). Behavioral Ecology and Sociobiology 66:683–690.

For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/issues/id.98/past.aspx

Matter and Void

On the subject of endings: the world gives signs of its tiny goodbyes. My pinhole camera captures a bald shrub and the crater in the grass where the dog has napped. Across the yard, the roughneck delivery man shuts his empty truck with a little bang. He makes a radio call as he leaves in which I imagine he says either I’ve got four claims of damage or Honey, I love you, but I can’t anymore. Birds are dropping out of the trees from thirst; all summer I scoop up their needle-boned evidence with a spade. Not even light can escape such hollowing, this huge mass in a small space. Even the Milky Way with its open arms is said to have a black hole at its heart.

—Susan B. A. Somers-Willett
Slicing a Cone for Art and Science
Renaissance artist Albrecht Dürer searched for beauty with mathematics
Daniel S. Silver
Albrecht Dürer (1471–1528), master painter and printmaker of the German Renaissance, never thought of himself as a mathematician. Yet he used geometry to uncover nature’s hidden formulas for beauty. His efforts influenced renowned mathematicians, including Gerolamo Cardano and Niccolò Tartaglia, as well as famous scientists such as Galileo Galilei and Johannes Kepler.

We praise Leonardo da Vinci and other Renaissance figures for embracing art and science as a unity. But for artists such as da Vinci and Dürer, there was little science to embrace. Efforts to draw or paint directly from nature required an understanding of physiology and optics that was not found in the ancient writings of Galen or Aristotle. It was not just curiosity but also need that motivated Dürer and his fellow Renaissance artists to initiate scientific investigations.

Dürer’s nature can seem contradictory. Although steadfastly religious, he sought answers in mathematics. He was outwardly modest but inwardly vain. He fretted about money and forgeries of his work, yet to others he appeared to be a simple man, ready to help fellow artists. Concern for young artists motivated Dürer to write an ambitious handbook for all disciplines of artists. It has the honor of being the first serious mathematics book written in the German language. Its title, Underweysung der Messung, might be translated as A Manual of Measurement. Walter Strauss, who translated Dürer’s work into English, gave the volume a pithy and convenient moniker: the Painter’s Manual.

Daniel S. Silver received his Ph.D. in mathematics from Yale University in 1980. Much of his current research explores the relation between knots and dynamical systems. Other active interests include the history of science and the psychology of invention. He is a professor at the University of South Alabama. Address: Department of Mathematics and Statistics, ILB 325, University of South Alabama, Mobile, AL 36688-0002. E-mail: [email protected]

Figure 1. A detail from Albrecht Dürer’s Melencolia I from 1514 shows a magic square, in which each row, column and main diagonal sum to the same total, in this case 34.

Dürer begins his extraordinary manual with apologetic words, an inversion of the famous warning of Plato’s Academy (“Let no one untrained in geometry enter here”):

“The most sagacious of men, Euclid, has assembled the foundation of geometry. Those who understand him well can dispense with what follows here, because it is written for the young and for those who lack a devoted instructor.”

The manual was organized into four books and printed in Nüremberg in 1525, just three years before the artist’s death. It opens with the definition of a line, and it closes with a discussion of elaborate mechanical devices for accurate drawing in perspective. In between can be found descriptions of spirals, conchoids and other exotic curves. Constructions of regular polygons are given. Cut-out models (“nets”) of polyhedra are found. There is also an important section on typography,
containing a modular construction of the Gothic alphabet. An artist who wishes to draw a bishop’s crozier will learn how to do it with a compass and ruler. An architect who wants to erect a monument might find some sort of inspiration in Dürer’s memorial to a drunkard, a humorous design complete with coffin, beer barrel and oversized drinking mug.

Scholarly books of the day were generally written in Latin. Dürer wrote Underweysung der Messung in his native language because he wanted it to be accessible to all German readers, especially those with limited formal education. But there was another reason: Dürer’s knowledge of Latin was rudimentary. Others later translated Underweysung der Messung into several different languages, including Latin.

There was no reason to expect that Dürer should have been fluent in Latin. As the son of a goldsmith, he was lucky to have gone to school at all. Fortunately for the world, Dürer displayed his unusual intelligence at an early age. “My father had especial pleasure in me, because he said that I was diligent in trying to learn,” he recalled. He was sent to school, possibly the nearby St. Sebald parochial school, where he learned to read and write. He and his fellow students carried slates or wax writing tablets to class. (Johannes Gutenberg had invented a printing press only 40 years before, and books were still a luxury.) Learning was a slow, oral process.

When Dürer turned 13, he was plucked from school so that he could begin learning his father’s trade. At that age, he produced a self-portrait that gives a hint of his emerging artistic skill. Self-portraits at the time were rare. Dürer produced at least 11 more during his lifetime.

What might have inspired a tradesman’s son to study the newly rediscovered works of ancient Greek mathematicians such as Euclid and Apollonius? Part of the answer can be
found in the intellectual atmosphere of Nüremberg at the time. In 1470, Anton Koberger founded the city’s first printing house. One year later, he became Dürer’s godfather. Science and technology were so appreciated in Nüremberg that the esteemed astronomer Johannes Müller von Königsberg, also known as Regiomontanus (1436–1476), settled there and built an observatory.

The rest of the answer can be found in the dedication of the Painter’s Manual: “To my especially dear master and friend, Herr Wilbolden Pirckheymer, I, Albert Dürer wish health and happiness.” This master, whose name is more commonly spelled Willibald Pirckheimer (1470–1530), was a scion of one of Nüremberg’s most wealthy and powerful families. He was enormous in many ways, both physically and in personality, as well as boastful and argumentative. He was also a deeply knowledgeable humanist with a priceless library. Pirckheimer’s house was a gathering place for Nüremberg’s brilliant minds. Despite the wide difference between their social rankings, Dürer and Pirckheimer became lifelong friends. Pirckheimer depended on Dürer to act as a purchasing agent during his travels, scouting for gems and other valuable items. Dürer depended on Pirckheimer for access to rare books and translation from Greek and Latin.

Wikimedia Commons

Figure 2. Melencolia I has been heavily debated among art historians. Is the angel’s dejection due to her inability to discover beauty’s secret? The engraving reflects Dürer’s mathematical interests. Note an open compass in the angel’s hand and a magic square above the angel’s head. Dürer’s mistaken belief that ellipses were egg-shaped is reflected in the shape of the bell opening. His quest to extend the mathematics behind beauty to artists led him to publish a primer that ended up influencing scientists as well as artists.

The word “Messung” meant more to Dürer than simple measurement. “Harmony” might have been closer to the mark.

In his youth, possibly in 1494, Dürer had marveled over a geometrically based drawing of male and female figures by the Venetian artist Jacopo de’ Barbari (about 1440–1516). Despite the fact that de’ Barbari was unwilling to share his methods—or maybe because of it—Dürer became convinced that the secrets of beauty might be found by means of mathematics. Dürer was only 23 years old at the time. He devoted the remaining three decades of his life to the search, for as he reflected some years later, “I would rather have known what [de’ Barbari’s] opinions were than to have seen a new kingdom.” Geometry, recovered from ancient works, lit his way.

The gravity of Dürer’s quest can be sensed in his enigmatic engraving Melencolia I, shown in Figure 2. Now approaching the 500th anniversary of its creation, Melencolia I has been the subject of more academic debate than any other print in history. Is the winged figure dejected because she has tried but failed to discover beauty’s secret? She holds in her hand an open compass. Above her head is a magic square, the first to be seen in Western art. (In a magic square, the numbers in each row and column, as well as the two main diagonals, add to the same total, in this case 34. In this one, the date of the engraving, 1514, appears in the lowest row.) Clearly Dürer’s mathematical interests were not limited to geometry.

When he wrote the Painter’s Manual, Dürer was approaching the end of a successful career. As a young man eager to learn more about the new science of perspective and to escape outbreaks of plague at home, he had made two trips to Italy. After the first journey, his productivity soared. Dürer’s self-portrait of
Wikimedia Commons
Figure 3. Dürer produced at least a dozen self-portraits during his lifetime. The first, at age 13 (left), hinted at his emerging artistic gifts. A second produced in 1498 at age 27 (middle) showed him at the peak of a successful career, when his confidence was expanding and his productivity soaring. A final self-portrait in 1522, at age 51 (right), shows the artist after his body has been ravaged by a disease that killed him a few years later.
1498 radiates an expanding confidence. (It was not the first time that plague encouraged scientific discovery, nor was it the last. In 1666 Isaac Newton escaped an outbreak of plague at Cambridge University, returning to his mother’s farm, where he had the most profitable year that science has ever known.)

During his second visit to Italy, Dürer met with fellow artists including the great master Giovanni Bellini, who praised his work. Dürer came to the conclusion that German artists could rise to the heights of the Italians, but only if they learned the foundations of their art. Such a foundation would prevent mistakes—and such a foundation required geometry. He returned with an edition of Euclid that bears his inscription: “I bought this book at Venice for one ducat in the year 1507—Albrecht Dürer.” Dürer purchased a house in Nüremberg and began to study mathematics.

The Painter’s Manual was not the book that he had originally planned to write. He had started work on Vier Bücher von Menschlicher Proportion (“Four Books on Human Proportion”), but soon realized that the mathematical demands that it placed on young readers were too great. The Painter’s Manual was intended as a primer. Work on the Painter’s Manual, too, was temporarily halted when, in 1523, Dürer
acquired 10 books from the library of Nüremberg mathematician Bernhard Walther (1430–1504). Walther had been a student of Regiomontanus and had acquired important books and papers from him. But Walther was a moody man who denied others access to this valuable cache. After Walther died, his library remained with his executors for two decades. Finally its contents were released for sale. Dürer’s precious purchases were chosen and appraised by Pirckheimer. It took Dürer two more years to absorb the ideas these books contained. The completion of the Painter’s Manual would just have to wait.

It would be a book for artists, or so Dürer thought. Nevertheless he allowed himself to be carried aloft by mathematics. “How is it that two lines which meet at an acute angle which is made increasingly smaller will nevertheless never join together, even at infinity?” he asks (and proceeds to give a strange explanation). Later he writes: “If you wish to construct a square of the same area as a triangle with unequal sides, proceed as follows.” It is difficult to imagine any artist of the 16th century making use of such ideas. These are the thoughts of a compulsive theoretician.

Time for Dürer to complete his Painter’s Manual was running out. In December 1520, he had foolishly
trekked to the swamps of Zeeland in the southwestern Netherlands, hoping to inspect a whale that had washed ashore. Alas, the whale had already washed away by the time he arrived. It was not a healthy place to visit, and the chronic illness that he contracted there eventually killed him after eight painful years. Dürer's self-portrait of 1522 contrasts disturbingly with his earlier one. In the words of Strauss: "It represents Dürer himself in the nude, with thinned, disheveled hair and drooping shoulders, his body ravaged by his lingering disease." He fashioned himself as the Man of Sorrows.

No Matter How You Slice It

"The subject [Conic Sections] is one of those which seem worthy of study for their own sake." —Apollonius of Perga

Although there is much in the Painter's Manual that rewards close examination, one specific area worthy of concentration is Dürer's treatment of conic sections. The techniques that Dürer found to draw them anticipate the field of descriptive geometry that Gaspard Monge (1746–1818) developed later. The curves themselves would accompany a revolution in astronomy.
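The slicing that Dürer set out to draw can also be imitated numerically. The sketch below is my illustration, not a reconstruction of Dürer's own procedure: it cuts a right cone with a tilted plane and checks that the resulting section is a genuine ellipse, symmetric about its center.

```python
import math

# Right cone: x^2 + y^2 = z^2, apex at the origin, opening upward.
# Slicing plane: z = 1 + 0.5*x (tilted less steeply than the cone's
# side, whose slope is 1, so the section is an ellipse).
def half_width(x):
    """Half-width of the section, measured in y, at a given x."""
    val = (1 + 0.5 * x) ** 2 - x ** 2
    return math.sqrt(val) if val >= 0 else None

# Endpoints of the section along x: solve (1 + 0.5x)^2 = x^2,
# i.e. 0.75x^2 - x - 1 = 0, giving x = -2/3 and x = 2.
x_lo, x_hi = -2.0 / 3.0, 2.0
x_center = 0.5 * (x_lo + x_hi)

# Comparing widths at equal distances from the center, toward the
# apex and toward the base, shows the curve is symmetric.
for d in (0.2, 0.6, 1.0):
    w_apex = half_width(x_center - d)   # end nearer the cone's apex
    w_base = half_width(x_center + d)   # end nearer the cone's base
    assert abs(w_apex - w_base) < 1e-12
print("widths match: the section is a symmetric ellipse")
```

The same check, run at the two ends of the cut, is exactly the comparison that would have saved Dürer from the "egg line" error described later in the article.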
Figure 4. A plane slicing through a cone can produce several different shapes, called conic sections. Top and side views of the cone show how altering the angle of the plane results in a parabola (a), ellipse (b) or hyperbola (c). Dürer’s Painter’s Manual aimed to show artists how to draw these shapes correctly.
"The ancients have shown that one can cut a cone in three ways and arrive at three differently shaped sections," Dürer writes toward the end of Book I. "I want to teach you how to draw them."

Menaechmus (circa 350 B.C.), who knew Plato and tutored Alexander the Great, is thought to have discovered conic sections (often called simply "conics"). He found them while trying to solve the famous Delian problem of "doubling the cube." According to legend, terrified citizens of the Greek island of Delos were reassured by an oracle that plague would depart only after they had doubled the size of Apollo's cubical altar. Assuming that the altar had unit volume, the task of doubling it amounted to constructing a new edge of length precisely equal to the cube root of 2. Although the legend is doubtful, the Delian problem was certainly studied in Plato's Academy. Plato insisted on an exact solution accomplished using only ruler and compass. Ingenious ruler-and-compass constructions abound in the Painter's Manual. Dürer's construction of a regular pentagon is particularly noteworthy. The construction follows not Euclid but one taught by Ptolemy and found in his Almagest.

In 1837, the French mathematician Pierre Wantzel (1814–1848) proved that doubling the cube with ruler and compass is impossible. However, Menaechmus changed the rules of the game and managed to win. By intersecting a right-angled cone with a plane perpendicular to its side, he produced a curve that was later called a parabola. Then by intersecting two parabolas, chosen carefully, Menaechmus produced a line segment of length equal to the cube root of 2. (The parabola can be described by a simple equation, y² = px. The positive number p, called the latus rectum, is a parameter that uniquely describes the shape.) Menaechmus looked at other sorts of cones.
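Menaechmus's trick is easy to verify numerically. The sketch below (an illustration, not a claim about his actual construction) intersects the parabolas y = x² and y² = 2x; apart from the origin, they cross where x³ = 2, so the crossing point's x-coordinate is the cube root of 2.

```python
# On the parabola y = x^2, the second condition y^2 = 2x becomes
# x^4 = 2x; dividing by x (for x > 0) leaves x^3 - 2 = 0.
# Bisection on f(x) = x^3 - 2 locates the intersection.

def f(x):
    return x ** 3 - 2

lo, hi = 1.0, 2.0  # f(1) < 0 < f(2), so the root lies between
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)            # ≈ 1.2599210498948732
print(2 ** (1 / 3))  # the cube root of 2, for comparison
```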
When the cone's angle was either less than or greater than 90 degrees, two new types of curves resulted from its intersection with a plane. A century later, Apollonius of Perga (262–190 B.C.) called the three curves parabola, ellipse and hyperbola, choosing Greek words meaning, respectively, comparison, fall short and excess. Echoes are heard today in English words such as parable, ellipsis and hyperbole. Today there is debate as to whether the terms originated with Apollonius. In any event, they were likely adapted from earlier terminology of Pythagoras (570–circa 495 B.C.) concerning a construction known as "application of areas." The interpretation in terms of angle is historically inaccurate but mathematically equivalent and simpler to state.

Apollonius's accomplishments went beyond nomenclature. He made
Figure 5. Dürer gave detailed instructions in his manual for how to transcribe the cutting of a cone with a plane (seen in side and top view, left) into an ellipse. He called ellipses "egg lines" because he believed, mistakenly, that they were wider on the bottom than on the top. (Unless otherwise indicated, all photographs are courtesy of the author.)
Figure 6. Johannes Kepler’s Astronomia Nova from 1609 features a sketch of the retrograde motion of the planet Mars when viewed from Earth. Kepler made the discovery that the orbit of Mars is an ellipse with the Sun at one focus. His correspondence makes it clear that he had read Dürer’s description of conics. (Image courtesy of Linda Hall Library of Science, Engineering & Technology.)
a discovery that afforded a lovely simplification. Instead of employing three different types of cones, as Menaechmus did, Apollonius used a single cone. Then by allowing the plane to slice the cone at different angles, he produced all three conics.

Associated with an ellipse or hyperbola is a pair of special points called foci. (For a circle, a special case of the ellipse, the distance between the two foci is zero.) Distances to the foci determine the curves in a simple way: The ellipse consists of those points for which the sum of the distances to the foci is constant. Likewise, the hyperbola consists of points for which the difference is constant.

Tilt a glass of water toward you and observe the shape of the water's edge. It is an ellipse. So is the retinal image of a circle viewed from a generic vantage point. Johannes Kepler (1571–1630) made the profound discovery that the orbit of Mars is an ellipse with the Sun at one focus. Kepler introduced the word "focus" into the mathematics lexicon in 1604. It is a Latin word meaning hearth or fireplace. What word could be more appropriate for the location of the Sun?
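The sum-of-distances definition can be checked directly. The short sketch below (my illustration, with arbitrarily chosen semi-axes) samples points around an ellipse and confirms that the distance-sum to the two foci never varies.

```python
import math

# For an ellipse with semi-axes a > b, the foci sit at (±c, 0) with
# c = sqrt(a^2 - b^2), and every point on the curve has distance-sum
# 2a to the two foci.
a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)   # focal distance from the center (here 4.0)
f1, f2 = (-c, 0.0), (c, 0.0)

for k in range(360):
    t = math.radians(k)
    x, y = a * math.cos(t), b * math.sin(t)   # a point on the ellipse
    d1 = math.hypot(x - f1[0], y - f1[1])
    d2 = math.hypot(x - f2[0], y - f2[1])
    assert abs((d1 + d2) - 2 * a) < 1e-9      # constant sum: 2a = 10
print("sum of focal distances is constant all around the ellipse")
```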
Kepler's letter to fellow astronomer David Fabricius (1564–1617), dated October 11, 1605, reveals that Kepler had read Dürer's description of conics:

So, Fabricius, I already have this: That the most true path of the planet [Mars] is an ellipse, which Dürer also calls an oval, or certainly so close to an ellipse that the difference is insensible.

In fact, Dürer used a more flavorful term for an ellipse, as we will see. Nature's parabolas and hyperbolas are less apparent than the ellipse. A waterspout and the path of a cannonball have parabolic trajectories. The wake generated by a boat can assume the form of a hyperbola, but establishing that fact requires more mathematics—or a boat.

Egg Lines

The significance of Dürer's treatment of conics lies in the technique that he used for drawing them, a fertile method of parallel projection. Art historian Erwin Panofsky observed that the technique was "familiar to every architect and carpenter but never before applied to the solution of a purely mathematical problem." In brief, Dürer viewed a cut
cone from above as well as from the side, then projected downward. His trick was to superimpose the two views and then transfer appropriate measurements using dividers. In this way he relocated the curve from the cone to a two-dimensional sheet of paper.

Dürer's method was correct, but the master draftsman blundered while transferring measurements. He mistakenly believed that the ellipse was wider at the bottom of the cone than at the top, an understandable error considering the shape of the cone. As he transferred the distances with his dividers, his erroneous intuition took hold of his hand. Dürer writes: "The ellipse I call eyer linie [egg line] because it looks like an egg." Egg lines for ellipses can indeed be spotted in Dürer's work, such as in the bell in Melencolia I. Dürer knew no German equivalent of the Greek word "ellipse." The appellation he concocted drew attention to his error, and the egg line persisted in German works for nearly a century.

It is easy to understand why Kepler had an interest in Dürer's flawed analysis of the ellipse. For 10 years beginning in 1601, Kepler struggled to understand the orbit of Mars, a problem that had defeated Regiomontanus. Until he understood that the orbit was an ellipse, Kepler believed that it was some sort of oval. In fact, he specifically used the word "oval," a descendant of the Latin word "ovum," meaning egg.

Kepler was not the first to believe that a planet's orbit might be egg-shaped. Georg von Peuerbach (1423–1461), a teacher of Regiomontanus, had said as much in Theoricae novae planetarum. Published in Nüremberg in 1473 and reprinted 56 times, Peuerbach's treatise influenced both Copernicus and Kepler. The 1553 edition, published by Erasmus Reinhold (1511–1553), a pupil of Copernicus, included a comment about Mercury's orbit that might have caused Kepler to go back to the Painter's Manual:

Mercury's [orbit] is egg-shaped, the big end lying toward his apogee, and the little end towards its perigee.
Later Kepler had this to say about the orbit of Mars:

The planet's orbit is not a circle but [beginning at the aphelion] it
curves inward little by little and then [returns] to the amplitude of a circle at [perihelion]. An orbit like this is called an oval.

Burning Mirrors

It is a safe bet that few artists in 16th-century Germany felt the need to draw a parabola. In what seems like a marketing effort, Dürer tells his readers how it can be fashioned into a weapon of mass destruction. The story that Archimedes set fire to an invading Roman fleet during the siege of Syracuse was well known in Dürer's time. Diocles, a contemporary of Archimedes, explained the principle in his book On Burning Mirrors, preserved by Muslim scholars in the 9th century. Diocles had observed something special about the parabola that had escaped the notice of Apollonius: On its axis of symmetry there is a point—a single focus—with the property that if a line parallel to the axis reflects from the parabola with the same angle with which it strikes, the reflected line will pass through the focus. In physical terms, a mirror in the shape of a paraboloid, a parabola revolved about its axis, will gather all incoming light at the focus. Collect enough light, and whatever is at the focus will become hot.

Making an effective burning mirror is not a simple matter. Unless the parabola used is sufficiently wide, the mirror will not collect enough light. Dürer writes:

If you plan to construct a burning mirror of paraboloid shape, the height of the cone you have to use should not exceed the diameter of the base—or this cone should be of the shape of an equilateral triangle.

Dürer goes on to explain why the angle of incidence of a light beam striking a mirror is equal to the angle of reflection. An elegant drawing of an artisan (possibly the artist) holding a pair of dividers does little to help matters (see Figure 7). Dürer probably sensed that he was getting into a rough technical patch. He concludes the section desperately:

The cause of this has been explained by mathematicians. Whoever wants to know it can look it up in their writings. But I have drawn my explanation . . . in the figure below.

Figure 7. Dürer used a method called parallel projection when transcribing figures, such as this parabola, from three to two dimensions. He viewed a cut cone from above as well as from the side, then projected downward, superimposing the two views and transferring appropriate measurements using dividers (top). Dürer argued that correctly angled parabolas could be used as burning mirrors, heating whatever is at the focus (bottom left). He tried to explain light angles with a drawing of an artisan (top right). However, he miscalculated the placement of the focal point, so a correction had to be pasted by hand into each copy of his 1525 publication. The original mistake is revealed behind the correction when the page is backlit (bottom right).

Burning mirrors might have sounded useful to readers of the Painter's
Manual. The first scientific evidence that Archimedes's mirrors might not have been such a hot idea had to wait until 1637, when René Descartes expressed doubts in his treatise Dioptrique. Nevertheless, since the time of Archimedes, burning mirrors, whatever their effectiveness, were constructed in a more practical, approximate fashion with sections of a sphere. (In 1668, Isaac Newton designed the first reflecting telescope on the principle of the burning mirror, with an eyepiece near the focus. He substituted a spherical mirror to simplify its construction.)

It seems that Dürer liked parabolas and was determined to write about them. Dürer invented German names for the parabola and the hyperbola as well as the ellipse. The parabola he called a Brenn Linie ("burn line"). "And the hyperbola I shall call gabellinie [fork line]," he writes, but he offers no explanation for his choice. Nor, it seems, has a reason been suggested by anyone else. Dürer might have been paying tribute to the many gabled houses of which
Figure 8. Johannes Werner, a contemporary of Dürer, made contributions to geography, meteorology and mathematics. His parabola (left) was published in 1522, at a time when Dürer was studying conics. It is likely that Dürer's parabola, published in 1525 (right), was influenced by Werner's. The method of parallel projections that is often credited to Dürer might well have derived from Werner's construction. However, Werner used an oblique cone, whereas Dürer's was a right cone, so Werner's formula for the location of the focus no longer applied.
Nüremberg was proud, the artist's own home near the Thiergärtnerthor included. In the Painter's Manual, Dürer constructs the hyperbola but has little to say about it.

Much of what Dürer knew about conic sections came from Johannes Werner (1468–1522). A former student of Regiomontanus, Werner was an accomplished instrument maker. He made contributions to geography, meteorology and mathematics. A lunar impact crater named in his honor is not far from a crater named Regiomontanus. Werner's Libellus super viginti duobus elementis conicis was published in 1522, at the time when Dürer was studying conics. The volume's 22 theorems were intended to introduce the author's work on the Delian problem. From handwritten notes, it appears that Werner died during his book's printing. (Werner's book soon became very rare. It is reported that the Danish astronomer Tycho Brahe could not find a copy for sale anywhere in Germany.)

Since 1508, Werner had been serving as priest at the Church of St. John, not far from Dürer's house. Like Pirckheimer, Werner acquired some of the rare books and papers that had been in Walther's possession. However, Werner knew no Greek
and probably relied on Pirckheimer for translation. (His commentary on Ptolemy, published in 1514, is dedicated to Pirckheimer.) Like Dürer, Werner would have been a frequent visitor to Pirckheimer's house.

I believe that Dürer was inspired by Werner's novel construction of the parabola (see Figure 8). The cone that Werner used was an oblique cone with vertex directly above a point on the base circle. A cut by a vertical plane produced the parabola. Regularly spaced circular cross-sections of the cone appear in the lower diagram, each tangent to a point that lies directly below the vertex of the cone. The cutting plane is seen in profile as a line through the points labeled b and f. By transferring the segments cut by the circles along the line, Werner produced the semi-arcs transverse to the line through k and n in the upper figure. Had Dürer seen the picture, which is likely, Germany's master of perspective would have had no trouble imagining the tangent circles stacked in three dimensions, the smallest coming closest to his eye. The method of parallel projections that is often credited to Dürer might well have derived from Werner's construction.

In Appendix Duodecima of his book, Werner explains the reflective
properties of the parabola to his audience. He also tells the reader how to locate the focus: Its distance from the vertex is one quarter of the length of the segment ab. (The length of ab, which is equal to the length of kn, is the latus rectum of the parabola—the distance between the slicing plane and the vertex of the cone.) But Dürer used a right cone with vertex directly above the center of the circular base, so his cross-sectional circles became concentric rather than tangent to a single point as in Werner's diagram. Unfortunately for Dürer, Werner's formula for the location of the focus no longer applied. Whether Dürer computed the distance incorrectly or merely guessed, we do not know. However, in every copy of the 1525 publication a small piece of paper with the correct drawing had to be pasted by hand over the erroneous one. Holding the final product up to the light reveals Dürer's mistake.

Dürer and Creativity

For Albrecht Dürer, questions of technique eventually gave way to those of philosophy. In 1523, he wondered at the way "one man may sketch something with his pen on half a sheet of paper in
one day . . . and it turns out to be better and more artistic than another's big work at which its author labors with the utmost diligence for a whole year." The belief that divine genius borrows the body of a fortunate artist was common in Dürer's time. According to Panofsky, Leonardo da Vinci would have been perplexed had someone called him a genius. But Dürer had begun to see the creative process differently. For him, it became one of synthesis governed by trained intuition.

Dürer's last name was likely derived from the German word tür, meaning door. (His father was born in the Hungarian town of Ajtós, which is related to the Hungarian word for door, ajtó.) It is a fitting name for someone who opened a two-way passage between mathematics and art. As Panofsky observed: "While [the Painter's Manual] familiarized the coopers and cabinetmakers with Euclid and Ptolemy, it also familiarized the professional mathematicians with what may be called 'workshop geometry.'"

Dürer used geometry to search for beauty, but he never regarded mathematics as a substitute for aesthetic vision. It was a tool to help the artist avoid errors. However, the Painter's Manual demonstrates that mathematics and, in particular, geometry meant much more to him. Four centuries after its publication, poet Edna St. Vincent Millay wrote: "Euclid alone has looked on Beauty bare." Dürer might have agreed.

Figure 9. Dürer's engraving from 1514, titled St. Jerome in His Study, is noted for its use of unusual mathematical perspective, which invites the viewer into the snug chamber. It is rich with symbolism related to the theological and contemplative aspects of life in Dürer's time. (Scala/Art Resource, NY)

Bibliography

Coolidge, Julian L. 1968. A History of the Conic Sections and Quadric Surfaces. New York: Dover Publications.
Dürer, Albrecht. 1525. A Manual of Measurement [Underweysung der Messung]. Translated by Walter L. Strauss, 1977. Norwalk, CT: Abaris Books.
Eves, Howard. 1969. An Introduction to the History of Mathematics, Third Edition. Toronto: Holt, Rinehart and Winston.
Guppy, Henry, ed. 1902. The Library Association Record, Volume IV. London: The Library Association.
Heaton, Mrs. Charles. 1870. The History of the Life of Albrecht Dürer of Nürnberg. London: Macmillan and Co.
Herz-Fischler, Roger. 1990. Dürer's paradox or why an ellipse is not egg-shaped. Mathematics Magazine 63(2):75–85.
Kepler, Johannes. 1937. Johannes Kepler, Gesammelte Werke. Vol. 15, letter 358, l. 390–392, p. 249. Walter von Dyck and Max Caspar, eds. Munich: C. H. Beck.
Knowles Middleton, William Edgar. 1961. Archimedes, Kircher, Buffon, and the burning-mirrors. Isis 52(4):533–543.
Pack, Stephen F. 1966. Revelatory Geometry: The Treatises of Albrecht Dürer. Master's thesis, School of Architecture, McGill University.
Panofsky, Erwin. 1955. The Life and Art of Albrecht Dürer, Fourth Edition. Princeton: Princeton University Press.
Rupprich, Hans. 1972. Wilibald Pirckheimer. In Pre-Reformation Germany, Gerald Strauss, ed. London: Harper and Row.
Russell, Francis. 1967. The World of Dürer. New York: Time Incorporated.
Strauss, Gerald, ed. 1972. Pre-Reformation Germany. London: Harper and Row.
Thausing, Moriz. 1882. Albert Dürer: His Life and Works. Translated by Fred A. Eaton, 2003. London: John Murray Publishers.
Toomer, Gerald J. 1976. Diocles on burning mirrors. In Sources in the History of Mathematics and the Physical Sciences 1. New York: Springer.
Werner, Johannes. 1522. Libellus super viginti duobus elementis conicis. Vienna: Alantsee.
Westfall, Richard S. 1995. The Galileo Project: Albrecht Dürer. http://galileo.rice.edu/Catalog/NewFiles/duerer.html
Wörz, Adéle Lorraine. 2006. The Visualization of Perspective Systems and Iconography in Dürer's Works. Ph.D. dissertation, Department of Geography, Oregon State University.
For relevant Web links, consult this issue of American Scientist Online: http://www.americanscientist.org/ issues/id.98/past.aspx
Valentine is five hours old
Valentine falls into a hole
Concerned siblings surround Valentine
Siblings try to free Valentine
They keep trying
Valentine stays stuck
Older brother runs to Valentine
Brother extends a helping trunk
Valentine gets a leg up
Older sister pushes the baby
Valentine makes more progress
The baby elephant is safe
Sightings
Forest Elephant Chronicles

The best-known African elephants are savannah elephants, those massive occupants of grassy plains and bush lands that are the largest land-dwellers on Earth. Less familiar are forest elephants, smaller but still gigantic creatures that live in African rain forests. Difficult to study because of their densely vegetated habitat, Loxodonta cyclotis still roam freely where they evolved. But poaching and development threaten their survival. The Elephant Listening Project at Cornell University has studied the animals since 1999 by electronically eavesdropping on their complex vocalizations and, when possible, by observing their behavior. The project is now using heat-sensitive cameras to document what transpires when these elephants congregate in forest clearings at night. The rare recordings raise new questions about the elusive animals, as program director Peter Wrege explained to American Scientist contributing editor Catherine Clabby.

Why is so little known about the behavior of forest elephants?

These elephants live in dense rain forests of the Congo Basin, where it is extremely difficult to observe them visually. There are few roads, and it would be both difficult and dangerous to try to follow individuals on the ground through this habitat. In the absence of any strong evidence, we assume that their social system is structured much like that of the savannah species, but this is something the Elephant Listening Project is trying to confirm.

What inspired your team to try thermal imaging?

Acoustic monitoring has allowed us to study elephant behavior, without bias, over 24-hour cycles. Their activity cycle is nearly equally distributed between day and night, but they prefer to enter forest clearings at night. This is where we can observe the elephants directly. We suspect that different types of interactions occur at night because the types of calls differ then. But we have only the beginnings of an understanding of what the acoustic signals mean.
We need to investigate this with visual observation. Also, important behaviors may not have identifying sounds associated with them, and we need to know what these are.

What draws elephants to these clearings?

Clearings offer resources that appear critically important. Minerals dissolved in water that percolates up through underlying rock are a major attraction. Individuals will spend many hours over several days drinking from pits they dig to access water. But clearings also appear to be important for social interactions, including socializing among family subunits; for the development and confirmation of dominance relationships; and for reproduction.

Did filming at night produce any surprises?

One of the biggest surprises was the beauty. You see dozens of elusive elephants scattered like hot coals across a cool plain that is surrounded by forest trees radiating the heat they absorbed during the day. Scientifically, it was startling to see so much variation in individuals' external body temperatures.

This spring, the Elephant Listening Project of Cornell University filmed forest elephants at night with heat-sensitive cameras, a first in the study of these animals. Using bright lights to record the elusive creatures with conventional cameras is not possible. In one interaction captured in the rain forest of the Central African Republic, sisters and a brother helped a newborn that the researchers call Valentine escape a hole.
American Scientist
Some disparity may be due to unknown differences in recent physical activity, but perhaps it also can tell us something about health. Interactions among their own species and with other species on the blackest of nights, when the elephants could not see, gave us insights into how they negotiate their environment using only their hearing and olfactory senses. And there appeared to be much more sexual behavior going on at night compared to the day, which we did not expect.

Did the filming prompt new research questions?

We saw that in some males the temporal gland, which becomes hypertrophied during the reproductive condition called musth, is hot compared to surrounding tissue, and its size is measurable. Could this effect be used to predict the onset of the musth condition? Is there any correlation between the relative temperature and size of this gland and reproductive success? Also, one of the most dramatic phenomena associated with reproduction among the elephants is a social contagion that we call the mating pandemonium. Following a mating attempt, many related and unrelated families show extreme excitement with trumpeting, rumbling and smelling. Vision can be ruled out when the mating occurs in pitch darkness, so what triggers these social interactions, and why?

Will these studies help protect forest elephants?

Forest elephant populations are decreasing at an alarming rate, falling more than 50 percent in the last nine years. This is because of ivory poaching. Habitat loss adds another layer of risk, but the bigger threat is the increased access to forests that the development of roads and other infrastructure provides to poachers. With acoustic monitoring we can remotely count the number of elephants and the frequency of gunshots at an increasing number of locations in Central Africa. Thermal imaging studies can help by increasing understanding of the elephants' ecology and behavior.
But this sort of information may come too slowly given the pressures these animals face. The immediate conservation value may be in attracting interest and wonder among people around the world.

To watch the Elephant Listening Project's thermal videos of forest elephants in their natural habitat at night, visit elephantlisteningproject.org/thermal.html. In Sightings, American Scientist publishes examples of innovative scientific imaging from diverse research fields.
Scientists’ Bookshelf
In a Class by Itself
Veit Elser
THE NATURE OF COMPUTATION. Cristopher Moore and Stephan Mertens. xviii + 985 pp. Oxford University Press, 2011. $90.
A YouTube video shows a smartphone, its camera aimed at a device made of Legos that it also controls, solving a Rubik's Cube. It's one of many such videos, and the smartphones in them do not solve only the standard 3 × 3 × 3 cube: One solves the 7 × 7 × 7 in 39 minutes, an impressive time. The phenomenon raises some questions—in addition to the obvious one of why anyone would undertake a project as crazy as this. How does the time taken by the robot to solve the cube increase with the size of the cube?
Does the smartphone's memory capacity, modest by today's standards, limit its ability to solve cubes? And then there's a host of more mundane questions, like how colors are distinguished and represented, how three-dimensional space is "imagined," and so on. It's easy to see why computer scientists are especially intrigued by the challenge posed by the actual puzzle, and perhaps less by the implementation of simple tasks, such as making the robot execute a specific move. To be sure, the latter is not without its technical challenges, and it is mostly in this domain that computer-powered technology dazzles us. But it is in the details that lead to the solution—the planning of moves, the formulation of strategies, what we imagine as the willingness to prevail in seemingly hopeless circumstances—that the actions of the automaton command our respect.

A chapter of The Nature of Computation on interactive proofs is framed as a conversation between the wizard Merlin and King Arthur. To illustrate, the authors include this engraving by Gustave Doré, Merlin and Vivien, which was originally made for Alfred, Lord Tennyson's long poem Idylls of the King, published in the late 1800s. In the authors' telling, Merlin is "too busy" to talk with Arthur, but in the original, being busy is the least of Merlin's problems: Vivien has used a magic spell to imprison him in the trunk of the oak tree. From The Nature of Computation.

Despite its name, theoretical computer science has very little to do with actual electronic computers. The notion of an algorithm, a procedure for arriving at a solution by a sequence of elementary steps, was familiar even to the ancient Greeks. Euclid's algorithm for finding the greatest common divisor of two integers is still in use today. And although much of current computer science is devoted to finding efficient algorithms, that accounts for only part of the subject.

The field has a deeper and even philosophical half that the average technology consumer is unaware of. This branch began in the 1930s when Alan Turing decided it was interesting to ask what could be computed in principle, efficiency be damned. When time and memory capacity are treated as abstract commodities, computer science addresses questions at the very foundations of mathematics. For example, Turing showed that David Hilbert's famous Entscheidungsproblem ("decision problem") was "undecidable"—deciding whether a proof exists for any given mathematical statement cannot be done in a finite amount of time by any algorithm.

Perhaps the premier unresolved mathematics question of the present day is the P versus NP problem. The letters refer to "complexity classes," broad characterizations of computational difficulty. Problems in the class P can be solved efficiently, while for those in NP, we know that checking a candidate solution is easy.
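Euclid's algorithm, mentioned above, is short enough to sketch in a few lines of Python (the sample numbers are arbitrary choices for illustration):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) with
    # (b, a mod b); the remainder shrinks at each step, so the loop
    # terminates, and the greatest common divisor is preserved.
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21
```

Each step costs one division, so the answer arrives after only a handful of operations even for large inputs, which is exactly the kind of efficiency question the field studies.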
The prevailing wisdom is that there are problems in the class NP that are not in P—that problems exist for which it's easy to check a given solution but that are very hard to solve. The Clay Mathematics Institute is offering a $1 million prize to anyone who can decide whether this is the case.

In The Nature of Computation, Cristopher Moore and Stephan Mertens have produced one of the most successful attempts to capture the broad scope and intellectual depth of theoretical computer science as it is practiced today. In the preface we are told that this monumental project of almost 1,000 pages was launched as an effort to explain complexity classes to physicists. Physicists have contributed to computer science in two ways: through applying the methods of statistical mechanics to computation problems and, more recently, through the introduction of quantum mechanics to models of computation. Had Moore and Mertens followed their original plan, the outcome would have been a guide for physicists, written in physicists' language, to the arcane world invented by computer scientists. As it happens, the book instead took the form of a comprehensive and very readable textbook on computer science. Although some of the later parts of The Nature of Computation feature the physics-inspired work of the two physics-trained authors, these sections do not take center stage. Rather, what shines from every page is that Moore and Mertens are crazy about computer science and have gone all out to share their fondness for the subject.

Some reviewers have commented that one of the book's few weaknesses is that it sometimes does not present enough technical detail to serve as a rigorous textbook. I disagree. Details are omitted only when they would intrude on the clear exposition of the main theme, and even then the authors are careful to provide accessible supporting material in the form of illustrations and plausibility arguments. Certainly, wherever computer science makes its most creative displays, such as when reducing one problem type to a seemingly very different one, the material is presented with complete rigor. The book is also not short on more technical topics that, although they are cornerstones of practical computation, receive only passing reference in many textbooks. For example, I was delighted to see a beautiful chapter on linear programming, a geometric computing paradigm.
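The asymmetry described above (easy to check, apparently hard to solve) can be made concrete with subset sum, a standard NP-complete problem. The sketch below and its numbers are illustrative, not drawn from the book:

```python
from itertools import combinations

def verify(numbers, target, candidate):
    # Checking a proposed subset takes a single pass: linear time.
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve(numbers, target):
    # The obvious search tries every subset: up to 2**n of them.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
answer = solve(nums, 4818)         # 4818 = 493 + 961 + 1598 + 1766
print(verify(nums, 4818, answer))  # prints True
```

With ten numbers the brute-force search is instant, but each added number doubles the work, while verification stays cheap; nobody knows whether the exponential search can be avoided in general.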
Instructors will be pleased by the easy exercises sandwiched between paragraphs, designed to stimulate comprehension, in addition to the wealth of engaging and more challenging problems at the end of each chapter. There are extensive notes and reference material, also at the ends of the chapters, conveniently notated with a special symbol in the margins of the main text.
Also Reviewed in This Issue

420. WHAT A PLANT KNOWS: A Field Guide to the Senses. By Daniel Chamovitz. Reviewed by Andrea E. Wills. Plants' ability to sense and respond to their surrounding environment is stranger and more surprising than one might think, and Chamovitz recounts the stories of scientists' discoveries in plant biology with wit and charm, says Wills.

422. AMERICAN GEORGICS: Writings on Farming, Culture, and the Land. Edited by Edwin C. Hagenstein, Sara M. Gregg, and Brian Donahue. Reviewed by Christine Casson. The United States has always embodied the tension between the ideals of agrarianism and industrialism, says Casson, and this book provides a compelling history of that tension.

424. LEGALLY POISONED: How the Law Puts Us at Risk from Toxicants. By Carl F. Cranor. Reviewed by Emily Monosson. Cranor notes that it's not enough for individual citizens to try to avoid chemicals that are known to be toxic; to offer substantive protection, legislation must be improved.

426. GRAND PURSUIT: The Story of Economic Genius. By Sylvia Nasar. Reviewed by Brian Hayes. This work is essentially a biography of economics, Hayes writes. Nasar reveals the history and the nature of the field through captivating portraits of economists.

428. NANOVIEWS. The Rocks Don't Lie. By David R. Montgomery. Reviewed by David Schoonmaker. A Field Guide to Radiation. By Wayne Biddle. Reviewed by Fenella Saunders.
Even if you are not a student or instructor of computer science, you should consider buying copies of this book for your friends' coffee tables. The Nature of Computation is one of those books you can open to a random page and find something amazing, surprising and, often, very funny. From the generous selection of eclectic quotations and the gorgeous illustrations, it's clear the authors had fun writing it. So how does one illustrate a chapter about a very abstract topic called "interactive proofs" that features mathematical conversations between Arthur and Merlin? For this Moore and Mertens have chosen some nice engravings, one of which shows Vivien (who, in the legend of King Arthur, enchanted Merlin and imprisoned him in a tree trunk), in the wizard's arms. The caption reads, "Merlin is too busy to have a conversation with Arthur. He will send a proof instead."

Veit Elser is a professor of physics at Cornell University. His interest in computation came about by accident when he discovered a method for analyzing x-ray diffraction data that, with a few modifications, also solves Sudoku puzzles.

BOTANY

Fortean Flora
Andrea E. Wills
WHAT A PLANT KNOWS: A Field Guide to the Senses. Daniel Chamovitz. viii + 177 pp. Scientific American / Farrar, Straus & Giroux, 2012. $23.

In grassy areas along the equator lives a tiny plant, Mimosa pudica, that has the captivating property of closing its leaves in response to touch. Rest a finger on one leaf, and that leaf and its neighbor will fold abruptly toward the stem. Brush your finger along the length of the stem and every pair of leaves will collapse in turn. For everyone who has wondered at Mimosa, the suddenly snapping Venus flytrap or the way a sunflower's head unerringly turns to follow the sun, Daniel Chamovitz has written the perfect book. What a Plant Knows: A Field Guide to the Senses examines the parallels and differences between plant senses and human senses by first considering how we interpret sensory inputs and then exploring how plants respond to similar inputs. Each chapter covers one sense—sight, smell, touch and hearing are covered, along with "How a Plant Knows Where It Is" and "What a Plant Remembers"—and each examines a wide taxonomical range of flora and a complementary historical range of experiments. In the book's introduction, Chamovitz is careful to clarify his intentions in using language that might be considered anthropomorphic to explore the world of plants:

    When I explore what a plant sees or smells, I am not claiming that plants have eyes or noses (or a brain that colors all sensory input with emotion). But I believe this terminology will help challenge us to think in new ways about sight, smell, what a plant is, and ultimately what we are.

In 1901, Indian physicist and plant physiologist Jagadish Chandra Bose described a solution to the puzzle of Mimosa pudica, a plant native to South and Central America whose leaves quickly fold inward toward their stems in response to touch. Bose's paper on the subject was rejected by the Proceedings of the Royal Society of London, but his hypothesis, described in What a Plant Knows, has since been proven correct. Illustration by Paul Hermann Wilhelm Taubert, Natürliche Pflanzenfamilien, 1891. Leipzig: Engelmann. From What a Plant Knows.

A plant biologist who has held positions at Columbia and Yale and is now director of the Manna Center for Plant Biosciences at Tel Aviv University, Chamovitz is well qualified to present an archive of research on plant perception. Happily, he also has narrative dexterity: The book is delightful and a fast read. Science, as we all know but sometimes forget, is ultimately driven by curiosity—about what we are, about what other creatures are. That curiosity is evoked repeatedly in What a Plant Knows. When Chamovitz introduces the baffling way that irises appear to "remember" what color of light they last saw or how the parasitic plant dodder (Cuscuta pentagona) can "smell" whether it's next to a tomato (one of its preferred hosts) or a stalk of wheat, it's hard not to share his enthusiasm for unraveling these mysteries. He elaborates on elegant early experiments in plant biology as well as modern-day discoveries, providing a window on the work of the many scientists who clarified the mechanisms driving these perplexing phenomena. The latter include the use of genetic mutants of the botanical workhorse Arabidopsis to unveil 11 different photoreceptors that allow the plant to discern, among other things, whether it was last exposed to the red light present in the morning or the far-red light present in the evening. Finely tuned gas chromatography has revealed
how dodder differentiates between the attractive chemicals in eau de tomato and the repulsive ones in eau de wheat.

In many of the examples Chamovitz describes, insights gained from simple but powerful experiments drive the story. Consider proprioception, the sense of the relative position of our body parts in space that allows us to complete coordinated movements without tripping over our own feet. Do plants have something like proprioception? Certainly, says Chamovitz, but for plants, it's about the position of their parts relative to gravity. This sense has been the subject of experimental study for more than two centuries. In the early 1800s, Thomas Andrew Knight set out to test an observation made 50 years earlier by Henri-Louis Duhamel du Monceau: that roots had a propensity to grow down (positive gravitropism) and shoots to grow up (negative gravitropism). With an elegant experiment in artificial gravity, Knight manipulated these tendencies. He arranged bean seedlings in various positions on a circular wooden plate: some with their roots pointing toward the center of the wheel, some with their roots pointing toward the rim. The plate was attached to a water wheel turned by a stream; it spun at 150 revolutions per minute. The spinning created a local reactive centrifugal force that was stronger than gravity. No matter how the plants were initially positioned, they grew in the same way: root toward the new perceived gravitational force—the rim—and shoot away from it.

Charles Darwin, intrigued by this phenomenon, turned his now-underappreciated botanical inclinations toward the problem some years later. Working with his son Francis, he sliced varying lengths off of the roots of bean, pea and cucumber seedlings and then laid the seedlings horizontally on damp soil. The Darwins noted that cutting off 0.5 mm or more of a seedling's root tip resulted in horizontal root growth but no downward growth.
In another experiment, they placed seedlings sideways on pins and cauterized their root tips with silver nitrate. This caused the roots to cease growing downward. Uncauterized seedlings' roots reliably grew down. These and other experiments suggested that the root's ability to sense gravity resides in the root tip. But the mystery of how shoots orient themselves away from gravity remained.

In the early 19th century, on the question of whether plants' growth is affected by gravity, the jury was out. "The hypothesis does not appear to have been strengthened by any facts," wrote horticulturalist and aristocrat Thomas Andrew Knight. So he devised an experiment to test the hypothesis (left). After several days of being spun around, the shoots of bean seedlings fastened to a plate unerringly grew out, away from the plate's center, and the roots grew toward the center (right). The plants, he concluded, responded to this "artificial gravity" much as they would respond to natural gravity. Illustration by Varda Wexler, from What a Plant Knows.

Chamovitz continues to follow the study of gravitropism through history, describing the adventures of plants in space and the insights gained from mutant plants that lack the ability to sense gravity. These experiments, he writes, have revealed a surprising parallel between statoliths, structures in the root cap and endodermis of plants that allow them to sense gravity, and otoliths, subcellular structures in the human inner ear that help us maintain our balance.

Even as he relates the intricate mechanisms of plant perception, Chamovitz maintains a breezy and accessible style with apt and playful analogies (as when he likens experiments that change plants' cycle of day and night to jet lag in humans). Although biologists will likely be familiar with some of the examples he offers, his description of the experimental paths that have led to various discoveries will entertain experts and newcomers to the subject alike. He shows remarkable restraint against the temptation to lead with detailed diagrams of plant anatomy, instead introducing apical meristems or cotyledons when an anecdote relies on them. The book includes a thorough bibliography for those who want additional detail about particular concepts or experiments. And each chapter is enhanced by black-and-white illustrations and frequent links to online videos of things like circumnutating seedling tips and dodder's python-like death grip.
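A back-of-the-envelope calculation suggests why Knight's water-wheel rig could overwhelm natural gravity. The account above gives only the rotation rate (150 revolutions per minute); the 15-centimeter plate radius below is an assumed value for illustration:

```python
import math

rpm = 150.0        # rotation rate reported above
radius_m = 0.15    # assumed plate radius; not given in the account

omega = 2 * math.pi * rpm / 60    # angular speed, rad/s
accel = omega ** 2 * radius_m     # centripetal acceleration, m/s^2

print(f"{accel:.1f} m/s^2, roughly {accel / 9.81:.1f} times gravity")
```

Even at this modest radius, a seedling at the rim would feel several times the pull of natural gravity, consistent with roots reorienting toward the rim no matter how the plants were placed.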
The reach of What a Plant Knows extends beyond plant biology: Plant senses become a surprisingly successful vehicle for the description and explanation of a broad suite of genetic concepts. Forward and reverse genetic screens; mutagenesis techniques; necessity and sufficiency; and the vagaries of gene naming (as for the scarecrow and werewolf genes) are introduced with clarity. Even epigenetics, which epigeneticists themselves often struggle to define, is covered with deft ease in the context of heritable stress responses.

Knowing more about what a plant knows, readers may find themselves struck by a sudden awareness that the unprepossessing houseplants in the living room have been quietly taking note of which windows are open, when the light bulbs were last changed and how ripe the bananas are. The book also offers an enticement to rediscover the Mimosa plant armed with the secret (which I won't spoil) of how it does its collapsing trick. But perhaps most satisfyingly, Chamovitz conveys the sense that our knowledge of the range and limitations of plant perception is a work in progress and that much remains to be discovered.

Andrea Wills is a postdoctoral researcher in the Department of Genetics at the Stanford School of Medicine. Her professional work focuses on vertebrate embryogenesis, but she moonlights as a plant biology enthusiast. She blogs at http://abouquetfrommendel.wordpress.com.
AGRICULTURE
Making the Land Our Own
Christine Casson
AMERICAN GEORGICS: Writings on Farming, Culture, and the Land. Edwin C. Hagenstein, Sara M. Gregg, and Brian Donahue, editors. xx + 406 pp. Yale University Press, 2011. $35.
Forty-eight years ago in his groundbreaking book, The Machine in the Garden: Technology and the Pastoral Ideal in America, historian Leo Marx cited Thomas Jefferson to illuminate the tension between farming and industry that has characterized land use in the United States for more than two centuries. Jefferson's wish, evident in the frequently quoted "Query XIX" from his 1787 Notes on the State of Virginia, was that the United States remain a nation of citizen farmers. Raw materials, he wrote,
should be sent overseas to Europe for manufacture. As Marx noted, Jefferson believed otherwise 30 years later. In a letter to Benjamin Austin, a writer for Boston's Independent Chronicle, Jefferson acknowledged that in light of America's uneasy relationship with Britain and France as a result of the Reign of Terror and the War of 1812, "manufactures are now as necessary to our independence as to our comfort."

American Georgics, a fine anthology and the most recent addition to the Agrarian Studies Series from Yale University Press, reveals that in the interval since Jefferson concluded that U.S. citizens needed to make industry integral to the nation's fabric, these two antithetical impulses—agrarian and industrial—have remained firmly in place.

Scott Nearing, shown in this undated photograph, and Helen Nearing moved from New York City to rural Vermont in 1932, planning to live self-sufficiently on the land. In addition to farming "without the use of animals or animal products or chemicalized fertilizers," the Nearings wrote and lectured about their lifestyle. Their 1954 book Living the Good Life: How to Live Sanely and Simply in a Troubled World (from which the quote above is taken) became required reading for back-to-the-landers of the 1960s and 70s. "We made ourselves independent of the labor market and largely independent of the commodity markets," they wrote. "In short, we had an economic unit which depression could affect but little and which could survive the gradual dissolution of the United States economy." From American Georgics.

The editors, Edwin C. Hagenstein, Sara M. Gregg and Brian Donahue, have divided the volume into seven chronological chapters, each preceded by an introductory essay. The first, "Shaping the Agrarian Republic, 1780–1825," contains early discussion about the role of the farmer and farming in American culture. It includes important excerpts from J. Hector St. John de Crèvecoeur's Letters from an American Farmer, along with relevant commentary from James Madison and Alexander Hamilton, among others. Is farming an ennobling and civilizing endeavor, as de Crèvecoeur, Madison and Jefferson claimed? Does it lead to a more virtuous life than industry? Should farms remain manageably small or, as Hamilton advised, take advantage of new machinery to increase size and enhance production? The chapter also introduces agrarianism as manifested in the South and considers the ways in which slavery complicates both practical ideas and philosophical ideals about husbandry.

Subsequent chapters show how these early concerns and debates have evolved. We learn of the adaptation of farming techniques in America in light of the Industrial Revolution, the passage of the Homestead Act, the increased specialization of farms and farming jobs, and the closing of the frontier. In addition, we come to understand the effect of the new railway system, which engendered more efficient transport of crops even as it left farmers completely dependent on the vagaries of its pricing, often exorbitant due to lack of competition. The cultural foundations for agrarianism are represented by institutions such as the People's Party and the Southern Farmers' Alliance, active in the late 1800s, and Friends of the Land, begun in the 1940s and championed by Aldo Leopold.
Two literary movements that focused on land use also contributed to the agrarian ideal: American Romanticism in the mid-1850s and, in the next century, the Southern Agrarians, a group of writers best known for their contributions to the essay collection and manifesto I'll Take My Stand: The South and the Agrarian Tradition, published in 1930.

The final chapter of American Georgics, "Back to the Land Again, 1940–Present," opens with this image taken by photographer Terry Evans in Matfield Green, Kansas, in 1994, titled Carl with Twin Calf. "The second half of the twentieth century saw not only a steep decline in the number of American farmers," the editors write, ". . . but the almost complete collapse of our rural economy and of traditional agrarian culture." From American Georgics.

The final chapter brings us to the present and includes selections from such luminaries as Aldo Leopold, Wes Jackson—who also wrote the foreword to this volume—and Wendell Berry. The editors have likewise reprinted some excellent, lesser-known excerpts, such as George Perkins Marsh's 1847 "Address to the Agricultural Society of Rutland County," Wilson Flagg's 1859 "Agricultural Progress" and Liberty Hyde Bailey's 1915 "The Holy Earth"—a call for a moral relation to the land made well before Leopold proffered his "land ethic" in A Sand County Almanac, published in 1949.

American Georgics makes clear that the issues surrounding farming have changed more in degree than in kind. One wonders, then, whether alternative attitudes toward the land will ever be realized in the United States. The editors observe in the book's conclusion that "A renewed appreciation for connection with soil and community has captured the interest of a growing and influential segment of American food culture." This
"new wave of agrarianism" is identified as comprising mostly those who can better afford the higher prices of farmers' markets and organic produce. Whether these neoagrarians and the organizations they have founded will be willing to make the long-term sacrifices necessary to bring wholesome, locally grown food and sustainable farming practices to those who exist at the economic margins of society remains to be seen.

American Georgics clearly is directed towards this more affluent audience that is supportive of the agrarian movement, which in its present incarnation advocates for smaller farms that are intimately connected to the welfare of the land and the immediate community. The book includes only a few opposing arguments, such as an excerpt from Edwin G. Nourse, who argued in 1919 that a better standard of living for all Americans could only be achieved once "a considerable fraction of our population" was "freed from the soil" to pursue other callings, and one from
Earl L. Butz, who in 1960 believed the nation's highest priority should be an increased standard of living, based on efficient food production and marketing, for all citizens regardless of economic standing. These interruptions provide welcome perspective in a book that comes uncomfortably close to promulgating a single point of view.

In a similar vein, the editors' commentary is generally excellent, profoundly enlarging the history of agrarianism and the particulars of each selection. But in several places, I felt that the discussion could better encompass divergent points of view. For example, the book presents American Romanticism as antithetical to agrarianism because, as the editors write, "In turning from the ugliness of industrial civilization toward an idealized wild nature, Romantics left the pastoral 'middle landscape' in an uncertain position." This perspective overlooks the importance of these mid-19th-century writers, whose sense of nature's sublimity helped alert the public to the importance of preservation and conservation. Although preservation is, as the editors indicate, not usually in the best interests of the agrarian, the Romantics' regard for an unaltered landscape was prescient. Sixty years later Bailey observed, "We have been obsessed of the passion to cover everything at once, to skin the earth . . . even when there was no necessity for so doing." Similarly, Leopold advocated for "restrained use" that would include an "interspersion of land uses, a certain pepper-and-salt pattern in the warp and woof of the land-use fabric," where the "fields and pastures . . . are a mixture of wild and tame attributes, all built on a foundation of good health."

The editors also make light of Thoreau, who, they note, "put in only half a day with his hoe" so that he could have time to walk, read and write. This stance is odd; elsewhere in the book, moderation is repeatedly called for and is celebrated as integral to proper land use and the health of communal culture. That the Romantics sought spiritual and aesthetic good health makes their program no less important than the agrarian emphasis on the more literal good health of the flesh. It deserves commentary as admiring as the book affords the Southern Agrarians, who were similarly idealistic in their celebration of Southern farm life in the early part of the 20th century.
I was surprised as well to find only brief mention of Leo Marx, despite the fact that the third chapter of this book, "The Machine in the Garden: The Rise of American Romanticism," takes its title from his exceptional work of cultural and literary criticism. "The machine in the garden" is a reference to the sound of the steam locomotive Thoreau hears at Walden Pond—a sound he is intrigued by yet eschews. This conceit, established by Marx, is emblematic of the conflict between agrarianism and industrial progress that pervades the book. The editors give Marx less credit than he deserves because they confuse the pastoral—a literary idea with its source in the first-century-B.C.E. Roman poet Virgil's Eclogues—with agrarianism, which, like Virgil's Georgics, from which this volume takes its title, has a firm grounding in the practical work of farming. Marx makes this distinction and brilliantly reveals why Jefferson's ideal of the citizen farmer, mentioned at the beginning of this essay and inspired by the utopian values of Virgil's Arcadia, was untenable.

In addition, the book should have included endnotes. I was intrigued by a comment made by Abraham Lincoln and another by T. S. Eliot, to name just two; both quotations were included in the editors'
introductions to the chapters, but no documentation of primary sources was provided.

Finally, the editors should have paid more attention to the expertise of Native cultures. The book offers little recognition that the agrarian ideals of crop rotation and crop diversity coincide with Native ideas about land use, or that most of the Native population of the United States was relocated to reservation land that provided scarce opportunity for successful farming. For example, the authors would have done well to include an excerpt from "Creations," one of many fine essays by Linda Hogan. She writes:

    Without deep reflection, we have taken on the story of endings, assumed the story of extinction. . . . We need new stories, new terms and conditions that are relevant to a love of land, a new narrative that would imagine another way. . . . Indian people must not be the only ones who remember agreement with the land, the sacred pact to honor and care for the life that, in turn, provides for us.

Hogan's words are well in line with the aspirations of contemporary agrarians. Since the editors' intention is to reinforce and expand the audience for responsible land use, it is imperative that they spread their net as widely as they can and forge alliances with those Native cultures that have long been dismissed as well as with others who support land conservation.

These criticisms in no way invalidate the importance of this volume. American Georgics is a gem, chock-full of essays and excerpts that are invaluable to an understanding of farming and conservation, and driven by a vision of what landscape and husbandry might become in the United States if we as a nation could think more holistically about what would most benefit us. It is also a wonderful resource for teaching and a step in the direction espoused by Leopold, Jackson and others in this anthology: that through dissemination of knowledge and better education, we might be able to redirect our culture toward a fuller appreciation of our fertile, fragile planet.

Christine Casson is Scholar and Writer in Residence at Emerson College in Boston. She is the author of After the First World, a book of poems (Star Cloud Press, 2008). She has published critical essays on the work of Leslie Marmon Silko and the poetry and environmental essays of Linda Hogan and is currently writing a book of nonfiction that explores the relationship between trauma and memory.

TOXICOLOGY

Chemical Innocence?
Emily Monosson
LEGALLY POISONED: How the Law Puts Us at Risk from Toxicants. Carl F. Cranor. xii + 315 pp. Harvard University Press, 2011. $35.

"PCBs are one of the best kept secrets," a chemist once told me. This was the 1980s, and he made his livelihood extracting polychlorinated biphenyls, a class of synthetic chemicals whose production in the United States was banned in 1979, from fish tissues and sediments. What he meant was that although we hadn't yet fully understood the toxicology of these chemicals, there was plenty of concern about widespread contamination: enough to keep cadres of federal and private-industry chemists employed for years studying the PCBs which had made their way from factories into air and water and eventually into fish, birds, whales and humans.

At the time, PCB analyses were about $500 a pop, and tests for dioxins (PCBs' more nefarious cousin) cost more than $1,000. Add to this all the dollars that have been spent funding toxicologists and other health-related scientists, engineers and cleanup experts—and the more difficult-to-measure costs associated with health effects. For the past 30 years or more, our collective experience with these synthetic pollutants has been costly, and we—the public—are too often the ones footing the bill. As Carl F. Cranor describes them in Legally Poisoned, these costs represent the externalities—costs not fully reflected in the market price of a product—so often associated with industrial chemicals and our ongoing reliance on postmarket environmental-health laws to protect us.

Cranor explains that many important chemicals, including drugs, pesticides and food additives, are regulated by premarket testing, a flawed but relatively effective approach in which, as the phrase implies, toxicity testing is required before commercialization. But far too many chemicals, such as PCBs, bisphenol A (BPA) and polybrominated flame retardants, are subject only to postmarket laws. These chemicals are commonly referred to as "innocent until proven guilty," and they are the chemicals that all too often invade our most private spaces—our bodies.

The market is awash with books about toxic bodies, babies, rubber ducks, homes and workplaces (not to mention in-laws, men, faith and assets).
American Scientist, Volume 100
Cranor, a legal and moral philosopher and a faculty member in the University of California, Riverside’s graduate environmental toxicology program, cannot resist reiterating how contaminated we all are. Nonetheless, Legally Poisoned offers a refreshingly different take on toxic chemicals in our lives, explaining how this situation came to be and what we might do about it. That we are all involuntarily contaminated by chemicals used in consumer products or released by industry is largely the result of watered-down legislation, particularly the postmarket variety, which has let myriad toxic cats out of the bag—and left us not only holding the bag but trying to recapture all those cats while suffering the consequences of any number of diseases they may have spread. Cranor presents this case in chapters 2 and 4, “Nowhere to Hide,” and “Caveat Parens: A Nation at Risk from Contaminants,” respectively. Especially given the many articles, books and websites that already exist to spread this information, I found these to be the book’s weakest chapters. They rely very heavily on quoted material. In addition, I noted minor inaccuracies in Cranor’s toxicology. He suggests, for example, that biomagnification refers to the preferential retention of toxic congeners, or members of the chemical family, of PCBs. (Industry typically used PCB mixtures consisting of different proportions of up to 209 different chlorinated biphenyl congeners.) It may be splitting hairs to point this out, but biomagnification refers simply to the passing of chemicals upward through trophic levels, resulting in increased concentration of a particular chemical—whether it’s PCBs or a metal such as mercury. 
Although the process certainly contributes to the increased toxicity of PCB congeners as they concentrate up the food web, an important underlying mechanism is the preferential metabolism of various less-toxic congeners and, resulting from this, the retention of the more-toxic congeners that Cranor points out. Because terms like biomagnification, bioaccumulation and bioconcentration are so often confused to begin with, it’s important in a book like this to make sure they are accurately defined. Additionally, the book includes a bit of redundancy (both within itself and with what has already been published), and at times I wondered whether a toxicologist had reviewed these chapters.
If you are well versed in toxicology or have read any one of the many recent books on the subject, you may want to skim and move along. Beyond chapters 2 and 4, the book improves a great deal. Chapter 3, “Discovering Disease, Dysfunction, and Death by Molecules,” provides a clearly written introduction to the different ways in which scientists gather information about the effects of chemicals on humans, from case reports to epidemiological studies, and the strengths and weaknesses of each type of study. It also includes an articulate discussion of animal studies. I teach introductory toxicology classes to nonmajors, and this is the kind of writing on the subject I’ve been looking for: It’s not overly technical but is detailed enough to explain why linking cause and effect in humans is so difficult— and why, despite our discomfort with animal studies, as long as we continue to develop chemicals that the public will breathe, drink or otherwise ingest, and until we find clearly better alternatives, we are stuck with those studies. Cranor contends that, although we may morally reject human testing, in the context of postmarket chemical regulation, that is essentially what we are doing: We are involuntarily offering ourselves and our children as guinea pigs. 
This key point is explored in chapter 5, “Reckless Nation: How Existing Laws Fail to Protect Children,” and in the book’s final chapter, “What Kind of World Do We Want to Create?” These chapters reveal the gaping holes in the legal meshwork, which have resulted in a situation that, according to the author, not only “creates temptations for companies not to test their products,” but also rewards them for “rais[ing] doubt about the science that shows the toxicity of a product.” His take on the situation is unequivocal in its concern:

Citizens are now experimental subjects for the toxicity of products in our chemical society, an outcome that 1970s congressional and presidential committees knew was possible and hoped to avoid. However, in the end Congress failed to enact legislation to prevent it. Moreover, companies have a legal right to contaminate the public until there is sufficient science for a risk assessment and sufficient political will in a regulatory agency to reduce the risks.
As a toxicologist who shies away from legalese, I found these chapters, along with chapter 6, “A More Prudent Approach to Reduce Toxic Invasions,” most informative. They provide a readable (if not compelling) overview of the current framework of environmental health laws, an analysis of those laws’ effectiveness (or lack thereof) and possible solutions. Much of the book focuses in one way or another on rejecting our reliance on postmarket law and developing stronger and more universal premarket controls. Examples of such controls that Cranor highlights include the European Union’s legislative approach (Regulation on Registration, Evaluation, Authorisation, and Restriction of Chemicals, or REACH); the Massachusetts law (the Toxics Use Reduction Act) mandating that companies using large amounts of chemicals plan for pollution prevention, which is facilitated by the work of the Toxics Use Reduction Institute at the University of Massachusetts, Lowell; and the efforts of New Jersey Senator Frank Lautenberg to make major amendments to the sweeping and ineffective Toxic Substances Control Act through the Safe Chemicals Act of 2010 (which, as of this writing, has yet to pass).

Cranor not only deals with captivating and current issues but also explains how we got into our current situation and, more importantly, how, given the public and political will, we might get out. This is an important point. I’ve been asked by students how, as an environmental toxicologist, I can stand to teach such gloom and doom. “Doesn’t it get depressing?” they ask. “No,” I say, “because you can do something about it.” But really I ought to be saying, “We can do something about it.” We can take meaningful action not by simply trying to avoid contact with toxic chemicals (or “self-help,” as Cranor puts it, which is essentially futile) but by prevention: speaking up and demanding changes to our chemical control laws, from premarket to postmarket, for all (or nearly all) chemicals.
Emily Monosson is an independent environmental toxicologist and visiting lecturer at Mt. Holyoke College. She is the author of Evolution in a Toxic World: How Life Responds to Chemical Threats (Island Press, 2012) and editor of Motherhood: The Elephant in the Laboratory (Cornell University Press, 2008).
ECONOMICS
A Portrait of the Economy

Brian Hayes

GRAND PURSUIT: The Story of Economic Genius. Sylvia Nasar. xvi + 558 pp. Simon & Schuster, 2011. $35.
Histories of economics tend to start with Adam Smith and his Wealth of Nations, but Sylvia Nasar leads off with Charles Dickens and A Christmas Carol. It’s an unusual choice, but an effective and appropriate introduction to the story she wants to tell in Grand Pursuit: The Story of Economic Genius. Dickens shows us the redemption of Ebenezer Scrooge—his conversion from pinchpenny to beneficent bon vivant. Nasar aims to redeem economics from its intellectual roots as a science of scarcity and avarice and present it as a tool for improving the human condition.

Nasar is the author of A Beautiful Mind, a biography of the brilliant but troubled mathematician John Nash. Biography, rather than economics, is the true genre of this new book as well. Economic theories and principles are sketched when necessary, but economists’ lives are rendered in full color and lavish detail. The book’s longest chapter is given to Beatrice Webb and, by extension, her husband Sidney Webb, the founders of the London School of Economics. We follow the wealthy young Beatrice from Gloucester to London for her coming out; we learn about her long and futile infatuation with Joseph Chamberlain (father of Neville) and her sparring matches with philosopher and evolutionist Herbert Spencer at the family dinner table; there’s a bit of upstairs–downstairs drama when Beatrice becomes close with a servant, Martha Jackson, who she later learns is actually a poor relation. Then comes her blossoming interest in social justice, which leads in turn to a great adventure: a few days spent incognito working as a seamstress in an East
End sweatshop. And all this comes about before Sidney Webb arrives on the scene. (“Beatrice thought Sidney looked like a cross between a London cardsharp and a German professor,” Nasar writes.)

In writing biographies of economists, Nasar inevitably invites comparison with Robert L. Heilbroner’s book The Worldly Philosophers: The Lives, Times and Ideas of the Great Economic Thinkers, first published in 1953 and still going strong after seven editions. (Heilbroner died in 2005.) For millions of readers, including me, Heilbroner provided a first introduction to economic thought. As I read Grand Pursuit, I was moved to search out my old copy of The Worldly Philosophers (fourth edition, 1972) and compare the two authors’ portraits of Alfred Marshall, a great synthesizer of 19th-century ideas and mentor to the next generation. Heilbroner wrote:

Merely to look at Alfred Marshall’s portrait is already to see the stereotype of the teacher: white moustache, white wispy hair, kind bright eyes—an eminently professorial countenance. . . . Marshall . . . was pre-eminently the product of a university. . . . His life, his point of view—and inevitably his economics—smacked of the quietude and refinement of the Cambridge setting.

And here is how Nasar brings Marshall onto the stage:
John Maynard Keynes (center) converses with biographer Lytton Strachey (right) as philosopher Bertrand Russell looks on. All three were members of the Bloomsbury Group, the close-knit group of London artists and intellectuals active in the early 1900s that also included writers Virginia Woolf and E. M. Forster. Keynes, Nasar writes, “was rather homely and quite rude. He made up for these shortcomings with cleverness, a charming voice, and efficiency in practical matters.” From Grand Pursuit.
A young man with delicate features, silky blond hair, and shining blue eyes boarded the Glasgow-bound Great Northern Railway at London’s Euston Station. It was early June 1867. He was carrying only a walking stick and a rucksack crammed with books. His fellow passengers might have taken him for a curate or a schoolmaster on a mountaineering holiday. But when the train reached Manchester, the young man put his rucksack on, jumped down onto the platform, and disappeared in the crowd. Before resuming his journey north to the Scottish highlands, Alfred Marshall, a twenty-four-year-old mathematician and fellow of St. John’s College in Cambridge, spent hours walking through factory districts and the
surrounding slums “looking into the faces of the poorest people.” Nasar reveals something else that Heilbroner did not mention: Marshall, the son of a bank teller and the grandson of a butcher, grew up in a London slum “in the shadow of a tannery”; he reached the quietude and refinement of Cambridge only through scholarships.

In a photograph taken at Leonard and Virginia Woolf’s home, Keynes (right) stands with the painter Duncan Grant, another member of the Bloomsbury Group and, Nasar writes, “the great love of his youth.” From Grand Pursuit.

Placing the two authors’ accounts side by side, one begins to wonder if they are talking about the same Alfred Marshall. And that doubt persists when it comes to Marshall’s economic thinking. Heilbroner wrote: “One word can sum up the basic concern behind Marshall’s teaching—the word equilibrium. . . . Marshall was primarily interested in the self-adjusting, self-correcting nature of the economic world.” Nasar gives this account:

Marshall’s lectures focused on the central paradox of modern society: poverty amid plenty. He taught by posing a series of questions: Why hadn’t the Industrial Revolution freed the working class “from misery and vice”? How much improvement is possible under current social arrangements based on private property and competition? . . . He did not doubt that the chief cause of poverty was low wages, but what caused wages to be low? Radicals claimed that it was the rapacity of employers, while Malthusians argued that it was the moral failings of the poor. Marshall proposed a different answer: low productivity.

Which portrait is truer to the facts? I am inclined to duck that question by rejoicing that we have two fine books. But I can also gripe about both of them: Neither Heilbroner nor Nasar gives even a glimpse of the mathematical reasoning at the core of Marshall’s work.

Apart from the Webbs and Marshall, the major figures in Nasar’s narrative are John Maynard Keynes (who was Marshall’s student), the American economist Irving Fisher, and Joseph Schumpeter, an Austrian émigré who eventually wound up at Harvard (where he taught Robert Heilbroner). The supporting cast includes another Austrian, Friedrich von Hayek; Keynes’s student Joan Robinson; and two more Americans, Milton Friedman and Paul Samuelson. Also, as chronological bookends, there are portraits of Karl Marx (along with Friedrich Engels) and of Amartya Sen. Marx is given the role of buffoon: The champion of the proletariat maintains a pretentious suburban home so that his daughters can “establish themselves socially.” Sen is the saint, who escapes famine and poverty in Bengal but never turns his back on the poor.

Joan Robinson decided to study economics rather than history in order to understand the poverty she observed in England in the 1920s. She is shown here in a portrait by photography firm Ramsey and Muspratt. From Grand Pursuit.

Perhaps the most important character to emerge from this story is the economy itself. We’ve always had wealth and poverty, good times and hard times, but only with the industrial age did anyone think to look upon “the economy” as a distinct entity whose activities we could monitor and measure and perhaps control. It was the Great Depression that brought this notion to the fore, as Keynes and others argued for active intervention to nudge the economic system toward a different point of equilibrium. The idea is now commonplace. “It’s the economy, stupid,” was the watchword of a presidential campaign 20 years ago, and the slogan could serve just as well in the current political season. Indeed, it seems the economy is not just an entity but a personality—a rather needy, high-strung and often petulant little tyrant whose tantrums bring down governments and inflict misery on millions. We fret constantly about its health and moods; we debate its need for stimulus or restraint.

Nasar urges a much more upbeat view of this situation, arguing that progress over the past two centuries offers every reason for optimism. She deserves to have the last word:

Economic calamities—financial panics, hyperinflations, depressions, social conflicts and wars—have always triggered crises of confidence, but they have not come close to wiping out the cumulative gains in average living standards. . . . Since World War II, history has been dominated by the escape of more and more of the world’s population from abject poverty. . . . Remarkably, even the Great Recession of 2008 to 2009, the most severe economic crisis since the 1930s, did not reverse the prior gains in productivity and income.
Brian Hayes is senior writer for American Scientist. He is the author most recently of Group Theory in the Bedroom, and Other Mathematical Diversions (Hill and Wang, 2008).
Nanoviews
Bernard Hallet
A FIELD GUIDE TO RADIATION. Wayne Biddle. 239 pp. Penguin, $15.
THE ROCKS DON’T LIE: A Geologist Investigates Noah’s Flood. David R. Montgomery. W. W. Norton and Co., $26.95.
For most of the past 200 years, the expression “flood geology” has engendered something verging on contempt in many earth scientists. Yet prior to the groundbreaking ideas of James Hutton and John Playfair, the great inundation was the basis of most explanations for the land we see around us—topography, sediments, fossils, the miscellanea of geology. Geomorphologist David R. Montgomery casts a critical yet sympathetic eye on flood myths, finding substance for them in Tibet, the Philippines and elsewhere, while systematically disassembling the universality of Noah’s Flood. (The photograph above shows the spot in Tibet where an ancient glacial dam was breached.) The Rocks Don’t Lie traces the history of the field of geology through the thinking that progressively debunked the great-flood myth and left behind, temporarily, what would be resurrected 150 years later as Creationism.

Picking up a book with the subtitle A Geologist Investigates Noah’s Flood, I expected to learn geology and was not disappointed. The Rocks Don’t Lie intertwines geologic history and the author’s own field trips in an engrossing way. Montgomery offers a much richer story than I was taught as an undergraduate about the unconformity that Hutton found at Siccar Point, Scotland (which is featured on the book’s cover). I was not prepared, however, to be schooled on how the Bible has been interpreted over the past millennium. To offer just one example, the book recounts how John Calvin’s views of Noah’s Flood differed from those of Martin Luther. Luther turns out to be the literalist, stating that Moses “spoke properly and plainly, and neither allegorically nor figuratively.” Calvin took a more restrained view: He interpreted the Genesis story literally but did not imagine that the great flood was responsible for the topography around him or the fossils in the rocks of his beloved Swiss Alps.

True to his field, Montgomery also shows flashes of considerable wit—albeit usually at the expense of the Creationists. Visiting the Creation Museum in Petersburg, Kentucky, he discovers that evolution has actually occurred since the original “creation orchard,” as the museum terms it—but only among nonhuman creatures. The book’s extensive endnotes sometimes expand on points and sometimes document the sources of quotations. Following those references comes a substantive list of sources, which add to the opportunities for pursuing subjects further.

That’s just a taste of what’s in store for readers of this delightful volume. I came away far more enriched than I had expected to be.—David Schoonmaker
Although the title A Field Guide to Radiation may conjure up images of ecotourists searching Chernobyl or Fukushima for invisible quarry such as alpha particles and gamma rays, Wayne Biddle’s new book is instead an everyday guide to the radiation to which we are all constantly exposed. It consists of short, pithy essays laid out in alphabetical order, from “Absorbed Dose” to “Zirconium-93, -95.” The guide, says Biddle, “is not pro- or anti-radiation any more or less than a field guide to reptiles is pro- or anti-snake.” His goal is to sort out the personal implications of news reports and other data so regular people can make informed choices.

The essays contain plenty of historical and factual information; readers will learn where various radioactive elements come from and how much exposure is considered safe. But these facts are often accompanied by commentary, and in some cases Biddle seems to be shaking his head at past uses of radioactive material. In one of the book’s lengthier essays, he notes that up to the 1970s, thousands of infants were treated with radium for harmless skin blemishes, which led to increased cancer rates among these people. The world seemed to be giddy about radium for a time, experimenting with it as a cure for everything from mental disorders to hearing loss. Although Biddle is quick to point out that some “safer” forms of radiation do have legitimate medical uses, he advises patients to ask questions and monitor their exposures.

His tone for much of the book is cautionary and thoughtful, but sometimes he succumbs to downright silliness, as when, in the entry for the element yttrium (named after the Swedish village of Ytterby), he mysteriously exclaims “Ytt-ytt!” Biddle emphasizes that there is no really safe way to store or dispose of nuclear material, which makes our planet “newly hostile.” We can’t detect radiation with our human senses, and we can’t get away from it.
But his hope is that this utilitarian handbook will help mitigate the risk.—Fenella Saunders
September-October 2012 · Volume 21, Number 5
Procter Prize Winner Solomon Golomb
Engineer, mathematician and professor of electrical engineering Solomon W. Golomb of the University of Southern California received Sigma Xi’s 2012 William Procter Prize for Scientific Achievement during the 2012 SETA Conference at the University of Waterloo, Canada, this past June. Executive Director Dr. Jerry Baker was on hand to present the award at a banquet in Golomb’s honor, which happened to coincide with the Transit of Venus across the Sun.

Each year, Sigma Xi awards the William Procter Prize for Scientific Achievement to a scientist who has made an outstanding contribution to scientific research and has demonstrated an ability to communicate this research clearly to scientists in other disciplines.

Known as “the inventor of polyominoes,” Golomb has ranged widely in topic and focus over the course of his career, beginning with his studies of deep-space communication techniques for lunar and planetary exploration. While working on his Ph.D., Golomb spent time at the University of Oslo as a Fulbright Fellow before returning to the United States to work at the Jet Propulsion Laboratory as a Senior Research Mathematician. He joined the faculty at USC in 1963, obtained tenure a mere two years later, and has spent his academic career researching complex mathematics.
From the President

The Importance of Science to Non-Sciency People

Hello Companions!

As I was considering what to share with you in this edition, I decided to ask my friends and family for input. One of the first suggestions came from a close friend who has no formal background in science at all. He suggested that I discuss “How science touches non-sciency people.”

Think about that—we all know it’s true and we all regularly take it for granted. The vast majority of the public, and certainly everyone in industrialized countries, is affected by science and engineering research every day. I’m not talking just about the obvious medicine or even personal care products; I’m talking about computers, GPS systems, smartphones, and even just having the lights turn on when you flip the switch. The list goes on and on.

Given how important both fundamental and applied research are in improving our quality of life, why don’t more people know more about it? It’s not lack of interest. My friend Rich, who made the suggestion, is always interested to read about new scientific endeavors and is often a source for me to learn something new. Another friend, Carmen, who also has no formal background in science, is deeply interested in cosmology and spends a lot of time reading and sharing information she finds compelling.

But are they representative? Perhaps not, in the sense that they actively seek out the information. But in many ways they are: They are non-sciency people who nonetheless have an interest in the universe around them.

How often do you get a response along the lines of “oh, you must be really smart” or “I didn’t do well in physics” when you tell someone what you do? Do you just let that pass and try to move the conversation somewhere comfortable? Why?
We do ourselves a disservice if we don’t take that opportunity to explain how that person, knowingly or not, has been significantly affected by science and engineering research in a wide variety of ways. As another friend regularly says, “That iPhone was not brought to you by Apple. It was brought to you by ENGINEERS!”

We need to reach out and explain what we do in ways that inspire and inform. I’m not saying people need to understand the detailed nuances of your work—no one who isn’t really specialized is likely to. In fact, at this point I would have to study quite a lot to return to the level of understanding I had of my own work when I wrote my dissertation! But we should all work toward an elevator speech about what we do and why it’s important and interesting. It is inherently valuable to have our fellow citizens understand and value our contributions, and we should strive to understand and value theirs.

I’ve learned a lot about marketing and the importance of brand from Rich, and in many ways what I am saying is that we need to own and improve the brand image of science and engineering research. We should work to change the response so that when people hear “I’m a chemist,” it isn’t “Oh, chemistry is hard” but “Oh! Research in chemistry is really important! Tell me what you do.”

Why? This isn’t just altruistic. If everyone understood the importance of science and engineering research to our quality of life and to the economy, then perhaps it wouldn’t be such a struggle to maintain the all-important research budgets that fund our work.

Thanks for reading,
Kelly O. Sullivan
(continued on page 431)
AAAS/ACS Northwest Regional Meeting in Boise, Idaho
Executive Director Dr. Jerry Baker and Manager of Chapters and Member Services Hallie Sessoms were proud to represent Sigma Xi at the 93rd Annual AAAS/ACS Northwest Regional Meeting, held June 24–27 at the Boise Center on the Grove in Boise, Idaho. During this productive trip, they were pleased to meet with several Sigma Xi members—and even began the reactivation process for two Northwest region chapters.
Following two days of student research presentations and symposia, Sigma Xi also sponsored a pre-banquet reception in the Stueckle Sky Center’s Double D Ranch Club at Broncos Stadium for all participants and friends of the meeting. Sigma Xi President Kelly O. Sullivan was also on hand to present Sivaguru Jayaraman with the 2012 Sigma Xi Young Investigator Award. Dr. Jayaraman is an Associate Professor of Chemistry and Molecular Biology at North Dakota State University, and his lecture “Learning from Nature: Bio-mimetic Supramolecular Photocatalysis” was one of the highlights of the meeting’s final day.

Many thanks to Dr. Linda Mantell, Northwest Regional Director of Sigma Xi, for her assistance in ensuring that Sigma Xi had an influential presence during the meeting. In the future, please be sure to look for Sigma Xi at a conference or meeting near you.
Pizza Lunch Shout Out
About once a month at Sigma Xi headquarters, we liven up the lunch hour with an American Scientist Pizza Lunch talk. In these informal lectures, scientists describe new research to nonscientists. Each Pizza Lunch offers an in-depth look at a different subject, from bedbugs to the smart grid.

After each talk, American Scientist editors chat with the speakers about their research. Anyone can listen in via our American Scientist Pizza Lunch podcast, also located online. Don’t miss our rich archives of full-length audio slideshows of earlier lectures, too. Sigma Xi, the Scientific Research Society, hosts the talks in Research Triangle Park, North Carolina. The series is supported by a grant from the N.C. Biotechnology Center and is managed by American Scientist Managing Editor Fenella Saunders.
University of North Texas
We are pleased to share this photo from the University of North Texas Health Science Center Sigma Xi Honors Day & Induction Ceremony. The oath was administered by Sigma Xi Member and Dean Jamboor K. Vishwanatha, and the certificates and cords were distributed by another Sigma Xi Member, Provost Thomas Yorio. The University of North Texas Health Science Center is an excellent example of how a chapter's success is greatly increased by the support of university administration. If you have photos of your chapter inductions, please share them with Hallie Sessoms, Manager of Chapter & Member Services, today at [email protected].

Congratulations to the new members and many thanks to UNT for their dedication to Sigma Xi!
Social Media Shout Out
Have you connected with Sigma Xi via social media? Please do so today and let’s continue the conversation.
“Like” us on Facebook
www.facebook.com/SigmaXi
Follow us on Twitter
https://twitter.com/SigmaXiSociety
Connect on LinkedIn
http://www.linkedin.com/groups?gid=42707
Follow us on Pinterest
www.pinterest.com/SigmaXi
Sivaguru Jayaraman: Young Investigator Award
Sigma Xi is proud to announce that Dr. Sivaguru Jayaraman, Associate Professor of Chemistry and Molecular Biology at North Dakota State University, is this year's Young Investigator Award winner. Dr. Siva, as he is known to colleagues and students alike, has focused his efforts on a synthetic program that allows freedom of design in producing new structural motifs, both for the study of stereoselective reactions and for the chemical and biomolecular recognition of guests encapsulated within water-soluble nanoreaction vessels. His research investigates the molecular and supramolecular assembly characteristics of these systems to gain a deeper understanding of the interplay among molecular structure, assembly, dynamics and the external interactions critical for molecular recognition in light-initiated reactions. His group also uses modern molecular tools and spectroscopic techniques to probe molecular interactions in chemical and biological systems, using light both as a reagent that initiates the chemistry and as a product of the excited-state reactivity of organic molecules. Dr. Siva joined the faculty at North Dakota State University in August 2006. After receiving his Ph.D. from Tulane University in New Orleans, La., he completed a postdoctoral fellowship at Columbia University in New York, N.Y. He received his master's degree in chemistry from the Indian Institute of Technology, Madras, Tamil Nadu, India, and his bachelor's degree in chemistry from St. Joseph's College, Trichy, Tamil Nadu, India. Presented annually since 1998, the Young Investigator Award recognizes excellence in research and includes a certificate of recognition and a $5,000 honorarium.
Noah Olsman: GIAR Recipient
As a part of the William Procter Prize, Dr. Solomon Golomb has selected Noah Olsman to receive a $5,000 Grant-in-Aid of Research. Olsman is originally from Los Angeles, California; in 2008 he began studies at the University of Southern California, majoring in electrical engineering and minoring in mathematics. He began his research under Professor Golomb as a freshman, generating computational results for open problems in discrete mathematics. In 2012, he participated in an NSF-sponsored Research Experiences for Undergraduates (REU) program in the computer science department at Harvard University on the RoboBees project. While there, he developed a swarm algorithm in simulation, with the goal of providing an efficient means for robotic bees to uniformly pollinate a field given limited sensory information and poor controls.
Olsman also worked on a modeling project with Professor Daria Roithmayr in the USC School of Law, developing a framework for analyzing the evolutionary and game-theoretic tradeoffs faced by groups of agents in public-goods games. After graduating from USC in May 2012, he began work at Yale University in the computational biology department as a visiting student researcher in the lab of Professor Thierry Emonet. His work at Yale focuses on modeling aspects of chemotaxis, the process by which cells direct their movement based on their ability to sense chemical gradients. Specifically, his goal is to develop a mathematical framework to analyze the trade-offs faced by single E. coli cells navigating different environments.
Procter Prize (continued from page 429) Among his greatest achievements are the invention of Golomb coding, a form of entropy encoding, and the identification of the necessary conditions for pseudorandom, or maximum-length, shift-register sequences. Golomb's work in this realm directly contributed to the advancement of cellular phone technology, and it is applied daily in communication systems across all sectors of the globe.
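Golomb coding itself is simple enough to sketch. The short Python function below is our own illustration, not drawn from Golomb's papers: it encodes a nonnegative integer n with parameter m as a unary-coded quotient followed by a truncated-binary remainder, reducing to the well-known Rice code when m is a power of two.

```python
def golomb_encode(n: int, m: int) -> str:
    """Golomb-encode a nonnegative integer n with parameter m > 0.

    Returns a bit string: a unary quotient (q ones, then a zero)
    followed by the remainder in truncated binary.
    """
    q, r = divmod(n, m)
    bits = "1" * q + "0"              # unary part of the code
    b = m.bit_length()                # number of bits spanning 0..m-1
    if m & (m - 1) == 0:              # m is a power of two: Rice code
        if b > 1:                     # m == 1 needs no remainder bits
            bits += f"{r:0{b - 1}b}"
        return bits
    k = (1 << b) - m                  # first k remainders use b-1 bits
    if r < k:
        bits += f"{r:0{b - 1}b}"
    else:
        bits += f"{r + k:0{b}b}"      # the rest use b bits, offset by k
    return bits
```

For geometrically distributed inputs, choosing m near the mean of the distribution makes these codes nearly optimal, which is why Golomb-Rice codes appear in compression formats such as FLAC and JPEG-LS.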
It is also important to note that Golomb's work has not stayed solely in the classroom; he is a regular columnist and puzzle creator for the IEEE Information Theory Society Newsletter, Scientific American's Mathematical Games column and Johns Hopkins Magazine's puzzle column. Video game aficionados will appreciate that Golomb's work with polyominoes is widely considered the inspiration for the popular, generation-spanning 1984 game Tetris.
2012 September-October 431
Meet Your Fellow Companion - Laurent Pirolli
The honor of membership in Sigma Xi spans disciplines and courses of research study. Each month in Sigma Xi Today, we highlight a different "Fellow Companion," asking them about their work and what the Sigma Xi honor has meant for their career.
This month, we are pleased to introduce Laurent Pirolli, Ph.D., a Sensors Integration Engineer for Wireline at the Schlumberger Technology Center in Sugar Land, Texas. Dr. Pirolli is a native of France who graduated with a bachelor's degree in chemistry from the University of Versailles. He received his master's degree from the French Petroleum School in Paris after developing a passion for the important roles that gas, oil and potable water will play in the 21st century. He then received a Ph.D. in physical chemistry from the University of Delaware, studying diffusion barriers for the microelectronics industry and catalysis at the molecular and atomic levels. This research widened his scientific background to surface chemistry and materials science, which has been extremely valuable to him as an employee of an oilfield services company.
1) As a petroleum engineer, what are you currently working on? Currently, I am working on developing new tools to increase our capabilities in characterizing reservoirs down-hole, so that oil can be discovered and produced more economically.
2) What is something we might see in our daily lives that correlates to your work? There are many things related to energy, gas, oil, water and sensors. The easiest one is your car: from the oil that fills your tank to the small sensors that optimize its use.
3) How are your sensor devices used? Our sensor devices can be used to improve the operation of a tool, all the way to characterizing the liquid or gas in the reservoir, so that appropriate decisions can be made about which zones to produce and how to design the surface facilities accordingly.
4) Can you tell us a little about the development of your new sensor? Our goal is to improve the quality and offerings of our services, and to do so we develop new sensor technologies. Development starts in our Research Centers; once a sensor has reached a certain maturity, one of our Engineering Centers further develops and tests it under field conditions to make it reliable in the specified down-hole environment.
5) Tell us about your work in multidisciplinary teams in engineering and research. It is one of the most challenging and rewarding parts of my job. When developing new technology, we rely on different expertise and experience from all over the world. Everybody has a specific expertise and a defined role, so working together as a team is the key to developing a successful product. Both France and the United States rely on producing great scientists and innovators to remain leaders. It is a great challenge requiring continuous effort, but it has great rewards, and I am looking forward to encouraging my own child in this endeavor!
6) Describe the patent experience. Were there any bumps along the way for you? I have learned a lot in my early years in the oil industry, and I was lucky to work with Intellectual Property attorneys who educated me in making a strong patent case. Thanks to them, my patent experience has been very smooth.
7) What has the honor of induction into Sigma Xi meant to you? It was an honor and a great reward to be accepted as a member after all the hard work during my Ph.D. For me, Sigma Xi has always been an elite society, and having the honor of being part of it has been an achievement that made all the hard work worthwhile.
8) Why do you believe honor societies are important? Honor societies unite great minds from different fields, so that an expert in one field can educate experts in others. It is the best recipe for solving the greatest challenges!
Are you interested in being interviewed for our "Meet Your Fellow Companion" piece in a future issue of American Scientist? If so, please contact us at
[email protected]. Be sure to look for next month's Fellow Companion interview, when we talk with one of Sigma Xi's youngest Full Members!
Grants-in-Aid of Research Deadline: Oct. 15
Grants of up to $1,000 are available to undergraduate and graduate students in all areas of science and engineering. Designated funds from the National Academy of Sciences allow for grants of up to $5,000 for astronomy research and $2,500 for vision-related research.