When We Rally ‘Round the Flag: A History

Flag Day passes every year almost unnoticed. That’s a shame—it celebrates a symbol with ties to religious and totemic objects that have moved people for millennia.

The Supreme Court declared in 1989 that desecrating the American flag is a protected form of free speech. That ended the legal debate but not the national one over how we should treat the flag. If anything, two years of controversies over athletes kneeling during “The Star-Spangled Banner,” which led last month to a National Football League ban on the practice, show that feelings are running higher than ever.

Yet, Flag Day—which honors the adoption of the Stars and Stripes by Congress on June 14, 1777—is passing by almost unnoticed this year, as it does almost every year. One reason is that Memorial Day and Independence Day—holidays of federally sanctioned free time, parades and spectacle—flank and overshadow it. That’s a shame, because we could use a day devoted to reflecting on our flag, a precious national symbol whose potency can be traced to the religious and totemic objects that have moved people for millennia.

The first flags were not pieces of cloth but metal or wooden standards affixed to poles. The Shahdad Standard, thought to be the oldest flag, hails from Persia and dates from around 2400 B.C. Because ancient societies considered standards to be conduits for the power and protection of the gods, an army always went into battle accompanied by priests bearing the kingdom’s religious emblems. Isaiah Chapter 49 includes the lines: “Thus saith the Lord God, Behold, I will lift up mine hand to the Gentiles, and set up my standard to the people.”

Ancient Rome added a practical use for standards—waving, dipping and otherwise manipulating them to show warring troops what to do next. But the symbols retained their aura as national totems, emblazoned with the letters SPQR, an abbreviation of Senatus Populusque Romanus, or Senate and People of Rome. It was a catastrophe for a legion to lose its standard in battle. In Germania in A.D. 9, a Roman army was ambushed while marching through Teutoburg Forest and lost three standards. The celebrated general Germanicus eventually recovered two of them after a massive and bloody campaign.

In succeeding centuries, the flag as we know it today began to take shape. Europeans and Arabs learned silk production, pioneered by China, which made it possible to create banners light enough to flutter in the wind. As in ancient days, they were most often designed with heraldic or religious motifs.

In the U.S., the design of the flag harked back to the Roman custom of an explicitly national symbol, but the Star-Spangled Banner was slow to attain its unique status, despite the popularity of Francis Scott Key’s 1814 anthem. It took the Civil War, with its dueling flags, to make the American flag an emblem of national consciousness. As the U.S. Navy moved to capture New Orleans from the Confederacy in 1862, Marines went ashore and raised the Stars and Stripes at the city’s mint. William Mumford, a local resident loyal to the Confederacy, tore the flag down and wore shreds of it in his buttonhole. U.S. General Benjamin Butler had Mumford arrested and executed.

After the war, the Stars and Stripes became a symbol of reconciliation. In 1867 Southerners welcomed Wisconsin war veteran Gilbert Bates as he carried the flag 1,400 miles across the South to show that the nation was healing.

As the country developed economically, a new peril lay in store for the Stars and Stripes: commercialization. The psychological and religious forces that had once made flags sacred began to fade, and the national banner was recruited for the new industry of mass advertising. Companies of the late 19th century used it to sell everything from beer to skin cream, leading to national debates over what the flag stood for and how it should be treated.

President Woodrow Wilson instituted Flag Day in 1916 in an effort to concentrate the minds of citizens on the values embodied in our most familiar national symbol. That’s as worthy a goal today as it was a century ago.

The Gym, for Millennia of Bodies and Souls

Today’s gyms, which depend on our vanity and body envy, are a far cry from what the Greeks envisioned

Going to the gym takes on a special urgency at this time of year, as we prepare to put our bodies on display at the pool and beach. Though the desire to live a virtuous life of fitness no doubt plays its part, vanity and body envy are, I suspect, the main motivation for our seasonal exertions.

The ancient Greeks, who invented gyms (the Greek gymnasion means “school for naked exercise”), were also body-conscious, but they saw a deeper point to the sweat. No mere muscle shops, Greek gymnasia were state-sponsored institutions aimed at training young men to embody, literally, the highest ideals of Greek virtue. In Plato’s “The Republic,” Socrates says that the two branches of physical and academic education “seem to have been given by some god to men…to ensure a proper harmony between energy and initiative on the one hand and reason on the other, by tuning each to the right pitch.”

Physical competition, culminating in the Olympics, was a form of patriotic activity, and young men went to the gym to socialize, bathe and learn to think. Aristotle founded his school of philosophy at the Lyceum, a gymnasium that also offered physical training.

The Greek concept fell out of favor in the West with the rise of Christianity. The abbot St. Bernard of Clairvaux (1090–1153), who advised five popes, wrote, “The spirit flourishes more strongly…in an infirm and weak body,” neatly summing up the medieval ambivalence toward physicality.

Many centuries later, an eccentric German educator named Friedrich Jahn (1778-1852) played a key role in the gym’s revival. Convinced that Prussia’s defeat by Napoleon was due to his compatriots’ descent into physical and moral weakness, Jahn decided that a Greek-style gym would “preserve young people from laxity and…prepare them to fight for the fatherland.” In 1811, he opened a gym in Berlin for military-style physical training (not to be confused with the older German usage of the term gymnasium for the most advanced level of secondary schools).

By the mid-19th century, Europe’s upper-middle classes had sufficient wealth and leisure time to devote themselves to exercise for exercise’s sake. Hippolyte Triat opened two of the first truly commercial gyms in Brussels and Paris in the late 1840s. A retired circus strongman, he capitalized on his physique to sell his “look.”

But broader spiritual ideas still influenced the spread of physical fitness. The 19th-century movement Muscular Christianity sought to transform the working classes into healthy, patriotic Christians. One offshoot, the Young Men’s Christian Association, became famous for its low-cost gyms.

By the mid-20th century, Americans were using their gyms for two different sets of purposes. Those devoted to “manliness” worked out at places like Gold’s Gym and aimed to wow others with their physiques. The other group, “health and fitness” advocates, expanded sharply after Jack LaLanne, who founded his first gym in 1936, turned a healthy lifestyle into a salable commodity. A few decades later, Jazzercise, aerobics, disco and spandex made the gym a liberating, fashionable and sexy place.

More than 57 million Americans belong to a health club today, but until local libraries start adding spinning classes and CrossFit, the gym will remain a shadow of the original Greek ideal. We prize our sound bodies, but we aren’t nearly as devoted to developing sound mind and character.

With Big Prizes Often Comes Controversy

It’s not just the Nobel: Award-giving missteps have a long history

This spring, controversies have engulfed three big prizes.

The Swedish Academy isn’t awarding the Nobel Prize for Literature this year while it deals with the fallout from a scandal over allegations of sexual assault and financial impropriety.

In the U.S., the author Junot Díaz has stepped down as Pulitzer Prize chairman while the board investigates allegations of sexual misconduct. In a statement through his literary agent earlier this month, Mr. Díaz did not address individual accusations but said in part, “I take responsibility for my past.” Finally, the organizers of the Echo, Germany’s version of the Grammys, said they would no longer bestow the awards after one of this year’s prizes went to rappers who used anti-Semitic words and images in their lyrics and videos.

Prize-giving controversies—some more serious than others—go back millennia. I know something about prizes, having served as chairwoman of the literary Man Booker Prize jury.

The ancient Greeks gave us the concept of the arts prize. To avoid jury corruption in their drama competitions during the Festival of Dionysus, the Athenians devised a complicated system of votes and lotteries that is still not entirely understood today. Looking back now, the quality of the judging seems questionable. Euripides, the greatest tragedian of classical Greece, habitually challenged his society’s assumptions in tragedies like “Medea,” which sympathetically portrayed female desperation in a society where men ruled absolutely. In a three-way competition, “Medea,” which still holds the stage today, placed last.

Controversy surrounding a competition can be a revitalizing force—especially when the powers that be support the dissidents. By the 1860s, France’s Academy of Fine Arts, the defender of official taste, was growing increasingly out of touch with contemporary art. In 1863, the jury of the prestigious annual Salon exhibition, which the academy controlled, rejected artists such as Paul Cézanne, Camille Pissarro and Édouard Manet.

The furor caused Emperor Napoleon III to order a special exhibition, the Salon des Refusés (“Salon of Rejects”), to “let the public judge” who was right. The public was divided, but the artists felt emboldened, and many scholars regard 1863 as the birthdate of modern painting. The Academy ultimately relinquished its control of the Salon in 1881. Its time was over.

At other times, controversies over prizes are more flash than substance. As antigovernment student protests swept Paris and many other places in 1968, a group of filmmakers tried to show solidarity with the protesters by shutting down the venerable Cannes Film Festival. At one point, directors hung from a curtain to prevent a film from starting. The festival was canceled but returned in 1969 without the revolutionary changes some critics were hoping for.

In contrast, a recent dispute at the festival, over its refusal to admit to competition Netflix films that bypass French theaters for streaming, was relatively quiet but reflects a serious power struggle between streaming services and theatrical movie distributors.

As the summer approaches and the beleaguered festivals around the world take a breather, here’s some advice from a survivor of the prize process: Use this time to reflect and revive.

The Serendipity of Science Is Often Born of Years of Labor

Over the centuries, lucky discoveries have depended on training and discernment

One recent example comes from an international scientific team studying the bacterium Ideonella sakaiensis 201-F6, which makes an enzyme that breaks down the most commonly used form of plastic, thus allowing the bacterium to eat it. As reported last month in the Proceedings of the National Academy of Sciences, in the course of their research the scientists accidentally created an enzyme even better at dissolving the plastic. It’s still early days, but we may have moved a step closer to solving the man-made scourge of plastics pollution.

The development illustrates a truth about seemingly serendipitous discoveries: The “serendipity” part is usually the result of years of experimentation—and failure. A new book by two business professors at Wharton and a biology professor, “Managing Discovery in the Life Sciences,” argues that governments and pharmaceutical companies should adopt more flexible funding requirements—otherwise innovation and creativity could end up stifled by the drive for quick, concrete results. As one of the authors, Philip Rea, argues, serendipity means “getting answers to questions that were never posed.”

So much depends on who has observed the accident, too. As Joseph Henry, the first head of the Smithsonian Institution, said, “The seeds of great discoveries are constantly floating around us, but they only take root in minds well prepared to receive them.”

One famously lucky meeting of perception and conception happened in 1666, when Isaac Newton observed an apple fall from a tree. (The details remain hazy, but there’s no evidence that the fruit actually hit him, as legend has it.) Newton had seen apples fall before, of course, but this time the sight inspired him to ask questions about gravity’s relationship to the rules of motion that he was contemplating. Still, it took Newton another 20 years of work before he published his Law of Universal Gravitation.

Bad weather was the catalyst for another revelation, leading to physicist Henri Becquerel’s discovery of radioactivity in 1896. Unable to continue his photographic X-ray experiments on the effect of sunlight on uranium salts, Becquerel put the plates in a drawer. Incredibly, the plates developed images without any exposure to light. Realizing that he had been pursuing the wrong question, Becquerel started again, this time focusing on uranium itself as a radiation emitter.

As for inventions, accident and inadvertence played a role in the development of Post-it Notes and microwave heating. During the 1990s, Viagra failed miserably in trials as a treatment for angina, but alert researchers at Pfizer realized that one of the side effects could have global appeal.

The most famous accidental medical discovery is antibiotics. The biologist Alexander Fleming discovered penicillin in 1928 after he went on vacation, leaving a petri dish of bacteria out in the laboratory. On his return, he found that mold had grown on the dish, surrounded by a clean area free of bacteria. Fleming realized that something in the mold must have killed off the bacteria.

That ability to ask the right questions can be more important than knowing the right answers. Funders of science should take note.


Undying Defeat: The Power of Failed Uprisings

From the Warsaw Ghetto to the Alamo, doomed rebels live on in culture

Earlier this month, Israel commemorated the 75th anniversary of the Warsaw Ghetto Uprising of April 1943. The annual Remembrance Day of the Holocaust and Heroism, as it is called, reminds Israelis of the moral duty to fight to the last.

The Warsaw ghetto battle is one of many doomed uprisings across history that have cast their influence far beyond their failures, providing inspiration to a nation’s politics and culture.

Nearly 500,000 Polish Jews once lived in the ghetto. By January 1943, the Nazis had marked the surviving 55,000 for deportation. The Jewish Fighting Organization had just one machine gun and fewer than a hundred revolvers for a thousand or so sick and starving volunteer soldiers. The fighters opened the battle by setting German tanks ablaze with Molotov cocktails and held out until May 16. The Germans executed 7,000 survivors and deported the rest.

For many Jews, the rebellion offered a narrative of resistance, an alternative to the grim story of the fortress of Masada, where nearly 1,000 besieged fighters chose suicide over slavery during the First Jewish-Roman War (A.D. 66–73).

The story of the Warsaw ghetto uprising has also entered the wider culture. The title of Leon Uris’s 1961 novel “Mila 18” comes from the street address of the headquarters of the Jewish resistance in their hopeless fight. Four decades later, Roman Polanski made the uprising a crucial part of his 2002 Oscar-winning film, “The Pianist,” whose musician hero aids the effort.

Other doomed uprisings have also been preserved in art. The 48-hour Paris Uprising of 1832, fought by 3,000 insurrectionists against 30,000 regular troops, gained immortality through Victor Hugo, who made the revolt a major plot point in “Les Misérables” (1862). The novel was a hit on its debut and ever after—and gave its world-wide readership a set of martyrs to emulate.

Even a young country like the U.S. has its share of national myths, of desperate last stands serving as touchstones for American identity. One has been the Battle of the Alamo in 1836 during the War of Texas Independence. “Remember the Alamo” became the Texan war cry only weeks after roughly 200 ill-equipped rebels, among them the frontiersman Davy Crockett, were killed defending the Alamo mission in San Antonio against some 2,000 Mexican troops.

The Alamo’s imagery of patriotic sacrifice became popular in novels and paintings but really took off during the film era, beginning in 1915 with the D.W. Griffith production, “Martyrs of the Alamo.” Walt Disney got in on the act with his 1950s TV miniseries, “Davy Crockett: King of the Wild Frontier.” John Wayne’s 1960 “The Alamo,” starring Wayne as Crockett, immortalized the character for a generation.

Wayne said that he saw the Alamo as “a metaphor of America” and its will for freedom. Others did too, even in very different contexts. During the Vietnam War, President Lyndon Johnson, whose hometown wasn’t far from San Antonio, once told the National Security Council why he believed U.S. troops needed to be fighting in Southeast Asia: “Hell,” he said, “Vietnam is just like the Alamo.”

When Blossoms and Bullets Go Together: The Battles of Springtime

Generals have launched spring offensives from ancient times to the Taliban era

“When birds do sing, hey ding a ding, ding; Sweet lovers love the spring,” wrote Shakespeare. But the season has a darker side as well. As we’re now reminded each year when the Taliban anticipate the warm weather by announcing their latest spring offensive in Afghanistan, military commanders and strategists have always loved the season, too.

The World War I poet Wilfred Owen highlighted the irony of this juxtaposition—the budding of new life alongside the massacre of those in life’s prime—in his famous “Spring Offensive”: “Marvelling they stood, and watched the long grass swirled / By the May breeze”—right before their deaths.

The pairing of rebirth with violent death has an ancient history. In the 19th century, the anthropologist James George Frazer identified the concept of the “dying and rising god” as one of the earliest cornerstones of religious belief. For new life to appear in springtime, there had to be a death or sacrifice in winter. Similar sacrifice-and-rejuvenation myths can be found among the Sumerians, Egyptians, Canaanites and Greeks.

Mediterranean and Near Eastern cultures saw spring in this dual perspective for practical reasons as well. The agricultural calendar revolved around wet winters, cool springs and very hot summers when almost nothing grew except olives and figs. Harvest time for essential cereal crops such as wheat and barley took place in the spring. The months of May and June, therefore, were perfect for armies to invade, because they could live off the land. The Bible says of King David, who lived around 1,000 B.C., that he sent Joab and the Israelite army to fight the Ammonites “in the spring of the year, when kings normally go out to war.”

It was no coincidence that the Romans named the month of March after Mars, the god of war but also the guardian of agriculture. As the saying goes, “An army marches on its stomach.” For ancient Greek historians, the rhythm of war rarely changed: Discussion took place in the winter, action began in spring. When they referred to a population “waiting for spring,” it was usually literary shorthand for a people living in fear of the next attack. The military campaigns of Alexander the Great (356-323 B.C.) into the Balkans, Persia and India began with a spring offensive.

In succeeding centuries, the seasonal rhythms of Europe, which were very different from those of warmer climes, brought about a new calendar of warfare. Europe’s reliance on the autumn harvest ended the ancient marriage of spring and warfare. Conscripts were unwilling to abandon their farms and fight in the months between planting and harvesting.

This seasonal difficulty would not be addressed until Sweden’s King Gustavus Adolphus (1594-1632), a great military innovator, developed principles for the first modern army. According to the British historian Basil Liddell Hart, Gustavus made the crucial shift from short-term conscripts, drawn away from agricultural labor, to a standing force of professional, trained soldiers on duty all year round, regardless of the seasons.

Gustavus died before he could fully implement his ideas. This revolution in military affairs fell instead to Frederick the Great, king of Prussia (1712-1786), who turned military life into a respectable upper-class career choice and the Prussian army into a mobile, flexible and efficient machine.

Frederick believed that a successful army attacks first and hard, a lesson absorbed by Napoleon a half century later. This meant that the spring season, which had become the season for drilling and training in preparation for summer campaigning, became a fighting season again.

But the modern iteration of the spring offensive is different from its ancient forebear. Its purpose isn’t to feed an army but to incapacitate enemies before they have the chance to strike. The strategy is a risky gambler’s throw, relying on timing and psychology as much as on strength and numbers.

For Napoleon, the spring offensive played to his strength in being able to combine speed, troop concentration and offensive action in a single, decisive blow. Throughout his career he relied on the spring offensive, beginning with his first military campaign in Italy (1796-97), in which the French defeated the more-numerous and better-supplied Austrians. His final spring campaign was also his boldest. Despite severe shortages of money and troops, Napoleon came within a hair’s breadth of victory at the Battle of Waterloo on June 18, 1815.

The most famous spring campaign of the early 20th century—Germany’s 1918 offensive in World War I, originated by Gen. Erich Ludendorff—reveals its limitations as a strategy. If the knockout blow doesn’t happen, what next?

At the end of 1917, the German high command had decided that the army needed a spring offensive to revive morale. Ludendorff thought that only an attack in the Napoleonic mode would work: “The army pined for the offensive…It alone is decisive,” he wrote. He was convinced that all he had to do was “blow a hole in the middle” of the enemy’s front and “the rest will follow of its own accord.” When Ludendorff’s first spring offensive stalled after 15 days, he quickly launched four more. Lacking any other objective than the attack itself, all failed, leaving Germany bankrupt and crippled by July.

In this century, the Taliban have found their own brutal way to renew the ancient tradition—with the blossoms come the bombs and the bloodshed.

How Mermaid-Merman Tales Got to This Year’s Oscars

‘The Shape of Water,’ the best-picture winner, extends a tradition of ancient tales of these water creatures and their dealings with humans

Popular culture is enamored with mermaids. This year’s Best Picture Oscar winner, Guillermo del Toro’s “The Shape of Water,” about a lonely mute woman and a captured amphibious man, is a new take on an old theme. “The Little Mermaid,” Disney’s enormously successful 1989 animated film, was based on the Hans Christian Andersen story of the same name and was later turned into a Broadway musical that is still being staged across the country.

The fascination with mermythology began with the ancient Greeks. In the beginning, mermen were few and far between. As for mermaids, they were simply members of a large chorus of female sea creatures that included the benign Nereids, the sea-nymph daughters of the sea god Nereus, and the Sirens, whose singing led sailors to their doom—a fate Odysseus barely escapes in Homer’s epic “The Odyssey.”

Over the centuries, the innocuous mermaid became interchangeable with the deadly sirens. They led Scottish sailors to their deaths in one of the variations of the anonymous poem “Sir Patrick Spens,” probably written in the 15th century: “Then up it raise the mermaiden, / Wi the comb an glass in her hand: / ‘Here’s a health to you, my merrie young men, / For you never will see dry land.’”

In pictures, mermaids endlessly combed their hair while sitting semi-naked on the rocks, lying in wait for seafarers. During the Elizabethan era, a “mermaid” was a euphemism for a prostitute. Poets and artists used them to link feminine sexuality with eternal damnation.

But in other tales, the original, more innocent idea of a mermaid persisted. Andersen’s 1837 story followed an old literary tradition of a “virtuous” mermaid hoping to redeem herself through human love.

Andersen purposely broke with the old tales. As he acknowledged to a friend, his fishy heroine would “follow a more natural, more divine path” that depended on her own actions rather than those of “an alien creature.” Egged on by her sisters to murder the prince whom she loves and return to her mermaid existence, she chooses death instead—a sacrifice that earns her the right to a soul, something that mermaids were said to lack.

Richard Wagner’s version of mermaids—the Rhine maidens who guard the treasure of “Das Rheingold”—also bucked the “temptress” cliché. While these maidens could be cruel, they gave valuable advice later in the “Ring” cycle.

The cultural rehabilitation of mermaids gained steam in the 20th century. In T.S. Eliot’s 1915 poem, “The Love Song of J. Alfred Prufrock,” their erotic power becomes a symbol of release from stifling respectability. The sad protagonist laments, “I have heard the mermaids singing, each to each. / I do not think that they will sing to me.” By 1984, when a gorgeous mermaid (Daryl Hannah) fell in love with a nerdy man (Tom Hanks) in the film comedy “Splash,” audiences were ready to accept that mermaids might offer a liberating alternative to society’s hang-ups, and that humans themselves are the obstacle to perfect happiness, not female sexuality.

What makes “The Shape of Water” unusual is that a scaly male, not a sexy mermaid, is the object of affection to be rescued. Andersen probably wouldn’t recognize his Little Mermaid in Mr. del Toro’s nameless, male amphibian, yet the two tales are mirror images of the same fantasy: Love conquers all.

In Epidemics, Leaders Play a Crucial Role

Lessons in heroism and horror as a famed flu pandemic hits a milestone

A century ago this week, an army cook named Albert Gitchell at Fort Riley, Kansas, paid a visit to the camp infirmary, complaining of a severe cold. It’s now thought that he was America’s patient zero in the Spanish Flu pandemic of 1918.

The disease killed more than 40 million people world-wide, including 675,000 Americans. In this case, as in so many others throughout history, the pace of the pandemic’s deadly progress depended on the actions of public officials.

Spain had allowed unrestricted reporting about the flu, so people mistakenly believed it originated there. Other countries, including the U.S., squandered thousands of lives by suppressing news and delaying health measures. Chicago kept its schools open, citing a state commission that had declared the epidemic at a “standstill,” while the city’s public health commissioner said, “It is our duty to keep the people from fear. Worry kills more people than the epidemic.”

Worry had indeed sown chaos, misery and violence in many previous outbreaks, such as the Black Plague. The disease, probably caused by bacteria-infected fleas living on rodents, swept through Asia and Europe during the 1340s, killing up to a quarter of the world’s population. In Europe, where over 50 million died, a search for scapegoats led to widespread pogroms against Jews. In 1349, the city of Strasbourg, then part of the Holy Roman Empire and already somewhat affected by the plague, put to death hundreds of Jews and expelled the rest.

But not all authorities lost their heads at the first sign of contagion. Pope Clement VI (1291-1352), one of a series of popes who ruled from the southern French city of Avignon, declared that the Jews had not caused the plague and issued two papal bulls against their persecution.

In Italy, Venetian authorities took the practical approach: They didn’t allow ships from infected ports to dock and subjected all travelers to a period of isolation. The term quarantine comes from the Italian quaranta giorni, meaning “40 days”—the official length of time until the Venetians granted foreign ships the right of entry.

Less exalted rulers could also show prudence and compassion in the face of a pandemic. After plague struck the village of Eyam in England in 1665, the vicar William Mompesson persuaded its several hundred inhabitants not to flee, to prevent the disease from spreading to other villages. The biggest landowner in the county, the earl of Devonshire, ensured a regular supply of food and necessities to the stricken community. Some 260 villagers died during their self-imposed quarantine, but their decision likely saved thousands of lives.

The response to more recent pandemics has not always met that same high standard. When viral severe acute respiratory syndrome (SARS) began in China in November 2002, the government’s refusal to acknowledge the outbreak allowed the disease to spread to Hong Kong, a hub for the West and much of Asia, thus creating a world problem. On a more hopeful note, when Ebola was spreading uncontrollably through West Africa in 2014, the Ugandans leapt into action, saturating their media with warnings and enabling quick reporting of suspected cases, and successfully contained their outbreak.

Pandemics always create a sense of crisis. History shows that public leadership is the most powerful weapon in keeping them from becoming full-blown tragedies.

The Quest for Unconsciousness: A Brief History of Anesthesia

The ancient Greeks used alcohol and opium. Patients in the 12th century got a ‘soporific sponge.’ A look at anesthetics over the centuries

Every year, some 21 million Americans undergo a general anesthetic. During recent minor surgery, I became one of the roughly 26,000 Americans a year who experience “anesthetic awareness” during sedation: I woke up. I still can’t say what was more disturbing: being conscious or seeing the horrified faces of the doctors and nurses.

The best explanation my doctors could give was that not all brains react in the same way to a general anesthetic. Redheads, for example, seem to require higher dosages than brunettes. While not exactly reassuring, this explanation does highlight one of the many mysteries behind the science of anesthesia.

Although being asleep and being unconscious might look the same, they are very different states. Until the mid-19th century, a medically induced deep unconsciousness was beyond the reach of science. Healers had no reliable way to control, let alone eliminate, a patient’s awareness or pain during surgery, though not for lack of trying.

The ancient Greeks generally relied on alcohol, poppy opium or mandrake root to sedate patients. Evidence from the “Sushruta Samhita,” an ancient Sanskrit medical text, suggests that Indian healers used cannabis incense. The Chinese developed acupuncture at some point before 100 B.C., and in Central and South America, shamans used the spit from chewed coca leaves as a numbing balm.

Little changed over the centuries. In the 12th century, Nicholas of Salerno recorded in a treatise the recipe for a “soporific sponge” with ingredients that hadn’t advanced much beyond the medicines used by the Greeks: a mixture of opium, mulberry juice, lettuce seed, mandrake, ivy and hemlock.

Discoveries came but weren’t exploited. In 1540, the Swiss-born alchemist and physician Paracelsus (aka Theophrastus Bombastus von Hohenheim) noted that liquid ether could induce sleep in animals. In 1772, the English chemist Joseph Priestley discovered nitrous oxide gas (laughing gas). Using it became the thing to do at parties—in 1799, the poet Coleridge described trying the gas—but apparently no one tried using ether or nitrous oxide for medicinal purposes.

In 1811, the novelist Fanny Burney had no recourse when she went under the knife for suspected breast cancer. She wrote later, “O Heaven!—I then felt the Knife rackling against the breast bone—scraping it!”

Despite the ordeal, Burney lived into her 80s, dying in 1840—just before everything changed. Ether, nitrous oxide and later chloroform soon became common in operating theaters. On Oct. 16, 1846, a young Boston dentist named William Morton made history by anesthetizing a patient with ether during a public demonstration of surgery. It was such a success that, a few months later, Frances Appleton Longfellow, wife of Henry Wadsworth Longfellow, became the first American to receive anesthesia during childbirth.

But these wonder drugs were lethal if not administered properly. A German study compiled in 1934 estimated that chloroform-related deaths ran as high as 1 in 3,000 operations. The drive for safer drugs produced such breakthroughs as halothane, an inhaled anesthetic introduced in the mid-1950s.

Yet for all the continuous advances in anesthesia, scientists still don’t entirely understand how it works. A study published in the December 2017 issue of Annals of Botany reveals that anesthetics can also stop motion in plants like the Venus flytrap—which, as far as we know, doesn’t have a brain. Clearly, we still have a lot to learn about consciousness in every form.