Historically Speaking: The Long Haul of Distance Running

How the marathon became the world’s top endurance race

The Wall Street Journal

September 2, 2021

The New York City Marathon, the world’s largest, will hold its 50th race this autumn, after missing last year’s due to the pandemic. A podiatrist once told me that he always knows when there has been a marathon because of the sudden uptick in patients with stress fractures and missing toenails. Nevertheless, humans are uniquely suited to long-distance running.

Some 2-3 million years ago, our hominid ancestors began to develop sweat glands that enabled their bodies to stay cool while chasing after prey. Other mammals, by contrast, overheat unless they stop and rest. Thus, slow but sweaty humans won out over fleet but panting animals.

The marathon, at 26.2 miles, isn’t the oldest known long-distance race. Egyptian Pharaoh Taharqa liked to organize runs to keep his soldiers fit. A monument inscribed around 685 B.C. records a two-day, 62-mile race from Memphis to Fayum and back. The unnamed winner of the first leg (31 miles) completed it in about four hours.

ILLUSTRATION: THOMAS FUCHS

The considerably shorter marathon derives from the story of a Greek messenger, Pheidippides, who allegedly ran from Marathon to Athens in 490 B.C. to deliver news of victory over the Persians—only to drop dead of exhaustion at the end. But while it is true that the Greeks used long-distance runners, called hemerodromoi, or day runners, to convey messages, this story is probably a myth or a conflation of different events.

Still, messengers traveling on foot covered impressive distances in their day. Within 24 hours of Hernán Cortés’s landing in Mexico in 1519, messenger relays had carried news of his arrival over 260 miles to King Montezuma II in Tenochtitlan.

As a competitive sport, the marathon has a shorter history. The longest race at the ancient Olympic Games was about 3 miles. This didn’t stop the French philologist Michel Bréal from persuading the organizers of the inaugural modern Olympics in 1896 to recreate Pheidippides’s epic run as a way of adding a little classical flavor to the Games. The event exceeded his expectations: The Greek team trained so hard that it won 8 of the first 9 places. John Graham, manager of the U.S. Olympic team, was inspired to organize the first Boston Marathon in 1897.

Marathon runners became fitter and faster with each Olympics. But at the 1908 London Games the first runner to reach the stadium, the Italian Dorando Pietri, arrived delirious with exhaustion. He staggered and fell five times before concerned officials eventually helped him over the line. That assistance, unfortunately, got him disqualified, nullifying his time of 2:54:46.

Pietri’s collapse added fuel to the arguments of those who thought that a woman’s body could not possibly stand up to a marathon’s demands. Women were banned from the sport until 1964, when Britain’s Isle of Wight Marathon allowed the Scotswoman Dale Greig to run, with an ambulance on standby just in case. Organizers of the Boston Marathon proved more intransigent: Roberta Gibb and Kathrine Switzer tried to force their way into the race in 1966 and ’67, but Boston’s gender bar stayed in place until 1972. The Olympics held out until 1984.

Since that time, marathons have become a great equalizer, with men and women on the same course: For 26.2 miles, the only label that counts is “runner.”

Historically Speaking: The Ordeal of Standardized Testing

From the Incas to the College Board, exams have been a popular way for societies to select an elite.

The Wall Street Journal

March 11, 2021

Last month, the University of Texas at Austin joined the growing list of colleges that have made standardized test scores optional for another year due to the pandemic. Last year, applicants were quick to take up the offer: Only 44% of high-school students who applied to college using the Common Application submitted SAT or ACT scores in 2020-21, compared with 77% the previous year.

Nobody relishes taking exams, yet every culture expects some kind of proof of educational attainment from its young. To enter Plato’s Academy in ancient Athens, a prospective student had to solve mathematical problems. Would-be doctors at one of the many medical schools in Ephesus had to participate in a two-day competition that tested their knowledge as well as their surgical skills.

ILLUSTRATION: THOMAS FUCHS

On the other side of the world, the Incas of Peru were no less demanding. Entry into the nobility required four years of rigorous instruction in the Quechua language, religion and history. At the end of the course students underwent a harsh examination lasting several days that tested their physical and mental endurance.

It was the Chinese who invented the written examination, as a means of improving the quality of imperial civil servants. During the reign of Empress Wu Zetian, China’s only female ruler, in the 7th century, the exam became a national rite of passage for the intelligentsia. Despite its burdensome academic requirements, several hundred thousand candidates took it every year. A geographical quota system was eventually introduced to prevent the richer regions of China from dominating.

Over the centuries, all that cramming for one exam stifled innovation and encouraged conformity. Still, the meritocratic nature of the Chinese imperial exam greatly impressed educational reformers in the West. In 1702, Trinity College, Cambridge, became the first institution to require students to take exams in writing rather than orally. By the end of the 19th century, exams to enter a college or earn a degree had become a fixture in most European countries.

In the U.S., the reformer Horace Mann introduced standardized testing in Boston schools in the 1840s, hoping to raise the level of teaching and ensure that all citizens would have equal access to a good education. The College Board, a nonprofit organization founded by a group of colleges and high schools in 1899, established the first standardized test for university applicants.

Not every institution that adopted standardized testing had noble aims, however. The U.S. Army had experimented with multiple-choice intelligence tests during World War I and found them useless as a predictive tool. But in the early 1920s, the president of Columbia University, Nicholas M. Butler, adopted the Thorndike Tests for Mental Alertness as part of the admissions process, believing they would limit the number of Jewish students.

The College Board adopted the SAT, a multiple-choice aptitude test, in 1926, as a fair and inclusive alternative to written exams, which were thought to be biased against poorer students. In the 1960s, civil rights activists began to argue that standardized tests like the SAT and ACT were biased against minority students, but despite the mounting criticisms, the tests seemed like a permanent part of American education—until now.

Historically Speaking: Overcoming the Labor of Calculation

Inventors tried many times over the centuries to build an effective portable calculator—but no one succeeded until Jerry Merryman.

The Wall Street Journal, April 5, 2019

ILLUSTRATION: THOMAS FUCHS

The world owes a debt of gratitude to Jerry Merryman, who died on Feb. 27 at the age of 86. It was Merryman who, in 1965, worked with two other engineers at Texas Instruments to invent the world’s first pocket calculator. Today we all carry powerful calculators on our smartphones, but Merryman was the first to solve a problem that had plagued mathematicians at least since Gottfried Leibniz, one of the creators of calculus in the 17th century, who observed: “It is unworthy of excellent men to lose hours like slaves in the labor of calculation, which could safely be relegated to anyone else if machines were used.”

In ancient times, the only alternative to mental arithmetic was the abacus, with its simple square frame, wooden rods and movable beads. Most civilizations, including the Romans, Chinese, Indians and Aztecs, had their own version—from counting boards for keeping track of sums to more sophisticated designs that could calculate square roots.

We know of just one counting machine from antiquity that was more complex. The 2nd-century B.C. Antikythera Mechanism, discovered in 1901 in the remains of an ancient Greek shipwreck, was a 30-geared bronze calculator that is believed to have been used to track the movement of the heavens. Despite its ingenious construction, the mechanism had a limited range of capabilities: It could calculate dates and constellations and little else.

In the 15th century, Leonardo da Vinci drew up designs for a mechanical calculating device that consisted of 13 “digit-registering” wheels. In 1967, IBM commissioned the Italian scholar and engineer Roberto Guatelli to build a replica based on the sketches. Guatelli believed that Leonardo had invented the first calculator, but other experts disagreed. In any case, it turned out that the metal wheels would have generated so much friction that the frame would probably have caught fire.

Technological advances in clockmaking helped the French mathematician and inventor Blaise Pascal to build the first working mechanical calculator, called the Pascaline, in 1644. It wasn’t a fire hazard, and it could add and subtract. But the Pascaline was very expensive to make, fragile in the extreme, and far too limited to be really useful. Pascal made only 50 in his lifetime, of which fewer than 10 survive today.

Perhaps the greatest calculator never to see the light of day was designed by Charles Babbage in the early 19th century. He actually designed two machines—the Difference Engine, which could perform arithmetic, and the Analytical Engine, which was theoretically capable of a range of functions from direct multiplication to parallel processing. Ada, Countess of Lovelace, the daughter of the poet Lord Byron, made important contributions to the development of Babbage’s Analytical Engine. According to the historian Walter Isaacson, Lovelace realized that any process based on logical symbols could be used to represent entities other than quantity—the same principle used in modern computing.

Unfortunately, despite many years and thousands of pounds of government funding, Babbage only ever managed to build a small prototype of the Difference Engine. Even for most of the 20th century, his dream of a portable calculator stayed a dream, while the go-to instrument for doing large sums remained the slide rule. It took Merryman’s invention to allow us all to become “excellent men” and leave the labor of calculation to the machines.

Historically Speaking: The Immortal Charm of Daffodils

The humble flower has been a favorite symbol in myth and art since ancient times

The Wall Street Journal, March 22, 2019

ILLUSTRATION: THOMAS FUCHS

On April 15, 1802, the poet William Wordsworth and his sister Dorothy were enjoying a spring walk through the hills and vales of the English Lake District when they came across a field of daffodils. Dorothy was so moved that she recorded the event in her journal, noting how the flowers “tossed and reeled and danced and seemed as if they verily laughed with the wind that blew upon them over the Lake.” And William decided there was nothing for it but to write a poem, which he published in its final version in 1815. “I Wandered Lonely as a Cloud” is one of his most famous reflections on the power of nature: “For oft, when on my couch I lie/In vacant or in pensive mood,/They flash upon that inward eye/Which is the bliss of solitude;/And then my heart with pleasure fills,/And dances with the daffodils.”

Long dismissed as a common field flower, unworthy of serious attention by the artist, poet or gardener, the daffodil enjoyed a revival thanks in part to Wordsworth’s poem. The painters Claude Monet, Berthe Morisot and Vincent van Gogh were among its 19th-century champions. Today, the daffodil is so ubiquitous, in gardens and in art, that it’s easy to overlook.

But the flower deserves respect for being a survivor. Every part of the narcissus, to use its scientific name, is toxic to humans, animals and even other flowers, and yet—as many cultures have noted—it seems immortal. There are still swaths of daffodils on the lakeside meadow where the Wordsworths ambled two centuries ago.

The daffodil originated in the ancient Mediterranean, where it was regarded with deep ambivalence. The ancient Egyptians associated narcissi with the idea of death and resurrection, using them in tomb paintings. The Greeks also gave the flower contrary mythological meanings. Its scientific name comes from the story of Narcissus, a handsome youth who faded away after being cursed into falling in love with his own image. At the last moment, the gods saved him from death by granting him a lifeless immortality as a daffodil. In another Greek myth, the daffodil’s luminous beauty was used by Hades to lure Persephone away from her friends so that he could abduct her into the underworld. During her four-month captivity the only flower she saw was the asphodelus, which grew in abundance on the fields of Elysium—and whose name inspired the English derivative “daffodil.”

But it isn’t only Mediterranean cultures that have fixated on the daffodil’s mysterious alchemy of life and death. A fragrant variety of the narcissus—the sweet-smelling paper white—traveled along the Silk Road to China. There, too, the flower appeared to encapsulate the happy promise of spring, but also other painful emotions such as loss and yearning. The famous Ming Dynasty scroll painting “Narcissi and Plum Blossoms” by Qiu Ying (ca. 1494-1552), for instance, is a study in contrasts, juxtaposing exquisitely rendered flowers with the empty desolation of winter.

The English botanist John Parkinson introduced the traditional yellow variety from Spain in 1618. Aided by a soggy but temperate climate, daffodils quickly spread across lawns and fields, causing their foreign origins to be forgotten. By the 19th century they had become quintessentially British—so much so that missionaries and traders, nostalgic for home, planted bucketfuls of bulbs wherever they went. Their legacy in North America is a burst of color each year just when the browns and grays of winter have worn out their welcome.

Historically Speaking: Insuring Against Disasters

Insurance policies go back to the ancient Babylonians and were crucial in the early development of capitalism

The Wall Street Journal, February 21, 2019

ILLUSTRATION: THOMAS FUCHS

Living in a world without insurance, free from all those claim forms and high deductibles, might sound like a little bit of paradise. But the only thing worse than dealing with the insurance industry is trying to conduct business without it. In fact, the basic principle of insurance—pooling risk in order to minimize liability from unforeseen dangers—is one of the things that made modern capitalism possible.

The first merchants to tackle the problem of risk management in a systematic way were the Babylonians. The 18th-century B.C. Code of Hammurabi shows that they used a primitive form of insurance known as “bottomry.” According to the Code, merchants who took high-interest loans tied to shipments of goods could have the loans forgiven if the ship was lost. The practice benefited both traders and their creditors, who charged a premium of up to 30% on such loans.

The Athenians, realizing that bottomry was a far better hedge against disaster than relying on the Oracle of Delphi, subsequently developed the idea into a maritime insurance system. They had professional loan syndicates, official inspections of ships and cargoes, and legal sanctions against code violators.

With the first insurance schemes, however, came the first insurance fraud. One of the oldest known cases comes from Athens in the 3rd century B.C. Two men named Hegestratos and Xenothemis obtained bottomry insurance for a shipment of corn from Syracuse to Athens. Halfway through the journey they attempted to sink the ship, only to have their plan foiled by an alert passenger. Hegestratos jumped (or was thrown) from the ship and drowned. Xenothemis was taken to Athens to meet his punishment.

In Christian Europe, insurance was widely frowned upon as a form of gambling—betting against God. Even after Pope Gregory IX decreed in the 13th century that the premiums charged on bottomry loans were not usury, because of the risk involved, the industry rarely expanded. Innovations came mainly in response to catastrophes: The Great Fire of London in 1666 led to the growth of fire insurance, while the Lisbon earthquake of 1755 did the same for life insurance.

It took the Enlightenment to bring widespread changes in the way Europeans thought about insurance. Probability became subject to numbers and statistics rather than hope and prayer. In addition to his contributions to mathematics, astronomy and physics, Edmond Halley (1656-1742), of Halley’s comet fame, developed the foundations of actuarial science—the mathematical measurement of risk. This helped to create a level playing field for sellers and buyers of insurance. By the end of the 18th century, those who abjured insurance were regarded as stupid rather than pious. Adam Smith declared that to do business without it “was a presumptuous contempt of the risk.”

But insurance only works if it can be trusted in a crisis. For the modern American insurance industry, the deadly San Francisco earthquake of 1906 was a day of reckoning. The devastation resulted in insured losses of $235 million—equivalent to $6.3 billion today. Many American insurers balked, but in Britain, Lloyd’s of London announced that every one of its customers would have their claims paid in full within 30 days. This prompt action saved lives and ensured that business would be able to go on.

And that’s why we pay our premiums: You can’t predict tomorrow, but you can plan for it.

Historically Speaking: Unenforceable Laws Against Pleasure

The 100th anniversary of Prohibition is a reminder of how hard it is to regulate consumption and display

The Wall Street Journal, January 24, 2019

ILLUSTRATION: THOMAS FUCHS

This month we mark the centennial of the ratification of the Constitution’s 18th Amendment, better known as Prohibition. But the temperance movement had been active for over half a century before winning its great prize. As the novelist Anthony Trollope discovered to his regret while touring North America in 1861-62, Maine had been dry for a decade. The convivial Englishman condemned the ban: “This law, like all sumptuary laws, must fail,” he wrote.

Sumptuary laws had largely fallen into disuse by the 19th century, but they were once a near-universal tool, used in the East and West alike to control economies and preserve social hierarchies. A sumptuary law is a rule that regulates consumption in its broadest sense, from what a person may eat and drink to what they may own, wear or display. The oldest known example, the Locrian Law Code devised by the 7th-century B.C. Greek lawgiver Zaleucus, banned all citizens of Locri (except prostitutes) from ostentatious displays of gold jewelry.

Sumptuary laws were often political weapons disguised as moral pieties, aimed at less powerful groups, particularly women. In 215 B.C., at the height of the Second Punic War, the Roman Senate passed the Lex Oppia, which (among other restrictions) banned women from owning more than a half ounce of gold. Ostensibly a wartime austerity measure, the law appeared so ridiculous 20 years later as to be unenforceable. But during debate on its repeal in 195 B.C., Cato the Elder, its strongest defender, inadvertently revealed the Lex Oppia’s true purpose: “What [these women] want is complete freedom…. Once they have achieved equality, they will be your masters.”

Cato’s message about preserving social hierarchy echoed down the centuries. As trade and economic stability returned to Europe during the High Middle Ages (1000-1300), so did the use of sumptuary laws to keep the new merchant elites in their place. By the 16th century, sumptuary laws in Europe had extended from clothing to almost every aspect of daily life. The more they were circumvented, the more specific such laws became. An edict issued by King Henry VIII of England in 1517, for example, dictated the maximum number of dishes allowed at a meal: nine for a cardinal, seven for the aristocracy and three for the gentry.

The rise of modern capitalism ultimately made sumptuary laws obsolete. Trade turned once-scarce luxuries into mass commodities that simply couldn’t be controlled. Adam Smith’s “The Wealth of Nations” (1776) confirmed what had been obvious for over a century: Consumption and liberty go hand in hand. “It is the highest impertinence,” he wrote, “to pretend to watch over the economy of private people…either by sumptuary laws, or by prohibiting the importation of foreign luxuries.”

Smith’s pragmatic view was echoed by President William Howard Taft. He opposed Prohibition on the grounds that it was coercive rather than consensual, arguing that “experience has shown that a law of this kind, sumptuary in its character, can only be properly enforced in districts in which a majority of the people favor the law.” Mass immigration in early 20th-century America had changed many cities into ethnic melting-pots. Taft recognized Prohibition as an attempt by nativists to impose cultural uniformity on immigrant communities whose attitudes toward alcohol were more permissive. But his warning was ignored, and the disastrous course of Prohibition was set.

Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2019

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient empire in modern-day Iraq where the world’s first cities emerged in the 4th millennium B.C., up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 B.C. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th-century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

WSJ Historically Speaking: The Gym, for Millennia of Bodies and Souls

Today’s gyms, which depend on our vanity and body envy, are a far cry from what the Greeks envisioned

ILLUSTRATION: THOMAS FUCHS

Going to the gym takes on a special urgency at this time of year, as we prepare to put our bodies on display at the pool and beach. Though the desire to live a virtuous life of fitness no doubt plays its part, vanity and body envy are, I suspect, the main motivation for our seasonal exertions.

The ancient Greeks, who invented gyms (the Greek gymnasion means “school for naked exercise”), were also body-conscious, but they saw a deeper point to the sweat. No mere muscle shops, Greek gymnasia were state-sponsored institutions aimed at training young men to embody, literally, the highest ideals of Greek virtue. In Plato’s “The Republic,” Socrates says that the two branches of physical and academic education “seem to have been given by some god to men…to ensure a proper harmony between energy and initiative on the one hand and reason on the other, by tuning each to the right pitch.”

Physical competition, culminating in the Olympics, was a form of patriotic activity, and young men went to the gym to socialize, bathe and learn to think. Aristotle founded his school of philosophy in the Lyceum, in a gymnasium that included physical training.

The Greek concept fell out of favor in the West with the rise of Christianity. The abbot St. Bernard of Clairvaux (1090–1153), who advised five popes, wrote, “The spirit flourishes more strongly…in an infirm and weak body,” neatly summing up the medieval ambivalence toward physicality.

Many centuries later, an eccentric German educator named Friedrich Jahn (1778-1852) played a key role in the gym’s revival. Convinced that Prussia’s defeat by Napoleon was due to his compatriots’ descent into physical and moral weakness, Jahn decided that a Greek-style gym would “preserve young people from laxity and…prepare them to fight for the fatherland.” In 1811, he opened a gym in Berlin for military-style physical training (not to be confused with the older German usage of the term gymnasium for the most advanced level of secondary schools).

By the mid-19th century, Europe’s upper-middle classes had sufficient wealth and leisure time to devote themselves to exercise for exercise’s sake. Hippolyte Triat opened two of the first truly commercial gyms in Brussels and Paris in the late 1840s. A retired circus strongman, he capitalized on his physique to sell his “look.”

But broader spiritual ideas still influenced the spread of physical fitness. The 19th-century movement Muscular Christianity sought to transform the working classes into healthy, patriotic Christians. One offshoot, the Young Men’s Christian Association, became famous for its low-cost gyms.

By the mid-20th century, Americans were using their gyms for two different sets of purposes. Those devoted to “manliness” worked out at places like Gold’s Gym and aimed to wow others with their physiques. The other group, “health and fitness” advocates, expanded sharply after Jack LaLanne, who founded his first gym in 1936, turned a healthy lifestyle into a salable commodity. A few decades later, Jazzercise, aerobics, disco and spandex made the gym a liberating, fashionable and sexy place.

More than 57 million Americans belong to a health club today, but until local libraries start adding spinning classes and CrossFit, the gym will remain a shadow of the original Greek ideal. We prize our sound bodies, but we aren’t nearly as devoted to developing sound mind and character.

WSJ Historically Speaking: How Mermaid-Merman Tales Got to This Year’s Oscars

ILLUSTRATION: DANIEL ZALKUS

‘The Shape of Water,’ the best-picture winner, extends a tradition of ancient tales of these water creatures and their dealings with humans

Popular culture is enamored with mermaids. This year’s Best Picture Oscar winner, Guillermo del Toro’s “The Shape of Water,” about a lonely mute woman and a captured amphibious man, is a new take on an old theme. “The Little Mermaid,” Disney’s enormously successful 1989 animated film, was based on the Hans Christian Andersen story of the same name, and it was turned into a Broadway musical, which is still being staged across the country.

The fascination with mermythology began with the ancient Greeks. In the beginning, mermen were few and far between. As for mermaids, they were simply members of a large chorus of female sea creatures that included the benign Nereids, the sea-nymph daughters of the sea god Nereus, and the Sirens, whose singing led sailors to their doom—a fate Odysseus barely escapes in Homer’s epic “The Odyssey.”

Over the centuries, the innocuous mermaid became interchangeable with the deadly sirens. They led Scottish sailors to their deaths in one of the variations of the anonymous poem “Sir Patrick Spens,” probably written in the 15th century: “Then up it raise the mermaiden, / Wi the comb an glass in her hand: / ‘Here’s a health to you, my merrie young men, / For you never will see dry land.’”

In pictures, mermaids endlessly combed their hair while sitting semi-naked on the rocks, lying in wait for seafarers. During the Elizabethan era, a “mermaid” was a euphemism for a prostitute. Poets and artists used them to link feminine sexuality with eternal damnation.

But in other tales, the original, more innocent idea of a mermaid persisted. Andersen’s 1837 story followed an old literary tradition of a “virtuous” mermaid hoping to redeem herself through human love.

Andersen purposely broke with the old tales. As he acknowledged to a friend, his fishy heroine would “follow a more natural, more divine path” that depended on her own actions rather than those of “an alien creature.” Egged on by her sisters to murder the prince whom she loves and return to her mermaid existence, she chooses death instead—a sacrifice that earns her the right to a soul, something that mermaids were said to lack.

Richard Wagner’s version of mermaids—the Rhine maidens who guard the treasure of “Das Rheingold”—also bucked the “temptress” cliché. While these maidens could be cruel, they gave valuable advice later in the “Ring” cycle.

The cultural rehabilitation of mermaids gained steam in the 20th century. In T.S. Eliot’s 1915 poem, “The Love Song of J. Alfred Prufrock,” their erotic power becomes a symbol of release from stifling respectability. The sad protagonist laments, “I have heard the mermaids singing, each to each. / I do not think that they will sing to me.” By 1984, when a gorgeous mermaid (Daryl Hannah) fell in love with a nerdy man (Tom Hanks) in the film comedy “Splash,” audiences were ready to accept that mermaids might offer a liberating alternative to society’s hang-ups, and that humans themselves are the obstacle to perfect happiness, not female sexuality.

What makes “The Shape of Water” unusual is that a scaly male, not a sexy mermaid, is the object of affection to be rescued. Andersen probably wouldn’t recognize his Little Mermaid in Mr. del Toro’s nameless, male amphibian, yet the two tales are mirror images of the same fantasy: Love conquers all.

WSJ Historically Speaking: The Perils of Cultural Purity

PHOTO: THOMAS FUCHS

“Cultural appropriation” is a leading contender for the most overused phrase of 2017. Originally employed by academics in postcolonial studies to describe the adoption of one culture’s creative expressions by another, the term has evolved to mean the theft or exploitation of an ethnic culture or history by persons of white European heritage.