Insuring Against Disasters

Insurance policies go back to the ancient Babylonians and were crucial in the early development of capitalism

ILLUSTRATION: THOMAS FUCHS

Living in a world without insurance, free from all those claim forms and high deductibles, might sound like a little bit of paradise. But the only thing worse than dealing with the insurance industry is trying to conduct business without it. In fact, the basic principle of insurance—pooling risk in order to minimize liability from unforeseen dangers—is one of the things that made modern capitalism possible.

The first merchants to tackle the problem of risk management in a systematic way were the Babylonians. The 18th-century B.C. Code of Hammurabi shows that they used a primitive form of insurance known as “bottomry.” According to the Code, merchants who took high-interest loans tied to shipments of goods could have the loans forgiven if the ship was lost. The practice benefited both traders and their creditors, who charged a premium of up to 30% on such loans.

The Athenians, realizing that bottomry was a far better hedge against disaster than relying on the Oracle of Delphi, subsequently developed the idea into a maritime insurance system. They had professional loan syndicates, official inspections of ships and cargoes, and legal sanctions against code violators.

With the first insurance schemes, however, came the first insurance fraud. One of the oldest known cases comes from Athens in the 4th century B.C. Two men named Hegestratos and Xenothemis obtained bottomry insurance for a shipment of corn from Syracuse to Athens. Halfway through the journey they attempted to sink the ship, only to have their plan foiled by an alert passenger. Hegestratos jumped (or was thrown) from the ship and drowned. Xenothemis was taken to Athens to meet his punishment.

In Christian Europe, insurance was widely frowned upon as a form of gambling—betting against God. Even after Pope Gregory IX decreed in the 13th century that the premiums charged on bottomry loans were not usury, because of the risk involved, the industry was slow to expand. Innovations came mainly in response to catastrophes: The Great Fire of London in 1666 led to the growth of fire insurance, while the Lisbon earthquake of 1755 did the same for life insurance.

It took the Enlightenment to bring widespread changes in the way Europeans thought about insurance. Probability became subject to numbers and statistics rather than hope and prayer. In addition to his contributions to mathematics, astronomy and physics, Edmond Halley (1656-1742), of Halley’s comet fame, developed the foundations of actuarial science—the mathematical measurement of risk. This helped to create a level playing field for sellers and buyers of insurance. By the end of the 18th century, those who abjured insurance were regarded as stupid rather than pious. Adam Smith declared that to do business without it “was a presumptuous contempt of the risk.”

But insurance only works if it can be trusted in a crisis. For the modern American insurance industry, the deadly San Francisco earthquake of 1906 was a day of reckoning. The devastation resulted in insured losses of $235 million—equivalent to $6.3 billion today. Many American insurers balked, but in Britain, Lloyd’s of London announced that every one of its customers would have their claims paid in full within 30 days. This prompt action saved livelihoods and ensured that business would be able to go on.

And that’s why we pay our premiums: You can’t predict tomorrow, but you can plan for it.

The Invention of Ice Hockey

Canada gave us the modern form of a sport that has been played for centuries around the world

ILLUSTRATION: THOMAS FUCHS

Canadians like to say—and print on mugs and T-shirts—that “Canada is Hockey.” No fewer than five Canadian cities and towns claim to be the birthplace of ice hockey, including Windsor, Nova Scotia, which has an entire museum dedicated to the sport. Canada’s annual Hockey Day, which falls on February 9 this year, features a TV marathon of hockey games. Such is the country’s love for the game that last year’s broadcast was watched by more than 1 in 4 Canadians.

But as with many of humanity’s great advances, no single country or person can take the credit for inventing ice hockey. Stick-and-ball games are as old as civilization itself. The ancient Egyptians were playing a form of field hockey as early as the 21st century B.C., if a mural on a tomb at Beni Hasan, a Middle Kingdom burial site about 120 miles south of Cairo, is anything to go by. The ancient Greeks also played a version of the game, as did the early Christian Ethiopians, the Mesoamerican Teotihuacanos in the Valley of Mexico, and the Daur tribes of Inner Mongolia. And the Scottish and Irish versions of field hockey, known as shinty and hurling respectively, have strong similarities with the modern game.

Taking a ball and stick onto the ice was therefore a fairly obvious innovation, at least in places with snowy winters. The figures may be tiny, but three villagers playing an ice hockey-type game can be seen in the background of Pieter Bruegel the Elder’s 1565 painting “Hunters in the Snow.” There is no such pictorial evidence to show when the Mi’kmaq Indians of Nova Scotia first started hitting a ball on ice, but linguistic clues suggest that their hockey tradition existed before the arrival of European traders in the 16th century. The two cultures then proceeded to influence each other, with the Mi’kmaqs becoming the foremost makers of hockey sticks in the 19th century.

The earliest known use of the word “hockey” appears in a book, “Juvenile Sports and Pastimes,” written by Richard Johnson in London in 1776. Recently, Charles Darwin became an unlikely contributor to ice hockey history after researchers found a letter in which he reminisced about playing the game as a boy in the 1820s: “I used to be very fond of playing Hocky [sic] on the ice in skates.” On January 8, 1864, the future King Edward VII played ice hockey at Windsor Castle while awaiting the birth of his first child.

As for Canada, apart from really liking the game, what has been its real contribution to ice hockey? The answer is that it created the game we know today, from the official rulebook to the size and shape of the rink to the establishment of the Stanley Cup championship in 1894. The first indoor ice hockey game was played in Montreal in 1875, thereby solving the perennial problem of pucks getting lost. (The rink was natural ice, with Canada’s cold winter supplying the refrigeration.) The game involved two teams of nine players, each with a set position—three more than teams field today—a wooden puck, and a list of rules for fouls and scoring.

In addition to being the first properly organized game, the Montreal match also initiated ice hockey’s other famous tradition: brawling on the ice. In this case, the fighting erupted between the players, spectators and skaters who wanted the ice rink back for free skating. Go Canada!

The High Cost of Financial Panics

Roman emperors and American presidents alike have struggled to deal with sudden economic crashes

ILLUSTRATION: THOMAS FUCHS

On January 12, 1819, Thomas Jefferson wrote to his friend Nathaniel Macon, “I have…entire confidence in the late and present Presidents…I slumber without fear.” He did concede, though, that market fluctuations can trip up even the best governments. Jefferson was prescient: A few days later, the country plunged into a full-blown financial panic. The trigger was a collapse in the overseas cotton market, but the crisis had been building for months. The factors that led to the crash included the actions of the Second Bank of the United States, which had helped to fuel a real estate boom in the West only to reverse course suddenly and call in its loans.

The recession that followed the panic of 1819 was prolonged and severe: Banks closed, lending all but ceased and businesses failed by the thousands. By the time it was over in 1823, almost a third of the population—including Jefferson himself—had suffered irreversible losses.

As we mark the 200th anniversary of the 1819 panic, it is worth pondering the role of governments in a financial crisis. During a panic in Rome in A.D. 33, the emperor Tiberius’s prompt action prevented a total collapse of the city’s finances. Rome was caught in the grip of a bursting real estate bubble, falling property prices and a sudden credit crunch. Instead of waiting it out, Tiberius ordered interest rates to be lowered and released 100 million sestertii (large brass coins) into the banking system to avoid a mass default.

But not all government interventions have been as successful or timely. In 1124, King Henry I of England attempted to restore confidence in the country’s money by having the moneyers publicly castrated and their right hands amputated for producing substandard coins. A temporary fix at best, his bloody act neither deterred people from debasing the coinage nor allayed fears over England’s creditworthiness.

On the other side of the globe, China began using paper money in 1023. Successive emperors of the Ming Dynasty (1368-1644) failed, however, to limit the number of notes in circulation or to back the money with gold or silver specie. By the mid-15th century the economy was in the grip of hyperinflationary cycles. The emperor Yingzong simply gave up on the problem: China returned to coinage just as Europe was discovering the uses of paper.

The rise of commercial paper along with paper currencies allowed European countries to develop more sophisticated banking systems. But they also led to panics, inflation and dangerous speculation—sometimes all at once, as in France in 1720, when John Law’s disastrous Mississippi Company share scheme ended in mass bankruptcies for its investors and the collapse of the French livre.

As it turns out, it is easier to predict the consequences of a crisis than it is to prevent one from happening. In 2015, the U.K.’s Centre for Economic Policy Research published a paper on the effects of 100 financial crises in 20 Western countries over the past 150 years, down to the recession of 2007-09. The authors found two consistent outcomes. The first is that politics becomes more extreme and polarized following a crisis; the second is that countries become more ungovernable as violence, protests and populist revolts overshadow the rule of law.

With the U.S. stock market having suffered its worst December since the Great Depression of the 1930s, it is worth remembering that the only thing more frightening than a financial crisis can be its aftermath.

WSJ Historically Speaking: At Age 50, a Time of Second Acts

Amanda Foreman finds comfort in countless examples of the power of reinvention after five decades.

ILLUSTRATION BY TONY RODRIGUEZ

I turned 50 this week, and like many people I experienced a full-blown midlife crisis in the lead-up to the Big Day. The famous F. Scott Fitzgerald quotation, “There are no second acts in American lives,” dominated my thoughts. I wondered: Now that my first act was over, would my life no longer be about opportunities and instead consist largely of consequences?

Fitzgerald, who left the line among his notes for “The Last Tycoon,” had ample reason for pessimism. He had hoped the novel would lead to his own second act after failing to make it in Hollywood, but he died at 44, broken and disappointed, leaving the book unfinished. Yet the truth about his grim line is more complicated. Several years earlier, Fitzgerald had used it to make an almost opposite point, in the essay “My Lost City”: “I once thought that there were no second acts in American lives, but there was certainly to be a second act to New York’s boom days.”

The one comfort we should take from countless examples in history is the power of reinvention. The Victorian poet William Ernest Henley was right when he wrote, “I am the master of my fate/ I am the captain of my soul.”

The point is to seize the moment. The disabled Roman Emperor Claudius (10 B.C.-A.D. 54) spent most of his life being victimized by his awful family. Claudius was 50 when his nephew, Caligula, met his end at the hands of some of his own household security, the Praetorian Guards. The historian Suetonius writes that a soldier discovered Claudius, who had tried to hide, trembling in the palace. The guards decided to make Claudius their puppet emperor. It was a grave miscalculation. Claudius grabbed his chance, shed his bumbling persona and became a forceful and innovative ruler of Rome.

In Russia many centuries later, the general Mikhail Kutuzov was in his 60s when his moment came. In 1805, Czar Alexander I had unfairly blamed Kutuzov for the army’s defeat at the Battle of Austerlitz and relegated him to desk duties. Russian society cruelly treated the general, who looked far from heroic—a character in Tolstoy’s “War and Peace” notes the corpulent Kutuzov’s war scars, especially his “bleached eyeball.” But when the country needed a savior in 1812, Kutuzov, the “has-been,” drove Napoleon and his Grande Armée out of Russia.

Winston Churchill had a similar apotheosis in World War II when he was in his 60s. Until then, his political career had been a catalog of failures, the most famous being the Gallipoli Campaign of 1915-16 that left Britain and its allies with more than 100,000 casualties.

As for writers and artists, they often find middle age extremely liberating. They cease being afraid to take risks in life. Another Fitzgerald—the Man Booker Prize-winning novelist Penelope—lived on the brink of homelessness, struggling as a tutor and teacher (she later recalled “the stuffy and inky boredom of the classroom”) until she published her first book at 58.

Anna Mary Robertson Moses, better known as Grandma Moses, may be the greatest example of self-reinvention. After many decades of farm life, around age 75 she began a new career, becoming one of America’s best-known folk painters.

Perhaps I’ll be inspired to master Greek when I am 80, as some say the Roman statesman Cato the Elder did. But what I’ve learned, while coming to terms with turning 50, is that time spent worrying about “what you might have been” is better passed with friends and family—celebrating the here and now.

WSJ Historically Speaking: When We Rally ‘Round the Flag: A History

Flag Day passes every year almost unnoticed. That’s a shame—it celebrates a symbol with ties to religious and totemic objects that have moved people for millennia

The Supreme Court declared in 1989 that desecrating the American flag is a protected form of free speech. That ended the legal debate but not the national one over how we should treat the flag. If anything, two years of controversies over athletes kneeling during “The Star-Spangled Banner,” which led last month to a National Football League ban on the practice, show that feelings are running higher than ever.

Yet, Flag Day—which honors the adoption of the Stars and Stripes by Congress on June 14, 1777—is passing by almost unnoticed this year, as it does almost every year. One reason is that Memorial Day and Independence Day—holidays of federally sanctioned free time, parades and spectacle—flank and overshadow it. That’s a shame, because we could use a day devoted to reflecting on our flag, a precious national symbol whose potency can be traced to the religious and totemic objects that have moved people for millennia.

The first flags were not pieces of cloth but metal or wooden standards affixed to poles. The Shahdad Standard, thought to be the oldest flag, hails from Persia and dates from around 2400 B.C. Because ancient societies considered standards to be conduits for the power and protection of the gods, an army always went into battle accompanied by priests bearing the kingdom’s religious emblems. Isaiah Chapter 49 includes the lines: “Thus saith the Lord God, Behold, I will lift up mine hand to the Gentiles, and set up my standard to the people.”

Ancient Rome added a practical use for standards—waving, dipping and otherwise manipulating them to show warring troops what to do next. But the symbols retained their aura as national totems, emblazoned with the letters SPQR, an abbreviation of Senatus Populusque Romanus, or Senate and People of Rome. It was a catastrophe for a legion to lose its standard in battle. In Germania in A.D. 9, a Roman army was ambushed while marching through Teutoburg Forest and lost three standards. The celebrated general Germanicus eventually recovered two of them after a massive and bloody campaign.

In succeeding centuries, the flag as we know it today began to take shape. Europeans and Arabs learned silk production, pioneered by China, which made it possible to create banners light enough to flutter in the wind. As in ancient days, they were most often designed with heraldic or religious motifs.

In the U.S., the design of the flag harked back to the Roman custom of an explicitly national symbol, but the Star-Spangled Banner was slow to attain its unique status, despite the popularity of Francis Scott Key’s 1814 anthem. It took the Civil War, with its dueling flags, to make the American flag an emblem of national consciousness. As the U.S. Navy moved to capture New Orleans from the Confederacy in 1862, Marines went ashore and raised the Stars and Stripes at the city’s mint. William Mumford, a local resident loyal to the Confederacy, tore the flag down and wore shreds of it in his buttonhole. U.S. General Benjamin Butler had Mumford arrested and executed.

After the war, the Stars and Stripes became a symbol of reconciliation. In 1867 Southerners welcomed Wisconsin war veteran Gilbert Bates as he carried the flag 1,400 miles across the South to show that the nation was healing.

As the country developed economically, a new peril lay in store for the Stars and Stripes: commercialization. The psychological and religious forces that had once made flags sacred began to fade, and the national banner was recruited for the new industry of mass advertising. Companies of the late 19th century used it to sell everything from beer to skin cream, leading to national debates over what the flag stood for and how it should be treated.

President Woodrow Wilson instituted Flag Day in 1916 in an effort to concentrate the minds of citizens on the values embodied in our most familiar national symbol. That’s as worthy a goal today as it was a century ago.

WSJ Historically Speaking: Undying Defeat: The Power of Failed Uprisings

From the Warsaw Ghetto to the Alamo, doomed rebels live on in culture

John Wayne said that he saw the Alamo as ‘a metaphor for America’. PHOTO: ALAMY

Earlier this month, Israel commemorated the 75th anniversary of the Warsaw Ghetto Uprising of April 1943. The annual Remembrance Day of the Holocaust and Heroism, as it is called, reminds Israelis of the moral duty to fight to the last.

The Warsaw ghetto battle is one of many doomed uprisings across history that have cast their influence far beyond their failures, providing inspiration to a nation’s politics and culture.

Nearly 500,000 Polish Jews once lived in the ghetto. By January 1943, the Nazis had marked the surviving 55,000 for deportation. The Jewish Fighting Organization had just one machine gun and fewer than a hundred revolvers for a thousand or so sick and starving volunteer soldiers. The Jews started by blowing up some tanks and fought on until May 16. The Germans executed 7,000 survivors and deported the rest.

For many Jews, the rebellion offered a narrative of resistance, an alternative to the grim story of the fortress of Masada, where nearly 1,000 besieged fighters chose suicide over slavery during the First Jewish-Roman War (A.D. 66–73).

The story of the Warsaw ghetto uprising has also entered the wider culture. The title of Leon Uris’s 1961 novel “Mila 18” comes from the street address of the headquarters of the Jewish resistance in their hopeless fight. Four decades later, Roman Polanski made the uprising a crucial part of his 2002 Oscar-winning film, “The Pianist,” whose musician hero aids the effort.

Other doomed uprisings have also been preserved in art. The 48-hour Paris Uprising of 1832, fought by 3,000 insurrectionists against 30,000 regular troops, gained immortality through Victor Hugo, who made the revolt a major plot point in “Les Misérables” (1862). The novel was a hit on its debut and ever after—and gave its world-wide readership a set of martyrs to emulate.

Even a young country like the U.S. has its share of national myths, of desperate last stands serving as touchstones for American identity. One has been the Battle of the Alamo in 1836 during the War of Texas Independence. “Remember the Alamo” became the Texan war cry only weeks after roughly 200 ill-equipped rebels, among them the frontiersman Davy Crockett, were killed defending the Alamo mission in San Antonio against some 2,000 Mexican troops.

The Alamo’s imagery of patriotic sacrifice became popular in novels and paintings but really took off during the film era, beginning in 1915 with the D.W. Griffith production, “Martyrs of the Alamo.” Walt Disney got in on the act with his 1950s TV miniseries, “Davy Crockett: King of the Wild Frontier.” John Wayne’s 1960 “The Alamo,” starring Wayne as Crockett, immortalized the character for a generation.

Wayne said that he saw the Alamo as “a metaphor of America” and its will for freedom. Others did too, even in very different contexts. During the Vietnam War, President Lyndon Johnson, whose hometown wasn’t far from San Antonio, once told the National Security Council why he believed U.S. troops needed to be fighting in Southeast Asia: “Hell,” he said, “Vietnam is just like the Alamo.”

WSJ Historically Speaking: When Blossoms and Bullets Go Together: The Battles of Springtime

Generals have launched spring offensives from ancient times to the Taliban era

ILLUSTRATION: THOMAS FUCHS

“When birds do sing, hey ding a ding, ding; Sweet lovers love the spring,” wrote Shakespeare. But the season has a darker side as well. As we’re now reminded each year when the Taliban anticipate the warm weather by announcing their latest spring offensive in Afghanistan, military commanders and strategists have always loved the season, too.

The World War I poet Wilfred Owen highlighted the irony of this juxtaposition—the budding of new life alongside the massacre of those in life’s prime—in his famous “Spring Offensive”: “Marvelling they stood, and watched the long grass swirled / By the May breeze”—right before their deaths.

The pairing of rebirth with violent death has an ancient history. In the 19th century, the anthropologist James George Frazer identified the concept of the “dying and rising god” as one of the earliest cornerstones of religious belief. For new life to appear in springtime, there had to be a death or sacrifice in winter. Similar sacrifice-and-rejuvenation myths can be found among the Sumerians, Egyptians, Canaanites and Greeks.

Mediterranean and Near Eastern cultures saw spring in this dual perspective for practical reasons as well. The agricultural calendar revolved around wet winters, cool springs and very hot summers when almost nothing grew except olives and figs. Harvest time for essential cereal crops such as wheat and barley took place in the spring. The months of May and June, therefore, were perfect for armies to invade, because they could live off the land. The Bible says of King David, who lived around 1,000 B.C., that he sent Joab and the Israelite army to fight the Ammonites “in the spring of the year, when kings normally go out to war.”

It was no coincidence that the Romans named the month of March after Mars, the god of war but also the guardian of agriculture. As the saying goes, “An army marches on its stomach.” For ancient Greek historians, the rhythm of war rarely changed: Discussion took place in the winter, action began in spring. When they referred to a population “waiting for spring,” it was usually literary shorthand for a people living in fear of the next attack. The military campaigns of Alexander the Great (356-323 B.C.) into the Balkans, Persia and India began with a spring offensive.

In succeeding centuries, the seasonal rhythms of Europe, which were very different from those of warmer climes, brought about a new calendar of warfare. Europe’s reliance on the autumn harvest ended the ancient marriage of spring and warfare. Conscripts were unwilling to abandon their farms and fight in the months between planting and harvesting.

 This seasonal difficulty would not be addressed until Sweden’s King Gustavus Adolphus (1594-1632), a great military innovator, developed principles for the first modern army. According to the British historian Basil Liddell Hart, Gustavus made the crucial shift from short-term conscripts, drawn away from agricultural labor, to a standing force of professional, trained soldiers on duty all year round, regardless of the seasons.

Gustavus died before he could fully implement his ideas. This revolution in military affairs fell instead to Frederick the Great, king of Prussia (1712-1786), who turned military life into a respectable upper-class career choice and the Prussian army into a mobile, flexible and efficient machine.

Frederick believed that a successful army attacks first and hard, a lesson absorbed by Napoleon a half century later. This meant that the spring season, which had become the season for drilling and training in preparation for summer campaigning, became a fighting season again.

But the modern iteration of the spring offensive is different from its ancient forebear. Its purpose isn’t to feed an army but to incapacitate enemies before they have the chance to strike. The strategy is a risky gambler’s throw, relying on timing and psychology as much as on strength and numbers.

For Napoleon, the spring offensive played to his strength in being able to combine speed, troop concentration and offensive action in a single, decisive blow. Throughout his career he relied on the spring offensive, beginning with his first military campaign in Italy (1796-7), in which the French defeated the more-numerous and better-supplied Austrians. His final spring campaign was also his boldest. Despite severe shortages of money and troops, Napoleon came within a hair’s breadth of victory at the Battle of Waterloo on June 18, 1815.

The most famous spring campaign of the early 20th century—Germany’s 1918 offensive in World War I, originated by Gen. Erich Ludendorff—reveals its limitations as a strategy. If the knockout blow doesn’t happen, what next?

 At the end of 1917, the German high command had decided that the army needed a spring offensive to revive morale. Ludendorff thought that only an attack in the Napoleonic mode would work: “The army pined for the offensive…It alone is decisive,” he wrote. He was convinced that all he had to do was “blow a hole in the middle” of the enemy’s front and “the rest will follow of its own accord.” When Ludendorff’s first spring offensive stalled after 15 days, he quickly launched four more. Lacking any other objective than the attack itself, all failed, leaving Germany bankrupt and crippled by July.

In this century, the Taliban have found their own brutal way to renew the ancient tradition—with the blossoms come the bombs and the bloodshed.

WSJ Historically Speaking: How Mermaid-Merman Tales Got to This Year’s Oscars

ILLUSTRATION: DANIEL ZALKUS

‘The Shape of Water,’ the best-picture winner, extends a tradition of ancient tales of these water creatures and their dealings with humans

Popular culture is enamored with mermaids. This year’s Best Picture Oscar winner, Guillermo del Toro’s “The Shape of Water,” about a lonely mute woman and a captured amphibious man, is a new take on an old theme. “The Little Mermaid,” Disney’s enormously successful 1989 animated film, was based on the Hans Christian Andersen story of the same name and was later turned into a Broadway musical that is still being staged across the country.

The fascination with mermythology began with the ancient Greeks. In the beginning, mermen were few and far between. As for mermaids, they were simply members of a large chorus of female sea creatures that included the benign Nereids, the sea-nymph daughters of the sea god Nereus, and the Sirens, whose singing led sailors to their doom—a fate Odysseus barely escapes in Homer’s epic “The Odyssey.”

Over the centuries, the innocuous mermaid became interchangeable with the deadly sirens. They led Scottish sailors to their deaths in one of the variations of the anonymous poem “Sir Patrick Spens,” probably written in the 15th century: “Then up it raise the mermaiden, / Wi the comb an glass in her hand: / ‘Here’s a health to you, my merrie young men, / For you never will see dry land.’”

In pictures, mermaids endlessly combed their hair while sitting semi-naked on the rocks, lying in wait for seafarers. During the Elizabethan era, a “mermaid” was a euphemism for a prostitute. Poets and artists used them to link feminine sexuality with eternal damnation.

But in other tales, the original, more innocent idea of a mermaid persisted. Andersen’s 1837 story followed an old literary tradition of a “virtuous” mermaid hoping to redeem herself through human love.

Andersen purposely broke with the old tales. As he acknowledged to a friend, his fishy heroine would “follow a more natural, more divine path” that depended on her own actions rather than those of “an alien creature.” Egged on by her sisters to murder the prince whom she loves and return to her mermaid existence, she chooses death instead—a sacrifice that earns her the right to a soul, something that mermaids were said to lack.

Richard Wagner’s version of mermaids—the Rhine maidens who guard the treasure of “Das Rheingold”—also bucked the “temptress” cliché. While these maidens could be cruel, they gave valuable advice later in the “Ring” cycle.

The cultural rehabilitation of mermaids gained steam in the 20th century. In T.S. Eliot’s 1915 poem, “The Love Song of J. Alfred Prufrock,” their erotic power becomes a symbol of release from stifling respectability. The sad protagonist laments, “I have heard the mermaids singing, each to each. / I do not think that they will sing to me.” By 1984, when a gorgeous mermaid (Daryl Hannah) fell in love with a nerdy man (Tom Hanks) in the film comedy “Splash,” audiences were ready to accept that mermaids might offer a liberating alternative to society’s hang-ups, and that the obstacle to perfect happiness is not female sexuality but humans themselves.

What makes “The Shape of Water” unusual is that a scaly male, not a sexy mermaid, is the object of affection to be rescued. Andersen probably wouldn’t recognize his Little Mermaid in Mr. del Toro’s nameless, male amphibian, yet the two tales are mirror images of the same fantasy: Love conquers all.

WSJ Historically Speaking: In Epidemics, Leaders Play a Crucial Role

ILLUSTRATION: JON KRAUSE

Lessons in heroism and horror as a famed flu pandemic hits a milestone

A century ago this week, an army cook named Albert Gitchell at Fort Riley, Kansas, paid a visit to the camp infirmary, complaining of a severe cold. It’s now thought that he was America’s patient zero in the Spanish Flu pandemic of 1918.

The disease killed more than 40 million people world-wide, including 675,000 Americans. In this case, as in so many others throughout history, the pace of the pandemic’s deadly progress depended on the actions of public officials.

Spain had allowed unrestricted reporting about the flu, so people mistakenly believed it originated there. Other countries, including the U.S., squandered thousands of lives by suppressing news and delaying health measures. Chicago kept its schools open, citing a state commission that had declared the epidemic at a “standstill,” while the city’s public health commissioner said, “It is our duty to keep the people from fear. Worry kills more people than the epidemic.”

Worry had indeed sown chaos, misery and violence in many previous outbreaks, such as the Black Death. The disease, probably caused by bacteria-infected fleas living on rodents, swept through Asia and Europe during the 1340s, killing up to a quarter of the world’s population. In Europe, where over 50 million died, a search for scapegoats led to widespread pogroms against Jews. In 1349, the city of Strasbourg (in modern-day France) put to death hundreds of Jews and expelled the rest, even before the plague had fully taken hold there.

But not all authorities lost their heads at the first sign of contagion. Pope Clement VI (1291-1352), one of a series of popes who ruled from the southern French city of Avignon, declared that the Jews had not caused the plague and issued two papal bulls against their persecution.

In Italy, Venetian authorities took the practical approach: They didn’t allow ships from infected ports to dock and subjected all travelers to a period of isolation. The term quarantine comes from the Italian quaranta giorni, meaning “40 days”—the official length of time until the Venetians granted foreign ships the right of entry.

Less exalted rulers could also show prudence and compassion in the face of a pandemic. After plague struck the English village of Eyam in 1665, the vicar William Mompesson persuaded its several hundred inhabitants not to flee, to prevent the disease from spreading to other villages. The biggest landowner in the county, the Earl of Devonshire, ensured a regular supply of food and necessities to the stricken community. Some 260 villagers died during their self-imposed quarantine, but their decision likely saved thousands of lives.

The response to more recent pandemics has not always met that same high standard. When severe acute respiratory syndrome (SARS), a viral disease, emerged in China in November 2002, the government’s refusal to acknowledge the outbreak allowed the disease to spread to Hong Kong, a hub for the West and much of Asia, thus creating a world problem. On a more hopeful note, when Ebola was spreading uncontrollably through West Africa in 2014, the Ugandans leapt into action, saturating their media with warnings and enabling quick reporting of suspected cases, and successfully contained their outbreak.

Pandemics always create a sense of crisis. History shows that public leadership is the most powerful weapon in keeping them from becoming full-blown tragedies.