Duels Among the Clouds

Aerial combat was born during World War I, giving the world a new kind of military hero: the fighter pilot

“Top Gun” is back. The 1986 film about Navy fighter pilots is getting a sequel next year, with Tom Cruise reprising his role as Lt. Pete “Maverick” Mitchell, the sexy flyboy who can’t stay out of trouble. Judging by the trailer released by Paramount in July, the new movie, “Top Gun: Maverick,” will go straight to the heart of current debates about the future of aerial combat. An unseen voice tells Mr. Cruise, “Your kind is headed for extinction.”

The mystique of the fighter pilot began during World War I, when fighter planes first entered military service. The first aerial combat took place on Oct. 5, 1914, when French and German biplanes engaged in an epic contest in the sky, watched by soldiers on both sides of the trenches. At this early stage, neither plane carried fixed armament, but the German pilot had a rifle and the French pilot a machine gun; the latter won the day.

A furious arms race ensued. The Germans turned to the Dutch engineer Anthony Fokker, who devised a way to synchronize a machine gun’s fire with the spinning of the propeller, allowing pilots to shoot straight ahead through the blades and creating a flying weapon of deadly accuracy. The Allies soon caught up, ushering in the era of the dogfight.

From the beginning, the fighter pilot seemed to belong to a special category of warrior—the dueling knight rather than the ordinary foot soldier. Flying aces of all nationalities treated one another with comradely respect. In 1916, the British marked the downing of the German fighter pilot Oswald Boelcke by dropping a wreath in his honor on his home airfield in Germany.

But not until World War II could air combat decide the outcome of an entire campaign. During the Battle of Britain in the summer of 1940, the German air force, the Luftwaffe, dispatched up to 1,000 aircraft in a single attack. The Royal Air Force’s successful defense of the skies led to British Prime Minister Winston Churchill’s famous declaration, “Never in the field of human conflict was so much owed by so many to so few.”

The U.S. air campaigns over Germany taught American military planners a different lesson. Rather than focusing on pilot skills, they concentrated on building planes with superior firepower. In the decades after World War II, the invention of air-to-air missiles was supposed to herald the end of the dogfight. But during the Vietnam War, steep American aircraft losses caused by the acrobatic, Soviet-built MiG fighters showed that one-on-one combat still mattered. The U.S. response to this threat was the highly maneuverable twin-engine F-15 and the formation of a new pilot training program, the Navy Fighter Weapons School, which inspired the original “Top Gun.”

Since that film’s release, however, aerial combat between fighter planes has largely happened on screen, not in the real world. The last dogfight involving a U.S. aircraft took place in 1999, during the NATO air campaign in Kosovo. The F-14 Tomcats flown by Mr. Cruise’s character have been retired, and his aircraft carrier, the USS Enterprise, has been decommissioned.

Today, conventional wisdom again holds that aerial combat is obsolete. The new F-35 Joint Strike Fighter is meant to replace close-up dogfights with long-range weapons. But not everyone seems to have read the memo about the future of air warfare. Increasingly, U.S. and NATO pilots are having to scramble their planes to head off Russian incursions. The knights of the skies can’t retire just yet.

A Palace Open to the People

From the Pharaohs to Queen Victoria, royal dwellings have been symbols of how rulers think about power.

Every summer, Queen Elizabeth II opens the state rooms of Buckingham Palace to the public. This year’s opening features an exhibition that I curated, “Queen Victoria’s Palace,” the result of a three-year collaboration with Royal Collection Trust. The exhibition uses paintings, objects and even computer-generated imagery to show how Victoria transformed Buckingham Palace into both a family home and the headquarters of the monarchy. In the process, she modernized not only the building itself but also the relationship between the Royal Family and the British people.

Plenty of rulers before Victoria had built palaces, but it was always with a view to enhancing their power rather than sharing it. Consider Amarna in Egypt, the temple-palace complex created in the 14th century B.C. by Amenhotep IV, better known as Akhenaten. Supported by his beautiful wife Nefertiti, the heretical Akhenaten made himself the head of a new religion that revered the divine light of the sun’s disk, the Aten.

The Great Palace reflected Akhenaten’s megalomania: The complex featured vast open-air courtyards where the public was required to engage in mass worship of the Pharaoh and his family. Akhenaten’s palace was hated as much as his religion, and both were abandoned after his death.

Weiyang, the Endless Palace, built in 200 B.C. in western China by the Emperor Gaozu, was also designed to impart a religious message. Until its destruction in the 9th century by Tibetan invaders, Weiyang extended over two square miles, making it the largest imperial palace in history. Inside, the halls and courtyards were laid out along specific axial and symmetrical lines to ensure that the Emperor existed in harmony with the landscape and, by extension, with his people. Each chamber was ranked according to its proximity to the Emperor’s quarters; every person knew his place and obligations according to his location in the palace.

Western Europe had nothing comparable to Weiyang until King Louis XIV moved his court to the newly completed Palace of Versailles in 1682. With its unparalleled opulence—particularly the glittering Hall of Mirrors—and spectacular gardens, Versailles was a cult of personality masquerading as architecture. Louis, the self-styled Sun King at the center of this artificial universe, created a living stage where seeing and being seen was the highest form of social currency.

The offstage reality was grimmer. Except for the royal family’s quarters, Versailles lacked even such basic amenities as plumbing. The cost of upkeep swallowed a quarter of the government’s annual tax receipts. Louis XIV’s fantasy lasted a century before being swept away by the French Revolution in 1789.

Although Victoria would never have described herself as a social revolutionary, the many changes she made to Buckingham Palace were an extraordinary break with the past. From the famous balcony where the Royal Family gathers to share special occasions with the nation, to the spaces for entertaining that can welcome thousands of guests, the revitalized palace created a more inclusive form of royal architecture. It sidelined the old values of wealth, lineage, power and divine right to emphasize new ones based on family, duty, loyalty and patriotism. Victoria’s palace was perhaps less awe-inspiring than its predecessors, but it may prove to be more enduring.

Playing Cards for Fun and Money

From 13th-century Egypt to the Wild West, the standard deck of 52 cards has provided entertainment—and temptation.

More than 8,500 people traveled to Las Vegas to play in this year’s World Series of Poker, which ended July 16—a near-record for the contest. I’m not a poker player myself, but I understand the fun and excitement of playing with real cards in an actual game rather than online. There’s something uniquely pleasurable about a pack of cards—the way they look and feel to the touch—that can’t be replicated on the screen.

Although the origins of the playing card are believed to lie in China, the oldest known examples come from the Mamluk Sultanate, an Islamic empire that stretched across Egypt and the eastern Mediterranean from 1260 to 1517. It’s significant that the empire was governed by a warrior caste of former slaves: A playing card can be seen as an assertion of freedom, since time spent playing cards is time spent freely. The Mamluk card deck consisted of 52 cards divided into four suits, whose symbols reflected the daily realities of soldiering—a scimitar, a polo stick, a chalice and a coin.

Returning Crusaders and Venetian traders were probably responsible for bringing cards to Europe. Church and state authorities were not amused: In 1377, Parisians were banned from playing cards on work days. Like dice, cards were classed as gateway vices that led to greater sins. The authorities may not have been entirely wrong: Some surviving card decks from the 15th century have incredibly bawdy themes.

The suits and symbols used in playing cards became more uniform and less ornate following the advent of the printing press. French printers added a number of innovations, including dividing the four suits into red and black, and giving us the heart, diamond, club and spade symbols. Standardization enabled cards to become a lingua franca across cultures, further enhancing their appeal as a communal leisure activity.

In the 18th century, the humble playing card was the downfall of many a noble family, with vast fortunes being won and lost at the gaming table. Cards also started to feature in paintings and novels as symbols of the vagaries of fortune. The 19th-century short story “The Queen of Spades,” by the Russian writer Alexander Pushkin, beautifully captures the card mania of the period. The anti-hero, Hermann, is destroyed by his obsession with winning at Faro, a game of chance that was as popular in the saloons of the American West as it was in the drawing rooms of Europe. The lawman Wyatt Earp may have won fame in the gunfight at the OK Corral, but he earned his money as a Faro dealer in Tombstone, Ariz.

In Britain, attempts to regulate card-playing through high taxes on the cards themselves were a failure, though they did result in one change: Every ace of spades had to show a government tax stamp, which is why it’s the card that traditionally carries the manufacturer’s mark. The last innovation in the card deck, like the first, had military origins. Many Civil War regiments killed time by playing Euchre, which requires an extra trump card. The Samuel Hart Co. duly obliged with a card that became the forerunner of the Joker, the wild card that belongs to no suit.

But we shouldn’t allow the unsavory association of card games with gambling to have the last word. As Charles Dickens wrote in “Nicholas Nickleby”: “Thus two people who cannot afford to play cards for money, sometimes sit down to a quiet game for love.”


Beware the Red Tide

Massive algae blooms that devastate ocean life have been recorded since antiquity—and they are getting worse.

In H.G. Wells’s 1898 novel “The War of the Worlds,” the invading Martians bring with them a noxious red weed that suffocates the land and poisons the water. Fortunately, it dies off at the end of the novel, killed by good old British bacteria.

Real life isn’t so tidy. Currently, there is no force, biological or otherwise, capable of stopping the algae blooms that are attacking coastal waters around the world with frightening regularity, turning thousands of square miles into odoriferous graveyards of dead and rotting fish. In the U.S., one of the chief culprits is Karenia brevis, a common marine alga that blooms when exposed to sunlight, warm water and nutrients such as phosphorus and nitrates. The result is a toxic sludge known as a red tide, which depletes the oxygen in the water, poisons shellfish and emits a foul vapor strong enough to irritate the lungs.

The red tide isn’t a new phenomenon, though its frequency and severity have certainly gotten worse thanks to pollution and rising water temperatures. There used to be decades between outbreaks, but since 1998 the Gulf Coast has suffered one every year.

The earliest description of a red tide may have come from Tacitus, the first-century Roman historian, in his “Annals”: “the Ocean had appeared blood-red and…the ebbing tide had left behind it what looked to be human corpses.” The Japanese recorded their first red tide catastrophe in 1234: An algae bloom in Osaka Bay invaded the Yodo River, a major waterway between Kyoto and Osaka, which led to mass deaths among humans and fish alike.

The earliest reliable accounts of red tide invasions in the Western Hemisphere come from 16th-century Spanish sailors in the Gulf of Mexico. The colorful explorer Álvar Núñez Cabeza de Vaca (ca. 1490-1560) almost lost his entire expedition to red tide poisoning while sailing in Apalachee Bay on the west coast of Florida in July 1528. Unaware that local Native American tribes avoided fishing in the area at that time of year, he allowed his men to gorge themselves on oysters. “The journey was difficult in the extreme,” he wrote afterward, “because neither the horses were sufficient to carry all the sick, nor did we know what remedy to seek because every day they languished.”

Red tides started appearing everywhere in the late 18th and early 19th centuries. Charles Darwin recorded seeing red-tinged water off the coast of Chile during his 1832 voyage on HMS Beagle. Scientists finally identified K. brevis as the culprit behind the outbreaks in 1946-47, but this was small comfort to Floridians, who were suffering the worst red tide invasion in U.S. history. It started in Naples and spread all the way to Sarasota, hanging around for 18 months, destroying the fishing industry and making life unbearable for residents. A 35-mile-long stretch of sea was so thick with rotting fish carcasses that the government dispatched Navy warships to try to break up the mass. People compared the stench to poison gas.

The red tide invasion of 2017-18 was particularly terrible, lasting some 15 months and covering 145 miles of Floridian coastline. The loss to tourism alone neared $100 million. Things are looking better this summer, fortunately, but we need more than hope or luck to combat this plague; we need a weapon that hasn’t yet been invented.

How We Kept Cool Before Air Conditioning

Wind-catching towers and human-powered rotary fans were just some of the devices invented to fight the heat.

What would we do without our air conditioning? Given the number of rolling blackouts and brownouts that happen across the U.S. each summer, that’s not exactly a rhetorical question.

Fortunately, our ancestors knew a thing or two about staying cool even without electricity. The ancient Egyptians developed the earliest known technique, evaporative cooling: They hung wet reeds in front of windows, so that the air cooled as the water evaporated.
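The physics is simple enough to sketch with standard textbook values (the numbers below are mine, not the column’s): evaporating water absorbs about 2.4 million joules per kilogram, while warming a kilogram of air by one degree takes only about 1,005 joules. Evaporating a single gram of water into a kilogram of dry air therefore cools it by roughly

$$\Delta T \approx \frac{m_w L_v}{m_{\text{air}}\, c_p} = \frac{(0.001\ \text{kg})(2.44\times 10^{6}\ \text{J/kg})}{(1\ \text{kg})(1005\ \text{J/(kg·K)})} \approx 2.4\ \text{K}.$$

The effect weakens as the air approaches saturation, which is why the technique worked so well in Egypt’s dry climate.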

The Romans, the greatest engineers of the ancient world, had more sophisticated methods. By 312 B.C. they were piping fresh water into Rome via underground pipes and aqueducts, enabling the rich to cool and heat their houses using cold-water pipes embedded in the walls and hot-water pipes under the floor.

Nor were the Romans alone in developing clever domestic architecture to provide relief in hot climes. In the Middle East, architects constructed buildings with “wind catchers”—tall, four-windowed towers that funneled cool breezes down to ground level and allowed hot air to escape. These had the advantage of working on their own, without human labor. The Chinese had started using rotary fans as early as the second century, but they required a dedicated army of slaves to keep them moving. The addition of hydraulic power during the Song dynasty (960-1279) alleviated but didn’t end the manpower issue.

There had been no significant improvements in air conditioning designs for almost a thousand years when, in 1734, British politicians turned to Dr. John Theophilus Desaguliers, a former assistant to Isaac Newton, and begged him to find a way of cooling the overheated House of Commons. Desaguliers designed a marvelous Rube Goldberg-like system that used all three traditional methods: wind towers, pipes and rotary fans. It actually worked, so long as there was someone to crank the handle at all times.

But the machinery wore out in the late 1760s, leaving politicians as hot and bothered as ever. In desperation, the House invited Benjamin Franklin and other leading scientists to design something new. Their final scheme turned out to be no better than Desaguliers’s and required not one but two men to keep the system working.

The real breakthrough occurred in 1841, when the British engineer David Boswell Reid figured out how to control room temperature using steam power. Thanks to Reid’s ventilation system, St. George’s Hall in Liverpool is widely considered to be the world’s first air-conditioned building.

Indeed, Reid is one of history’s unsung heroes. His system worked so well that he was invited to install his pipe and ventilation design in hospitals and public buildings around the world. He was working in the U.S. at the start of the Civil War and was appointed inspector of military hospitals. Unfortunately, he died suddenly in 1863, leaving his proposed improvements to gather dust.

The chief problem with Reid’s system was that steam power was about to be overtaken by electricity. When President James Garfield was shot by an assassin in the summer of 1881, naval engineers attempted to keep him cool by using electric fans to blow air over blocks of ice. Two decades later, Willis Haviland Carrier invented the first all-electric air conditioning unit. Architects and construction engineers have been designing around it ever since.

Fears for our power grid may be exaggerated, but it’s good to know that if the unthinkable were to happen and we lost our air conditioners, history can offer us some cool solutions.

How Mermaid-Merman Tales Got to This Year’s Oscars


‘The Shape of Water,’ the best-picture winner, extends a tradition of ancient tales of these water creatures and their dealings with humans

Popular culture is enamored with mermaids. This year’s Best Picture Oscar winner, Guillermo del Toro’s “The Shape of Water,” about a lonely mute woman and a captured amphibious man, is a new take on an old theme. “The Little Mermaid,” Disney’s enormously successful 1989 animated film, was based on the Hans Christian Andersen story of the same name and was later turned into a Broadway musical that is still being staged across the country.

The fascination with mermythology began with the ancient Greeks. In the beginning, mermen were few and far between. As for mermaids, they were simply members of a large chorus of female sea creatures that included the benign Nereids, the sea-nymph daughters of the sea god Nereus, and the Sirens, whose singing led sailors to their doom—a fate Odysseus barely escapes in Homer’s epic “The Odyssey.”

Over the centuries, the innocuous mermaid became interchangeable with the deadly sirens. They led Scottish sailors to their deaths in one of the variations of the anonymous poem “Sir Patrick Spens,” probably written in the 15th century: “Then up it raise the mermaiden, / Wi the comb an glass in her hand: / ‘Here’s a health to you, my merrie young men, / For you never will see dry land.’”

In pictures, mermaids endlessly combed their hair while sitting semi-naked on the rocks, lying in wait for seafarers. During the Elizabethan era, a “mermaid” was a euphemism for a prostitute. Poets and artists used them to link feminine sexuality with eternal damnation.

But in other tales, the original, more innocent idea of a mermaid persisted. Andersen’s 1837 story followed an old literary tradition of a “virtuous” mermaid hoping to redeem herself through human love.

Andersen purposely broke with the old tales. As he acknowledged to a friend, his fishy heroine would “follow a more natural, more divine path” that depended on her own actions rather than those of “an alien creature.” Egged on by her sisters to murder the prince whom she loves and return to her mermaid existence, she chooses death instead—a sacrifice that earns her the right to a soul, something that mermaids were said to lack.

Richard Wagner’s version of mermaids—the Rhine maidens who guard the treasure of “Das Rheingold”—also bucked the “temptress” cliché. While these maidens could be cruel, they gave valuable advice later in the “Ring” cycle.

The cultural rehabilitation of mermaids gained steam in the 20th century. In T.S. Eliot’s 1915 poem, “The Love Song of J. Alfred Prufrock,” their erotic power becomes a symbol of release from stifling respectability. The sad protagonist laments, “I have heard the mermaids singing, each to each. / I do not think that they will sing to me.” By 1984, when a gorgeous mermaid (Daryl Hannah) fell in love with a nerdy man (Tom Hanks) in the film comedy “Splash,” audiences were ready to accept that mermaids might offer a liberating alternative to society’s hang-ups, and that humans themselves are the obstacle to perfect happiness, not female sexuality.

What makes “The Shape of Water” unusual is that a scaly male, not a sexy mermaid, is the object of affection to be rescued. Andersen probably wouldn’t recognize his Little Mermaid in Mr. del Toro’s nameless, male amphibian, yet the two tales are mirror images of the same fantasy: Love conquers all.

In Epidemics, Leaders Play a Crucial Role


Lessons in heroism and horror as a famed flu pandemic hits a milestone

A century ago this week, an army cook named Albert Gitchell at Fort Riley, Kansas, paid a visit to the camp infirmary, complaining of a severe cold. It’s now thought that he was America’s patient zero in the Spanish Flu pandemic of 1918.

The disease killed more than 40 million people world-wide, including 675,000 Americans. In this case, as in so many others throughout history, the pace of the pandemic’s deadly progress depended on the actions of public officials.

Spain had allowed unrestricted reporting about the flu, so people mistakenly believed it originated there. Other countries, including the U.S., squandered thousands of lives by suppressing news and delaying health measures. Chicago kept its schools open, citing a state commission that had declared the epidemic at a “standstill,” while the city’s public health commissioner said, “It is our duty to keep the people from fear. Worry kills more people than the epidemic.”

Worry had indeed sown chaos, misery and violence in many previous outbreaks, such as the Black Death. The disease, spread by bacteria-carrying fleas living on rodents, swept through Asia and Europe during the 1340s, killing up to a quarter of the world’s population. In Europe, where over 50 million died, a search for scapegoats led to widespread pogroms against Jews. In 1349, the city of Strasbourg, then part of the Holy Roman Empire, put to death hundreds of Jews and expelled the rest, even before the plague had fully taken hold there.

But not all authorities lost their heads at the first sign of contagion. Pope Clement VI (1291-1352), one of a series of popes who ruled from the southern French city of Avignon, declared that the Jews had not caused the plague and issued two papal bulls against their persecution.

In Italy, Venetian authorities took the practical approach: They didn’t allow ships from infected ports to dock and subjected all travelers to a period of isolation. The term quarantine comes from the Italian quaranta giorni, meaning “40 days”—the official length of time until the Venetians granted foreign ships the right of entry.

Less exalted rulers could also show prudence and compassion in the face of a pandemic. After plague struck the village of Eyam in England in 1665, the vicar William Mompesson persuaded its several hundred inhabitants not to flee, to prevent the disease from spreading to other villages. The biggest landowner in the county, the Earl of Devonshire, ensured a regular supply of food and necessities to the stricken community. Some 260 villagers died during their self-imposed quarantine, but their decision likely saved thousands of lives.

The response to more recent pandemics has not always met that same high standard. When viral severe acute respiratory syndrome (SARS) began in China in November 2002, the government’s refusal to acknowledge the outbreak allowed the disease to spread to Hong Kong, a hub for the West and much of Asia, thus creating a world problem. On a more hopeful note, when Ebola was spreading uncontrollably through West Africa in 2014, the Ugandans leapt into action, saturating their media with warnings and enabling quick reporting of suspected cases, and successfully contained their outbreak.

Pandemics always create a sense of crisis. History shows that public leadership is the most powerful weapon in keeping them from becoming full-blown tragedies.

Life Beyond the Three-Ring Circus

Why ‘The Greatest Show on Earth’ foundered—and what’s next


The modern circus, which celebrates its 250th anniversary this year, has attracted such famous fans as Queen Victoria, Charles Dickens and Ernest Hemingway, who wrote in 1953, “It’s the only spectacle I know that, while you watch it, gives the quality of a truly happy dream.”

Recently, however, the “happy dream” has struggled with lawsuits, high-profile bankruptcies and killer-clown scares inspired in part by the evil Pennywise in Stephen King’s “It.” Even the new Hugh Jackman-led circus film, “The Greatest Showman,” comes with an ironic twist. The surprise hit—about the legendary impresario P.T. Barnum, co-founder of “The Greatest Show on Earth”—arrives on the heels of last year’s closing of the actual Ringling Bros. and Barnum & Bailey Circus, after 146 years in business.

The word circus is Latin, but Roman and modern circuses do not share the same roots. Rome’s giant Circus Maximus, which could hold some 150,000 people, was more of a sporting arena than a theatrical venue, built to hold races, athletic competitions and executions. The Roman satirist Juvenal was alluding to the popular appeal of such spectacles when he coined the phrase “bread and circuses,” assailing citizens’ lack of interest in politics.

In fact, the entertainments commonly linked with the modern circus—acrobatics, animal performances and pantomimes—belong to traditions long predating the Romans. Four-millennia-old Egyptian paintings show female jugglers; in China, archaeologists have found 2,000-year-old clay figurines of tumblers.

Circus-type entertainments could be hideously violent: In 17th-century Britain, dogs tore into bears and chimpanzees. A humane change of pace came in 1768, when Philip Astley, often called the father of the modern circus, put on his first show in London, in a simple horse-riding ring. He found that a circle 42 feet in diameter was ideal for using centrifugal force as an aid in balancing on a horse’s back while doing tricks; it’s a size still used today, and a rough calculation below suggests why. Between the horse shows, he scheduled clowning and tumbling acts.

Circuses in fledgling America, with its long distances, shortage of venues and lack of large cities, found the European model too static and costly. In 1808, Hachaliah Bailey took the circus in a new direction by making animals the real stars, particularly an African elephant named Old Bet. The focus on animal spectacles became the American model, while Europeans still emphasized human performers.
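Why 42 feet? A back-of-the-envelope sketch, using an assumed canter speed of about 5 meters per second (a typical figure, not one recorded by Astley): a 42-foot ring has a radius of roughly 6.4 meters, so a rider standing on a horse circling it must lean inward at a steady angle of

$$\theta = \arctan\!\left(\frac{v^2}{rg}\right) = \arctan\!\left(\frac{(5\ \text{m/s})^2}{(6.4\ \text{m})(9.81\ \text{m/s}^2)}\right) \approx 22^\circ.$$

A constant lean of about 20 degrees is shallow enough to stand against yet pronounced enough to help with balance; a tighter ring would demand a sharper lean, while a much larger one would offer little centrifugal assistance.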

When railroads spread across America, circuses could ship their menageries. Already famous for his museums and “freak shows,” P.T. Barnum and his partners joined forces with rivals and used special circus trains to create the largest circus in the country. Although Barnum played up the animal and human oddities in his “sideshow,” the marquee attraction was Jumbo the Elephant. In its final year, the Ringling Bros. animal contingent, according to a news report, included tigers, camels, horses, kangaroos and snakes. The elephants had already retired.

Once animal-rights protests and rising travel costs started eroding profitability in the late 20th century, the American circus became trapped by its own history. But the success of Canada’s Cirque du Soleil, which since its 1984 debut has conquered the globe with its astounding acrobatics and staging, shows that the older European tradition introduced by Astley still has the power to inspire wonder. The future may well lie in looking backward, to the era when the stars of the show were the people in the ring.

The Long and Winding Road to New Year’s

 

The hour, date and kind of celebration have changed century to century

With its loud TV hosts, drunken parties and awful singing, New Year’s Eve might seem to have been around forever. Yet when it comes to the timing and treatment of the holiday, our version of New Year’s—the eve and day itself—is a relatively recent tradition.


The Babylonians celebrated New Year’s in March, when the vernal equinox—a day of equal light and darkness—takes place. To them, New Year’s was a time of pious reckoning rather than raucous partying. The Egyptians got the big parties going: Their celebration fell in line with the annual flooding of the Nile River. It was a chance to get roaring drunk for a few weeks rather than just for a few hours. The holiday’s timing, though, was the opposite of ours: It fell in July.
