Historically Speaking: The Long Road to Cleanliness

The ancient Babylonians and Egyptians knew about soap, but daily washing didn’t become popular until the 19th century.

As the mother of five teenagers, I have a keen appreciation of soap—especially when it’s actually used. Those little colored bars—or more frequently nowadays, dollops of gel—represent one of the triumphs of civilization.

Adolescents aside, human beings like to be clean, and any product made of fats or oils, alkaline salts and water will help them to stay that way. The Babylonians knew how to make soap as early as 2800 B.C., although it was probably too caustic for washing anything except hair and textiles. The Ebers Papyrus, an ancient Egyptian medical document from 1550 B.C., suggests that the Egyptians used soap only for treating skin ailments.

ILLUSTRATION: THOMAS FUCHS

The Greeks and Romans also avoided washing with harsh soaps, until Julius Caesar’s conquest of Gaul in 58 B.C. introduced them to a softer Celtic formula. Aretaeus of Cappadocia, a 1st century A.D. Greek physician, wrote that “those alkaline substances made into balls” are a “very excellent thing to cleanse the body in the bath.”

Following Rome’s collapse in the 5th century, the centers of soap-making moved to India, Africa and the Middle East. In Europe, soap suffered from its association with ancient paganism. Crusaders returning from the Middle East brought back a taste for washing with soap and water, but not in sufficient numbers to slow the spread of plague in the 14th century.

Soap began to achieve wider acceptance in Europe during the Renaissance, though geography still played a role: Southern countries had the advantage of making soap out of natural oils and perfumes, while the colder north had to make do with animal fats and whale blubber. Soap’s growing popularity also attracted the attention of revenue-hungry governments. In 1632, in one of the earliest documented cases of crony capitalism, King Charles I of England granted a group of London soapmakers a 14-year monopoly in exchange for annual payments of 4 pounds per ton sold.

Soap remained a luxury item, however, until scientific advances during the age of the Enlightenment made large-scale production possible: In 1790, the French chemist Nicolas Leblanc discovered how to make alkali from common salt. The saying “cleanliness is next to godliness”—credited to John Wesley, the founder of Methodism—was a great piece of free advertising, but it was soap’s role in modern warfare that had a bigger impact on society. During the Crimean War in Europe and the Civil War in the U.S., high death tolls from unsanitary conditions led to new requirements that soldiers use soap every day.

In the late 19th century, soap manufacturers helped to jump-start the advertising industry with their use of catchy poems and famous artworks as marketing tools. British and American soapmakers were ahead of their time in other ways, too: Lever (now Unilever) built housing for its workers, while Procter & Gamble pioneered the practice of profit-sharing.

And it was Procter & Gamble that made soap the basis for one of the most influential cultural institutions of the last century. Having read reports that women wanted to be entertained while doing housework, the company decided to sponsor daytime radio domestic dramas. Thus began the first soap opera, “Ma Perkins,” a 15-minute tear-laden serial that ran from 1933 until 1960—and created a new form of storytelling.

Historically Speaking: Fashion Shows: From Royal to Retail

The catwalk has always been a place for dazzling audiences as well as selling clothes

The 2007 Fendi Fall Collection show at the Great Wall of China. PHOTO: GETTY IMAGES

As devotees know, the fashion calendar is divided between the September fashion shows, which display the designers’ upcoming spring collections, and the February shows, which preview the fall. New York Fashion Week, which wraps up this weekend, is the world’s oldest; it started in 1943, when it was called “press week,” and always goes first, followed by London, Milan, and Paris.

Although fashion week is an American invention, the twice-yearly fashion show can be traced back to the court of Louis XIV of France in the 17th century. The king insisted on a seasonal dress code at court as a way to boost the French textile industry: velvet and satin in the winter, silks in the summer. The French were also responsible for the rise of the dress designer: Charles Frederick Worth opened the first fashion house in Paris in 1858. Worth designed unique dresses for individual clients, but he made his fortune with seasonal dress collections, which he licensed to the new department stores that were springing up in the world’s big cities.

Worth’s other innovation was the use of live models instead of mannequins. By the late 1800s this had evolved into the “fashion parade,” a precursor to today’s catwalk, which took place at invitation-only luncheons and tea parties. In 1903, the Ehrich brothers transported the fashion parade idea to their department store in New York. The big difference was that the dresses on show could be bought and worn the same day. The idea caught on, and all the major department stores began holding fashion shows.

The French couture houses studiously ignored the consumer-friendly approach pioneered by American retailers. After World War II, however, they had to tout for business like everyone else. The first Paris fashion week took place in 1947. But unlike New York’s, which catered only to journalists and wholesale buyers, the Paris shows still put the emphasis on haute couture.

The two different types of fashion show—the selling kind, organized by department stores for the public, and the preview kind, held by designers for fashion insiders—coexisted until the 1960s. Suddenly, haute couture was out and buying off the rack was in. The retail fashion show became obsolete as the design houses turned to ready-to-wear collections and accessories such as handbags and perfume.

Untethered from its couture roots, the designer fashion show morphed into performance art—the more shocking the better. The late designer Alexander McQueen provocatively titled his 1995 Fall show “Highland Rape” and sent out models in bloodied and torn clothes. The laurels for the most insanely extravagant runway show still belong to Karl Lagerfeld, who staged his 2007 Fendi Fall Collection on the Great Wall of China at a cost of $10 million.

But today there’s trouble on the catwalk. Poor attendance has led to New York’s September Fashion Week shrinking to a mere five days. Critics have started to argue that the idea of seasonal collections makes little sense in today’s global economy, while the convenience of e-commerce has made customers unwilling to wait a week for a dress, let alone six months. Designers are putting on expensive fashion shows only to have their work copied and sold to the public at knockdown prices a few weeks later. The Ehrich brothers may have been right after all: don’t just tell, sell.

Historically Speaking: Before Weather Was a Science

Modern instruments made accurate forecasting possible, but humans have tried to predict the weather for thousands of years.

The Wall Street Journal, August 31, 2019

ILLUSTRATION: THOMAS FUCHS

Labor Day weekend places special demands on meteorologists, even when there’s not a hurricane like Dorian on the way. September weather is notoriously variable: In 1974, Labor Day in Iowa was a chilly 43 degrees, while the following year it was a baking 103.

Humanity has always sought ways to predict the weather. The invention of writing during the 4th millennium B.C. was an important turning point for forecasting: It allowed the ancient Egyptians to create the first weather records, using them as a guide to predict the annual flood level of the Nile. Too high meant crop failures, too low meant drought.

Some early cultures, such as the ancient Greeks and the Mayans, based their weather predictions on the movements of the stars. Others relied on atmospheric signs and natural phenomena. One of the oldest religious texts in Indian literature, the Chandogya Upanishad from the 8th century B.C., includes observations on various types of rain clouds. In China, artists during the Han Dynasty (206 B.C.-9 A.D.) painted “cloud charts” on silk for use as weather guides.

These early forecasting attempts weren’t simply products of magical thinking. The ancient adage “red sky at night, shepherd’s delight,” which Jesus mentions in the Gospel of Matthew, is backed by hard science: The sky appears red when a high-pressure front moves in from the west, driving the clouds away.

In the 4th century B.C., Aristotle tried to provide rational explanations for weather phenomena in his treatise “Meteorologica.” His use of the scientific method laid the foundations for modern meteorology. The problem was that nothing could be built on Aristotle’s ideas until the invention of such tools as the thermometer (an early version was produced by Galileo in 1593) and the barometer (invented by his pupil Torricelli in 1643).

Such instruments couldn’t predict anything on their own, but they made possible accurate daily weather observations. Realizing this, Thomas Jefferson, a pioneer in modern weather forecasting, ordered Meriwether Lewis and William Clark to keep meticulous weather records during their 1804-06 expedition to the American West. He also made his own records wherever he resided, writing in his meteorological diary, “My method is to make two observations a day.”

Most governments, however, remained dismissive of weather forecasting until World War I. Suddenly, knowing which way the wind would blow tomorrow meant the difference between gassing your own side and gassing the enemy’s.

To make accurate predictions, meteorologists needed a mathematical model that could combine different types of data into a single forecast. The first attempt, by the English mathematician Lewis Fry Richardson in 1917, took six weeks to calculate and turned out to be completely wrong.

There were still doubts about the accuracy of weather forecasting when the Allied meteorological team told Supreme Commander Dwight Eisenhower that there was only one window of opportunity for a Normandy landing: June 6, 1944. Despite his misgivings, Eisenhower acted on the information, surprising German meteorologists who had predicted that storms would continue in the English Channel until mid-June.

As we all know, meteorologists still occasionally get their forecasts wrong. That’s when the old proverb comes into play: “There is no such thing as bad weather, only inappropriate clothes.”

Historically Speaking: Duels Among the Clouds

Aerial combat was born during World War I, giving the world a new kind of military hero: the fighter pilot

“Top Gun” is back. The 1986 film about Navy fighter pilots is getting a sequel next year, with Tom Cruise reprising his role as Lt. Pete “Maverick” Mitchell, the sexy flyboy who can’t stay out of trouble. Judging by the trailer released by Paramount in July, the new movie, “Top Gun: Maverick,” will go straight to the heart of current debates about the future of aerial combat. An unseen voice tells Mr. Cruise, “Your kind is headed for extinction.”

The mystique of the fighter pilot began during World War I, when fighter planes first entered military service. The first aerial combat took place on Oct. 5, 1914, when French and German biplanes engaged in an epic contest in the sky, watched by soldiers on both sides of the trenches. At this early stage, neither plane was built to carry weapons, but the German pilot had a rifle and the French pilot a machine gun; the latter won the day.

A furious arms race ensued. The Germans turned to the Dutch engineer Anthony Fokker, who devised a way to synchronize a plane’s propeller with its machine gun, creating a flying weapon of deadly accuracy. The Allies soon caught up, ushering in the era of the dogfight.

From the beginning, the fighter pilot seemed to belong to a special category of warrior—the dueling knight rather than the ordinary foot soldier. Flying aces of all nationalities gave each other a comradely respect. In 1916, the British marked the downing of the German fighter pilot Oswald Boelcke by dropping a wreath in his honor on his home airfield in Germany.

But not until World War II could air combat decide the outcome of an entire campaign. During the Battle of Britain in the summer of 1940, the German air force, the Luftwaffe, dispatched up to 1,000 aircraft in a single attack. The Royal Air Force’s successful defense of the skies led to British Prime Minister Winston Churchill’s famous declaration, “Never in the field of human conflict was so much owed by so many to so few.”

The U.S. air campaigns over Germany taught American military planners a different lesson. Rather than focusing on pilot skills, they concentrated on building planes with superior firepower. In the decades after World War II, the invention of air-to-air missiles was supposed to herald the end of the dogfight. But during the Vietnam War, steep American aircraft losses caused by the acrobatic, Soviet-built MiG fighter showed that one-on-one combat still mattered. The U.S. response to this threat was the highly maneuverable twin-engine F-15 and the formation of a new pilot training academy, the Navy Fighter Weapons School, which inspired the original “Top Gun.”

Since that film’s release, however, aerial combat between fighter planes has largely happened on screen, not in the real world. The last dogfight involving a U.S. aircraft took place in 1999, during the NATO air campaign in Kosovo. The F-14 Tomcats flown by Mr. Cruise’s character have been retired, and his aircraft carrier, the USS Enterprise, has been decommissioned.

Today, conventional wisdom again holds that aerial combat is obsolete. The new F-35 Joint Strike Fighter is meant to replace close-up dogfights with long-range weapons. But not everyone seems to have read the memo about the future of air warfare. Increasingly, U.S. and NATO pilots are having to scramble their planes to head off Russian incursions. The knights of the skies can’t retire just yet.

Historically Speaking: A Palace Open to the People

From the Pharaohs to Queen Victoria, royal dwellings have been symbols of how rulers think about power.

Every summer, Queen Elizabeth II opens the state rooms of Buckingham Palace to the public. This year’s opening features an exhibition that I curated, “Queen Victoria’s Palace,” the result of a three-year collaboration with Royal Collection Trust. The exhibition uses paintings, objects and even computer-generated imagery to show how Victoria transformed Buckingham Palace into both a family home and the headquarters of the monarchy. In the process, she modernized not only the building itself but also the relationship between the Royal Family and the British people.

Plenty of rulers before Victoria had built palaces, but it was always with a view to enhancing their power rather than sharing it. Consider Amarna in Egypt, the temple-palace complex created in the 14th century B.C. by Amenhotep IV, better known as Akhenaten. Supported by his beautiful wife Nefertiti, the heretical Akhenaten made himself the head of a new religion that revered the divine light of the sun’s disk, the Aten.

The Great Palace reflected Akhenaten’s megalomania: The complex featured vast open-air courtyards where the public was required to engage in mass worship of the Pharaoh and his family. Akhenaten’s palace was hated as much as his religion, and both were abandoned after his death.

Weiyang, the Endless Palace, built in 200 B.C. in western China by the Emperor Gaozu, was also designed to impart a religious message. Until its destruction in the 9th century by Tibetan invaders, Weiyang extended over two square miles, making it the largest imperial palace in history. Inside, the halls and courtyards were laid out along specific axial and symmetrical lines to ensure that the Emperor existed in harmony with the landscape and, by extension, with his people. Each chamber was ranked according to its proximity to the Emperor’s quarters; every person knew his place and obligations according to his location in the palace.

Western Europe had nothing comparable to Weiyang until King Louis XIV built the Palace of Versailles in 1682. With its unparalleled opulence—particularly the glittering Hall of Mirrors—and spectacular gardens, Versailles was a cult of personality masquerading as architecture. Louis, the self-styled Sun King at the center of this artificial universe, created a living stage where seeing and being seen was the highest form of social currency.

The offstage reality was grimmer. Except for the royal family’s quarters, Versailles lacked even such basic amenities as plumbing. The cost of upkeep swallowed a quarter of the government’s annual tax receipts. Louis XIV’s fantasy lasted a century before being swept away by the French Revolution in 1789.

Although Victoria would never have described herself as a social revolutionary, the many changes she made to Buckingham Palace were an extraordinary break with the past. From the famous balcony where the Royal Family gathers to share special occasions with the nation, to the spaces for entertaining that can welcome thousands of guests, the revitalized palace created a more inclusive form of royal architecture. It sidelined the old values of wealth, lineage, power and divine right to emphasize new ones based on family, duty, loyalty and patriotism. Victoria’s palace was perhaps less awe-inspiring than its predecessors, but it may prove to be more enduring.

The Guardian: Feminist Queen. Show explores how Victoria transformed monarchy

The story of how Victoria and Prince Albert rebuilt the palace into the most glittering court in Europe is explored through paintings, sketches and costumes, and includes a Hollywood-produced immersive experience that brings to life the balls for which she was famous.

Visiting the exhibition, Victoria’s great-great-granddaughter, the Queen, was “totally engrossed” as she watched virtual-reality dancers recreate a quadrille, a dance that was fashionable at 19th-century balls. “Thank God we don’t have to do that any more,” said the Queen.

Quadrilles, in which four couples dance together, may no longer be performed but many of Victoria’s innovations remain. She created the balcony, and bequeathed balcony appearances and garden parties to a nation. “It is now unimaginable you would have a national celebration without this balcony, so embedded is it in the nation’s consciousness,” said Dr Amanda Foreman, the historian and co-curator of the exhibition, Queen Victoria’s Buckingham Palace.

Queen Victoria’s maternal role is highlighted in the sketches she made of her nine children, as well as an ornate casket containing their milk teeth and marble sculptures she had made of their tiny arms and feet.

The centrepiece of the exhibition, which marks the 200th anniversary of Victoria’s birth, is a recreation of the grand ballroom which she had built. She believed the picture gallery was too small for lavish entertainment, noting in her journal how dresses got squashed and ruined during an attempt at a quadrille.

Digital technology by a Hollywood-based production company recreates the ballroom as it looked during a ball in 1856, with images of the wall furnishings and paintings, as shown in contemporary watercolours, projected on to its walls.

A quadrille is recreated through a hologram effect, using actors in replicas of the costumes featured in the watercolour. The technology was inspired by the Victorian illusionist trick known as Pepper’s Ghost, which used angled glass to reflect images on to the Victorian stage.

“Queen Victoria transformed Buckingham Palace, the fabric of this building, and in so doing created new traditions, those traditions which we now associate with the modern monarchy,” said Foreman.

“It is significant that it was a woman who was responsible for these traditions and a woman who defined our nation’s understanding and concept of sovereign power, how it’s experienced, how it’s expressed.

“It’s very much a feminist transformation, although Queen Victoria herself would not have used those words, and those words would not have meant to the Victorians what they mean to us today.

“We tend to diminish the contribution of women in particular. We assign their success to the men around them. We tend to simply forget who was responsible for certain things. So by putting on this exhibition, we are stripping away those layers of oblivion, forgetfulness, discounting, and allowing Queen Victoria the space to shine.”

Victoria turned the once-unloved palace into a home fit for state, public and private events. But for 10 years after her beloved Albert’s death, she rarely set foot in it, describing it in her journals as “one of my saddest of sad houses”.

 Queen Victoria’s Buckingham Palace exhibition is at the summer opening of Buckingham Palace, 20 July to 29 September 2019.

The Sunday Times: With one magnificent renovation, Queen Victoria revamped the monarchy

A new exhibition reveals how the monarch’s redesign of Buckingham Palace created a home for her family and a focus for the nation, writes its co‑curator, Amanda Foreman.

Did Queen Victoria reign over Britain or did she rule? The difference may seem like splitting hairs, but the two words go to the heart of modern debates about the way society perceives women in power. A sovereign can be chained in a dungeon and still reign, but there’s no mistaking the action implied in the verb “to rule”. The very word has a potency to it that the mealy-mouthed “reign” does not.

The Victorians could never quite resolve in their minds whether Victoria was ruling or reigning over them. To mark her diamond jubilee in 1897, the poet laureate, Alfred Austin, composed Victoria, an embarrassing poem that attempted to have it both ways — praising the monarch for 60 dutiful years on the throne while dismissing her: “But, being a woman only, I can be / Not great, but good . . . Nor in the discords that distract a Realm / Be seen or heard.”

Despite a wealth of new scholarship and biographies about Victoria, most people still find it hard to say what she actually achieved, aside from reigning for a really long time. It’s as though she simply floated through life in her womanly way, pausing only to fall in love, have babies and reportedly say things such as “We are not amused”. Her personal accomplishments are diminished, ascribed to Prince Albert’s genius or ignored.

I have co-curated, with Lucy Peter of the Royal Collection Trust, an exhibition for this year’s Buckingham Palace summer opening. It is an attempt to redress the balance. Queen Victoria’s Palace argues that Victoria’s building programme at Buckingham Palace helped to redefine the monarchy for the modern age.

The new design enabled a more open, welcoming and inclusive relationship to develop between the royal family and the public.

The house Victoria inherited in 1837 was nothing like the building we know today. The Queen’s House, as Buckingham Palace was then known, was a mishmash of rooms and styles from three reigns.

The entertaining rooms and public spaces were too small, the kitchens dilapidated, the private apartments inadequate and the plumbing and heating barely functional.

Victoria, and then Albert after their marriage, put up with its failings until there was no room for their growing family.

It’s certainly true that Albert was more involved than Victoria in the decoration of the interior. But it was Victoria’s conception of female power that dictated the palace’s final form. Kingship, as Austin’s jubilee poem helpfully pointed out, is expressed by such manly virtues as strength, glory and military might, none of which Victoria could claim without appearing to betray her feminine identity.

Instead she made queenship a reflection of her own moral values, placing the emphasis on family, duty, patriotism and public service. These four “female” virtues formed the pillars not only of her reign but of every one that followed.

Today it would be impossible to conceive of the monarchy in any other way. It is one of the very few instances where gendered power has worked in favour of women.

The Buckingham Palace that emerged from its scaffolding in 1855 was a triumph. The additions included a nursery for the children, a large balcony on the east front, state rooms for diplomatic visits and a ballroom that was large enough to accommodate 2,000 guests.

For the next six years the palace was the epicentre of the monarchy. The death of Albert on December 14, 1861, brought an abrupt end to its glory.

Incapacitated by grief, Victoria hid herself away, much to the consternation of her family and subjects.

The story of Victoria’s eventual return to public life is reflected in the slow but sure rejuvenation of the palace. There were some things that she could never bear to do there, because they intruded too much on personal memories: she never again attended a concert at the palace or played host to a visiting head of state or gave a ball like the magnificent ones of old.

But Victoria developed other ways of opening the palace to the wider world. One of the most visible was the summer garden party, a tradition that now brings 30,000 people to the palace every year.

She also allowed Prince George — later George V — and Princess Mary to appear on the balcony after their wedding, cementing a tradition now watched by hundreds of millions. The palace balcony appearance has become so ingrained in the national consciousness that each occasion receives the most intense scrutiny.

At last month’s Trooping the Colour, lip-readers were brought in by media outlets to decipher the exchange between the Duke and Duchess of Sussex. (It’s believed Prince Harry told Meghan to turn around.)

By the end of Victoria’s life, the monarchy’s popularity was greater than ever. Buckingham Palace was also back in the people’s affections, having a starring role in Victoria’s diamond jubilee in 1897 as the physical and emotional focus for the London celebrations.

After her death in 1901, much of Victoria and Albert’s taste was swept away in the name of modernity, including the east front, which was refaced by George V. The Buckingham Palace of the 21st century looks quite different from the one she built. But its purpose is the same.

The palace still functions as a private home. It is still the administrative headquarters of the monarchy. And, perhaps most important of all, it is still the place where the nation gathers to celebrate and be celebrated.

This is her legacy and the proof, if such is needed, that Victoria reigned, ruled and did much else besides.

Queen Victoria’s Palace is at Buckingham Palace until September 29

Historically Speaking: Playing Cards for Fun and Money

From 13th-century Egypt to the Wild West, the standard deck of 52 cards has provided entertainment—and temptation.

ILLUSTRATION: THOMAS FUCHS

More than 8,500 people traveled to Las Vegas to play in this year’s World Series of Poker, which ended July 16—a near-record for the contest. I’m not a poker player myself, but I understand the fun and excitement of playing with real cards in an actual game rather than online. There’s something uniquely pleasurable about a pack of cards—the way they look and feel to the touch—that can’t be replicated on the screen.

Although the origins of the playing card are believed to lie in China, the oldest known examples come from the Mamluk Sultanate, an Islamic empire that stretched across Egypt and the eastern Mediterranean from 1260 to 1517. It’s significant that the empire was governed by a warrior caste of former slaves: A playing card can be seen as an assertion of freedom, since time spent playing cards is time spent freely. The Mamluk card deck consisted of 52 cards divided into four suits, whose symbols reflected the daily realities of soldiering—a scimitar, a polo stick, a chalice and a coin.

Returning Crusaders and Venetian traders were probably responsible for bringing cards to Europe. Church and state authorities were not amused: In 1377, Parisians were banned from playing cards on work days. Like dice, cards were classed as gateway vices that led to greater sins. The authorities may not have been entirely wrong: Some surviving card decks from the 15th century have incredibly bawdy themes.

The suits and symbols used in playing cards became more uniform and less ornate following the advent of the printing press. French printers added a number of innovations, including dividing the four suits into red and black, and giving us the heart, diamond, club and spade symbols. Standardization enabled cards to become a lingua franca across cultures, further enhancing their appeal as a communal leisure activity.

In the 18th century, the humble playing card was the downfall of many a noble family, with vast fortunes being won and lost at the gaming table. Cards also started to feature in paintings and novels as symbols of the vagaries of fortune. The 19th-century short story “The Queen of Spades,” by the Russian writer Alexander Pushkin, beautifully captures the card mania of the period. The anti-hero, Hermann, is destroyed by his obsession with winning at Faro, a game of chance that was as popular in the saloons of the American West as it was in the drawing rooms of Europe. The lawman Wyatt Earp may have won fame in the gunfight at the OK Corral, but he earned his money as a Faro dealer in Tombstone, Ariz.

In Britain, attempts to regulate card-playing through high taxes on the cards themselves were a failure, though they did result in one change: Every ace of spades had to show a government tax stamp, which is why it’s the card that traditionally carries the manufacturer’s mark. The last innovation in the card deck, like the first, had military origins. Many Civil War regiments killed time by playing Euchre, which requires an extra trump card. The Samuel Hart Co. duly obliged with a card that became the forerunner of the Joker, the wild card that doesn’t have a suit.

But we shouldn’t allow the unsavory association of card games with gambling to have the last word. As Charles Dickens wrote in “Nicholas Nickleby”: “Thus two people who cannot afford to play cards for money, sometimes sit down to a quiet game for love.”

Historically Speaking: Beware the Red Tide

Massive algae blooms that devastate ocean life have been recorded since antiquity—and they are getting worse.

Real life isn’t so tidy. Currently, there is no force, biological or otherwise, capable of stopping the algae blooms that are attacking coastal waters around the world with frightening regularity, turning thousands of square miles into odoriferous graveyards of dead and rotting fish. In the U.S., one of the chief culprits is the Karenia brevis algae, a common marine microorganism that blooms when exposed to sunlight, warm water and phosphorus or nitrates. The result is a toxic sludge known as a red tide, which depletes the oxygen in the water, poisons shellfish and emits a foul vapor strong enough to irritate the lungs.

The red tide isn’t a new phenomenon, though its frequency and severity have certainly gotten worse thanks to pollution and rising water temperatures. There used to be decades between outbreaks, but since 1998 the Gulf Coast has suffered one every year.

The earliest description of a red tide may have come from Tacitus, the first-century Roman historian, in his “Annals”: “the Ocean had appeared blood-red and…the ebbing tide had left behind it what looked to be human corpses.” The Japanese recorded their first red tide catastrophe in 1234: An algae bloom in Osaka Bay invaded the Yodo River, a major waterway between Kyoto and Osaka, which led to mass deaths among humans and fish alike.

The earliest reliable accounts of red tide invasions in the Western Hemisphere come from 16th-century Spanish sailors in the Gulf of Mexico. The colorful explorer Álvar Núñez Cabeza de Vaca (ca. 1490-1560) almost lost his entire expedition to red tide poisoning while sailing in Apalachee Bay on the west coast of Florida in July 1528. Unaware that local Native American tribes avoided fishing in the area at that time of year, he allowed his men to gorge themselves on oysters. “The journey was difficult in the extreme,” he wrote afterward, “because neither the horses were sufficient to carry all the sick, nor did we know what remedy to seek because every day they languished.”

Red tides started appearing everywhere in the late 18th and early 19th centuries. Charles Darwin recorded seeing red-tinged water off the coast of Chile during his 1832 voyage on HMS Beagle. Scientists finally identified K. brevis as the culprit behind the outbreaks in 1946-47, but this was small comfort to Floridians, who were suffering the worst red tide invasion in U.S. history. It started in Naples and spread all the way to Sarasota, hanging around for 18 months, destroying the fishing industry and making life unbearable for residents. A 35-mile-long stretch of sea was so thick with rotting fish carcasses that the government dispatched Navy warships to try to break up the mass. People compared the stench to poison gas.

The red tide invasion of 2017-18 was particularly terrible, lasting some 15 months and covering 145 miles of Floridian coastline. The loss to tourism alone neared $100 million. Things are looking better this summer, fortunately, but we need more than hope or luck to combat this plague; we need a weapon that hasn’t yet been invented.