Historically Speaking: Electric Lights for Yuletide

In 1882, Thomas Edison’s business partner put up a Christmas tree decorated with 80 red, white and blue bulbs—and launched an American tradition.

The Wall Street Journal, December 5, 2019

As a quotation often attributed to Maya Angelou has it, “You can tell a lot about a person by the way (s)he handles these three things: a rainy day, lost luggage and tangled Christmas tree lights.” I’m not sure what it says about me that I actually look forward to getting my hands on the latter. My house is so brightly decorated with energy-saving LEDs that it could double as a landing beacon on a foggy night. It’s the one thing I really missed when I lived abroad—no other country does Christmas lights like America. More than 80 million households put up lighting displays each December, creating a seasonal spike in U.S. energy use that’s bigger than the annual consumption of some small countries.

Holiday lights in Brooklyn, 2015. PHOTO: GETTY IMAGES

Hanging festive lighting during the winter solstice is an ancient practice, but the modern version owes its origins to Thomas Edison and his business partner Edward H. Johnson. Edison perfected the first fully functional lightbulb in 1879. For Christmas the following year, he strung up lights outside his Menlo Park laboratory—partly to provide good cheer but mostly to advertise the benefits of electrification. Johnson went one better in 1882, placing a Christmas tree decorated with 80 red, white and blue blinking lightbulbs on a revolving turntable in his parlor window in Manhattan.

Johnson repeated the display every year, striving to make it bigger and better each time, much to the delight of New Yorkers. He thus founded the other great American tradition: the competitive light display. The first person to take up Johnson’s challenge was President Grover Cleveland, who in 1894 erected an enormous multi-light Christmas tree in the White House, thereby starting a new presidential tradition.

The initial $300 price tag for an electrified Christmas tree (about $2,000 today) put it beyond the reach of the average consumer. The alternative was clip-on candles, but they were so hazardous that by 1910 most home insurance policies contained a nonpayment clause for house fires caused by candlelit Christmas trees. Although it was possible to rent electric Christmas lights for the season, and the General Electric Company was beginning to produce easy-to-assemble kits, the stark difference between lit and unlit homes threatened to become a powerful symbol of social inequality.

Fortunately, a New Yorker named Emilie D. Lee Herreshoff was on the case. She persuaded the city council to allow an electrified Christmas tree to be put up in Madison Square Park. The inaugural tree-lighting celebration, on December 24, 1912, generated so much public enthusiasm that within two years over 300 cities and towns were holding similar ceremonies.

Not content with just one festive tree, in 1920 the city of Pasadena, Calif., agreed to light up the 150 mature evergreens lining Santa Rosa Avenue, leading to its nickname “Christmas Tree Lane.” This was quite a feat of electrical engineering, given that outdoor Christmas lights didn’t become commercially available until 1927.

To encourage buyers, GE began sponsoring local holiday lighting contests, unleashing a competitive spirit each Yuletide that only seems to have grown stronger with the passing decades. Since 2014, the Guinness World Records title for the most lights on a private residence has been held by the Gay family of LaGrangeville, N.Y., with strong competition from Australia. To which I say, “God bless them, everyone.”

Historically Speaking: ‘Sesame Street’ Wasn’t the First to Make Learning Fun

The show turns 50 this month, but the idea that education can be entertaining goes back to ancient Greece.

The Wall Street Journal, November 21, 2019

“Sesame Street,” which first went on the air 50 years ago this month, is one of the most successful and cost-effective tools ever created for preparing preschool tots for the classroom. Now showing in 70 languages in more than 150 countries around the world, “Sesame Street” is that rare thing in a child’s life: truly educational entertainment.

The Muppets of ‘Sesame Street’ in the 1993-94 season. PHOTO: EVERETT COLLECTION

Historically, those two words have rarely appeared together. In the 4th century B.C., Plato and Aristotle agreed that children can learn through play. In “The Republic,” Plato went so far as to advise, “Do not use compulsion, but let early education be a sort of amusement.” Unfortunately, his advice failed to catch on.

In Europe during the Middle Ages, play and learning were almost diametrically opposed. Monks were in charge of boys’ education, which largely consisted of Latin grammar and religious teaching. (Girls learned domestic skills at home.) The invention of the printing press in 1440 helped spread literacy among young readers, but the works written for them weren’t exactly entertaining. A book like “A token for children: Being an exact account of the conversion, holy and exemplary lives, and joyful deaths of several young children,” by the 17th-century English Puritan James Janeway, surely didn’t follow Plato’s injunction to be amusing as well as instructional.

Social attitudes toward children’s entertainment changed considerably, however, in the wake of the English philosopher John Locke’s 1693 treatise “Some Thoughts Concerning Education.” Locke followed Plato’s line on education, writing, “I always have had a fancy that learning might be made a play and recreation to children.” The publisher John Newbery heeded Locke’s advice; in 1744, he published “A Little Pretty Pocket-Book Intended for the Instruction and Amusement of Little Master Tommy and Pretty Miss Polly,” which was sold along with a ball for boys and a pincushion for girls. In the introduction, Newbery promised parents and guardians that the book would not only make their children “strong, healthy, virtuous, wise” but also “happy.”

When it came to early children’s television in the U.S., however, “play and recreation” usually squeezed out educational content. Many popular shows existed primarily to sell toys and products: “Howdy Doody,” the pioneering puppet show that ran on NBC from 1947 to 1960, was sponsored by RCA to pitch color televisions. Parents became so indignant over the exploitation of their children by the TV industry that, in 1968, grass-roots activists started the nonprofit Action for Children’s Television, which petitioned the Federal Communications Commission to ban advertising on children’s programming.

This cultural mood led to the birth of “Sesame Street.” The show’s co-creators, Joan Ganz Cooney and Lloyd Morrisett, were particularly devoted to using TV to combat educational inequality in minority communities. They spent three years working with teachers, child psychologists and Jim Henson’s Muppets to get the right mix of education and entertainment. The pilot episode, broadcast on public television stations on Nov. 10, 1969, introduced the world to Big Bird, Bert and Ernie, Oscar the Grouch, and their cast of multiethnic friends and neighbors. “You’re gonna love it,” says Gordon, one of the show’s human characters, to Sally, a young newcomer to the neighborhood, in the first show’s opening lines. And we have.

Historically Speaking: The Many Roads to Vegetarianism

Health, religion and animal rights have all been advanced as reasons not to eat meat.

The Wall Street Journal, October 18, 2019

ILLUSTRATION: PETER ARKLE

The claim that today’s ingeniously engineered fake meat tastes like the real thing and helps the planet is winning over consumers from the carnivore side of the food aisle. According to Barclays, the alt-meat market could be worth $140 billion a year within a decade. But the argument over the merits of vegetarianism is nothing new; it’s been going on since ancient times.

Meat played a pivotal role in the evolution of the human brain, providing the necessary calories and protein to enable it to increase in size. Nonetheless, meat-eating remained a luxury in the diets of most early civilizations. It wasn’t much of a personal sacrifice, therefore, when the Greek philosopher Pythagoras (ca. 570-495 B.C.), author of the famous theorem, became what many consider the first vegetarian by choice. Pythagoreans believed that humans could be reincarnated as animals and vice versa, meaning that if you ate meat, Aunt Lydia could end up on your plate.

The anti-meat school of thought was joined a century later by Plato, who argued in “The Republic” that meat consumption encouraged decadence and warlike behavior. These views were strongly countered by Aristotelian philosophy, which taught that animals exist for human use—an opinion that the Romans heartily endorsed.

The avoidance of meat for moral and ascetic reasons also found a home in Buddhism and Hinduism. Ashoka the Great, the 3rd-century B.C. Buddhist emperor of the Maurya Dynasty of India, abolished animal sacrifice and urged his people to abstain from eating flesh.

It wasn’t until the Enlightenment, however, that Western moralists and philosophers began to argue for vegetarianism on the grounds that we have a moral duty to avoid causing animals pain. In 1641 the Massachusetts Bay Colony passed one of the earliest laws against animal cruelty. By the early 19th century, the idea that animals have rights had started to take hold: The English Romantic poet Percy Bysshe Shelley proselytized for vegetarianism, as did the American transcendentalist thinker Henry David Thoreau, who wrote in “Walden”: “I have no doubt that it is part of the destiny of the human race … to leave off eating animals.”

The word “vegetarian” first appeared in print in England in 1842. Within a decade there were vegetarian societies in Britain and America. Echoing the Platonists rather than Pythagoras, these societies were guided by self-denial rather than animal welfare. Sylvester Graham, the leader of the early American vegetarian movement, also urged sexual abstinence on his followers.

Vegetarianism finally escaped its moralistic straitjacket at the end of the 19th century, when the health guru John Harvey Kellogg, the inventor of corn flakes, popularized meat-free living for reasons of bodily well-being at his Battle Creek Sanitarium in Michigan.

There continue to be mixed motivations for vegetarianism today. Burger King’s meatless Impossible Whopper may be “green,” but it has less protein than the original and virtually the same number of calories. A healthier version will no doubt appear before long, and some people hope that when lab-grown meat hits the market in a few years, it will be as animal- and climate-friendly as plant-based food. With a lot of science and a bit of luck, vegetarians and meat-eaters may end up in the same place.

Historically Speaking: The Long Road to Cleanliness

The ancient Babylonians and Egyptians knew about soap, but daily washing didn’t become popular until the 19th century.

As the mother of five teenagers, I have a keen appreciation of soap—especially when it’s actually used. Those little colored bars—or more frequently nowadays, dollops of gel—represent one of the triumphs of civilization.

Adolescents aside, human beings like to be clean, and any product made of fats or oils, alkaline salts and water will help them to stay that way. The Babylonians knew how to make soap as early as 2800 B.C., although it was probably too caustic for washing anything except hair and textiles. The Ebers Papyrus, an ancient Egyptian medical document from 1550 B.C., suggests that the Egyptians used soap only for treating skin ailments.

ILLUSTRATION: THOMAS FUCHS

The Greeks and Romans also avoided washing with harsh soaps, until Julius Caesar’s conquest of Gaul in 58 B.C. introduced them to a softer Celtic formula. Aretaeus of Cappadocia, a 1st century A.D. Greek physician, wrote that “those alkaline substances made into balls” are a “very excellent thing to cleanse the body in the bath.”

Following Rome’s collapse in the 5th century, the centers of soap-making moved to India, Africa and the Middle East. In Europe, soap suffered from being associated with ancient paganism. In the 14th century, Crusaders returning from the Middle East brought back with them a taste for washing with soap and water, but not in sufficient numbers to slow the spread of plague.

Soap began to achieve wider acceptance in Europe during the Renaissance, though geography still played a role: Southern countries had the advantage of making soap out of natural oils and perfumes, while the colder north had to make do with animal fats and whale blubber. Soap’s growing popularity also attracted the attention of revenue-hungry governments. In 1632, in one of the earliest documented cases of crony capitalism, King Charles I of England granted a group of London soapmakers a 14-year monopoly in exchange for annual payments of 4 pounds per ton sold.

Soap remained a luxury item, however, until scientific advances during the age of the Enlightenment made large-scale production possible: In 1790, the French chemist Nicolas Leblanc discovered how to make alkali from common salt. The saying “cleanliness is next to godliness”—credited to John Wesley, the founder of Methodism—was a great piece of free advertising, but it was soap’s role in modern warfare that had a bigger impact on society. During the Crimean War in Europe and the Civil War in the U.S., high death tolls from unsanitary conditions led to new requirements that soldiers use soap every day.

In the late 19th century, soap manufacturers helped to jump-start the advertising industry with their use of catchy poems and famous artworks as marketing tools. British and American soapmakers were ahead of their time in other ways, too: Lever (now Unilever) built housing for its workers, while Procter & Gamble pioneered the practice of profit-sharing.

And it was Procter & Gamble that made soap the basis for one of the most influential cultural institutions of the last century. Having read reports that women would like to be entertained while doing housework, the company decided to sponsor the production of daytime radio domestic dramas. Thus began the first soap opera, “Ma Perkins,” a 15-minute tear-laden serial that ran from 1933 until 1960—and created a new form of storytelling.

Historically Speaking: Fashion Shows: From Royal to Retail

The catwalk has always been a place for dazzling audiences as well as selling clothes.

The 2007 Fendi Fall Collection show at the Great Wall of China. PHOTO: GETTY IMAGES

As devotees know, the fashion calendar is divided between the September fashion shows, which display the designers’ upcoming spring collections, and the February shows, which preview the fall. New York Fashion Week, which wraps up this weekend, is the world’s oldest; it started in 1943, when it was called “press week,” and always goes first, followed by London, Milan, and Paris.

Although fashion week is an American invention, the twice-yearly fashion show can be traced back to the court of Louis XIV of France in the 17th century. The king insisted on a seasonal dress code at court as a way to boost the French textile industry: velvet and satin in the winter, silks in the summer. The French were also responsible for the rise of the dress designer: Charles Frederick Worth opened the first fashion house in Paris in 1858. Worth designed unique dresses for individual clients, but he made his fortune with seasonal dress collections, which he licensed to the new department stores that were springing up in the world’s big cities.

Worth’s other innovation was the use of live models instead of mannequins. By the late 1800s this had evolved into the “fashion parade,” a precursor to today’s catwalk, which took place at invitation-only luncheons and tea parties. In 1903, the Ehrich brothers transported the fashion parade idea to their department store in New York. The big difference was that the dresses on show could be bought and worn the same day. The idea caught on, and all the major department stores began holding fashion shows.

The French couture houses studiously ignored the consumer-friendly approach pioneered by American retailers. After World War II, however, they had to tout for business like anyone else. The first Paris fashion week took place in 1947. But unlike New York’s, which catered only to journalists and wholesale buyers, the Paris shows still emphasized haute couture.

The two different types of fashion show—the selling kind, organized by department stores for the public, and the preview kind, held by designers for fashion insiders—coexisted until the 1960s. Suddenly, haute couture was out and buying off the rack was in. The retail fashion show became obsolete as the design houses turned to ready-to-wear collections and accessories such as handbags and perfume.

Untethered from its couture roots, the designer fashion show morphed into performance art—the more shocking the better. The late designer Alexander McQueen provocatively titled his 1995 Fall show “Highland Rape” and sent out models in bloodied and torn clothes. The laurels for the most insanely extravagant runway show still belong to Karl Lagerfeld, who staged his 2007 Fendi Fall Collection on the Great Wall of China at a cost of $10 million.

But today there’s trouble on the catwalk. Poor attendance has led to New York’s September Fashion Week shrinking to a mere five days. Critics have started to argue that the idea of seasonal collections makes little sense in today’s global economy, while the convenience of e-commerce has made customers unwilling to wait a week for a dress, let alone six months. Designers are putting on expensive fashion shows only to have their work copied and sold to the public at knockdown prices a few weeks later. The Ehrich brothers may have been right after all: don’t just tell, sell.

Historically Speaking: Before Weather Was a Science

Modern instruments made accurate forecasting possible, but humans have tried to predict the weather for thousands of years.

The Wall Street Journal, August 31, 2019

ILLUSTRATION: THOMAS FUCHS

Labor Day weekend places special demands on meteorologists, even when there’s not a hurricane like Dorian on the way. September weather is notoriously variable: In 1974, Labor Day in Iowa was a chilly 43 degrees, while the following year it was a baking 103.

Humanity has always sought ways to predict the weather. The invention of writing during the 4th millennium B.C. was an important turning point for forecasting: It allowed the ancient Egyptians to create the first weather records, using them as a guide to predict the annual flood level of the Nile. Too high meant crop failures; too low meant drought.

Some early cultures, such as the ancient Greeks and the Mayans, based their weather predictions on the movements of the stars. Others relied on atmospheric signs and natural phenomena. One of the oldest religious texts in Indian literature, the Chandogya Upanishad from the 8th century B.C., includes observations on various types of rain clouds. In China, artists during the Han Dynasty (206 B.C.-9 A.D.) painted “cloud charts” on silk for use as weather guides.

These early forecasting attempts weren’t simply products of magical thinking. The ancient adage “red sky at night, shepherd’s delight,” which Jesus mentions in the gospel of Matthew, is backed by hard science: The sky appears red when a high-pressure front moves in from the west, driving the clouds away.

In the 4th century B.C., Aristotle tried to provide rational explanations for weather phenomena in his treatise “Meteorologica.” His use of scientific method laid the foundations for modern meteorology. The problem was that nothing could be built on Aristotle’s ideas until the invention of such tools as the thermometer (an early version was produced by Galileo in 1593) and the barometer (invented by his pupil Torricelli in 1643).

Such instruments couldn’t predict anything on their own, but they made possible accurate daily weather observations. Realizing this, Thomas Jefferson, a pioneer in modern weather forecasting, ordered Meriwether Lewis and William Clark to keep meticulous weather records during their 1804-06 expedition to the American West. He also made his own records wherever he resided, writing in his meteorological diary, “My method is to make two observations a day.”

Most governments, however, remained dismissive of weather forecasting until World War I. Suddenly, knowing which way the wind would blow tomorrow meant the difference between gassing your own side and gassing the enemy’s.

To make accurate predictions, meteorologists needed a mathematical model that could combine different types of data into a single forecast. The first attempt, by the English mathematician Lewis Fry Richardson in 1917, took six weeks to calculate and turned out to be completely wrong.

There were still doubts about the accuracy of weather forecasting when the Allied meteorological team told Supreme Commander Dwight Eisenhower that there was only one window of opportunity for a Normandy landing: June 6, 1944. Despite his misgivings, Eisenhower acted on the information, surprising German meteorologists who had predicted that storms would continue in the English Channel until mid-June.

As we all know, meteorologists still occasionally make the wrong predictions. That’s when the old proverb comes into play: “There is no such thing as bad weather, only inappropriate clothes.”

Historically Speaking: Duels Among the Clouds

Aerial combat was born during World War I, giving the world a new kind of military hero: the fighter pilot.

“Top Gun” is back. The 1986 film about Navy fighter pilots is getting a sequel next year, with Tom Cruise reprising his role as Lt. Pete “Maverick” Mitchell, the sexy flyboy who can’t stay out of trouble. Judging by the trailer released by Paramount in July, the new movie, “Top Gun: Maverick,” will go straight to the heart of current debates about the future of aerial combat. An unseen voice tells Mr. Cruise, “Your kind is headed for extinction.”

The mystique of the fighter pilot began during World War I, when fighter planes first entered military service. The first aerial combat took place on Oct. 5, 1914, when French and German biplanes engaged in an epic contest in the sky, watched by soldiers on both sides of the trenches. At this early stage, neither plane carried fixed armament, but the German pilot had a rifle and the French pilot a machine gun; the latter won the day.

A furious arms race ensued. The Germans turned to the Dutch engineer Anthony Fokker, who devised a way to synchronize a plane’s propeller with its machine gun, creating a flying weapon of deadly accuracy. The Allies soon caught up, ushering in the era of the dogfight.

From the beginning, the fighter pilot seemed to belong to a special category of warrior—the dueling knight rather than the ordinary foot soldier. Flying aces of all nationalities gave each other a comradely respect. In 1916, the British marked the downing of the German fighter pilot Oswald Boelcke by dropping a wreath in his honor on his home airfield in Germany.

But not until World War II could air combat decide the outcome of an entire campaign. During the Battle of Britain in the summer of 1940, the German air force, the Luftwaffe, dispatched up to 1,000 aircraft in a single attack. The Royal Air Force’s successful defense of the skies led to British Prime Minister Winston Churchill’s famous declaration, “Never in the field of human conflict was so much owed by so many to so few.”

The U.S. air campaigns over Germany taught American military planners a different lesson. Rather than focusing on pilot skills, they concentrated on building planes with superior firepower. In the decades after World War II, the invention of air-to-air missiles was supposed to herald the end of the dogfight. But during the Vietnam War, steep American aircraft losses caused by the acrobatic, Soviet-built MiG fighter showed that one-on-one combat still mattered. The U.S. response to this threat was the highly maneuverable twin-engine F-15 and the formation of a new pilot training academy, the Navy Fighter Weapons School, which inspired the original “Top Gun.”

Since that film’s release, however, aerial combat between fighter planes has largely happened on screen, not in the real world. The last dogfight involving a U.S. aircraft took place in 1999, during the NATO air campaign in Kosovo. The F-14 Tomcats flown by Mr. Cruise’s character have been retired, and his aircraft carrier, the USS Enterprise, has been decommissioned.

Today, conventional wisdom again holds that aerial combat is obsolete. The new F-35 Joint Strike Fighter is meant to replace close-up dogfights with long-range weapons. But not everyone seems to have read the memo about the future of air warfare. Increasingly, U.S. and NATO pilots are having to scramble their planes to head off Russian incursions. The knights of the skies can’t retire just yet.

Historically Speaking: A Palace Open to the People

From the Pharaohs to Queen Victoria, royal dwellings have been symbols of how rulers think about power.

Every summer, Queen Elizabeth II opens the state rooms of Buckingham Palace to the public. This year’s opening features an exhibition that I curated, “Queen Victoria’s Palace,” the result of a three-year collaboration with Royal Collection Trust. The exhibition uses paintings, objects and even computer-generated imagery to show how Victoria transformed Buckingham Palace into both a family home and the headquarters of the monarchy. In the process, she modernized not only the building itself but also the relationship between the Royal Family and the British people.

Plenty of rulers before Victoria had built palaces, but it was always with a view to enhancing their power rather than sharing it. Consider Amarna in Egypt, the temple-palace complex created in the 14th century B.C. by Amenhotep IV, better known as Akhenaten. Supported by his beautiful wife Nefertiti, the heretical Akhenaten made himself the head of a new religion that revered the divine light of the sun’s disk, the Aten.

The Great Palace reflected Akhenaten’s megalomania: The complex featured vast open-air courtyards where the public was required to engage in mass worship of the Pharaoh and his family. Akhenaten’s palace was hated as much as his religion, and both were abandoned after his death.

Weiyang, the Endless Palace, built in 200 B.C. in western China by the Emperor Gaozu, was also designed to impart a religious message. Until its destruction in the 9th century by Tibetan invaders, Weiyang extended over two square miles, making it the largest imperial palace in history. Inside, the halls and courtyards were laid out along specific axial and symmetrical lines to ensure that the Emperor existed in harmony with the landscape and, by extension, with his people. Each chamber was ranked by its proximity to the Emperor’s quarters; every person knew his place and obligations from his location in the palace.

Western Europe had nothing comparable to Weiyang until King Louis XIV built the Palace of Versailles in 1682. With its unparalleled opulence—particularly the glittering Hall of Mirrors—and spectacular gardens, Versailles was a cult of personality masquerading as architecture. Louis, the self-styled Sun King at the center of this artificial universe, created a living stage where seeing and being seen was the highest form of social currency.

The offstage reality was grimmer. Except for the royal family’s quarters, Versailles lacked even such basic amenities as plumbing. The cost of upkeep swallowed a quarter of the government’s annual tax receipts. Louis XIV’s fantasy lasted a century before being swept away by the French Revolution in 1789.

Although Victoria would never have described herself as a social revolutionary, the many changes she made to Buckingham Palace were an extraordinary break with the past. From the famous balcony where the Royal Family gathers to share special occasions with the nation, to the spaces for entertaining that can welcome thousands of guests, the revitalized palace created a more inclusive form of royal architecture. It sidelined the old values of wealth, lineage, power and divine right to emphasize new ones based on family, duty, loyalty and patriotism. Victoria’s palace was perhaps less awe-inspiring than its predecessors, but it may prove to be more enduring.

Historically Speaking: Playing Cards for Fun and Money

From 13th-century Egypt to the Wild West, the standard deck of 52 cards has provided entertainment—and temptation.

ILLUSTRATION: THOMAS FUCHS

More than 8,500 people traveled to Las Vegas to play in this year’s World Series of Poker, which ended July 16—a near-record for the contest. I’m not a poker player myself, but I understand the fun and excitement of playing with real cards in an actual game rather than online. There’s something uniquely pleasurable about a pack of cards—the way they look and feel to the touch—that can’t be replicated on the screen.

Although the origins of the playing card are believed to lie in China, the oldest known examples come from the Mamluk Sultanate, an Islamic empire that stretched across Egypt and the eastern Mediterranean from 1260 to 1517. It’s significant that the empire was governed by a warrior caste of former slaves: A playing card can be seen as an assertion of freedom, since time spent playing cards is time spent freely. The Mamluk card deck consisted of 52 cards divided into four suits, whose symbols reflected the daily realities of soldiering—a scimitar, a polo stick, a chalice and a coin.

Returning Crusaders and Venetian traders were probably responsible for bringing cards to Europe. Church and state authorities were not amused: In 1377, Parisians were banned from playing cards on workdays. Like dice, cards were classed as gateway vices that led to greater sins. The authorities may not have been entirely wrong: Some surviving card decks from the 15th century have incredibly bawdy themes.

The suits and symbols used in playing cards became more uniform and less ornate following the advent of the printing press. French printers added a number of innovations, including dividing the four suits into red and black, and giving us the heart, diamond, club and spade symbols. Standardization enabled cards to become a lingua franca across cultures, further enhancing their appeal as a communal leisure activity.

In the 18th century, the humble playing card was the downfall of many a noble family, with vast fortunes being won and lost at the gaming table. Cards also started to feature in paintings and novels as symbols of the vagaries of fortune. The 19th-century short story “The Queen of Spades,” by the Russian writer Alexander Pushkin, beautifully captures the card mania of the period. The anti-hero, Hermann, is destroyed by his obsession with winning at Faro, a game of chance that was as popular in the saloons of the American West as it was in the drawing rooms of Europe. The lawman Wyatt Earp may have won fame in the gunfight at the OK Corral, but he earned his money as a Faro dealer in Tombstone, Ariz.

In Britain, attempts to regulate card-playing through high taxes on the cards themselves were a failure, though they did result in one change: Every ace of spades had to show a government tax stamp, which is why it’s the card that traditionally carries the manufacturer’s mark. The last innovation in the card deck, like the first, had military origins. Many Civil War regiments killed time by playing Euchre, which requires an extra trump card. The Samuel Hart Co. duly obliged with a card that became the forerunner of the Joker, the wild card that doesn’t have a suit.

But we shouldn’t allow the unsavory association of card games with gambling to have the last word. As Charles Dickens wrote in “Nicholas Nickleby”: “Thus two people who cannot afford to play cards for money, sometimes sit down to a quiet game for love.”

Historically Speaking: Beware the Red Tide

Massive algae blooms that devastate ocean life have been recorded since antiquity—and they are getting worse.

Real life offers no tidy solutions: Currently, there is no force, biological or otherwise, capable of stopping the algae blooms that are attacking coastal waters around the world with frightening regularity, turning thousands of square miles into odoriferous graveyards of dead and rotting fish. In the U.S., one of the chief culprits is the Karenia brevis algae, a common marine microorganism that blooms when exposed to sunlight, warm water and phosphorus or nitrates. The result is a toxic sludge known as a red tide, which depletes the oxygen in the water, poisons shellfish and emits a foul vapor strong enough to irritate the lungs.

The red tide isn’t a new phenomenon, though its frequency and severity have certainly gotten worse thanks to pollution and rising water temperatures. There used to be decades between outbreaks, but since 1998 the Gulf Coast has suffered one every year.

The earliest description of a red tide may have come from Tacitus, the first-century Roman historian, in his “Annals”: “the Ocean had appeared blood-red and…the ebbing tide had left behind it what looked to be human corpses.” The Japanese recorded their first red tide catastrophe in 1234: An algae bloom in Osaka Bay invaded the Yodo River, a major waterway between Kyoto and Osaka, which led to mass deaths among humans and fish alike.

The earliest reliable accounts of red tide invasions in the Western Hemisphere come from 16th-century Spanish sailors in the Gulf of Mexico. The colorful explorer Álvar Núñez Cabeza de Vaca (ca. 1490-1560) almost lost his entire expedition to red tide poisoning while sailing in Apalachee Bay on the west coast of Florida in July 1528. Unaware that local Native American tribes avoided fishing in the area at that time of year, he allowed his men to gorge themselves on oysters. “The journey was difficult in the extreme,” he wrote afterward, “because neither the horses were sufficient to carry all the sick, nor did we know what remedy to seek because every day they languished.”

Red tides started appearing everywhere in the late 18th and early 19th centuries. Charles Darwin recorded seeing red-tinged water off the coast of Chile during his 1832 voyage on HMS Beagle. Scientists finally identified K. brevis as the culprit behind the outbreaks in 1946-47, but this was small comfort to Floridians, who were suffering the worst red tide invasion in U.S. history. It started in Naples and spread all the way to Sarasota, hanging around for 18 months, destroying the fishing industry and making life unbearable for residents. A 35-mile-long stretch of sea was so thick with rotting fish carcasses that the government dispatched Navy warships to try to break up the mass. People compared the stench to poison gas.

The red tide invasion of 2017-18 was particularly terrible, lasting some 15 months and covering 145 miles of Floridian coastline. The loss to tourism alone neared $100 million. Things are looking better this summer, fortunately, but we need more than hope or luck to combat this plague; we need a weapon that hasn’t yet been invented.