Historically Speaking: Unenforceable Laws Against Pleasure

The 100th anniversary of Prohibition is a reminder of how hard it is to regulate consumption and display

The Wall Street Journal, January 24, 2019

ILLUSTRATION: THOMAS FUCHS

This month we mark the centennial of the ratification of the Constitution’s 18th Amendment, better known as Prohibition. But the temperance movement was active for over a half-century before winning its great prize. As the novelist Anthony Trollope discovered to his regret while touring North America in 1861-2, Maine had been dry for a decade. The convivial Englishman condemned the ban: “This law, like all sumptuary laws, must fail,” he wrote.

Sumptuary laws had largely fallen into disuse by the 19th century, but they were once a near-universal tool, used in the East and West alike to control economies and preserve social hierarchies. A sumptuary law is a rule that regulates consumption in its broadest sense, from what a person may eat and drink to what they may own, wear or display. The oldest known example, the Locrian Law Code devised by the seventh-century B.C. Greek lawgiver Zaleucus, banned all citizens of Locri (except prostitutes) from ostentatious displays of gold jewelry.

Sumptuary laws were often political weapons disguised as moral pieties, aimed at less powerful groups, particularly women. In 215 B.C., at the height of the Second Punic War, the Roman Senate passed the Lex Oppia, which (among other restrictions) banned women from owning more than a half ounce of gold. Ostensibly a wartime austerity measure, the law appeared so ridiculous 20 years later as to be unenforceable. But during debate on its repeal in 195 B.C., Cato the Elder, its strongest defender, inadvertently revealed the Lex Oppia’s true purpose: “What [these women] want is complete freedom…. Once they have achieved equality, they will be your masters.”

Cato’s message about preserving social hierarchy echoed down the centuries. As trade and economic stability returned to Europe during the High Middle Ages (1000-1300), so did the use of sumptuary laws to keep the new merchant elites in their place. By the 16th century, sumptuary laws in Europe had extended from clothing to almost every aspect of daily life. The more they were circumvented, the more specific such laws became. An edict issued by King Henry VIII of England in 1517, for example, dictated the maximum number of dishes allowed at a meal: nine for a cardinal, seven for the aristocracy and three for the gentry.

The rise of modern capitalism ultimately made sumptuary laws obsolete. Trade turned once-scarce luxuries into mass commodities that simply couldn’t be controlled. Adam Smith’s “The Wealth of Nations” (1776) confirmed what had been obvious for over a century: Consumption and liberty go hand in hand. “It is the highest impertinence,” he wrote, “to pretend to watch over the economy of private people…either by sumptuary laws, or by prohibiting the importation of foreign luxuries.”

Smith’s pragmatic view was echoed by President William Howard Taft. He opposed Prohibition on the grounds that it was coercive rather than consensual, arguing that “experience has shown that a law of this kind, sumptuary in its character, can only be properly enforced in districts in which a majority of the people favor the law.” Mass immigration in early 20th-century America had changed many cities into ethnic melting-pots. Taft recognized Prohibition as an attempt by nativists to impose cultural uniformity on immigrant communities whose attitudes toward alcohol were more permissive. But his warning was ignored, and the disastrous course of Prohibition was set.

Historically Speaking: The High Cost of Financial Panics

Roman emperors and American presidents alike have struggled to deal with sudden economic crashes

The Wall Street Journal, January 17, 2019

ILLUSTRATION: THOMAS FUCHS

On January 12, 1819, Thomas Jefferson wrote to his friend Nathaniel Macon, “I have…entire confidence in the late and present Presidents…I slumber without fear.” He did concede, though, that market fluctuations can trip up even the best governments. Jefferson was prescient: A few days later, the country plunged into a full-blown financial panic. The trigger was a collapse in the overseas cotton market, but the crisis had been building for months. The factors that led to the crash included the actions of the Second Bank of the United States, which had helped to fuel a real estate boom in the West only to reverse course suddenly and call in its loans.

The recession that followed the panic of 1819 was prolonged and severe: Banks closed, lending all but ceased and businesses failed by the thousands. By the time it was over in 1823, almost a third of the population—including Jefferson himself—had suffered irreversible losses.

As we mark the 200th anniversary of the 1819 panic, it is worth pondering the role of governments in a financial crisis. During a panic in Rome in the year 33, the emperor Tiberius’s prompt action prevented a total collapse of the city’s finances. Rome was caught among falling property prices, a real estate bubble and a sudden credit crunch. Instead of waiting it out, Tiberius ordered interest rates to be lowered and released 100 million sestertii (large brass coins) into the banking system to avoid a mass default.

But not all government interventions have been as successful or timely. In 1124, King Henry I of England attempted to restore confidence in the country’s money by having the moneyers publicly castrated and their right hands amputated for producing substandard coins. A temporary fix at best, his bloody act neither deterred people from debasing the coinage nor allayed fears over England’s creditworthiness.

On the other side of the globe, China began using paper money in 1023. Successive emperors of the Ming Dynasty (1368-1644) failed, however, to limit the number of notes in circulation or to back the money with gold or silver specie. By the mid-15th century the economy was in the grip of hyperinflationary cycles. The emperor Yingzong simply gave up on the problem: China returned to coinage just as Europe was discovering the uses of paper.

The rise of commercial paper along with paper currencies allowed European countries to develop more sophisticated banking systems. But they also led to panics, inflation and dangerous speculation—sometimes all at once, as in France in 1720, when John Law’s disastrous Mississippi Company share scheme ended in mass bankruptcies for its investors and the collapse of the French livre.

As it turns out, it is easier to predict the consequences of a crisis than it is to prevent one from happening. In 2015, the U.K.’s Centre for Economic Policy Research published a paper on the effects of 100 financial crises in 20 Western countries over the past 150 years, down to the recession of 2007-09. They found two consistent outcomes. The first is that politics becomes more extreme and polarized following a crisis; the second is that countries become more ungovernable as violence, protests and populist revolts overshadow the rule of law.

With the U.S. stock market having suffered its worst December since the Great Depression of the 1930s, it is worth remembering that the only thing more frightening than a financial crisis can be its aftermath.

Historically Speaking: New Year, Old Regrets

From the ancient Babylonians to Victorian England, the year’s end has been a time for self-reproach and general misery

The Wall Street Journal, January 3, 2019

ILLUSTRATION: ALAN WITSCHONKE

I don’t look forward to New Year’s Eve. When the bells start to ring, it isn’t “Auld Lang Syne” I hear but echoes from the Anglican “Book of Common Prayer”: “We have left undone those things which we ought to have done; And we have done those things which we ought not to have done.”

At least I’m not alone in my annual dip into the waters of woe. Experiencing the sharp sting of regret around the New Year has a long pedigree. The ancient Babylonians required their kings to offer a ritual apology during the Akitu festival of New Year: The king would go down on his knees before an image of the god Marduk, beg his forgiveness, insist that he hadn’t sinned against the god himself and promise to do better next year. The rite ended with the high priest giving the royal cheek the hardest possible slap.

There are sufficient similarities between the Akitu festival and Yom Kippur, Judaism’s Day of Atonement—which takes place 10 days after the Jewish New Year—to suggest that there was likely a historical link between them. Yom Kippur, however, is about accepting responsibility, with the emphasis on owning up to sins committed rather than pointing out those omitted.

In Europe, the 14th-century Middle English poem “Sir Gawain and the Green Knight” begins its strange tale on New Year’s Day. A green-skinned knight arrives at King Arthur’s Camelot and challenges the knights to strike at him, on the condition that he can return the blow in a year and a day. Sir Gawain reluctantly accepts the challenge, and embarks on a year filled with adventures. Although he ultimately survives his encounter with the Green Knight, Gawain ends up haunted by his moral lapses over the previous 12 months. For, he laments (in J.R.R. Tolkien’s elegant translation), “a man may cover his blemish, but unbind it he cannot.”

New Year’s Eve in Shakespeare’s era was regarded as a day for gift-giving rather than as a catalyst for regret. But Sonnet 30 shows that Shakespeare was no stranger to the melancholy that looking back can inspire: “I summon up remembrance of things past, / I sigh the lack of many a thing I sought, / And with old woes new wail my dear time’s waste.”

For a full dose of New Year’s misery, however, nothing beats the Victorians. “I wait its close, I court its gloom,” declared the poet Walter Savage Landor in “Mild Is the Parting Year.” Not to be outdone, William Wordsworth offered his “Lament of Mary Queen of Scots on the Eve of a New Year”: “Pondering that Time tonight will pass / The threshold of another year; /…My very moments are too full / Of hopelessness and fear.”

Fortunately, there is always Charles Dickens. In 1844, Dickens followed up the wildly successful “A Christmas Carol” with a slightly darker but still uplifting seasonal tale, “The Chimes.” Trotty Veck, an elderly messenger, takes stock of his life on New Year’s Eve and decides that he has been nothing but a burden on society. He resolves to kill himself, but the spirits of the church bells intervene, showing him a vision of what would happen to the people he loves.

Today, most Americans recognize this story as the basis of the bittersweet 1946 Frank Capra film “It’s a Wonderful Life.” As an antidote to New Year’s blues, George Bailey’s lesson holds true for everyone: “No man is a failure who has friends.”

Historically Speaking: Trees of Life and Wonder

From Saturnalia to Christmas Eve, people have always had a spiritual need for greenery in the depths of winter

The Wall Street Journal, December 13, 2018

Queen Victoria and family with their Christmas tree in 1848. PHOTO: GETTY IMAGES

My family never had a pink-frosted Christmas tree, though Lord knows my 10-year-old self really wanted one. Every year my family went to Sal’s Christmas Emporium on Wilshire Boulevard in Los Angeles, where you could buy neon-colored trees, mechanical trees that played Christmas carols, blue and white Hanukkah bushes or even a real Douglas fir if you wanted to go retro. We were solidly retro.

Decorating the Christmas tree remains one of my most treasured memories, and according to the National Christmas Tree Association, the tradition is still thriving in our digital age: In 2017 Americans bought 48.5 million real and artificial Christmas trees. Clearly, bringing a tree into the house, especially during winter, taps into something deeply spiritual in the human psyche.

Nearly every society has at some point venerated the tree as a symbol of fertility and rebirth, or as a living link between the heavens, the earth and the underworld. In the ancient Near East, “tree of life” motifs appear on pottery as early as 7000 B.C. By the second millennium B.C., variations of the motif were being carved onto temple walls in Egypt and fashioned into bronze sculptures in southern China.

The early Christian fathers were troubled by the possibility that the faithful might identify the Garden of Eden’s trees of life and knowledge, described in the Book of Genesis, with paganism’s divine trees and sacred groves. Accordingly, in 572 the Council of Braga banned Christians from participating in the Roman celebration of Saturnalia—a popular winter solstice festival in honor of Saturn, the god of agriculture, that included decking the home with boughs of holly, his sacred symbol.

It wasn’t until the late Middle Ages that evergreens received a qualified welcome from the Church, as props in the mystery plays that told the story of Creation. In Germany, mystery plays were performed on Christmas Eve, traditionally celebrated in the church calendar as the feast day of Adam and Eve. The original baubles that hung on these “paradise trees,” representing the trees in the Garden of Eden, were round wafer breads that symbolized the Eucharist.

The Christmas tree remained a northern European tradition until Queen Charlotte, the German-born wife of George III, had one erected for a children’s party at Windsor Castle in 1800. The British upper classes quickly followed suit, but the rest of the country remained aloof until 1848, when the Illustrated London News published a charming picture of Queen Victoria and her family gathered around a large Christmas tree. Suddenly, every household had to have one for the children to decorate. It didn’t take long for President Franklin Pierce to introduce the first Christmas tree to the White House, in 1853—a practice that every president has honored except Theodore Roosevelt, who in 1902 refused to have a tree on conservationist grounds. (His children objected so much to the ban that he eventually gave in.)

Many writers have tried to capture the complex feelings that Christmas trees inspire, particularly in children. Few, though, can rival T.S. Eliot’s timeless meditation on joy, death and life everlasting, in his 1954 poem “The Cultivation of Christmas Trees”: “The child wonders at the Christmas Tree: / Let him continue in the spirit of wonder / At the Feast as an event not accepted as a pretext; / So that the glittering rapture, the amazement / Of the first-remembered Christmas Tree /…May not be forgotten.”

Historically Speaking: The Tradition of Telling All

From ancient Greece to modern Washington, political memoirs have been an irresistible source of gossip about great leaders

The Wall Street Journal, November 30, 2018

ILLUSTRATION: THOMAS FUCHS

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”

Historically Speaking: How Potatoes Conquered the World

It took centuries for the spud to travel from the New World to the Old and back again

The Wall Street Journal, November 15, 2018

At the first Thanksgiving dinner, eaten by the Wampanoag Indians and the Pilgrims in 1621, the menu was rather different from what’s served today. For one thing, the pumpkin was roasted, not made into a pie. And there definitely wasn’t a side dish of mashed potatoes.

In fact, the first hundred Thanksgivings were spud-free, since potatoes weren’t grown in North America until 1719, when Scotch-Irish settlers began planting them in New Hampshire. Mashed potatoes were an even later invention. The first recorded recipe for the dish appeared in 1747, in Hannah Glasse’s splendidly titled “The Art of Cookery Made Plain and Easy, Which Far Exceeds Any Thing of the Kind yet Published.”

By then, the potato had been known in Europe for a full two centuries. It was first introduced to Europe by the Spanish conquerors of Peru, where the Incas had revered the potato and even invented a natural way of freeze-drying it for storage. Yet despite its nutritional value and ease of cultivation, the potato didn’t catch on. It wasn’t merely foreign and ugly-looking; to wheat-growing farmers it seemed unnatural—possibly even un-Christian, since there is no mention of the potato in the Bible. Outside of Spain, it was generally grown for animal feed.

The change in the potato’s fortunes was largely due to the efforts of a Frenchman named Antoine-Augustin Parmentier (1737-1813). During the Seven Years’ War, he was taken prisoner by the Prussians and forced to live on a diet of potatoes. To his surprise, he stayed relatively healthy. Convinced he had found a solution to famine, Parmentier dedicated his life after the war to popularizing the potato’s nutritional benefits. He even persuaded Marie-Antoinette to wear potato flowers in her hair.

Among the converts to his message were the economist Adam Smith, who realized the potato’s economic potential as a staple food for workers, and Thomas Jefferson, then the U.S. Ambassador to France, who was keen for his new nation to eat well in all senses of the word. Jefferson is credited with introducing Americans to french fries at a White House dinner in 1802.

As Smith predicted, the potato became the fuel for the Industrial Revolution. A study published in 2011 by Nathan Nunn and Nancy Qian in the Quarterly Journal of Economics estimates that up to a quarter of the world’s population growth from 1700 to 1900 can be attributed solely to the introduction of the potato. As Louisa May Alcott observed in “Little Men,” in 1871, “Money is the root of all evil, and yet it is such a useful root that we cannot get on without it any more than we can without potatoes.”

In 1887, two Americans, Jacob Fitzgerald and William H. Silver, patented the first potato ricer, which forced a cooked potato through a cast-iron sieve, ending the scourge of lumpy mash. Still, the holy grail of “quick and easy” mashed potatoes remained elusive until the late 1950s. Using the flakes produced by the potato ricer and a new freeze-drying method, U.S. government scientists perfected instant mashed potatoes, which required only the addition of hot water or milk to the mix. The days of peeling, boiling and mashing were now optional, and for millions of cooks, Thanksgiving became a little easier. And that’s something to be thankful for.

Historically Speaking: Overrun by Alien Species

From Japanese knotweed to cane toads, humans have introduced invasive species to new environments with disastrous results

The Wall Street Journal, November 1, 2018

Ever since Neolithic people wandered the earth, inadvertently bringing the mouse along for the ride, humans have been responsible for introducing animal and plant species into new environments. But problems can arise when a non-native species encounters no barriers to population growth, allowing it to rampage unchecked through the new habitat, overwhelming the ecosystem. On more than one occasion, humans have transplanted a species for what seemed like good reasons, only to find out too late that the consequences were disastrous.

One of the most famous examples is celebrating its 150th anniversary this year: the introduction of Japanese knotweed to the U.S. A highly aggressive plant, it can grow 15 feet high and has roots that spread up to 45 feet. Knotweed had already been a hit in Europe because of its pretty little white flowers, and, yes, its miraculous indestructibility.

First mentioned in botanical articles in 1868, knotweed was brought to New York by the Hogg brothers, James and Thomas, eminent American horticulturalists and among the earliest collectors of Japanese plants. Thanks to their extensive contacts, knotweed found a home in arboretums, botanical gardens and even Central Park. Not content with importing one of the world’s most invasive shrubs, the Hoggs also introduced Americans to the wonders of kudzu, a dense vine that can grow a foot a day.

Impressed by the vigor of kudzu, agriculturalists recommended using these plants to provide animal fodder and prevent soil erosion. In the 1930s, the government was even paying Southern farmers $8 per acre to plant kudzu. Today it is known as the “vine that ate the South,” because of the way it covers huge tracts of land in a green blanket of death. And Japanese knotweed is still spreading, colonizing entire habitats from Mississippi to Alaska, where only the Arctic tundra holds it back from world domination.

Knotweed has also reached Australia, a country that has been ground zero for the worst excesses of invasive species. In the 19th century, the British imported non-native animals such as rabbits, cats, goats, donkeys, pigs, foxes and camels, causing mass extinctions of Australia’s native mammal species. Australians are still paying the price; there are more rabbits in the country today than wombats, more camels than kangaroos.

Yet the lesson wasn’t learned. In the 1930s, scientists in both Australia and the U.S. decided to import the South American cane toad as a form of biowarfare against beetles that eat sugar cane. The experiment failed, and it turned out that the cane toad was poisonous to any predator that ate it. There’s also the matter of the 30,000 eggs it can lay at a time. Today, the cane toad can be found all over northern Australia and south Florida.

So is there anything we can do once an invasive species has taken up residence? The answer is yes, but it requires more than just fences, traps and pesticides; it means changing human incentives. Today, for instance, the voracious Indo-Pacific lionfish is gobbling up local fish in the west Atlantic, while the Asian carp threatens the ecosystem of the Great Lakes. There is only one solution: We must eat them, dear reader. These invasive fish can be grilled, fried or consumed as sashimi, and they taste delicious. Likewise, kudzu makes great salsa, and Japanese knotweed can be treated like rhubarb. Eat for America and save the environment.

Historically Speaking: The Dark Lore of Black Cats

Ever since they were worshiped in ancient Egypt, cats have occupied an uncanny place in the world’s imagination

The Wall Street Journal, October 22, 2018

ILLUSTRATION: THOMAS FUCHS

As Halloween approaches, decorations featuring scary black cats are starting to make their seasonal appearance. But what did the black cat ever do to deserve its reputation as a symbol of evil? Why is it considered bad luck to have a black cat cross your path?

It wasn’t always this way. In fact, the first human-cat interactions were benign and based on mutual convenience. The invention of agriculture in the Neolithic era led to surpluses of grain, which attracted rodents, which in turn motivated wild cats to hang around humans in the hope of catching dinner. Domestication soon followed: The world’s oldest pet cat was found in a 9,500-year-old grave in Cyprus, buried alongside its human owner.

According to the Roman writer Polyaenus, who lived in the second century A.D., the Egyptian veneration of cats led to disaster at the Battle of Pelusium in 525 B.C. The invading Persian army carried cats on the front lines, rightly calculating that the Egyptians would rather accept defeat than kill a cat.

The Egyptians were unique in their extreme veneration of cats, but they weren’t alone in regarding them as having a special connection to the spirit world. In Greek mythology the cat was a familiar of Hecate, goddess of magic, sorcery and witchcraft. Hecate’s pet had once been a serving maid named Galanthis, who was turned into a cat as punishment by the goddess Hera for being rude.

When Christianity became the official religion of Rome in 380, the association of cats with paganism and witchcraft made them suspect. Moreover, the cat’s independence suggested a willful rebellion against the teaching of the Bible, which said that Adam had dominion over all the animals. The cat’s reputation worsened during the medieval era, as the Catholic Church battled against heresies and dissent. Fed lurid tales by his inquisitors, in 1233 Pope Gregory IX issued a papal bull, “Vox in Rama,” which accused heretics of using black cats in their nighttime sex orgies with Lucifer—who was described as half-cat in appearance.

In Europe, countless numbers of cats were killed in the belief that they could be witches in disguise. In 1484, Pope Innocent VIII fanned the flames of anti-cat prejudice with his papal bull on witchcraft, “Summis Desiderantes Affectibus,” which stated that the cat was “the devil’s favorite animal and idol of all witches.”

The Age of Reason ought to have rescued the black cat from its pariah status, but superstitions die hard. (How many modern apartment buildings lack a 13th floor?) Cats had plenty of ardent fans among 19th-century writers, including Charles Dickens and Mark Twain, who wrote, “I simply can’t resist a cat, particularly a purring one.” But Edgar Allan Poe, the master of the gothic tale, felt otherwise: In his 1843 story “The Black Cat,” the spirit of a dead cat drives its killer to madness and destruction.

So pity the poor black cat, which through no fault of its own has gone from being an instrument of the devil to the convenient tool of the horror writer—and a favorite Halloween cliché.

Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2018

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient civilization in modern-day Iraq where the world’s first cities emerged in the 4th millennium B.C., up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 B.C. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

Historically Speaking: At Age 50, a Time of Second Acts

Amanda Foreman finds comfort in countless examples of the power of reinvention after five decades.

ILLUSTRATION BY TONY RODRIGUEZ

I turned 50 this week, and like many people I experienced a full-blown midlife crisis in the lead-up to the Big Day. The famous F. Scott Fitzgerald quotation, “There are no second acts in American lives,” dominated my thoughts. I wondered: Now that my first act was over, would my life no longer be about opportunities and instead consist largely of consequences?

Fitzgerald, who left the line among his notes for “The Last Tycoon,” had ample reason for pessimism. He had hoped the novel would lead to his own second act after failing to make it in Hollywood, but he died at 44, broken and disappointed, leaving the book unfinished. Yet the truth about his grim line is more complicated. Several years earlier, Fitzgerald had used it to make an almost opposite point, in the essay “My Lost City”: “I once thought that there were no second acts in American lives, but there was certainly to be a second act to New York’s boom days.”

The one comfort we should take from countless examples in history is the power of reinvention. The Victorian poet William Ernest Henley was right when he wrote, “I am the master of my fate/ I am the captain of my soul.”

The point is to seize the moment. The disabled Roman Emperor Claudius (10 B.C.-A.D. 54) spent most of his life being victimized by his awful family. Claudius was 50 when his nephew, Caligula, met his end at the hands of some of his own household security, the Praetorian Guards. The historian Suetonius writes that a soldier discovered Claudius, who had tried to hide, trembling in the palace. The guards decided to make Claudius their puppet emperor. It was a grave miscalculation. Claudius grabbed his chance, shed his bumbling persona and became a forceful and innovative ruler of Rome.

In Russia many centuries later, the general Mikhail Kutuzov was in his 60s when his moment came. In 1805, Czar Alexander I had unfairly blamed Kutuzov for the army’s defeat at the Battle of Austerlitz and relegated him to desk duties. Russian society cruelly treated the general, who looked far from heroic—a character in Tolstoy’s “War and Peace” notes the corpulent Kutuzov’s war scars, especially his “bleached eyeball.” But when the country needed a savior in 1812, Kutuzov, the “has-been,” drove Napoleon and his Grande Armée out of Russia.

Winston Churchill had a similar apotheosis in World War II when he was in his 60s. Until then, his political career had been a catalog of failures, the most famous being the Gallipoli Campaign of 1915-16, which left Britain and its allies with more than 100,000 casualties.

As for writers and artists, they often find middle age extremely liberating. They cease being afraid to take risks in life. Another Fitzgerald—the Man Booker Prize-winning novelist Penelope—lived on the brink of homelessness, struggling as a tutor and teacher (she later recalled “the stuffy and inky boredom of the classroom”) until she published her first book at 58.

Anna Mary Robertson Moses, better known as Grandma Moses, may be the greatest example of self-reinvention. After many decades of farm life, around age 75 she began a new career, becoming one of America’s best known folk painters.

Perhaps I’ll be inspired to master Greek when I am 80, as some say the Roman statesman Cato the Elder did. But what I’ve learned, while coming to terms with turning 50, is that time spent worrying about “what you might have been” is better passed with friends and family—celebrating the here and now.