Historically Speaking: The Quest to Look Young Forever

From drinking gold to injecting dog hormones, people have searched for eternal youth in some unlikely places.

The Wall Street Journal

May 18, 2023

A study explaining why mouse hairs turn gray made global headlines last month, not because the little critters are in desperate need of a makeover but because knowing the “why” in mice could lead to a cure for graying locks in humans. Everyone nowadays seems to be chasing after youth, whether to keep it, find it or just remember it.

The ancient Greeks believed that seeking eternal youth and immortality was hubris, inviting punishment by the gods. Eos, goddess of dawn, asked Zeus to make her human lover Tithonus immortal. He granted her wish, but not quite the way she expected: Tithonus lived on and on as a prisoner of dementia and decrepitude.

The Egyptians believed it was possible for a person to achieve eternal life; the catch was that he had to die first. Also, for a soul to be reborn, every spell, ritual and test outlined in the Book of the Dead had to be executed perfectly, or else death was permanent.

Since asking the gods or dying first seemed like inadvisable ways to defy aging, people in the ancient world often turned to lotions and potions that promised to give at least the appearance of eternal youth. Most anti-aging remedies were reasonably harmless. Roman recipes for banishing wrinkles included a wide array of ingredients, from ass’s milk, swan’s fat and bean paste to frankincense and myrrh.

But ancient elixirs of life often contained substances with allegedly magical properties that were highly toxic. China’s first emperor Qin Shi Huang, who lived in the 3rd century B.C., is believed to have died from mercury poisoning after drinking elixirs meant to make him immortal. Perversely, his failure was subsequently regarded as a challenge. During the Tang Dynasty, from 618 to 907, noxious concoctions created by court alchemists to prolong youth killed as many as six emperors.

ILLUSTRATION: THOMAS FUCHS

Even nonlethal beauty aids could be dangerous. In 16th-century France, Diane de Poitiers, the mistress of King Henri II, was famous for looking the same age as her lover despite being 20 years older. Regular exercise and moderate drinking probably helped, but a study of Diane’s remains published in 2009 found that her hair contained extremely high levels of gold, likely due to daily sips of a youth-potion containing gold chloride, diethyl ether and mercury. The toxic combination would have ravaged her internal organs and made her look ghostly white.

By the 19th century, elixirs, fountains of youth and other magical nonsense had been replaced by quack medicine. In 1889, a French doctor named Charles-Édouard Brown-Séquard started a fashion for animal gland transplants after he claimed spectacular results from injecting himself with a serum containing canine testicle fluid. This so-called rejuvenation treatment, which promised to restore youthful looks and sexual vigor to men, went through various iterations until it fell out of favor in the 1930s.

Advances in plastic surgery following World War I meant that people could skip tedious rejuvenation therapies and instantly achieve younger looks with a scalpel. Not surprisingly, in a country where ex-CNN anchor Don Lemon could call a 51-year-old woman “past her prime,” women accounted for 85% of the facelifts performed in the U.S. in 2019. For men, there’s nothing about looking old that can’t be fixed by a Lamborghini and a 21-year-old girlfriend. For women, the problem isn’t the mice, it’s the men.

Historically Speaking: Even Ancient Children Did Homework

Americans have long debated the value of take-home assignments, but children have been struggling with them for millennia.

The Wall Street Journal

February 24, 2023

If American schoolchildren no longer had to do hours of homework each night, a lot of dogs might miss out on their favorite snack, if an old excuse is to be believed. But would the children be worse off? Americans have been debating whether or not to abolish homework for almost a century and a half. Schoolwork and homework became indistinguishable during Covid, when children were learning from home. But the normal school day has returned, and so has the issue.

The ancient Greek philosophers thought deeply about the purpose of education. In the Republic, Plato argued that girls as well as boys should receive physical, mental and moral training because it was good for the state. But the Greeks were less concerned about where this training should take place, school or home, or what kind of separation should exist between the two. The Roman statesman Cicero wrote that he learned at home as much as he did outside of it.

History of Homework

ILLUSTRATION: THOMAS FUCHS

But, of course, practice makes perfect, as Pliny the Younger told his students of oratory. Elementary schoolchildren in Greco-Roman Egypt were expected to practice their letters on wax writing tablets. A homework tablet from the 2nd century A.D., now in the British Museum, features two lines of Greek written by the teacher and a child’s attempt to copy them underneath.

Tedious copying exercises also plagued the lives of ancient Chinese students. In 1900, an enormous trove of Buddhist scrolls, between 900 and 1,500 years old, was discovered in a cave near Dunhuang in northwestern China. Scattered among the texts were homework copies made by bored monastery pupils who scribbled things like, “This is Futong incurring another person’s anger.”

What many people generally think of as homework today—after-class assignments forced on children regardless of their pedagogical usefulness—has its origins in the Prussian school system. In the 18th and 19th centuries, Prussia led the world in mass education. Fueled by the belief that compulsory schooling was the best means of controlling the peasantry, the authorities devised a rigorous system based on universal standards and applied methods. Daily homework was introduced, in part because it was a way of inserting school oversight, and by extension the state, into the home.

American educationalists such as Horace Mann in Massachusetts sought to create a free school system based on the Prussian model. Dividing children into age groups and other practical reforms faced little opposition. But as early as 1855, the American Medical Monthly was warning of the dangers to children’s health from lengthy homework assignments. In the 1880s, the Boston school board expressed its concern by voting to reduce the amount of arithmetic homework in elementary schools.

As more parents complained of lost family time and homework wars, the Ladies’ Home Journal began to campaign for its abolition, calling after-school work a “national crime” in 1900. The California legislature agreed, abolishing all elementary school homework in 1901. The homework debate seesawed until the Russians launched Sputnik in 1957 and shocked Americans out of complacency. Congress quickly passed a $1 billion education spending program. More, not less, homework became the mantra until the permissive ’70s, only to reverse in response to Japan’s economic ascendancy in the ’80s.

All the old criticisms of homework remain today, but perhaps the bigger threat to such assignments is technological, in the form of the universal homework butler known as ChatGPT.

Historically Speaking: Passports Haven’t Always Been Liberating

France’s Louis XIV first required international travelers to carry an official document. By the 20th century, most other countries did the same for reasons of national security.

The Wall Street Journal

August 12, 2022

As anyone who has recently applied for a passport can attest, U.S. passport agencies are still catching up from the pandemic lockdown. But even with the current delays and frustrations, a passport is, quite literally, our pass to freedom.

The exact concept did not exist in ancient times. An approximation was the official letter of introduction or safe conduct that guaranteed the security of the traveler holding it. The Hebrew Bible recounts that the prophet Nehemiah, cup-bearer to Artaxerxes I of Persia, requested a letter from the king for his mission to Judea. As an indispensable tool of international business and diplomacy, such documents were considered sacrosanct. In medieval England, King Henry V decreed that any attack on a bearer of one would be treated as high treason.

Another variation was the official credential proving the bearer had permission to travel. The Athenian army controlled the movements of officers between bases by using a clay token system. In China, by the time of the Tang dynasty in the early 7th century, trade along the Silk Road had become regulated by the paper-backed guosuo system. Functioning as a pass and identity card, possession of a signed guosuo document was the only means of legitimate travel between towns and cities.

The birth of the modern passport may be credited in part to King Louis XIV of France, who decreed in 1669 that all individuals, whether leaving or entering his country, were required to register their personal details with the appropriate officials and carry a copy of their travel license. Ironically, the passport requirement helped to foil King Louis XVI and Marie Antoinette’s attempt to escape from Paris in 1791.

The rise of middle-class tourism during the 19th century exposed the ideological gulf between the continental and Anglo-Saxon view of passports. Unlike many European states, neither Great Britain nor America required its citizens to carry an identity card or request government permission to travel. Only 785 Britons applied for a passport in 1847, mainly out of the belief that a document personally signed by the foreign secretary might elevate the bearer in the eyes of the locals.

By the end of World War I, however, most governments had come around to the idea that passports were an essential buttress of national security. The need to own one coincided with mass upheaval across Europe: Countries were redrawn, regimes toppled, minorities persecuted, creating millions of stateless refugees.

Into this humanitarian crisis stepped an unlikely savior, the Norwegian diplomat Fridtjof Nansen. In 1922, as the first high commissioner for refugees for the League of Nations, Nansen used his office to create a temporary passport for displaced persons, enabling them to travel, register and work in over 50 countries. Among the hundreds of thousands saved by a “Nansen passport” were the artist Marc Chagall and the writer Vladimir Nabokov. With unfortunate timing, the program lapsed in 1938, the year that Nazi Germany annexed Austria and invaded Czechoslovakia.

For a brief time during the Cold War, Americans experienced the kind of politicization that shaped most other passport systems. In the 1950s, the State Department could and did revoke the passports of suspected communist sympathizers. My father, Carl Foreman, was temporarily stripped of his after he finished making the anti-McCarthyite film classic “High Noon.”

Nowadays, neither race nor creed nor political opinions can come between an American and his passport. But delays of up to 12 weeks are currently unavoidable.

Historically Speaking: The Long Haul of Distance Running

How the marathon became the world’s top endurance race

The Wall Street Journal

September 2, 2021

The New York City Marathon, the world’s largest, will hold its 50th race this autumn, after missing last year’s due to the pandemic. A podiatrist once told me that he always knows when there has been a marathon because of the sudden uptick in patients with stress fractures and missing toenails. Nevertheless, humans are uniquely suited to long-distance running.

Some 2-3 million years ago, our hominid ancestors began to develop sweat glands that enabled their bodies to stay cool while chasing after prey. Other mammals, by contrast, overheat unless they stop and rest. Thus, slow but sweaty humans won out over fleet but panting animals.

The marathon, at 26.2 miles, isn’t the oldest known long-distance race. Egyptian Pharaoh Taharqa liked to organize runs to keep his soldiers fit. A monument inscribed around 685 B.C. records a two-day, 62-mile race from Memphis to Fayum and back. The unnamed winner of the first leg (31 miles) completed it in about four hours.

ILLUSTRATION: THOMAS FUCHS

The considerably shorter marathon derives from the story of a Greek messenger, Pheidippides, who allegedly ran from Marathon to Athens in 490 B.C. to deliver news of victory over the Persians—only to drop dead of exhaustion at the end. But while it is true that the Greeks used long-distance runners, called hemerodromoi, or day runners, to convey messages, this story is probably a myth or a conflation of different events.

Still, foot-bound messengers ran impressive distances in their day. Within 24 hours of Hernán Cortés’s landing in Mexico in 1519, messenger relays had carried news of his arrival over 260 miles to King Montezuma II in Tenochtitlan.

As a competitive sport, the marathon has a shorter history. The longest race at the ancient Olympic Games was about 3 miles. This didn’t stop the French philologist Michel Bréal from persuading the organizers of the inaugural modern Olympics in 1896 to recreate Pheidippides’s epic run as a way of adding a little classical flavor to the Games. The event exceeded his expectations: The Greek team trained so hard that it won 8 of the first 9 places. John Graham, manager of the U.S. Olympic team, was inspired to organize the first Boston Marathon in 1897.

Marathon runners became fitter and faster with each Olympics. But at the 1908 London Games the first runner to reach the stadium, the Italian Dorando Pietri, arrived delirious with exhaustion. He staggered and fell five times before concerned officials eventually helped him over the line. This, unfortunately, disqualified his time of 2:54:46.

Pietri’s collapse added fuel to the arguments of those who thought that a woman’s body could not possibly stand up to a marathon’s demands. Women were banned from the sport until 1964, when Britain’s Isle of Wight Marathon allowed the Scotswoman Dale Greig to run, with an ambulance on standby just in case. Organizers of the Boston Marathon proved more intransigent: Roberta Gibb and Kathrine Switzer tried to force their way into the race in 1966 and ’67, but Boston’s gender bar stayed in place until 1972. The Olympics held out until 1984.

Since that time, marathons have become a great equalizer, with men and women on the same course: For 26.2 miles, the only label that counts is “runner.”

Historically Speaking: The Ordeal of Standardized Testing

From the Incas to the College Board, exams have been a popular way for societies to select an elite.

The Wall Street Journal

March 11, 2021

Last month, the University of Texas at Austin joined the growing list of colleges that have made standardized test scores optional for another year due to the pandemic. Last year, applicants were quick to take up the offer: Only 44% of high-school students who applied to college using the Common Application submitted SAT or ACT scores in 2020-21, compared with 77% the previous year.

Nobody relishes taking exams, yet every culture expects some kind of proof of educational attainment from its young. To enter Plato’s Academy in ancient Athens, a prospective student had to solve mathematical problems. Would-be doctors at one of the many medical schools in Ephesus had to participate in a two-day competition that tested their knowledge as well as their surgical skills.

ILLUSTRATION: THOMAS FUCHS

On the other side of the world, the Incas of Peru were no less demanding. Entry into the nobility required four years of rigorous instruction in the Quechua language, religion and history. At the end of the course students underwent a harsh examination lasting several days that tested their physical and mental endurance.

It was the Chinese who invented the written examination, as a means of improving the quality of imperial civil servants. During the reign of Empress Wu Zetian, China’s only female ruler, in the 7th century, the exam became a national rite of passage for the intelligentsia. Despite its burdensome academic requirements, several hundred thousand candidates took it every year. A geographical quota system was eventually introduced to prevent the richer regions of China from dominating.

Over the centuries, all that cramming for one exam stifled innovation and encouraged conformity. Still, the meritocratic nature of the Chinese imperial exam greatly impressed educational reformers in the West. In 1702, Trinity College, Cambridge, became the first institution to require students to take exams in writing rather than orally. By the end of the 19th century, exams to enter a college or earn a degree had become a fixture in most European countries.

In the U.S., the reformer Horace Mann introduced standardized testing in Boston schools in the 1840s, hoping to raise the level of teaching and ensure that all citizens would have equal access to a good education. The College Board, a nonprofit organization founded by a group of colleges and high schools in 1899, established the first standardized test for university applicants.

Not every institution that adopted standardized testing had noble aims, however. The U.S. Army had experimented with multiple-choice intelligence tests during World War I and found them useless as a predictive tool. But in the early 1920s, the president of Columbia University, Nicholas M. Butler, adopted the Thorndike Tests for Mental Alertness as part of the admissions process, believing it would limit the number of Jewish students.

The College Board adopted the SAT, a multiple-choice aptitude test, in 1926, as a fair and inclusive alternative to written exams, which were thought to be biased against poorer students. In the 1960s, civil rights activists began to argue that standardized tests like the SAT and ACT were biased against minority students, but despite the mounting criticisms, the tests seemed like a permanent part of American education—until now.

Historically Speaking: Overcoming the Labor of Calculation

Inventors tried many times over the centuries to build an effective portable calculator—but no one succeeded until Jerry Merryman.

The Wall Street Journal, April 5, 2019

ILLUSTRATION: THOMAS FUCHS

The world owes a debt of gratitude to Jerry Merryman, who died on Feb. 27 at the age of 86. It was Merryman who, in 1965, worked with two other engineers at Texas Instruments to invent the world’s first pocket calculator. Today we all carry powerful calculators on our smartphones, but Merryman was the first to solve a problem that had plagued mathematicians at least since Gottfried Leibniz, one of the creators of calculus in the 17th century, who observed: “It is unworthy of excellent men to lose hours like slaves in the labor of calculation, which could safely be relegated to anyone else if machines were used.”

In ancient times, the only alternative to mental arithmetic was the abacus, with its simple square frame, wooden rods and movable beads. Most civilizations, including the Romans, Chinese, Indians and Aztecs, had their own version—from counting boards for keeping track of sums to more sophisticated designs that could calculate square roots.

We know of just one counting machine from antiquity that was more complex. The 2nd-century B.C. Antikythera Mechanism, discovered in 1901 in the remains of an ancient Greek shipwreck, was a 30-geared bronze calculator that is believed to have been used to track the movement of the heavens. Despite its ingenious construction, the mechanism had a limited range of capabilities: It could calculate dates and constellations and little else.

In the 15th century, Leonardo da Vinci drew up designs for a mechanical calculating device that consisted of 13 “digit-registering” wheels. In 1967, IBM commissioned the Italian scholar and engineer Roberto Guatelli to build a replica based on the sketches. Guatelli believed that Leonardo had invented the first calculator, but other experts disagreed. In any case, it turned out that the metal wheels would have generated so much friction that the frame would probably have caught fire.

Technological advances in clockmaking helped the French mathematician and inventor Blaise Pascal to build the first working mechanical calculator, called the Pascaline, in 1644. It wasn’t a fire hazard, and it could add and subtract. But the Pascaline was very expensive to make, fragile in the extreme, and far too limited to be really useful. Pascal made only 50 in his lifetime, of which fewer than 10 survive today.

Perhaps the greatest calculator never to see the light of day was designed by Charles Babbage in the early 19th century. He actually designed two machines—the Difference Engine, which could perform arithmetic, and the Analytical Engine, which was theoretically capable of a range of functions from direct multiplication to parallel processing. Ada, Countess of Lovelace, the daughter of the poet Lord Byron, made important contributions to the development of Babbage’s Analytical Engine. According to the historian Walter Isaacson, Lovelace realized that any process based on logical symbols could be used to represent entities other than quantity—the same principle used in modern computing.

Unfortunately, despite many years and thousands of pounds of government funding, Babbage only ever managed to build a small prototype of the Difference Engine. Even for most of the 20th century, his dream of a portable calculator stayed a dream, while the go-to instrument for doing large sums remained the slide rule. It took Merryman’s invention to allow us all to become “excellent men” and leave the labor of calculation to the machines.

Historically Speaking: The Immortal Charm of Daffodils

The humble flower has been a favorite symbol in myth and art since ancient times

The Wall Street Journal, March 22, 2019

ILLUSTRATION: THOMAS FUCHS

On April 15, 1802, the poet William Wordsworth and his sister Dorothy were enjoying a spring walk through the hills and vales of the English Lake District when they came across a field of daffodils. Dorothy was so moved that she recorded the event in her journal, noting how the flowers “tossed and reeled and danced and seemed as if they verily laughed with the wind that blew upon them over the Lake.” And William decided there was nothing for it but to write a poem, which he published in its final version in 1815. “I Wandered Lonely as a Cloud” is one of his most famous reflections on the power of nature: “For oft, when on my couch I lie/In vacant or in pensive mood,/They flash upon that inward eye/Which is the bliss of solitude;/And then my heart with pleasure fills,/And dances with the daffodils.”

Long dismissed as a common field flower, unworthy of serious attention by the artist, poet or gardener, the daffodil enjoyed a revival thanks in part to Wordsworth’s poem. The painters Claude Monet, Berthe Morisot and Vincent van Gogh were among its 19th-century champions. Today, the daffodil is so ubiquitous, in gardens and in art, that it’s easy to overlook.

But the flower deserves respect for being a survivor. Every part of the narcissus, to use its scientific name, is toxic to humans, animals and even other flowers, and yet—as many cultures have noted—it seems immortal. There are still swaths of daffodils on the lakeside meadow where the Wordsworths ambled two centuries ago.

The daffodil originated in the ancient Mediterranean, where it was regarded with deep ambivalence. The ancient Egyptians associated narcissi with the idea of death and resurrection, using them in tomb paintings. The Greeks also gave the flower contrary mythological meanings. Its scientific name comes from the story of Narcissus, a handsome youth who faded away after being cursed into falling in love with his own image. At the last moment, the gods saved him from death by granting him a lifeless immortality as a daffodil. In another Greek myth, the daffodil’s luminous beauty was used by Hades to lure Persephone away from her friends so that he could abduct her into the underworld. During her four-month captivity the only flower she saw was the asphodelus, which grew in abundance on the fields of Elysium—and whose name inspired the English derivative “daffodil.”

But it isn’t only Mediterranean cultures that have fixated on the daffodil’s mysterious alchemy of life and death. A fragrant variety of the narcissus—the sweet-smelling paper white—traveled along the Silk Road to China. There, too, the flower appeared to encapsulate the happy promise of spring, but also other painful emotions such as loss and yearning. The famous Ming Dynasty scroll painting “Narcissi and Plum Blossoms” by Qiu Ying (ca. 1494-1552), for instance, is a study in contrasts, juxtaposing exquisitely rendered flowers with the empty desolation of winter.

The English botanist John Parkinson introduced the traditional yellow variety from Spain in 1618. Aided by a soggy but temperate climate, daffodils quickly spread across lawns and fields, causing their foreign origins to be forgotten. By the 19th century they had become quintessentially British—so much so that missionaries and traders, nostalgic for home, planted bucketfuls of bulbs wherever they went. Their legacy in North America is a burst of color each year just when the browns and grays of winter have worn out their welcome.

Historically Speaking: Insuring Against Disasters

Insurance policies go back to the ancient Babylonians and were crucial in the early development of capitalism

The Wall Street Journal, February 21, 2019

ILLUSTRATION: THOMAS FUCHS

Living in a world without insurance, free from all those claim forms and high deductibles, might sound like a little bit of paradise. But the only thing worse than dealing with the insurance industry is trying to conduct business without it. In fact, the basic principle of insurance—pooling risk in order to minimize liability from unforeseen dangers—is one of the things that made modern capitalism possible.

The first merchants to tackle the problem of risk management in a systematic way were the Babylonians. The 18th-century B.C. Code of Hammurabi shows that they used a primitive form of insurance known as “bottomry.” According to the Code, merchants who took high-interest loans tied to shipments of goods could have the loans forgiven if the ship was lost. The practice benefited both traders and their creditors, who charged a premium of up to 30% on such loans.

The Athenians, realizing that bottomry was a far better hedge against disaster than relying on the Oracle of Delphi, subsequently developed the idea into a maritime insurance system. They had professional loan syndicates, official inspections of ships and cargoes, and legal sanctions against code violators.

With the first insurance schemes, however, came the first insurance fraud. One of the oldest known cases comes from Athens in the 3rd century B.C. Two men named Hegestratos and Xenothemis obtained bottomry insurance for a shipment of corn from Syracuse to Athens. Halfway through the journey they attempted to sink the ship, only to have their plan foiled by an alert passenger. Hegestratos jumped (or was thrown) from the ship and drowned. Xenothemis was taken to Athens to meet his punishment.

In Christian Europe, insurance was widely frowned upon as a form of gambling—betting against God. Even after Pope Gregory IX decreed in the 13th century that the premiums charged on bottomry loans were not usury, because of the risk involved, the industry rarely expanded. Innovations came mainly in response to catastrophes: The Great Fire of London in 1666 led to the growth of fire insurance, while the Lisbon earthquake of 1755 did the same for life insurance.

It took the Enlightenment to bring widespread changes in the way Europeans thought about insurance. Probability became subject to numbers and statistics rather than hope and prayer. In addition to his contributions to mathematics, astronomy and physics, Edmond Halley (1656-1742), of Halley’s comet fame, developed the foundations of actuarial science—the mathematical measurement of risk. This helped to create a level playing field for sellers and buyers of insurance. By the end of the 18th century, those who abjured insurance were regarded as stupid rather than pious. Adam Smith declared that to do business without it “was a presumptuous contempt of the risk.”

But insurance only works if it can be trusted in a crisis. For the modern American insurance industry, the deadly San Francisco earthquake of 1906 was a day of reckoning. The devastation resulted in insured losses of $235 million—equivalent to $6.3 billion today. Many American insurers balked, but in Britain, Lloyd’s of London announced that every one of its customers would have their claims paid in full within 30 days. This prompt action saved lives and ensured that business would be able to go on.

And that’s why we pay our premiums: You can’t predict tomorrow, but you can plan for it.

Historically Speaking: Unenforceable Laws Against Pleasure

The 100th anniversary of Prohibition is a reminder of how hard it is to regulate consumption and display

The Wall Street Journal, January 24, 2019

ILLUSTRATION: THOMAS FUCHS

This month we mark the centennial of the ratification of the Constitution’s 18th Amendment, better known as Prohibition. But the temperance movement was active for over a half-century before winning its great prize. As the novelist Anthony Trollope discovered to his regret while touring North America in 1861-2, Maine had been dry for a decade. The convivial Englishman condemned the ban: “This law, like all sumptuary laws, must fail,” he wrote.

Sumptuary laws had largely fallen into disuse by the 19th century, but they were once a near-universal tool, used in the East and West alike to control economies and preserve social hierarchies. A sumptuary law is a rule that regulates consumption in its broadest sense, from what a person may eat and drink to what they may own, wear or display. The oldest known example, the Locrian Law Code devised by the 7th-century B.C. Greek lawgiver Zaleucus, banned all citizens of Locri (except prostitutes) from ostentatious displays of gold jewelry.

Sumptuary laws were often political weapons disguised as moral pieties, aimed at less powerful groups, particularly women. In 215 B.C., at the height of the Second Punic War, the Roman Senate passed the Lex Oppia, which (among other restrictions) banned women from owning more than a half ounce of gold. Ostensibly a wartime austerity measure, the law appeared so ridiculous 20 years later as to be unenforceable. But during debate on its repeal in 195 B.C., Cato the Elder, its strongest defender, inadvertently revealed the Lex Oppia’s true purpose: “What [these women] want is complete freedom…. Once they have achieved equality, they will be your masters.”

Cato’s message about preserving social hierarchy echoed down the centuries. As trade and economic stability returned to Europe during the High Middle Ages (1000-1300), so did the use of sumptuary laws to keep the new merchant elites in their place. By the 16th century, sumptuary laws in Europe had extended from clothing to almost every aspect of daily life. The more they were circumvented, the more specific such laws became. An edict issued by King Henry VIII of England in 1517, for example, dictated the maximum number of dishes allowed at a meal: nine for a cardinal, seven for the aristocracy and three for the gentry.

The rise of modern capitalism ultimately made sumptuary laws obsolete. Trade turned once-scarce luxuries into mass commodities that simply couldn’t be controlled. Adam Smith’s “The Wealth of Nations” (1776) confirmed what had been obvious for over a century: Consumption and liberty go hand in hand. “It is the highest impertinence,” he wrote, “to pretend to watch over the economy of private people…either by sumptuary laws, or by prohibiting the importation of foreign luxuries.”

Smith’s pragmatic view was echoed by President William Howard Taft. He opposed Prohibition on the grounds that it was coercive rather than consensual, arguing that “experience has shown that a law of this kind, sumptuary in its character, can only be properly enforced in districts in which a majority of the people favor the law.” Mass immigration in early 20th-century America had changed many cities into ethnic melting-pots. Taft recognized Prohibition as an attempt by nativists to impose cultural uniformity on immigrant communities whose attitudes toward alcohol were more permissive. But his warning was ignored, and the disastrous course of Prohibition was set.

Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2019

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient empire in modern-day Iraq where the world’s first cities emerged in the 4th millennium B.C., up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 B.C. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th-century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.