Historically Speaking: New Year, Old Regrets

From the ancient Babylonians to Victorian England, the year’s end has been a time for self-reproach and general misery

The Wall Street Journal, January 3, 2019

ILLUSTRATION: ALAN WITSCHONKE

I don’t look forward to New Year’s Eve. When the bells start to ring, it isn’t “Auld Lang Syne” I hear but echoes from the Anglican “Book of Common Prayer”: “We have left undone those things which we ought to have done; And we have done those things which we ought not to have done.”

At least I’m not alone in my annual dip into the waters of woe. Experiencing the sharp sting of regret around the New Year has a long pedigree. The ancient Babylonians required their kings to offer a ritual apology during the Akitu festival of New Year: The king would go down on his knees before an image of the god Marduk, beg his forgiveness, insist that he hadn’t sinned against the god himself and promise to do better next year. The rite ended with the high priest giving the royal cheek the hardest possible slap.

There are sufficient similarities between the Akitu festival and Yom Kippur, Judaism’s Day of Atonement—which takes place 10 days after the Jewish New Year—to suggest a historical link between them. Yom Kippur, however, is about accepting responsibility, with the emphasis on owning up to sins committed rather than pointing out those omitted.

In Europe, the 14th-century Middle English poem “Sir Gawain and the Green Knight” begins its strange tale on New Year’s Day. A green-skinned knight arrives at King Arthur’s Camelot and challenges the knights to strike at him, on the condition that he can return the blow in a year and a day. Sir Gawain reluctantly accepts the challenge, and embarks on a year filled with adventures. Although he ultimately survives his encounter with the Green Knight, Gawain ends up haunted by his moral lapses over the previous 12 months. For, he laments (in J.R.R. Tolkien’s elegant translation), “a man may cover his blemish, but unbind it he cannot.”

New Year’s Eve in Shakespeare’s era was regarded as a day for gift-giving rather than as a catalyst for regret. But Sonnet 30 shows that Shakespeare was no stranger to the melancholy that looking back can inspire: “I summon up remembrance of things past, / I sigh the lack of many a thing I sought, / And with old woes new wail my dear time’s waste.”

For a full dose of New Year’s misery, however, nothing beats the Victorians. “I wait its close, I court its gloom,” declared the poet Walter Savage Landor in “Mild Is the Parting Year.” Not to be outdone, William Wordsworth offered his “Lament of Mary Queen of Scots on the Eve of a New Year”: “Pondering that Time tonight will pass / The threshold of another year; /…My very moments are too full / Of hopelessness and fear.”

Fortunately, there is always Charles Dickens. In 1844, Dickens followed up the wildly successful “A Christmas Carol” with a slightly darker but still uplifting seasonal tale, “The Chimes.” Trotty Veck, an elderly messenger, takes stock of his life on New Year’s Eve and decides that he has been nothing but a burden on society. He resolves to kill himself, but the spirits of the church bells intervene, showing him a vision of what would happen to the people he loves.

Today, most Americans recognize this story as the basis of the bittersweet 1946 Frank Capra film “It’s a Wonderful Life.” As an antidote to New Year’s blues, George Bailey’s lesson holds true for everyone: “No man is a failure who has friends.”

Historically Speaking: Trees of Life and Wonder

From Saturnalia to Christmas Eve, people have always had a spiritual need for greenery in the depths of winter

The Wall Street Journal, December 13, 2018

Queen Victoria and family with their Christmas tree in 1848. PHOTO: GETTY IMAGES

My family never had a pink-frosted Christmas tree, though Lord knows my 10-year-old self really wanted one. Every year my family went to Sal’s Christmas Emporium on Wilshire Boulevard in Los Angeles, where you could buy neon-colored trees, mechanical trees that played Christmas carols, blue and white Hanukkah bushes or even a real Douglas fir if you wanted to go retro. We were solidly retro.

Decorating the Christmas tree remains one of my most treasured memories, and according to the National Christmas Tree Association, the tradition is still thriving in our digital age: In 2017 Americans bought 48.5 million real and artificial Christmas trees. Clearly, bringing a tree into the house, especially during winter, taps into something deeply spiritual in the human psyche.

Nearly every society has at some point venerated the tree as a symbol of fertility and rebirth, or as a living link between the heavens, the earth and the underworld. In the ancient Near East, “tree of life” motifs appear on pottery as early as 7000 B.C. By the second millennium B.C., variations of the motif were being carved onto temple walls in Egypt and fashioned into bronze sculptures in southern China.

The early Christian fathers were troubled by the possibility that the faithful might identify the Garden of Eden’s trees of life and knowledge, described in the Book of Genesis, with paganism’s divine trees and sacred groves. Accordingly, in 572 the Council of Braga banned Christians from participating in the Roman celebration of Saturnalia—a popular winter solstice festival in honor of Saturn, the god of agriculture, that included decking the home with boughs of holly, his sacred symbol.

It wasn’t until the late Middle Ages that evergreens received a qualified welcome from the Church, as props in the mystery plays that told the story of Creation. In Germany, mystery plays were performed on Christmas Eve, traditionally celebrated in the church calendar as the feast day of Adam and Eve. The original baubles that hung on these “paradise trees,” representing the trees in the Garden of Eden, were round wafer breads that symbolized the Eucharist.

The Christmas tree remained a northern European tradition until Queen Charlotte, the German-born wife of George III, had one erected for a children’s party at Windsor Castle in 1800. The British upper classes quickly followed suit, but the rest of the country remained aloof until 1848, when the Illustrated London News published a charming picture of Queen Victoria and her family gathered around a large Christmas tree. Suddenly, every household had to have one for the children to decorate. It didn’t take long for President Franklin Pierce to introduce the first Christmas tree to the White House, in 1853—a practice that every President has honored except Theodore Roosevelt, who in 1902 refused to have a tree on conservationist grounds. (His children objected so much to the ban that he eventually gave in.)

Many writers have tried to capture the complex feelings that Christmas trees inspire, particularly in children. Few, though, can rival T.S. Eliot’s timeless meditation on joy, death and life everlasting, in his 1954 poem “The Cultivation of Christmas Trees”: “The child wonders at the Christmas Tree: / Let him continue in the spirit of wonder / At the Feast as an event not accepted as a pretext; / So that the glittering rapture, the amazement / Of the first-remembered Christmas Tree /…May not be forgotten.”

Historically Speaking: The Tradition of Telling All

From ancient Greece to modern Washington, political memoirs have been an irresistible source of gossip about great leaders

The Wall Street Journal, November 30, 2018

ILLUSTRATION: THOMAS FUCHS

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”

Historically Speaking: How Potatoes Conquered the World

It took centuries for the spud to travel from the New World to the Old and back again

The Wall Street Journal, November 15, 2018

At the first Thanksgiving dinner, eaten by the Wampanoag Indians and the Pilgrims in 1621, the menu was rather different from what’s served today. For one thing, the pumpkin was roasted, not made into a pie. And there definitely wasn’t a side dish of mashed potatoes.

In fact, the first hundred Thanksgivings were spud-free, since potatoes weren’t grown in North America until 1719, when Scotch-Irish settlers began planting them in New Hampshire. Mashed potatoes were an even later invention. The first recorded recipe for the dish appeared in 1747, in Hannah Glasse’s splendidly titled “The Art of Cookery Made Plain and Easy, Which Far Exceeds Any Thing of the Kind yet Published.”

By then, the potato had been known in Europe for a full two centuries. It was first introduced by the Spanish conquerors of Peru, where the Incas had revered the potato and even invented a natural way of freeze-drying it for storage. Nevertheless, despite its nutritional value and ease of growing, the potato didn’t catch on in Europe. It wasn’t merely foreign and ugly-looking; to wheat-growing farmers it seemed unnatural—possibly even un-Christian, since there is no mention of the potato in the Bible. Outside of Spain, it was generally grown for animal feed.

The change in the potato’s fortunes was largely due to the efforts of a Frenchman named Antoine-Augustin Parmentier (1737-1813). During the Seven Years’ War, he was taken prisoner by the Prussians and forced to live on a diet of potatoes. To his surprise, he stayed relatively healthy. Convinced he had found a solution to famine, Parmentier dedicated his life after the war to popularizing the potato’s nutritional benefits. He even persuaded Marie-Antoinette to wear potato flowers in her hair.

Among the converts to his message were the economist Adam Smith, who realized the potato’s economic potential as a staple food for workers, and Thomas Jefferson, then the U.S. Ambassador to France, who was keen for his new nation to eat well in all senses of the word. Jefferson is credited with introducing Americans to french fries at a White House dinner in 1802.

As Smith predicted, the potato became the fuel for the Industrial Revolution. A study published in 2011 by Nathan Nunn and Nancy Qian in the Quarterly Journal of Economics estimates that up to a quarter of the world’s population growth from 1700 to 1900 can be attributed solely to the introduction of the potato. As Louisa May Alcott observed in “Little Men,” in 1871, “Money is the root of all evil, and yet it is such a useful root that we cannot get on without it any more than we can without potatoes.”

In 1887, two Americans, Jacob Fitzgerald and William H. Silver, patented the first potato ricer, which forced a cooked potato through a cast-iron sieve, ending the scourge of lumpy mash. Still, the holy grail of “quick and easy” mashed potatoes remained elusive until the late 1950s. Using the flakes produced by the potato ricer and a new freeze-drying method, U.S. government scientists perfected instant mashed potatoes, which require only the simple step of adding hot water or milk to the mix. The days of peeling, boiling and mashing were now optional, and for millions of cooks, Thanksgiving became a little easier. And that’s something to be thankful for.


Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2019

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient civilization in modern-day Iraq where the world’s first cities emerged in the 4th millennium B.C., up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 B.C. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. The renowned 12th-century Rhineland abbess Hildegard of Bingen, a writer, composer, mystic and natural scientist, was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

Historically Speaking: The Miseries of Travel

ILLUSTRATION: THOMAS FUCHS

Today’s jet passengers may think they have it bad, but delay and discomfort have been a part of journeys since the Mayflower

The Wall Street Journal, September 20, 2018

Fifty years ago, on September 30, 1968, the world’s first 747 Jumbo Jet rolled out of Boeing’s plant in Everett, Washington. It was hailed as the future of commercial air travel, complete with fine dining, live piano music and glamorous stewardesses. And perhaps we might still be living in that future, were it not for the 1978 Airline Deregulation Act signed into law by President Jimmy Carter.

Deregulation was meant to increase the competitiveness of the airlines, while giving passengers more choice about the prices they paid. It succeeded in greatly expanding the accessibility of air travel, but at the price of making it a far less luxurious experience. Today, flying is a matter of “calculated misery,” as Columbia Law School professor Tim Wu put it in a 2014 article in the New Yorker. Airlines deliberately make travel unpleasant in order to force economy passengers to pay extra for things that were once considered standard, like food and blankets.

So it has always been with mass travel, since its beginnings in the 17th century: a test of how much discomfort and delay passengers are willing to endure. For the English Puritans who sailed to America on the Mayflower in 1620, light and ventilation were practically non-existent, the food was terrible and the sanitation primitive. All 102 passengers were crammed into a tiny living area just 80 feet long and 20 feet wide. To cap it all, the Mayflower took 66 days to arrive instead of the usual 47 for a trans-Atlantic crossing and was 600 miles off course from its intended destination of Virginia.

The introduction of the commercial stagecoach in 1610, by a Scottish entrepreneur who offered trips between Edinburgh and Leith, made it easier for the middle classes to travel by land. But it was still an expensive and unpleasant experience. Before macadam roads, which rely on layers of crushed stone to create a flat and durable surface, arrived in Britain in the 1820s, passengers sat cheek by jowl on springless benches, in a coach that trundled along at around five miles per hour.

The new paving technology improved travel times but not necessarily the overall experience. Charles Dickens had already found fame with his comic stories of coach travel in “The Pickwick Papers” when he and Mrs. Dickens traveled on an American stagecoach in Ohio in 1842. They paid to have the coach to themselves, but the journey was still rough: “At one time we were all flung together in a heap at the bottom of the coach.” Dickens chose to go by rail for the next leg of the trip, which wasn’t much better: “There is a great deal of jolting, a great deal of noise, a great deal of wall, not much window.”

Despite its primitive beginnings, 19th-century rail travel evolved to offer something revolutionary to its paying customers: quality service at an affordable price. In 1868, the American inventor George Pullman introduced his new designs for sleeping and dining cars. For a modest extra fee, the distinctive green Pullman cars provided travelers with hotel-like accommodation, forcing rail companies to raise their standards on all sleeper trains.

By contrast, the transatlantic steamship operators pampered their first-class passengers and abused the rest. In 1879, a reporter at the British Pall Mall Gazette sailed Cunard’s New York to Liverpool route in steerage in order to “test [the] truth by actual experience.” He was appalled to find that passengers were treated worse than cattle. No food was provided, “despite the fact that the passage is paid for.” The journalist noted that two steerage passengers “took one look at the place” and paid for an upgrade. I think we all know how they felt.

https://www.wsj.com/articles/the-miseries-of-travel-1537455854

Historically Speaking: When Royal Love Affairs Go Wrong

From Cleopatra to Edward VIII, monarchs have followed their hearts—with disastrous results.

ILLUSTRATION: THOMAS FUCHS

The Wall Street Journal, August 8, 2018

“Ay me!” laments Lysander in Shakespeare’s “A Midsummer Night’s Dream.” “For aught that I could ever read, / Could ever hear by tale or history, / The course of true love never did run smooth.” What audience would disagree? Thwarted lovers are indeed the stuff of history and art—especially when the lovers are kings and queens.

But there were good reasons why the monarchs of old were not allowed to follow their hearts. Realpolitik and royal passion do not mix, as Cleopatra VII (69-30 B.C.), the anniversary of whose death falls on Aug. 12, found to her cost. Her theatrical seduction of and subsequent affair with Julius Caesar insulated Egypt from Roman imperial designs. But in 41 B.C., she let her heart rule her head and fell in love with Mark Antony, who was fighting Caesar’s adopted son Octavian for control of Rome.

Cleopatra’s demand that Antony divorce his wife Octavia—sister of Octavian—and marry her instead was a catastrophic misstep. It made Egypt the target of Octavian’s fury, and forced Cleopatra into fighting Rome on Antony’s behalf. The couple’s defeat at the sea battle of Actium in 31 B.C. didn’t only end in personal tragedy: the 300-year-old Ptolemaic dynasty was destroyed, and Egypt was reduced to a Roman province.

In Shakespeare’s play “Antony and Cleopatra,” Antony laments, “I am dying, Egypt, dying.” It is a reminder that, as Egypt’s queen, Cleopatra was the living embodiment of her country; their fates were intertwined. That is why royal marriages have usually been inseparable from international diplomacy.

In 1339, when Prince Pedro of Portugal fell in love with his wife’s Castilian lady-in-waiting, Inés de Castro, the problem wasn’t the affair per se but the opportunity it gave to neighboring Castile to meddle in Portuguese politics. In 1355, Pedro’s father, King Afonso IV, took the surest way of separating the couple—who by now had four children together—by having Inés murdered. Pedro responded by launching a bloody civil war against his father that left northern Portugal in ruins. The dozens of romantic operas and plays inspired by the tragic love story neglect to mention its political repercussions; for decades afterward, the Portuguese throne was weak and the country divided.

Perhaps no monarchy in history bears more scars from Cupid’s arrow than the British. From Edward II (1284-1327), whose poor choice of male lovers unleashed murder and mayhem on the country—he himself was allegedly killed with a red-hot poker—to Henry VIII (1491-1547), who bullied and butchered his way through six wives and destroyed England’s Catholic way of life in the process, British rulers have been remarkable for their willingness to place personal happiness above public responsibility.

Edward VIII (1894-1972) was a chip off the old block, in the worst way. The moral climate of the 1930s couldn’t accept the King of England marrying a twice-divorced American. Declaring he would have Wallis Simpson or no one, Edward plunged the country into crisis by abdicating in 1936. With European monarchies falling on every side, Britain’s suddenly looked extremely vulnerable. The current Queen’s father, King George VI, quite literally saved it from collapse.

According to a popular saying, “Everything in the world is about sex except sex. Sex is about power.” That goes double when the lovers wear royal crowns.

Historically Speaking: In Awe of the Grand Canyon

Since the 16th century, travelers have recorded the overwhelming impact of a natural wonder.

ILLUSTRATION BY THOMAS FUCHS

Strange as it may sound, it was watching Geena Davis and Susan Sarandon in the tragic final scene of “Thelma and Louise” (1991) that convinced me I had to go to the Grand Canyon one day and experience its life-changing beauty. Nearly three decades have passed, but I’m finally here. Instead of a stylish 1966 Ford Thunderbird, however, I’m driving a mammoth RV, with my family in tow.

The overwhelming presence of the Grand Canyon is just as I dreamed. Yet I’m acutely aware of how one-sided the relationship is. As the Pulitzer Prize-winning poet Carl Sandburg wrote in “Many Hats” in 1928: “For each man sees himself in the Grand Canyon—each one makes his own Canyon before he comes.”

The first Europeans to encounter the Canyon were Spanish conquistadors searching for the legendary Seven Golden Cities of Cibola. In 1540, Hopi guides took a small scouting party led by García López de Cárdenas to the South Rim (60 miles north of present-day Williams, Ariz.). In Cárdenas’s mind, the Canyon was a route to riches. After trying for three days to find a path to reach the river below, he cut his losses in disgust and left. Cárdenas saw no point to the Grand Canyon if it failed to yield any treasure.

Three centuries later, in 1858, the first Euro-American to follow in Cárdenas’s footsteps, Lt. Joseph Christmas Ives of the U.S. Army Corps of Topographical Engineers, had a similar reaction. In his official report, Ives waxed lyrical about the magnificent scenery but concluded, “The region is, of course, altogether valueless…. Ours has been the first, and will doubtless be the last, party of whites to visit this profitless locality.”

Americans only properly “discovered” the Grand Canyon through the works of artists such as Thomas Moran. A devotee of the Hudson River School of painters, Moran found his spiritual and artistic home in the untamed landscapes of the West. His romantic pictures awakened the public to the natural wonder in their midst. Eager to see the real thing, visitors began arriving in growing numbers, and by the late 1880s the trickle had turned into a stream.

The effusive reactions to the Canyon recorded by tourists who made the arduous trek from Flagstaff, Ariz. (a railway to Grand Canyon Village was only built in 1901) have become a familiar refrain: “Not for human needs was it fashioned, but for the abode of gods…. To the end it effaced me,” wrote Harriet Monroe, the founder of Poetry magazine, in 1899.

But there was one class of people who were apparently insensible to the Canyon: copper miners. Watching their thoughtless destruction of the landscape, Monroe wondered, “Do they cease to feel it?” President Theodore Roosevelt feared so, and in 1908 he made an executive decision to protect 800,000 acres from exploitation by creating the Grand Canyon National Monument.

Roosevelt’s farsightedness may have put a crimp in the profits of mining companies, but it paid dividends in other ways. By the 1950s, the Canyon had become a must-see destination, attracting visitors from all over the world. Among them were the tragic Sylvia Plath, author of “The Bell Jar,” and her husband, Ted Hughes, the future British Poet Laureate. Thirty years later, the visit to the Canyon still haunted Hughes: “I never went back and you are dead. / But at odd moments it comes, / As if for the first time.” He is not alone, I suspect, in never fully leaving the Canyon behind.

WSJ Historically Speaking: The Gym, for Millennia of Bodies and Souls

Today’s gyms, which depend on our vanity and body envy, are a far cry from what the Greeks envisioned

ILLUSTRATION: THOMAS FUCHS

Going to the gym takes on a special urgency at this time of year, as we prepare to put our bodies on display at the pool and beach. Though the desire to live a virtuous life of fitness no doubt plays its part, vanity and body envy are, I suspect, the main motivation for our seasonal exertions.

The ancient Greeks, who invented gyms (the Greek gymnasion means “school for naked exercise”), were also body-conscious, but they saw a deeper point to the sweat. No mere muscle shops, Greek gymnasia were state-sponsored institutions aimed at training young men to embody, literally, the highest ideals of Greek virtue. In Plato’s “The Republic,” Socrates says that the two branches of physical and academic education “seem to have been given by some god to men…to ensure a proper harmony between energy and initiative on the one hand and reason on the other, by tuning each to the right pitch.”

Physical competition, culminating in the Olympics, was a form of patriotic activity, and young men went to the gym to socialize, bathe and learn to think. Aristotle founded his school of philosophy in the Lyceum, in a gymnasium that included physical training.

The Greek concept fell out of favor in the West with the rise of Christianity. The abbot St. Bernard of Clairvaux (1090–1153), who advised five popes, wrote, “The spirit flourishes more strongly…in an infirm and weak body,” neatly summing up the medieval ambivalence toward physicality.

Many centuries later, an eccentric German educator named Friedrich Jahn (1778-1852) played a key role in the gym’s revival. Convinced that Prussia’s defeat by Napoleon was due to his compatriots’ descent into physical and moral weakness, Jahn decided that a Greek-style gym would “preserve young people from laxity and…prepare them to fight for the fatherland.” In 1811, he opened a gym in Berlin for military-style physical training (not to be confused with the older German usage of the term gymnasium for the most advanced level of secondary schools).

By the mid-19th century, Europe’s upper-middle classes had sufficient wealth and leisure time to devote themselves to exercise for exercise’s sake. Hippolyte Triat opened two of the first truly commercial gyms in Brussels and Paris in the late 1840s. A retired circus strongman, he capitalized on his physique to sell his “look.”

But broader spiritual ideas still influenced the spread of physical fitness. The 19th-century movement Muscular Christianity sought to transform the working classes into healthy, patriotic Christians. One offshoot, the Young Men’s Christian Association, became famous for its low-cost gyms.

By the mid-20th century, Americans were using their gyms for two different sets of purposes. Those devoted to “manliness” worked out at places like Gold’s Gym and aimed to wow others with their physiques. The other group, “health and fitness” advocates, expanded sharply after Jack LaLanne, who founded his first gym in 1936, turned a healthy lifestyle into a salable commodity. A few decades later, Jazzercise, aerobics, disco and spandex made the gym a liberating, fashionable and sexy place.

More than 57 million Americans belong to a health club today, but until local libraries start adding spinning classes and CrossFit, the gym will remain a shadow of the original Greek ideal. We prize our sound bodies, but we aren’t nearly as devoted to developing sound mind and character.

WSJ Historically Speaking: The Quest for Unconsciousness: A Brief History of Anesthesia

The ancient Greeks used alcohol and opium. Patients in the 12th century got a ‘soporific sponge.’ A look at anesthetics over the centuries

ILLUSTRATION: ELLEN WEINSTEIN

Every year, some 21 million Americans undergo a general anesthetic. During recent minor surgery, I became one of the roughly 26,000 Americans a year who experience “anesthetic awareness” during sedation: I woke up. I still can’t say what was more disturbing: being conscious or seeing the horrified faces of the doctors and nurses.

The best explanation my doctors could give was that not all brains react in the same way to a general anesthetic. Redheads, for example, seem to require higher dosages than brunettes. While not exactly reassuring, this explanation does highlight one of the many mysteries behind the science of anesthesia.

Although being asleep and being unconscious might look the same, they are very different states. Until the mid-19th century, a medically induced deep unconsciousness was beyond the reach of science. Healers had no reliable way to control, let alone eliminate, a patient’s awareness or pain during surgery, though not for lack of trying.

The ancient Greeks generally relied on alcohol, poppy opium or mandrake root to sedate patients. Evidence from the “Sushruta Samhita,” an ancient Sanskrit medical text, suggests that Indian healers used cannabis incense. The Chinese developed acupuncture at some point before 100 B.C., and in Central and South America, shamans used the spit from chewed coca leaves as a numbing balm.

Little changed over the centuries. In the 12th century, Nicholas of Salerno recorded in a treatise the recipe for a “soporific sponge” with ingredients that hadn’t advanced much beyond the medicines used by the Greeks: a mixture of opium, mulberry juice, lettuce seed, mandrake, ivy and hemlock.

Discoveries came but weren’t exploited. In 1540, the Swiss alchemist and astrologer Paracelsus (aka Theophrastus Bombastus von Hohenheim) noted that liquid ether could induce sleep in animals. In 1772, the English chemist Joseph Priestley discovered nitrous oxide gas (laughing gas). Inhaling it became the thing to do at parties—in 1799, the poet Coleridge described trying the gas—but apparently no one tried ether or nitrous oxide for medicinal purposes.

In 1811, the novelist Fanny Burney had no recourse when she went under the knife for suspected breast cancer. She wrote later, “O Heaven!—I then felt the Knife rackling against the breast bone—scraping it!”

Despite the ordeal, Burney lived into her 80s, dying in 1840—just before everything changed. Ether, nitrous oxide and later chloroform soon became common in operating theaters. On Oct. 16, 1846, a young dentist from Boston named William Morton made history by performing surgery on a patient anesthetized with ether. It was such a success that, a few months later, Frances Appleton Longfellow, wife of Henry Wadsworth Longfellow, became the first American to receive anesthesia during childbirth.

But these wonder drugs were lethal if not administered properly. A German study compiled in 1934 estimated that the number of chloroform-related deaths was as high as 1 in 3,000 operations. The drive for safer drugs produced such breakthroughs as halothane in 1955, which could be inhaled by patients.

Yet for all the continuous advances in anesthesia, scientists still don’t entirely understand how it works. A study published in the December 2017 issue of Annals of Botany reveals that anesthetics can also stop motion in plants like the Venus flytrap—which, as far as we know, doesn’t have a brain. Clearly, we still have a lot to learn about consciousness in every form.