Historically Speaking: The Tradition of Telling All

From ancient Greece to modern Washington, political memoirs have been an irresistible source of gossip about great leaders

The Wall Street Journal, November 30, 2018

ILLUSTRATION: THOMAS FUCHS

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”

Historically Speaking: How Potatoes Conquered the World

It took centuries for the spud to travel from the New World to the Old and back again

The Wall Street Journal, November 15, 2018

At the first Thanksgiving dinner, eaten by the Wampanoag Indians and the Pilgrims in 1621, the menu was rather different from what’s served today. For one thing, the pumpkin was roasted, not made into a pie. And there definitely wasn’t a side dish of mashed potatoes.

In fact, the first hundred Thanksgivings were spud-free, since potatoes weren’t grown in North America until 1719, when Scotch-Irish settlers began planting them in New Hampshire. Mashed potatoes were an even later invention. The first recorded recipe for the dish appeared in 1747, in Hannah Glasse’s splendidly titled “The Art of Cookery Made Plain and Easy, Which Far Exceeds Any Thing of the Kind yet Published.”

By then, the potato had been known in Europe for a full two centuries. It was first introduced by the Spanish conquerors of Peru, where the Incas had revered the potato and even invented a natural way of freeze-drying it for storage. Yet despite its nutritional value and ease of cultivation, the potato didn’t catch on in Europe. It wasn’t merely foreign and ugly-looking; to wheat-growing farmers it seemed unnatural—possibly even un-Christian, since there is no mention of the potato in the Bible. Outside of Spain, it was generally grown for animal feed.

The change in the potato’s fortunes was largely due to the efforts of a Frenchman named Antoine-Augustin Parmentier (1737-1813). During the Seven Years’ War, he was taken prisoner by the Prussians and forced to live on a diet of potatoes. To his surprise, he stayed relatively healthy. Convinced he had found a solution to famine, Parmentier dedicated his life after the war to popularizing the potato’s nutritional benefits. He even persuaded Marie-Antoinette to wear potato flowers in her hair.

Among the converts to his message were the economist Adam Smith, who realized the potato’s economic potential as a staple food for workers, and Thomas Jefferson, then the U.S. Ambassador to France, who was keen for his new nation to eat well in all senses of the word. Jefferson is credited with introducing Americans to french fries at a White House dinner in 1802.

As Smith predicted, the potato became the fuel for the Industrial Revolution. A study published in 2011 by Nathan Nunn and Nancy Qian in the Quarterly Journal of Economics estimates that up to a quarter of the world’s population growth from 1700 to 1900 can be attributed solely to the introduction of the potato. As Louisa May Alcott observed in “Little Men,” in 1871, “Money is the root of all evil, and yet it is such a useful root that we cannot get on without it any more than we can without potatoes.”

In 1887, two Americans, Jacob Fitzgerald and William H. Silver, patented the first potato ricer, which forced a cooked potato through a cast-iron sieve, ending the scourge of lumpy mash. Still, the holy grail of “quick and easy” mashed potatoes remained elusive until the late 1950s. Using the flakes produced by the potato ricer and a new freeze-drying method, U.S. government scientists perfected instant mashed potatoes, which require only the simple step of adding hot water or milk to the mix. The days of peeling, boiling and mashing were now optional, and for millions of cooks, Thanksgiving became a little easier. And that’s something to be thankful for.


Historically Speaking: Overrun by Alien Species

From Japanese knotweed to cane toads, humans have introduced invasive species to new environments with disastrous results

The Wall Street Journal, November 1, 2018

Ever since Neolithic people wandered the earth, inadvertently bringing the mouse along for the ride, humans have been responsible for introducing animal and plant species into new environments. But problems can arise when a non-native species encounters no barriers to population growth, allowing it to rampage unchecked through the new habitat, overwhelming the ecosystem. On more than one occasion, humans have transplanted a species for what seemed like good reasons, only to find out too late that the consequences were disastrous.

One of the most famous examples is celebrating its 150th anniversary this year: the introduction of Japanese knotweed to the U.S. A highly aggressive plant, it can grow 15 feet high and has roots that spread up to 45 feet. Knotweed had already been a hit in Europe because of its pretty little white flowers, and, yes, its miraculous indestructibility.

First mentioned in botanical articles in 1868, knotweed was brought to New York by the Hogg brothers, James and Thomas, eminent American horticulturalists and among the earliest collectors of Japanese plants. Thanks to their extensive contacts, knotweed found a home in arboretums, botanical gardens and even Central Park. Not content with importing one of the world’s most invasive shrubs, the Hoggs also introduced Americans to the wonders of kudzu, a dense vine that can grow a foot a day.

Impressed by the vigor of kudzu, agriculturalists recommended using these plants to provide animal fodder and prevent soil erosion. In the 1930s, the government was even paying Southern farmers $8 per acre to plant kudzu. Today it is known as the “vine that ate the South,” because of the way it covers huge tracts of land in a green blanket of death. And Japanese knotweed is still spreading, colonizing entire habitats from Mississippi to Alaska, where only the Arctic tundra holds it back from world domination.

Knotweed has also reached Australia, a country that has been ground zero for the worst excesses of invasive species. In the 19th century, the British imported non-native animals such as rabbits, cats, goats, donkeys, pigs, foxes and camels, causing mass extinctions of Australia’s native mammal species. Australians are still paying the price; there are more rabbits in the country today than wombats, more camels than kangaroos.

Yet the lesson wasn’t learned. In the 1930s, scientists in both Australia and the U.S. decided to import the South American cane toad as a form of biowarfare against beetles that eat sugar cane. The experiment failed, and it turned out that the cane toad was poisonous to any predator that ate it. There’s also the matter of the 30,000 eggs it can lay at a time. Today, the cane toad can be found all over northern Australia and south Florida.

So is there anything we can do once an invasive species has taken up residence? The answer is yes, but it requires more than just fences, traps and pesticides; it means changing human incentives. Today, for instance, the voracious Indo-Pacific lionfish is gobbling up local fish in the west Atlantic, while the Asian carp threatens the ecosystem of the Great Lakes. There is only one solution: We must eat them, dear reader. These invasive fish can be grilled, fried or consumed as sashimi, and they taste delicious. Likewise, kudzu makes great salsa, and Japanese knotweed can be treated like rhubarb. Eat for America and save the environment.

Historically Speaking: The Dark Lore of Black Cats

Ever since they were worshiped in ancient Egypt, cats have occupied an uncanny place in the world’s imagination

The Wall Street Journal, October 22, 2018

ILLUSTRATION: THOMAS FUCHS

As Halloween approaches, decorations featuring scary black cats are starting to make their seasonal appearance. But what did the black cat ever do to deserve its reputation as a symbol of evil? Why is it considered bad luck to have a black cat cross your path?

It wasn’t always this way. In fact, the first human-cat interactions were benign and based on mutual convenience. The invention of agriculture in the Neolithic era led to surpluses of grain, which attracted rodents, which in turn motivated wild cats to hang around humans in the hope of catching dinner. Domestication soon followed: The world’s oldest pet cat was found in a 9,500-year-old grave in Cyprus, buried alongside its human owner.

According to the Roman writer Polyaenus, who lived in the second century A.D., the Egyptian veneration of cats led to disaster at the Battle of Pelusium in 525 B.C. The invading Persian army carried cats on the front lines, rightly calculating that the Egyptians would rather accept defeat than kill a cat.

The Egyptians were unique in their extreme veneration of cats, but they weren’t alone in regarding them as having a special connection to the spirit world. In Greek mythology the cat was a familiar of Hecate, goddess of magic, sorcery and witchcraft. Hecate’s pet had once been a serving maid named Galanthis, who was turned into a cat as punishment by the goddess Hera for being rude.

When Christianity became the official religion of Rome in 380, the association of cats with paganism and witchcraft made them suspect. Moreover, the cat’s independence suggested a willful rebellion against the teaching of the Bible, which said that Adam had dominion over all the animals. The cat’s reputation worsened during the medieval era, as the Catholic Church battled against heresies and dissent. Fed lurid tales by his inquisitors, in 1233 Pope Gregory IX issued a papal bull, “Vox in Rama,” which accused heretics of using black cats in their nighttime sex orgies with Lucifer—who was described as half-cat in appearance.

In Europe, countless numbers of cats were killed in the belief that they could be witches in disguise. In 1484, Pope Innocent VIII fanned the flames of anti-cat prejudice with his papal bull on witchcraft, “Summis Desiderantes Affectibus,” which stated that the cat was “the devil’s favorite animal and idol of all witches.”

The Age of Reason ought to have rescued the black cat from its pariah status, but superstitions die hard. (How many modern apartment buildings lack a 13th floor?) Cats had plenty of ardent fans among 19th-century writers, including Charles Dickens and Mark Twain, who wrote, “I simply can’t resist a cat, particularly a purring one.” But Edgar Allan Poe, the master of the gothic tale, felt otherwise: In his 1843 story “The Black Cat,” the spirit of a dead cat drives its killer to madness and destruction.

So pity the poor black cat, which through no fault of its own has gone from being an instrument of the devil to the convenient tool of the horror writer—and a favorite Halloween cliché.


Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2018

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient empire in modern-day Iraq where the world’s first cities emerged in the 4th millennium BC, up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 BC. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

Historically Speaking: The Miseries of Travel

Today’s jet passengers may think they have it bad, but delay and discomfort have been a part of journeys since the Mayflower

The Wall Street Journal, September 20, 2018

ILLUSTRATION: THOMAS FUCHS

Fifty years ago, on September 30, 1968, the world’s first 747 Jumbo Jet rolled out of Boeing’s Everett plant near Seattle, Washington. It was hailed as the future of commercial air travel, complete with fine dining, live piano music and glamorous stewardesses. And perhaps we might still be living in that future, were it not for the 1978 Airline Deregulation Act signed into law by President Jimmy Carter.

Deregulation was meant to increase the competitiveness of the airlines, while giving passengers more choice about the prices they paid. It succeeded in greatly expanding the accessibility of air travel, but at the price of making it a far less luxurious experience. Today, flying is a matter of “calculated misery,” as Columbia Law School professor Tim Wu put it in a 2014 article in the New Yorker. Airlines deliberately make travel unpleasant in order to force economy passengers to pay extra for things that were once considered standard, like food and blankets.

So it has always been with mass travel, since its beginnings in the 17th century: a test of how much discomfort and delay passengers are willing to endure. For the English Puritans who sailed to America on the Mayflower in 1620, light and ventilation were practically non-existent, the food was terrible and the sanitation primitive. All 102 passengers were crammed into a tiny living area just 80 feet long and 20 feet wide. To cap it all, the Mayflower took 66 days to arrive instead of the usual 47 for a trans-Atlantic crossing and was 600 miles off course from its intended destination of Virginia.

The introduction of the commercial stage coach in 1610, by a Scottish entrepreneur who offered trips between Edinburgh and Leith, made it easier for the middle classes to travel by land. But it was still an expensive and unpleasant experience. Before the invention of macadam roads—which rely on layers of crushed stone to create a flat and durable surface—in Britain in the 1820s, passengers sat cheek by jowl on springless benches, in a coach that trundled along at around five miles per hour.

The new paving technology improved the travel times but not necessarily the overall experience. Charles Dickens had already found fame with his comic stories of coach travel in “The Pickwick Papers” when he and Mrs. Dickens traveled on an American stage coach in Ohio in 1842. They paid to have the coach to themselves, but the journey was still rough: “At one time we were all flung together in a heap at the bottom of the coach.” Dickens chose to go by rail for the next leg of the trip, which wasn’t much better: “There is a great deal of jolting, a great deal of noise, a great deal of wall, not much window.”

Despite its primitive beginnings, 19th-century rail travel evolved to offer something revolutionary to its paying customers: quality service at an affordable price. In 1868, the American inventor George Pullman introduced his new designs for sleeping and dining cars. For a modest extra fee, the distinctive green Pullman cars provided travelers with hotel-like accommodation, forcing rail companies to raise their standards on all sleeper trains.

By contrast, the transatlantic steamship operators pampered their first-class passengers and abused the rest. In 1879, a reporter at the British Pall Mall Gazette sailed Cunard’s New York to Liverpool route in steerage in order to “test [the] truth by actual experience.” He was appalled to find that passengers were treated worse than cattle. No food was provided, “despite the fact that the passage is paid for.” The journalist noted that two steerage passengers “took one look at the place” and paid for an upgrade. I think we all know how they felt.


Historically Speaking: Poison and Politics

From ‘cantarella’ to polonium, governments have used toxins to terrorize and kill their enemies

The Wall Street Journal, September 7, 2018

ILLUSTRATION: THOMAS FUCHS

Among the pallbearers at Senator John McCain’s funeral in Washington last weekend was the Russian dissident Vladimir Kara-Murza. Mr. Kara-Murza is a survivor of two poisoning attempts, in 2015 and 2017, which he believes were intended as retaliation for his activism against the Putin regime.

Indeed, Russia is known or suspected to be responsible for several notorious recent poisoning cases, including the attempted murder this past March of Sergei Skripal, a former Russian spy living in Britain, and his daughter Yulia with the nerve agent Novichok. They survived the attack, but several months later a British woman died of Novichok exposure a few miles from where the Skripals lived.

Poison has long been a favorite tool of brutal statecraft: It both terrorizes and kills, and it can be administered without detection. The Arthashastra, an ancient Indian political treatise that out-Machiavels Machiavelli, contains hundreds of recipes for toxins, as well as advice on when and how to use them to eliminate an enemy.

Most royal and imperial courts of the classical world were also awash with poison. Though it is impossible to prove so many centuries later, the long list of putative victims includes Alexander the Great (poisoned wine), Emperor Augustus (poisoned figs) and Emperor Claudius (poisoned mushrooms), as well as dozens of royal heirs, relatives, rivals and politicians. King Mithridates of Pontus, an ancient Hellenistic kingdom, was so paranoid—having survived a poison attempt by his own mother—that he took daily microdoses of every known toxin in order to build up his immunity.

Poisoning reached its next peak during the Italian Renaissance. Every ruling family, from the Medicis to the Viscontis, either fell victim to poison or employed it as a political weapon. The Borgias were even reputed to have their own secret recipe, a variation of arsenic called “cantarella.” Although a large number of their rivals conveniently dropped dead, the Borgias were small fry compared with the republic of Venice. The records of the Venetian Council of Ten reveal that a secret poison program went on for decades. Remarkably, two victims are known to have survived their assassination attempts: Count Francesco Sforza in 1450 and the Ottoman Sultan Mehmed II in 1477.

In the 20th century, the first country known to have established a targeted poisoning program was Russia under the Bolsheviks. According to Boris Volodarsky, a former Russian agent, Lenin ordered the creation of a poison laboratory called the “Special Room” in 1921. By the Cold War, the one-room lab had evolved into an international factory system staffed by hundreds, possibly thousands of scientists. Their specialty was untraceable poisons delivered by ingenious weapons—such as a cigarette packet made in 1954 that could fire bullets filled with potassium cyanide.

In 1978, the prizewinning Bulgarian writer Georgi Markov, then working for the BBC in London, was killed by an umbrella tip that shot a pellet containing the poison ricin into his leg. After the international outcry, the Soviet Union toned down its poisoning efforts but didn’t end them. And Putin’s Russia has continued to use similar techniques. In 2006, according to an official British inquiry, Russian secret agents murdered the ex-spy Alexander Litvinenko by slipping polonium into his drink during a meeting at a London hotel. It was the beginning of a new wave of poisonings whose end is not yet in sight.

Historically Speaking: The Struggle Before #MeToo

Today’s women are not the first to take a public stand against sexual assault and harassment

The Wall Street Journal, August 23, 2018

Rosa Parks in 1955 PHOTO: GETTY IMAGES

Since it began making headlines last year, the #MeToo movement has expanded into a global rallying cry. The campaign has many facets, but its core message is clear: Women who are victims of sexual harassment and assault still face too many obstacles in their quest for justice.

How much harder it was for women in earlier eras is illustrated perfectly by Emperor Constantine’s 326 edict on rape and abduction. While condemning both, the law assumed that all rape victims deserved punishment for their failure to resist more forcefully. The best outcome for the victim was disinheritance from her parents’ estate; the worst, death by burning.

In the Middle Ages, a rape victim was more likely to be blamed than believed, unless she suffered death or dismemberment in the attack. That makes the case of the Englishwoman Isabella Plomet all the more remarkable. In 1292, Plomet went to her doctor Ralph de Worgan to be treated for a leg problem. He made her drink a sleeping drug and then proceeded to rape her while she was unconscious.

It’s likely that Worgan, a respected pillar of local society, had relied for years on the silence of his victims. But Plomet’s eloquence in court undid him: He was found guilty and fined. The case was a landmark in medieval law, broadening the definition of rape to include nonconsent through intoxication.

But prejudice against the victims of sexual assault was slow to change. In Catholic Europe, notions of family honor and female reputation usually meant that victims had to marry their rapists or be classed as ruined. This was the origin of the most famous case of the 17th century. In 1611, Artemisia Gentileschi and her father Orazio brought a suit in a Roman court against her art teacher, Agostino Tassi, for rape.

Although Tassi had a previous criminal record, as a “dishonored” woman it was Gentileschi who had to submit to torture to prove that she was telling the truth. She endured an eight-month trial to see Tassi convicted and temporarily banished from Rome. “Cleared” by her legal victory, Gentileschi refused to let the attack define her or determine the rest of her life. She is now regarded as one of the greatest artists of the Baroque era.

One class of victims who had no voice and no legal recourse were free and enslaved black women in pre-Civil War America. Their stories make grim reading. In 1855, Celia, an 18-year-old slave in Missouri, killed her master when he attempted to rape her. At her trial she insisted—through her lawyers, since she was barred from testifying—that the right to self-defense extended to all women. The court disagreed, and Celia was executed—but not before making a successful prison break and almost escaping.

Change was still far off in 1931, when the 18-year-old Rosa Parks, working as a housekeeper, was pounced on by her white employer. As she later recalled, “He offered me a drink of whiskey, which I promptly and vehemently refused. He moved nearer to me and put his hand on my waist.” She managed to fight him off, and in a larger sense Parks never stopped fighting. She became a criminal investigator for the NAACP, helping black victims of white sexual assault to press charges.

Rosa Parks is often referred to as the “first lady of civil rights,” in recognition of her famous protest on a segregated bus in Montgomery, Alabama in 1955. She should also be remembered as one of the unsung heroines in the long prehistory of #MeToo.

Historically Speaking: When Royal Love Affairs Go Wrong

From Cleopatra to Edward VIII, monarchs have followed their hearts—with disastrous results

The Wall Street Journal, August 8, 2018

ILLUSTRATION: THOMAS FUCHS

“Ay me!” laments Lysander in Shakespeare’s “A Midsummer Night’s Dream.” “For aught that I could ever read, / Could ever hear by tale or history, / The course of true love never did run smooth.” What audience would disagree? Thwarted lovers are indeed the stuff of history and art—especially when the lovers are kings and queens.

But there were good reasons why the monarchs of old were not allowed to follow their hearts. Realpolitik and royal passion do not mix, as Cleopatra VII (69-30 B.C.), the anniversary of whose death falls on Aug. 12, found to her cost. Her theatrical seduction of and subsequent affair with Julius Caesar insulated Egypt from Roman imperial designs. But in 41 B.C., she let her heart rule her head and fell in love with Mark Antony, who was fighting Caesar’s adopted son Octavian for control of Rome.

Cleopatra’s demand that Antony divorce his wife Octavia—sister of Octavian—and marry her instead was a catastrophic misstep. It made Egypt the target of Octavian’s fury, and forced Cleopatra into fighting Rome on Antony’s behalf. The couple’s defeat at the sea battle of Actium in 31 B.C. didn’t only end in personal tragedy: the 300-year-old Ptolemaic dynasty was destroyed, and Egypt was reduced to a Roman province.

In Shakespeare’s play “Antony and Cleopatra,” Antony laments, “I am dying, Egypt, dying.” It is a reminder that, as Egypt’s queen, Cleopatra was the living embodiment of her country; their fates were intertwined. That is why royal marriages have usually been inseparable from international diplomacy.

In 1339, when Prince Pedro of Portugal fell in love with his wife’s Castilian lady-in-waiting, Inés de Castro, the problem wasn’t the affair per se but the opportunity it gave to neighboring Castile to meddle in Portuguese politics. In 1355, Pedro’s father, King Afonso IV, took the surest way of separating the couple—who by now had four children together—by having Inés murdered. Pedro responded by launching a bloody civil war against his father that left northern Portugal in ruins. The dozens of romantic operas and plays inspired by the tragic love story neglect to mention its political repercussions; for decades afterward, the Portuguese throne was weak and the country divided.

Perhaps no monarchy in history bears more scars from Cupid’s arrow than the British. From Edward II (1284-1327), whose poor choice of male lovers unleashed murder and mayhem on the country—he himself was allegedly killed with a red-hot poker—to Henry VIII (1491-1547), who bullied and butchered his way through six wives and destroyed England’s Catholic way of life in the process, British rulers have been remarkable for their willingness to place personal happiness above public responsibility.

Edward VIII (1894-1972) was a chip off the old block, in the worst way. The moral climate of the 1930s couldn’t accept the King of England marrying a twice-divorced American. Declaring he would have Wallis Simpson or no one, Edward plunged the country into crisis by abdicating in 1936. With European monarchies falling on every side, Britain’s suddenly looked extremely vulnerable. The current Queen’s father, King George VI, quite literally saved it from collapse.

According to a popular saying, “Everything in the world is about sex except sex. Sex is about power.” That goes double when the lovers wear royal crowns.

The Sunday Times: No more midlife crisis – I’m riding the U-curve of happiness

Evidence shows people become happier in their fifties, but achieving that takes some soul-searching

I used not to believe in the “midlife crisis”. I am ashamed to say that I thought it was a convenient excuse for self-indulgent behaviour — such as splurging on a Lamborghini or getting buttock implants. So I wasn’t even aware that I was having one until earlier this year, when my family complained that I had become miserable to be around. I didn’t shout or take to my bed, but five minutes in my company was a real downer. The closer I got to my 50th birthday, the more I radiated dissatisfaction.

Can you be simultaneously contented and discontented? The answer is yes. Surveys of “national wellbeing” in several countries, including those conducted in the UK by the Office for National Statistics, have revealed a fascinating U-curve in relation to happiness and age. In Britain, feelings of stress and anxiety appear to peak at 49 and subsequently fade as the years increase. Interestingly, a 2012 study showed that chimpanzees and orang-utans exhibit a similar U-curve of happiness as they reach middle age.

On a rational level, I wasn’t the least bit disappointed with my life. The troika of family, work and friends made me very happy. And yet something was eating away at my peace of mind. I regarded myself as a failure — not in terms of work but as a human being. Learning that I wasn’t alone in my daily acid bath of gloom didn’t change anything.

One of F Scott Fitzgerald’s most memorable lines is: “There are no second acts in American lives.” It’s so often quoted that it’s achieved the status of a truism. It’s often taken to be an ironic commentary on how Americans, particularly men, are so frightened of failure that they cling to the fiction that life is a perpetual first act. As I thought about the line in relation to my own life, Fitzgerald’s meaning seemed clear. First acts are about actions and opportunities. There is hope, possibility and redemption. Second acts are about reactions and consequences.

Old habits die hard, however. I couldn’t help conducting a little research into Fitzgerald’s life. What was the author of The Great Gatsby really thinking when he wrote the line? Would it even matter?

The answer turned out to be complicated. As far as the quotation goes, Fitzgerald actually wrote the reverse. The line appears in a 1935 essay entitled My Lost City, about his relationship with New York: “I once thought that there were no second acts in American lives, but there was certainly to be a second act to New York’s boom days.”

It reappeared in the notes for his Hollywood novel, The Love of the Last Tycoon, which was half finished when he died in 1940, aged 44. Whatever he had planned for his characters, the book was certainly meant to have been Fitzgerald’s literary comeback — his second act — after a decade of drunken missteps, declining book sales and failed film projects.

Fitzgerald may not have subscribed to the “It’s never too late to be what you might have been” school of thought, but he wasn’t blind to reality. Of course he believed in second acts. The world is full of middle-aged people who successfully reinvented themselves a second or even third time. The mercurial rise of Emperor Claudius (10 BC to AD 54) is one of the earliest historical examples of the true “second act”.

According to Suetonius, Claudius’s physical infirmities had made him the butt of scorn among his powerful family. But his lowly status saved him after the assassination of his nephew, Caligula. The plotters found the 50-year-old Claudius cowering behind a curtain. On the spur of the moment, instead of killing him, as they did Caligula’s wife and daughter, they decided the stumbling and stuttering scion of the Julio-Claudian dynasty could be turned into a puppet emperor. It was a grave miscalculation. Claudius seized on his changed circumstances. The bumbling persona was dropped and, although flawed, he became a forceful and innovative ruler.

Mostly, however, it isn’t a single event that shapes life after 50 but the willingness to stay the course long after the world has turned away. It’s extraordinary how the granting of extra time can turn tragedy into triumph. In his heyday, General Mikhail Kutuzov was hailed as Russia’s greatest military leader. But by 1800 the 55-year-old was prematurely aged. Stiff-limbed, bloated and blind in one eye, Kutuzov looked more suited to play the role of the buffoon than the great general. He was Alexander I’s last choice to lead the Russian forces at the Battle of Austerlitz in 1805, but was the first to be blamed for the army’s defeat.

Kutuzov was relegated to the sidelines after Austerlitz. He remained under official disfavour until Napoleon’s army was halfway to Moscow in 1812. Only then, with the army and the aristocracy begging for his recall, did the tsar agree to his reappointment. Thus, in Russia’s hour of need it ended up being Kutuzov, the disgraced general, who saved the country.

Winston Churchill had a similar apotheosis in the Second World War. For most of the 1930s he was considered a political has-been by friends and foes alike. His elevation to prime minister in 1940 at the age of 65 changed all that, of course. But had it not been for the extraordinary circumstances created by the war, Robert Rhodes James’s Churchill: A Study in Failure, 1900-1939 would have been the epitaph rather than the prelude to the greatest chapter in his life.

It isn’t just generals and politicians who can benefit from second acts. For writers and artists, particularly women, middle age can be extremely liberating. The Booker prize-winning novelist Penelope Fitzgerald published her first book at 59 after a lifetime of teaching while supporting her children and alcoholic husband. Thereafter she wrote at a furious pace, producing nine novels and three biographies before she died at 83.

I could stop right now and end with a celebratory quote from Morituri Salutamus by the American poet Henry Wadsworth Longfellow: “For age is opportunity no less / Than youth itself, though in another dress, / And as the evening twilight fades away / The sky is filled with stars, invisible by day.”

However, that isn’t — and wasn’t — what was troubling me in the first place. I don’t think the existential anxieties of middle age are caused or cured by our careers. Sure, I could distract myself with happy thoughts about a second act where I become someone who can write a book a year rather than one a decade. But that would still leave the problem of the flesh-and-blood person I had become in reality. What to think of her? It finally dawned on me that this had been my fear all along: it doesn’t matter which act I am in; I am still me.

My funk lifted once the big day rolled around. I suspect that joining a gym and going on a regular basis had a great deal to do with it. But I had also learnt something valuable during these past few months. Worrying about who you thought you would be or what you might have been fills a void but leaves little space for anything else. It’s coming to terms with who you are right now that really matters.