Overrun by Alien Species

From Japanese knotweed to cane toads, humans have introduced invasive species to new environments with disastrous results

Ever since Neolithic people wandered the earth, inadvertently bringing the mouse along for the ride, humans have been responsible for introducing animal and plant species into new environments. But problems can arise when a non-native species encounters no barriers to population growth, allowing it to rampage unchecked through the new habitat, overwhelming the ecosystem. On more than one occasion, humans have transplanted a species for what seemed like good reasons, only to find out too late that the consequences were disastrous.

One of the most famous examples is celebrating its 150th anniversary this year: the introduction of Japanese knotweed to the U.S. A highly aggressive plant, it can grow 15 feet high and has roots that spread up to 45 feet. Knotweed had already been a hit in Europe because of its pretty little white flowers, and, yes, its miraculous indestructibility.

First mentioned in botanical articles in 1868, knotweed was brought to New York by the Hogg brothers, James and Thomas, eminent American horticulturalists and among the earliest collectors of Japanese plants. Thanks to their extensive contacts, knotweed found a home in arboretums, botanical gardens and even Central Park. Not content with importing one of the world’s most invasive shrubs, the Hoggs also introduced Americans to the wonders of kudzu, a dense vine that can grow a foot a day.

Impressed by the vigor of kudzu, agriculturalists recommended using these plants to provide animal fodder and prevent soil erosion. In the 1930s, the government was even paying Southern farmers $8 per acre to plant kudzu. Today it is known as the “vine that ate the South,” because of the way it covers huge tracts of land in a green blanket of death. And Japanese knotweed is still spreading, colonizing entire habitats from Mississippi to Alaska, where only the Arctic tundra holds it back from world domination.

Knotweed has also reached Australia, a country that has been ground zero for the worst excesses of invasive species. In the 19th century, the British imported non-native animals such as rabbits, cats, goats, donkeys, pigs, foxes and camels, causing mass extinctions of Australia’s native mammal species. Australians are still paying the price; there are more rabbits in the country today than wombats, more camels than kangaroos.

Yet the lesson wasn’t learned. In the 1930s, scientists in both Australia and the U.S. decided to import the South American cane toad as a form of biowarfare against beetles that eat sugar cane. The experiment failed, and it turned out that the cane toad was poisonous to any predator that ate it. There’s also the matter of the 30,000 eggs it can lay at a time. Today, the cane toad can be found all over northern Australia and south Florida.

So is there anything we can do once an invasive species has taken up residence? The answer is yes, but it requires more than just fences, traps and pesticides; it means changing human incentives. Today, for instance, the voracious Indo-Pacific lionfish is gobbling up local fish in the west Atlantic, while the Asian carp threatens the ecosystem of the Great Lakes. There is only one solution: We must eat them, dear reader. These invasive fish can be grilled, fried or consumed as sashimi, and they taste delicious. Likewise, kudzu makes great salsa, and Japanese knotweed can be treated like rhubarb. Eat for America and save the environment.

The Dark Lore of Black Cats

Ever since they were worshiped in ancient Egypt, cats have occupied an uncanny place in the world’s imagination

For the Wall Street Journal’s “Historically Speaking” column

ILLUSTRATION: THOMAS FUCHS

As Halloween approaches, decorations featuring scary black cats are starting to make their seasonal appearance. But what did the black cat ever do to deserve its reputation as a symbol of evil? Why is it considered bad luck to have a black cat cross your path?

It wasn’t always this way. In fact, the first human-cat interactions were benign and based on mutual convenience. The invention of agriculture in the Neolithic era led to surpluses of grain, which attracted rodents, which in turn motivated wild cats to hang around humans in the hope of catching dinner. Domestication soon followed: The world’s oldest pet cat was found in a 9,500-year-old grave in Cyprus, buried alongside its human owner.

According to the Roman writer Polyaenus, who lived in the second century A.D., the Egyptian veneration of cats led to disaster at the Battle of Pelusium in 525 B.C. The invading Persian army carried cats on the front lines, rightly calculating that the Egyptians would rather accept defeat than kill a cat.

The Egyptians were unique in their extreme veneration of cats, but they weren’t alone in regarding them as having a special connection to the spirit world. In Greek mythology the cat was a familiar of Hecate, goddess of magic, sorcery and witchcraft. Hecate’s pet had once been a serving maid named Galanthis, who was turned into a cat as punishment by the goddess Hera for being rude.

When Christianity became the official religion of Rome in 380, the association of cats with paganism and witchcraft made them suspect. Moreover, the cat’s independence suggested a willful rebellion against the teaching of the Bible, which said that Adam had dominion over all the animals. The cat’s reputation worsened during the medieval era, as the Catholic Church battled against heresies and dissent. Fed lurid tales by his inquisitors, in 1233 Pope Gregory IX issued a papal bull, “Vox in Rama,” which accused heretics of using black cats in their nighttime sex orgies with Lucifer—who was described as half-cat in appearance.

In Europe, countless numbers of cats were killed in the belief that they could be witches in disguise. In 1484, Pope Innocent VIII fanned the flames of anti-cat prejudice with his papal bull on witchcraft, “Summis Desiderantes Affectibus,” which stated that the cat was “the devil’s favorite animal and idol of all witches.”

The Age of Reason ought to have rescued the black cat from its pariah status, but superstitions die hard. (How many modern apartment buildings lack a 13th floor?) Cats had plenty of ardent fans among 19th-century writers, including Charles Dickens and Mark Twain, who wrote, “I simply can’t resist a cat, particularly a purring one.” But Edgar Allan Poe, the master of the gothic tale, felt otherwise: In his 1843 story “The Black Cat,” the spirit of a dead cat drives its killer to madness and destruction.

So pity the poor black cat, which through no fault of its own has gone from being an instrument of the devil to the convenient tool of the horror writer—and a favorite Halloween cliché.

Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient empire in modern-day Iraq where the world’s first cities emerged in the 4th millennium BC, up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 BC. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

WSJ Historically Speaking: The Miseries of Travel

ILLUSTRATION: THOMAS FUCHS

Today’s jet passengers may think they have it bad, but delay and discomfort have been a part of journeys since the Mayflower

Fifty years ago, on September 30, 1968, the world’s first 747 Jumbo Jet rolled out of Boeing’s plant in Everett, Washington, north of Seattle. It was hailed as the future of commercial air travel, complete with fine dining, live piano music and glamorous stewardesses. And perhaps we might still be living in that future, were it not for the 1978 Airline Deregulation Act signed into law by President Jimmy Carter.

Deregulation was meant to increase the competitiveness of the airlines, while giving passengers more choice about the prices they paid. It succeeded in greatly expanding the accessibility of air travel, but at the price of making it a far less luxurious experience. Today, flying is a matter of “calculated misery,” as Columbia Law School professor Tim Wu put it in a 2014 article in the New Yorker. Airlines deliberately make travel unpleasant in order to force economy passengers to pay extra for things that were once considered standard, like food and blankets.

So it has always been with mass travel, since its beginnings in the 17th century: a test of how much discomfort and delay passengers are willing to endure. For the English Puritans who sailed to America on the Mayflower in 1620, light and ventilation were practically non-existent, the food was terrible and the sanitation primitive. All 102 passengers were crammed into a tiny living area just 80 feet long and 20 feet wide. To cap it all, the Mayflower took 66 days to arrive instead of the usual 47 for a trans-Atlantic crossing and was 600 miles off course from its intended destination of Virginia.

The introduction of the commercial stage coach in 1610, by a Scottish entrepreneur who offered trips between Edinburgh and Leith, made it easier for the middle classes to travel by land. But it was still an expensive and unpleasant experience. Before the invention of macadam roads—which rely on layers of crushed stone to create a flat and durable surface—in Britain in the 1820s, passengers sat cheek by jowl on springless benches, in a coach that trundled along at around five miles per hour.

The new paving technology improved the travel times but not necessarily the overall experience. Charles Dickens had already found fame with his comic stories of coach travel in “The Pickwick Papers” when he and Mrs. Dickens traveled on an American stage coach in Ohio in 1842. They paid to have the coach to themselves, but the journey was still rough: “At one time we were all flung together in a heap at the bottom of the coach.” Dickens chose to go by rail for the next leg of the trip, which wasn’t much better: “There is a great deal of jolting, a great deal of noise, a great deal of wall, not much window.”

Despite its primitive beginnings, 19th-century rail travel evolved to offer something revolutionary to its paying customers: quality service at an affordable price. In 1868, the American inventor George Pullman introduced his new designs for sleeping and dining cars. For a modest extra fee, the distinctive green Pullman cars provided travelers with hotel-like accommodation, forcing rail companies to raise their standards on all sleeper trains.

By contrast, the transatlantic steamship operators pampered their first-class passengers and abused the rest. In 1879, a reporter at the British Pall Mall Gazette sailed Cunard’s New York to Liverpool route in steerage in order to “test [the] truth by actual experience.” He was appalled to find that passengers were treated worse than cattle. No food was provided, “despite the fact that the passage is paid for.” The journalist noted that two steerage passengers “took one look at the place” and paid for an upgrade. I think we all know how they felt.

https://www.wsj.com/articles/the-miseries-of-travel-1537455854

WSJ Historically Speaking: Poison and Politics

From ‘cantarella’ to polonium, governments have used toxins to terrorize and kill their enemies

ILLUSTRATION: THOMAS FUCHS

Among the pallbearers at Senator John McCain’s funeral in Washington last weekend was the Russian dissident Vladimir Kara-Murza. Mr. Kara-Murza is a survivor of two poisoning attempts, in 2015 and 2017, which he believes were intended as retaliation for his activism against the Putin regime.

Indeed, Russia is known or suspected to be responsible for several notorious recent poisoning cases, including the attempted murder this past March of Sergei Skripal, a former Russian spy living in Britain, and his daughter Yulia with the nerve agent Novichok. They survived the attack, but several months later a British woman died of Novichok exposure a few miles from where the Skripals lived.

Poison has long been a favorite tool of brutal statecraft: It both terrorizes and kills, and it can be administered without detection. The Arthashastra, an ancient Indian political treatise that out-Machiavels Machiavelli, contains hundreds of recipes for toxins, as well as advice on when and how to use them to eliminate an enemy.

Most royal and imperial courts of the classical world were also awash with poison. Though it is impossible to prove so many centuries later, the long list of putative victims includes Alexander the Great (poisoned wine), Emperor Augustus (poisoned figs) and Emperor Claudius (poisoned mushrooms), as well as dozens of royal heirs, relatives, rivals and politicians. King Mithridates of Pontus, an ancient Hellenistic kingdom, was so paranoid—having survived a poison attempt by his own mother—that he took daily microdoses of every known toxin in order to build up his immunity.

Poisoning reached its next peak during the Italian Renaissance. Every ruling family, from the Medicis to the Viscontis, either fell victim to poison or employed it as a political weapon. The Borgias were even reputed to have their own secret recipe, a variation of arsenic called “cantarella.” Although a large number of their rivals conveniently dropped dead, the Borgias were small fry compared with the republic of Venice. The records of the Venetian Council of Ten reveal that a secret poison program went on for decades. Remarkably, two victims are known to have survived their assassination attempts: Count Francesco Sforza in 1450 and the Ottoman Sultan Mehmed II in 1477.

In the 20th century, the first country known to have established a targeted poisoning program was Russia under the Bolsheviks. According to Boris Volodarsky, a former Russian agent, Lenin ordered the creation of a poison laboratory called the “Special Room” in 1921. By the Cold War, the one-room lab had evolved into an international factory system staffed by hundreds, possibly thousands of scientists. Their specialty was untraceable poisons delivered by ingenious weapons—such as a cigarette packet made in 1954 that could fire bullets filled with potassium cyanide.

In 1978, the prizewinning Bulgarian writer Georgi Markov, then working for the BBC in London, was killed by an umbrella tip that shot a pellet containing the poison ricin into his leg. After the international outcry, the Soviet Union toned down its poisoning efforts but didn’t end them. And Putin’s Russia has continued to use similar techniques. In 2006, according to an official British inquiry, Russian secret agents murdered the ex-spy Alexander Litvinenko by slipping polonium into his drink during a meeting at a London hotel. It was the beginning of a new wave of poisonings whose end is not yet in sight.

WSJ Historically Speaking: The Struggle Before #MeToo

Today’s women are not the first to take a public stand against sexual assault and harassment

Rosa Parks in 1955 PHOTO: GETTY IMAGES

Since it began making headlines last year, the #MeToo movement has expanded into a global rallying cry. The campaign has many facets, but its core message is clear: Women who are victims of sexual harassment and assault still face too many obstacles in their quest for justice.

How much harder it was for women in earlier eras is illustrated perfectly by Emperor Constantine’s 326 edict on rape and abduction. While condemning both, the law assumed that all rape victims deserved punishment for their failure to resist more forcefully. The best outcome for the victim was disinheritance from her parents’ estate; the worst, death by burning.

In the Middle Ages, a rape victim was more likely to be blamed than believed, unless she suffered death or dismemberment in the attack. That makes the case of the Englishwoman Isabella Plomet all the more remarkable. In 1292, Plomet went to her doctor Ralph de Worgan to be treated for a leg problem. He made her drink a sleeping drug and then proceeded to rape her while she was unconscious.

It’s likely that Worgan, a respected pillar of local society, had relied for years on the silence of his victims. But Plomet’s eloquence in court undid him: He was found guilty and fined. The case was a landmark in medieval law, broadening the definition of rape to include nonconsent through intoxication.

But prejudice against the victims of sexual assault was slow to change. In Catholic Europe, notions of family honor and female reputation usually meant that victims had to marry their rapists or be classed as ruined. This was the origin of the most famous case of the 17th century. In 1611, Artemisia Gentileschi and her father Orazio brought a suit in a Roman court against her art teacher, Agostino Tassi, for rape.

Although Tassi had a previous criminal record, as a “dishonored” woman it was Gentileschi who had to submit to torture to prove that she was telling the truth. She endured an eight-month trial to see Tassi convicted and temporarily banished from Rome. “Cleared” by her legal victory, Gentileschi refused to let the attack define her or determine the rest of her life. She is now regarded as one of the greatest artists of the Baroque era.

One class of victims who had no voice and no legal recourse were free and enslaved black women in pre-Civil War America. Their stories make grim reading. In 1855, Celia, an 18-year-old slave in Missouri, killed her master when he attempted to rape her. At her trial she insisted—through her lawyers, since she was barred from testifying—that the right to self-defense extended to all women. The court disagreed, and Celia was executed—but not before making a successful prison break and almost escaping.

Change was still far off in 1931, when the 18-year-old Rosa Parks, working as a housekeeper, was pounced on by her white employer. As she later recalled, “He offered me a drink of whiskey, which I promptly and vehemently refused. He moved nearer to me and put his hand on my waist.” She managed to fight him off, and in a larger sense Parks never stopped fighting. She became a criminal investigator for the NAACP, helping black victims of white sexual assault to press charges.

Rosa Parks is often referred to as the “first lady of civil rights,” in recognition of her famous protest on a segregated bus in Montgomery, Alabama in 1955. She should also be remembered as one of the unsung heroines in the long prehistory of #MeToo.

WSJ Historically Speaking: When Royal Love Affairs Go Wrong

From Cleopatra to Edward VIII, monarchs have followed their hearts—with disastrous results.

ILLUSTRATION: THOMAS FUCHS

“Ay me!” laments Lysander in Shakespeare’s “A Midsummer Night’s Dream.” “For aught that I could ever read, / Could ever hear by tale or history, / The course of true love never did run smooth.” What audience would disagree? Thwarted lovers are indeed the stuff of history and art—especially when the lovers are kings and queens.

But there were good reasons why the monarchs of old were not allowed to follow their hearts. Realpolitik and royal passion do not mix, as Cleopatra VII (69-30 B.C.), the anniversary of whose death falls on Aug. 12, found to her cost. Her theatrical seduction of and subsequent affair with Julius Caesar insulated Egypt from Roman imperial designs. But in 41 B.C., she let her heart rule her head and fell in love with Mark Antony, who was fighting Caesar’s adopted son Octavian for control of Rome.

Cleopatra’s demand that Antony divorce his wife Octavia—sister of Octavian—and marry her instead was a catastrophic misstep. It made Egypt the target of Octavian’s fury, and forced Cleopatra into fighting Rome on Antony’s behalf. The couple’s defeat at the sea battle of Actium in 31 B.C. didn’t only end in personal tragedy: the 300-year-old Ptolemaic dynasty was destroyed, and Egypt was reduced to a Roman province.

In Shakespeare’s play “Antony and Cleopatra,” Antony laments, “I am dying, Egypt, dying.” It is a reminder that, as Egypt’s queen, Cleopatra was the living embodiment of her country; their fates were intertwined. That is why royal marriages have usually been inseparable from international diplomacy.

In 1339, when Prince Pedro of Portugal fell in love with his wife’s Castilian lady-in-waiting, Inés de Castro, the problem wasn’t the affair per se but the opportunity it gave to neighboring Castile to meddle in Portuguese politics. In 1355, Pedro’s father, King Afonso IV, took the surest way of separating the couple—who by now had four children together—by having Inés murdered. Pedro responded by launching a bloody civil war against his father that left northern Portugal in ruins. The dozens of romantic operas and plays inspired by the tragic love story neglect to mention its political repercussions; for decades afterward, the Portuguese throne was weak and the country divided.

Perhaps no monarchy in history bears more scars from Cupid’s arrow than the British. From Edward II (1284-1327), whose poor choice of male lovers unleashed murder and mayhem on the country—he himself was allegedly killed with a red-hot poker—to Henry VIII (1491-1547), who bullied and butchered his way through six wives and destroyed England’s Catholic way of life in the process, British rulers have been remarkable for their willingness to place personal happiness above public responsibility.

Edward VIII (1894-1972) was a chip off the old block, in the worst way. The moral climate of the 1930s couldn’t accept the King of England marrying a twice-divorced American. Declaring he would have Wallis Simpson or no one, Edward plunged the country into crisis by abdicating in 1936. With European monarchies falling on every side, Britain’s suddenly looked extremely vulnerable. The current Queen’s father, King George VI, quite literally saved it from collapse.

According to a popular saying, “Everything in the world is about sex except sex. Sex is about power.” That goes double when the lovers wear royal crowns.

The Sunday Times: No more midlife crisis – I’m riding the U-curve of happiness

Evidence shows people become happier in their fifties, but achieving that takes some soul-searching

I used not to believe in the “midlife crisis”. I am ashamed to say that I thought it was a convenient excuse for self-indulgent behaviour — such as splurging on a Lamborghini or getting buttock implants. So I wasn’t even aware that I was having one until earlier this year, when my family complained that I had become miserable to be around. I didn’t shout or take to my bed, but five minutes in my company was a real downer. The closer I got to my 50th birthday, the more I radiated dissatisfaction.

Can you be simultaneously contented and discontented? The answer is yes. Surveys of “national wellbeing” in several countries, including those conducted in the UK by the Office for National Statistics, have revealed a fascinating U-curve in relation to happiness and age. In Britain, feelings of stress and anxiety appear to peak at 49 and subsequently fade as the years increase. Interestingly, a 2012 study showed that chimpanzees and orang-utans exhibit a similar U-curve of happiness as they reach middle age.

On a rational level, I wasn’t the least bit disappointed with my life. The troika of family, work and friends made me very happy. And yet something was eating away at my peace of mind. I regarded myself as a failure — not in terms of work but as a human being. Learning that I wasn’t alone in my daily acid bath of gloom didn’t change anything.

One of F Scott Fitzgerald’s most memorable lines is: “There are no second acts in American lives.” It’s so often quoted that it’s achieved the status of a truism. It’s often taken to be an ironic commentary on how Americans, particularly men, are so frightened of failure that they cling to the fiction that life is a perpetual first act. As I thought about the line in relation to my own life, Fitzgerald’s meaning seemed clear. First acts are about actions and opportunities. There is hope, possibility and redemption. Second acts are about reactions and consequences.

Old habits die hard, however. I couldn’t help conducting a little research into Fitzgerald’s life. What was the author of The Great Gatsby really thinking when he wrote the line? Would it even matter?

The answer turned out to be complicated. As far as the quotation goes, Fitzgerald actually wrote the reverse. The line appears in a 1935 essay entitled My Lost City, about his relationship with New York: “I once thought that there were no second acts in American lives, but there was certainly to be a second act to New York’s boom days.”

It reappeared in the notes for his Hollywood novel, The Love of the Last Tycoon, which was half finished when he died in 1940, aged 44. Whatever he had planned for his characters, the book was certainly meant to have been Fitzgerald’s literary comeback — his second act — after a decade of drunken missteps, declining book sales and failed film projects.

Fitzgerald may not have subscribed to the “It’s never too late to be what you might have been” school of thought, but he wasn’t blind to reality. Of course he believed in second acts. The world is full of middle-aged people who successfully reinvented themselves a second or even third time. The mercurial rise of Emperor Claudius (10BC to AD54) is one of the earliest historical examples of the true “second act”.

According to Suetonius, Claudius’s physical infirmities had made him the butt of scorn among his powerful family. But his lowly status saved him after the assassination of his nephew, Caligula. The plotters found the 50-year-old Claudius cowering behind a curtain. On the spur of the moment, instead of killing him, as they did Caligula’s wife and daughter, the plotters decided the stumbling and stuttering scion of the Julio-Claudian dynasty could be turned into a puppet emperor. It was a grave miscalculation. Claudius seized on his changed circumstances. The bumbling persona was dropped and, although flawed, he became a forceful and innovative ruler.

Mostly, however, it isn’t a single event that shapes life after 50 but the willingness to stay the course long after the world has turned away. It’s extraordinary how the granting of extra time can turn tragedy into triumph. In his heyday, General Mikhail Kutuzov was hailed as Russia’s greatest military leader. But by 1800 the 55-year-old was prematurely aged. Stiff-limbed, bloated and blind in one eye, Kutuzov looked more suited to play the role of the buffoon than the great general. He was Alexander I’s last choice to lead the Russian forces at the Battle of Austerlitz in 1805, but was the first to be blamed for the army’s defeat.

Kutuzov was relegated to the sidelines after Austerlitz. He remained under official disfavour until Napoleon’s army was halfway to Moscow in 1812. Only then, with the army and the aristocracy begging for his recall, did the tsar agree to his reappointment. Thus, in Russia’s hour of need it ended up being Kutuzov, the disgraced general, who saved the country.

Winston Churchill had a similar apotheosis in the Second World War. For most of the 1930s he was considered a political has-been by friends and foes alike. His elevation to prime minister in 1940 at the age of 65 changed all that, of course. But had it not been for the extraordinary circumstances created by the war, Robert Rhodes James’s Churchill: A Study in Failure, 1900-1939 would have been the epitaph rather than the prelude to the greatest chapter in his life.

It isn’t just generals and politicians who can benefit from second acts. For writers and artists, particularly women, middle age can be extremely liberating. The Booker prize-winning novelist Penelope Fitzgerald published her first book at 59 after a lifetime of teaching while supporting her children and alcoholic husband. Thereafter she wrote at a furious pace, producing nine novels and three biographies before she died at 83.

I could stop right now and end with a celebratory quote from Morituri Salutamus by the American poet Henry Wadsworth Longfellow: “For age is opportunity no less/ than youth itself, though in another dress, / And as the evening twilight fades away / The sky is filled with stars, invisible by day.”

However, that isn’t — and wasn’t — what was troubling me in the first place. I don’t think the existential anxieties of middle age are caused or cured by our careers. Sure, I could distract myself with happy thoughts about a second act where I become someone who can write a book a year rather than one a decade. But that would still leave the problem of the flesh-and-blood person I had become in reality. What to think of her? It finally dawned on me that this had been my fear all along: it doesn’t matter which act I am in; I am still me.

My funk lifted once the big day rolled around. I suspect that joining a gym and going on a regular basis had a great deal to do with it. But I had also learnt something valuable during these past few months. Worrying about who you thought you would be or what you might have been fills a void but leaves little space for anything else. It’s coming to terms with who you are right now that really matters.


WSJ Historically Speaking: In Awe of the Grand Canyon

Since the 16th century, travelers have recorded the overwhelming impact of a natural wonder.

ILLUSTRATION BY THOMAS FUCHS

Strange as it may sound, it was watching Geena Davis and Susan Sarandon in the tragic final scene of “Thelma & Louise” (1991) that convinced me I had to go to the Grand Canyon one day and experience its life-changing beauty. Nearly three decades have passed, but I’m finally here. Instead of a stylish 1966 Ford Thunderbird, however, I’m driving a mammoth RV, with my family in tow.

The overwhelming presence of the Grand Canyon is just as I dreamed. Yet I’m acutely aware of how one-sided the relationship is. As the Pulitzer Prize-winning poet Carl Sandburg wrote in “Many Hats” in 1928: “For each man sees himself in the Grand Canyon—each one makes his own Canyon before he comes.”

The first Europeans to encounter the Canyon were Spanish conquistadors searching for the legendary Seven Golden Cities of Cibola. In 1540, Hopi guides took a small scouting party led by García López de Cárdenas to the South Rim (60 miles north of present-day Williams, Ariz.). In Cárdenas’s mind, the Canyon was a route to riches. After trying for three days to find a path to reach the river below, he cut his losses in disgust and left. Cárdenas saw no point to the Grand Canyon if it failed to yield any treasure.

Three centuries later, in 1858, the first Euro-American to follow in Cárdenas’s footsteps, Lt. Joseph Christmas Ives of the U.S. Army Corps of Topographical Engineers, had a similar reaction. In his official report, Ives waxed lyrical about the magnificent scenery but concluded, “The region is, of course, altogether valueless….Ours has been the first, and will doubtless be the last, party of whites to visit this profitless locality.”

Americans only properly “discovered” the Grand Canyon through the works of artists such as Thomas Moran. A devotee of the Hudson River School of painters, Moran found his spiritual and artistic home in the untamed landscapes of the West. His romantic pictures awakened the public to the natural wonder in their midst. Eager to see the real thing, visitors began arriving in growing numbers, and the trickle turned into a stream by the late 1880s.

The effusive reactions to the Canyon recorded by tourists who made the arduous trek from Flagstaff, Ariz. (a railway to Grand Canyon Village was only built in 1901) have become a familiar refrain: “Not for human needs was it fashioned, but for the abode of gods…. To the end it effaced me,” wrote Harriet Monroe, the founder of Poetry magazine, in 1899.

But there was one class of people who were apparently insensible to the Canyon: copper miners. Watching their thoughtless destruction of the landscape, Monroe wondered, “Do they cease to feel it?” President Theodore Roosevelt feared so, and in 1908 he made an executive decision to protect 800,000 acres from exploitation by creating the Grand Canyon National Monument.

Roosevelt’s farsightedness may have put a crimp in the profits of mining companies, but it paid dividends in other ways. By the 1950s, the Canyon had become a must-see destination, attracting visitors from all over the world. Among them were the tragic Sylvia Plath, author of “The Bell Jar,” and her husband, Ted Hughes, the future British Poet Laureate. Thirty years later, the visit to the Canyon still haunted Hughes: “I never went back and you are dead. / But at odd moments it comes, / As if for the first time.” He is not alone, I suspect, in never fully leaving the Canyon behind.

WSJ Historically Speaking: The Power of Pamphlets: A Brief History

As the Reformation passes a milestone, a look at a key weapon of change

ILLUSTRATION: THOMAS FUCHS

The Reformation began on Oct. 31, 1517, when Martin Luther, as legend has it, nailed his “95 Theses” to a church door in Wittenberg, Germany. Whatever he actually did—he may have just attached the papers to the door or delivered them to clerical authorities—Luther was protesting Catholics’ sale of “indulgences” to give sinners at least partial absolution. The protest immediately went viral, to use a modern term, thanks to the new “social media” of the day—the printed pamphlet.

The development of the printing press around 1440 had set the stage: In the famous words of the German historian Bernd Moeller, “Without printing, no Reformation.” But the pamphlet deserves particular recognition. Unlike books, pamphlets were perfect for the mass market: easy to print and therefore cheap to buy.