Historically Speaking: Real-Life Games of Thrones

From King Solomon to the Shah of Persia, rulers have used stately seats to project power.

The Wall Street Journal, April 22, 2019

ILLUSTRATION: THOMAS FUCHS

“Uneasy lies the head that wears a crown,” complains the long-suffering King Henry IV in Shakespeare. But that is not a monarch’s only problem; uneasy, too, is the bottom that sits on a throne, for thrones are often dangerous places to be. That is why the image of a throne made of swords, in the HBO hit show “Game of Thrones” (which last week began its eighth and final season), has served as such an apt visual metaphor. It strikingly symbolizes the endless cycle of violence between the rivals for the Iron Throne, the seat of power in the show’s continent of Westeros.

In real history, too, virtually every state once put its leader on a throne. The English word comes from the ancient Greek “thronos,” meaning “stately seat,” but the thing itself is much older. Archaeologists working at the 4th millennium B.C. site of Arslantepe, in eastern Turkey, where a pre-Hittite Early Bronze Age civilization flourished, recently found evidence of what is believed to be the world’s oldest throne. It seems to have consisted of a raised bench which enabled the ruler to display his or her elevated status by literally sitting above all visitors to the palace.

Thrones were also associated with divine power: The famous 18th-century B.C. basalt stele inscribed with the law code of King Hammurabi of Babylon, which can be seen at the Louvre, depicts the king receiving the laws directly from the sun god Shamash, who is seated on a throne.

Naturally, because they were invested with so much religious and political symbolism, thrones often became a prime target in war. According to Jewish legend, King Solomon’s spectacular gold and ivory throne was stolen first by the Egyptians, who then lost it to the Assyrians, who subsequently gave it up to the Persians, whereupon it became lost forever.

In India, King Solomon’s throne was reimagined in the early 17th century by the Mughal Emperor Shah Jahan as the jewel-and-gold-encrusted Peacock Throne, featuring the 186-carat Koh-i-Noor diamond. (Shah Jahan also commissioned the Taj Mahal.) This throne also came to an unfortunate end: It was stolen during the sack of Delhi by the Shah of Iran and taken back to Persia. A mere eight years later, the Shah was assassinated by his own bodyguards and the Peacock Throne was destroyed, its valuable decorations stolen.

Perhaps the moral of the story is to keep things simple. Around 1300, King Edward I of England commissioned a coronation throne made of oak. For the past 700 years it has supported the heads and backsides of 38 British monarchs during the coronation ceremony at Westminster Abbey. No harm has ever come to it, save for the pranks of a few very naughty choirboys, one of whom carved on the back of the throne: “P. Abbott slept in this chair 5-6 July 1800.”

Historically Speaking: Overcoming the Labor of Calculation

Inventors tried many times over the centuries to build an effective portable calculator—but no one succeeded until Jerry Merryman.

The Wall Street Journal, April 5, 2019

ILLUSTRATION: THOMAS FUCHS

The world owes a debt of gratitude to Jerry Merryman, who died on Feb. 27 at the age of 86. It was Merryman who, in 1965, worked with two other engineers at Texas Instruments to invent the world’s first pocket calculator. Today we all carry powerful calculators on our smartphones, but Merryman was the first to solve a problem that had plagued mathematicians at least since Gottfried Leibniz, one of the creators of calculus in the 17th century, who observed: “It is unworthy of excellent men to lose hours like slaves in the labor of calculation, which could safely be relegated to anyone else if machines were used.”

In ancient times, the only alternative to mental arithmetic was the abacus, with its simple square frame, wooden rods and movable beads. Most civilizations, including the Romans, Chinese, Indians and Aztecs, had their own version—from counting boards for keeping track of sums to more sophisticated designs that could calculate square roots.

We know of just one counting machine from antiquity that was more complex. The 2nd-century B.C. Antikythera Mechanism, discovered in 1901 in the remains of an ancient Greek shipwreck, was a 30-geared bronze calculator that is believed to have been used to track the movement of the heavens. Despite its ingenious construction, the Antikythera Mechanism had a limited range of capabilities: It could calculate dates and constellations and little else.

In the 15th century, Leonardo da Vinci drew up designs for a mechanical calculating device that consisted of 13 “digit-registering” wheels. In 1967, IBM commissioned the Italian scholar and engineer Roberto Guatelli to build a replica based on the sketches. Guatelli believed that Leonardo had invented the first calculator, but other experts disagreed. In any case, it turned out that the metal wheels would have generated so much friction that the frame would probably have caught fire.

Technological advances in clockmaking helped the French mathematician and inventor Blaise Pascal to build the first working mechanical calculator, called the Pascaline, in 1644. It wasn’t a fire hazard, and it could add and subtract. But the Pascaline was very expensive to make, fragile in the extreme, and far too limited to be really useful. Pascal made only 50 in his lifetime, of which fewer than 10 survive today.

Perhaps the greatest calculator never to see the light of day was designed by Charles Babbage in the early 19th century. He actually designed two machines—the Difference Engine, which could perform arithmetic, and the Analytical Engine, which was theoretically capable of a range of functions from direct multiplication to parallel processing. Ada, Countess of Lovelace, the daughter of the poet Lord Byron, made important contributions to the development of Babbage’s Analytical Engine. According to the historian Walter Isaacson, Lovelace realized that any process based on logical symbols could be used to represent entities other than quantity—the same principle used in modern computing.

Unfortunately, despite many years and thousands of pounds of government funding, Babbage only ever managed to build a small prototype of the Difference Engine. Even for most of the 20th century, his dream of a portable calculator stayed a dream, while the go-to instrument for doing large sums remained the slide rule. It took Merryman’s invention to allow us all to become “excellent men” and leave the labor of calculation to the machines.

Historically Speaking: The Immortal Charm of Daffodils

The humble flower has been a favorite symbol in myth and art since ancient times

The Wall Street Journal, March 22, 2019

ILLUSTRATION: THOMAS FUCHS

On April 15, 1802, the poet William Wordsworth and his sister Dorothy were enjoying a spring walk through the hills and vales of the English Lake District when they came across a field of daffodils. Dorothy was so moved that she recorded the event in her journal, noting how the flowers “tossed and reeled and danced and seemed as if they verily laughed with the wind that blew upon them over the Lake.” And William decided there was nothing for it but to write a poem, which he published in its final version in 1815. “I Wandered Lonely as a Cloud” is one of his most famous reflections on the power of nature: “For oft, when on my couch I lie/In vacant or in pensive mood,/They flash upon that inward eye/Which is the bliss of solitude;/And then my heart with pleasure fills,/And dances with the daffodils.”

Long dismissed as a common field flower, unworthy of serious attention by the artist, poet or gardener, the daffodil enjoyed a revival thanks in part to Wordsworth’s poem. The painters Claude Monet, Berthe Morisot and Vincent van Gogh were among its 19th-century champions. Today, the daffodil is so ubiquitous, in gardens and in art, that it’s easy to overlook.

But the flower deserves respect for being a survivor. Every part of the narcissus, to use its scientific name, is toxic to humans, animals and even other flowers, and yet—as many cultures have noted—it seems immortal. There are still swaths of daffodils on the lakeside meadow where the Wordsworths ambled two centuries ago.

The daffodil originated in the ancient Mediterranean, where it was regarded with deep ambivalence. The ancient Egyptians associated narcissi with the idea of death and resurrection, using them in tomb paintings. The Greeks also gave the flower contrary mythological meanings. Its scientific name comes from the story of Narcissus, a handsome youth who faded away after being cursed into falling in love with his own image. At the last moment, the gods saved him from death by granting him a lifeless immortality as a daffodil. In another Greek myth, the daffodil’s luminous beauty was used by Hades to lure Persephone away from her friends so that he could abduct her into the underworld. During her four-month captivity the only flower she saw was the asphodelus, which grew in abundance on the fields of Elysium—and whose name inspired the English derivative “daffodil.”

But it isn’t only Mediterranean cultures that have fixated on the daffodil’s mysterious alchemy of life and death. A fragrant variety of the narcissus—the sweet-smelling paper white—traveled along the Silk Road to China. There, too, the flower appeared to encapsulate the happy promise of spring, but also more painful emotions such as loss and yearning. The famous Ming Dynasty scroll painting “Narcissi and Plum Blossoms” by Qiu Ying (ca. 1494-1552), for instance, is a study in contrasts, juxtaposing exquisitely rendered flowers with the empty desolation of winter.

The English botanist John Parkinson introduced the traditional yellow variety from Spain in 1618. Aided by a soggy but temperate climate, daffodils quickly spread across lawns and fields, and their foreign origins were soon forgotten. By the 19th century they had become quintessentially British—so much so that missionaries and traders, nostalgic for home, planted bucketfuls of bulbs wherever they went. Their legacy in North America is a burst of color each year just when the browns and grays of winter have worn out their welcome.

Historically Speaking: Fantasies of Alien Life

Human beings have never encountered extra-terrestrials, but we’ve been imagining them for thousands of years

The Wall Street Journal, March 7, 2019

Fifty years ago this month, Kurt Vonnegut published “Slaughterhouse-Five,” his classic semi-autobiographical, quasi-science fiction novel about World War II and its aftermath. The story follows the adventures of Billy Pilgrim, an American soldier who survives the bombing of Dresden in 1945, only to be abducted by aliens from the planet Tralfamadore and exhibited in their zoo. Vonnegut’s absurd-looking Tralfamadorians (they resemble green toilet plungers) are essentially vehicles for his meditations on the purpose of life.

Some readers may dismiss science fiction as mere genre writing. But the idea that there may be life on other planets has engaged many of history’s greatest thinkers, starting with the ancient Greeks. On the pro-alien side were the Pythagoreans, a fifth-century B.C. sect, which argued that life must exist on the moon; in the third century B.C., the Epicureans believed that there was an infinite number of life-supporting worlds. But Plato, Aristotle and the Stoics argued the opposite. In “On the Heavens,” Aristotle specifically rejected the possibility that other worlds might exist, on the grounds that the Earth is at the center of a perfect and finite universe.

The Catholic Church sided with Plato and Aristotle: If there was only one God, there could be only one world. But in Asia, early Buddhism encouraged philosophical explorations into the idea of multiverses and parallel worlds. Buddhist influence can be seen in the 10th-century Japanese romance “The Bamboo Cutter,” whose story of a marooned moon princess and a lovelorn emperor was so popular in its time that it is mentioned in Murasaki Shikibu’s seminal novel, “The Tale of Genji.”

During the Renaissance, Copernicus’s heliocentric theory, advanced in his book “On the Revolutions of the Celestial Spheres” (1543), and Galileo’s telescopic observations of the heavens in 1610 proved that the Church’s traditional picture of the cosmos was wrong. These discoveries prompted Western thinkers to entertain the possibility of alien civilizations. From Johannes Kepler to Voltaire, imagining life on the moon (or elsewhere) became a popular pastime among advanced thinkers. In “Paradise Lost” (1667), the poet John Milton wondered “if Land be there,/Fields and Inhabitants.”

Such benign musings about extraterrestrial life didn’t survive the impact of industrialization, colonialism and evolutionary theory. In the 19th century, debates over whether aliens have souls morphed into fears about humans becoming their favorite snack food. This particular strain of paranoia reached its apogee in the alien-invasion novel “The War of the Worlds,” published in 1897 by the British writer H.G. Wells. Wells’s downbeat message—that contact with aliens would lead to a Darwinian fight for survival—resonated throughout the 20th century.

And it isn’t just science fiction writers who ponder “what if.” The physicist Stephen Hawking once compared an encounter with aliens to Christopher Columbus landing in America, “which didn’t turn out very well for the Native Americans.” More hopeful visions—such as Steven Spielberg’s 1982 film “E.T. the Extra-Terrestrial,” about a lovable alien who wants to get back home—have been exceptions to the rule.

The real mystery about aliens is the one described by the so-called “Fermi paradox.” The 20th-century physicist Enrico Fermi observed that, given the number of stars in the universe, it is highly probable that alien life exists. So why haven’t we seen it yet? As Fermi asked, “Where is everybody?”

Historically Speaking: Insuring Against Disasters

Insurance policies go back to the ancient Babylonians and were crucial in the early development of capitalism

The Wall Street Journal, February 21, 2019

ILLUSTRATION: THOMAS FUCHS

Living in a world without insurance, free from all those claim forms and high deductibles, might sound like a little bit of paradise. But the only thing worse than dealing with the insurance industry is trying to conduct business without it. In fact, the basic principle of insurance—pooling risk in order to minimize liability from unforeseen dangers—is one of the things that made modern capitalism possible.

The first merchants to tackle the problem of risk management in a systematic way were the Babylonians. The 18th-century B.C. Code of Hammurabi shows that they used a primitive form of insurance known as “bottomry.” According to the Code, merchants who took high-interest loans tied to shipments of goods could have the loans forgiven if the ship was lost. The practice benefited both traders and their creditors, who charged a premium of up to 30% on such loans.

The Athenians, realizing that bottomry was a far better hedge against disaster than relying on the Oracle of Delphi, subsequently developed the idea into a maritime insurance system. They had professional loan syndicates, official inspections of ships and cargoes, and legal sanctions against code violators.

With the first insurance schemes, however, came the first insurance fraud. One of the oldest known cases comes from Athens in the 3rd century B.C. Two men named Hegestratos and Xenothemis obtained bottomry insurance for a shipment of corn from Syracuse to Athens. Halfway through the journey they attempted to sink the ship, only to have their plan foiled by an alert passenger. Hegestratos jumped (or was thrown) from the ship and drowned. Xenothemis was taken to Athens to meet his punishment.

In Christian Europe, insurance was widely frowned upon as a form of gambling—betting against God. Even after Pope Gregory IX decreed in the 13th century that the premiums charged on bottomry loans were not usury, because of the risk involved, the industry expanded only slowly. Innovations came mainly in response to catastrophes: The Great Fire of London in 1666 led to the growth of fire insurance, while the Lisbon earthquake of 1755 did the same for life insurance.

It took the Enlightenment to bring widespread changes in the way Europeans thought about insurance. Probability became subject to numbers and statistics rather than hope and prayer. In addition to his contributions to mathematics, astronomy and physics, Edmond Halley (1656-1742), of Halley’s comet fame, developed the foundations of actuarial science—the mathematical measurement of risk. This helped to create a level playing field for sellers and buyers of insurance. By the end of the 18th century, those who abjured insurance were regarded as stupid rather than pious. Adam Smith declared that to do business without it “was a presumptuous contempt of the risk.”

But insurance only works if it can be trusted in a crisis. For the modern American insurance industry, the deadly San Francisco earthquake of 1906 was a day of reckoning. The devastation resulted in insured losses of $235 million—equivalent to $6.3 billion today. Many American insurers balked, but in Britain, Lloyd’s of London announced that every one of its customers would have their claims paid in full within 30 days. This prompt action saved lives and ensured that business would be able to go on.

And that’s why we pay our premiums: You can’t predict tomorrow, but you can plan for it.

Historically Speaking: Unenforceable Laws Against Pleasure

The 100th anniversary of Prohibition is a reminder of how hard it is to regulate consumption and display

The Wall Street Journal, January 24, 2019

ILLUSTRATION: THOMAS FUCHS

This month we mark the centennial of the ratification of the Constitution’s 18th Amendment, better known as Prohibition. But the temperance movement was active for over a half-century before winning its great prize. As the novelist Anthony Trollope discovered to his regret while touring North America in 1861-2, Maine had been dry for a decade. The convivial Englishman condemned the ban: “This law, like all sumptuary laws, must fail,” he wrote.

Sumptuary laws had largely fallen into disuse by the 19th century, but they were once a near-universal tool, used in the East and West alike to control economies and preserve social hierarchies. A sumptuary law is a rule that regulates consumption in its broadest sense, from what a person may eat and drink to what they may own, wear or display. The oldest known example, the Locrian Law Code devised by the seventh-century B.C. Greek lawgiver Zaleucus, banned all citizens of Locri (except prostitutes) from ostentatious displays of gold jewelry.

Sumptuary laws were often political weapons disguised as moral pieties, aimed at less powerful groups, particularly women. In 215 B.C., at the height of the Second Punic War, the Roman Senate passed the Lex Oppia, which (among other restrictions) banned women from owning more than a half ounce of gold. Ostensibly a wartime austerity measure, the law appeared so ridiculous 20 years later as to be unenforceable. But during debate on its repeal in 195 B.C., Cato the Elder, its strongest defender, inadvertently revealed the Lex Oppia’s true purpose: “What [these women] want is complete freedom…. Once they have achieved equality, they will be your masters.”

Cato’s message about preserving social hierarchy echoed down the centuries. As trade and economic stability returned to Europe during the High Middle Ages (1000-1300), so did the use of sumptuary laws to keep the new merchant elites in their place. By the 16th century, sumptuary laws in Europe had extended from clothing to almost every aspect of daily life. The more they were circumvented, the more specific such laws became. An edict issued by King Henry VIII of England in 1517, for example, dictated the maximum number of dishes allowed at a meal: nine for a cardinal, seven for the aristocracy and three for the gentry.

The rise of modern capitalism ultimately made sumptuary laws obsolete. Trade turned once-scarce luxuries into mass commodities that simply couldn’t be controlled. Adam Smith’s “The Wealth of Nations” (1776) confirmed what had been obvious for over a century: Consumption and liberty go hand in hand. “It is the highest impertinence,” he wrote, “to pretend to watch over the economy of private people…either by sumptuary laws, or by prohibiting the importation of foreign luxuries.”

Smith’s pragmatic view was echoed by President William Howard Taft. He opposed Prohibition on the grounds that it was coercive rather than consensual, arguing that “experience has shown that a law of this kind, sumptuary in its character, can only be properly enforced in districts in which a majority of the people favor the law.” Mass immigration in early 20th-century America had changed many cities into ethnic melting-pots. Taft recognized Prohibition as an attempt by nativists to impose cultural uniformity on immigrant communities whose attitudes toward alcohol were more permissive. But his warning was ignored, and the disastrous course of Prohibition was set.

Historically Speaking: The High Cost of Financial Panics

Roman emperors and American presidents alike have struggled to deal with sudden economic crashes

The Wall Street Journal, January 17, 2019

ILLUSTRATION: THOMAS FUCHS

On January 12, 1819, Thomas Jefferson wrote to his friend Nathaniel Macon, “I have…entire confidence in the late and present Presidents…I slumber without fear.” He did concede, though, that market fluctuations can trip up even the best governments. Jefferson was prescient: A few days later, the country plunged into a full-blown financial panic. The trigger was a collapse in the overseas cotton market, but the crisis had been building for months. The factors that led to the crash included the actions of the Second Bank of the United States, which had helped to fuel a real estate boom in the West only to reverse course suddenly and call in its loans.

The recession that followed the panic of 1819 was prolonged and severe: Banks closed, lending all but ceased and businesses failed by the thousands. By the time it was over in 1823, almost a third of the population—including Jefferson himself—had suffered irreversible losses.

As we mark the 200th anniversary of the 1819 panic, it is worth pondering the role of governments in a financial crisis. During a panic in Rome in the year 33, the emperor Tiberius’s prompt action prevented a total collapse of the city’s finances. Rome was caught in a bursting real estate bubble, with property prices falling and credit suddenly drying up. Instead of waiting it out, Tiberius ordered interest rates to be lowered and released 100 million sestertii (large brass coins) into the banking system to avoid a mass default.

But not all government interventions have been as successful or timely. In 1124, King Henry I of England attempted to restore confidence in the country’s money by having the moneyers publicly castrated and their right hands amputated for producing substandard coins. A temporary fix at best, his bloody act neither deterred people from debasing the coinage nor allayed fears over England’s creditworthiness.

On the other side of the globe, China began using paper money in 1023. Successive emperors of the Ming Dynasty (1368-1644) failed, however, to limit the number of notes in circulation or to back the money with gold or silver specie. By the mid-15th century the economy was in the grip of hyperinflationary cycles. The emperor Yingzong simply gave up on the problem: China returned to coinage just as Europe was discovering the uses of paper.

The rise of commercial paper along with paper currencies allowed European countries to develop more sophisticated banking systems. But they also led to panics, inflation and dangerous speculation—sometimes all at once, as in France in 1720, when John Law’s disastrous Mississippi Company share scheme ended in mass bankruptcies for its investors and the collapse of the French livre.

As it turns out, it is easier to predict the consequences of a crisis than it is to prevent one from happening. In 2015, the U.K.’s Centre for Economic Policy Research published a paper on the effects of 100 financial crises in 20 Western countries over the past 150 years, down to the recession of 2007-09. The authors found two consistent outcomes. The first is that politics becomes more extreme and polarized following a crisis; the second is that countries become more ungovernable as violence, protests and populist revolts overshadow the rule of law.

With the U.S. stock market having suffered its worst December since the Great Depression of the 1930s, it is worth remembering that the only thing more frightening than a financial crisis can be its aftermath.

Historically Speaking: New Year, Old Regrets

From the ancient Babylonians to Victorian England, the year’s end has been a time for self-reproach and general misery

The Wall Street Journal, January 3, 2019

ILLUSTRATION: ALAN WITSCHONKE

I don’t look forward to New Year’s Eve. When the bells start to ring, it isn’t “Auld Lang Syne” I hear but echoes from the Anglican “Book of Common Prayer”: “We have left undone those things which we ought to have done; And we have done those things which we ought not to have done.”

At least I’m not alone in my annual dip into the waters of woe. Experiencing the sharp sting of regret around the New Year has a long pedigree. The ancient Babylonians required their kings to offer a ritual apology during the Akitu festival of New Year: The king would go down on his knees before an image of the god Marduk, beg his forgiveness, insist that he hadn’t sinned against the god himself and promise to do better next year. The rite ended with the high priest giving the royal cheek the hardest possible slap.

There are sufficient similarities between the Akitu festival and Yom Kippur, Judaism’s Day of Atonement—which takes place 10 days after the Jewish New Year—to suggest that there was likely a historical link between them. Yom Kippur, however, is about accepting responsibility, with the emphasis on owning up to sins committed rather than pointing out those omitted.

In Europe, the 14th-century Middle English poem “Sir Gawain and the Green Knight” begins its strange tale on New Year’s Day. A green-skinned knight arrives at King Arthur’s Camelot and challenges the knights to strike at him, on the condition that he can return the blow in a year and a day. Sir Gawain reluctantly accepts the challenge, and embarks on a year filled with adventures. Although he ultimately survives his encounter with the Green Knight, Gawain ends up haunted by his moral lapses over the previous 12 months. For, he laments (in J.R.R. Tolkien’s elegant translation), “a man may cover his blemish, but unbind it he cannot.”

New Year’s Eve in Shakespeare’s era was regarded as a day for gift-giving rather than as a catalyst for regret. But Sonnet 30 shows that Shakespeare was no stranger to the melancholy that looking back can inspire: “I summon up remembrance of things past, / I sigh the lack of many a thing I sought, / And with old woes new wail my dear time’s waste.”

For a full dose of New Year’s misery, however, nothing beats the Victorians. “I wait its close, I court its gloom,” declared the poet Walter Savage Landor in “Mild Is the Parting Year.” Not to be outdone, William Wordsworth offered his “Lament of Mary Queen of Scots on the Eve of a New Year”: “Pondering that Time tonight will pass / The threshold of another year; /…My very moments are too full / Of hopelessness and fear.”

Fortunately, there is always Charles Dickens. In 1844, Dickens followed up the wildly successful “A Christmas Carol” with a slightly darker but still uplifting seasonal tale, “The Chimes.” Trotty Veck, an elderly messenger, takes stock of his life on New Year’s Eve and decides that he has been nothing but a burden on society. He resolves to kill himself, but the spirits of the church bells intervene, showing him a vision of what would happen to the people he loves.

Today, most Americans recognize this story as the basis of the bittersweet 1946 Frank Capra film “It’s a Wonderful Life.” As an antidote to New Year’s blues, George Bailey’s lesson holds true for everyone: “No man is a failure who has friends.”

Historically Speaking: Trees of Life and Wonder

From Saturnalia to Christmas Eve, people have always had a spiritual need for greenery in the depths of winter

The Wall Street Journal, December 13, 2018

Queen Victoria and family with their Christmas tree in 1848. PHOTO: GETTY IMAGES

My family never had a pink-frosted Christmas tree, though Lord knows my 10-year-old self really wanted one. Every year my family went to Sal’s Christmas Emporium on Wilshire Boulevard in Los Angeles, where you could buy neon-colored trees, mechanical trees that played Christmas carols, blue and white Hanukkah bushes or even a real Douglas fir if you wanted to go retro. We were solidly retro.

Decorating the Christmas tree remains one of my most treasured memories, and according to the National Christmas Tree Association, the tradition is still thriving in our digital age: In 2017 Americans bought 48.5 million real and artificial Christmas trees. Clearly, bringing a tree into the house, especially during winter, taps into something deeply spiritual in the human psyche.

Nearly every society has at some point venerated the tree as a symbol of fertility and rebirth, or as a living link between the heavens, the earth and the underworld. In the ancient Near East, “tree of life” motifs appear on pottery as early as 7000 B.C. By the second millennium B.C., variations of the motif were being carved onto temple walls in Egypt and fashioned into bronze sculptures in southern China.

The early Christian fathers were troubled by the possibility that the faithful might identify the Garden of Eden’s trees of life and knowledge, described in the Book of Genesis, with paganism’s divine trees and sacred groves. Accordingly, in 572 the Council of Braga banned Christians from participating in the Roman celebration of Saturnalia—a popular winter solstice festival in honor of Saturn, the god of agriculture, that included decking the home with boughs of holly, his sacred symbol.

It wasn’t until the late Middle Ages that evergreens received a qualified welcome from the Church, as props in the mystery plays that told the story of Creation. In Germany, mystery plays were performed on Christmas Eve, traditionally celebrated in the church calendar as the feast day of Adam and Eve. The original baubles that hung on these “paradise trees,” representing the trees in the Garden of Eden, were round wafer breads that symbolized the Eucharist.

The Christmas tree remained a northern European tradition until Queen Charlotte, the German-born wife of George III, had one erected for a children’s party at Windsor Castle in 1800. The British upper classes quickly followed suit, but the rest of the country remained aloof until 1848, when the Illustrated London News published a charming picture of Queen Victoria and her family gathered around a large Christmas tree. Suddenly, every household had to have one for the children to decorate. It didn’t take long for President Franklin Pierce to introduce the first Christmas tree to the White House, in 1853—a practice that every President has honored except Theodore Roosevelt, who in 1902 refused to have a tree on conservationist grounds. (His children objected so much to the ban that he eventually gave in.)

Many writers have tried to capture the complex feelings that Christmas trees inspire, particularly in children. Few, though, can rival T.S. Eliot’s timeless meditation on joy, death and life everlasting, in his 1954 poem “The Cultivation of Christmas Trees”: “The child wonders at the Christmas Tree: / Let him continue in the spirit of wonder / At the Feast as an event not accepted as a pretext; / So that the glittering rapture, the amazement / Of the first-remembered Christmas Tree /…May not be forgotten.”

Historically Speaking: The Tradition of Telling All

From ancient Greece to modern Washington, political memoirs have been an irresistible source of gossip about great leaders

The Wall Street Journal, November 30, 2018

ILLUSTRATION: THOMAS FUCHS

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”