The Sunday Times: Beyoncé’s NFL half-time show embraced Trump’s new America

A denim pick-up truck. Stetsons galore. And it all played out on a Texan football field. The star’s 12-minute, $20 million set arguably heralds a new cultural era

The Sunday Times

December 28, 2024

Beyoncé was joined by another US artist, Post Malone, and a jeans-clad jalopy ALEX SLITZ/GETTY IMAGES

The United States during the Reagan years in the 1980s was proud, loud, and unabashed. Americans wore diamantes to bed and ten-gallon Stetsons to dinner. They drove gas-guzzling sedans, watched MTV on cable, and boasted a president who owned a white Arabian stallion called El Alamein. It was America resurgent with brass knobs on.

Last week, for her NFL half-time Christmas Day concert, Beyoncé gave this long-discarded America its first public airing in nearly forty years. All the familiar elements were there: the bedazzling, the strutting, the clunky Yank Tanks — even the gleaming white horse which she rode into the stadium.

Beyoncé performing at the Netflix NFL Christmas Gameday halftime show NETFLIX/BEYONCÉ

Beyoncé began her set with an off-stage rendition of 16 Carriages, a defiant ode to a hardscrabble childhood, quite possibly her own. The contrast between the grim lyrics and the blindingly white country and western aesthetic was the first of several striking dissonances during the performance. The 32-time Grammy winner has an almost unique ability to channel the zeitgeist into politically charged songs that still have mainstream appeal.

American audiences know that when Beyoncé sings, she isn’t simply putting stories to music; she is communicating a series of messages that collectively form a meta-commentary about what it means to be a person of colour in white America today.

Viewers got an eyeful of the extra-white teeth, the extra-big cars, the extra-wide shoulder pads, and saw the return of 1980s Dallas chic when everybody wanted to own a pair of cowboy boots and the whole world was obsessed with who shot JR. The Guardian waxed lyrical about the joy and playfulness of Beyoncé’s routines rather than the charged racial undertones.

The wider world’s reaction to the Christmas Day concert was similarly context-free. Viewers took in the unbridled exuberance of Beyoncé’s version of Pop Americana and, like their British counterparts, saw only the 1980s-style flashiness that accompanied America’s resurgence under Ronald Reagan.

But there is more going on. There’s been a seismic shift in how America views itself. In recent years the country’s self-confidence has been dented. The combination of Black Lives Matter, Maga, #MeToo, QAnon, Stop the Steal, the Anti-Vax Movement and cancel culture in general has achieved what Vietnam, the oil crisis, 9/11, the Great Recession, and Covid-19 could not: turn America from an overly positive country into a hyper-negative one.

Americans are feeling bad about themselves, their country, and their history. Those who are not angry are anxious.

Donald Trump’s recent victory was his way of overturning that: a promise of what life could be like in the 1980s, America’s boom time. But whereas Reagan projected an open-handed, “what’s-not-to-like?” image, that of a movie star straight from a Western, now we have a loud, close-fisted, angry, defensive kind of brashness.

Trump’s election promises to impose tariffs on foreign goods, stay out of foreign conflicts, and deport more than a million illegal immigrants are a flat-out contradiction of Reagan’s pro-immigration, pro-trade, pro-American interventionism. Almost by sheer luck, the 40th president proved that the occasional arms race can be good for capitalism. But history offers multiple examples of what price controls coupled with isolationist policies do to a country.

In Beyoncé’s America, both sides can co-exist.

An image of Trump in the moments after an assassination attempt in July could define the era EVAN VUCCI

To my mind, Beyoncé was telling Americans to move on and embrace the now. It isn’t about slick LA or the liberal seaboard, but heartland Americana. Her performance was a carefully curated smorgasbord of nods and references to local and national symbols. She sang Dolly Parton’s Jolene, but her backing singers were country music singers who are also black women; she used a 200-strong marching band, but an all-black one recruited from one of the country’s historically black universities.

Beyoncé and backup singers in white cowboy hats and gowns NETFLIX/BEYONCÉ

Texas, and Houston specifically, received a royal wave with line dancing, rodeo stars, and the traditional homecoming parade, complete with car cavalcade and not one but two homecoming queens, both former Texas beauty pageant winners.

Hispanic culture was also a beneficiary of Beyoncé’s cultural largesse, with her mariachi-inspired costume decorations and the deployment of lowrider Chevrolet Impalas, a ban on which was recently lifted by the governor of California, Gavin Newsom. A civilian truck was clad in denim in homage to Beyoncé’s song, LEVII’S JEANS.

Beyoncé was joined during the performance by her 12-year-old daughter Blue Ivy DAVID J PHILLIP

That said, the appearance of an American flag shrouded in sheet plastic was one of the biggest controversies of the performance. One of America’s angry halves objected to its presence at all; the other complained that it was insufficient. The rest were anxious about its ambiguity: was it an embrace or a rejection of American values? Was Beyoncé ruining country music, a sacred American institution, with her non-white themes, rhythms and tropes, or reinventing it?

Beyoncé’s 2024 NFL halftime show performance YOUTUBE

2025 may show that neither interpretation of Beyoncé’s half-time concert was grounded in any sort of reality. I fear that Beyoncé’s “playful” references to a time when the Dukes of Hazzard could be watched without cringing were not meant to be taken literally. That America is not coming back. However, it will take a great deal more than twelve minutes of foot-stamping glory to usher in the other America, the one where self-hatred gives way to self-reinvention.

Historically Speaking: Paying With Plastic Is New, but Credit Isn’t

From the Knights Templar to the Diners Club, people have long traded in promises instead of cash

The Wall Street Journal

March 17, 2022

On March 6, Amex, Mastercard and Visa announced that they were suspending their operations in Russia. The move will deal a blow to Russian commerce, as a life tied to cash and checks can be cumbersome in the extreme.

The Mesopotamians were among the first known to grasp the usefulness of the charge account. The extensive trade between Mesopotamia and the Indus Valley made regular shipments of gold highly impracticable. Instead, traders used seals and clay tablets to keep running tallies for settlement at a future date.

Greek and Roman travelers relied on letters of creditworthiness from their personal banks. But this practice died out during the Dark Ages. The Knights Templar revived it during the 12th century as one of their services to Christians traveling to Jerusalem. Pilgrims could deposit their money at any Templar house, receive a “letter of credit” and use it to withdraw funds from the Temple stronghold in Jerusalem.

Illustration: Thomas Fuchs

The letter of credit eventually evolved into the bill of exchange, or promissory note, between banks, used for business transactions. In 1772, the London Exchange Banking Company in England offered its clients a version for everyday use: Called “circular notes,” they were issued in set denominations, could be cashed in many major cities and were guaranteed against loss and theft.

The idea was slow to catch on in the United States until a freight transport business called the American Express Company decided its real profits lay in facilitating the movement of money. Having already enjoyed considerable success with money orders, in 1891 it rolled out the American Express traveler’s check, which merely required the owner’s countersignature to be valid.

The traveler’s check was by no means the only alternative to cash. By the late 19th century, most department stores had tokens, often personalized metal key fobs, that loyal customers could present in lieu of immediate payment. After World War I, oil companies went a step further, offering “courtesy cards” that could be used at their gas stations. Airline companies and hotels did the same.

The profusion of charge cards soon became onerous. In 1946, a Brooklyn bank experimented with the Charg-It Card, which could be used at local businesses. Three years later, so the story goes, New York businessman Frank X. McNamara was dining at a restaurant with clients when he realized he was out of cash. The ensuing embarrassment inspired him to propose a new kind of charge card: one that was members-based, would work anywhere and earned its profit by charging each customer an annual fee.

The Diners Club card had more than 10,000 members by the end of its first year. The first bank to copy the idea was Bank of America in 1958. Its BankAmericard—which became Visa in 1976—allowed card owners to pay interest rather than settling their monthly bill. By the mid-’60s, other banks were scrambling to imitate what had effectively become a cash cow. The most successful competitor was a consortium of banks behind the Interbank Card, today known as Mastercard.

The first bank outside the U.S. to offer a credit card was Britain’s Barclays Bank in 1966. But by then Visa and Mastercard were already expanding to other countries, setting the stage for the global duopoly they are today.

Vladimir Putin may have difficulty charging his next holiday to his Visa. But he can still use his China-backed UnionPay card—for now.

Appeared in the March 19, 2022, print edition as ‘Paying With Plastic Is New, But Credit Isn’t’.

The Spectator – Towards Zero: the gruesome countdown to the American Civil War

The North and South had been bitterly divided over slavery since the invention of the cotton gin in the 1790s, but the Battle of Fort Sumter in 1861 would prove the point of no return

Some 100,000 books have been written about the American Civil War since it ended in 1865. That’s hardly surprising, given the four-year conflict’s impact on society, and not just because of the immense death toll, which new estimates put as high as 750,000 – more than America’s losses from all its other wars combined. The effusion of blood created a new nation and a new mythology, anchored on the principles of freedom, equality and democracy.

Coloured lithograph of the bombardment of Fort Sumter, Charleston Harbor, 12-13 April 1861. [Lithograph by Currier & Ives, New York, 1861/ Getty Images]

There is not much room in this crowded field for Civil War neophytes. Erik Larson knows what he is about, however, in The Demon of Unrest – but do his critics? The mixed reception this book has received suggests not. As with his previous best-sellers, the author has taken a single event, in this case the Battle of Fort Sumter in Charleston, South Carolina, and used it as a highly effective framing device for the immersive story he wishes to tell.

The actual event was a straightforward one. After holding out against Confederate forces for 108 days, the starving Union garrison relinquished its control of the fort on 13 April 1861. The battle was the point of no return, and although there hadn’t been any fatalities during the 34-hour bombardment leading up to it, a gruesome accident during the 50-gun salute did result in the first death of the war. Neither the Confederate besiegers nor the northern defenders of the fort had any inkling of the hell they were about to unleash on their fellow Americans.

This is not to say they were blindsided by the war. The country had been tearing itself apart over the issue of free vs slave labour ever since the invention of the cotton gin in the 1790s. Once slave-grown cotton could be efficiently cleaned and processed by machine, the South went from being an agricultural backwater to an economic powerhouse exporting four million cotton bales a year.

The production costs were cheap, the supply inexhaustible, and the southern states had a virtual monopoly on the global market. Unlike the northern states, they didn’t need immigration, education or industrialisation to grow rich – just millions of Africans, force-fed and force-bred into perpetual bondage. In the 1830s, southerners began referring to slavery as the ‘peculiar institution’, not because it was wicked and shameful, but on account of it being unique and inseparable from the southern way of life.

If northerners were at all uncertain about the non-negotiability of slavery – unlikely, given the political paralysis it caused in Washington – incidents on the floor of the Senate, such as the Mississippi senator Henry Foote brandishing a loaded revolver and the South Carolina congressman Preston Brooks beating the abolitionist senator Charles Sumner unconscious, helped to clear up any confusion.

International condemnation of slavery also fuelled southern bluster and arrogance. On the eve of the war, Britons were unamused to hear the South Carolina plantation owner James Hammond (rendered in gloriously repulsive detail by Larson) describe England as a vassal state in the southern empire. ‘Cotton is king,’ Hammond raged in a speech in 1858 that made him infamous on both sides of the Atlantic. ‘No power on Earth dares to make war on it.’ No power except the executive power of Abraham Lincoln, it turned out.

The pivotal role that individual action plays in momentous times is a recurring theme in Larson’s books. He is fascinated by two kinds of anti-heroes: the monster with a talent for propelling events, like Hammond, and the decent man whose limitations spur him towards catastrophe, like the professorial William Dodd, America’s ambassador to Germany the year Hitler came to power (In the Garden of Beasts, 2011). A colleague of Dodd’s later recalled he had seldom, if ever, ‘worked with a chief of mission who was more futile and ineffective’.

In The Demon of Unrest, Larson’s anti-heroes are more starkly drawn. The decent men may be doomed to die, like Lincoln, or fail in their objective, like US Major Robert Anderson, the commander of Fort Sumter, but their flaws and limitations render them more, rather than less, admirable.

Lincoln was elected the first Republican president in November 1860 with a clear majority in the electoral college. But he won less than 40 per cent of the popular vote, giving the impression that his victory was an accident. His failure to address this added fuel to the claims of southern fire-eaters that he intended to destroy the South’s economy using high tariffs, ram abolition down their throats and set off a race war between blacks and whites.

Having never visited the Deep South, Lincoln was unaware of how entrenched the secession movement had become or how desperately pro-Union southerners needed support and leadership. Even more damaging for the prospects of peace was the traditional four months’ grace between the election and the inauguration. The General Assembly of South Carolina, the state with the noisiest supporters and longest history of secession attempts, voted to become an ‘independent commonwealth’ on 20 December 1860. Half a dozen more followed soon afterwards, yet Lincoln was still in Illinois in early February when the seven announced the formation of the Confederate States of America under President Jefferson Davis.

Lincoln only settled into Washington at the end of February, by which time another four states were preparing to secede, bringing the final total to 11. At his inauguration on 4 March, he made a belated attempt to cool secession fever by insisting he would defend the Union to his last breath while promising that slavery was safe in his hands. It alienated everyone: southerners were convinced he was lying, even as abolitionists hoped that he was.

Initially, the majority of Lincoln’s cabinet felt certain he was not up to the job. Several tried to sideline him, sowing chaos among the already confused attempts to prevent disunion. Lincoln never altered his position, however: slavery was a negotiable issue, but Federal authority was not. Only two naval fortifications were still in Union hands by the time he took office: Sumter in Charleston and Fort Pickens in Pensacola, Florida.

Back in December, John Floyd, Buchanan’s secessionist secretary of war, believed he had picked ‘one of us’ when he assigned a middle-aged southern officer from a slave-owning family to oversee Charleston’s fortifications. Major Robert Anderson’s rise up the ranks had stalled, despite an exemplary service record and being severely wounded in action during the Mexican-American War. He was teaching cadets at West Point when Floyd recalled him. It was not uncommon for such men to seek compensation by means of treachery. Anderson was the exception.

South Carolina’s secession was meant to have been the signal for Anderson to stand aside or evacuate his position. He did, slipping out under cover of darkness with a few dozen troops; but only to take up a stronger one. Although still under construction, Fort Sumter was the biggest of Charleston’s three forts. Anderson didn’t care about the rights or wrongs of slavery, nor did he care much about politics, but his honour and duty were sacred to him. To the indignation and fury of his now former brother officers, he made it clear that he would protect the Union flag flying over Sumter until he ran out of food or Confederate forces overwhelmed him.

In Larson’s dramatic rendering of the countdown to war, Lincoln was the one who cocked the starting gun by insisting that Major Anderson be resupplied, knowing it would provoke the Confederates; by engaging in a battle he knew he would lose, Anderson was the one who fired it. There’s an unmistakable aura of Greek tragedy to these men in The Demon of Unrest. They are reluctant heroes, forced to act the way they do because they are incapable of behaving in any other way.

It’s history as a form of catharsis, which leads to the deeper purpose of this work. The real project of the book may not be immediately divined from its soaring prose and ripping action scenes, yet it was shaped by the calamitous events at the Capitol on 6 January 2021. ‘I had the eerie feeling that present and past had merged,’ Larson writes in his foreword. It seemed to him as though America was once again in danger of slipping loose from its ideological moorings.

The Demon of Unrest is Larson’s attempt to call the country back to its senses. The book is not so much history in the traditional Thucydidean manner of causes and events, but rather a political argument posing as history in the manner of Xenophon, the father of popular narrative history. It is a full-throated defence of democratic values, individual agency and the power of collective action. Why read it? Because to understand the meaning of freedom for others is to know it in ourselves.

Historically Speaking: The Drama of Finding Lost Cities

We are always a discovery away from rewriting ancient history.

The Wall Street Journal

March 21, 2024

In January, scientists announced the discovery of the oldest pre-Columbian cities in the Amazon. They were built in the Upano Valley in Ecuador by an organized urban society that flourished between 800 B.C. and 600 A.D. Surprisingly, this lost Amazonian civilization created cities that resemble modern suburbs, complete with low-density housing, ample green space and manicured road networks. It would seem that ancient urban complexity didn’t just arise in the familiar model of the Eurasian walled city.

ILLUSTRATION: THOMAS FUCHS

This kind of challenge to the scholarly consensus is actually fairly common with the discovery of lost cities. When civilizations are erased from history, bringing them back into the narrative can lead to a dramatic realignment of facts.

The discovery of Troy is an early example of this phenomenon. In 1870, after 12 years of fruitless searching, a German businessman and amateur archaeologist named Heinrich Schliemann found Troy’s ruins near modern Hisarlik on the Turkish Aegean coast. His careless excavation of the site makes modern archaeologists cringe, but against incredible odds Schliemann proved that Homer’s Iliad was based on real events. The study of ancient Greece was changed forever.

Three major discoveries in the early 1900s also reshaped our understanding of the Bronze Age. In 1906, German archaeologists digging through the ruins of a windswept plateau in north-central Turkey uncovered the forgotten city of Hattusa. Its royal archive confirmed that the city had been the last capital of the Hittites and the nerve center of a powerful empire that rivaled Egypt. After a glorious run of five hundred years, the Hittite civilization disintegrated so completely by around 1200 B.C. that little was known about it beyond the mentions in the Bible. The recovery of Hattusa redrew the map of the ancient world, this time with the Hittites at the center.

Across the Aegean, Sir Arthur Evans similarly rescued the Minoans with his discovery in 1900 of the Palace of Knossos on Crete, which flourished between 1700 and 1500 B.C. Evans’s breakthrough was to show that Minoan culture was nonviolent, unlike the other ancient Greek societies known at the time. It was also perhaps matriarchal. The palace, which lacked fortifications, had frescoes of women in authoritative poses. Recovered sculptures often featured goddesses. Evans ultimately argued that Knossos was proof that gender inequality was not, as generally believed, intrinsic to civilization.

The discovery in 1911 of Mohenjo-daro in present-day Pakistan was no less revelatory. This once-vital center of the Indus Valley Civilization, a trading society that thrived between 2500 and 1700 B.C., notably lacked temples, palaces, noble houses or even rich or poor neighborhoods. The city seems to have spent its wealth instead on civic amenities, such as public granaries and universal plumbing.

The egalitarianism at the heart of Mohenjo-daro suggests that the more cities and civilizations we find, the more complicated our understanding of human society’s origins will become. It is humbling to know that we are always a discovery away from rewriting ancient history. It’s invigorating, too.

Historically Speaking: Aspirin, a Pioneering Wonder Drug

The winding, millennia-long route from bark to Bayer.

The Wall Street Journal

February 1, 2024

For ages the most reliable medical advice was also the most simple: Take two aspirin and call me in the morning. This cheap pain reliever, which also thins blood and reduces inflammation, has been a medicine cabinet staple ever since it became available over the counter nearly 110 years ago.

Willow bark, a distant ancestor of aspirin, was a popular ingredient in ancient remedies to relieve pain and treat skin problems. Hippocrates, the father of medicine, was a firm believer in willow’s curative powers. For women with gynecological troubles in the fourth century B.C., he advised burning the leaves “until the steam enters the womb.”

That willow bark could reduce fevers wasn’t discovered until the 18th century. Edward Stone, an English clergyman, noticed its extremely bitter taste was similar to that of the cinchona tree, the source of the costly malaria drug quinine. Stone dried the bark and dosed himself to treat a fever. When he felt better, he tested the powder on others suffering from “ague,” or malaria. When their fevers disappeared, he reported triumphantly to the Royal Society in 1763 that he had found another malaria cure. In fact, he had identified a way to treat its symptoms.

Willows contain salicin, a plant hormone with anti-inflammatory, fever-reducing and pain-relieving properties. Experiments with salicin, and its byproduct salicylic acid, began in earnest in Europe in the 1820s. In 1853 Charles Frédéric Gerhardt, a French chemist, discovered how to create acetylsalicylic acid, the active ingredient in aspirin, but then abandoned his research and died young.

There is some debate over how aspirin became a blockbuster drug for the German company Bayer. Its official history credits Felix Hoffmann, a Bayer chemist, with synthesizing acetylsalicylic acid in 1897 in the hopes of alleviating his father’s severe rheumatic pain. Bayer patented aspirin in 1899 and by 1918 it had become one of the most widely used drugs in the world.

ILLUSTRATION: THOMAS FUCHS

But did Hoffmann work alone? Shortly before his death in 1949, Arthur Eichengrün, a Jewish chemist who had spent World War II in a concentration camp, published a paper claiming that Bayer had erased his contribution. In 2000 the BMJ published a study supporting Eichengrün’s claim. Bayer, which became part of the Nazi-backing conglomerate I.G. Farben in 1925, has denied that Eichengrün had a role in the breakthrough.

Aspirin shed its associations with the Third Reich after I.G. Farben sold off Bayer in the early 1950s, but the drug’s pain-relieving hegemony was fleeting. In 1956 Bayer’s British affiliate brought acetaminophen to the market. Ibuprofen became available in 1962.

The drug’s fortunes recovered after the New England Journal of Medicine published a study in 1989 that found the pill reduced the threat of a heart attack by 44%. Some public-health officials promptly encouraged anyone over 50 to take a daily aspirin as a preventive measure.

But as was the case with Rev. Stone, it seems the science is more complicated. In 2022 the U.S. Preventive Services Task Force officially advised against taking the drug prophylactically, given the risk of internal bleeding and the availability of other therapies. Aspirin may work wonders, but it can’t work miracles.

Historically Speaking: Our Fraught Love Affair With Cannabis

Ban it? Tax it? Humans have been hounded by these questions for millennia.

The Wall Street Journal

January 19, 2024

Ohio’s new marijuana law marks a watershed moment in the decriminalization of cannabis: more than half of Americans now live in places where recreational marijuana is legal. It is a profound shift, but only the latest twist in the long and winding saga of society’s relationship with pot.

Humans first domesticated cannabis sativa around 12,000 years ago in Central and East Asia as hemp, mostly for rope and other textiles. Later, some adventurous forebears found more interesting uses. In 2008, archaeologists in northwestern China discovered almost 800 grams of dried cannabis containing high levels of THC, the psychoactive ingredient in marijuana, among the burial items of a seventh century B.C. shaman.

The Greeks and Romans used cannabis for hemp, medicine and possibly religious purposes, but the plant was never as pervasive in the classical world as it was in ancient India. Cannabis indica, the sacred plant of the god Shiva, was revered for its ability to relieve physical suffering and bring spiritual enlightenment to the holy.

Cannabis gradually spread across the Middle East in the form of hashish, which is smoked or eaten. The first drug laws were enacted by Islamic rulers who feared their subjects wanted to do little else. The Mamluk sultan al-Zahir Baybars banned hashish cultivation and consumption in Egypt in 1266. When that failed, a successor tried taxing hashish instead in 1279. This filled local coffers, but consumption levels soared and the ban was restored.

The march of cannabis continued unabated across the old and new worlds, apparently reaching Stratford-upon-Avon by the 16th century. Fragments of some 400-year-old tobacco pipes excavated from Shakespeare’s garden were found to contain cannabis residue. If not the Bard, at least someone in the household was having a good time.

By the 1600s American colonies were cultivating hemp for the shipping trade, using its fibers for rigging and sails. George Washington and Thomas Jefferson grew cannabis on their Virginia plantations, seemingly unaware of its intoxicating properties.

Veterans of Napoleon’s Egypt campaign brought hashish to France in the early 1800s, where efforts to ban the habit may have enhanced its popularity. Members of the Club des Hashischins, which included Charles Baudelaire, Honoré de Balzac, Alexandre Dumas and Victor Hugo, would meet to compare notes on their respective highs.

ILLUSTRATION: THOMAS FUCHS

Although Queen Victoria’s own physician advocated using cannabis to relieve childbirth and menstrual pains, British lawmakers swung back and forth over whether to tax or ban its cultivation in India.

In the U.S., however, Americans lumped cannabis in with the opiate epidemic that followed the Civil War. Early 20th-century politicians further stigmatized the drug by associating it with Black people and Latino immigrants. Congress outlawed nonmedicinal cannabis in 1937, a year after the movie “Reefer Madness” portrayed pot as a corrupting influence on white teenagers.

American views of cannabis have changed since President Nixon declared an all-out War on Drugs more than 50 years ago, yet federal law still classifies the drug alongside heroin. As lawmakers struggle to catch up with the zeitgeist, two things remain certain: Governments are often out of touch with their citizens, and what people want isn’t always what’s good for them.

Historically Speaking: The Hunt for a Better Way to Vote

Despite centuries of innovation, the humble 2,500-year-old ballot box is here to stay.

The Wall Street Journal

January 4, 2024

At least 40 national elections will take place around the world over the next year, with some two billion people going to the polls. Thanks to the 2,500-year-old invention of the ballot box, in most races these votes will actually count and be counted.

Ballot boxes were first used in Athens during the 5th century B.C., but in trials rather than elections. Legal cases were tried before a gathering of male citizens, known as the Assembly, and decided by vote. Jurors indicated their verdict by dropping either a marked or unmarked pebble into an urn, which protected them against violence by keeping their decision secret.

The first recorded case of ballot box stuffing also took place in 5th-century Athens. To exile an unpopular Athenian via an “ostracism election,” Assembly voters simply had to scratch his name on an ostrakon, a pottery shard, and whoever reached a certain threshold of votes was banished for 10 years. It is believed that Themistocles’s political enemies rigged his ostracism vote in 472 B.C. by distributing pre-etched shards throughout the Assembly.

In 139 B.C., the Romans passed a series of voter secrecy laws starting with the Lex Gabinia, which introduced the secret ballot for magistrate elections. A citizen would write his vote on a wax-covered wooden tablet and then deposit it in a wicker basket called a cista. The cistae were so effective at protecting voters from public scrutiny that many senators, including Cicero, regarded the ballot box as an attack on their authority and a dangerous concession to mob rule.

ILLUSTRATION: THOMAS FUCHS

The fact that secret ballots allowed men to vote as they pleased was one reason why King Charles I of England ordered all “balloting boxes” to be taken out of circulation in 1637; they did not return until the 19th century.

Even then, there was strong resistance to ballot boxes in Britain and America on the grounds that they were unmanly: A citizen ought to display his vote, not hide it. In any case, there was nothing special about a 19th-century ballot box except its convenience for stealing or stuffing. One notorious election scam in San Francisco in the 1850s involved a ballot box with a false bottom.

Public outrage over rigged elections in the U.S. led some states to adopt Samuel Jollie’s tamper-proof glass ballot box, which New York first used in 1857. But a transparent design couldn’t prevent these boxes from mysteriously disappearing, nor would it have saved Edgar Allan Poe, who is thought to have died in Baltimore from being “cooped,” a practice where kidnap victims were drugged into docility and made to vote multiple times.

To better guarantee the integrity of elections, New York introduced in 1892 a new machine by Jacob Myers that allowed voters to privately choose candidates by pulling a lever, which dispensed with ballots and ballot boxes. Other inventors quickly improved on the design and by 1900 Jollie’s glass ballot box had become obsolete. By World War II almost every city had switched over to mechanical voting systems, which tallied votes automatically.

The simple ballot box seemed destined to disappear until controversies over machine irregularities in the 2000 presidential election resulted in the Help America Vote Act, which requires all votes to have a paper record. The ballot box still isn’t the perfect shield against fraud. But then, neither is anything else.

Historically Speaking: The Ancient Origins of the Christmas Wreath

Before they became a holiday symbol, wreaths were used to celebrate Egyptian gods, victorious Roman generals and the winter solstice.

On Christmas Eve, 1843, three ghosts visited Ebenezer Scrooge in the Charles Dickens novella “A Christmas Carol” and changed Christmas forever. Dickens is often credited with “inventing” the modern idea of Christmas because he popularized and reinvigorated such beloved traditions as the turkey feast, singing carols and saying “Merry Christmas.”

But one tradition for which he cannot take credit is the ubiquitous Christmas wreath, whose pedigree goes back many centuries, even to pre-Christian times.

ILLUSTRATION: THOMAS FUCHS

Wreaths can be found in almost every ancient culture and were worn or hung for many purposes. In Egypt, participants in the festival of Sokar, a god of the underworld, wore onion wreaths because the vegetable was venerated as a symbol of eternal life. The Greeks awarded laurel wreaths to the winners of competitions because the laurel tree was sacred to Apollo, the god of poetry and athletics. The Romans bestowed them on emperors and victorious generals.

Fixing wreaths and boughs onto doorposts to bring good luck was another widely practiced tradition. As early as the 6th century, Lunar New Year celebrants in central China would decorate their doors with young willow branches, a symbol of immortality and rebirth.

The early Christians did not, as might be assumed, create the Yuletide wreath. That was the pagan Vikings. They celebrated the winter solstice with mistletoe, evergreen wreaths made of holly and ivy, and 12 days of feasting, all of which were subsequently turned into Christian symbols. Holly, for example, has often been equated with the crown of thorns. Perhaps not surprisingly for the people who also introduced the words “knife,” “slaughter” and “berserk,” one Viking custom involved setting the Yuletide wreath on fire in the hope of attracting the sun’s attention.

Across northern Europe, the Norse wreath was eventually absorbed into the Christian calendar as the Advent wreath, symbolizing the four weeks before Christmas. It became part of German culture in 1839, when Johann Hinrich Wichern, a Lutheran pastor in Hamburg, captured the public’s imagination with his enormous Advent wreath. Made with a cartwheel and 24 candles, it was intended to help the children at his orphanage count the days to Christmas.

German immigrants to the U.S. brought the Advent wreath with them. Still, while Americans might accept a candlelit wreath inside the house, door wreaths were rare, especially in former Puritan strongholds such as Boston. The whiff of disapproval hung about until 1935, when Colonial Williamsburg appointed Louise B. Fisher, an underemployed and overqualified professor’s wife, to be its head of flowers and decorations. Inspired by her love of 15th-century Italian art, Fisher allowed her wreath designs to run riot on the excuse that she was only adding fruits and other whimsies that had been available during the Colonial era.

Thousands of visitors saw her designs and returned home to copy them. Like Wichern, she helped inspire a whole generation to be unapologetically joyous with their wreaths. Thanks in part to her efforts, America’s front doors are a thing to behold at this time of year, proving that it’s never too late to pour new energy into an old tradition.

Historically Speaking: A Tale of Two Hats

Napoleon’s bicorne and Santa Claus’s red cap both trace their origins to the felted headgear worn in Asia Minor thousands of years ago.

December makes me think of hats—well, one hat in particular. Not Napoleon’s bicorne hat, an original of which (just in time for Ridley Scott’s movie) sold for $2.1 million at an auction last month in France, but Santa’s hat.

The two aren’t as different as you might imagine. They share the same origins and, improbably, tell a similar story. Both owe their existence to the invention of felt, a densely matted textile. The technique of felting was developed several thousand years ago by the nomads of Central Asia. Since felt stays waterproof and keeps its shape, it could be used to make tents, padding and clothes.

The ancient Phrygians of Asia Minor were famous for their conical felt hats, which resemble the Santa cap but with the peak curving upward and forwards. Greek artists used them to indicate a barbarian. The Romans adopted a red, flat-headed version, the pileus, which they bestowed on freed slaves.

Although the Phrygian style never went out of fashion, felt was largely unknown in Western Europe until the Crusades. Its introduction released a torrent of creativity, but nothing matched the sensation created by the hat worn by King Charles VII of France in 1449. At a celebration to mark the French victory over the English in Normandy, he appeared in a fabulously expensive, wide-brimmed, felted beaver-fur hat imported from the Low Countries. Beaver hats were not unknown; the show-off merchant in Chaucer’s “Canterbury Tales” flaunts a “Flandrish beaver hat.” But after Charles, everyone wanted one.

Hat brims got wider with each decade, but even beaver fur is subject to gravity. By the 17th century, wearers of the “cavalier hat” had to cock or fold up one or both sides for stability. Thus emerged the gentleman’s three-sided cocked hat, or tricorne, as it later became known—the ultimate divider between the haves and the have-nots.

The Phrygian hat resurfaced in the 18th century as the red “Liberty Cap.” Its historical connections made it the headgear of choice for rebels and revolutionaries. During the Reign of Terror, any Frenchman who valued his head wore a Liberty Cap. But afterward, it became synonymous with extreme radicalism and disappeared. In the meantime, the hated tricorne had been replaced by the less inflammatory top hat. It was only naval and military men, like Napoleon, who could get away with the bicorne.

The wide-brimmed felted beaver hat was resurrected in the 1860s by John B. Stetson, then a gold prospector in Colorado. Using the felting techniques taught to him by his hatter father, Stetson made himself an all-weather head protector, turning the former advertisement for privilege into the iconic hat of the American cowboy.

Thomas Nast, the Civil War caricaturist and father of Santa Claus’s modern image, performed a similar rehabilitation on the Phrygian cap. To give his Santa a faraway but still benign look, he gave him a semi-Phrygian cap crossed with a camauro, the medieval clergyman’s cap. Subsequent artists exaggerated the peak and cocked it back, like a nightcap. Thus the red cap of revolution became the cartoon version of Christmas.

In this tale of two hats lies a possible rejoinder to the cry in T.S. Eliot’s “The Waste Land”: “Who is the third who walks always beside you?” It is history, invisible yet present, protean yet permanent—and sometimes atop Santa’s head.

Historically Speaking: The Enduring Allure of a Close Shave

Men may be avoiding their razors for ‘Movember,’ but getting rid of facial and other body hair goes back millennia in many different cultures

November is a tough time for razors. Huge numbers of them haven’t seen their owners since the start of “Movember,” the annual no-shave fundraiser for men’s health. But the razors needn’t fear: The urge of men to express themselves by trimming and removing their hair, facial and otherwise, runs deep.

Although no two cultures share the exact same attitude toward shaving, it has excited strong feelings in every time and place. The act is redolent with symbolism, especially religious and sexual. Even Paleolithic Man shaved his hair on occasion, using shells or flint blades as a crude razor. Ancient Egypt was the earliest society to make hair removal a way of life for both sexes. By 3000 B.C., the Egyptians were using gold and copper to manufacture razors with handles. The whole head was shaved and covered by a wig, a bald head being a practical solution against head lice and overheating. Hair’s association with body odor and poor hygiene also made its absence a sign of purity.

Around 1900 B.C., the Egyptians started experimenting with depilatory pastes made of beeswax and boiled caramel. The methods were so efficacious that many of them are still used today. The only parts never shaved were the eyebrows, except when mourning the death of a cat, a sacred animal in Egyptian culture. The Greco-Roman world adopted the shaved look following Alexander the Great, who thought beards were a liability in battle, since they could be grabbed and pulled. The Romans took their cue from him.

But they also shared the Egyptian obsession with body hair. The Roman fetish for tweezing created professional hair pluckers. The philosopher Seneca, who lived next door to a public bath, complained bitterly about the loud screaming from plucking sessions, which intruded upon the serenity he required for deep thoughts. Both pluckers and barbers disappeared during Rome’s decline.

Professional barbers only reappeared in significant numbers after 1163 when the Council of Tours banned the clergy, who often provided this service, from shedding blood of any kind. The skills of barber-surgeons improved, unlike their utensils, which hardly changed at all until the English invented the foldable straight razor in the late 17th century.

The innovation prompted a “safety” race between English and French blade manufacturers. The French surged ahead in the late 18th century with the Perret razor, which had protective guards on three sides. The English responded with the T-shaped razor patented by William Henson in 1847. In 1875, the U.S. leapfrogged its European rivals with electrolysis, still the best way to remove hair permanently.

Nevertheless, the holy grail of a safe, self-shaving experience remained out of reach until King Camp Gillette, a traveling salesman from New York, worked with an engineer to produce the first disposable, double-edged razor blade. His Gillette razor went on sale in 1903. By the end of World War I, he was selling millions—and not just to men.

Gillette seized on the fashion for sleeveless summer dresses to market the Milady Décolleté Gillette Razor for the “smooth underarm” look. The campaign to convince women to use a traditionally male tool was helped by a series of scandals in the 1920s and 30s over tainted hair products. The worst involved Koremlu, a depilatory cream that was largely rat poison.

The razor may be at least 50,000 years old, but it remains an essential tool. And it’s a great stocking stuffer.