Historically Speaking: Women Who Made the American West

From authors to outlaws, female pioneers helped to shape frontier society.

The Wall Street Journal

September 9, 2020

On Sept. 14, 1920, Connecticut became the 37th state to ratify the 19th Amendment, which guaranteed women the right to vote. The exercise was largely symbolic, since ratification had already been achieved thanks to Tennessee on Aug. 18. Still, the fact that Connecticut and the rest of the laggard states were located in the eastern part of the U.S. wasn’t a coincidence. Though women are often portrayed in Westerns as either vixens or victims, they played a vital role in the life of the American frontier.

The outlaw Belle Starr, born Myra Belle Shirley, photographed in 1886.
PHOTO: ROEDER BROTHERS/BUYENLARGE/GETTY IMAGES

Louisa Ann Swain of Laramie, Wyo., was the first woman in the U.S. to vote legally in a general election, in September 1870. The state was also ahead of the pack in granting women the right to sit on a jury, act as a justice of the peace and serve as a bailiff. Admittedly, it wasn’t so much enlightened thinking that opened up these traditionally male roles as it was the desperate shortage of women. No white woman crossed the continent until 17-year-old Nancy Kelsey traveled with her husband from Missouri to California in 1841. Once there, as countless pioneer women subsequently discovered, the family’s survival depended on her ability to manage without his help.

That women can and must fend for themselves was the essential message of the “Little House on the Prairie” series of books by Laura Ingalls Wilder, who was brought up on a series of homesteads in Wisconsin and Minnesota in the 1870s. Independence was so natural to her that she refused to say “I obey” in her marriage vows, explaining, “even if I tried, I do not think I could obey anybody against my better judgment.”

Although the American frontier represented incredible hardship and danger, for many women it also offered a unique kind of freedom. They could forge themselves anew, seizing opportunities that would have been impossible for women in the more settled and urbanized parts of the country.

This was especially true for women of color. Colorado’s first Black settler was a former slave named Clara Brown, who won her freedom in 1856 and subsequently worked her way west to the gold-mining town of Central City. Recognizing a need in the market, she founded a successful laundry business catering to miners and their families. Some of her profits went to buy land and shares in mines; the rest she spent on philanthropy, earning her the nickname “Angel of the Rockies.” After the Civil War, Brown made it her mission to locate her lost family, ultimately finding a grown-up daughter, Eliza.

However, the flip side of being able to “act like men” was that women had to be prepared to die like men, too. Myra Belle Shirley, aka Belle Starr, was a prolific Texas outlaw whose known associates included the notorious James brothers. Despite a long criminal career that mainly involved bootlegging and fencing stolen horses, Starr was convicted only once, resulting in a nine-month prison sentence in the Detroit House of Correction. Her luck finally ran out in 1889, two days before her 41st birthday. By then a widow for the third time, Belle was riding alone in Oklahoma when she was shot and killed in an ambush. The list of suspects included her own children, although the murder was never solved.

Historically Speaking: How Fear of Sharks Became an American Obsession

Since colonial times, we’ve dreaded what one explorer called ‘the most ravenous fish known in the sea’

The Wall Street Journal

August 27, 2020

There had never been a fatal shark incident in Maine until last month’s shocking attack on a woman swimmer by a great white near Bailey Island in Casco Bay. Scientists suspect that the recent rise in seal numbers, rather than the presence of humans, was responsible for luring the shark inshore.

A great white shark on the attack.
PHOTO: CHRIS PERKINS / MEDIADRUMWORLD / ZUMA PRESS

It’s often said that sharks aren’t the bloodthirsty killing machines portrayed in the media. In 2019 there were only 64 shark attacks world-wide, with just two fatalities. Still, they are feared for good reason.

The ancient Greeks knew well the horror that could await anyone unfortunate enough to fall into the wine-dark sea. Herodotus recorded how, in 492 B.C., a Persian invasion fleet of 300 ships was heading toward Greece when a sudden storm blew up around Mt. Athos. The ships broke apart, tossing some 20,000 men into the water. Those who didn’t drown immediately were “devoured” by sharks.

The Age of Discovery introduced European explorers not just to new landmasses but also to new shark species far more dangerous than the ones they knew at home. In a narrative of his 1593 journey to the South Seas, the explorer and pirate Richard Hawkins described the shark as “the most ravenous fishe knowne in the sea.”

It’s believed that the first deadly shark attack in the U.S. took place in 1642 at Spuyten Duyvil, an inlet on the Hudson River north of Manhattan. Antony Van Corlaer was attempting to swim across to the Bronx when a giant fish was seen to drag him under the water.

But the first confirmed American survivor of a shark attack was Brook Watson, a 14-year-old sailor from Boston. In 1749, Watson was serving on board a merchant ship when he was attacked while swimming in Cuba’s Havana Harbor. Fortunately, his crewmates were able to launch a rowboat and pull him from the water, leaving Watson’s right foot in the shark’s mouth.

Despite having a wooden leg, Watson enjoyed a successful career at sea before returning to his British roots to enter politics. He ended up serving as Lord Mayor of London and becoming Sir Brook Watson. His miraculous escape was immortalized by his friend the American painter John Singleton Copley. The shark in “Watson and the Shark” was completely fanciful, however, since Copley had never seen one.

The American relationship with sharks was changed irrevocably during the summer of 1916. The East Coast was gripped by both a heat wave and a polio epidemic, leaving the beach as one of the few safe places for Americans to relax. On July 1, a man was killed by a shark on Long Beach Island off the New Jersey coast. Over the next 10 days, sharks in the area killed three more people and left one severely injured. In the ensuing national uproar, President Woodrow Wilson offered federal funds to help get rid of the sharks, an understandable but impossible wish.

The Jersey Shore attacks served as an inspiration for Peter Benchley’s bestselling 1974 novel “Jaws,” which was turned into a blockbuster film the next year by Steven Spielberg. Since then the shark population in U.S. waters has dropped by 60%, in part due to an increase in shark-fishing inspired by the movie. Appalled by what he had unleashed, Benchley spent the last decades of his life campaigning for shark conservation.

Historically Speaking: The Delicious Evolution of Mayonnaise

Ancient Romans ate a pungent version, but the modern egg-based spread was created by an 18th-century French chef.

July 9, 2020

The Wall Street Journal

I can’t imagine a summer picnic without mayonnaise—in the potato salad, the veggie dips, the coleslaw, and yes, even on the french fries. It feels like a great dollop of pure Americana in a creamy, satisfying sort of way. But like a lot of what makes our country so successful, mayonnaise originally came from somewhere else.

Where, exactly, is one of those food disputes that will never be resolved, along with the true origins of baklava pastry, hummus and the pisco sour cocktail. In all likelihood, the earliest version of mayonnaise was an ancient Roman concoction of garlic and olive oil, much praised for its medicinal properties by Pliny the Elder in his first-century encyclopedia “Naturalis Historia.” This strong-tasting, aioli-like proto-mayonnaise remained a southern Mediterranean specialty for millennia.

But most historians believe that modern mayonnaise was born in 1756 in the port city of Mahon, in the Balearic Islands off the coast of Spain. At the start of the Seven Years’ War between France and Britain, French forces led by the Duc de Richelieu smashed Admiral Byng’s poorly armed fleet at the Battle of Minorca. (Byng was subsequently executed for not trying hard enough.) While preparing the fish course for Richelieu’s victory dinner, his chef coped with the lack of cream on the island by ingeniously substituting a goo of eggs mixed with oil and garlic.

The anonymous cook took the recipe for “mahonnaise” back to France, where it was vastly improved by Marie-Antoine Careme, the founder of haute cuisine. Careme realized that whisking rather than simply stirring the mixture created a soft emulsion that could be used in any number of dishes, from the savory to the sweet.

It wasn’t just the French who fell for Careme’s version. Mayonnaise blended easily with local cuisines, evolving into tartar sauce in Eastern Europe, remoulades in the Baltic countries and salad dressing in Britain. By 1838, the menu at the iconic New York restaurant Delmonico’s featured lobster mayonnaise as a signature dish.

All that whisking, however, made mayonnaise too laborious for home cooks until the advent of the mechanical eggbeater; an improved version was patented by the Black inventor Willis Johnson, of Cincinnati, Ohio, in 1884. “Try it once,” gushed Good Housekeeping magazine in 1889, “and you’ll never go back to the old way as long as you live.”

Making mayonnaise was one thing, preserving it quite another, since the raw egg made it spoil quickly. The conundrum was finally solved in 1912 by Richard Hellmann, a German-American deli owner in New York. By using his own trucks and factories, Hellmann was able to manufacture and transport mayonnaise faster. And in a revolutionary move, he designed the distinctive wide-necked Hellmann’s jar, encouraging liberal slatherings of mayo and thereby speeding up consumption.

Five years later, Eugenia Duke of South Carolina created Duke’s mayonnaise, which is eggier and has no sugar. The two brands are still duking it out. But when it comes to eating, there are no winners and losers in the mayo department, just 14 grams of fat and 103 delicious calories per tablespoon.

Historically Speaking: Golfing With Emperors and Presidents

From medieval Scotland to the White House, the game has appealed to the powerful as well as the common man.

June 3, 2020

The Wall Street Journal

The history of golf is a tale of two sports: one played by the common man, the other by kings and presidents. The plebeian variety came first. Paganica, a game played with a bent stick and a hard ball stuffed with feathers, was invented by Roman soldiers as a way to relieve the monotony of camp life. It is believed that a version of Paganica was introduced to Scotland when the Roman emperor Septimius Severus invaded the country in 208 A.D.

Golf buddies Arnold Palmer (left) and Dwight Eisenhower.
PHOTO: AUGUSTA NATIONAL/GETTY IMAGES

Golf might also have been influenced by stick-and-ball games from other cultures, such as the medieval Chinese chuiwan (“hit-ball”) and Dutch colf, an indoor game using rubber balls and heavy clubs. But the game we know today originated in the 15th century on the Links—the long, grassy sand dunes that are such a distinctive feature of Scotland’s coastline. The terrain was perfect for all-weather play, as well as for keeping out of sight of the authorities: Scottish kings prohibited the game until 1502, anxious that it would interfere with archery practice.

Two years after lifting the ban, King James IV of Scotland played the first recorded golf match while staying at Falkland Palace near St. Andrews. In theory, anyone could play on the Links since it was common land. Starting in 1754, however, access was controlled by the Royal and Ancient Golf Club of St. Andrews, known today as the “Home of Golf.” The R & A did much to standardize the rules of the game, while cementing golf’s reputation as an aristocratic activity.

In the 19th century, innovations in lawn care and ball manufacturing lowered the cost of golf, but the perception of elitism persisted. When William Howard Taft ran for president in 1908, Teddy Roosevelt urged him to beware of projecting an upper-crust image: “photographs on horseback, yes; tennis, no. And golf is fatal.” Taft ignored Roosevelt’s advice, as did Woodrow Wilson, who played more rounds of golf—nearly 1,200 in all—than any other president. He even played in the snow, using a black-painted ball.

Wilson’s record was nearly matched by Dwight Eisenhower, who so loved the game that he had a putting green installed outside the Oval Office in 1954. At first the media criticized his fondness for a rich man’s game. But that changed after Arnold Palmer, one of the greatest and most charismatic golfers in history, became Eisenhower’s friend and regular golf partner. The frequent sight of the president and the sports hero playing together made golf appear attractive, aspirational and above all accessible, inspiring millions of ordinary Americans to try the game for the first time.

But that popularity has been dented in recent years. The number of golfers in the U.S. dropped from a high of 30 million in 2005 to 24.1 million in 2015. In addition to being pricey, golf is still criticized for being snobby. Earlier this year, Brooks Koepka, a professional golfer once ranked number one in the world, told GQ that he loved the game but not “the stuffy atmosphere that comes along with it.” “Golf has always had this persona of the triple-pleated khaki pants, the button-up shirt, very country club atmosphere,” he complained. Now that almost all of the country’s golf courses have reopened from pandemic-related shutdowns, golf has a new opportunity to make every player feel included.

Historically Speaking: Sleuthing Through the Ages

Illustration by Dominic Bugatto

From Oedipus to Sherlock Holmes, readers have flocked to stories about determined detectives.

May 21, 2020

The Wall Street Journal

I have to confess that I’ve spent the lockdown reading thrillers and whodunits. But judging by the domination of mystery titles on the bestseller lists, so has nearly everyone else. In uncertain times, crime fiction offers certainty, resolution and comfort.

The roots of the genre go back to the ancient Greeks. Sophocles’s “Oedipus the King,” written around 429 B.C., is in essence a murder mystery. The play begins with Oedipus swearing that he will not rest until he discovers who killed Laius, the previous king of Thebes. Like a modern detective, Oedipus questions witnesses and follows clues until the terrible truth is revealed: He is both the investigator and the criminal, having unwittingly murdered his father and married his mother.

The Chinese were the first to give crime fiction a name. Gong’an or “magistrate’s desk” literature developed during the Song dynasty (960-1279), featuring judges who recount the details of a difficult or dangerous case. Modern Western crime fiction adopted a more individualistic approach, making heroes out of amateurs. The 1819 novella “Mademoiselle de Scuderi,” by the German writer E.T.A. Hoffmann, is an early prototype: The heroine, an elderly writer, helps to solve a serial murder case involving stolen jewelry.

But it is Edgar Allan Poe who is generally regarded as the godfather of detective fiction. His short story “The Murders in the Rue Morgue,” published in 1841, features an amateur sleuth, Auguste Dupin, who solves the mysterious, gruesome deaths of two women. (Spoiler: The culprit was an escaped orangutan.) Poe invented some of the genre’s most important devices, including the “locked room” puzzle, in which a murder takes place under seemingly impossible conditions.

Toward the end of the 19th century, Arthur Conan Doyle’s Sherlock Holmes series added three innovations that quickly became conventions: the loyal sidekick, the arch-villain and the use of forensic science. In the violin-playing, drug-abusing Holmes, Doyle also created a psychologically complex character who enthralled readers—too much for Doyle’s liking. Desperate to be considered a literary writer, he killed off Holmes in 1893, only to be forced by public demand to resurrect him 12 years later.

When Doyle published his last Holmes story in 1927, the “Golden Age” of British crime fiction was in full swing. Writers such as Agatha Christie and Dorothy L. Sayers created genteel detectives who solved “cozy crimes” in upper-middle-class settings, winning a huge readership and inspiring American imitators like S.S. Van Dine, the creator of detective Philo Vance, who published a list of “Twenty Rules for Writing Detective Stories.”

As violence and corruption increased under Prohibition, American mystery writing turned toward more “hard-boiled” social realism. In Dashiell Hammett and Raymond Chandler’s noir fiction, dead bodies in libraries are replaced by bloody corpses in cars.

At the time, critics quarreled about which type of mystery was superior, though both can seem old-fashioned compared with today’s spy novels and psychological thrillers. The number of mystery subgenres seems to be infinite. Yet one thing will never change: our yearning for a hero who is, in Raymond Chandler’s words, “the best man in his world and a good enough man for any world.”

 

Historically Speaking: Hobbies for Kings and the People

From collecting ancient coins to Victorian taxidermy, we’ve found ingenious ways to fill our free time.

The Wall Street Journal, April 16, 2020

It’s no surprise that many Americans are turning or returning to hobbies during the current crisis. By definition, a hobby requires time outside of work.

Sofonisba Anguissola, ‘The Chess Game’ (1555)

We don’t hear much about hobbies in ancient history because most people never had any leisure time. They were too busy obeying their masters or just scraping by. The earliest known hobbyists may have been Nabonidus, the last king of Babylonia in the 6th century B.C., and his daughter Ennigaldi-Nanna. Both were passionate antiquarians: Nabonidus liked to restore ruined temples while Ennigaldi-Nanna collected ancient artifacts. She displayed them in a special room in her palace, effectively creating the world’s first museum.

Augustus Caesar, the first Roman emperor, was another avid collector of ancient objects, especially Greek gold coins. The Romans recognized the benefits of having a hobby, but for them the concept excluded any kind of manual work. When the poet Ovid, exiled by Augustus on unknown charges, wrote home that he yearned to tend his garden again, he didn’t mean with a shovel. That’s what slaves were for.

Hobbies long continued to be a luxury for potentates. But in the Renaissance, the printing press combined with higher standards of living to create new possibilities for hobbyists. The change can be seen in the paintings of Sofonisba Anguissola, one of the first Italian painters to depict her subjects enjoying ordinary activities like reading or playing an instrument. Her most famous painting, “The Chess Game” (1555), shows members of her family engaged in a match.

Upper-class snobbery toward any hobby that might be deemed physical still lingered, however. The English diplomat and scholar Sir Thomas Elyot warned readers in “The Boke Named the Governour” (1531) that playing a musical instrument was fine “for recreation after tedious or laborious affaires.” But it had to be kept private, lest the practitioner be mistaken for “a common servant or minstrel.”

Hobbies received a massive boost from the Industrial Revolution. It wasn’t simply that people had more free time; there were also many more things to do and acquire. Stamp collecting took off soon after the introduction of the world’s first adhesive stamp, the Penny Black, in Britain in 1840. As technology became cheaper, hobbies emerged that bridged the old division between intellectual and manual labor, such as photography and microscopy. Taxidermy allowed the Victorians to mash the macabre and the whimsical together: Ice-skating hedgehogs, card-playing mice and dancing cats were popular with taxidermists.

In the U.S., the adoption of hobbies increased dramatically during the Great Depression. For the unemployed, they were an inexpensive way to give purpose and achievement to their days. Throughout the 1930s, nonprofit organizations such as the Leisure League of America and the National Home Workshop Guild encouraged Americans to develop their talents. “You Can Write” was the hopeful title of a 1934 Leisure League publication.

Even Winston Churchill took up painting in his 40s, saying later that the hobby rescued him “in a most trying time.” We are in our own trying time, so why not go for it? I think I’ll teach myself to bake bread next week.

Historically Speaking: The Long Fight Against Unjust Taxes

From ancient Jerusalem to the American Revolution and beyond, rebels have risen up against the burden of taxation.

March 19, 2020

The Wall Street Journal

With the world in the grip of a major health crisis, historical milestones are passing by with little notice. But the Boston Massacre, whose 250th anniversary was this month, deserves to be remembered as a cautionary tale.

ILLUSTRATION: THOMAS FUCHS

The bloody encounter on March 5, 1770, began with the harassment of a British soldier by a crowd of Bostonians. Panicked soldiers responded by firing on the crowd, leaving five dead and six wounded. The colonists were irate about new taxes imposed by the British Parliament to pay for the expenses of the Seven Years’ War, which in North America pitted the British and Americans against the French and their Indian allies. Whether or not the tax increase was justified, the failure of British leaders to include the American colonies in the deliberative process was catastrophic. The slogan “No taxation without representation” became a rallying cry for the fledgling nation.

The attitude of tax collecting authorities had hardly changed since ancient times, when empires treated their subject populations with greed, brutality and arrogance. In 1st century Judea, anger over the taxes imposed by Rome combined with religious grievances to provoke a full-scale Jewish revolt in 66-73 A.D. It was an unequal battle, as most tax rebellions are, and the resisters were made to pay dearly: Jerusalem was sacked and the Second Temple destroyed, and all Jews in the Roman Empire were forced to pay a punitive tax.

Even when tax revolts met with initial success, there was no guarantee that the authorities would carry out their promises. In 1381, a humble English roof tiler named Wat Tyler led an uprising, dubbed the Peasants’ Revolt, against a new poll tax. King Richard II met with Tyler and agreed to his demands, but only as a delaying tactic. The ringleaders were then rounded up and executed, and Richard revoked his concessions, claiming they had been made under duress.

Nevertheless, as the historian David F. Burg notes in his book “A World History of Tax Rebellions,” tax revolts have been more frequent than we realize, mainly because governments tend not to advertise them. In Germany, 210 separate protests and uprisings were recorded from 1300 to 1550, and at least 1,000 in Japan from 1600 to 1868.

The 19th century saw the rise of a new kind of tax rebel, the conscientious objector. In 1846, the writer and abolitionist Henry David Thoreau spent a night in the Concord, Mass., jail after he refused to pay a poll tax as a protest against slavery. He was released the next morning when his aunt paid it for him, against his will. But Thoreau would go on to withhold his taxes in protest against the Mexican-American War, arguing in his 1849 essay “Civil Disobedience” that it was better to go to jail than to “enable the State to commit violence and shed innocent blood.”

Irwin Schiff, a colorful antitax advocate and failed libertarian presidential candidate, wouldn’t get off so easily. Arguing that the income tax violated the U.S. Constitution, he refused to pay it, despite being convicted of tax evasion three times. In 2015, he died at age 87 in a federal prison—an ironic confirmation of Benjamin Franklin’s adage that “nothing can be said to be certain, except death and taxes.”

Fortunately for Americans at this time of national duress, tax day this year has been mercifully postponed.

Historically Speaking: Cities Built by Royal Command

From ancient Egypt to modern Russia, rulers have tried to build new capitals from the ground up.

March 5, 2020

The Wall Street Journal

In 1420, the Yongle Emperor moved China’s capital from Nanjing to the city then known as Beiping. To house his government he built the Forbidden City, the largest wooden complex in the world, with more than 70 palace compounds spread across 178 acres. Incredibly, an army of 100,000 artisans and one million laborers finished the project in only three years. Shortly after moving in, the emperor renamed the city Beijing, “northern capital.”

The monument of Peter the Great in St. Petersburg, Russia.
PHOTO: GETTY IMAGES

In the 600 years since, countless visitors have marveled at Yongle’s creation. As its name suggests, the Forbidden City could only be entered by permission of the emperor. But the capital city built around it was an impressive symbol of imperial power and social order, a kind of 3-D model of the harmony of the universe.

Beijing’s success and longevity marked an important leap forward in the history of purpose-built cities. The earliest attempts at building a capital from scratch were usually hubristic affairs that vanished along with their founders. Nothing remains of the fabled Akkad, commissioned by Sargon the Great in the 24th century B.C. following his victory over the Sumerians.

But the most vainglorious and ultimately futile capital ever constructed must surely be Akhetaten, later known as Amarna, on the east bank of the Nile River in Egypt. It was built by the Pharaoh Akhenaten around 1346 B.C. to serve as a living temple to Aten, the god of the sun. The pharaoh hoped to make Aten the center of a new monotheistic cult, replacing the ancient pantheon of Egyptian deities. In his eyes, Amarna was a glorious act of personal worship. But to the Egyptians, this hastily erected city of mud and brick was an indoctrination camp run by a crazed fanatic. Neither Akhenaten’s religion nor his city long survived his death.

In 330 B.C., Alexander the Great was responsible for one of the worst acts of cultural vandalism in history when he allowed his army to burn down Persepolis, the magnificent Achaemenid capital founded by Darius I, in revenge for the Persian destruction of Athens 150 years earlier.

Ironically, the year before he destroyed a metropolis in Persia, the Macedonian emperor had created one in Egypt. Legend has it that Alexander chose the site of Alexandria after being visited by the poet Homer in a dream. He may also have been influenced by the advantages of the location, near the island of Pharos on the Mediterranean coast, which boasted two harbors as well as a limitless supply of fresh water. Working closely with the famed Greek architect Dinocrates, Alexander designed the walls, city quarters and street grid himself. Alexandria went on to become a center of Greek and Roman civilization, famous for its library, museum and lighthouse.

No European ruler would rival the urban ambitions of Alexander the Great, let alone Emperor Yongle, until Tsar Peter the Great. In 1703, he founded St. Petersburg on the marshy archipelago where the Neva River meets the Baltic Sea. His goal was to replace Moscow, the old Russian capital, with a new city built according to modern, Western ideas. Those ideas were unpopular with Russia’s nobility, and after Peter’s death his successor moved the capital back to Moscow. But in 1732 the Romanovs transferred their court permanently to St. Petersburg, where it remained until 1917. Sometimes, the biggest urban-planning dreams actually come true.

 

Historically Speaking: Funding Wars Through the Ages

U.S. antiterror efforts have cost nearly $6 trillion since the 9/11 attacks. Earlier governments, from the ancient Greeks to Napoleon, have had to get creative to finance their fights.

The Wall Street Journal, October 31, 2019

The successful operation against Islamic State leader Abu Bakr al-Baghdadi is a bright spot in the war on terror that the U.S. declared in response to the attacks of 9/11. The financial costs of this long war have been enormous: nearly $6 trillion to date, according to a recent report by the Watson Institute of International and Public Affairs at Brown University, which took into account not just the defense budget but other major costs, like medical and disability care, homeland security and debt.

ILLUSTRATION: THOMAS FUCHS

War financing has come a long way since 478 B.C., when the ancient Greeks formed the Delian League, which required each member state to contribute an agreed amount of money each year rather than troops. With the League’s financial backing, Athens became the Greek world’s first military superpower—at least until the Spartans, helped by the Persians, built up their naval fleet with tribute payments extracted from dependent states.

The Romans maintained their armies through tributes and taxes until the Punic Wars—three lengthy conflicts between 264 and 146 B.C.—proved so costly that the government turned to debasing the coinage in an attempt to increase the money supply. The result was runaway inflation and eventually a sovereign debt crisis during the Social War a half-century later between Rome and several breakaway Italian cities. The government ended up defaulting in 86 B.C., sealing the demise of the ailing Roman Republic.

After the fall of Rome in the late fifth century, wars in Europe were generally financed by plunder and other haphazard means. William the Conqueror financed the Norman invasion of England in 1066 the ancient Roman way, by debasing his currency. He learned his lesson and paid for all subsequent operations out of tax receipts, which stabilized the English monetary system and established a new model for financing war.

Taxation worked until European wars became too expensive for state treasuries to fund alone. Rulers then resorted to a number of different methods. During the 16th century, Philip II of Spain turned to the banking houses of Genoa to raise the money for his Armada invasion fleet against England. Seizing the opportunity, Sir Francis Walsingham, Elizabeth I’s chief spymaster, sent agents to Genoa with orders to use all legal means to sabotage and delay the payment of Philip’s bills of credit. The operation bought England a crucial extra year of preparation.

In his own financial preparations to fight England, Napoleon had better luck than Philip II: In 1803 he was able to raise a war chest of over $11 million in cash by selling the Louisiana Territory to the U.S.

Napoleon was unusual in having a valuable asset to offload. By the time the American Civil War broke out in 1861, governments had become reliant on some combination of taxation, printing money and borrowing to pay for war. But the U.S. lacked a regulated banking system since President Andrew Jackson’s dismantling of the Second Bank of the United States in the 1830s. The South resorted to printing paper money, which depreciated dramatically. The North could afford to be more innovative. In 1862 the financier Jay Cooke invented the war bond, which was marketed with great success to ordinary citizens. By the war’s end, the bonds had covered two-thirds of the North’s costs.

Incurring debt is still how the U.S. funds its wars. It has helped to shield the country from the full financial effects of its prolonged conflicts. But in the future it is worth remembering President Calvin Coolidge’s warning: “In any modern campaign the dollars are the shock troops…. A country loaded with debt is devoid of the first line of defense.”

Historically Speaking: Duels Among the Clouds

Aerial combat was born during World War I, giving the world a new kind of military hero: the fighter pilot

“Top Gun” is back. The 1986 film about Navy fighter pilots is getting a sequel next year, with Tom Cruise reprising his role as Lt. Pete “Maverick” Mitchell, the sexy flyboy who can’t stay out of trouble. Judging by the trailer released by Paramount in July, the new movie, “Top Gun: Maverick,” will go straight to the heart of current debates about the future of aerial combat. An unseen voice tells Mr. Cruise, “Your kind is headed for extinction.”

The mystique of the fighter pilot began during World War I, when fighter planes first entered military service. The first aerial combat took place on Oct. 5, 1914, when French and German biplanes engaged in an epic contest in the sky, watched by soldiers on both sides of the trenches. At this early stage, neither plane carried fixed armament, but the German pilot had a rifle and the French crew a machine gun; the latter won the day.

A furious arms race ensued. The Germans turned to the Dutch engineer Anthony Fokker, who devised a way to synchronize a plane’s propeller with its machine gun, creating a flying weapon of deadly accuracy. The Allies soon caught up, ushering in the era of the dogfight.

From the beginning, the fighter pilot seemed to belong to a special category of warrior—the dueling knight rather than the ordinary foot soldier. Flying aces of all nationalities gave each other a comradely respect. In 1916, the British marked the downing of the German fighter pilot Oswald Boelcke by dropping a wreath in his honor on his home airfield in Germany.

But not until World War II could air combat decide the outcome of an entire campaign. During the Battle of Britain in the summer of 1940, the German air force, the Luftwaffe, dispatched up to 1,000 aircraft in a single attack. The Royal Air Force’s successful defense of the skies led to British Prime Minister Winston Churchill’s famous declaration, “Never in the field of human conflict was so much owed by so many to so few.”

The U.S. air campaigns over Germany taught American military planners a different lesson. Rather than focusing on pilot skills, they concentrated on building planes with superior firepower. In the decades after World War II, the invention of air-to-air missiles was supposed to herald the end of the dogfight. But during the Vietnam War, steep American aircraft losses caused by the acrobatic, Soviet-built MiG fighter showed that one-on-one combat still mattered. The U.S. response to this threat was the highly maneuverable twin-engine F-15 and the formation of a new pilot training academy, the Navy Fighter Weapons School, which inspired the original “Top Gun.”

Since that film’s release, however, aerial combat between fighter planes has largely happened on screen, not in the real world. The last dogfight involving a U.S. aircraft took place in 1999, during the NATO air campaign in Kosovo. The F-14 Tomcats flown by Mr. Cruise’s character have been retired, and his aircraft carrier, the USS Enterprise, has been decommissioned.

Today, conventional wisdom again holds that aerial combat is obsolete. The new F-35 Joint Strike Fighter is meant to replace close-up dogfights with long-range weapons. But not everyone seems to have read the memo about the future of air warfare. Increasingly, U.S. and NATO pilots are having to scramble their planes to head off Russian incursions. The knights of the skies can’t retire just yet.