The Long Fight Against Unjust Taxes

From ancient Jerusalem to the American Revolution and beyond, rebels have risen up against the burden of taxation.

March 19, 2020

The Wall Street Journal

With the world in the grip of a major health crisis, historical milestones are passing by with little notice. But the Boston Massacre, whose 250th anniversary was this month, deserves to be remembered as a cautionary tale.

ILLUSTRATION: THOMAS FUCHS

The bloody encounter on March 5, 1770, began with the harassment of a British soldier by a crowd of Bostonians. Panicked soldiers responded by firing on the crowd, leaving five dead and six wounded. The colonists were irate about new taxes imposed by the British Parliament to pay for the expenses of the Seven Years’ War, which in North America pitted the British and Americans against the French and their Indian allies. Whether or not the tax increase was justified, the failure of British leaders to include the American colonies in the deliberative process was catastrophic. The slogan “No taxation without representation” became a rallying cry for the fledgling nation.

The attitude of tax-collecting authorities had hardly changed since ancient times, when empires treated their subject populations with greed, brutality and arrogance. In first-century Judea, anger over the taxes imposed by Rome combined with religious grievances to provoke a full-scale Jewish revolt in 66-73 A.D. It was an unequal battle, as most tax rebellions are, and the resisters were made to pay dearly: Jerusalem was sacked and the Second Temple destroyed, and all Jews in the Roman Empire were forced to pay a punitive tax.

Even when tax revolts met with initial success, there was no guarantee that the authorities would carry out their promises. In 1381, a humble English roof tiler named Wat Tyler led an uprising, dubbed the Peasants’ Revolt, against a new poll tax. King Richard II met with Tyler and agreed to his demands, but only as a delaying tactic. The ringleaders were then rounded up and executed, and Richard revoked his concessions, claiming they had been made under duress.

Nevertheless, as the historian David F. Burg notes in his book “A World History of Tax Rebellions,” tax revolts have been more frequent than we realize, mainly because governments tend not to advertise them. In Germany, 210 separate protests and uprisings were recorded from 1300 to 1550, and at least 1,000 in Japan from 1600 to 1868.

The 19th century saw the rise of a new kind of tax rebel, the conscientious objector. In 1846, the writer and abolitionist Henry David Thoreau spent a night in the Concord, Mass., jail after he refused to pay a poll tax as a protest against slavery. He was released the next morning when his aunt paid it for him, against his will. But Thoreau would go on to withhold his taxes in protest against the Mexican-American War, arguing in his 1849 essay “Civil Disobedience” that it was better to go to jail than to “enable the State to commit violence and shed innocent blood.”

Irwin Schiff, a colorful antitax advocate and failed libertarian presidential candidate, wouldn’t get off so easily. Arguing that the income tax violated the U.S. Constitution, he refused to pay it, despite being convicted of tax evasion three times. In 2015, he died at age 87 in a federal prison—an ironic confirmation of Benjamin Franklin’s adage that “nothing can be said to be certain, except death and taxes.”

Fortunately for Americans at this time of national duress, tax day this year has been mercifully postponed.

Cities Built by Royal Command

From ancient Egypt to modern Russia, rulers have tried to build new capitals from the ground up.

March 5, 2020

The Wall Street Journal

In 1420, the Yongle Emperor moved China’s capital from Nanjing to the city then known as Beiping. To house his government he built the Forbidden City, the largest wooden complex in the world, with more than 70 palace compounds spread across 178 acres. Incredibly, an army of 100,000 artisans and one million laborers finished the project in only three years. Shortly after moving in, the emperor renamed the city Beijing, “northern capital.”

The monument of Peter the Great in St. Petersburg, Russia.
PHOTO: GETTY IMAGES

In the 600 years since, countless visitors have marveled at Yongle’s creation. As its name suggests, the Forbidden City could only be entered by permission of the emperor. But the capital city built around it was an impressive symbol of imperial power and social order, a kind of 3-D model of the harmony of the universe.

Beijing’s success and longevity marked an important leap forward in the history of purpose-built cities. The earliest attempts at building a capital from scratch were usually hubristic affairs that vanished along with their founders. Nothing remains of the fabled Akkad, commissioned by Sargon the Great in the 24th century B.C. following his victory over the Sumerians.

But the most vainglorious and ultimately futile capital ever constructed must surely be Akhetaten, later known as Amarna, on the east bank of the Nile River in Egypt. It was built by the Pharaoh Akhenaten around 1346 B.C. to serve as a living temple to Aten, the god of the sun. The pharaoh hoped to make Aten the center of a new monotheistic cult, replacing the ancient pantheon of Egyptian deities. In his eyes, Amarna was a glorious act of personal worship. But to the Egyptians, this hastily erected city of mud and brick was an indoctrination camp run by a crazed fanatic. Neither Akhenaten’s religion nor his city long survived his death.

In 330 B.C., Alexander the Great was responsible for one of the worst acts of cultural vandalism in history when he allowed his army to burn down Persepolis, the magnificent Achaemenid capital founded by Darius I, in revenge for the Persian destruction of Athens 150 years earlier.

Ironically, the year before he destroyed a metropolis in Persia, the Macedonian emperor had created one in Egypt. Legend has it that Alexander chose the site of Alexandria after being visited by the poet Homer in a dream. He may also have been influenced by the advantages of the location, near the island of Pharos on the Mediterranean coast, which boasted two harbors as well as a limitless supply of fresh water. Working closely with the famed Greek architect Dinocrates, Alexander designed the walls, city quarters and street grid himself. Alexandria went on to become a center of Greek and Roman civilization, famous for its library, museum and lighthouse.

No European ruler would rival the urban ambitions of Alexander the Great, let alone Emperor Yongle, until Tsar Peter the Great. In 1703, he founded St. Petersburg on the marshy archipelago where the Neva River meets the Baltic Sea. His goal was to replace Moscow, the old Russian capital, with a new city built according to modern, Western ideas. Those ideas were unpopular with Russia’s nobility, and after Peter’s death his successor moved the capital back to Moscow. But in 1732 the Romanovs transferred their court permanently to St. Petersburg, where it remained until 1917. Sometimes, the biggest urban-planning dreams actually come true.


Historically Speaking: Funding Wars Through the Ages

U.S. antiterror efforts have cost nearly $6 trillion since the 9/11 attacks. Earlier governments from the ancient Greeks to Napoleon have had to get creative to finance their fights

The Wall Street Journal, October 31, 2019

The successful operation against Islamic State leader Abu Bakr al-Baghdadi is a bright spot in the war on terror that the U.S. declared in response to the attacks of 9/11. The financial costs of this long war have been enormous: nearly $6 trillion to date, according to a recent report by the Watson Institute of International and Public Affairs at Brown University, which took into account not just the defense budget but other major costs, like medical and disability care, homeland security and debt.

ILLUSTRATION: THOMAS FUCHS

War financing has come a long way since the ancient Greeks formed the Delian League in 478 B.C., which required each member state to contribute an agreed sum of money each year rather than troops. With the League’s financial backing, Athens became the Greek world’s first military superpower—at least until the Spartans, helped by the Persians, built up their naval fleet with tribute payments extracted from dependent states.

The Romans maintained their armies through tributes and taxes until the Punic Wars—three lengthy conflicts between 264 and 146 B.C.—proved so costly that the government turned to debasing the coinage in an attempt to increase the money supply. The result was runaway inflation and, eventually, a sovereign debt crisis during the Social War fought a half-century later between Rome and its breakaway Italian allies. The government ended up defaulting in 86 B.C., sealing the demise of the ailing Roman Republic.

After the fall of Rome in the late fifth century, wars in Europe were generally financed by plunder and other haphazard means. William the Conqueror financed the Norman invasion of England in 1066 the ancient Roman way, by debasing his currency. He learned his lesson and paid for all subsequent operations out of tax receipts, which stabilized the English monetary system and established a new model for financing war.

Taxation worked until European wars became too expensive for state treasuries to fund alone. Rulers then resorted to a number of different methods. During the 16th century, Philip II of Spain turned to the banking houses of Genoa to raise the money for his Armada invasion fleet against England. Seizing the opportunity, Sir Francis Walsingham, Elizabeth I’s chief spymaster, sent agents to Genoa with orders to use all legal means to sabotage and delay the payment of Philip’s bills of credit. The operation bought England a crucial extra year of preparation.

In his own financial preparations to fight England, Napoleon had better luck than Philip II: In 1803 he was able to raise a war chest of over $11 million in cash by selling the Louisiana Territory to the U.S.

Napoleon was unusual in having a valuable asset to offload. By the time the American Civil War broke out in 1861, governments had come to rely on a combination of taxation, printing money and borrowing to pay for war. But the U.S. had lacked a regulated banking system since President Andrew Jackson dismantled the Second Bank of the United States in the 1830s. The South resorted to printing paper money, which depreciated dramatically. The North could afford to be more innovative: In 1862 the financier Jay Cooke invented the war bond, which was marketed with great success to ordinary citizens. By the war’s end, the bonds had covered two-thirds of the North’s costs.

Incurring debt is still how the U.S. funds its wars. It has helped to shield the country from the full financial effects of its prolonged conflicts. But in the future it is worth remembering President Calvin Coolidge’s warning: “In any modern campaign the dollars are the shock troops…. A country loaded with debt is devoid of the first line of defense.”

Historically Speaking: Duels Among the Clouds

Aerial combat was born during World War I, giving the world a new kind of military hero: the fighter pilot

“Top Gun” is back. The 1986 film about Navy fighter pilots is getting a sequel next year, with Tom Cruise reprising his role as Lt. Pete “Maverick” Mitchell, the sexy flyboy who can’t stay out of trouble. Judging by the trailer released by Paramount in July, the new movie, “Top Gun: Maverick,” will go straight to the heart of current debates about the future of aerial combat. An unseen voice tells Mr. Cruise, “Your kind is headed for extinction.”

The mystique of the fighter pilot began during World War I, when fighter planes first entered military service. The first aerial combat took place on Oct. 5, 1914, when French and German biplanes engaged in an epic contest in the sky, watched by soldiers on both sides of the trenches. At this early stage, neither plane carried built-in weapons, but the German crew had a rifle and the French a machine gun; the latter won the day.

A furious arms race ensued. The Germans turned to the Dutch engineer Anthony Fokker, who devised a way to synchronize a plane’s machine gun with its propeller so that bullets passed between the spinning blades, creating a flying weapon of deadly accuracy. The Allies soon caught up, ushering in the era of the dogfight.

From the beginning, the fighter pilot seemed to belong to a special category of warrior—the dueling knight rather than the ordinary foot soldier. Flying aces of all nationalities gave each other a comradely respect. In 1916, the British marked the downing of the German fighter pilot Oswald Boelcke by dropping a wreath in his honor on his home airfield in Germany.

But not until World War II could air combat decide the outcome of an entire campaign. During the Battle of Britain in the summer of 1940, the German air force, the Luftwaffe, dispatched up to 1,000 aircraft in a single attack. The Royal Air Force’s successful defense of the skies led to British Prime Minister Winston Churchill’s famous declaration, “Never in the field of human conflict was so much owed by so many to so few.”

The U.S. air campaigns over Germany taught American military planners a different lesson. Rather than focusing on pilot skills, they concentrated on building planes with superior firepower. In the decades after World War II, the invention of air-to-air missiles was supposed to herald the end of the dogfight. But during the Vietnam War, steep American aircraft losses caused by the acrobatic, Soviet-built MiG fighter showed that one-on-one combat still mattered. The U.S. response to this threat was the highly maneuverable twin-engine F-15 and the formation of a new pilot training academy, the Navy Fighter Weapons School, which inspired the original “Top Gun.”

Since that film’s release, however, aerial combat between fighter planes has largely happened on screen, not in the real world. The last dogfight involving a U.S. aircraft took place in 1999, during the NATO air campaign in Kosovo. The F-14 Tomcats flown by Mr. Cruise’s character have been retired, and his aircraft carrier, the USS Enterprise, has been decommissioned.

Today, conventional wisdom again holds that aerial combat is obsolete. The new F-35 Joint Strike Fighter is meant to replace close-up dogfights with long-range weapons. But not everyone seems to have read the memo about the future of air warfare. Increasingly, U.S. and NATO pilots are having to scramble their planes to head off Russian incursions. The knights of the skies can’t retire just yet.

Historically Speaking: Playing Cards for Fun and Money

From 13th-century Egypt to the Wild West, the standard deck of 52 cards has provided entertainment—and temptation.

ILLUSTRATION: THOMAS FUCHS

More than 8,500 people traveled to Las Vegas to play in this year’s World Series of Poker, which ended July 16—a near-record for the contest. I’m not a poker player myself, but I understand the fun and excitement of playing with real cards in an actual game rather than online. There’s something uniquely pleasurable about a pack of cards—the way they look and feel to the touch—that can’t be replicated on the screen.

Although the origins of the playing card are believed to lie in China, the oldest known examples come from the Mamluk Sultanate, an Islamic empire that stretched across Egypt and the eastern Mediterranean from 1260 to 1517. It’s significant that the empire was governed by a warrior caste of former slaves: A playing card can be seen as an assertion of freedom, since time spent playing cards is time spent freely. The Mamluk card deck consisted of 52 cards divided into four suits, whose symbols reflected the daily realities of soldiering—a scimitar, a polo stick, a chalice and a coin.

Returning Crusaders and Venetian traders were probably responsible for bringing cards to Europe. Church and state authorities were not amused: In 1377, Parisians were banned from playing cards on work days. Like dice, cards were classed as gateway vices that led to greater sins. The authorities may not have been entirely wrong: Some surviving card decks from the 15th century have incredibly bawdy themes.

The suits and symbols used in playing cards became more uniform and less ornate following the advent of the printing press. French printers added a number of innovations, including dividing the four suits into red and black, and giving us the heart, diamond, club and spade symbols. Standardization enabled cards to become a lingua franca across cultures, further enhancing their appeal as a communal leisure activity.

In the 18th century, the humble playing card was the downfall of many a noble family, with vast fortunes being won and lost at the gaming table. Cards also started to feature in paintings and novels as symbols of the vagaries of fortune. The 19th-century short story “The Queen of Spades,” by the Russian writer Alexander Pushkin, beautifully captures the card mania of the period. The anti-hero, Hermann, is destroyed by his obsession with winning at Faro, a game of chance that was as popular in the saloons of the American West as it was in the drawing rooms of Europe. The lawman Wyatt Earp may have won fame in the gunfight at the OK Corral, but he earned his money as a Faro dealer in Tombstone, Ariz.

In Britain, attempts to regulate card-playing through high taxes on the cards themselves were a failure, though they did result in one change: Every ace of spades had to show a government tax stamp, which is why it’s the card that traditionally carries the manufacturer’s mark. The last innovation in the card deck, like the first, had military origins. Many Civil War regiments killed time by playing Euchre, which requires an extra trump card. The Samuel Hart Co. duly obliged with a card which became the forerunner to the Joker, the wild card that doesn’t have a suit.

But we shouldn’t allow the unsavory association of card games with gambling to have the last word. As Charles Dickens wrote in “Nicholas Nickleby”: “Thus two people who cannot afford to play cards for money, sometimes sit down to a quiet game for love.”

Historically Speaking: Beware the Red Tide

Massive algae blooms that devastate ocean life have been recorded since antiquity—and they are getting worse.

Real life isn’t so tidy. Currently, there is no force, biological or otherwise, capable of stopping the algae blooms that are attacking coastal waters around the world with frightening regularity, turning thousands of square miles into odoriferous graveyards of dead and rotting fish. In the U.S., one of the chief culprits is the Karenia brevis algae, a common marine microorganism that blooms when exposed to sunlight, warm water and phosphorus or nitrates. The result is a toxic sludge known as a red tide, which depletes the oxygen in the water, poisons shellfish and emits a foul vapor strong enough to irritate the lungs.

The red tide isn’t a new phenomenon, though its frequency and severity have certainly gotten worse thanks to pollution and rising water temperatures. There used to be decades between outbreaks, but since 1998 the Gulf Coast has suffered one every year.

The earliest description of a red tide may have come from Tacitus, the first-century Roman historian, in his “Annals”: “the Ocean had appeared blood-red and…the ebbing tide had left behind it what looked to be human corpses.” The Japanese recorded their first red tide catastrophe in 1234: An algae bloom in Osaka Bay invaded the Yodo River, a major waterway between Kyoto and Osaka, which led to mass deaths among humans and fish alike.

The earliest reliable accounts of red tide invasions in the Western Hemisphere come from 16th-century Spanish sailors in the Gulf of Mexico. The colorful explorer Álvar Núñez Cabeza de Vaca (ca. 1490-1560) almost lost his entire expedition to red tide poisoning while sailing in Apalachee Bay on the west coast of Florida in July 1528. Unaware that local Native American tribes avoided fishing in the area at that time of year, he allowed his men to gorge themselves on oysters. “The journey was difficult in the extreme,” he wrote afterward, “because neither the horses were sufficient to carry all the sick, nor did we know what remedy to seek because every day they languished.”

Red tides started appearing everywhere in the late 18th and early 19th centuries. Charles Darwin recorded seeing red-tinged water off the coast of Chile during his 1832 voyage on HMS Beagle. Scientists finally identified K. brevis as the culprit behind the outbreaks in 1946-47, but this was small comfort to Floridians, who were suffering the worst red tide invasion in U.S. history. It started in Naples and spread all the way to Sarasota, hanging around for 18 months, destroying the fishing industry and making life unbearable for residents. A 35-mile-long stretch of sea was so thick with rotting fish carcasses that the government dispatched Navy warships to try to break up the mass. People compared the stench to poison gas.

The red tide invasion of 2017-18 was particularly terrible, lasting some 15 months and covering 145 miles of Floridian coastline. The loss to tourism alone neared $100 million. Things are looking better this summer, fortunately, but we need more than hope or luck to combat this plague; we need a weapon that hasn’t yet been invented.

Historically Speaking: How We Kept Cool Before Air Conditioning

Wind-catching towers and human-powered rotary fans were just some of the devices invented to fight the heat.

ILLUSTRATION: THOMAS FUCHS

What would we do without our air conditioning? Given the number of rolling blackouts and brownouts that happen across the U.S. each summer, that’s not exactly a rhetorical question.

Fortunately, our ancestors knew a thing or two about staying cool even without electricity. The ancient Egyptians developed the earliest known technique, evaporative cooling: They hung wet reeds in front of windows, so that the air blowing through them cooled as the water evaporated.

The Romans, the greatest engineers of the ancient world, had more sophisticated methods. By 312 B.C. they were piping fresh water into Rome via underground pipes and aqueducts, enabling the rich to cool and heat their houses using cold water pipes embedded in the walls and hot water pipes under the floor.

Nor were the Romans alone in developing clever domestic architecture to provide relief in hot climes. In the Middle East, architects constructed buildings with “wind catchers”—tall, four-windowed towers that funneled cool breezes down to ground level and allowed hot air to escape. These had the advantage of working on their own, without human labor. The Chinese had started using rotary fans as early as the second century, but they required a dedicated army of slaves to keep them moving. The addition of hydraulic power during the Song era, 960-1279, alleviated but didn’t end the manpower issue.

There had been no significant improvements in air conditioning designs for almost a thousand years when, in 1734, British politicians turned to Dr. John Theophilus Desaguliers, a former assistant to Isaac Newton, and begged him to find a way of cooling the overheated House of Commons. Desaguliers designed a marvelous Rube Goldberg-like system that used all three traditional methods: wind towers, pipes and rotary fans. It actually worked, so long as there was someone to crank the handle at all times.

But the machinery wore out in the late 1760s, leaving politicians as hot and bothered as ever. In desperation, the House invited Benjamin Franklin and other leading scientists to design something new. Their final scheme turned out to be no better than Desaguliers’s and required not one but two men to keep the system working.

The real breakthrough occurred in 1841, when the British engineer David Boswell Reid figured out how to control room temperature using steam power. St. George’s Hall in Liverpool, whose ventilation system Reid designed, is widely considered to be the world’s first air-conditioned building.

Indeed, Reid is one of history’s unsung heroes. His system worked so well that he was invited to install his pipe and ventilation design in hospitals and public buildings around the world. He was working in the U.S. at the start of the Civil War and was appointed inspector of military hospitals. Unfortunately, he died suddenly in 1863, leaving his proposed improvements to gather dust.

The chief problem with Reid’s system was that steam power was about to be overtaken by electricity. When President James Garfield was shot by an assassin in the summer of 1881, naval engineers attempted to keep him cool by using electric fans to blow air over blocks of ice. Two decades later, Willis Haviland Carrier invented the first all-electric air conditioning unit. Architects and construction engineers have been designing around it ever since.

Fears for our power grid may be exaggerated, but it’s good to know that if the unthinkable were to happen and we lost our air conditioners, history can offer us some cool solutions.

Historically Speaking: New Year, Old Regrets

From the ancient Babylonians to Victorian England, the year’s end has been a time for self-reproach and general misery

The Wall Street Journal, January 3, 2019

ILLUSTRATION: ALAN WITSCHONKE

I don’t look forward to New Year’s Eve. When the bells start to ring, it isn’t “Auld Lang Syne” I hear but echoes from the Anglican “Book of Common Prayer”: “We have left undone those things which we ought to have done; And we have done those things which we ought not to have done.”

At least I’m not alone in my annual dip into the waters of woe. Experiencing the sharp sting of regret around the New Year has a long pedigree. The ancient Babylonians required their kings to offer a ritual apology during the Akitu New Year festival: The king would go down on his knees before an image of the god Marduk, beg his forgiveness, insist that he hadn’t sinned against the god himself and promise to do better next year. The rite ended with the high priest giving the royal cheek the hardest possible slap.

There are sufficient similarities between the Akitu festival and Yom Kippur, Judaism’s Day of Atonement—which takes place 10 days after the Jewish New Year—to suggest that there was likely a historical link between them. Yom Kippur, however, is about accepting responsibility, with the emphasis on owning up to sins committed rather than pointing out those omitted.

In Europe, the 14th-century Middle English poem “Sir Gawain and the Green Knight” begins its strange tale on New Year’s Day. A green-skinned knight arrives at King Arthur’s Camelot and challenges the knights to strike at him, on the condition that he can return the blow in a year and a day. Sir Gawain reluctantly accepts the challenge, and embarks on a year filled with adventures. Although he ultimately survives his encounter with the Green Knight, Gawain ends up haunted by his moral lapses over the previous 12 months. For, he laments (in J.R.R. Tolkien’s elegant translation), “a man may cover his blemish, but unbind it he cannot.”

New Year’s Eve in Shakespeare’s era was regarded as a day for gift-giving rather than as a catalyst for regret. But Sonnet 30 shows that Shakespeare was no stranger to the melancholy that looking back can inspire: “I summon up remembrance of things past, / I sigh the lack of many a thing I sought, / And with old woes new wail my dear time’s waste.”

For a full dose of New Year’s misery, however, nothing beats the Victorians. “I wait its close, I court its gloom,” declared the poet Walter Savage Landor in “Mild Is the Parting Year.” Not to be outdone, William Wordsworth offered his “Lament of Mary Queen of Scots on the Eve of a New Year”: “Pondering that Time tonight will pass / The threshold of another year; /…My very moments are too full / Of hopelessness and fear.”

Fortunately, there is always Charles Dickens. In 1844, Dickens followed up the wildly successful “A Christmas Carol” with a slightly darker but still uplifting seasonal tale, “The Chimes.” Trotty Veck, an elderly messenger, takes stock of his life on New Year’s Eve and decides that he has been nothing but a burden on society. He resolves to kill himself, but the spirits of the church bells intervene, showing him a vision of what would happen to the people he loves.

Today, most Americans recognize this story as the basis of the bittersweet 1946 Frank Capra film “It’s a Wonderful Life.” As an antidote to New Year’s blues, George Bailey’s lesson holds true for everyone: “No man is a failure who has friends.”

Historically Speaking: The Miseries of Travel

ILLUSTRATION: THOMAS FUCHS

Today’s jet passengers may think they have it bad, but delay and discomfort have been a part of journeys since the Mayflower

The Wall Street Journal, September 20, 2018

Fifty years ago, on September 30, 1968, the world’s first 747 Jumbo Jet rolled out of Boeing’s plant in Everett, Wash. It was hailed as the future of commercial air travel, complete with fine dining, live piano music and glamorous stewardesses. And perhaps we might still be living in that future, were it not for the 1978 Airline Deregulation Act signed into law by President Jimmy Carter.

Deregulation was meant to increase the competitiveness of the airlines, while giving passengers more choice about the prices they paid. It succeeded in greatly expanding the accessibility of air travel, but at the price of making it a far less luxurious experience. Today, flying is a matter of “calculated misery,” as Columbia Law School professor Tim Wu put it in a 2014 article in the New Yorker. Airlines deliberately make travel unpleasant in order to force economy passengers to pay extra for things that were once considered standard, like food and blankets.

So it has always been with mass travel, since its beginnings in the 17th century: a test of how much discomfort and delay passengers are willing to endure. For the English Puritans who sailed to America on the Mayflower in 1620, light and ventilation were practically non-existent, the food was terrible and the sanitation primitive. All 102 passengers were crammed into a tiny living area just 80 feet long and 20 feet wide. To cap it all, the Mayflower took 66 days to arrive instead of the usual 47 for a trans-Atlantic crossing and was 600 miles off course from its intended destination of Virginia.

The introduction of the commercial stage coach in 1610, by a Scottish entrepreneur who offered trips between Edinburgh and Leith, made it easier for the middle classes to travel by land. But it was still an expensive and unpleasant experience. Before the invention of macadam roads—which rely on layers of crushed stone to create a flat and durable surface—in Britain in the 1820s, passengers sat cheek by jowl on springless benches, in a coach that trundled along at around five miles per hour.

The new paving technology improved the travel times but not necessarily the overall experience. Charles Dickens had already found fame with his comic stories of coach travel in “The Pickwick Papers” when he and Mrs. Dickens traveled on an American stage coach in Ohio in 1842. They paid to have the coach to themselves, but the journey was still rough: “At one time we were all flung together in a heap at the bottom of the coach.” Dickens chose to go by rail for the next leg of the trip, which wasn’t much better: “There is a great deal of jolting, a great deal of noise, a great deal of wall, not much window.”

Despite its primitive beginnings, 19th-century rail travel evolved to offer something revolutionary to its paying customers: quality service at an affordable price. In 1868, the American inventor George Pullman introduced his new designs for sleeping and dining cars. For a modest extra fee, the distinctive green Pullman cars provided travelers with hotel-like accommodation, forcing rail companies to raise their standards on all sleeper trains.

By contrast, the transatlantic steamship operators pampered their first-class passengers and abused the rest. In 1879, a reporter at the British Pall Mall Gazette sailed Cunard’s New York to Liverpool route in steerage in order to “test [the] truth by actual experience.” He was appalled to find that passengers were treated worse than cattle. No food was provided, “despite the fact that the passage is paid for.” The journalist noted that two steerage passengers “took one look at the place” and paid for an upgrade. I think we all know how they felt.

https://www.wsj.com/articles/the-miseries-of-travel-1537455854?mod=searchresults&page=1&pos=1

WSJ Historically Speaking: The Psychology and History of Snipers

PHOTO: THOMAS FUCHS

Sharpshooters helped turn the course of World War II 75 years ago at the Battle of Stalingrad

The Battle of Stalingrad during World War II cost more than a million lives, making it one of the bloodiest battles in human history. The killing began in earnest 75 years ago this week, after the Germans punched through Soviet defenses to reach the outskirts of the city. Once inside, however, they couldn’t get out.

With both sides dug in for the winter, the Russians unleashed one of their deadliest weapons: trained snipers. By the end of the war, Russia had trained more than 400,000 snipers, including thousands of women. At Stalingrad, they had a devastating impact on German morale and fighting capability.