Historically Speaking: Women Who Popped the Question

Tradition holds that women only propose marriage on leap days, but queens have never been afraid to take the initiative.

February 20, 2020

The Wall Street Journal

ILLUSTRATION: THOMAS FUCHS

An old tradition holds that every leap year, on Feb. 29, women may propose marriage to men without censure or stigma. Sources disagree about the origin of this privilege. One attributes it to St. Brigid, who became concerned for all the unmarried women in 5th-century Ireland and persuaded St. Patrick to grant them this relief. Another gives the credit to Queen Margaret of Scotland, who supposedly had the custom written into Scottish law in 1288.

Neither story is likely to be true: St. Brigid, if she even existed, would have been a child at the time of St. Patrick’s death, and Margaret died at the age of 7 in 1290. But around the world, there have always been a few women who exercised the usually male privilege of proposing.

In the Bible, the widowed Ruth, future great-grandmother of King David, asks her kinsman Boaz to marry her—not with words but by lying down at the foot of his bed. On the Bissagos Islands off the coast of Guinea-Bissau, women propose by offering the man of their choice a ceremonial dish of fish marinated in palm oil.

Queen Ankhesenamun of Egypt, who lived in the 14th century B.C., is believed to have made the earliest recorded marriage proposal by a woman. Based on a surviving letter in the Hittite royal archives, scholars have theorized that Ankhesenamun, the widow of the boy king Tutankhamun, secretly asked the Hittite king Suppiluliuma to agree to a match with one of his sons, so that she could avoid a forced marriage to Ay, her late husband’s grand vizier. The surprised and suspicious king eventually sent her his son Zannanza. Unfortunately, the plan leaked out and the Hittite wedding party was massacred at the Egyptian border. Ankhesenamun disappears from the historical record shortly afterward.

The Roman princess Justa Grata Honoria, sister of Emperor Valentinian III, had marginally better luck. In 450 she appealed to Attila, King of the Huns, to marry her, in order to escape an arranged marriage with a minor senator. When Valentinian learned of the plan, he refused to allow it and forced Honoria to wed the senator. In retaliation, Attila launched an attack against the Western Roman Empire on the pretext of claiming his bride, ravaging Gaul and advancing as far as the Po River in northern Italy.

The idea that marriage was a sentimental union between two individuals, rather than an economic or strategic pact between families, gained ground in Europe in the late 18th century. Jane Austen highlighted the clash of values between generations in her 1813 novel “Pride and Prejudice”: Lady Catherine de Bourgh insists that her daughter and Mr. Darcy have been engaged since birth, while the heroine Elizabeth Bennet declares she will have Darcy if she wants.

Two decades later, a 20-year-old Queen Victoria came down on the side of love when she chose her cousin Albert to be her husband. As ruling monarch, Victoria had both the right and the duty to make the proposal herself. As she recorded in her diary on October 15, 1839, “I said to him … that it would make me too happy if he would consent to what I wished.” The 21-year marriage was one of the most successful in royal history.

Although it’s still the custom in most countries for men to propose marriage, leap year or not, there’s more to courtship than getting down on one knee. As the Irving Berlin song goes, “A man chases a girl (until she catches him).”

Historically Speaking: Plagues From the Animal Kingdom

The coronavirus is just the latest of many deadly diseases to cross over to human beings from other species.

February 6, 2020

The Wall Street Journal

Earlier this week, the still-rising death toll in mainland China from the coronavirus surpassed the 349 fatalities recorded during the 2003 SARS epidemic. Although both viruses are believed to have originated in bats, they don’t behave in the same way. SARS spread slowly, but its mortality rate was 9.6%, compared with about 2% for the swift-moving coronavirus.

A scientist examines Ötzi, a 5,300-year-old mummy discovered in the Tyrolean Alps in 1991.
PHOTO: EURAC/MARION LAFOGLER

Statistics tell only one part of the story, however. Advances in the genetic sequencing of diseases have revealed that a vast hinterland of growth and adaptation precedes the appearance of a new disease. Cancer, for example, predates human beings themselves: Last year scientists announced that they had discovered traces of bone cancer in the fossil of a 240-million-year-old shell-less turtle from the Triassic period. This easily surpasses the oldest example of human cancer, which was found in a 1.7 million-year-old toe bone in South Africa. The findings confirm that even though cancer has all kinds of modern triggers such as radiation exposure, asbestos and smoking, the disease is deeply rooted in our evolutionary past.

Unlike cancer, the majority of human diseases are zoonotic, meaning that they are passed between animals and people by viruses, fungi, parasites or bacteria. The rise of agriculture around 10,000 years ago, which forced humans into close contact with animals, was probably the single greatest factor behind the spread of infectious disease.

Rabies was one of the earliest diseases to be recognized as having an animal origin. The law code of Eshnunna, a Mesopotamian city that flourished around 2000 B.C., mandated harsh punishments against owners of mad dogs that bit people. Lyme disease was only identified by scientists in 1975, but it too was an ancient scourge. Ötzi, the 5,300-year-old mummy who was discovered in the Tyrolean Alps with cracked ribs and an arrow wound in his shoulder, was an unlucky fellow even before he was killed. DNA sequencing in 2010 revealed that while he was alive, Ötzi was lactose intolerant, had clogged arteries and suffered from Lyme disease.

Smallpox, which was eradicated by the World Health Organization in 1979, had been one of the most feared diseases for most of human history, with a mortality rate of 30%. Those who survived were often left with severe scarring; the telltale lesions of smallpox have been identified on the mummy of Pharaoh Ramesses V, who died in 1145 B.C. The disease is caused by the variola virus, which is thought to have crossed over to human beings from an animal, likely a rodent, in prehistoric times.

Although it can’t be proved for certain, it is likely that smallpox was behind the terrible plague that killed 20% of the Athenian population in 430 B.C. The historian Thucydides, who lived through it, described the agony of those infected with red pustules, the dead bodies piled high in the temples and the scars left on the survivors. He also noticed that those who did survive acquired immunity to the disease.

Thucydides’s observation turned out to be the key to one of humanity’s greatest weapons against infectious disease, vaccination. But apart from smallpox, the only eradication programs to have made some progress have been against viruses transmitted from human to human, such as polio and measles. Meanwhile, since the 1970s more than 40 new infectious diseases have emerged from the animal realm, including HIV, swine flu and Zika. And those are just the ones we know about.

Historically Speaking: Royal Treasures, Lost and Found

From Montezuma’s gold to the crown jewels of Scotland, some of the world’s most famous valuables have gone missing.

January 23, 2020

The Wall Street Journal

What sort of nitwit goes off in a snowstorm to feed leftovers to the chickens while still wearing her Christmas Day finery? In my defense, I was just trying to share the love. Alas, I ended up sharing an antique ring along with the Brussels sprouts. Only the chickens know where it is, and they aren’t talking.

The Honours of Scotland were recovered in 1818 after being lost for decades. PHOTO: ALAMY

One doesn’t need to read J.R.R. Tolkien’s “The Lord of the Rings” to be reminded that lost treasures are often the result of epic folly. In October 1216, King John of England lost the crown jewels while leading a campaign against rebellious barons. Against all advice, John—who is chiefly remembered for being forced to sign the Magna Carta, one of the cornerstones of civil liberty—took a shortcut via the Wash, a tidal estuary on England’s east coast. He got across the causeway just in time to see the waters come rushing up behind him. The wagon train with all his supplies and baggage—including, crucially, the king’s treasury—sank without a trace. The incident has given countless British schoolchildren the joy of being able to say, “Bad King John lost his crown in the wash.”

Folly also played a starring role in the disappearance of the treasure of the Aztec Emperor Montezuma II. In 1520, the inhabitants of the Aztec city of Tenochtitlan rose up against the occupying Spanish forces led by Hernán Cortés. By July 1, the situation was so critical that the outnumbered conquistadors attempted a midnight escape from the city. Hampered by their haul of plunder, however, the Spanish were too slow in crossing Lake Texcoco. Unable to run or fight, the conquistadors let desperation overcome their greed and tossed the treasures into the water. Despite losing half his men on what he called “La Noche Triste,” the Night of Sorrows, Cortés captured the Aztec capital a year later. But he never found the lost gold.

It was a case of absent-mindedness that led to the disappearance of the Scottish royal sword, scepter and crown, known collectively as the Honours of Scotland. Having been successfully hidden during the Interregnum, England’s brief experiment with republicanism in 1649-60, the Honours were returned to Edinburgh Castle for safekeeping. Too safe, it turned out: No one could remember where they were. But the story has a happy ending. In 1818, the Scottish novelist Sir Walter Scott received permission to conduct his own search of the castle. He found the Honours in a locked storeroom, inside a trunk packed with linens.

Occasionally, royal treasures have been lost on purpose. One of the last rulers to be buried with his jewels was Emperor Tu Duc of Vietnam (1829-83). To outwit potential grave robbers, he left orders that he should not be buried in his elaborate official tomb but in a secret location; to ensure it stayed secret, the laborers who interred him were executed. In 1913, Georges Mahé, the French colonial administrator of the Vietnamese city of Hue, provoked a national outcry after he dug up Tu Duc’s official tomb in the hope of finding the hidden jewels. The French swiftly apologized and Mahé was sacked.

Tu Duc’s treasure remains lost, but it may not stay that way forever. Earlier this month, scientists in Mexico City confirmed that a gold bar found on a construction site is one of the ingots discarded by Cortés and his fleeing conquistadors almost exactly 500 years ago.

Historically Speaking: The Blessing and Curse of ‘Black Gold’

From the pharaohs to John D. Rockefeller, oil has been key to the growth of civilization—but it comes at a high cost.

January 10, 2020

The Wall Street Journal

This January marks the 150th anniversary of the Standard Oil Company, incorporated in 1870 by John D. Rockefeller and three partners. Such was their drive and ruthlessness that within a decade Standard Oil became a vast monopoly, controlling over 90% of America’s oil refineries.

Spindletop Hill in Beaumont, Texas, was the site of the first Texas oil gusher in 1901. PHOTO: TEXAS ENERGY MUSEUM/NEWSMAKERS/GETTY IMAGES

Standard Oil’s tentacle-like grip on U.S. commerce was finally pried loose in 1911, when the Supreme Court broke it up into 33 separate companies. But this victory didn’t put an end to the nefarious activities surrounding “black gold.” In the 1920s, tycoon Edward Doheny was drawn into the Teapot Dome scandal after he gave a $100,000 bribe to Secretary of the Interior Albert Fall. Doheny served as the inspiration for the corrupt and blood-soaked tycoon J. Arnold Ross in Upton Sinclair’s 1927 novel “Oil!” (which in turn inspired the Oscar-winning 2007 film “There Will Be Blood”).

Though clearly responsible for a great many evils, oil has also been key to the growth of human civilization. As the Bible attests, bitumen—a naturally forming liquid found in oil sands and pitch lakes—was used in ancient times for waterproofing boats and baskets. It also played an important role in Egyptian burial practices: The word “mummy” is derived from mumiyyah, the Arabic word for bitumen.

Over the centuries, oil proved useful in a variety of ways. As early as the fourth century, the Chinese were drilling for oil with bamboo pipes and burning it as heating fuel. In Central Eurasia it was a treatment for mange in camels. By the ninth century, Persian alchemists had discovered how to distill oil into kerosene to make light. The oil fields of medieval Baku, in today’s Azerbaijan, brought trade and culture to the region, rather than political oppression and underdevelopment, as is often the case in oil-rich countries today.

The drilling of the first commercial oil well in the U.S., in Pennsylvania in 1859, brought a raft of benefits. In the 19th century, an estimated 236,000 sperm whales were killed to make oil for lamps. The whaling industry died overnight once Standard Oil began marketing a clean-smelling version of kerosene. Plentiful oil also made the automobile industry possible. In 1901, when a massive oil gusher was discovered in Spindletop, Texas, there were 14,800 cars in the U.S.; two decades later, there were 8.5 million.

After World War II, the world’s oil supply was dominated not by private companies like Standard Oil but by global alliances such as OPEC. When OPEC nations declared an embargo in 1973, the resulting crisis caused the price of oil to climb nearly 400%.

At the time, the U.S. depended on foreign suppliers for 36% of its oil supply. Last month, the U.S. Energy Information Administration announced that, thanks to new technologies such as hydraulic fracturing, the country had become a net exporter of oil for the first time in 75 years.

Though helpful geopolitically, America’s oil independence doesn’t solve the environmental problems caused by carbon emissions. Ironically, some of John D. Rockefeller’s own descendants, aided by the multibillion-dollar fortune he bequeathed, are now campaigning against Exxon Mobil—one of the 33 Standard Oil offshoots—over its record on climate change.

Historically Speaking: Electric Lights for Yuletide

In 1882, Thomas Edison’s business partner put up a Christmas tree decorated with 80 red, white and blue bulbs—and launched an American tradition.

The Wall Street Journal, December 5, 2019

As a quotation often attributed to Maya Angelou has it, “You can tell a lot about a person by the way (s)he handles these three things: a rainy day, lost luggage and tangled Christmas tree lights.” I’m not sure what it says about me that I actually look forward to getting my hands on the latter. My house is so brightly decorated with energy-saving LEDs that it could double as a landing beacon on a foggy night. It’s the one thing I really missed when I lived abroad—no other country does Christmas lights like America. More than 80 million households put up lighting displays each December, creating a seasonal spike in U.S. energy use that’s bigger than the annual consumption of some small countries.

Holiday lights in Brooklyn, 2015. PHOTO: GETTY IMAGES

Hanging festive lighting during the winter solstice is an ancient practice, but the modern version owes its origins to Thomas Edison and his business partner Edward H. Johnson. Edison perfected the first fully functional lightbulb in 1879. For Christmas the following year, he strung up lights outside his Menlo Park factory—partly to provide good cheer but mostly to advertise the benefits of electrification. Johnson went one further in 1882, placing a Christmas tree decorated with 80 red, white and blue blinking lightbulbs on a revolving turntable in his parlor window in Manhattan.

Johnson repeated the display every year, much to the delight of New Yorkers, striving to make it bigger and better each time. He thus founded the other great American tradition: the competitive light display. The first person to take up Johnson’s challenge was President Grover Cleveland, who in 1894 erected an enormous multi-light Christmas tree in the White House, thereby starting a new presidential tradition.

The initial $300 price tag for an electrified Christmas tree (about $2,000 today) put it beyond the reach of the average consumer. The alternative was clip-on candles, but they were so hazardous that by 1910 most home insurance policies contained a nonpayment clause for house fires caused by candlelit Christmas trees. Although it was possible to rent electric Christmas lights for the season, and the General Electric Company was beginning to produce easy-to-assemble kits, the stark difference between lit and unlit homes threatened to become a powerful symbol of social inequality.

Fortunately, a New Yorker named Emilie D. Lee Herreshoff was on the case. She persuaded the city council to allow an electrified Christmas tree to be put up in Madison Square Park. The inaugural tree-lighting celebration, on December 24, 1912, generated so much public enthusiasm that within two years over 300 cities and towns were holding similar ceremonies.

Not content with just one festive tree, in 1920 the city of Pasadena, Calif., agreed to light up the 150 mature evergreens lining Santa Rosa Avenue, leading to its nickname “Christmas Tree Lane.” This was quite a feat of electrical engineering, given that outdoor Christmas lights didn’t become commercially available until 1927.

To encourage buyers, GE began sponsoring local holiday lighting contests, unleashing a competitive spirit each Yuletide that only seems to have grown stronger with the passing decades. Since 2014, the Guinness Book of World Records title for the most lights on a private residence has been held by the Gay family of LaGrangeville, N.Y., with strong competition from Australia. To which I say, “God bless them, everyone.”

Historically Speaking: Funding Wars Through the Ages

U.S. antiterror efforts have cost nearly $6 trillion since the 9/11 attacks. Earlier governments, from the ancient Greeks to Napoleon, have had to get creative to finance their fights.

The Wall Street Journal, October 31, 2019

The successful operation against Islamic State leader Abu Bakr al-Baghdadi is a bright spot in the war on terror that the U.S. declared in response to the attacks of 9/11. The financial costs of this long war have been enormous: nearly $6 trillion to date, according to a recent report by the Watson Institute of International and Public Affairs at Brown University, which took into account not just the defense budget but other major costs, like medical and disability care, homeland security and debt.

ILLUSTRATION: THOMAS FUCHS

War financing has come a long way since the ancient Greeks formed the Delian League in 478 B.C., which required each member state to contribute an agreed amount of money each year rather than troops. With the League’s financial backing, Athens became the Greek world’s first military superpower—at least until the Spartans, helped by the Persians, built up their naval fleet with tribute payments extracted from dependent states.

The Romans maintained their armies through tributes and taxes until the Punic Wars—three lengthy conflicts between 264 and 146 B.C.—proved so costly that the government turned to debasing the coinage in an attempt to increase the money supply. The result was runaway inflation and eventually a sovereign debt crisis during the Social War a half-century later between Rome and several breakaway Italian cities. The government ended up defaulting in 86 B.C., sealing the demise of the ailing Roman Republic.

After the fall of Rome in the late fifth century, wars in Europe were generally financed by plunder and other haphazard means. William the Conqueror financed the Norman invasion of England in 1066 the ancient Roman way, by debasing his currency. He learned his lesson and paid for all subsequent operations out of tax receipts, which stabilized the English monetary system and established a new model for financing war.

Taxation worked until European wars became too expensive for state treasuries to fund alone. Rulers then resorted to a number of different methods. During the 16th century, Philip II of Spain turned to the banking houses of Genoa to raise the money for his Armada invasion fleet against England. Seizing the opportunity, Sir Francis Walsingham, Elizabeth I’s chief spymaster, sent agents to Genoa with orders to use all legal means to sabotage and delay the payment of Philip’s bills of credit. The operation bought England a crucial extra year of preparation.

In his own financial preparations to fight England, Napoleon had better luck than Philip II: In 1803 he was able to raise a war chest of over $11 million in cash by selling the Louisiana Territory to the U.S.

Napoleon was unusual in having a valuable asset to offload. By the time the American Civil War broke out in 1861, governments had become reliant on a combination of taxation, printing money and borrowing to pay for war. But the U.S. had lacked a regulated banking system since President Andrew Jackson’s dismantling of the Second Bank of the United States in the 1830s. The South resorted to printing paper money, which depreciated dramatically. The North could afford to be more innovative: In 1862 the financier Jay Cooke invented the war bond, which was marketed with great success to ordinary citizens. By the war’s end, the bonds had covered two-thirds of the North’s costs.

Incurring debt is still how the U.S. funds its wars. It has helped to shield the country from the full financial effects of its prolonged conflicts. But in the future it is worth remembering President Calvin Coolidge’s warning: “In any modern campaign the dollars are the shock troops…. A country loaded with debt is devoid of the first line of defense.”

Historically Speaking: The Many Roads to Vegetarianism

Health, religion and animal rights have all been advanced as reasons not to eat meat.

The Wall Street Journal, October 18, 2019

ILLUSTRATION: PETER ARKLE

The claim that today’s ingeniously engineered fake meat tastes like the real thing and helps the planet is winning over consumers from the carnivore side of the food aisle. According to Barclays, the alt-meat market could be worth $140 billion a year a decade from now. But the argument over the merits of vegetarianism is nothing new; it’s been going on since ancient times.

Meat played a pivotal role in the evolution of the human brain, providing the necessary calories and protein to enable it to increase in size. Nonetheless, meat-eating remained a luxury in the diets of most early civilizations. It wasn’t much of a personal sacrifice, therefore, when the Greek philosopher Pythagoras (ca. 570-495 B.C.), author of the famous theorem, became what many consider the first vegetarian by choice. Pythagoreans believed that humans could be reincarnated as animals and vice versa, meaning that if you ate meat, Aunt Lydia could end up on your plate.

The anti-meat school of thought was joined a century later by Plato, who argued in the “Republic” that meat consumption encouraged decadence and warlike behavior. These views were strongly countered by Aristotelian philosophy, which taught that animals exist for human use—an opinion that the Romans heartily endorsed.

The avoidance of meat for moral and ascetic reasons also found a home in Buddhism and Hinduism. Ashoka the Great, the 3rd-century B.C. Buddhist emperor of the Maurya Dynasty of India, abolished animal sacrifice and urged his people to abstain from eating flesh.

It wasn’t until the Enlightenment, however, that Western moralists and philosophers began to argue for vegetarianism on the grounds that we have a moral duty to avoid causing animals pain. In 1641 the Massachusetts Bay Colony passed one of the earliest laws against animal cruelty. By the early 19th century, the idea that animals have rights had started to take hold: The English Romantic poet Percy Bysshe Shelley proselytized for vegetarianism, as did the American transcendentalist thinker Henry David Thoreau, who wrote in “Walden”: “I have no doubt that it is part of the destiny of the human race … to leave off eating animals.”

The word “vegetarian” first appeared in print in England in 1842. Within a decade there were vegetarian societies in Britain and America. Echoing the Platonists rather than Pythagoras, these groups took self-denial, as opposed to animal welfare, as their guiding motivation. Sylvester Graham, the leader of the early American vegetarian movement, also urged sexual abstinence on his followers.

Vegetarianism finally escaped its moralistic straitjacket at the end of the 19th century, when the health guru John Harvey Kellogg, the inventor of corn flakes, popularized meat-free living for reasons of bodily well-being at his Battle Creek Sanitarium in Michigan.

There continue to be mixed motivations for vegetarianism today. Burger King’s meatless Impossible Whopper may be “green,” but it has less protein and virtually the same number of calories as the original. A healthier version will no doubt appear before long, and some people hope that when lab-grown meat hits the market in a few years, it will be as animal- and climate-friendly as plant-based food. With a lot of science and a bit of luck, vegetarians and meat-eaters may end up in the same place.

Historically Speaking: The Long Road to Cleanliness

The ancient Babylonians and Egyptians knew about soap, but daily washing didn’t become popular until the 19th century.

As the mother of five teenagers, I have a keen appreciation of soap—especially when it’s actually used. Those little colored bars—or more frequently nowadays, dollops of gel—represent one of the triumphs of civilization.

Adolescents aside, human beings like to be clean, and any product made of fats or oils, alkaline salts and water will help them to stay that way. The Babylonians knew how to make soap as early as 2800 B.C., although it was probably too caustic for washing anything except hair and textiles. The Ebers Papyrus, an ancient Egyptian medical document from 1550 B.C., suggests that the Egyptians used soap only for treating skin ailments.

ILLUSTRATION: THOMAS FUCHS

The Greeks and Romans also avoided washing with harsh soaps, until Julius Caesar’s conquest of Gaul in 58 B.C. introduced them to a softer Celtic formula. Aretaeus of Cappadocia, a 1st century A.D. Greek physician, wrote that “those alkaline substances made into balls” are a “very excellent thing to cleanse the body in the bath.”

Following Rome’s collapse in the 5th century, the centers of soap-making moved to India, Africa and the Middle East. In Europe, soap suffered from being associated with ancient paganism. In the 14th century, Crusaders returning from the Middle East brought back with them a taste for washing with soap and water, but not in sufficient numbers to slow the spread of plague.

Soap began to achieve wider acceptance in Europe during the Renaissance, though geography still played a role: Southern countries had the advantage of making soap out of natural oils and perfumes, while the colder north had to make do with animal fats and whale blubber. Soap’s growing popularity also attracted the attention of revenue-hungry governments. In 1632, in one of the earliest documented cases of crony capitalism, King Charles I of England granted a group of London soapmakers a 14-year monopoly in exchange for annual payments of 4 pounds per ton sold.

Soap remained a luxury item, however, until scientific advances during the age of the Enlightenment made large-scale production possible: In 1790, the French chemist Nicolas Leblanc discovered how to make alkali from common salt. The saying “cleanliness is next to godliness”—credited to John Wesley, the founder of Methodism—was a great piece of free advertising, but it was soap’s role in modern warfare that had a bigger impact on society. During the Crimean War in Europe and the Civil War in the U.S., high death tolls from unsanitary conditions led to new requirements that soldiers use soap every day.

In the late 19th century, soap manufacturers helped to jump-start the advertising industry with their use of catchy poems and famous artworks as marketing tools. British and American soapmakers were ahead of their time in other ways, too: Lever (now Unilever) built housing for its workers, while Procter & Gamble pioneered the practice of profit-sharing.

And it was Procter & Gamble that made soap the basis for one of the most influential cultural institutions of the last century. Having read reports that women would like to be entertained while doing housework, the company decided to sponsor the production of daytime radio domestic dramas. Thus began the first soap opera, “Ma Perkins,” a 15-minute tear-laden serial that ran from 1933 until 1960—and created a new form of storytelling.

Historically Speaking: Fashion Shows: From Royal to Retail

The catwalk has always been a place for dazzling audiences as well as selling clothes

The 2007 Fendi Fall Collection show at the Great Wall of China. PHOTO: GETTY IMAGES

As devotees know, the fashion calendar is divided between the September fashion shows, which display the designers’ upcoming spring collections, and the February shows, which preview the fall. New York Fashion Week, which wraps up this weekend, is the world’s oldest; it started in 1943, when it was called “press week,” and always goes first, followed by London, Milan, and Paris.

Although fashion week is an American invention, the twice-yearly fashion show can be traced back to the court of Louis XIV of France in the 17th century. The king insisted on a seasonal dress code at court as a way to boost the French textile industry: velvet and satin in the winter, silks in the summer. The French were also responsible for the rise of the dress designer: Charles Frederick Worth opened the first fashion house in Paris in 1858. Worth designed unique dresses for individual clients, but he made his fortune with seasonal dress collections, which he licensed to the new department stores that were springing up in the world’s big cities.

Worth’s other innovation was the use of live models instead of mannequins. By the late 1800s this had evolved into the “fashion parade,” a precursor to today’s catwalk, which took place at invitation-only luncheons and tea parties. In 1903, the Ehrich brothers transported the fashion parade idea to their department store in New York. The big difference was that the dresses on show could be bought and worn the same day. The idea caught on, and all the major department stores began holding fashion shows.

The French couture houses studiously ignored the consumer-friendly approach pioneered by American retailers. After World War II, however, they had to tout for business like anyone else. The first Paris fashion week took place in 1947. But unlike New York’s, which catered only to journalists and wholesale buyers, the Paris shows still put their emphasis on haute couture.

The two different types of fashion show—the selling kind, organized by department stores for the public, and the preview kind, held by designers for fashion insiders—coexisted until the 1960s. Suddenly, haute couture was out and buying off the rack was in. The retail fashion show became obsolete as the design houses turned to ready-to-wear collections and accessories such as handbags and perfume.

Untethered from its couture roots, the designer fashion show morphed into performance art—the more shocking the better. The late designer Alexander McQueen provocatively titled his 1995 Fall show “Highland Rape” and sent out models in bloodied and torn clothes. The laurels for the most insanely extravagant runway show still belong to Karl Lagerfeld, who staged his 2007 Fendi Fall Collection on the Great Wall of China at a cost of $10 million.

But today there’s trouble on the catwalk. Poor attendance has led to New York’s September Fashion Week shrinking to a mere five days. Critics have started to argue that the idea of seasonal collections makes little sense in today’s global economy, while the convenience of e-commerce has made customers unwilling to wait a week for a dress, let alone six months. Designers are putting on expensive fashion shows only to have their work copied and sold to the public at knockdown prices a few weeks later. The Ehrich brothers may have been right after all: don’t just tell, sell.

Historically Speaking: Before Weather Was a Science

Modern instruments made accurate forecasting possible, but humans have tried to predict the weather for thousands of years.

The Wall Street Journal, August 31, 2019

ILLUSTRATION: THOMAS FUCHS

Labor Day weekend places special demands on meteorologists, even when there’s not a hurricane like Dorian on the way. September weather is notoriously variable: In 1974, Labor Day in Iowa was a chilly 43 degrees, while the following year it was a baking 103.

Humanity has always sought ways to predict the weather. The invention of writing during the 4th millennium B.C. was an important turning point for forecasting: It allowed the ancient Egyptians to create the first weather records, using them as a guide to predict the annual flood level of the Nile. Too high meant crop failures, too low meant drought.

Some early cultures, such as the ancient Greeks and the Mayans, based their weather predictions on the movements of the stars. Others relied on atmospheric signs and natural phenomena. One of the oldest religious texts in Indian literature, the Chandogya Upanishad from the 8th century B.C., includes observations on various types of rain clouds. In China, artists during the Western Han Dynasty (206 B.C.-9 A.D.) painted “cloud charts” on silk for use as weather guides.

These early forecasting attempts weren’t simply products of magical thinking. The ancient adage “red sky at night, shepherd’s delight,” which Jesus mentions in the gospel of Matthew, is backed by hard science: The sky appears red when a high-pressure front moves in from the west, driving the clouds away.

In the 4th century B.C., Aristotle tried to provide rational explanations for weather phenomena in his treatise Meteorologica. His use of the scientific method laid the foundations for modern meteorology. The problem was that nothing could be built on Aristotle’s ideas until the invention of such tools as the thermometer (an early version was produced by Galileo in 1593) and the barometer (invented by his pupil Torricelli in 1643).

Such instruments couldn’t predict anything on their own, but they made possible accurate daily weather observations. Realizing this, Thomas Jefferson, a pioneer in modern weather forecasting, ordered Meriwether Lewis and William Clark to keep meticulous weather records during their 1804-06 expedition to the American West. He also made his own records wherever he resided, writing in his meteorological diary, “My method is to make two observations a day.”

Most governments, however, remained dismissive of weather forecasting until World War I. Suddenly, knowing which way the wind would blow tomorrow meant the difference between gassing your own side or the enemy’s.

To make accurate predictions, meteorologists needed a mathematical model that could combine different types of data into a single forecast. The first attempt, by the English mathematician Lewis Fry Richardson in 1917, took six weeks to calculate and turned out to be completely wrong.

There were still doubts about the accuracy of weather forecasting when the Allied meteorological team told Supreme Commander Dwight Eisenhower that there was only one window of opportunity for a Normandy landing: June 6, 1944. Despite his misgivings, Eisenhower acted on the information, surprising German meteorologists who had predicted that storms would continue in the English Channel until mid-June.

As we all know, meteorologists still occasionally make the wrong predictions. That’s when the old proverb comes into play: “There is no such thing as bad weather, only inappropriate clothes.”