Historically Speaking: The Women Who Have Gone to War

There have been female soldiers since antiquity, but only in modern times have military forces accepted and integrated them

The Wall Street Journal

July 14, 2022

“War is men’s business,” Prince Hector of Troy declares in Homer’s Iliad, a sentiment shared by almost every culture since the beginning of history. But Hector was wrong. War is women’s business, too, even though their roles are frequently overlooked.

This month marks 75 years since the first American woman received a regular Army commission. Florence Aby Blanchfield, superintendent of the 59,000-strong Army Nurse Corps during World War II, was appointed lieutenant colonel by General Dwight Eisenhower in July 1947. Today, women make up approximately 19% of the officer corps of the Armed Forces.

The integration of women into the military is a fundamental difference between the ancient and modern worlds. In the past, a weapon-wielding woman was seen as symbolizing the shame and emasculation of men. Among the foundation myths depicted on the Parthenon are scenes of the Athenian army defeating the Amazons, a race of warrior women.

Such propaganda couldn’t hide, however, the fact that in real life the Greeks and Romans on occasion fought and even lost against female commanders. Artemisia I of Caria was one of the Persian king Xerxes’s most successful naval commanders. Hearing about her exploits against the Greeks during the Battle of Salamis in 480 B.C., he is alleged to have exclaimed: “My men have become women, and my women men.” The Romans crushed Queen Boudica’s revolt in what is now eastern England in 61 A.D., but not before she had destroyed the 9th Roman Legion and massacred 70,000 others.

The medieval church was similarly torn between ideology and reality in its attitude toward female Christian warriors. Yet women did take part in the Crusades. Most famously, Queen Eleanor of Aquitaine accompanied her husband King Louis VII of France on the Second Crusade (1147-1149) and was by far the better strategist of the two. However, Eleanor’s enemies cited her presence as proof that she was a gender-bending harlot.

Florence Aby Blanchfield was the first woman to receive a regular commission with the U.S. Army

For centuries, the easiest way for a woman to become a soldier was to pass as a boy. In 1782, Massachusetts-born Deborah Sampson became one of the first American women to fight for her country by enlisting as a youth named Robert Shurtleff. During the Civil War, anywhere between 400 and 750 women practiced similar deceptions.

A dire personnel shortage finally opened a legal route for women to enter the Armed Forces. Unable to meet its recruitment targets, in March 1917 the U.S. Navy announced that it would allow all qualified persons to enlist in the reserves. Loretta Perfectus Walsh, a secretary in the Philadelphia naval recruiting office, signed up almost immediately. The publicity surrounding her enlistment as the Navy’s first female chief yeoman encouraged thousands more to step forward.

World War II proved to be similarly transformative. In the U.S., more than 350,000 women served in uniform. In Britain, the future Queen Elizabeth II made history by training as a military mechanic in the women’s branch of the British Army.

Although military women have made steady gains in terms of parity, the debate over their presence is by no means over. Yet the “firsts” keep coming. In June, Adm. Linda L. Fagan of the U.S. Coast Guard became the first woman to lead a branch of the U.S. Armed Forces. Then as now, whatever the challenge, there’s always been a woman keen to accept it.

Historically Speaking: The Quest to Understand Skin Cancer

The 20th-century surgeon Frederic Mohs made a key breakthrough in treating a disease first described in ancient Greece.

The Wall Street Journal

June 30, 2022

July 1 marks the 20th anniversary of the death of Dr. Frederic Mohs, the Wisconsin surgeon who revolutionized the treatment of skin cancer, the most common form of cancer in the U.S. Before Mohs achieved his breakthrough in 1936, the best available treatment was drastic surgery without even the certainty of a cure.

Skin cancer is by no means a new illness or confined to one part of the world; paleopathologists have found evidence of it in the skeletons of 2,400-year-old Peruvian mummies. But it wasn’t recognized as a distinct cancer by ancient physicians. Hippocrates in the 5th century B.C. came the closest, noting the existence of deadly “black tumors (melas oma) with metastasis.” He was almost certainly describing malignant melanoma, a skin cancer that spreads quickly, as opposed to the other two main types, basal cell and squamous cell carcinoma.

ILLUSTRATION: THOMAS FUCHS

After Hippocrates, nearly 2,000 years elapsed before earnest discussions about black metastasizing tumors began to appear in medical writings. The first surgical removal of a melanoma took place in London in 1787. The surgeon involved, a Scotsman named John Hunter, was mystified by the large squishy thing he had removed from his patient’s jaw, calling it a “cancerous fungus excrescence.”

The “fungoid disease,” as some referred to skin cancer, yielded up its secrets by slow degrees. In 1806 René Laënnec, the inventor of the stethoscope, published a paper in France on the metastatic properties of “La Melanose.” Two decades later, Arthur Jacob in Ireland identified basal cell carcinoma, which was initially referred to as “rodent ulcer” because the ragged edges of the tumors looked as though they had been gnawed by a mouse.

By the beginning of the 20th century, doctors had become increasingly adept at identifying skin cancers in animals as well as humans, making the lack of treatment options all the more frustrating. In 1933, Mohs was a 23-year-old medical student assisting with cancer research on rats when he noticed the destructive effect of zinc chloride on malignant tissue. Excited by its potential, within three years he had developed a zinc chloride paste and a technique for using it on cancerous lesions.

He initially described it as “chemosurgery” since the cancer was removed layer by layer. The results for his patients, all of whom were inmates of either the local prison or the mental health hospital, were astounding. Even so, his method was so novel that the Dane County Medical Association in Wisconsin accused him of quackery and tried to revoke his medical license.

Mohs continued to encounter stiff resistance until the early 1940s, when the Quislings, a prominent Wisconsin family, turned to him out of sheer desperation. Their son, Abe, had a lemon-sized tumor on his neck that other doctors had declared inoperable and fatal. His recovery silenced Mohs’s critics, although the doubters remained an obstacle for several more decades. Nowadays, a modern version of “Mohs surgery,” using a scalpel instead of a paste, is the gold standard for treating many forms of skin cancer.

Historically Speaking: The Modern Flush Toilet Has Ancient Origins

Even the Minoans of Crete found ways to whisk away waste with flowing water.

The Wall Street Journal

June 9, 2021

Defecation is a great equalizer. As the 16th-century French Renaissance philosopher Michel de Montaigne put it trenchantly in his Essays, “Kings and philosophers shit, as do ladies.”

Yet, even if each person is equal before the loo, not all toilets are considered equal. Sitting or squatting, high or low-tech, single or dual flush: Every culture has preferences and prejudices. A top-end Japanese toilet with all the fixtures costs as much as a new car.

ILLUSTRATION: THOMAS FUCHS

Pride in having the superior bathroom experience goes back to ancient times. As early as 2500 B.C., wealthy Mesopotamians could boast of having pedestal lavatories and underfloor piping that fed into cesspits. The Harappans of the Indus Valley Civilization went one better, building public drainage systems that enabled even ordinary dwellings to have bathrooms and toilets. Both, however, were surpassed by the Minoans of Crete, who invented the first known flush toilet, using roof cisterns that relied on the power of gravity to flush the contents into an underground sewer.

The Romans’ greatest contribution to sanitary comfort was the public restroom. By 300 B.C., Rome had nearly 150 public toilet facilities. These were communal, multi-seater affairs consisting of long stone benches with cut-out holes set over a channel of continuously running water. Setting high standards for hygiene, the restrooms had a second water trough for washing and sponging.

Although much knowledge and technology was lost during the Dark Ages, the Monty Python depiction of medieval society as unimaginably filthy was something of an exaggeration. Castle bedrooms were often en-suite, with pipes running down the exterior walls or via internal channels to moats or cesspits. Waste management was fraught with danger, though—from mishaps as much as disease.

The most famous accident was the Erfurt Latrine Disaster. In 1184, King Henry VI of Germany convened a royal gathering at Petersburg Citadel in Erfurt, Thuringia. Unfortunately, the ancient hall was built over the citadel’s latrines. The meeting was in full swing when the wooden flooring suddenly collapsed, hurling many of the assembled nobles to their deaths in the cesspit below.

Another danger was the vulnerability of the sewage system to outside penetration. Less than 20 years after Erfurt, French troops in Normandy captured the English-held Chateau Gaillard by climbing up the waste shafts.

Sir John Harington, a godson of Queen Elizabeth I, rediscovered the flushable toilet in 1596. Her Majesty had one installed at Richmond Palace. But the contraption failed to catch on, perhaps because smells could travel back up the pipe. The Scottish inventor Alexander Cumming solved that problem in the late 18th century by introducing an S-shaped pipe below the bowl that prevented sewer gas from escaping.

Thomas Crapper, contrary to lore, didn’t invent the modern toilet: He was the chief supplier to the royal household. Strangely for a country renowned for its number of bathrooms per household, the U.S. granted its first patent for a toilet—or “plunger closet”—only in 1857. As late as 1940, some 45% of households still had outhouses. The American toilet race, like the space race, only took off later, in the ‘50s. There is no sign of its slowing down. Coming to a galaxy near you: The cloud-connected toilet that keeps track of your vitals and cardiovascular health.

Historically Speaking: Inflation Once Had No Name, Let Alone Remedy

Empires from Rome to China struggled to restore the value of currencies that spiraled out of control

The Wall Street Journal

May 27, 2022

Even if experts don’t always agree on the specifics, there is broad agreement on what inflation is and on its dangers. But this consensus is relatively new: The term “inflation” only came into general usage during the mid-19th century.

Long before that, Roman emperors struggled to address the nameless affliction by debasing their coinage, which only worsened the problem. By 268 A.D., the silver content of the denarius had dropped to 0.5%, while the price of wheat had risen almost 200-fold. In 301, Emperor Diocletian tried to restore the value of Roman currency by imposing rigid controls on the economy. But the reforms addressed inflation’s symptoms rather than its causes. Even Diocletian’s government preferred to collect taxes in kind rather than in specie.

A lack of knowledge about the laws of supply and demand also doomed early Chinese experiments in paper money during the Southern Song, Mongol and Ming Dynasties. Too many notes wound up in circulation, leading to rampant inflation. Thinking that paper was the culprit, the Chongzhen Emperor hoped to restore stability by switching to silver coins. But these introduced other vulnerabilities. In the 1630s, the decline of Spanish silver from the New World (alongside a spate of crop failures) resulted in a money shortage—and a new round of inflation. The Ming Dynasty collapsed not long after, in 1644.

Spain was hardly in better shape. The country endured unrelenting inflation during the so-called Price Revolution in Europe in the 16th and 17th centuries, as populations increased, demand for goods spiraled and the purchasing power of silver collapsed. The French political theorist Jean Bodin recognized as early as 1568 that rising prices were connected to the amount of money circulating in the system. But his considered view was overlooked in the rush to find scapegoats, such as the Jews.

ILLUSTRATION: THOMAS FUCHS

The great breakthrough came in the 18th century as classical economists led by Adam Smith argued that the market was governed by laws and could be studied like any other science. Smith also came close to identifying inflation, observing that wealth is destroyed when governments attempt to “inflate the currency.” The term “inflation” became common in the mid-19th century, particularly in the U.S., in the context of boom and bust cycles caused by an unsecured money supply.

Ironically, the worst cases of inflation during the 20th century coincided with the rise of increasingly sophisticated models for predicting it. The hyperinflation of the German Papiermark during the Weimar Republic in 1921-23 may be the most famous, but it pales in comparison to the Hungarian Pengo in 1945-46. Fed by the government’s weak response, inflation ran so hot that at its peak prices doubled every 15 hours. The one billion trillion Pengo note was worth about one pound sterling. By 1949 the currency had gone—and so had Hungary’s democracy.

In 1982, the U.S. Federal Reserve under Paul Volcker achieved a historic victory over what became known as the Great Inflation of the 1960s and ‘70s. It did so through an aggressive regimen of high interest rates to curb spending. Ordinary Americans suffered high unemployment as a result, but the country endured. As with any affliction, it isn’t enough for doctors to identify the cause: The patient must be prepared to take his medicine.

Historically Speaking: Typos Have Been Around as Long as Writing Itself

Egyptian engravers, medieval scribes and even Shakespeare’s printer made little mistakes that have endured

The Wall Street Journal

May 12, 2022

The Lincoln Memorial in Washington, D.C., is 100 years old this month. The beloved national monument is no less perfect for having one slight flaw: The word “future” in the Second Inaugural Address was mistakenly carved as “Euture.” It is believed that the artist, Ernest C. Bairstow, accidentally picked up the “e” stencil instead of the “f.” He tried to fix it by filling in the bottom line, but the smudged outline is still visible.

Bairstow was by no means the first engraver to rely on the power of fillers. It wasn’t uncommon for ancient Egyptian carvers—most of whom were illiterate—to botch their inscriptions. The seated Pharaoh statue at the Penn Museum in Philadelphia depicts Rameses II, third Pharaoh of the 19th dynasty, who lived during the 13th century B.C. Part of the inscription ended up being carved backward, which the artist tried to hide with a bit of filler and paint. But time and wear have made the mistake, well, unmistakable.

ILLUSTRATION: THOMAS FUCHS

Medieval scribes were notorious for botching their illuminated manuscripts, using all kinds of paint tricks to hide their errors. But for big mistakes—for example, when an entire line was dropped—the monks could get quite inventive. In a 13th-century Book of Hours at the Walters Museum in Baltimore, an English monk solved the problem of a missing sentence by writing it at the bottom of the page and drawing a ladder with a man on it, hauling the sentence up by a rope to where it was meant to be.

The phrase “the devil is in the details” may have been inspired by Titivillus, the medieval demon of typos. Monks were warned that Titivillus ensured that every scribal mistake was collected and logged, so that it could be held against the offender at Judgment Day.

The warning seems to have had only limited effect. The English poet Geoffrey Chaucer was so enraged by his copyist Adam Pinkhurst that he attacked him in verse, complaining that he had “to rub and scrape: and all is through thy negligence and rape.”

The move to print failed to solve the problem of typos. When Shakespeare’s Cymbeline was first committed to text, the name of the heroine was accidentally changed from Innogen to Imogen, which is how she is known today. Little typos could have big consequences, such as the so-called Wicked Bible of 1631, whose printers managed to leave out the “not” in the seventh commandment, thereby telling Christians that “thou shalt commit adultery.”

The rise of the newspaper deadline in the 19th century inevitably led to typos big and small, some of them as unfortunate as they were unlikely. In 1838, British readers of the Manchester Guardian were informed that “writers” rather than “rioters” had caused extensive property damage during a protest meeting in Yorkshire.

In the age of computers, a single typo can have catastrophic consequences. On July 22, 1962, NASA’s Mariner 1 probe to Venus exploded just 293 seconds after launching. The failure was traced to an input error: a single hyphen had inadvertently been left out of the code.

Historically Speaking: When Generals Run the State

Military leaders have been rulers since ancient times, but the U.S. has managed to keep them from becoming kings or dictators.

The Wall Street Journal

April 29, 2022

History has been kind to General Ulysses S. Grant, less so to President Grant. The hero of Appomattox, born 200 years ago this month, oversaw an administration beset by scandal. In his farewell address to Congress in 1876, Grant insisted lamely that his “failures have been errors of judgment, not of intent.”

Yet Grant’s presidency could just as well be remembered for confirming the strength of American democracy at a perilous time. Emerging from the trauma of the Civil War, Americans sent a former general to the White House without fear of precipitating a military dictatorship. As with the separation of church and state, civilian control of the military is one of democracy’s hard-won successes.

In ancient times, the earliest kings were generals by definition. The Sumerian word for leader was “Lugal,” meaning “Big Man.” Initially, a Lugal was a temporary leader of a city-state during wartime. But by the 24th century B.C., Lugal had become synonymous with governor. The title wasn’t enough for Sargon the Great, c. 2334-2279 B.C., who called himself “Sharrukin,” or “True King,” in celebration of his subjugation of all Sumer’s city-states. Sargon’s empire lasted for three more generations.

In subsequent ancient societies, military and political power intertwined. The Athenians elected their generals, who could also be political leaders, as was the case for Pericles. Sparta was the opposite: The top Spartan generals inherited their positions. The Greek philosopher Aristotle described the Spartan monarchy—shared by two kings from two royal families—as a “kind of unlimited and perpetual generalship,” subject to some civic oversight by a 30-member council of elders.

ILLUSTRATION: THOMAS FUCHS

By contrast, ancient Rome was first a traditional monarchy whose kings were expected to fight with their armies, then a republic that prohibited actively serving generals from bringing their armies back from newly conquered territories into Italy, and finally a militarized autocracy led by a succession of generals-cum-emperors.

In later periods, boundaries between civil and military leadership blurred in much of the world. At the most extreme end, Japan’s warlords seized power in 1192, establishing the Shogunate, essentially a military dictatorship, and reducing the emperor to a mere figurehead until the Meiji Restoration in 1868. Napoleon trod a well-worn route in his trajectory from general to first consul, to first consul for life and finally emperor.

After defeating the British, General George Washington might have gone on to govern the new American republic in the manner of Rome’s Julius Caesar or England’s Oliver Cromwell. Instead, Washington chose to govern as a civilian and step down at the end of two terms, ensuring the transition to a new administration without military intervention. Astonished that a man would cling to his ideals rather than to power, King George III declared that if Washington stayed true to his word, “he will be the greatest man in the world.”

The trust Americans have in their army is reflected in the tally of 12 former generals who have been U.S. presidents, from George Washington to Dwight D. Eisenhower. President Grant may not have fulfilled the hopes of the people, but he kept the promise of the republic.

Historically Speaking: The Game of Queens and Grandmasters

Chess has captivated minds for 1,500 years, surviving religious condemnation, Napoleonic exile and even the Russian Revolution

The Wall Street Journal

April 15, 2022

Fifty years ago, the American chess grandmaster Bobby Fischer played the reigning world champion Boris Spassky at the “Match of the Century” in Reykjavik, Iceland. The Cold War was at its height, and the Soviets had held the title since 1948. More was riding on the competition than just the prize money.

ILLUSTRATION: THOMAS FUCHS

The press portrayed the Fischer-Spassky match as a duel between the East and the West. But for the West to win, Fischer had to play, and the temperamental chess genius wouldn’t agree to the terms. He boycotted the opening ceremony on July 1, 1972, prompting then-National Security Adviser Henry Kissinger to call Fischer and tell him that it was his patriotic duty to go out there and play. Fischer relented—and won, using the Queen’s Gambit in game 9 (an opening made famous by the Netflix series about a fictional woman chess player).

The Fischer-Spassky match reignited global enthusiasm for a 1,500-year-old game. From its probable origins in India around the 6th century, the basic idea of chess spread rapidly across Asia, the Middle East and Europe. Religious authorities initially condemned the game; even so, the ability to play became an indispensable part of courtly culture.

Chess was a slow-moving game until the 1470s, when new rules were introduced that made it faster and more aggressive. The most important changes were greater mobility for the bishops and the transformation of the queen into the most powerful piece on the board. The instigator remains unknown, although the tradition seems to have started in Spain, inspired, perhaps, by Queen Isabella, who ruled jointly with King Ferdinand.

The game captivated some of the greatest minds of the Renaissance. Around 1500, the Italian mathematician Luca Pacioli, known as the Father of Accounting, analyzed more than 100 plays and strategies in “De ludo schaccorum” (On the Game of Chess). The hand of Leonardo da Vinci has been detected in some of the illustrations in the only known copy of the book.

Although called the “game of kings,” chess was equally popular with generals. But, as a frustrated Napoleon discovered, triumph on the battlefield was no guarantee of success on the board. Nevertheless, during his exile on St. Helena, Napoleon played so often that one of the more enterprising escape attempts by his supporters involved hidden plans inside an ivory chess set.

PHOTO: J. WALTER GREEN/ASSOCIATED PRESS

London’s Great Exhibition of 1851 inspired the British chess master Howard Staunton, who gave his name to the first standardized chess pieces, to organize the first international chess tournament. Travel delays meant that none of the great Russian players were able to participate, despite the country’s enthusiasm for the game. In a letter to his wife, Russia’s greatest poet Alexander Pushkin declared, “[Chess] is a must for any well-organized family.” It was one of the few bourgeois pastimes to survive the Revolution unscathed.

The Russians regained the world title after the 1972 Fischer-Spassky match. However, in the 1990s they faced a new challenger that wasn’t a country but a computer. Grandmaster Garry Kasparov easily defeated IBM’s Deep Blue in 1996, only to suffer a shock defeat in 1997. Mr. Kasparov even questioned whether the opposition had played fair. Six years later, he agreed to a showdown at the FIDE Man Versus Machine World Chess Championship, against the new and improved Deep Junior. The 2003 match was a draw, leaving chess the winner.

Historically Speaking: Humanity’s Long Quest to Bottle Energy

The first batteries produced bursts of power. Making them last has been the work of centuries.

The Wall Street Journal

April 1, 2022

Electric cars were once dismissed as a pipe dream. Now experts predict that by 2025, they will account for one-fifth of all new cars. Helping to drive this revolution is the once-humble battery. Today the lithium-ion battery powers everything from phones to planes. The next generation of advanced batteries may use sulfur, silicon or even seawater.

Humans have long dreamed of capturing the earth’s energy in portable vessels that can deliver continuous power. The two-thousand-year-old “Baghdad Battery,” which disappeared from the National Museum of Iraq in 2003, consisted of an iron rod surrounded by a copper cylinder inside a clay jar. When filled with an electrolytic solution such as vinegar or grape juice, the jar produced a tiny amount of energy. Whether this was by design or a coincidence no one can say.

ILLUSTRATION: THOMAS FUCHS

The ancient Greeks observed that rubbing fur against amber produced enough static electricity to move light objects. In the late 16th century, the English scientist William Gilbert experimented with compasses, concluding that the phenomenon discerned by the Greeks wasn’t the same as magnetism. He called this unknown force “electricus,” from the Greek “elektron,” meaning amber.

Creating little bursts of electricity was one thing; figuring out a way to store it was another. To this end, between 1744 and 1746 the Dutch scientist Pieter van Musschenbroek and the German Ewald Georg von Kleist separately developed what became known as the Leyden jar, essentially a metal-lined glass vessel containing water and a conducting wire. Six years later, Benjamin Franklin used one in his famous kite experiment to prove that lightning was a display of electric conduction. He realized that Leyden jars could be linked together to form what he termed a “battery” to create a bigger charge.

Leyden batteries contained just a single burst of energy, however. Scientists still sought to create a more continuous power source, but to do it they needed to understand how electricity was generated. In the late 18th century, the Italian scientist Luigi Galvani mistakenly believed that his electrical experiments on frogs’ legs revealed the existence of “animal electricity.”

His fellow Italian, Alessandro Volta, wasn’t convinced. To prove Galvani wrong, he demonstrated in 1800 that he could create a continuous current via his “voltaic pile,” consisting of copper and zinc discs interleaved with brine-soaked cloths. Volta’s experiments led to the invention of the first chemical battery, which worked on the same principle, using metals soaked in an acidic solution to generate an electric current. This technology made all kinds of little mechanisms possible, such as telegraphs and doorbells.

In 1859 the Frenchman Gaston Planté made a vast improvement to battery life with the first rechargeable lead-acid battery. Although it could power larger objects such as cars, the lead-acid battery was excessively heavy and prone to corrosion.

Thomas Edison believed he could corner the nascent car market if only he could build a battery using different technology. After repeated failures, in 1903 he announced with great fanfare his nickel-alkaline battery. It was the power cell of the future: light and rechargeable. By 1910, when Edison was finally able to begin mass production, Henry Ford had already introduced his cheap Model T car with its polluting, gas-guzzling internal combustion engine. No one wanted Edison’s more expensive version.

Timing is everything, as they say. Edison might have lost the first battle for the battery-powered car, but from the perspective of 2022, he may yet win the war.

Historically Speaking: Democracy Helped Seed National Parks

Green spaces and nature preserves have long existed, but the idea of protecting natural wonders for human enjoyment has American roots.

The Wall Street Journal

March 3, 2022

Yellowstone, the world’s oldest national park, turned 150 this month. The anniversary of its founding is a timely reminder that democracy isn’t just a political system but a way of life. The Transcendentalist writer Henry David Thoreau was one of the earliest Americans to link democratic values with national parks. Writing in 1858, he declared that having “renounced the king’s authority” over their land, Americans should use their hard-won freedom to create national preserves for “inspiration and our own true re-creation.”

There had been nature reserves, royal hunting grounds, pleasure gardens and parks long before Yellowstone, of course. The origins of green spaces can be traced to ancient Egypt’s temple gardens. Hyde Park, which King Charles I opened to Londoners in 1637, led the way for public parks. In the 3rd century B.C., King Devanampiya Tissa of Sri Lanka created the Mihintale nature reserve as a wildlife sanctuary, prefiguring by more than 2,000 years the likely first modern nature reserve, which the English naturalist Charles Waterton built on his estate in Yorkshire in the 1820s.

The Grand Canyon of the Yellowstone River, Yellowstone National Park, 1920. GAMMA-KEYSTONE/GETTY IMAGES

The 18th century saw a flowering of interest in man’s relationship with nature, and these ideas encouraged better management of the land. The English scientist Stephen Hales demonstrated the correlation between tree coverage and rainfall, leading a British MP named Soame Jenyns to convince Parliament to found the Tobago Main Ridge Forest Reserve in 1776. The protection of the Caribbean colony’s largest forest was a watershed moment in the history of conservation. Highly motivated individuals soon started conservation projects in multiple countries.

As stewards of their newly independent nation, Americans regarded their country’s natural wonders as places to be protected for the people rather than from them. (Niagara Falls, already marred by development when Alexis de Tocqueville visited in 1831, served as a cautionary example of legislative failure.) The first attempt to create a public nature reserve was at the state level: In 1864 President Abraham Lincoln signed the Yosemite Valley Grant Act, giving the land to California “upon the express conditions that the premises shall be held for public use, resort, and recreation.” But the initiative lacked any real oversight.

Many groups pushed for a federal system of national parks. Among them were the Transcendentalists, environmentalists and landscape painters such as George Catlin, Thomas Cole and Albert Bierstadt, but the ultimate credit belongs to the geologist Ferdinand Hayden, who surveyed Yellowstone in 1871. The mass acclaim following his expedition finally convinced Congress to turn Yellowstone into a national park.

Unfortunately, successive administrations failed to provide sufficient funds for its upkeep, and Yellowstone suffered years of illegal poaching and exploitation. In desperation, the federal government sent the U.S. Army in to take control of the park in 1886. The idea proved to be an inspired one. The military was such a conscientious custodian that its management style became the model for the newly created National Park Service in 1916.

The NPS currently oversees 63 National Parks. But the ethos hasn’t changed since the Yellowstone Act of 1872 set aside 2 million pristine acres “for the benefit and enjoyment of the people.” These words are now engraved above the north entrance to the park, an advertisement, as the novelist and environmentalist Wallace Stegner once wrote, that national parks are “absolutely American, absolutely democratic.”

Historically Speaking: Anorexia’s Ancient Roots And Present Toll

The deadly affliction, once called self-starvation, has become much more common during the confinement of the pandemic.

The Wall Street Journal

February 18, 2022

Two years ago, when countries suspended the routines of daily life in an attempt to halt the spread of Covid-19, the mental health of children took a plunge. One worrying piece of evidence for this was an extraordinary spike in hospitalizations for anorexia and other eating disorders among adolescents, especially girls between the ages of 12 and 18, and not just in the U.S. but around the world. U.S. hospitalizations for eating disorders doubled between March and May 2020. England’s National Health Service recorded a 46% increase in eating-disorder referrals by 2021 compared with 2019. Perth Children’s Hospital in Australia saw a 104% increase in hospitalizations, and in Canada the rate tripled.

Anorexia nervosa has a higher death rate than any other mental illness. According to the National Eating Disorders Association, 75% of its sufferers are female. And while the affliction might seem relatively new, it has ancient antecedents.

As early as the sixth century B.C., adherents of Jainism in India regarded “santhara,” fasting to death, as a purifying religious ritual, particularly for men. Emperor Chandragupta, founder of the Mauryan dynasty, died in this way in 297 B.C. St. Jerome, who lived in the fourth and fifth centuries A.D., portrayed extreme asceticism as an expression of Christian piety. In 384, one of his disciples, a young Roman woman named Blaesilla, died of starvation. Perhaps because she fits the contemporary stereotype of the middle-class, female anorexic, Blaesilla rather than Chandragupta is commonly cited as the first known case.

The label given to spiritual and ascetic self-starvation is anorexia mirabilis, or “holy anorexia,” to differentiate it from the modern diagnosis of anorexia nervosa. There were two major outbreaks in history. The first began around 1300 and was concentrated among nuns and deeply religious women, some of whom were later elevated to sainthood. The second took off during the 19th century. So-called “fasting girls” or “miraculous maids” in Europe and America won acclaim for appearing to survive without food. Some were exposed as fakes; others, tragically, were allowed to waste away.

But, confusingly, there are other historical examples of anorexic-like behavior that didn’t involve religion or women. The first medical description of anorexia, written by Dr. Richard Morton in 1689, concerned two patients—an adolescent boy and a young woman—who simply wouldn’t eat. Unable to find a physical cause, Morton called the condition “nervous consumption.”

A subject under study in the Minnesota Starvation Experiment, 1945. WALLACE KIRKLAND/THE LIFE PICTURE COLLECTION/SHUTTERSTOCK

Almost two centuries passed before French and English doctors accepted Morton’s suspicion that the malady had a psychological component. In 1873, Queen Victoria’s physician, Sir William Gull, coined the term “Anorexia Nervosa.”

Naming the disease was a huge step forward. But its treatment was guided by an ever-changing understanding of anorexia’s causes, which has run the gamut from the biological to the psychosexual, from bad parenting to societal misogyny.

The first breakthrough in anorexia treatment, however, came from an experiment involving men. The Minnesota Starvation Experiment, a World War II-era study on how to treat starving prisoners, found that the 36 male volunteers exhibited many of the same behaviors as anorexics, including food obsessions, excessive chewing, bingeing and purging. The study showed that the malnourished brain reacts in predictable ways regardless of race, class or gender.

Recent research now suggests that a genetic predisposition could account for as much as 60% of the risk of developing the disease. If this knowledge leads to new specialized treatments, it will do so at a desperate time: At the start of the year, the Lancet medical journal called on governments to take action before mass anorexia cases become mass deaths. The lockdown is over. Now save the children.

A shorter version appeared in The Wall Street Journal