Kylo Ren, Meet Huck Finn: A History of Sequels and Their Heroes

The pedigree of sequels is as old as storytelling itself

ILLUSTRATION: RUTH GWILY

“Star Wars: The Last Jedi” may end up being the most successful movie sequel in the biggest sequel-driven franchise in the history of entertainment. That’s saying something, given Hollywood’s obsession with sequels, prequels, reboots and remakes. Although this year’s “Guardians of the Galaxy 2” was arguably better than the first, plenty of people—from critics to stand-up comedians—have wondered why in the world we needed a 29th “Godzilla,” an 11th “Pink Panther” or “The Godfather Part III.”

But sequels aren’t simply about chasing the money. They have a distinguished pedigree, as old as storytelling itself. Homer gets credit for popularizing the trend in the eighth century B.C., when he followed up “The Iliad” with “The Odyssey,” in which one of the relatively minor characters in the original story triumphs over sexy immortals, scary monsters and evil suitors of his faithful wife. Presumably with an eye to drawing in fans of the “Iliad,” Homer was sure to throw in a flashback about the Trojan horse.

Homer’s successors knew a good thing when they saw it. In 458 B.C., Aeschylus, the first of the great Greek dramatists, wrote his own sequel to “The Iliad,” a trilogy called “The Oresteia” that follows the often-bloody fates of the royal house of Agamemnon. More than 400 years later, the Latin poet Virgil penned a Roman-themed “Iliad” sequel, “The Aeneid,” which takes up the story from the point of view of the defeated Trojans.

By the Middle Ages, sequel activity had shifted focus. With “The Iliad” (like most of ancient Greek culture) virtually forgotten in Europe, tales about doomed lovers and Christian heroes, such as the legends of King Arthur and the Knights of the Round Table, took the place of Homeric epics in the public imagination. A whole industry of Camelot sequels grew up around major and even minor characters of the story.

The advent of printing in the 15th century made it easier to capitalize on a popular work. Roughly two decades after Shakespeare wrote “The Taming of the Shrew” around 1591, John Fletcher penned “The Woman’s Prize, or The Tamer Tam’d,” which reversed genders—with the women taming men. Shakespeare himself wrote sequels to some of his works, carrying his fat, irrepressible knight Falstaff through three plays.

Eight hundred miles away in Madrid, Miguel de Cervantes Saavedra countered an unauthorized sequel to his “Don Quixote” with his own sequel. In it, many of the characters have read Cervantes’ first part, Don Quixote himself is outraged by the plot of the fake sequel and Cervantes frequently ridicules the sequel writer.

Other authors had less success defending their creations. Jonathan Swift, author of “Gulliver’s Travels” (1726), was helpless when his own publisher immediately cashed in on the book’s success with several anonymous sequels, including “Memoirs of the Court of Lilliput.”

European audiences were not the only ones to yearn for sequels to favorite books. The oldest of China’s four classical novels, “Water Margin,” concerning a group of heroic outlaws, was completed sometime before the 17th century and spawned sequels that became classics in themselves. In late 19th-century America, Mark Twain struck gold twice with “The Adventures of Tom Sawyer,” followed by “The Adventures of Huckleberry Finn.” But few people have heard of the second or third sequels: “Tom Sawyer Abroad” and “Tom Sawyer, Detective.”

 For its part, Hollywood knows that a sequel to even the most beloved story is no sure thing. Although “The Last Jedi” is set to break records, this summer’s “King Arthur: Legend of the Sword” went down as a major flop. Sometimes it’s good to know when to stop.

A Short but Tasty History of Pumpkin Pie

An odyssey from colonial staple to political emblem to holiday standby

ILLUSTRATION: THOMAS FUCHS

Pumpkin pie may not compete with its apple-filled rival for most of the year, but on Thanksgiving, it’s the iconic dessert, despite often resembling a giant helping of baby food. As a slice of Americana, the pie has a history as complicated as the country itself.

The pumpkin’s ancestors were ancient gourds that left Asia some 60 million years ago. Known botanically as Cucurbitaceae, the plant family slowly spread to the African, Australian and American continents, laying down roots (and vines) to become such familiar garden goodies as the melon, the cucumber and the squash.

Scientists have traced Cucurbita pepo, the founding fruit of pumpkin pie, to seeds 8,000 to 10,000 years old in the Guilá Naquitz Cave in Mexico. The site is believed to have the earliest evidence of domesticated crops in North America. Though these early Mexican varieties were smaller and more bitter than the pumpkins we know, early Americans ate or otherwise used almost every part of them. By the time Christopher Columbus reached the New World in 1492, pumpkins and squashes had spread north to Canada.

In 1796, Amelia Simmons of Connecticut published “American Cookery,” believed to be the first American cookbook to rely on native-grown ingredients. Despite their artery-clogging richness, the ingredients for her two “pompkin” dessert recipes—stewed pumpkin, eggs, sugar, cream, spices and dough—wouldn’t be out of place today.

Then came pumpkins’ biggest transformation: into a sort of political emblem. According to Cindy Ott, author of “Pumpkin: The Curious History of an American Icon,” pumpkin pie became a symbol of the cultural war between North and South. For Northerners, particularly abolitionists, the virtually self-growing pumpkin was the antithesis of the slave-grown plantation crop. Antislavery novelists celebrated pumpkin pie, and the writer and editor Sarah Josepha Hale (1788-1879), who successfully campaigned to establish Thanksgiving, described the dish as “indispensable” for “a good and true Yankee” version of the holiday. In the South, “cartoons and illustrations…associated blacks with pumpkins as a form of derision,” Ms. Ott told the media website Mic in 2015.

Today, pumpkin pie shares the holiday stage with pumpkin-spice lattes and other flavored concoctions—a craze that has now spread as far as China. Not bad for a humble gourd with global ambitions.

A History of the Unloved Electoral College

Opponents have ranged from John Adams to Richard Nixon. Why has the system survived?

PHOTO: THOMAS FUCHS

The 2016 election results caused plenty of bitterness—not the least of which had to do with the Electoral College. Donald Trump won the presidency a year ago this week but lost the popular vote—something that has happened a handful of times in the republic’s history and twice in the past two decades. In a December press conference, President Barack Obama declared the system to be past its sell-by date: “It’s a carry-over from an earlier vision of how our federal government was going to work.”

What were the Founding Fathers thinking? At the 1787 Constitutional Convention, they created a unique system for choosing the president. Each state got a number of electors based on the total of its U.S. senators (two) and U.S. representatives (as set by census). Each state legislature could decide the method of picking electors, but if the electors’ vote was inconclusive, the choice would be sent to the House of Representatives. “The original idea,” wrote Federal Election Commission official William C. Kimberling in 1992, “was for the most knowledgeable and informed individuals from each State to select the president based solely on merit and without regard to State of origin or political party.”

The system didn’t last long without needing repairs, precipitated by the crisis of the 1800 election. Electors could vote for two names for president, with the runner-up becoming vice president. With the Federalist Party’s John Adams defeated, the contest came down to the candidates of the Democratic-Republican Party. But because of a procedural error, Thomas Jefferson tied with his running mate, Aaron Burr. Awkwardly, the tiebreaking vote went to the House of Representatives, which the Federalists still controlled. It took 36 ballots for Jefferson to win his majority.

It’s not surprising that four years later the 12th Amendment was ratified. Among other changes, it separated the vote for president and vice president into two processes.

The next change to the electoral college, as it came to be known, happened without amending the Constitution. One by one, the states began making their presidential picks by popular vote, which the electors were then supposed to echo. The move led to the state-by-state winner-take-all system that all but Maine and Nebraska practice today.

This was not what the Founding Fathers had intended. In the 1820s, the aging James Madison suggested various ideas for reform, such as having each congressional district vote for an elector, or even having each member of the electoral college offer two choices for president (neither would become vice president). Congress briefly considered abolishing the college completely, with President Andrew Jackson declaring in 1829 that the more “agents” there were to do the will of the people, the more likely it was that their will would be frustrated.

But nothing more happened.

A serious attempt at abolition died in the Senate in 1934. In 1969, after segregationist George Wallace got 46 electoral votes and raised the possibility that neither major-party candidate would win a majority of electors, President Richard Nixon tried to abolish the electoral system—with the aid of his defeated rival, Hubert Humphrey. But Southern and small-state senators stopped the plan with a filibuster.

Why has reform failed so often? As many have pointed out, the electoral college was an attempt to balance the power of more populous states with that of more rural ones, to balance the needs of the nation with those of the states. Many have called the solution imperfect, but perhaps it’s a good match for our remarkable but imperfect democracy.

The Power of Pamphlets: A Brief History

As the Reformation passes a milestone, a look at a key weapon of change

ILLUSTRATION: THOMAS FUCHS

The Reformation began on Oct. 31, 1517, when Martin Luther, as legend has it, nailed his “95 Theses” to a church door in Wittenberg, Germany. Whatever he actually did—he may have just attached the papers to the door or delivered them to clerical authorities—Luther was protesting the Catholic Church’s sale of “indulgences,” which promised sinners at least partial absolution. The protest immediately went viral, to use a modern term, thanks to the new “social media” of the day—the printed pamphlet.

The development of the printing press around 1440 had set the stage: In the famous words of the German historian Bernd Moeller, “Without printing, no Reformation.” But the pamphlet deserves particular recognition. Unlike books, pamphlets were perfect for the mass market: easy to print and therefore cheap to buy.

By the mid-16th century, the authorities in France, Germany and England were fighting a rear-guard action to ban pamphlets. Despite various edicts in 1523, ’53, ’66 and ’89, the pamphlet flourished—and gained some highly placed authors. Although she professed disdain for the medium, Queen Elizabeth I contributed speeches to a 1586 pamphlet that justified her decision to execute Mary, Queen of Scots. Two years later, the Spanish printed a slew of propaganda pamphlets that tried to turn King Philip II’s failed attempt to invade England into a qualified success.

By the 17th century, virulent “pamphlet wars” accompanied every major religious and political controversy in Europe. By then, pamphleteers needed an exceptionally strong voice to be heard above the din—something even harder to achieve once newspapers and periodicals joined the battle for readers as the century matured.

What is a pamphlet, anyway? One popular source says 80 pages; Unesco puts it at five to 48 pages. Shortness is a pamphlet’s strength. Though the work did little to ease Ireland’s poverty, the satirist Jonathan Swift opened English eyes to the problem with his 3,500-word mock pamphlet of 1729, “A Modest Proposal,” which argued that the best way to alleviate hunger was for the Irish to rear their children as food.

Half a century later, Thomas Paine took less than 50 pages to inspire the American Revolution with his “Common Sense” of 1776. A guillotine killed Marie Antoinette in 1793, but often-anonymous pamphlets had assassinated her character first in a campaign that portrayed her as a sex-crazed monster.

Pamphlets could also save reputations—such as that of Col. Alfred Dreyfus, the French-Jewish army officer falsely convicted in 1894 of spying for Germany. After realizing that Dreyfus was a victim of anti-Semitism, the writer Émile Zola published in 1898, first in a newspaper and then as a pamphlet, a 4,000-word open letter, “J’accuse…!” which blamed the French establishment for a vast coverup. His cry, “Truth is on the march, and nothing will stop it,” was ultimately proved right; Dreyfus won a full exoneration in 1906.

What Zola achieved for religious equality, Martin Luther King Jr. did for the civil-rights movement with his “Letter from Birmingham Jail,” written after his arrest for civil disobedience. Eventually published in many forms, including a pamphlet, the 1963 letter of about 7,000 words contains the famous line, “Injustice anywhere is a threat to justice everywhere.” The words crystallized the importance of the struggle and made tangible King’s campaign of nonviolent protest.

Not everyone has lost their belief in pamphlet power. Today, the best-selling “On Tyranny: Twenty Lessons from the Twentieth Century” clocks in at about 130 pages, but the author, Yale history professor Timothy Snyder, said he’s comfortable with calling it a long political pamphlet.

A Brief History of Protest in Sports

From angry gladiators to Suffragette sabotage

ILLUSTRATION: THOMAS FUCHS

Sports and protest often go together: As soon as someone makes a call, somebody else is disputing it. But in recent weeks, the really big clashes have happened off the playing fields, as President Donald Trump and others criticized football players kneeling during the national anthem. Such mixing of sports, politics and protest has ancient roots—on the part of both spectators and players.

An early protest by a player comes down to us in “Lives of the Twelve Caesars” by the Roman historian Suetonius (69-130 A.D.). An unnamed gladiator once refused to fight in front of the Emperor Caligula. Seeing that he would die anyway, the gladiator then grabbed his trident and killed his would-be victors.

But in the ancient world, spectators, not players, were mostly the ones to express their feelings. At Rome’s Circus Maximus in 190 A.D., a young woman followed by a group of children rushed forward during the races and accused an official of hoarding grain. A crowd gathered, threatened the home of Emperor Commodus and succeeded in getting him to fire the official.

Another mass sporting protest wasn’t so civil. In sixth-century Constantinople—the ancestor of the city now known as Istanbul—tensions reached a breaking point between the Blues and Greens, political factions that took their names from colors worn by charioteers. When one side lost at the city’s Hippodrome in 532, a crowd started a mass insurrection known as the Nika Riot. Tens of thousands died in a city whose population was about half a million, and Emperor Justinian never again allowed chariot racing at the Hippodrome.

Perhaps with these rebellions in mind, rulers of the Middle Ages kept sports largely aristocratic, with pageantry in and peasants out. Sometimes, though, a game could be a form of protest. During the heyday of England’s Puritan government in the mid-17th century, some towns rebelled by staging soccer games, which were anathema to Puritans.

Sports regained their full status as mass spectator events at the end of the 19th century. At the 1906 games in Athens, 10 years after the first modern Olympics, Peter O’Connor, an Irish track-and-field athlete, protested British rule by refusing to accept his silver medal under the British flag. Instead, O’Connor scaled the flagpole and attached an Irish one.

Back home, British sports lovers faced a challenge when the Suffragettes began sabotaging men-only sporting activities. This culminated in a tragedy: During the 1913 Epsom Races outside London, protester Emily Davison ran onto the course, reached for the bridle of King George V’s horse and was trampled to death.

Racism fueled one of the most famous Olympic protests, at the Mexico City games in 1968. American sprinter Tommie Smith had won gold in the 200 meters; his teammate John Carlos had taken the bronze. Wearing no shoes, to symbolize black poverty, the two men raised their fists in a black-power salute.

In sports, though, there are many winning plays, and that goes for ways to protest inequity as well. In 1973, champion tennis player Billie Jean King and some other players were unhappy about the vastly unequal prize money between men and women. Ignored and furious, the women left the circuit and started their own organizing body, the Women’s Tennis Association. The net gains are history.

Amanda Foreman: public schools shun classic novels – The Sunday Times

A bestselling biographer fears Austen and Dickens have been forsaken to boost results

Photo by DAN CALLISTER

Top public schools including Eton and Marlborough have been accused of “shutting children out of their literary heritage” by failing to teach classic novels.

The academic and writer Amanda Foreman is campaigning to return classic novels by authors such as Jane Austen, Charles Dickens and George Eliot to the curriculum of some of Britain’s most famous schools.

She was spurred into action after being “horrified” to discover that her 16-year-old daughter “had not read a single 18th or 19th-century novel” at her private school in England.

Foreman, who lives in America, has five children — all educated privately in Britain. She said she was alarmed and angered to discover that it had become common for a child to go through school without having read a single classic novel.

“Kids are leaving school shut out from their literary heritage. It is so cruel,” she said.

Foreman spoke to friends who had children at private schools and found that their children, too, were being taught easier-to-read modern novels, up to and including GCSE level, such as Michael Frayn’s Spies, Louis Sachar’s Holes and Steinbeck’s Of Mice and Men.

By comparison, state-educated children study at least one 19th-century novel and a Shakespeare play following the curriculum overhaul by Michael Gove when he was education secretary.

The iGCSE, however, which is popular in private schools, also includes the option to study only more modern books.

Foreman, whose bestselling biography Georgiana, Duchess of Devonshire, was made into a film starring Keira Knightley, said children had to be given “remedial” reading lists by universities in both the UK and America.

By contrast, children at schools such as Friends Seminary, a private school in New York, were getting lots of “Chaucer, Beowulf, Shakespeare, Jane Austen and a whole term of poetry”.

She added: “In America they are expected to have read more classic books such as Mark Twain and to have studied them in class, including 19th-century texts.

“When I speak to professors here in New York they talk about how surprised they are how the English students they always regarded as top drawer are having to go into remedial classes to read.”

Frederic Smoler, a professor of literature and history at Sarah Lawrence College, New York, said he had encountered British students who had “extraordinary” gaps in their knowledge.

“My friends who went to English public schools in the 1970s and 1980s had a wonderful education. I am not sure that is true now,” he said.

Foreman said she feared that leading private schools were choosing easier novels from exam syllabuses to improve results. Teachers had also told her that they were trying to encourage a generation reared on smartphones and iPads who find works by writers such as Austen and Dickens too “difficult”.

“They say the children are digital natives, not natives of the English language,” she said.

Parents expected public schools to deliver a classical education, Foreman said — the kind she herself enjoyed — in return for fees of £30,000 a year. Teachers had to persevere with difficult books, she said, otherwise children would know the classics only by watching films.

John Sutherland, emeritus Lord Northcliffe professor of English literature at University College London, said: “I find it ironic as a card-carrying Victorianist that Gutenberg.org has given us a sesame’s cave of thousands of 19th-century works, handsomely eprinted, free of charge.

“And now, apparently, certain high-performing schools decide they are surplus to educational requirements. Shame on them.”

Some head teachers said part of the problem was that private schools had opted for iGCSE exams, rather than the more difficult GCSE in English literature brought in by Gove, which was first set this summer.

Eton said pupils take the iGCSE and some boys do study a classic, but others study books such as Spies, rather than Austen’s Mansfield Park. A spokesman said: “Eton believes in offering a balanced range of modern and classical texts for its pupils to study.”

Marlborough said pupils must be given “a broad literary understanding from across the genres, as well as from different periods and cultures”.

It added that English literature was compulsory and said that the school offered a “guided reading club”.

A Brief History of Driving on the Left

Over the centuries, plenty of empires and nations have driven on the left side of the road

ILLUSTRATION: THOMAS FUCHS

Fifty years ago this month, on Sept. 3, 1967, the world turned upside down in Sweden. Or rather it went from left to right: On that day, the Swedes abandoned 200 years of left-hand traffic, or LHT, to switch over to RHT. The event was commemorated as Högertrafikomläggningen (the right-hand traffic diversion) or H-Day for short.

Bahrain, Finland and Iceland soon followed Sweden’s example. Pakistan considered switching to RHT in the 1960s but decided it would be too difficult, among other things, to change the habits of the country’s numerous camel-cart drivers. Even the U.K. briefly toyed with the idea only to drop it because of cost and rising nationalist affection for driving on the left.

By the early 1970s, more than 160 countries and territories drove on the right, leaving just the U.K., many of its former colonies and a few other holdouts on the left. Such is the global dominance of RHT that it might seem that humans have always felt more comfortable on the right side of life. After all, studies suggest that some 85% of people are right-handed.

But there’s nothing natural about driving on the right. Evidence from cart tracks on Roman roads in Britain suggests that traffic flowed on the left. This makes sense, since wagon drivers, like charioteers, held their whips in their right hands, so the whip crossed diagonally to the left, making oncoming drivers less likely to be struck.

LHT continued long after the Roman Empire disappeared. Medieval knights carried their swords on the right and their shields on the left, so by keeping to the left side of the road, their sword arm was free to strike at any foe they might encounter as they traveled.

In 1300, Pope Boniface VIII instituted the first holy Jubilee, a year-long celebration of Catholic faith that prompted mass pilgrimages to Rome. The ensuing chaos on the roads forced Boniface to issue a decree ordering pilgrims to pick a side and stay left.

Five centuries later, the British government had to issue a similar directive, the General Highways Act of 1773, after the rise of mass horse-ownership led to increasing anarchy on the streets. A popular ditty ran: “As you’re driving your carriage along; / If you go to the left, you are sure to go right / If you go to the right, you are wrong.”

Driving on the left didn’t face a real challenge until Napoleon decided that all countries in the French Empire must go right—emulating France, which had switched during the Revolution. (It was considered aristocratic to hog the left side of the road.) Thus the Napoleonic Wars were a battle of lefts and rights, with Napoleon’s foes—Britain, Portugal, the Austrian Empire—staying left.

The U.S. started to drift toward driving on the right after winning its independence, probably to make an anti-British point. Yet the person most responsible for RHT in America was Henry Ford. Until then, a car’s steering wheel and controls might sit on the right or even in the middle. In 1908, Ford announced a new model that had the steering wheel on the left, so that passengers would always exit curbside—“especially,” the publicity materials claimed, “if there is a lady to be considered.”

Driving on the right received a grim boost from Hitler, whose megalomania, like Napoleon’s, was such that all conquered countries had to emulate German RHT.

When not being propelled by imperialism or capitalism, does RHT always win over LHT? Apparently not. In 1978, Japan went fully LHT, as did Samoa in 2009. As with so much in life, humans are unpredictable, stubborn creatures who, given the chance, will go in any direction they please.

The Psychology and History of Snipers – Wall Street Journal

Sharpshooters helped turn the course of World War II 75 years ago at the Battle of Stalingrad

PHOTO: THOMAS FUCHS

The Battle of Stalingrad during World War II cost more than a million lives, making it one of the bloodiest battles in human history. The death toll began to mount in earnest 75 years ago this week, after the Germans punched through Soviet defenses to reach the outskirts of the city. Once inside, however, they couldn’t get out.

With both sides dug in for the winter, the Russians unleashed one of their deadliest weapons: trained snipers. By the end of the war, Russia had trained more than 400,000 snipers, including thousands of women. At Stalingrad, they had a devastating impact on German morale and fighting capability.

Snipers have always been feared by their enemies. Unlike conventional soldiers, they are trained not for brawn and obedience but for skill and independence. They work alone or in pairs and often get to know their targets as they stalk them. In a 2012 article for BBC Magazine, the Israeli anthropologist Neta Bar, who has studied snipers, said, “It’s killing that is very distant but also very personal. I would even say intimate.”

The first recorded use of snipers comes from the army of ancient Rome. Each legion carried into battle about 60 “scorpios”—crude-looking crossbows, almost like portable catapults, that could deliver a precision shot at more than 300 feet. The effect was terrifying, as the rebellious Gauls discovered in the first century B.C. when trying to defend themselves against Julius Caesar.

After the fall of Rome, Western attitudes toward the sniper turned negative. Crossbows delivered long-distance, devastating wounds to a victim who had no chance of defending himself. The aristocracy also disliked the weapon, since it gave peasants the same kill power as a knight. In 1139, the Church condemned the use of crossbows against Christian enemies, though they could still be used against infidels.

No such inhibitions existed in China, whose crossbow marksmen were probably the best snipers in the world during the Middle Ages. Crossbowmen were considered the army’s elite and trained accordingly.

Crossbows eventually returned to the field in the West, but it was the advent of the rifle in the 16th century that made officials see the true value of snipers. In the 1770s, British soldiers in India coined the term “sniper” for a marksman skilled enough to hit a small, elusive bird such as the snipe.

Unfortunately for Britain, its enemies could train shooters to achieve the same level of proficiency. During the Napoleonic Wars in 1805, a French marine sniper on board the Redoubtable shot and killed Lord Nelson, just as the British achieved their crushing victory over the French fleet.

Those who underestimated the skill, determination and luck of snipers did so at their peril. At the Battle of Spotsylvania Court House in 1864, the Union General John Sedgwick chastised his men for ducking, insisting: “They couldn’t hit an elephant at this distance.” A few minutes later a Confederate sniper shot him dead.

In our own era, the most famous sniper was Chris Kyle, who among other things saved a group of Marines in 2003 from being blown up in Iraq. Killed in Texas in 2013 by a disturbed Marine vet, Kyle became famous for his skill and heroism as the subject of the phenomenally popular 2014 film “American Sniper.”

The snipers of Stalingrad, by contrast, are mostly just names to history, if their names are known at all. The final seconds of many a Nazi soldier were shared with an enemy he neither saw nor heard. But the battle was a catastrophe for Hitler, and it helped to turn the course of the war.

‘WHAT BOOK would historian Amanda Foreman take to a desert island?’ – The Daily Mail

Historian Amanda Foreman shares that she is currently reading The Dry by Jane Harper

. . . are you reading now?

The Dry, by Jane Harper. The hero, Aaron Falk, is a Melbourne-based federal agent, whose life has settled into a narrow furrow of work and more work.
However, he harbours a dark past that comes back to haunt him after his childhood friend inexplicably kills himself and his family.
Falk reluctantly returns to his home town and finds a seething community that’s suffering from more than just a prolonged drought. A complete page-turner.

. . . would you take to a desert island?

J. R. R. Tolkien’s The Lord Of The Rings. One of the reasons people love LOTR so much is that it’s both familiar and strange at the same time.

Tolkien was an expert on Anglo-Saxon and Middle English and, when he wasn’t writing about elves and hobbits, he was analysing Beowulf and other epics. He poured all his scholarship into LOTR and then disguised it through layers of mythology and imagination.