What Is Humanity’s Greatest Invention?

Yuval Noah Harari, author of Sapiens: A Brief History of Humankind, offers an answer:

Humanity’s greatest invention is religion, which does not necessarily mean belief in gods. Rather, religion is any system of norms and values that is founded on a belief in superhuman laws. Some religions, such as Islam, Christianity and Hinduism, believe that these superhuman laws were created by the gods. Other religions, such as Buddhism, Communism and Nazism, believe that these superhuman laws are natural laws. Thus Buddhists believe in the natural laws of karma, Nazis argued that their ideology reflected the laws of natural selection, and Communists believe that they follow the natural laws of economics.

No matter whether they believe in divine laws or in natural laws, all religions have exactly the same function: to give stability to human institutions. Without some kind of religion, it is simply impossible to maintain social order. During the modern era religions that believe in divine laws went into eclipse. But religions that believe in natural laws became ever more powerful. In the future, they are likely to become more powerful yet. Silicon Valley, for example, is today a hot-house of new techno-religions, which promise us paradise on earth with the help of new technologies. From a religious perspective, Silicon Valley is the most interesting place in the world.



I vs AI

A new year, a new annual Edge question: “What do you think about machines that think?” James J. O’Donnell, a classical scholar, contends that “nobody would ever ask a machine what it thinks about machines that think”. He breaks down the question:

1. “Thinking” is a word we apply with no discipline whatsoever to a huge variety of reported behaviors. “I think I’ll go to the store” and “I think it’s raining” and “I think therefore I am” and “I think the Yankees will win the World Series” and “I think I am Napoleon” and “I think he said he would be here, but I’m not sure,” all use the same word to mean entirely different things. Which of them might a machine do someday?  I think that’s an important question.

2. Could a machine get confused? Experience cognitive dissonance? Dream? Wonder? Forget the name of that guy over there and at the same time know that it really knows the answer and if it just thinks about something else for a while might remember? Lose track of time? Decide to get a puppy? Have low self-esteem? Have suicidal thoughts? Get bored? Worry? Pray? I think not.

Daniel Dennett tries to pinpoint the “real danger” of artificial intelligence:

What’s wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don’t delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.

(1) It is very, very hard to imagine (and keep in mind) the limitations of entities that can be such valued assistants, and the human tendency is always to over-endow them with understanding—as we have known since Joe Weizenbaum’s notorious Eliza program of the early 1970s. This is a huge risk, since we will always be tempted to ask more of them than they were designed to accomplish, and to trust the results when we shouldn’t.

(2) Use it or lose it. As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is) but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That’s an event we should bend our efforts to averting now, because it could happen any day.

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

Eric J. Topol, a doctor, sees the benefits for medicine:

Almost any medical condition with an acute episode—like an asthma attack, seizure, autoimmune attack, stroke, heart attack—will be potentially predictable in the future with artificial intelligence and the Internet of all medical things. There’s already a wristband that can predict when a seizure is imminent, and that can be seen as a rudimentary first step. In the not so distant future, you’ll be getting a text message or voice notification that tells you precisely what you need to prevent a serious medical problem. When that time comes, those who fear AI may suddenly embrace it. When we can put together big data for an individual with the requisite contextual computing and analytics, we’ve got a recipe for machine-mediated medical wisdom.

Donald D. Hoffman cautions that “not alarm, but prudence is effective,” adding that he suspects “AIs will be a source of awe, insight, inspiration, and yes, profit, for years to come”. Psychology professor Nicholas Humphrey is also optimistic:

Psychopaths are sometimes credited with having not too little but too great an understanding of human psychology. Is this something we should fear with machines?

I don’t think so. This situation is actually not a new one. For thousands of years humans have been selecting and programming a particular species of biological machine to act as servants, companions and helpmeets to ourselves. I’m talking of the domestic dog. The remarkable result has been that modern dogs have in fact acquired an exceptional and considerable ability to mind-read—both the minds of other dogs and humans—superior to that of any animal other than humans themselves. This has evidently evolved as a mutually beneficial relationship, not a competition, even if it’s one in which we have retained the upper hand. If and when it gets to the point where machines are as good at reading human minds as dogs now are, we shall of course have to watch out in case they get too dominant and manipulative, perhaps even too playful—just as we already have to do with man’s best friend. But I see no reason to doubt we’ll remain in control.

Paul Saffo, meanwhile, is uncertain:

The rapid advance of AIs also is changing our understanding of what constitutes intelligence. Our interactions with narrow AIs will cause us to realize that intelligence is a continuum and not a threshold. Earlier this decade Japanese researchers demonstrated that slime mold could thread a maze to reach a tasty bit of food. Last year a scientist in Illinois demonstrated that under just the right conditions, a drop of oil could negotiate a maze in an astonishingly lifelike way to reach a bit of acidic gel. As AIs insinuate themselves ever deeper into our lives, we will recognize that modest digital entities as well as most of the natural world carry the spark of sentience. From there it is just a small step to speculate about what trees or rocks—or AIs—think.

In the end, the biggest question is not whether AI super-intelligences will eventually appear. Rather the question is what will be the place of humans in a world occupied by an exponentially growing population of autonomous machines. Bots on the Web already outnumber human users—the same will soon be true in the physical world as well.

Lord Dunsany once cautioned, “If we change too much, we may no longer fit into the scheme of things.”

Psychologist Susan Blackmore zooms out:

Digital information is evolving all around us, thriving on billions of phones, tablets, computers, servers, and tiny chips in fridges, cars and clothes, passing around the globe, interpenetrating our cities, our homes and even our bodies. And we keep on willingly feeding it. More phones are made every day than babies are born, 100 hours of video are uploaded to the Internet every minute, billions of photos are uploaded to the expanding cloud. Clever programmers write ever cleverer software, including programs that write other programs that no human can understand or track. Out there, taking their own evolutionary pathways and growing all the time, are the new thinking machines.

Are we going to control these machines? Can we insist that they are motivated to look after us? No. Even if we can see what is happening, we want what they give us far too much not to swap it for our independence.

So what do I think about machines that think? I think that from being a little independent thinking machine I am becoming a tiny part inside a far vaster thinking machine.

How To Tell When Time Is Up

Chrissie Giles investigates how medical professionals deliver diagnoses of terminal illness:

Individual words matter. Professor Elena Semino and colleagues at Lancaster University have been conducting a study of how certain kinds of language are used in communication about the end of life. They’ve compiled a corpus of over 1.5 million words, collected from interviews and online forums where patients, carers or healthcare professionals meet to talk with their peers.

Violence or war metaphors (“battling my disease”, “keep up the fight!”) can be disempowering or disheartening for people with cancer, potentially demanding constant effort or implying that a turn for the worse is a personal failure. But in other contexts, they can empower people, helping someone express determination or solidarity, or bringing a sense of meaning, pride and identity. “You don’t need to be a linguist to realise what metaphors a patient’s using,” says Semino. Doctors should ask: are those metaphors working for the patient at that point? Are they helpful, giving them a sense of meaning, identity, purpose? Or are they increasing anxiety?

Dissent By Design

Cassie Packard reviews Disobedient Objects, an exhibition at London’s Victoria and Albert Museum devoted to examining “the powerful role of objects in movements for social change”:

The disobedient objects in the exhibition range from the more tactically frivolous, like the [inflatable] cobblestones [from a 2012 May Day demonstration in Berlin-Kreuzberg], to the blindingly necessary, like DIY tear gas masks. They are largely the products of various left-wing grassroots social movements, though a few objects made in service of paramilitaries — but questionably dressed in the show’s glamorizing rhetoric of left-wing activism — make it into the mix. On the whole, the exhibition is right on trend, combining a growing public interest in design activism with a recent popularization of “object”-centered museum exhibitions and publications. …

In one section of the exhibition is a collection of handmade “book blocs,” a functional riff on the riot shield in which Plexiglas and cardboard are made to resemble oversized book covers. Wielded by groups protesting tuition hikes and budget cuts to education programs, these blocs put police in the bizarre position of physically assaulting oversize books when they attack protestors. It’s a memorable image, and one that certainly drums up attention for its causes as it’s disseminated on the web. Another standout is the Spanish-born “flone”, an economical marriage of laser-cut plywood and open-source software. “Reinventing airspace as public space” in an age of insidious drone warfare, the flone allows the user to fly his or her smartphone for the purpose of filming police and demonstrations. Of all the items on display, it is the flone whose revolutionary potential feels the most formidable.

In an earlier review, Alice Bell also praised the exhibit:

Entering the gallery, you are greeted with rows of metal poles holding up the displays. An allusion to the bars of prison walls, they also offer the basis for one of the best exhibits: a history of the “lock-on” technique used by activists to attach themselves to each other and/or objects. … [A] history of lock-ons and the Capitalism is Crisis banner [made for the 2009 Blackheath Climate Camp] are as much part of modernity as a gallery of wedding dresses. Their arguments are part of the hard tissue of our world, even if we don’t always recognise it. As are the history of the pink triangle, a collection of anti-apartheid badges and a simple poster-paint-and-cardboard banner that declares “I wish my boyfriend was as dirty as your policies”, all also displayed in the gallery. It’s not complete – there are many protest movements missing – but it’s diverse and engaging.

In another earlier review, Victoria Sadler noted that she “found this a very emotive exhibition, one that drives up a lot of anger and frustration”:

Supporting the exhibits are a number of video clips, including footage from protests in Tiananmen Square, the Middle East, Seoul and Japan. The level of violent resistance from the state in almost all instances is incredibly depressing. And the disproportionate use of that violence can be harrowing to watch.

Watching again the tanks of the Chinese Army toppling the 30ft Goddess of Democracy in Tiananmen Square was quite distressing, as it was to see the footage from Palestine of kids throwing stones with their slingshots, only for their pebbles to be met with gunfire. The V&A has managed to obtain one of these slingshots, a makeshift item created from the tongue of a child’s trainer, and alongside this footage it really had an impact. Included in the video clips are commentary and observation from those who have been active in protest movements, or who have studied them.

It’s eye-opening to listen to how these movements do bring in the egalitarian principles and community consciousness they want to see in the world, but how they also struggle with gender and class structures within their own ranks. But it was easy to agree with these commentators that change has only come about from direct action, from challenges to property and power, rather than negotiation and dialogue.

The exhibition runs through February 1st.

MLK’s “Other America”

King delivers a speech, “The Other America,” before an audience at Stanford University on April 14, 1967:

Eugene Robinson looks back at how King, in the final weeks of his life, increasingly turned his focus to Americans plagued by poverty – “the other America”:

King explained the shift in his focus: “Now our struggle is for genuine equality, which means economic equality. For we know that it isn’t enough to integrate lunch counters. What does it profit a man to be able to eat at an integrated lunch counter if he doesn’t earn enough money to buy a hamburger and a cup of coffee?”

Robinson continues:

[W]hat King saw in 1968 — and what we all should recognize today — is that it is useless to try to address race without also taking on the larger issue of inequality. He was planning a poor people’s march on Washington that would include not only African-Americans but also Latinos, Native Americans and poor Appalachian whites. He envisioned a rainbow of the dispossessed, assembled to demand not just an end to discrimination but a change in the way the economy doles out its spoils.

King did not live to lead that demonstration, which ended up becoming the “Resurrection City” tent encampment on the National Mall. Protesters never won passage of the “economic bill of rights” they had sought.

Today, our society is much more affluent overall — and much more unequal. Since King’s death, the share of total U.S. income earned by the top 1 percent has more than doubled. Studies indicate there is less economic mobility in the United States than in most other developed countries. The American dream is in danger of becoming a distant memory. … Paying homage to King as one of our nation’s greatest leaders means remembering not just his soaring oratory about racial justice but his pointed words about economic justice as well. Inequality, he told us, threatens the well-being of the nation. Extending a hand to those in need makes us stronger.

Max Ehrenfreund, citing Robinson’s article, remarks that “persistent economic inequality has arguably undermined some of the most important achievements of the civil rights movement”:

Legally, our schools are integrated, but in practice, research suggests they’re becoming more segregated. White and black children in kindergarten and younger are much more likely to be separated from each other than whites and blacks in the population at large, largely because black families still can’t afford to live in the neighborhoods with the best schools, as Emily Badger has explained. And while segregation between neighborhoods has been steadily decreasing, there are still many places like Ferguson, Mo., where the economic ramifications of decades of racially biased business practices and government policies keep low-income blacks from finding a way out.

It’s often said on Martin Luther King Day that the civil rights movement still has unfinished business, but somehow, the events of the past year seem to have made that fact especially clear.


How We Got Modern Political Parties


Reviewing Julian E. Zelizer’s The Fierce Urgency of Now, an account of Lyndon Johnson and the Great Society, Sam Tanenhaus traces our ideologically sorted political parties to a 1950 report by the American Political Science Association, “Toward a More Responsible Two-Party System”:

This report grew out of a project supervised by E. E. Schattschneider, a professor at Wesleyan and a leading exponent of Wilsonian party government. A political party that “does not capitalize on its successes by mobilizing the whole power of the government is a monstrosity reflecting the stupidity of professional politicians who are more interested in the petty spoils of office than they are in the control of the richest and most powerful government in the world,” Schattschneider had written.

This argument became the overriding theme of the 1950 report. Schattschneider’s team of fifteen scholars and policy experts had talked to congressional leaders, to officials in the Truman Administration, and to state and local politicians. It concluded that the two major parties were “probably the most archaic institutions in the United States”—scarcely more than “loose associations of state and local organizations, with very little national machinery and very little national cohesion.”

The Republican or Democrat sent to Congress was seldom screened by the national party and so felt no obligation to support the party’s program, if he even knew what it was. Once in office, he delivered patronage and pork to his constituents back home. He operated free of a coherent agenda and belonged to no “binding” caucus. The parties, in other words, were failing because they weren’t sufficiently ideological, partisan, and polarizing.

The solution was a “responsible party system”—centralized, idea-driven, serious-minded. Each party needed stronger central “councils” that met regularly, not just in Convention years, to establish principles and programs. Candidates should be expected to campaign on these platforms and then to carry them out, with dissidents punished or expelled. The party in power would enact its program, and the minority party would provide strong criticism and develop alternatives to present at election time.

(Image: President Johnson endeavors to give “The Treatment” to Senator Richard Russell in 1963, via Wikimedia Commons)

The Web’s Heart Of Darkness

Last summer, Andrew O’Hagan undertook the peculiar experiment of borrowing “a dead young man’s name and see[ing] how far I could go in animating a fake life for him.” In the resulting meditation on identity and “the ghostliness of the internet and the way we live with it,” O’Hagan describes how he plunged his fictionalized “Ronald Pinn” into the dark web, “where one can be anybody one wants to be”:

There were areas I wouldn’t allow him to go into – porn, for instance – but the Ronnie who existed last summer was alive both to drugs and to the idea of weaponry.

It’s one of the contradictions of the dark web, that its love of throwing off constraints doesn’t always sit well with its live-and-let-live philosophy. There are people in those illicit marketplaces who sell ‘suicide tablets’ and bomb-making kits. ‘Crowd-sourced hitmen’ were on offer beside assault weapons, bullets and grenades. One of the odd things I discovered during my time with cyber-purists – and Ronnie found it too – was how right-wing they are at the heart of their revolutionary programmes. The internet is libertarian in spirit, as well as cultish, paranoid, rabble-rousing and demagogic, given to emptying other people’s trash cans while hiding their own, devoted not to persuasion but to trolling, obsessed with making a religion of democracy while broadly mistrusting people. Far down in the dark web, there exists an anti-authoritarian madness, a love of disorder as long as one’s own possessions aren’t threatened. The peaceniks come holding grenades. The Manson Family would feel at home.

When Ronnie Pinn went to see this world he found it welcoming and vile. He saw Uzis and assault rifles, bomb-making kits, grenades, machetes and pistols. As a man with cyber-currency, he was welcome in every room and was never checked. He was anybody as well as nobody. He could have been a teenager, a warrior, a terrorist or a psychopath. So long as he had currency he was okay.

The Sinking Ship Of Chivalry

Tyler Cowen points out the abstract of a new paper that upends notions of “women and children first”:

Since the sinking of the Titanic, there has been a widespread belief that the social norm of “women and children first” (WCF) gives women a survival advantage over men in maritime disasters, and that captains and crew members give priority to passengers. We analyze a database of 18 maritime disasters spanning three centuries, covering the fate of over 15,000 individuals of more than 30 nationalities.

Our results provide a unique picture of maritime disasters. Women have a distinct survival disadvantage compared with men. Captains and crew survive at a significantly higher rate than passengers. We also find that: the captain has the power to enforce normative behavior; there seems to be no association between duration of a disaster and the impact of social norms; women fare no better when they constitute a small share of the ship’s complement; the length of the voyage before the disaster appears to have no impact on women’s relative survival rate; the sex gap in survival rates has declined since World War I; and women have a larger disadvantage in British shipwrecks. Taken together, our findings show that human behavior in life-and-death situations is best captured by the expression “every man for himself.”

Reading Into Reading Campaigns

Emmett Rensin and David Shor criticize public efforts – like Hillary Clinton’s Too Small to Fail campaign – that suggest reading to your kids will make them smarter. They explain why their preferred method of educational reform is simply “called ‘giving money to people'”:

Here’s a story about Norway. On August 21, 1969, massive oil reserves were discovered under Norway’s sovereign waters in the North Sea. Previously poor regions became suddenly wealthy as the petroleum boom – later bolstered by a natural gas discovery – poured new income into the region. But the wealth wasn’t spread evenly—not every Norwegian in the north could get in on the action. Suddenly there were the makings of a great natural experiment (PDF). Researchers wanted to see what impact sudden cash infusions – a significant environmental change – had on previously poor students, as compared with their still-impoverished peers. The influx of money bested almost every other popular solution to the education gap: students in suddenly well-off families saw an average 3 percent increase in absolute IQ and a 6 percent increase in college attendance. The results were as good as the best American charter schools at a fraction of the cost and logistical hassle.

Another set of circumstances conspired to demonstrate the same principle in the United States. During a long longitudinal study, the calculation of the Earned Income Tax Credit – an essentially unconditional cash transfer to poor parents – changed several times, allowing researchers to plot the causal achievement impact of cash transfers at multiple benefit levels (PDF). These results were even more significant: for a mere $3,000 given annually to the parents of poor children, the data suggests a 7 percent increase in expected student test scores. That’s a relatively low number, too: $8,000 annually wouldn’t double the impact, but it would get us well clear of 10 percent, and still cost less than comparable alternatives. Other studies back up the same thesis, though they don’t quite have the same fun stories.

What Is Religious Faith?

In an interview, John Caputo – a philosopher whose work explores the connections between postmodernism and Christian theology – distinguishes faith from mere “belief”:

Faith is a form of life and so it also has a specific form. I wouldn’t say that faith is more general; I would say it is deeper. It gets expressed in a specific form like liturgy. It is an exercise of the whole person: affective, bodily, performative. It is making the truth.

If we didn’t have the specific historical religious traditions, we would be much the poorer for it. Without Christianity, we wouldn’t have the memory of Jesus. We wouldn’t have the books of the New Testament. You need these concrete, historical traditions that are the bearers of ancient stories and are cut to fit to various cultures. But I don’t want to absolutize them or freeze-frame them. I don’t think of one religion being true at the expense of another in a zero-sum game. I am not saying that if you burrow deeply enough under each religious tradition, you will find they are all the same. They are quite different. They are as different as the cultures and the languages out of which they come. There is an irreducible multiplicity.

This is one of the hallmarks of postmodernity: you can’t boil everything down to one common thing. There are many ways of doing the truth. There can’t be one true religion any more than there can be one true language. The truth of religion is not the truth of a certain body of assertions. It is not about a core set of agreements. That’s not relativism, and it is not saying that there is nothing true in religion. It is saying that religious truth is not like the truth of mathematics. It is a different sort that is deeply woven together with a form of life.