Can America “Destroy” ISIS?

Tomasky wishes Obama would treat Americans like grownups and admit that we can’t eradicate the evil embedded in ISIS:

We’ve been trying to destroy Al Qaeda for 13 years now. We have not. We will not. And we will not destroy ISIS. We can’t destroy these outfits. They’re too nimble and slippery and amorphous, and everybody knows it. So why say it? Why not say what we hopefully can do and what we should do: contain it. We have contained Al Qaeda. Some of the methods have been morally problematic (drone strikes that sometimes kill innocents, etc.), but the methods have worked. Al Qaeda, say the experts, is now probably not in a position to pull off a 9/11. Containment is fine. It does the job. But no, I guess a president can’t say that. A president has to sound like John Wayne. It’s depressing and appalling.

Steve Chapman explains why, in his view, the war against ISIS is unlikely to succeed:

The United States is not incapable of fighting reasonably successful wars. It did so in the 1991 Iraq war, the 1999 Kosovo war and the 1989 invasion of Panama. In each case, we had a well-defined adversary in the form of a government, a limited goal and a clear path to the exit. We generally fail, though, when we undertake open-ended efforts to stamp out radical insurgents in societies alien to ours. We lack the knowledge, the resources, the compelling interest and the staying power to vanquish those groups.

The Islamic State is vulnerable to its local enemies—which include nearly every country in the region. But that doesn’t mean it can be destroyed by us. In fact, it stands to benefit from one thing at which both Obama and Bush have proved adept: creating enemies faster than we can kill them. We don’t know how to conduct a successful war against the Islamic State. So chances are we’ll have to settle for the other kind.

In fact, Ishaan Tharoor notes, the one time we managed to “destroy” a major terrorist outfit, it came back … as ISIS:

The closest the United States has come to destroying a terrorist organization like the Islamic State was when it subdued the al-Qaeda insurgency that led to its rise. A U.S. counteroffensive in 2008, aided by a coalition of Sunni tribal militias, beat back al-Qaeda in Iraq; Baghdad, for a brief moment, seemed to be showing the political will to better accommodate Iraq’s Sunni majority regions. But those gains didn’t hold and, in the chaos of Syria’s civil war, units that once belonged to al-Qaeda in Iraq reemerged as the Islamic State.

The irony is unwelcome for a raft of reasons: The Islamic State is far more powerful than its predecessor, boasting as many as 31,500 fighters, according to new estimates from the CIA. That includes an influx of radicalized European nationals, as well as opportunistic defectors from other Syrian rebel groups. The United States does not have the boots on the ground as it did during its occupation in Iraq; nor is it certain that the Obama administration or the Iraqi government can call on the same Sunni militias that helped first push back al-Qaeda in Iraq.

Face Of The Day

Morsi jailbreak trial adjourned to September 21

Muslim Brotherhood leader Mohamed El-Beltagy flashes the Rabaa sign during a hearing in the Wadi el-Natrun prison-break case at the Cairo Police Academy in Egypt on September 15, 2014. The Cairo Criminal Court adjourned the trial of Mohamed Morsi and 130 others to September 21. By Ahmed Ramadan/Anadolu Agency/Getty Images.

Paternity Pays, Ctd

Kay Hymowitz responds to Claire Cain Miller:

The jumping off point of [Miller’s] piece will be familiar to anyone who has kept a casual eye on gender gap research. Mothers earn less than fathers with similar credentials and laboring in similar occupations.  Citing research by sociologist Michelle Budig, Miller notes that “childless, unmarried women earn 96 cents for every dollar a man earns, while married mothers earn 76 cents.” Men, on the other hand, get a parenthood bonus.  Their earnings go up when they become fathers.

Now, there are two plausible reasons for the motherhood penalty and fatherhood bonus.

One is that women behave differently than men when they deal with the inevitable tradeoffs between jobs and children. That position is certainly consistent with Budig’s finding that men increase their work hours after becoming a parent while women reduce theirs—not to mention mothers’ oft-repeated preference for part-time over full-time employment. Miller, however, seizes on the gender gap literature’s preferred explanation: The gap is caused by discrimination against women, or in this case, “old fashioned notions of parenthood.” Employers remain suspicious of working mothers, she believes, while they are forever patting working dads on the back with raises and promotions.

Megan McArdle also sees some inherent divisions in family roles:

The fundamental unfairness of reproduction carries over into the partnerships we form to assist it. The ideal of an egalitarian partnership in which both partners work outside the home and inside the home in equal measure isn’t achieved even in those Nordic paradises where everyone gets scads of fully paid parental leave and subsidized day care — and women are even less likely to end up in a private-sector job or management than they are in the heartless U.S.

Instead of talking about how unfair it all is, it’s probably more useful to talk about what we want to achieve. Do we want to encourage the formation of marriages in which one spouse charges harder outside the home and the other spouse assumes more domestic duties? Or should we penalize spouses who made the mistake of counting on their partner to provide the lion’s share of the earning power? That was the argument of many feminists in the 1970s; they didn’t want women to have the choice of becoming housewives.

A reader chimes in:

My husband benefited from the Family Man trope. I didn’t benefit so much. We made compromises – I made compromises – all through our marriage to help him meet the requirements of his job. That meant that my career got derailed and we moved to places where we had no support network and I had problems finding work. So his career flourished and I went along with it, because the more I compromised the less of a career I had. Critics of “wage gap” calculations will no doubt point out that I undermined my own earnings, so I have no right to complain about the wage gap, but in retrospect if I’d been paid more for comparable work when I started out, my job would have been a more powerful bargaining chip when we made joint decisions.

When my husband died unexpectedly from a rare and previously undiagnosed cancer before age 60 and I was a widow in my late 50s, I found out widow’s benefits don’t go far and sometimes aren’t even available until later. Pensions are reduced for widows and cut further when the pension holder dies before retirement age. Social Security isn’t much help for younger widows who earn a disproportionately smaller share of the family income. Even though he had life insurance (I insisted), and we had retirement funds, I am facing a rocky retirement that began earlier than expected after my dead-end job laid me off just after I turned 60.

I have good friends who made the same compromises, but their husbands are alive and the wives are working part-time or not at all, traveling, and facing a much different old age – because so far, their husbands are still earning money and clocking time on pension actuarial tables and contributing to Social Security (and they’re lucky because as far as I can tell, they have less insurance and retirement savings than we had, although a couple of the men have larger pensions than my husband did). It takes a lot of capital to produce the high cash flow that a living, successful spouse brings in.

While my problems may sound like the problems of privilege to single moms, I find there isn’t much awareness about how early death pretty much burns up all the financial advantages of having a Family Man who earns most of the family income. It’s very hard for a couple to calculate how career decisions will play out financially 30 or 40 years in the future. These days, I don’t recommend the Family Man model.

Zoolander Award Nominee

https://twitter.com/HuffPostBiz/status/511515488388521985

The Zoolander Award for fashion absurdity has introduced Dish readers to everything from erotic Mickey Mouse ears to Holocaust-evoking children’s-wear. Abby Ohlheiser spots a new contender:

“Get it or regret it!” read the description for a “vintage,” one-of-a-kind Kent State sweatshirt that Urban Outfitters briefly offered for just $129. However, the fact that there was just one available for purchase is far from the most regrettable part of the item: the shirt was decorated with a blood spatter-like pattern, reminiscent of the 1970 “Kent State Massacre” that left four people dead. …

As outrage spread, Urban Outfitters issued an apology for the product on Monday morning, claiming that the product “was purchased as part of our sun-faded vintage collection.” The company added that the bright red stains and holes, which certainly seemed to suggest blood, were simply “discoloration from the original shade of the shirt and the holes are from natural wear and fray.” The statement added: “We deeply regret that this item was perceived negatively.”

Update from a reader:

I was disheartened to see you jumping onto this pathetic bandwagon. The fact that this became a story, with each outlet attempting to out-outrage the others, shows just how lazy we’ve all gotten. This shirt was a single vintage item that had naturally faded and aged into the (admittedly, very unfortunate) finish shown in the photos. It’s “SOLD OUT” because there was only one of them. That’s how vintage clothing works. This key bit of information was completely missed by nearly everyone who covered this non-story. The Daily Beast went so far as to demand – DEMAND! – that Urban Outfitters tell them “who designed” this item. The answer, of course, is: “A few decades of runs through the average American washer and dryer.”

Quote For The Day

“The economic arguments against independence seem not to be working — and may even be backfiring. I think I know why. Telling a Scot, ‘You can’t do this — if you do, terrible things will happen to you,’ has been a losing negotiating strategy since time immemorial. If you went into a Glasgow pub tonight and said to the average Glaswegian, ‘If you down that beer, you’ll get your head kicked in,’ he would react by draining his glass to the dregs and telling the barman, ‘Same again,'” – Niall Ferguson, who knows whereof he speaks.

Don’t Keep Your Cool

In an excerpt from his new book The Meaning of Human Existence, E.O. Wilson sums up an evolutionary history of selfish and cooperative tendencies. “In a nutshell,” he writes, “individual selection favors what we call sin and group selection favors virtue. The result is the internal conflict of conscience that afflicts all but psychopaths”:

The internal conflict in conscience caused by competing levels of natural selection is more than just an arcane subject for theoretical biologists to ponder. It is not the presence of good and evil tearing at one another in our breasts. It is a biological trait fundamental to the human condition, and necessary for survival of the species. The opposed selection pressures during human evolution produced an unstable mix of innate emotional responses. They created a mind that is continuously and kaleidoscopically shifting in mood — variously proud, aggressive, competitive, angry, vengeful, venal, treacherous, curious, adventurous, tribal, brave, humble, patriotic, empathetic and loving. All normal humans are both ignoble and noble, often in close alternation, sometimes simultaneously.

The instability of the emotions is a quality we should wish to keep. It is the essence of the human character, and the source of our creativity. We need to understand ourselves in both evolutionary and psychological terms in order to plan a more rational, catastrophe-proof future. We must learn to behave, but let us never even think of domesticating human nature.

The Geography Of Suicide

[Map: global age-standardized suicide rates, both sexes, 2012]

The World Health Organization recently released a report (pdf) illustrating suicide risk across the globe. Tanya Basu unpacks it:

One dramatic trend the WHO reports is that countries in the developing world have suicide rates that are many times higher than those in the Western world. “Despite preconceptions that suicide is more prevalent in high-income countries,” the report states, “in reality, 75 percent of suicides occur in low- and middle-income countries.”

The high male-to-female ratio of suicide victims is also rapidly equalizing, particularly in the developing world. The changing makeup of the global workforce and its increasing inclusion of women have made women more susceptible to the socioeconomic stress that increases the likelihood for suicide. While the male-to-female ratio for high-income countries is 3.5, the ratio is almost even in low-income countries at 1.6. The divide is particularly close in the Western Pacific (0.9), Southeast Asia (1.6), and the Eastern Mediterranean (1.4).  Variation in suicide rates by age is also important. Younger women in the 15-to-29 age bracket are as likely as their male counterparts to commit suicide in developing countries at a 1:1 ratio. The gap widens up to middle age, but in general, data indicates that the gender of suicide victims can be male or female, unlike the male dominance of suicides in the developed world.

Steven E. Hyman argues that rich and poor countries alike are failing their mentally ill citizens:

In the United States, for example, the federal Mental Health Parity and Addiction Equity Act, which banned much of the previously existing health insurance discrimination against people with mental illness, was passed only as recently as 2008. However, the regulations needed to implement the law languished for five years, issuing only in 2013. Such late but laudable reforms notwithstanding, in the United States and other high-income countries, many individuals with chronic mental illness become homeless or are imprisoned, often for offenses that stem from their disorders.

The low priority of mental illness in the health care systems of many [low- and middle-income countries (LMICs)] is attested to by health budget allocations that generally lie in the range of 1 to 2 percent of health expenditure. As a result, health care spending on mental disorders is often less than US$0.25 per capita in low-income countries and averages less than US$2.00 per capita globally. The WHO estimates that 80 percent of individuals with mental illnesses in LMICs do not receive meaningful treatment. And when treatments are available, they are often in the form of medications dating from the 1950s that should have been long superseded by more modern medicines.

Bill Gardner considers one reason why:

[W]ith cost-effective means to treat mental illnesses we could relieve an enormous burden of human suffering and greatly increase human productivity. But we neglect the care of the mentally ill relative to our care for those with other disorders. Hyman documents how policy makers discount the importance of mental illness and asks why. One reason is the stigmatization of the mentally ill. But then what explains stigmatization? [Hyman writes:]

I believe that a seemingly more arcane but powerful cognitive distortion also plays a role in the deprioritization of mental illness: the belief that mental disorders should somehow be controllable, if only the affected person tried hard enough or adhered to a better set of beliefs.

The symptoms of mental disorders are derangements of thought and emotion. Our sense of personal autonomy tells us that we determine what we think and can at least shape what we feel. So if we can control ourselves, why can’t they?  The suspicion that the mentally ill are responsible for their state may be built into who we are.

Barack Obama, Neocon?


In Obama’s reluctance to refer to his military operation against ISIS as a war, Uri Friedman reads an implicit embrace of the notion of perpetual war:

The distinctions between war and peace, of course, have long been murky (think America’s “police action” in Korea during another seemingly endless conflict: the Cold War). And few declarations of war are as clear as, say, those issued during World War II. Obama, moreover, has been careful to present his counterterrorism measures as limited to specific groups in specific places that pose specific threats to the United States—rather than, in his words, a “boundless ‘global war on terror.’” But over the course of his presidency, these efforts have expanded from Pakistan and Yemen to Somalia, and now to Iraq and Syria. “This war, like all wars, must end,” Obama declared at National Defense University.

[Last] week, the president set aside that goal. Thirteen years after his predecessor declared war on a concept—terror—Obama avoided explicitly declaring war on the very real adversary ISIS has become. All the same, U.S. soldiers are now going on the offensive again in the Middle East. What is the nature of their enemy? Is it peacetime or wartime? After Wednesday’s speech, it’s more difficult than ever to tell.

Allahpundit thinks Obama has adopted the same logic Bush used to justify invading Iraq in 2003:

He’s spent six years using, and even expanding, the counterterror tools that Bush gave him, but not until now did he take the final step and adopt Bush’s view of war itself.

Obama isn’t responding to an “immediate” threat against the U.S. in hitting ISIS; he’s engaging in preemptive war to try to neutralize what will, sooner or later (likely sooner), become a grave strategic threat. It’s like trying to oust the Taliban circa 1998 for fear of what terrorists based in Afghanistan might eventually do to America — or, if you prefer, like ousting Saddam circa 2003 for fear of what he might eventually do to America with his weapons program. Obama’s going to hit ISIS before cells nurtured in their territory hit us, and good for him. But let’s not kid ourselves what this means: If, as Conor Friedersdorf says, Obama’s now willing to preemptively attack a brutal Iraqi enemy for fear of what he might do down the line to America and its interests, he should have also supported the war in Iraq in 2003.

Former Bush advisor William Inboden unsurprisingly depicts that shift as the president waking up to reality:

It is often forgotten today, but in President Jimmy Carter’s last year in office he developed an assertive policy towards the Soviet Union, including a major defense buildup, support for rebels fighting the Soviets in Afghanistan, and suspending any further arms control agreements. Carter adopted these policies after the many traumas of 1979, culminating in the Soviet invasion of Afghanistan, made him realize that the previous three years of his Cold War policies had been naïve and weak. Six years into his presidency, perhaps President Obama has now arrived at a similar “Carter moment” and realizes that with just over two years left in his administration, he needs to make a similar shift.