James C. Banks

Archive for March, 2011

But Who’s the Man That Comes after Gaddafi?

In Uncategorized on March 30, 2011 at 12:53 am

I missed the president’s speech on the “situation in Libya” last night, but I have caught up since then. I must give the president credit. I was expecting platitudes, but the speech—at least insofar as the Economist presented it—was actually one of the best arguments for intervention that I have yet seen. It didn’t convince me, though.

Part of the reason why I found this speech unconvincing was the way that the president and the newspaper framed the argument. While neither is seeking direct regime change, both appear to hope that the circumstances the coalition has created in Libya will lead to a collapse of Gaddafi’s government, and that the collapse of this regime, in itself, is a positive good. I am not convinced because I have no idea who comes after Gaddafi. The only thing I know is that, if Gaddafi does fall, then someone will have to come after him.

The rebels don’t inspire much confidence in this regard. Marriages of convenience like the one between the United States and these rebels often end with both parties at one another’s throats. Our co-belligerencies with Saddam Hussein, with Osama bin Laden, and (to a very limited extent in 1969) with Muammar Al-Gaddafi did not end happily, in spite of the fact that they all seemed perfectly reasonable at the time we pursued them.

Interventionists would say that it made sense for the United States to back Saddam Hussein as he buffered against the Islamist revolution in Iran.  It made sense to give terrorists like Osama bin Laden training as they pushed back against the Soviet occupation of Afghanistan.  What does not make sense is siding with possibly Islamist rebels (who will likely pursue a radically and dangerously anti-American agenda if they ever come to power) when the dictator they fight is cruel but, in his relationship to the United States, relatively insignificant.

Last night, I was waiting for the president to answer why we are pursuing this agenda.  As of tonight, I am still waiting.


War Movies: In Search of a Standard for a Usually-Dry Genre

In Uncategorized on March 25, 2011 at 3:03 am

Last year, The Onion ran a piece entitled “Tom Hanks Forces Houseguests to Play ‘World War II’ with Him”. When their satirical version of Hanks assigns roles, he says:

“Bruce, you’re the tough guy from Brooklyn who cares a lot more than he lets on and everybody calls you ‘Brooklyn,'” Hanks reportedly said, pacing back and forth in an authentic 1943 U.S. officer’s field jacket. “Martin, you’re the funny medic named Dankowitz.”

“Ron, you’ll be the weakling Irish kid everybody thinks is going to get us killed,” Hanks added. “Let’s just hope you don’t, soldier.”

Somehow, when I read that, I felt that this was approximately how screenwriters developed characters when plotting war movies—by type rather than individuality. There is nothing about the war film itself that should make characters dull and two-dimensional, but often they are.

Recently, I watched Band of Brothers and, after finishing the last episode, realized that I had watched ten hours of narrative without ever feeling empathy for the characters—because, among all the protagonists, there was nothing to distinguish any one of them. The same could be said of pictures like Saving Private Ryan and Black Hawk Down.

This is not to say that soldiers—as portrayed in movies and on television—have no personality: Private Ryan does have a distinctive devotion to his duty and Major Winters (the hero of Band of Brothers) is human enough to suffer post-traumatic stress after looking into the eyes of a German before killing him. The problem is that war is too often portrayed in media as an extremely limited experience, oscillating, as a sort of dialectic, between the stress of a mission and the decompression that follows it. This is an ebb and flow which makes for intensity but not poignancy.

There are a number of war films that are indisputably great movies: The Dirty Dozen, The Great Escape, The Thin Red Line, The Hurt Locker. But in all of the above mentioned pictures, war is the setting but not the theme. These are all fundamentally films about how specific individuals navigate adverse circumstances, unlike the previously mentioned films, which are more about the monotonizing nature of warfare.

This is a subtle distinction, and it is hard to articulate a standard for it, but, were I to try, the standard would probably be something like this: a war film is intriguing if you could make an interesting movie about its protagonists set during peacetime. Short of that, the picture might be captivating enough to hold my attention while watching it, but it certainly wouldn’t replay in my mind after the screen is turned off.

Libya: The New Front

In Uncategorized on March 21, 2011 at 1:59 am

It is now pointless to argue whether we should or should not interfere in Libya, because—though our involvement is only modest at this time—we already have. The Libyan conflict has very little to do with our national interest. As much as America hates Muammar Gaddafi, there is minimal evidence that whoever replaces him would be any better. Yes, Gaddafi has supported and carried out terrorist attacks on the United States; but he is less likely to do so in the future than the radical Islamists who might take power should he be overthrown. Those of us who were opposed to military intervention (myself included) can still offer thoughts on how to proceed, now that we can’t turn back time.

By interfering in Libya, we have opened up the potential for putting boots on the ground, though this does not have to happen. Whether or not we do so depends on how specifically the mission is defined, and it is for this reason that I advocate setting a very low bar for success. We probably should have done this in Afghanistan also—we could have withdrawn after the Battle of Tora Bora, claiming to have defeated Al Qaeda and, therefore, to have emerged victorious. This is no longer an option in Afghanistan, but we can still set a low bar in Libya.

The president should announce that his only intention, for the moment, is to turn back Muammar Gaddafi’s assault on Benghazi and that, once this has been done, he will allow the Libyans to fight it out. The worst bar for success that could be set would be the removal of Gaddafi from power; I do not think that such an outcome is impossible, but I do think that it will be much harder to achieve than a no-fly zone. Already, the coalition is going beyond its original goal of establishing a no-fly zone by striking ground targets.

Whether these actions are the gateway drug to having to oust Gaddafi and then help rebuild so as to avoid a refugee crisis spilling into Egypt and Italy is unclear. Admiral Mullen seems to think that ousting Gaddafi will not be necessary; France seems to disagree. France and Britain have been much more cavalier in the intervention than has the United States, possibly because instability in Libya could spill over into Europe and possibly as an act of atonement for having extended a hand of friendship to Gaddafi during the last decade. (I should note that the United States took on Gaddafi as an ally, but never offered serious friendship.)

If France and Britain want to perform acts of contrition, they have every reason to. But if this means marching into Tripoli, putting Gaddafi on trial and then mediating the nation-building process to make sure another crisis doesn’t erupt, then the United States should take no more than a supportive role, kind of like a cheerleading squad. So far, I have heard several individuals argue that the United States ought to be more involved because “we can’t let France beat us.” Actually, I’m glad to let France and Britain take the lead. After all, they are the ones whose interests are more directly at stake.

That Was Then, What Is Now?

In Uncategorized on March 17, 2011 at 2:11 am

This time it’s different. At least, that is what it looks like. Martin Peretz has joined the axis of internationalism in calling for a no-fly zone over Libya, while Ross Douthat, National Review and other prominent conservatives—while not explicitly backing the president’s status quo approach to foreign policy—have expressed skepticism as to its alternatives.

This realignment seems strange given where things stood just four years ago—National Review holding firm on Iraq, The New Republic running away from its support of the invasion and then-candidate Obama touting his dovish instincts in debates.  While I was surprised to see so severe a critique of the Obama administration’s foreign policy coming from TNR, I was not surprised that they rediscovered their inner hawk.

Nor is it particularly surprising that National Review should oppose formidable intervention. Most decisions relating to international relations are made not at the level of ideology but at the level of strategy. But it is the dynamic within conservatism that is most interesting, and I suspect that it is not just National Review. I have no desire to see an authorization for the use of force in Libya go to a vote in Congress, but I would be curious to see what would happen if it did. Such a bill would not garner much support from either party, but I have no idea which party would oppose it more vociferously.

A robust internationalist foreign policy has been one of the three legs of American conservatism since V-E Day, but it is the weakest leg of the stool. Foreign policy conservatives no longer have the Soviet Union to contend with, and while radical Islam is probably as dangerous as the Soviet Union was (in the sense that mutually assured destruction is not a deterrent to its resolve), it does not unite the movement as effectively.  It does not seem as threatening because Islamism is not an ideology which could ever gain much currency in liberal democracies.

During the Cold War, this was a different matter. The main line between the Free World and the Commissars was not a wall running through Berlin. Rather, as Solzhenitsyn wrote, the line that separates good from evil cuts through every human heart. The War on Terror has produced traitors, like Major Hasan at Fort Hood, but there is comparatively little danger of Islamists infiltrating American institutions in the way that Alger Hiss or the Rosenbergs did. Enemies in the War on Terror do not represent a visible nation-state, unlike enemies during the Cold War, but that does not mean they are harder to see.

More important, though, is the fact that while a robust foreign policy during the Cold War could easily be tied to support for free enterprise and social conservatism—since these were the very things which the Soviet Union had sought to root out—it is harder to connect these values with the foreign interventions associated with the War on Terror.

This is not because the principles of free enterprise and social conservatism (in its American manifestation) are consistent with anything that Islamism represents; they are not.  Rather, it is because Islamism lacks the strength and precision necessary to corrode the institutions of society and, therefore, can only hope to destroy society itself. There is no bifurcating option of “Better dead than red,” because red—or green, for that matter—is not an option.  Only dead is.

The conservatism of interventionists often runs up against the goals of the other sectors of the conservative movement.  Some social conservatives consider support of Israel to be a family value, or at least an appropriate priority for a Values Voter Summit, and any reasonable conservative can agree that readiness and vigilance are important in a world he gauges to be too complicated for predictability or ideology.  Even so, it has become much more difficult to determine how conservatives should evaluate strategies at the international level.  On issues like support of Israel, I agree with most mainstream conservatives; but, on issues like Libya . . . well, foreign interventionist conservatives might have to get used to the fact that there is a new mainstream.

How Fast Is the World Turning? Technology and Fulfillment

In Uncategorized on March 14, 2011 at 11:34 pm

News isn’t supposed to get old—that’s why it’s news. But that doesn’t prevent reporters from publishing stories that they probably could have chronicled without sources. Every week one social pundit or another indites a not-so-new article about how we’re falling behind in whatever it is currently unfashionable to be behind in. Though his ideas are more nuanced than these sentiments, it is these sentiments among the vast wonkosphere which have made Tyler Cowen’s The Great Stagnation “the most debated book so far this year.”

I have not read the book—not for a lack of interest, but for lack of a Kindle; I have, however, read the column Cowen wrote that more or less summarizes its theme. It is his conclusion of which I am most skeptical:

In the narrow sense, the solution to the stagnation of median income will not be a political one. And one of the hardest points to grasp about this quandary is that no one in particular is to blame. Scientific progress has never proceeded on an even, predictable basis, even though for part of the 20th century it seemed that it might.

Science should be encouraged with subsidies for basic research, as well as private charity, educational reform, a business culture geared toward commercializing inventions, and greater public appreciation for the scientific endeavor. A lighter legal and regulatory hand could ease the path of future innovations.

I tend to believe that any notion that scientific invention and innovation will save us is wishful thinking, but even if Professor Cowen’s hopes are oversold, it is just as likely that his gloom about low-hanging fruit—that is, his claim that economic progress has slowed in the late twentieth and early twenty-first centuries—is also exaggerated. I argued earlier that many of the technologies that don’t seem as significant as the car might yet turn out to be just as revolutionary, though they have not been entirely unpacked at this point. Virtual offices, to cite just one instance, have the ability to change the workforce from being primarily labor-based to being primarily contract-based, but this is not a potential of which many firms have yet taken advantage.

And even if the government did not choose to rule with a “lighter legal and regulatory hand” it is easy to imagine ways that individual citizens could cast it off: I am not convinced that sea-steading will be a viable option for long-term communities, but it is imaginable that one could park a research facility twenty miles off the coast of California (that is, in international waters) and conduct research there untrammeled by government interference.

While I don’t think that we should ban the taxi to save the rickshaw driver, I tend to think that the most formidable challenge the country faces is not the fact that things are not changing fast enough, but rather that they are changing too fast.  It is becoming increasingly difficult to imagine any kind of work that cannot be automated. This does not mean that humanity will be wanting for jobs to do; there will always be a need for human minds to gauge and set priorities for which tasks to automate and when.  Robots might become the labor force, but humans will always provide the management at some level.

The more significant problem is that, as work becomes more virtual and automated, it will become less fulfilling and more stressful for the human individuals at the top, responsible for seeing it carried through.  While I’ve never been sentimental about the virtues of truck-driving or hammering a nail, there is definitely something to be said for having a job at which you can actually see the fruits of your labor.  But craft—the satisfaction that comes from doing a job well—is turning into an alien concept: labors that were once necessary are becoming irrelevant, and the tasks assigned to individuals are becoming increasingly geared toward providing leisure services.

Is there a solution to this problem? I don’t think so, mainly because people believe that their newfound vocations, while possibly less fulfilling, stave off a larger challenge (id est, being unemployed). This is true, but it does not mean that nothing has been lost, or that what has been lost is something technology will bring back.

Expected Facts and Strange Conclusions

In Uncategorized on March 12, 2011 at 3:36 pm

I don’t want to wag my finger and cry foul too much, because I don’t think that David Brooks is serious about this, but this passage is still an interesting case study, since I can imagine others drawing the same conclusion in perfect seriousness:

“What do you do after your party wins an election? In a forthcoming study for the journal Computers in Human Behavior, Patrick Markey and Charlotte Markey compared Internet searches in red and blue states after the 2006 and 2010 elections. They found that the number of searches for pornography was much higher right after the 2010 election (a big G.O.P. year) than after 2006 (a big Democratic year). Conversely, people in blue states searched for porn at much higher rates after 2006 than after 2010. One explanation is this: After winning a vicarious status competition, people (predominantly men, I guess) tend to seek out pornography.”

This sounds a bit like post hoc ergo propter hoc reasoning. Just because people visit pornography sites right after their party’s victory doesn’t mean that the victory is the reason they did so.

It seems more likely to me that the correlation between victory and pornography-viewing stems from the contrasting contexts of people who win versus people who lose: Victorious voters—and, importantly, voters who expect to be victorious—are more likely to go to the internet to scrutinize election results; they are also more likely to stay on the internet for more than five minutes and wait for further results and news stories and, therefore, pursue recreation in between. When they get politicked out—well, they’re already at the computer, so why not?

I am not a connoisseur of pornography (you’ll have to trust me) and have not had this experience, but the motivation behind the facts is far from clear. It might have been more telling if someone had looked into whether or not pornographic video rentals went up in blue states the day after the 1992 election or vice versa after the 1994 election. But, as it is, the facts don’t lead to a foregone conclusion that elections compel people to seek out pornography; it is as likely that elections just bring people together in a large, electronic forum where pornography can easily be found.

When the Future’s Not What It Used to Be

In Uncategorized on March 6, 2011 at 3:18 am

Remember how—when you were a twenty-something or, in my case, a kid—putting a 2000 on the end of any product used to be a clever marketing ploy because the millennium was approaching? If so, you might wonder whatever happened to the millennium. This is part of what I gather Tyler Cowen has been trying to answer in his new book, The Great Stagnation. Technology still changes, but not like it used to. I’m old enough to remember when having a cordless phone was a status symbol, but didn’t have the opportunity to see car-ownership transform the lives of America’s middle class, as many people born in the early twentieth century would have.

I am not as uneasy with this as the elite class has tended to be, stepping up to pour money into initiatives which range from innovative to ludicrous. The concern is common among academics as well; every couple of months (or at least every year, when SAT scores are released) one pundit or another churns out another piece about how “American kids” are “falling behind” in a “competitive, global economy.” The public generally seems to agree. The latest manifestation of this is the overreaction to Amy Chua’s book, Battle Hymn of the Tiger Mother. Nearly every commentary on the controversial tome has mentioned how the response masks Americans’ insecurity about their future prospects in the new Republic of Earth (or will it be an autocracy?).

As a general rule, the pundits who compare American students to peers abroad miss the important point that, while America might be losing at its own game, it is still its own game, and that counts for something. Burgeoning economies in China, India and Brazil aren’t redefining the way that business is done; they are not undermining long-standing assumptions of the international economy or free trade. On the contrary, the degree to which they have been successful has been contingent upon their willingness to adopt Western models of political economy. China did not start growing until one of its leaders decided that getting rich was glorious.

But this does not matter, because a falling tide lowers all boats. If we are nearing a moment at which entrepreneurship cannot come up with new strategies for redefining the way that we live, we might be living near the end of history after all. I do not believe this to be the case (on the grounds that there are always new corners to look into somewhere), but with every year that passes, innovation diminishes slightly. This is not to say that new products fail to emerge; just that the new innovations that do hit the marketplace are much less ambitious than the flying cars portrayed in futurist movies like Blade Runner.

The imaginary concept of the flying car turned out to be much less revolutionary than the internet or the mobile phone; it might have gotten people to places faster, but it didn’t make it possible for anyone to talk to anyone else at almost any time. Even so, it seemed much more glorious. It was a symbol of human ingenuity and expansiveness. The innovations that actually did come about in the second half of the twentieth century and first decade of the twenty-first—suburbia or the personal computer—are in many ways symbolic of just the opposite: maybe it isn’t good for mankind to be alone, but judging by what he creates, he sure likes being alone.

Would the Decline of the City Be All That Bad?

In Uncategorized on March 1, 2011 at 12:36 am

Joel Kotkin writes on the decline of the city as reflected in the latest census statistics.  Kotkin is one of the most interesting voices on issues of city planning precisely because he is also one of the least conventional.  Whereas almost every voice that has bothered to make a sound on city planning in recent years has concentrated on building “sustainable communities” or promoting “new urbanism,” Kotkin has become a vocal apologist for the suburban lifestyle, even as David Brooks has fallen away.

Even as one who loves living in the city and hates living in suburbia, I have to say that Joel Kotkin is probably right.  Suburbia has been attacked from both the Left and the Right, but this should not necessarily be the case.

This is by and large because suburban hideousness stems from the sub- being tied to the urbs.  In Fairfax County (where I live), the sprawling highways and decentralized townships are meant to serve the convenience rather than the aesthetic tastes of their inhabitants. If you want monuments, go to Washington; if you want a wider variety of grocery retail, go to Northern Virginia.

However, technology is breaking the barrier between this old dichotomy. If the car made it possible for individuals to work in cities without living in them, the computer has made it possible—if not yet economical—for individuals to live and work without visiting a city at all.

I don’t find convincing the argument that technology could never change society so much as to bring down the city, because the city—at least as we know it today—is itself a product of technology (and, more specifically, of industrialization).  When manufacturing was still central to the American economy, it made sense to concentrate labor in urban centers.  Even through the 1950s, when workers began moving from factories facing waterfronts to offices high above city streets, it was natural that everyone should work in an environment where they could look over one another’s shoulders and interact more directly.

This is still true to some degree, but every new technological development makes the office center seem quainter and quainter.  People can converse on mobile phones rather than office lines, deliver messages through e-mails and BlackBerrys rather than post offices, and speak virtual-face-to-virtual-face on Skype rather than, well, face-to-face.

Eventually, there may come a time when it will be cheaper for companies to contract most of their work out to individuals who work from home, rather than reimbursing their public transit or gasoline costs and renting an urban office. Most businesses could probably already save money this way—they just haven’t realized this yet.  I doubt that a time will come when they “realize this” per se.  More likely, companies that contract out will slowly overcome companies that don’t and cities will slowly shrink to their more natural sizes.

This will definitely be good for suburbia, but what will the suburbs do without the urbs?  It remains to be determined, but, if nothing else, the decline of the city would mean that the car would become less essential to suburban life.  If this were to occur, then suburbanites might see fit to turn to a more town-oriented model, pushing to become distinct communities unto themselves rather than outskirts of a larger—but ever hollowing-out—urban center.  If so, they will need something to satisfy the aesthetic and cultural appetites once served by city life.  This is indeterminate, but the descent of the city might not mean the ascent of suburbia; it might mean the rebirth of the township.