General war, total war, medieval war – whatever you wish to call it – is rendered unacceptable by the presence of fission and fusion weapons. And nuclear weapons, we find, are not operationally scalable. Sub-kiloton warheads have tactical utility, but the attendant radioactive residue outweighs that utility. There are niche uses for nuclear explosives – generating an electromagnetic pulse (EMP) to disable the electrics and electronics over a wide area, for example – but exploding a nuclear weapon in the airspace of a sovereign state would most certainly be viewed as a hostile act, to say nothing of how the inbound missile would read to that state’s early-warning systems. It could only be taken as the first strike of a nuclear war. The same goes for using orbital nuclear-pumped lasers to blind communications and GPS satellites.
That’s a sketch of the nuclear profile for sovereign states. It’s quite different for non-state and sub-state actors. If al Qaeda, for example, could get their hands on a nuclear warhead (and the codes for arming it), they would use it for blackmail, and I wouldn’t put it past them to detonate it. But that’s the top of the pyramid. They can use spent nuclear fuel rods, or cobalt (Co-60) from medical equipment, or tailings from uranium reprocessing – any radioactive materials – to jacket an explosive charge, creating a “dirty” bomb. Any of these stolen materials could be used to spike a water supply, or a corn field. The point is, a tactically insignificant amount of radioactivity released in public is a terrorist act – it creates panic rather than damage, and demonstrates just how powerless a government can be in protecting its citizens. Russian intelligence, the KGB’s heirs, used polonium (Po-210) to assassinate dissidents, rogue scientists and foreign assets for whom they had no further use.
And speaking of spent fuel rods, we – and by “we,” I mean humanity, not just Americans – still have no idea how to securely dispose of them, so they sit in Čerenkov pools at power stations around the world, waiting for someone to provide a solution (or figure out how to steal them). Needless to say, this is a precarious situation. The point is, nuclear materials are an ongoing problem when terrorism is rampant, and much of this is due to naïveté during the Atoms for Peace program of the 1950s and 1960s. The situation in Iran today is a product of the Nuclear Nonproliferation Treaty (NPT), which allows all activity short of manufacturing a nuclear weapon. It’s not illegal under the Treaty to enrich uranium to 93%; it becomes illegal only when that uranium is used as a bomb core. It’s not illegal to machine bomb cores out of U-235 and Pu-239; it becomes illegal only when those cores are placed into a bomb. A state can get “a screw away” and be legal under NPT. This is why Iran can enrich their uranium hexafluoride (HEX) to any degree of purity they wish under the Treaty, or work on the micro-shaped-charges needed to symmetrically implode the uranium jacket onto the plutonium core, or investigate the world of neutron initiators, or conduct research on the carbon-laced ceramics needed for re-entry, on and on, and not violate NPT. The naïveté was allowing the processing of uranium outside the five original nuclear states – the United States, the United Kingdom, Russia, France and the PRC.
But we’re here now, and the lesson is to stop being naïve.
This could begin by thinking hard about the “No Nukes” mentality. Is the world better with or without general war between great powers? History shows us that the Cold War represents the longest period during which the dominant powers did not engage one another in general war. What’s different about the Cold War period? Both dominant powers were in possession of fission and fusion weapons.
Speaking of naïveté, “No Nukes” presupposes that no nuclear power will cheat during or after the nuclear disarmament process. Does anyone really think that the DPRK and Pakistan will forgo what they see as the only thing making them relevant? Will Beijing or Moscow? A childish supposition. Beyond that, once the world is truly nuclear-free, there is no disincentive for great powers to engage in total war. What started out as our genie-and-bottle problem is now the world’s.
Proliferation is a problem, as we have seen, for which there exists no reliable safeguard. What could be done, however, is for the West’s nuclear powers to say out loud that any nuclear detonation will be traced to the nation whose enriched material was used, will be considered a nuclear attack by that nation, and will be answered with retaliation against that nation.
The asymmetrical warfare being waged against the West changes the way in which the West fights as well. Insurgents and terrorists know that they cannot win in force-on-force engagements, so they avoid them. They instead hit and run, plant improvised explosive devices (IEDs) along roadsides, assassinate personnel, and so on. Guerrilla warfare. These insurgencies can be won by the dominant power, but it takes time – more time than a democracy typically allows. It’s a bottom-up exercise – convincing the population that you are a better risk than the fighters from their own homeland whom they harbor. That we will answer the problems that drove the insurgency in the first place. That we can deliver to them a responsive government. All of this must be done while militarily fighting the insurgents (and protecting non-combatants). Britain was very good at this throughout its Empire, we did it in the Philippines, France did it in Algeria. We were starting to do it in Viet Nam when Congress pulled the plug. David Kilcullen and General David Petraeus laid out a textbook counterinsurgency (COIN) operation in the “Anbar Awakening,” where the Sunnis of Anbar Province were talked into backing coalition forces against the Sunnis of al Qaeda.
But the cat is out of the bag now. The Arab Spring has unleashed multiple insurgencies of varying degrees of legitimacy, all of which are infused with terrorist opportunists. Tunisia, the index case of the Arab Spring, produced a genuinely elected leader to replace a despot, but the situation has since deteriorated and is being co-opted by al Qaeda in the Islamic Maghreb (AQIM); Egypt’s popular ousting of Hosni Mubarak was co-opted by the Muslim Brotherhood (pronounced “moderate” by our State Department, in spite of eighty years of terrorist activities). Libya was botched by us to the point of losing our ambassador and three other Americans in the terrorist sacking of our consulate. Syria is rapidly unraveling. We are reaping the fruits of not having a cohesive foreign policy. No one knows what America stands for anymore, and that includes our foreign allies and adversaries. Our friends no longer trust us and our enemies don’t fear us. That is a dangerous mix.
Most revolutions are not won by the political idealists who start them. They get co-opted by the best organized and most brutish element of the dissidents, and this is what has happened in the Greater Middle East. One of the things that makes America truly exceptional is that our revolution was won by the political idealists who started it, and they were able to establish the government they envisioned. The Founders knew that equality is a snapshot of society in which liberty is a dependent variable, and liberty is a condition of society in which equality is an independent variable. They opted for liberty. Government was to establish and protect a geopolitical space within which the people could be free. That’s what “Life, Liberty and the Pursuit of Happiness” means.
But I digress. The ways in which insurgent tactics change the ways in which the superior power fights are of interest here, and one of those changes is the replacement of the cavalry by ISR – intelligence, surveillance and reconnaissance – in supplying the combat commander with situational awareness. COIN is an intelligence-driven exercise. Guerrillas are vastly more nimble and mobile than the formal forces of a state. Their leaders blend into the background, their fighters emerge from the noise and disappear back into it. They strike from apparently random directions and at unpredictable targets. Enter UAVs. In 2001, CIA deployed General Atomics’ MQ-1 Predator drones – Unmanned Aerial Vehicles (UAVs), in official parlance – to watch the movements of individuals and groups of insurgents beyond the visual range of our combat commanders. Basic intelligence on the whereabouts and movements of enemy forces. But UAVs have brought a profound upgrade to that ability: they can loiter for up to 14 hours over an area of interest, stream real-time video back to shooters in the field as well as to the UAV operator (who could be close by or in a trailer in Arizona), follow persons of interest to see whom they report to, and so forth. The Air Force brought more MQ-1s into the game and took them into Iraq in 2003. This cracked open the door on remote capabilities in combat.
On March 4, 2002, a CIA-operated MQ-1A armed Predator fired an AGM-114 Hellfire missile into a reinforced Taliban machine gun bunker that had pinned down an Army Ranger team whose CH-47 Chinook had crashed on the top of Takur Ghar Mountain [Afghanistan]. This action took place during what has become known as the “Battle of Roberts Ridge”, a part of Operation Anaconda. This appears to be the first use of an armed UAV in the close ground support role. It kicked the door the rest of the way open on remote weapons in combat. We have all seen these things carry out strikes on television since. It’s not here that the problems lie. These strikes are carefully vetted, court-tested and carried out with precision and professionalism. The problems lie in where industry goes from here.
Once turned on, Israel’s Iron Dome will automatically engage incoming Hamas rockets that it deems headed for populated spaces. Our Navy’s Aegis system automatically engages incoming sea-skimming aerial targets (cruise missiles or aircraft), and can be pointed at medium- and high-altitude aerial targets. There are autonomous machines that can pull armed sentry duty. These are all task-narrowed machines that are carefully programmed to perform a limited range of actions in response to a limited range of stimuli. This is also true of Lockheed Martin’s RQ-170 Sentinel UAV, one of which wound up in Iranian hands.
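What “task-narrowed” means in software terms can be made concrete. Neither Rafael nor the Navy publishes its engagement logic, so what follows is only a minimal sketch under stated assumptions – a flat-earth, drag-free ballistic prediction and made-up zone coordinates – of a fixed stimulus set (tracked inbound objects) mapped to a fixed action set (engage or ignore) by one simple test:

```python
import math

# Hypothetical sketch of a task-narrowed engagement rule, in the spirit of
# (not the actual logic of) a system like Iron Dome: a tracked inbound
# object is engaged only if its ballistic track is predicted to land inside
# a defended populated zone. All names, coordinates and physics shortcuts
# here are illustrative assumptions.

DEFENDED_ZONES = [           # (center_x_km, center_y_km, radius_km)
    (0.0, 0.0, 3.0),         # city center
    (8.0, 2.5, 1.5),         # industrial park
]

def predicted_impact(x, y, vx, vy, vz, z):
    """Flat-earth, drag-free prediction of where the object comes down."""
    g = 9.81e-3                                        # gravity in km/s^2
    t = (vz + math.sqrt(vz * vz + 2.0 * g * z)) / g    # time until z = 0
    return x + vx * t, y + vy * t

def should_engage(track):
    """Engage only if the predicted impact point is in a defended zone."""
    ix, iy = predicted_impact(*track)
    return any(math.hypot(ix - cx, iy - cy) <= r
               for cx, cy, r in DEFENDED_ZONES)

# A rocket predicted to land downtown is engaged; one headed for an empty
# field is ignored.  Track format: (x, y, vx, vy, vz, z) in km and km/s.
print(should_engage((-20.0, -15.0, 0.24, 0.18, 0.4, 1.2)))  # True
print(should_engage((-20.0, -15.0, 0.90, 0.70, 0.4, 1.2)))  # False
```

The machine is not deciding anything in any rich sense; it is mechanically evaluating one inequality, which is precisely why such systems can be trusted to run unattended within their narrow task-set.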
An experimentally modified Northrop Grumman RQ-4 Global Hawk was flown from Edwards AFB to an airfield in Australia with human intervention only to start the engine in California and turn it off in Australia. It taxied, took off, flew its route, landed, rolled off to the tarmac and parked, totally autonomously. That’s all this Global Hawk can do – fly back and forth between that Australian airfield and Edwards AFB – but it’s a start on fully autonomous activity. All that is needed is the software to make decisions along the way. That’s the stuff of artificial intelligence (AI), and they’re working on it in AI labs from MIT to Carnegie Mellon to Caltech.
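At its core, the sequencing just described is a state machine: the aircraft steps through a scripted series of flight phases, selecting its next action from its own state rather than from a pilot’s hands. A minimal sketch, with entirely hypothetical phase names and health check (real flight-management software is vastly more involved):

```python
# Toy mission-sequencing logic: the aircraft walks through a scripted
# series of flight phases on its own.  Phase names and the health check
# are hypothetical assumptions for illustration only.

MISSION = ["taxi", "takeoff", "climb", "cruise_waypoints",
           "descend", "land", "taxi_to_parking", "shutdown"]

def next_phase(current, aircraft_ok=True):
    """Advance the mission script, or hold if a health check fails."""
    if not aircraft_ok:
        return current          # hold in the current phase and re-check
    i = MISSION.index(current)
    return MISSION[min(i + 1, len(MISSION) - 1)]

phase = "taxi"
while phase != "shutdown":      # the entire sortie, with no human input
    phase = next_phase(phase)
    print(phase)
```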
DARPA (the Defense Advanced Research Projects Agency) has been running robotics challenges for some years now, the most widely known being its Grand Challenge competitions for fully autonomous cars running long, obstacle-laden courses. There are several of these “Challenges,” open to universities, corporations and individuals, that foster R&D and the practical application of sensor-fusion and AI programs to produce dependable, accurate and creative decision-making by machines in response to unexpected stimuli.
It’s not a question of “if” we can produce fully autonomous combat platforms, but “when.” There is a current discussion regarding armed drones being used in Afghanistan (and Iraq before our departure), and that’s a good thing. Most of the discussion ranges from non-germane to ill-informed to agenda-driven, but it is important that discussions are occurring, because as the capabilities of these machines increase, these discussions will only grow more important. For example, what happens when an autonomous combat platform engages a wrong target? Who will be held responsible? The operator? The field commander? The theater commander? The programmer? The manufacturer? The perpetrator is a machine – can’t place responsibility there.
Industry has sorted combat platforms into three categories: human in the loop (which is what we’ve got now); human on the loop (where an operator has veto power over what the platform is doing); and human out of the loop (where the platform is unsupervised). The Predator is a prime example of “human in the loop,” in that it is operated by a human via video and sensor feedback. The experimental Global Hawk is an example of “human on the loop,” in that after its engine was started, it went about its business. Within its narrow task-set, Iron Dome is an example of “human out of the loop.”
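The three categories reduce to a question of what happens when the human says nothing. A minimal sketch – the names follow the terminology above, but the control logic is illustrative, not any fielded system’s architecture:

```python
from enum import Enum
from dataclasses import dataclass

# Illustrative encoding of the three industry categories.  The names
# follow the essay's terminology; the logic is a deliberately simplified
# sketch, not any fielded system's architecture.

class Autonomy(Enum):
    IN_THE_LOOP = "human in the loop"          # Predator
    ON_THE_LOOP = "human on the loop"          # experimental Global Hawk
    OUT_OF_THE_LOOP = "human out of the loop"  # Iron Dome, in its task-set

@dataclass
class Operator:
    approved: bool = False     # in the loop: did the human say "go"?
    vetoed: bool = False       # on the loop: did the human say "stop"?

def may_act(autonomy: Autonomy, operator: Operator) -> bool:
    if autonomy is Autonomy.IN_THE_LOOP:
        return operator.approved        # silence means "no"
    if autonomy is Autonomy.ON_THE_LOOP:
        return not operator.vetoed      # silence means "yes"
    return True                         # the software's decision is final

# The asymmetry is the whole point: in the loop defaults to inaction,
# on the loop defaults to action.
print(may_act(Autonomy.IN_THE_LOOP, Operator()))   # False
print(may_act(Autonomy.ON_THE_LOOP, Operator()))   # True
```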
The first class – in the loop – we’ve pretty much got a handle on. The aspect that still needs exploring is the spread of ISR UAVs into the civilian sector, and this is already happening as police departments and lower-level government agencies acquire them and the FAA considers how to license them and assign them altitudes, routes, etc. The concerns yet to be resolved have to do with how they interface with the public. Will they be regulated differently than, say, police helicopters or cruisers? Will a wiretap warrant cover sensor-captured electronic intelligence (ELINT) gathering? What about private investigators? What legal problems will arise from private use of remotely piloted platforms that mount cameras and transmitters? Other sensors?
The second class – on the loop – isn’t as mature a technology as Class One, although several mature systems of this type are in use. Essentially autonomous platforms that are overseen by operators will be more problematic than human-in-the-loop systems because some of their activity (most, actually) will be under software control, and they could execute actionable behavior before human intervention is possible as a practical matter. These situations will raise the same accountability questions as fully autonomous systems, although the operator, even if unable to stop the questionable behavior, will be the prime target for those questions. This will be exacerbated by the fact that the operator will probably be the lowest-ranked individual in the chain of command associated with semi-autonomous combat platforms.
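The timing problem can be put in back-of-the-envelope numbers. Every figure below is an illustrative assumption, not a measurement of any real system, but it shows why “veto power” can be a fiction whenever the software’s detect-to-act interval is shorter than the human’s perceive-and-veto interval:

```python
# Back-of-the-envelope arithmetic on the veto window.  All figures are
# illustrative assumptions, not measurements of any real system.

SOFTWARE_DETECT_TO_ACT_S = 0.5   # sensor fusion + targeting decision
HUMAN_PERCEIVE_S = 0.3           # notice the alert on the console
HUMAN_DECIDE_S = 1.5             # evaluate the situation, choose to veto
LINK_LATENCY_S = 0.25            # veto command transits the control link

veto_arrives = HUMAN_PERCEIVE_S + HUMAN_DECIDE_S + LINK_LATENCY_S
if SOFTWARE_DETECT_TO_ACT_S < veto_arrives:
    print("Action completes %.2f s before the veto can arrive."
          % (veto_arrives - SOFTWARE_DETECT_TO_ACT_S))
```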
A sub-class of semi-autonomous platforms currently being tested is the “swarm,” in which a single operator “controls” a number of similar platforms on a common mission – “flying” the “flock,” taking control of a single machine only if necessary to normalize its behavior. Once on-station, these platforms are programmed to execute a common activity, communicating among themselves to accomplish the overall task. If perfected to the point of deployability, swarms will give rise to a whole new set of accountability problems involving the laws of war, rules of engagement, proportionality of attack, differentiation of combatants from non-combatants, and so forth.
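The control concept – one operator, one goal, many self-coordinating machines – can be sketched in a few lines. This is a toy model with made-up parameters, not a deployed architecture: each platform steers toward the operator’s single shared goal while locally spacing itself off its neighbors.

```python
# Toy model of "flying the flock": the operator issues one shared goal;
# each platform seeks it while keeping local separation from its
# neighbors.  All parameters are made up for illustration; real swarm
# controllers are far more elaborate.

def step(positions, goal, dt=0.1, sep=1.0):
    """Advance every platform one tick toward the operator's goal."""
    new = []
    for i, (x, y) in enumerate(positions):
        vx, vy = goal[0] - x, goal[1] - y           # seek the shared goal
        for j, (ox, oy) in enumerate(positions):    # repel crowding peers
            if i != j and abs(x - ox) + abs(y - oy) < sep:
                vx += x - ox
                vy += y - oy
        new.append((x + vx * dt, y + vy * dt))
    return new

# One operator, one command, three aircraft: the flock converges on the
# goal while spacing itself out, with no per-aircraft piloting.
flock = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.3)]
for _ in range(50):
    flock = step(flock, goal=(10.0, 5.0))
print(flock)
```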
And of course, Class Three platforms are fully autonomous, human-out-of-the-loop machines that operate on their own. Northrop Grumman is making strides in that direction with its X-47B UCAV (unmanned combat aerial vehicle), a UAV designed from the start to be armed and to test new realms of autonomy.
It has undergone initial flight testing at Edwards AFB, and is now aboard CVN-75 USS Harry S Truman for semi-autonomous carrier launch and trap testing, to be followed by autonomous launch and trap testing. It will demonstrate autonomous air-to-air refueling (using a KA-6D Intruder tanker variant). The turbofan-powered UCAV has two internal weapons bays for up to 4,500 pounds of ordnance. UCAVs are sexy, and get most of the press, but true autonomy will probably come first to land-based platforms. Boston Dynamics’ BigDog robot, for example, is being tested to carry equipment and supplies for patrols; the quadruped can semi-autonomously traverse rough terrain while carrying up to 340 pounds of supplies.
Automated systems already deployed include a HUMVEE-mounted device that hears sniper fire, pinpoints its origin, and returns fire with an M-60 7.62mm machine gun – all without human input, although it is a human-on-the-loop system that can be switched over to human-directed behavior. It has been very effective under actual tactical conditions. The Israelis have deployed an automated sentry system that watches over remote areas of the Gaza-Israel border and can challenge intruders, firing on them under a strict set of rules of engagement. We’ve already covered the Navy’s Aegis system and Israel’s Iron Dome. These are all examples of ground-based systems that are, in practical terms, further down the road toward true autonomy than the aerial systems that get most of the coverage, although these, too, are poised for rapid advancement. The bottleneck with all these systems is the software – a truly autonomous system that enjoys an acceptable degree of confidence from critics and the public at large will require artificial intelligence of a nuanced sophistication beyond what we can yet do.
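The acoustic principle behind the counter-sniper device is worth unpacking: a muzzle blast reaches each microphone in a small array at slightly different times, and those arrival-time differences pin down the shooter’s bearing. A toy sketch under simplifying assumptions – plane-wave arrival, a hypothetical four-microphone array, brute-force search; fielded systems are far more sophisticated:

```python
import math

# Toy bearing estimator from muzzle-blast arrival times.  The array
# geometry and the brute-force method are illustrative assumptions, not
# the design of any fielded counter-sniper system.

C = 343.0                                                 # speed of sound, m/s
MICS = [(0.5, 0.0), (-0.5, 0.0), (0.0, 0.5), (0.0, -0.5)] # mic positions, m

def expected_delays(bearing_deg):
    """Relative arrival times for a distant source at the given bearing."""
    b = math.radians(bearing_deg)
    ux, uy = math.cos(b), math.sin(b)
    # Plane-wave model: mics closer to the source (larger projection onto
    # the source direction) hear the blast earlier, i.e. at negative delay.
    return [-(mx * ux + my * uy) / C for mx, my in MICS]

def bearing_to_shooter(measured):
    """Pick the bearing whose predicted delays best match the data."""
    return min(range(360), key=lambda b: sum(
        (m - e) ** 2 for m, e in zip(measured, expected_delays(b))))

shot = expected_delays(137)       # simulate a shot from bearing 137 deg
print(bearing_to_shooter(shot))   # -> 137
```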
There are discussions taking place in academia on what this software should be able to do, which is comforting in that the AI research isn’t happening in an intellectual vacuum, as so often happens in high tech. There was an eruption of criticism and commentary about cloning only after Dolly was presented and the public became aware that large mammals were now being cloned – could people be far behind? The outburst caused a rash of countries to issue bans on human cloning, but the discussion was initiated far too late in the process.
Čerenkov radiation is emitted when a charged particle (such as an electron) travels through a medium faster than light propagates in that medium. The particle polarizes the molecules along its path, which then snap back rapidly to their ground state, emitting radiation in the process. The characteristic blue glow of these cooling pools is due to Čerenkov radiation.
This term describes a nuclear warhead that needs only final assembly to be viable. In other words, one can have a finished warhead, lacking only the last component, and not violate NPT.
Isotopic residue after a nuclear detonation has a unique signature that can be attributed to the processing facility used to refine the fissile materials.
See, for example, G-Drive:Counterinsurgency Component/John O’Sullivan, The Malay precedent: lessons from the Brits.
See G-Drive:Counterinsurgency Component/Philippines/The Hukbalahap Insurrection/The Hukbalahap Insurrection: The Insurrection – Phase II (1950-1955).
See G-Drive: Counterinsurgency Component/David Galula, Pacification in Algeria.
A former Australian Army officer and counterinsurgency expert.
Hamas is the Gazan “chapter” of the Muslim Brotherhood, and al Qaeda’s post-bin Laden leader, Ayman al-Zawahiri, is a product of the Egyptian Muslim Brotherhood.
See Desktop/Benghazi/Debacle in Benghazi.
An example of this is the way in which we handled President Mubarak’s departure and Saudi Arabia’s reaction. See Have You Seen Me? on EagleWatchOnline.net, 24 November 2012.
This would include wireless internet activity, email, cell phone traffic, etc.
Much work has been done by DARPA at the Applied Research Laboratory [Penn State University] on self-organizing sensor networks that can be air-dropped over an area of interest; the sensors organize themselves into a fault-tolerant network that establishes multiple message-passing routes to ensure connectivity. These lessons are no doubt applied to swarm self-organization.
See, for example, Patrick Lin, “Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA,” The Atlantic, December 15, 2011; W.J. Hennigan, “New drone has no pilot anywhere, so who’s accountable?,” Los Angeles Times, January 26, 2012; Jason Koebler, “Artificial Intelligence Pioneer: We Can Build Robots With Morals,” Jewish World Review, March 26, 2012; Tara McKelvey, “Could We Trust Killer Robots?,” Wall Street Journal, May 18, 2012.
Dolly was a domestic sheep cloned by Ian Wilmut, Keith Campbell and colleagues at the Roslin Institute and the biotechnology company PPL Therapeutics near Edinburgh [Scotland]. She was born on 5 July 1996 and lived to the age of six, when she died of a progressive lung disease.