China’s military scientists call for development of anti-Starlink measures

China must develop capabilities to disable and maybe even destroy Starlink internet satellites, the country’s military researchers said in a paper published by the Chinese journal Modern Defense Technology. The authors highlighted the possibility of Starlink being used for military purposes that could aid other countries and threaten China’s national security. According to the South China Morning Post, the scientists are calling for the development of anti-satellite capabilities, including both hard and soft kill methods. The former physically destroys satellites, for instance with missiles, while the latter targets a satellite’s software and operating systems.

In addition, the researchers suggest developing a surveillance system capable of tracking each and every Starlink satellite. That would address one of their concerns: the possibility of military payloads being launched alongside batches of satellites bound for the constellation. David Cowhig’s Translation Blog posted an English version of the paper, along with another article from state-sponsored website China Military Online that warned about the dangers of the satellite internet service.

“While Starlink claims to be a civilian program that provides high-speed internet services, it has a strong military background,” it said. Its launch sites are built within military bases, it continued, and SpaceX previously received funds from the US Air Force to study how Starlink satellites can connect to military aircraft under encryption. The Chinese scientists warned Starlink could boost the communication speeds of fighter jets and drones by over 100 times. 

The authors warned:

“When completed, Starlink satellites can be mounted with reconnaissance, navigation and meteorological devices to further enhance the US military’s combat capability in such areas as reconnaissance remote sensing, communications relay, navigation and positioning, attack and collision, and space sheltering.”

Between hard and soft kill, the researchers favor the latter, since physically destroying satellites would produce space debris that could interfere with China’s own activities. The country previously filed a complaint with the United Nations about the Tiangong space station’s near-collisions with Starlink satellites. Apparently, the station had to perform evasive maneuvers twice in 2021 to minimize the chances of collision. Destroying a few satellites also wouldn’t completely take out the Starlink constellation, seeing as SpaceX has already launched over 2,500 of them to date.

Russia claims it’s using new laser weapons against Ukraine

Russia is supposedly using its invasion of Ukraine to try new technology on the battlefield. As Reuters reports, the Russian government says it’s using a new wave of laser weapons to counter the Western technology aiding Ukraine’s self-defense. Deputy prime minister Yury Borisov claimed Russia was using prototypes of a laser weapon, Zadira, that can burn up drones. One test incinerated a drone 3.1 miles away within five seconds, according to the official.

A more established system, Peresvet, reportedly blinds satellites up to 932 miles above Earth. This was already “widely deployed,” Borisov claimed. The deputy leader maintained that new lasers using wide electromagnetic bands could eventually replace traditional weapons.

This isn’t the first reported use of cutting-edge tech in the war against Ukraine. CNN noted that Russia has fired multiple Kinzhal hypersonic missiles at Ukrainian targets. This variant of the Iskander short-range ballistic missile can be launched from a fighter jet (the MiG-31K). Russia has maintained that Kinzhal is virtually impossible to stop due to its very high speed, but US and UK officials have dismissed its effectiveness and argued that it’s really just an air-launched variant of a conventional weapon.

As with those hypersonic weapons, it’s difficult to know how well the lasers work in practice. Russia has routinely made false claims about its overall capabilities and the war in Ukraine, where it has struggled to gain ground despite a large military. However, these uses may be less about turning the war around and more about symbolism — Russia wants to boast about its technological prowess and discourage further material support for Ukraine.

The Pentagon’s new AI chief is a former Lyft executive

The Pentagon is still new to wielding artificial intelligence, and it’s looking to an outsider for help. Breaking Defense has learned Lyft machine learning head Craig Martell is joining the Defense Department as its Chief Digital and Artificial Intelligence Officer (CDAO). He’ll lead the American military’s strategies for AI, analytics and data, and should play a key part in a Joint All-Domain Command and Control initiative to improve multi-force combat awareness through technology.

Martell is a partial outsider. While he directed the Naval Postgraduate School’s AI-driven Natural Language Processing Lab for 11 years, he hasn’t served in military leadership. Outside of Lyft, he’s best known for heading up AI work at Dropbox and LinkedIn. As CDAO, Martell said he expected to spend the first three to six months identifying “marquee customers” and the systems his office will need to improve. He’ll have a $600 million budget for fiscal 2023.

The office itself was only created months earlier, though. Martell also told Breaking Defense he believed someone with his private background could be “very agile” in a way an established military leader might not. The Defense Department “really needs” someone who can quickly shift strategies in AI and analytics, the new CDAO said.

The US military is relatively new to AI use as it is. The Defense Department only published its draft AI ethics guidelines in late 2019, and its use of the technology initially focused more on experiments rather than autonomy on the frontlines. Martell may play a significant role in defining the Pentagon’s approach to AI for years to come, if just because many areas remain relatively unexplored.

Hitting the Books: When the military-industrial complex came to Silicon Valley

As with nearly every other aspect of modern society, computerization, augmentation and automation have hyper-accelerated the pace at which wars are prosecuted — and who better to help reshape the US military into a 21st century fighting force than an entire industry centered on moving fast and breaking things? In his latest book, War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future, Roberto J. González, professor and chair of the Anthropology Department at San José State University, examines how the military’s increasing reliance on remote weaponry and robotic systems is changing the way wars are waged. In the excerpt below, González investigates Big Tech’s role in the Pentagon’s high-tech transformations.

War Virtually cover (UC Press)

Excerpted from War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future by Roberto J. González, published by the University of California Press. © 2022 by Roberto J. González.


Ash Carter’s plan was simple but ambitious: to harness the best and brightest ideas from the tech industry for Pentagon use. Carter’s premise was that new commercial companies had surpassed the Defense Department’s ability to create cutting-edge technologies. The native Pennsylvanian, who had spent several years at Stanford University prior to his appointment as defense secretary, was deeply impressed with the innovative spirit of the Bay Area and its millionaire magnates. “They are inventing new technology, creating prosperity, connectivity, and freedom,” he said. “They feel they too are public servants, and they’d like to have somebody in Washington they can connect to.” Astonishingly, Carter was the first sitting defense secretary to visit Silicon Valley in more than twenty years.

The Pentagon has its own research and development agency, DARPA, but its projects tend to pursue objectives that are decades, not months, away. What the new defense secretary wanted was a nimble, streamlined office that could serve as a kind of broker, channeling tens or even hundreds of millions of dollars from the Defense Department’s massive budget toward up-and-coming firms developing technologies on the verge of completion. Ideally, this office, the Defense Innovation Unit Experimental (DIUx), would serve as a kind of liaison, negotiating the needs of grizzled four-star generals, the Pentagon’s civilian leaders, and hoodie-clad engineers and entrepreneurs. Within a year, DIUx opened branch offices in two other places with burgeoning tech sectors: Boston, Massachusetts, and Austin, Texas.

In the short term, Carter hoped that DIUx would build relationships with local start-ups, recruit top talent, get military reservists involved in projects, and streamline the Pentagon’s notoriously cumbersome procurement processes. “The key is to contract quickly — not to make these people fill out reams of paperwork,” he said. His long-term goals were even more ambitious: to take career military officers and assign them to work on futuristic projects in Silicon Valley for months at a time, to “expose them to new cultures and ideas they can take back to the Pentagon… [and] invite techies to spend time at Defense.”

In March 2016, Carter organized the Defense Innovation Board (DIB), an elite brain trust of civilians tasked with providing advice and recommendations to the Pentagon’s leadership. Carter appointed former Google CEO (and Alphabet board member) Eric Schmidt to chair the DIB, which includes current and former executives from Facebook, Google, and Instagram, among others.

Three years after Carter launched DIUx, it was renamed the Defense Innovation Unit (DIU), indicating that it was no longer experimental. This signaled the broad support the office had earned from Pentagon leaders. The Defense Department had lavished nearly $100 million on projects from forty-five companies, almost none of which were large defense contractors. Despite difficulties in the early stages — and speculation that the Trump administration might not support an initiative focused on regions that tended to skew toward the Democratic Party — DIUx was “a proven, valuable asset to the DoD,” in the words of Trump’s deputy defense secretary, Patrick Shanahan. “The organization itself is no longer an experiment,” he noted in an August 2018 memo, adding: “DIU remains vital to fostering innovation across the Department and transforming the way DoD builds a more lethal force.” Defense Secretary James “Mad Dog” Mattis visited Amazon’s Seattle headquarters and Google’s Palo Alto office in August 2017 and had nothing but praise for the tech industry. “I’m going out to see what we can pick up in DIUx,” he told reporters. In early 2018, the Trump administration requested a steep increase in DIU’s budget for fiscal year 2019, from $30 million to $71 million. For 2020, the administration requested $164 million, more than doubling the previous year’s request.

Q BRANCH

Although Pentagon officials portrayed DIUx as a groundbreaking organization, it was actually modeled after another firm established to serve the US Intelligence Community in a similar way. In the late 1990s, Ruth David, the CIA’s deputy director for science and technology, suggested that the agency needed to move in a radically new direction to ensure that it could capitalize on innovations being developed in the private sector, with a special focus on Silicon Valley firms. In 1999, under the leadership of its director, George Tenet, the CIA established a nonprofit legal entity called Peleus to fulfill this objective, with help from former Lockheed Martin CEO Norman Augustine. Soon after, the organization was renamed In-Q-Tel.

The first CEO, Gilman Louie, was an unconventional choice to head the enterprise. Louie had spent nearly twenty years as a video game developer who, among other things, created a popular series of Falcon F-16 flight simulators. At the time he agreed to join the new firm, he was chief creative officer for the toy company Hasbro. In a 2017 presentation at Stanford University, Louie claimed to have proposed that In-Q-Tel take the form of a venture capital fund. He also described how, at its core, the organization was created to solve “the big data problem”:

The problem they [CIA leaders] were trying to solve was: How to get technology companies who historically have never engaged with the federal government to actually provide technologies, particularly in the IT space, that the government can leverage. Because they were really afraid of what they called at that time the prospects of a “digital Pearl Harbor.” Pearl Harbor happened with every different part of the government having a piece of information but they couldn’t stitch it together to say, “Look, the attack at Pearl Harbor is imminent.” The White House had a piece of information, naval intelligence had a piece of information, ambassadors had a piece of information, the State Department had a piece of information, but they couldn’t put it all together… [In] 1998, they began to realize that information was siloed across all these different intelligence agencies of which they could never stitch it together… [F]undamentally what they were trying to solve was the big data problem. How do you stitch that together to get intelligence out of that data?

Louie served as In-Q-Tel’s chief executive for nearly seven years and played a crucial role in shaping the organization.

By channeling funds from intelligence agencies to nascent firms building technologies that might be useful for surveillance, intelligence gathering, data analysis, cyberwarfare, and cybersecurity, the CIA hoped to get an edge over its global rivals by using investment funds to co-opt creative engineers, hackers, scientists, and programmers. The Washington Post reported that “In-Q-Tel was engineered with a bundle of contradictions built in. It is independent of the CIA, yet answers wholly to it. It is a nonprofit, yet its employees can profit, sometimes handsomely, from its work. It functions in public, but its products are strictly secret.” In 2005, the CIA pumped approximately $37 million into In-Q-Tel. By 2014, the organization’s funding had grown to nearly $94 million a year and it had made 325 investments with an astonishing range of technology firms, almost none of which were major defense contractors.

If In-Q-Tel sounds like something out of a James Bond movie, that’s because the organization was partly inspired by — and named after — Q Branch, a fictional research and development office of the British secret service, popularized in Ian Fleming’s spy novels and in the Hollywood blockbusters based on them, going back to the early 1960s. Ostensibly, both In-Q-Tel and DIUx were created to transfer emergent private-sector technologies into the US intelligence and military agencies, respectively. A somewhat different interpretation is that these organizations were launched “to capture technological innovations… [and] to capture new ideas.” From the perspective of the CIA these arrangements have been a “win-win,” but critics have described them as a boondoggle — lack of transparency, oversight, and streamlined procurement means that there is great potential for conflicts of interest. Other critics point to In-Q-Tel as a prime example of the militarization of the tech industry.

There’s an important difference between DIUx and In-Q-Tel. DIUx is part of the Defense Department and is therefore financially dependent on Pentagon funds. By contrast, In-Q-Tel is, in legal and financial terms, a distinct entity. When it invests in promising companies, In-Q-Tel also becomes part owner of those firms. In monetary and technological terms, it’s likely that the most profitable In-Q-Tel investment was funding for Keyhole, a San Francisco–based company that developed software capable of weaving together satellite images and aerial photos to create three-dimensional models of Earth’s surface. The program was capable of creating a virtual high-resolution map of the entire planet. In-Q-Tel provided funding in 2003, and within months, the US military was using the software to support American troops in Iraq.

Official sources never revealed how much In-Q-Tel invested in Keyhole. In 2004, Google purchased the start-up for an undisclosed amount and renamed it Google Earth. The acquisition was significant. Yasha Levine writes that the Keyhole-Google deal “marked the moment the company stopped being a purely consumer-facing internet company and began integrating with the US government… [From Keyhole, Google] also acquired an In-Q-Tel executive named Rob Painter, who came with deep connections to the world of intelligence and military contracting.” By 2006 and 2007, Google was actively seeking government contracts “evenly spread among military, intelligence, and civilian agencies,” according to the Washington Post.

Apart from Google, several other large technology firms have acquired startups funded by In-Q-Tel, including IBM, which purchased the data storage company Cleversafe; Cisco Systems, which absorbed a conversational AI interface startup called MindMeld; Samsung, which snagged nanotechnology display firm QD Vision; and Amazon, which bought multiscreen video delivery company Elemental Technologies. While these investments have funded relatively mundane technologies, In-Q-Tel’s portfolio includes firms with futuristic projects such as Cyphy, which manufactures tethered drones that can fly reconnaissance missions for extended periods, thanks to a continuous power source; Atlas Wearables, which produces smart fitness trackers that closely monitor body movements and vital signs; Fuel3d, which sells a handheld device that instantly produces detailed three-dimensional scans of structures or other objects; and Sonitus, which has developed a wireless communication system, part of which fits inside the user’s mouth. If DIUx has placed its bets with robotics and AI companies, In-Q-Tel has been particularly interested in those creating surveillance technologies — geospatial satellite firms, advanced sensors, biometrics equipment, DNA analyzers, language translation devices, and cyber-defense systems.

More recently, In-Q-Tel has shifted toward firms specializing in data mining social media and other internet platforms. These include Dataminr, which streams Twitter data to spot trends and potential threats; Geofeedia, which collects geographically indexed social media messages related to breaking news events such as protests; PATHAR, a company specializing in social network analysis; and TransVoyant, a data integration firm that collates data from satellites, radar, drones, and other sensors. In-Q-Tel has also created Lab41, a Silicon Valley technology center specializing in big data analysis and machine learning.

Hitting the Books: How American militarism and new technology may make war more likely

There’s nobody better at prosecuting a war than the United States — we’ve got the best-equipped and biggest-budgeted fighting force on the face of the Earth. But does carrying the biggest stick still constitute a strategic advantage if the mere act of possessing it seems to make us more inclined to use it?

In his latest book, Future Peace (sequel to 2017’s Future War), Dr. Robert H. Latiff, Maj Gen USAF (Ret), explores how the American military’s increasing reliance on weaponized drones, AI and machine learning systems, automation and similar cutting-edge technologies, when paired with an increasingly rancorous and often outright hostile global political environment, could create the perfect conditions for getting a lot of people killed. In the excerpt below, Dr. Latiff looks at the impact that America’s lionization of its armed forces in the post-Vietnam era, and its new access to unproven tech, have on our ability to mitigate conflict and prevent armed violence.

Future Peace cover: the top half of a globe with a targeting reticle over it (Notre Dame University Press)

Excerpted from Future Peace: Technology, Aggression, and the Rush to War by Robert H. Latiff. Published by University of Notre Dame Press. Copyright © 2022 by Robert H. Latiff. All rights reserved.


Dangers of Rampant Militarism

I served in the military in the decades spanning the end of the Vietnam War to the post-9/11 invasion of Iraq and the war on terror. In that time, I watched and participated as the military went from being widely mistrusted to being the subject of veneration by the public. Neither extreme is good or healthy. After Vietnam, military leaders worked to reestablish trust and competency and over the next decade largely succeeded. The Reagan buildup of the late 1980s further cemented the redemption. The fall of the USSR and the victory of the US in the First Gulf War demonstrated just how far we had come. America’s dominant technological prowess was on full display, and over the next decade the US military was everywhere. The attacks of 9/11 and the subsequent invasions of Afghanistan and Iraq, followed by the long war on terror, ensured that the military would continue to demand the public’s respect and attention. What I have seen is an attitude toward the military that has evolved from public derision to grudging respect, to an unhealthy, unquestioning veneration. Polls repeatedly list the military as one of the most respected institutions in the country, and deservedly so. The object of that adulation, the military, is one thing, but militarism is something else entirely and is something about which the public should be concerned. As a nation, we have become alarmingly militaristic. Every international problem is looked at first through a military lens; then maybe diplomacy will be considered as an afterthought. Non-military issues as diverse as budget deficits and demographic trends are now called national security issues. Soldiers, sailors, airmen, and marines are all now referred to as “warfighters,” even those who sit behind a desk or operate satellites thousands of miles in space. We are endlessly talking about threats and dismiss those who disagree or dissent as weak, or worse, unpatriotic.

The young men and women who serve deserve our greatest regard and the best equipment the US has to offer. Part of the respect we could show them, however, is to attempt to understand more about them and to question the mindset that is so eager to employ them in conflicts. In the words of a soldier frequently deployed to war zones in Iraq and Afghanistan, “[An] important question is how nearly two decades of sustained combat operations have changed how the Army sees itself… I feel at times that the Army culturally defines itself less by the service it provides and more by the wars it fights. This observation may seem silly at first glance. After all, the Army exists to fight wars. Yet a soldier’s sense of identity seems increasingly tied to war, not the service war is supposed to provide to our nation.” A 1955 American Friends Service Committee pamphlet titled Speak Truth to Power described eloquently the effects of American fascination with militarism:

The open-ended nature of the commitment to militarization prevents the pursuit of alternative diplomatic, economic, and social policies that are needed to prevent war. The constant preparation for war and large-scale investment in military readiness impose huge burdens on society, diverting economic, political and psychological resources to destructive purposes. Militarization has a corrosive effect on social values… distorting political culture and creating demands for loyalty and conformity… Under these conditions, mass opinion is easily manipulated to fan the flames of nationalism and military jingoism.

Barbara Tuchman described the national situation with regard to the Vietnam War in a way eerily similar to the present. First was an overreaction and overuse of the term national security and the conjuring up of specters and visions of ruin if we failed to meet the imagined threat. Second was the “illusion” of omnipotence and the failure to understand that conflicts were not always soluble by the application of American force. Third was an attitude of “Don’t confuse me with the facts”: a refusal to credit evidence in decision-making. Finally — and perhaps most importantly in today’s situation — was “a total absence of reflective thought” about what we were doing. Political leaders embraced military action on the basis of a perceived, but largely uninformed, view of our technological and military superiority. The public, unwilling to make the effort to challenge such thinking, just went along. “There is something in modern political and bureaucratic life,” Tuchman concluded, “that subdues the functioning of the intellect.”

High Tech Could Make Mistakes More Likely

Almost the entire world is connected and uses computer networks, but we’re never really sure whether they are secure or whether the information they carry is truthful. Other countries are launching satellites, outer space is getting very crowded, and there is increased talk of competition and conflict in space. Countries engage in attacks on adversary computers and networks, and militaries are rediscovering the utility of electronic warfare, employing radio-frequency (RF) signals to damage, disrupt, or spoof other systems. While cyber war and electronic warfare put a premium on speed, they, along with conflict in space, are characterized by significant ambiguity. Cyber incidents and space incidents, as described earlier, characterized as they are by such great uncertainty, give the hotheads ample reason to call for a response, and the cooler heads reasons to question the wisdom of such a move.

What could drag us into conflict? Beyond the geographical hot spots, a mistake or miscalculation in the ongoing probes of each other’s computer networks could trigger an unwanted response. US weapon systems are extremely vulnerable to such probes. A 2018 study by the Government Accountability Office found mission-critical vulnerabilities in systems, and testers were able to take control of them largely undetected. Worse yet, government managers chose not to accept the seriousness of the situation. A cyber probe of our infrastructure could be mistaken for an attack and result in retaliation, setting off response and counter-response, escalating in severity and perhaps lethality. Much of the DOD’s high-priority traffic uses space systems that are vulnerable to intrusion and interference from an increasing number of countries. Electronic warfare against military radios and radars is a growing concern as these capabilities improve.

China and Russia both have substantial space programs, and they intend to challenge the US in space, where we are vulnerable. With both low-earth and geosynchronous orbits becoming increasingly crowded, and with adversary countries engaging in close approaches to our satellites, the situation is ripe for misperception. What is mere intelligence gathering could be misconstrued as an attack and could generate a response, either in space or on the ground. There could be attacks, both direct and surreptitious, on our space systems. Or there could be misunderstandings, with too-close approaches of other satellites viewed as threatening. Threats could be space-based or, more likely, ground-based interference, jamming, or dazzling by lasers. Commercial satellite imagery recently revealed the presence of an alleged ground-based laser site in China, presumed by intelligence analysts to be for attacks against US satellites. Russia has engaged in close, on-orbit station-keeping with high-value US systems. New technology weapons give their owners a new sense of invincibility, and an action that might have in the past been considered too dangerous or provocative might now be deemed worth the risk.

Enormous vulnerability comes along with the high US dependence on networks. As the scenarios at the beginning of this chapter suggest, in a highly charged atmosphere, the uncertainty and ambiguity surrounding incidents involving some of the new war-fighting technologies can easily lead to misperceptions and, ultimately, violence. The battlefield is chaotic, uncertain, and unpredictable anyway. Such technological additions — and the vulnerabilities they entail — only make it more so. A former UK spy chief has said, “Because technology has allowed humans to connect, interact, and share information almost instantaneously anywhere in the world, this has opened channels where misinformation, blurred lines, and ambiguity reign supreme.”

It is easy to see how such an ambiguous environment could make a soldier or military unit anxious to the point of aggression. To carry the “giant armed nervous system” metaphor a bit further, consider a human being who is excessively “nervous.” Psychologists and neuroscientists tell us that excessive aggression and violence likely develop as a consequence of generally disturbed emotional regulation, such as abnormally high levels of anxiety. Under pressure, an individual is unlikely to exhibit what we could consider rational behavior. Just as a human can become nervous, super sensitive, overly reactive, jumpy, perhaps “trigger-happy,” so too can the military. A military situation in which threats and uncertainty abound will probably make the forces anxious or “nervous.” Dealing with ambiguity is stressful. Some humans are able to deal successfully with such ambiguity. The ability of machines to do so is an open question.

Technologies are not perfect, especially those that depend on thousands or millions of lines of software code. A computer or human error by one country could trigger a reaction by another. A computer exploit intended to gather intelligence or steal data might unexpectedly disrupt a critical part of an electric grid, a flight control system, or a financial system and end up provoking a non-proportional and perhaps catastrophic response. The hyper-connectedness of people and systems, and the almost-total dependence on information and data, are making the world—and military operations—vastly more complicated. Some military scholars are concerned about emerging technologies and the possibility of unintended, and uncontrollable, conflict brought on by decisions made by autonomous systems and the unexpected interactions of complex networks of systems that we do not fully understand. Do the intimate connections and rapid communication of information make a “knee-jerk” reaction more, or less, likely? Does the design for speed and automation allow for rational assessment, or will it ensure that a threat impulse is matched by an immediate, unfiltered response? Command and control can, and sometimes does, break down when the speed of operations is so great that a commander feels compelled to act immediately, even if he or she does not really understand what is happening. If we do not completely understand the systems—how they are built, how they operate, how they fail—they and we could make bad and dangerous decisions.

Technological systems, if they are not well understood by their operators, can cascade out of control. The horrific events at Chernobyl are sufficient evidence of that. Flawed reactor design and inadequately trained personnel, with little understanding of the concept of operation, led to a fatal series of missteps. Regarding war, Richard Danzig points to the start of World War I. The antagonists in that war had a host of new technologies never before used together on such a scale: railroads, telegraphs, the bureaucracy of mass mobilization, quick-firing artillery, and machine guns. The potential to deploy huge armies in a hurry put pressure on decision makers to strike first before the adversary was ready, employing technologies they really didn’t understand. Modern technology can create the same pressure for a first strike that the technology of 1914 did. Americans are especially impatient. Today, computer networks, satellites in orbit, and other modern infrastructures are relatively fragile, giving a strong advantage to whichever side strikes first. Oxford professor Lucas Kello notes that “in our era of rapid technological change, threats and opportunities arising from a new class of weapons produce pressure to act before the laborious process of strategic adoption concludes.” In other words, we rush them to the field before we have done the fundamental work of figuring out their proper use.

Decorated Vietnam veteran Hal Moore described the intense combat on the front lines with his soldiers in the Ia Drang campaign in 1965. He told, in sometimes gruesome detail, of the push and shove of the battle and how he would, from time to time, step back slightly to gather his thoughts and reflect on what was happening and, just as importantly, what was not happening. Political leaders, overwhelmed by pressures of too much information and too little time, are deprived of the ability to think or reflect on the context of a situation. They are hostage to time and do not have the luxury of what philosopher Simone Weil calls “between the impulse and the act, the tiny interval that is reflection.”

Today’s battles, which will probably happen at lightning speed, may not allow such a luxury as reflection. Hypersonic missiles, for instance, give their targets precious little time for decision-making and might force ill-informed and ill-advised counter decisions. Autonomous systems, operating individually or in swarms, connected via the internet in a network of systems, create an efficient weapon system. A mistake by one, however, could speed through the system with possibly catastrophic consequences. The digital world’s emphasis on speed further inhibits reflection.

With systems so far-flung, so automated, and so predisposed to action, it will be essential to find ways to program our weapon systems to prevent unrestrained independent, autonomous aggression. However, an equally, if not more, important goal will be to identify ways to inhibit not only the technology but also the decision makers’ proclivity to resort to violence.