
Four Battlegrounds: Power in the Age of Artificial Intelligence

£13.05 (RRP £26.10) Clearance
Shared by ZTS2023

About this deal

The book explores the US/China rivalry in artificial intelligence, treating AI as a general-purpose technology much like electricity, computer networks, or the internal combustion engine. Like those technologies, it has a wide variety of applications across society. The book identifies four key battlegrounds of global competition in artificial intelligence: data, computing hardware (compute), human talent, and the institutions necessary to successfully adopt AI systems.

But if countries are not interested in going down the surveillance road the way the Chinese have, is it less important to be at the cutting edge of AI in this space? I think that remains an open question, but the real linchpin making all of this work is the restrictions on manufacturing technology, the tooling and software needed to make chips. That is almost the more important aspect of the export controls the administration put in place, which will effectively freeze China out of the ability to build advanced semiconductors.

The U.S. has guidelines it must follow to ensure weapons involving AI are responsibly developed and deployed. But not every country has those, especially not authoritarian regimes. How could that play into the race for AI weapons?

Since the late 1990s, second-generation AI has produced some remarkable breakthroughs on the basis of big data, massive computing power, and algorithms. There were three seminal events. On May 11, 1997, IBM's Deep Blue beat Garry Kasparov, the world chess champion. In 2011, IBM's Watson won Jeopardy!. Even more remarkably, in March 2016, AlphaGo beat the world champion Go player Lee Sedol 4-1.

So the third of your four pillars of power in an AI world is talent. Where do we stand in the talent competition? Where is the talent, and where might it be going?

Four Battlegrounds argues that four key elements define this struggle: data, computing power, talent, and institutions. Data is a vital resource like coal or oil, but it must be collected and refined. Advanced computer chips are the essence of computing power; control over chip supply chains grants leverage over rivals. Talent is about people: which country attracts the best researchers and most advanced technology companies? The fourth "battlefield" is maybe the most critical: the ultimate global leader in AI will have institutions that effectively incorporate AI into their economy, society, and especially their military. Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.

The second area is hardware, where the Biden administration's export controls in October give the U.S. leverage over China's access to advanced AI hardware. They have effectively stopped China's access to advanced AI chips and to the tooling needed to manufacture its own chips. It's an incredibly powerful point of leverage: if you can deny China access to the most advanced AI chips, they're simply not able to compete at the frontier of AI research and development with the most cutting-edge models.

We're seeing some of this on display very recently with the integration of these chatbots into search. Both Microsoft and Google recently deployed AI chatbots publicly that weren't ready. The problem isn't that Bing is declaring its love for users and telling someone it is chatting with to leave their wife to be with it (that's merely odd); the real problem is that the best AI scientists in the world don't know how to stop these chatbots from doing that.

The U.S. has tremendous strengths in an AI competition with China, and I firmly believe that the United States can remain the global leader in artificial intelligence. That's if we harness those strengths and work with U.S. allies; if we double down on advantages in talent, drawing on some of the best and brightest from around the world, bringing them to the States and keeping them here; and if we invest in the next generation of research into semiconductor technology to ensure that U.S. companies stay dominant at key points of the semiconductor supply chain.

There's a possibly large disconnect between AI progress in leading US tech companies and US military use of AI. The recent explosion in AI has been largely driven by machine learning, and in particular deep learning, a type of machine learning that uses deep neural networks. One of the interesting things about AI systems such as ChatGPT, which have garnered so much attention, is that they learn from data. Unlike older rule-based systems, such as a commercial airline autopilot with a fixed set of rules for how the airplane has to behave, these newer systems are trained on data and learn to identify patterns in it, and those patterns govern their behavior (a minimal sketch of this distinction follows this passage).

An award-winning defense expert tells the story of today's great power rivalry: the struggle to control artificial intelligence. Scharre has his biases, but he keeps them separate enough from his military analysis that I don't find them a reason not to read the book. Scharre worries that that combination will leave China's military ahead of the US military at adopting AI. I see nothing clearly mistaken about that concern. But I see restrictions on semiconductor sales to China as likely to matter more 3 to 5 years from now. Beyond 5 years, I see more importance in advances in the basic technology for AI.

Scharre discusses the four key battlegrounds that will determine which country emerges as the leader in AI: data collection, computing power, talent, and institutional structures. He highlights the importance of each battleground and explains how they all interconnect to create a successful AI ecosystem. As for when AI will have truly dramatic effects, I guess Scharre's answer is: a few decades from now. He suggests that in the long term, AI might have dramatic effects such as reliably predicting which side will win a war, which presumably would cause the losing side to concede. He analogizes AI to "a Cambrian explosion of intelligence". For the foreseeable future, though, he focuses exclusively on AI as a tool for waging war.

There are common conclusions between this engaging pair of authors. Both suggest that the introduction of autonomous systems is unlikely to change the nature of war. It is axiomatic to the U.S. military that war's essential nature is immutable, while the character of warfare (how war is conducted) is always changing.
Scharre notes that the increased reliance on drones, uncrewed systems, and swarms reduces the role of humans at some levels of war. Yet humans will still initiate war, set out the policy aims, develop strategies, employ machines, make decisions, and even fight. Not surprisingly, Payne agrees. He does not envision the human element of war disappearing any time soon. “Even if machines make more decisions at the tactical level,” Payne concludes, “war will remain something that is done by and to humans” (84).
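To make the rule-based versus data-driven distinction above concrete, here is a minimal, self-contained sketch. The spam-filter scenario, the function names, and the tiny training set are all invented for illustration; nothing here comes from the book or the interview.

```python
# A minimal sketch (hypothetical spam-filter example) of the contrast described
# above: a hand-coded rule versus behaviour learned from labelled data.
from collections import Counter

# Rule-based approach: the designer writes the behaviour down explicitly.
def rule_based_filter(message: str) -> bool:
    banned = {"winner", "free", "prize"}
    return any(word in message.lower() for word in banned)

# Data-driven approach: the behaviour comes from patterns in labelled examples.
def train_word_counts(labelled_messages):
    """Count how often each word appears in spam vs. non-spam messages."""
    counts = {True: Counter(), False: Counter()}
    for text, is_spam in labelled_messages:
        counts[is_spam].update(text.lower().split())
    return counts

def learned_filter(message: str, counts) -> bool:
    """Crude score: does the wording look more like past spam or past non-spam?"""
    words = message.lower().split()
    spam_score = sum(counts[True][w] for w in words)
    ham_score = sum(counts[False][w] for w in words)
    return spam_score > ham_score

training_data = [
    ("claim your free prize now", True),
    ("you are a winner winner", True),
    ("meeting moved to tuesday", False),
    ("lunch tomorrow maybe", False),
]
counts = train_word_counts(training_data)

print(rule_based_filter("free prize inside"))      # True: matches a hand-coded keyword
print(learned_filter("claim your prize", counts))  # True: pattern picked up from the data
```

The rule-based filter does exactly what its author wrote and nothing more; the learned filter's behaviour depends entirely on whatever its training data happened to contain, which is both its power and, as the passages below note, a source of unpredictability and brittleness.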

A solid, well-organized account of the military applications of AI and of the race to take the leading global position. Payne explores the creative capacity of AI programs with a typology of three kinds of creativity. He finds that AI supports only the first two: exploratory and combinatorial. In these two forms, algorithms examine patterns and assess probabilities from existing data. This is the kind of creativity exhibited by the winning poker-playing program Libratus or the earlier AlphaGo program that convincingly beat a world champion Go player. Where computers and AI systems fall short is in the third category, transformative creativity. This is the kind of intelligence needed when facing a novel problem, or when an old problem requires solutions that have not yet been conceived. These situations require more than predictive computation; they require imagination. As Payne stresses, AI programs may be tactically brilliant in the narrow task each is designed for, but they cannot connect the dots or "understand" a novel situation they have not been programmed for or given a data set to learn from.

As you pointed out, this kind of disruptive change can be merciless to companies or countries that aren't able to adapt, both by maintaining a leadership position in the technology itself and by figuring out the best ways of using it. During the industrial revolution, for example, Great Britain and Germany industrialized faster than other nations and raced ahead in economic and then military power. Russia was a laggard in industrializing, and by the end of the 19th century it had fallen far behind Great Britain and Germany. There are major costs to moving slowly. Technology is a key enabler of political, economic, and military power, but it's not enough to be in a leadership position; countries also have to figure out the best ways of using the technology.

I think there's value in people finding ways to embrace the technology where it might be useful or increase productivity. The caveat is that it does sometimes make things up, so you shouldn't trust it.

Artificial intelligence has already brought us killer robots, chatbots that can pen government speeches, programs that can process data faster than our mammalian minds, and software that can make apparently original art upon request. Here again, the rapidly developing field of artificial intelligence (AI) has brought out a spate of spurious claims and serious concerns. Given the purported progress being made in computational intelligence, it is imperative that the Armed Forces be attentive to understanding what AI can and cannot do within our professional sphere. There is little doubt that AI will bring about profound changes in the conduct of warfare, and equally little agreement on just what those changes will be.

The title's battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes, but it seems odd to label them battlegrounds. Wouldn't resources be a better description?

Well, the human pilot didn't get any kills. He got shots off, but he didn't actually hit anything. The wild thing to me, though, was that the AI was able to use tactics that humans can't. It wasn't just that it was better; it fights differently than people do. In this case, the AI made superhuman gunshots while the aircraft were racing at each other head to head (for aviation enthusiasts, forward-quarter gunshots). These are basically impossible for humans, because there's only a split second of opportunity to take the shot, and they're actually banned in training because they're dangerous for humans even to attempt while the aircraft are racing toward each other at hundreds of miles an hour. That's an example of how AI can not just be better than people but open up new ways of operating, new ways of war fighting. And that kind of disruptive change is exactly what the U.S. military needs to be at the forefront of.

The dangers from AI aren't the dangers science fiction warned us about. We needn't fear robots rising up to throw off their human overlords, or at least not anytime soon. In his book, Scharre breaks down the international contest for AI predominance into four battlegrounds: data, talent, computing hardware, and institutions. He assesses the strengths and weaknesses of the major players (foremost the U.S. and China, and to a lesser extent the European Union) in each area, finding parity between the two leading nations in the field.

The behavior of some of the major companies here has not exactly been responsible. We've already seen OpenAI, Microsoft, and Google rush over the last couple of months to hastily deploy AI chatbots that were not at all ready, with the companies responding to each other in a competitive dynamic that's really harmful, a sort of race to the bottom on safety.

Stepping away from military matters: you had a sneak peek at ChatGPT before it was released. I assume the biggest danger with the software isn't the AI becoming sentient and taking over the world. What worries you most about this type of technology?

Despite examples of AI's stupendous capacities in simulated combat, described in detail in Four Battlegrounds, it's this kind of unpredictable behavior that raises the question of whether AI should be anywhere near lethal weaponry, or decisions where lives are on the line. "Militaries are working hard to adopt artificial intelligence. They're largely focused on near term issues, and I do worry there is some degree of wishful thinking about our ability to control AI systems," Scharre says. "It's very possible that we end up in a place where countries are building and deploying quite dangerous AI weapons, and I think that's something we need to think about and guard against."

One of my favorite examples of this brittleness reportedly comes from an early version of AlphaGo, the AI agent that achieved superhuman performance at the game of Go: if you changed the size of the board slightly, its performance dropped off dramatically, because it was not trained on data of that size. It was only trained on boards of one size, and that's a good example of a recurring problem: these systems are often quite brittle and fail to generalize from one situation to another. Obviously, any human would notice these things. A human would see the box moving and think, "Oh, there's somebody under the box," but the AI wasn't trained on that. That's a real problem when we think about AI being used in competitive environments, because it can be so easily manipulated, and people are clever. Our adversaries are clever, and that's a limitation when we think about how we're going to use AI in the real world.
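As a toy illustration of that kind of brittleness, here is a minimal sketch. The data, the "board" setup, and the deliberately naive memorizing model are all invented for illustration; this is not AlphaGo or anything described in the book, just a small demonstration of a model that does well on the distribution it saw in training and degrades when the board size changes, even though the underlying rule stays the same.

```python
# A minimal sketch (toy data, naive memorising model; not AlphaGo) of the
# brittleness described above. The rule "label 1 if the point is in the upper
# half of the board" scales with the board, but the model only ever saw what
# "upper half" looks like on the board size it was trained on.
import random

random.seed(0)

def sample(n, board_size):
    """Random points on a square board; label 1 if the point is in the upper half."""
    points = []
    for _ in range(n):
        x = random.uniform(0, board_size)
        y = random.uniform(0, board_size)
        points.append(((x, y), 1 if y > board_size / 2 else 0))
    return points

# "Training" here is just memorising labelled examples from a 1x1 board.
train = sample(2000, board_size=1.0)

def predict(point):
    """1-nearest-neighbour lookup against the memorised examples."""
    px, py = point
    nearest = min(train, key=lambda item: (item[0][0] - px) ** 2 + (item[0][1] - py) ** 2)
    return nearest[1]

def accuracy(test_set):
    return sum(predict(p) == label for p, label in test_set) / len(test_set)

print("same board size (1x1):", round(accuracy(sample(500, 1.0)), 2))  # roughly 0.97 or higher
print("larger board (3x3):   ", round(accuracy(sample(500, 3.0)), 2))  # roughly 0.67
```

The drop happens because the memorised examples only cover the small board; every point outside that region gets judged by its nearest small-board neighbour, which is exactly the failure to generalize described in the passage above.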

Asda Great Deal

Free UK shipping. 15 day free returns.