“The Age of AI and Our Human Future”: The most revolutionary and unpredictable consequences will occur where artificial intelligence meets human intelligence


Artificial Intelligence and Security Instability

The destructive power of nuclear weapons and the opacity of cyber weapons are now being joined by a new kind of combat power built on the principles of artificial intelligence described in previous chapters. Quietly, sometimes tentatively, but with unmistakable force, nations are developing and deploying AI that can inform strategic action, augment a range of military capabilities, and potentially revolutionize security policy.

Introducing nonhuman logic into military systems and processes transforms strategy. Militaries and security services trained with or partnered with AI will gain insights and influence that can surprise adversaries and sow confusion. Such partnerships may invalidate traditional strategies and tactics, or they may decisively reinforce them. If AI is delegated control of cyber weapons (offensive or defensive) or physical weapons such as fighter jets, it may rapidly perform functions that humans find difficult. The U.S. Air Force, for example, has already had an AI pilot a fighter jet and operate its radar system during test flights. In that case, the developers designed the AI to make “final calls” without human control, but its capabilities were limited to flying the aircraft and operating the radar system. Other countries and design teams may impose fewer restrictions.

Beyond its possible effects on strategy, AI, because it operates by its own independently computable logic, also adds a layer of incalculability. Traditional military strategy and tactics rest on the assumption that a human adversary's actions and decisions fit within a known framework, one defined by experience and conventional wisdom. But an AI piloting a fighter or scanning for targets follows its own logic. The adversary may not know what that logic is, and cannot judge whether a given maneuver is a conventional signal or a feint. In most cases, moreover, AI processes information faster than human thought.

War has always been uncertain and contingent, but AI adds a new dimension. Because AI is dynamic and responsive to emerging situations, even a country that creates an AI-designed or AI-operated weapon may not fully know how powerful it is or what it will do in a given situation. If such a weapon can detect environmental changes that humans cannot perceive as quickly, and can learn and adapt from them faster than human thought, how can strategies of attack or defense be developed? If AI-assisted weapons rely on the AI's perception of the battlefield and draw conclusions from the phenomena it observes, will the strategic effects of some weapons become known only after they are used? If adversaries train their AI in secret, will leaders outside a conflict know whether they are ahead or behind in the arms race?

In traditional conflict, the adversary's psychology is a crucial focus of strategic action. An algorithm knows only its own instructions and objectives; it knows neither morale nor doubt. Because AI can adapt to the phenomena it encounters, when two AI weapon systems confront each other, neither side can clearly foresee the outcome of the interaction or the indirect effects that may spill over into other domains.

AI may be able to judge only very imprecisely an adversary's capabilities and the costs of entering a conflict. For engineers and builders, these constraints place a premium on speed, breadth of effect, and endurance: traits that could make conflicts more intense, more widely felt, and less predictable.

At the same time, even in the age of AI, a strong national defense remains a prerequisite for security. Because AI is ubiquitous, no country will unilaterally renounce the technology. But even as governments arm themselves, they should assess and explore how to combine AI logic with human combat experience so as to make war more humane and more precise, and reflect on what all of this means for diplomacy and for the world order.

Artificial intelligence and machine learning expand the capabilities of existing weapons, thereby changing actors' strategic and tactical options. AI could not only allow conventional weapons to be aimed more precisely, but also allow them to be aimed in novel, nontraditional ways, such as (at least in theory) at a particular person or object rather than a location. By sifting vast amounts of information, AI cyber weapons can learn to penetrate defenses without requiring humans to first discover exploitable software flaws. AI can likewise be used defensively, finding and fixing flaws before adversaries exploit them. But because attackers can choose their targets, AI gives the offense a real and possibly insurmountable advantage.

How would tactics, strategy, and willingness to use larger weapons (or even nuclear weapons) change if adversaries trained artificial intelligence to steer fighter jets, independently set targets, and fire?

AI has also opened new vistas of combat power in the information domain. Generative AI can produce masses of false information that is difficult to distinguish from the real thing. AI-facilitated disinformation and psychological warfare, including the use of synthetic personas, pictures, videos, and speeches, create new vulnerabilities in free societies. Widely recirculated photos and videos of demonstrations have been joined by photorealistic fabrications showing public figures saying things they never said. In theory, AI could determine the most effective way to deliver such synthetic content, tailoring it to the biases and expectations of specific groups. If an adversary alters a synthetic image of a head of state to incite social discord or issue misleading orders, will the public (or even other governments and officials) be able to tell truth from fake in time?

In the nuclear field there is a widely accepted prohibition and a clear concept of deterrence (and of escalation levels), but for the use of AI none of this yet exists. U.S. adversaries are preparing AI-assisted physical and cyber weapons, and some are reportedly already in use. AI-capable powers stand ready to deploy machines and systems that use fast logic and responsive behavior to attack, defend, surveil, spread disinformation, and identify and disable an adversary's AI.

The capabilities of AI are evolving and proliferating, and the great powers will continue to strive for dominance in the absence of verifiable constraints. They will assume that once a useful new AI capability appears, it will inevitably spread and become widely available. Because AI technology can be used for both military and civilian purposes and is easy to copy and disseminate, much of its basic functionality and many of its key innovations are public and open. Where AI is controlled, those controls are bound to prove imperfect, whether because technological advances render them obsolete or because a determined adversary finds a way around them. New users may adapt underlying algorithms to very different goals. A commercial innovation in one society may be repurposed by another for security or information warfare. And governments will often appropriate the most strategically significant aspects of advanced AI development to serve the national interest.

Many people want to translate the balance of power in cyberspace, or the deterrence of AI, into concrete concepts. If such concepts can be defined at all, it is still very early. Until they are, all planning remains abstract. In a conflict, a belligerent may be tempted to use a weapon whose effects are not fully understood to break the other side's will, or to threaten its use.

The most revolutionary and unpredictable consequences will occur where artificial intelligence meets human intelligence. Historically, countries preparing for war have understood their opponents' doctrines, tactics, and strategic psychology. Even when that understanding was imperfect, they were more or less informed, so they could develop counter-strategies and tactics, as well as a vocabulary of symbolic military signaling, such as intercepting aircraft approaching an airspace boundary or sailing warships through contested waterways. But when militaries use AI to plan or to select targets, even just to assist in patrols or conflicts, these familiar concepts and interactions may become deeply unfamiliar, because the military will be interacting with an intelligence whose methods and tactics it does not know, and whose communications must be interpreted anew.

Fundamentally, the shift to AI and AI-assisted weapons and defense systems means relying on, and in extreme cases delegating to, an intelligence of formidable analytical power that operates on a fundamentally different paradigm of experience. Such reliance introduces unknown and poorly understood risks. For this reason, human operators must monitor and oversee AI actions that could have lethal effects. Even if human oversight cannot prevent every mistake, it at least preserves a bottom line of morality and accountability.

The deepest challenges are, in fact, philosophical. If aspects of strategy come to operate in conceptual and analytical realms that AI can access but human reason cannot, then their processes, their reach, and their ultimate meaning will become opaque. If policymakers conclude that AI is needed to uncover the deepest patterns of reality in order to understand the capabilities and intentions of adversaries (who may have deployed AI of their own) and to respond in time, then delegating critical decisions to machines may become inevitable. Societies may reach different limits on which tasks can be entrusted to machines, and at what risk and consequence. The major powers should not wait for a crisis to begin a dialogue on the strategic, doctrinal, and moral implications of these evolutions; if they wait, the consequences may be irreversible. An international effort to limit these risks is imperative.
