Thursday, November 9, 2023

Artificial Intelligence

Artificial intelligence (AI) seems to be the future of warfare or, at the very least, a major component of it.  Heck, we already have it to varying degrees and have for many decades.  What we need to address is what level of control we cede to AI, under what circumstances, to what extent we allow it to replace human actions, and what degree of ultimate control we maintain over it.
 
Before we go any further with this discussion, we need to define what AI is.
 
At the most simplistic level, AI is nothing more than machine (programming) logic which takes inputs (for example, sense an enemy), performs calculations and analysis, and generates outputs (for example, shoot the enemy) without requiring any human action.  This can be as simple as an air-to-air missile which senses a heat source (input), calculates an intercept course (calculation), flies toward it (output), and then senses the proximity of an object (input) and detonates an explosive (output).  This level of AI is very basic but very efficient and effective.  We're all comfortable with this level of AI and have no moral qualms about using it.  Of course, one hopes that the heat source was an enemy rather than a friendly one, although accidents have occurred.
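
To make that concrete, here is a minimal sketch of the sense-calculate-act loop just described.  Everything in it (the Contact type, the proximity threshold, the guidance_step function) is an invented illustration, not any actual weapon's logic.

```python
# Minimal sketch of the sense-calculate-act loop described above.
# The Contact type, threshold, and guidance_step function are hypothetical
# illustrations, not any real weapon's software.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Contact:
    bearing_deg: float   # direction to the sensed heat source
    range_m: float       # distance to the sensed heat source

PROXIMITY_FUZE_RANGE_M = 10.0

def guidance_step(contact: Optional[Contact],
                  current_heading_deg: float) -> Tuple[float, bool]:
    """One loop iteration: return (new_heading_deg, detonate)."""
    if contact is None:
        # No heat source sensed: hold course, do not detonate.
        return current_heading_deg, False
    if contact.range_m <= PROXIMITY_FUZE_RANGE_M:
        # Input: proximity of an object.  Output: detonate.
        return current_heading_deg, True
    # Input: heat source.  Calculation: intercept course (here, simple
    # pursuit: steer directly at the source).  Output: new heading.
    return contact.bearing_deg, False

# Example: a contact at bearing 045, 8 m away, triggers detonation.
print(guidance_step(Contact(bearing_deg=45.0, range_m=8.0), 40.0))
```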
 
At the other end of the spectrum is Terminator AI (from the movie series), which has all the thinking capability of a human enhanced by electronic sensors and processing speed.
 
Currently, our technology lies in between the two extremes.  We have some fairly advanced input and analysis chaining (conditional algorithms that attempt to consider and evaluate multiple inputs) leading to condition-based outputs.  We do not, however, come anywhere near Terminator AI.
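
As a purely hypothetical illustration of that kind of conditional chaining (the inputs, thresholds, and rules below are invented, not taken from any fielded system), several inputs are evaluated through a chain of conditions to yield a condition-based output:

```python
# Hypothetical illustration of input/analysis chaining: multiple inputs are
# evaluated through a chain of conditions to produce a condition-based output.
# The inputs, thresholds, and rules are invented for illustration only.

def classify_track(iff_friendly: bool, speed_kts: float,
                   closing: bool, fire_control_radar: bool) -> str:
    if iff_friendly:
        return "friendly - do not engage"
    if fire_control_radar and closing:
        return "hostile - engage"
    if speed_kts > 400 and closing:
        return "suspect - keep tracking, alert a human operator"
    return "unknown - monitor"

# Example: a fast, closing contact with no IFF response and no fire
# control radar emissions.
print(classify_track(iff_friendly=False, speed_kts=450,
                     closing=True, fire_control_radar=False))
# -> suspect - keep tracking, alert a human operator
```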
 
Consider a recent example of flawed AI in which an auto-driving vehicle was involved in an accident (precipitated by another, human-piloted vehicle that knocked a pedestrian into its path) and, after the event, chose to drive to the side of the road, dragging the injured pedestrian twenty feet and stopping on top of the person's leg, where the person remained trapped until responders were eventually able to free them.  Even the dumbest human driver would have known not to move until the injured pedestrian was located and clear.  This illustrates just how far we are from true AI, even in a situation that an ordinary person would deem simplistic and with only one viable action:  remain motionless until the pedestrian's location can be ascertained.[1]
 
 
Let’s look at some of the arguments for and against AI and caveats regarding its use.
 
 
Arguments for AI
 
Accuracy.  Human oversight is often detrimental and harmful.  The Vincennes incident occurred only because humans were ‘in the loop’.  The AI (Aegis) had correctly identified and assessed the situation but humans came to a different, incorrect conclusion.  Had we allowed the AI to operate without oversight, the incident would not have happened.
 
Speed.  Human assessment is too slow for the modern battlefield.  When an enemy missile appears on the horizon, traveling at Mach+ speed, there is no time for human decision making.  Only AI can react with sufficient speed.  If we're going to send unmanned ships out onto the naval battlefield, we need to grant them full authority or we degrade their effectiveness.
 
Ethical Disadvantage.  Enemies will ignore collateral damage and unintended consequences.  China and Russia, among others, will not hesitate to turn AI systems loose without regard to civilian casualties or even friendly fire.  Countries that have embraced human wave attacks and the mass murder of their own citizens will not be particularly squeamish about the possibility of unintended lethal effects if it means they can accomplish their objectives.  If we do not embrace AI we will be at a significant disadvantage.
 
 
Arguments Against AI
 
Dependency.  We run the risk that the use of AI will degrade our innate human abilities.  For example, we've seen that the use of GPS has resulted in a dependency/addiction to it and a loss of our ability to navigate and locate without it, despite having done so for thousands of years prior.  This has already been a factor in multiple ship groundings and collisions.
 
Similarly, dependence on AI will certainly render our ability to think and analyze a lost skill.  We'll come to depend on AI for our thinking and analysis and will be paralyzed and ineffective in its absence.  We've all witnessed the phenomenon of younger people who are wholly dependent on calculators or cash registers to determine change.  They have zero ability to do simple arithmetic in their heads.
 
It hardly requires any foresight to recognize that military leadership – already an ineffective and flawed group of thinkers – will quickly become dependent on AI if for no other reason than to absolve themselves of any hint of responsibility and accountability (blame).  Do we really want to cede our thinking to AI and become just the unthinking, physical hands for a computer program?
 
Novelty.  It is impossible to anticipate, and program for, every contingency.  Thus, at a critical but unexpected moment our AI may fail (the pedestrian dragging incident, for example).  Having become dependent on AI, how would we even recognize a flawed AI output (garbage in, garbage out)?  This is the Internet or calculator phenomenon.  If the Internet or a calculator says something, it’s assumed to be right.  We’ve lost our ability to evaluate the output for ourselves.
 
Susceptibility.  AI is just computer programming.  We’ve already seen that any computer or network can be hacked.  It would be foolish to depend on something that can be easily hacked/attacked.
 
 
Caveats
 
If we don’t allow full control by the AI we’re reducing its effectiveness.  Human oversight is simply too slow to allow an AI system to function at maximum effectiveness.  Our enemies will use AI to full advantage.  If we opt not to do the same, we’ll essentially be fighting with one hand tied behind our back.
 
 
Solution
 
Bounds.  We can maintain control of AI via bounded authority.  In other words, we can turn AI loose with full authority but limit the time or area of that authority.  For example, we can grant an AI system full authority for the next 24 hours and then the system defaults back to human control.  Or, we can grant an AI system full authority within a designated geographical area, outside of which the system defaults back to human control. 
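
As a rough sketch of how such bounded authority might be enforced in software (the times, coordinates, and function names below are invented purely for illustration):

```python
# Rough sketch of "bounded authority": the AI retains full engagement
# authority only inside a time window and a designated geographic box;
# outside either bound, the decision defaults back to a human.
# All values and names are invented for illustration.
from datetime import datetime, timedelta, timezone

AUTHORITY_START = datetime(2023, 11, 9, 0, 0, tzinfo=timezone.utc)
AUTHORITY_WINDOW = timedelta(hours=24)

# Designated area as a simple lat/lon bounding box (degrees).
AREA = {"lat_min": 20.0, "lat_max": 22.0, "lon_min": 119.0, "lon_max": 121.0}

def ai_has_authority(now: datetime, lat: float, lon: float) -> bool:
    within_time = AUTHORITY_START <= now <= AUTHORITY_START + AUTHORITY_WINDOW
    within_area = (AREA["lat_min"] <= lat <= AREA["lat_max"]
                   and AREA["lon_min"] <= lon <= AREA["lon_max"])
    return within_time and within_area

def engagement_decision(now: datetime, lat: float, lon: float,
                        ai_recommends_engage: bool) -> str:
    if not ai_has_authority(now, lat, lon):
        return "defer to human control"
    return "engage" if ai_recommends_engage else "hold fire"

# Example: 12 hours into the window, inside the box, AI recommends engaging.
print(engagement_decision(AUTHORITY_START + timedelta(hours=12),
                          21.0, 120.0, ai_recommends_engage=True))
# -> engage
```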
 
The magnitude of the bounds would be determined by the degree of ‘faith’ we have in the AI and the degree of risk we’re willing to accept.  For example, do we have faith, based on previous experience and testing, that an AI weapon can distinguish between an enemy ship and a civilian one and are we willing to accept that a harmless fishing trawler might be attacked if it means we can sink an enemy ship?
 
 
____________________________

11 comments:

  1. I think you've posed the right questions ... and correct answers, too. The idea of boundaries, whether geographical or chronological, is something I really haven't seen in the discussion before, and I wonder why, since it's solid. I'd even suggest that the idea of granting full autonomy to a military asset, to achieve a 'victory', is something important not just for something controlled by AI, but for humans as well. That, more than anything, imho, is the reasoning behind so many 'failures' in recent history. Of course that becomes a political and even moral question. But I'd rather debate the pros and cons of history from the victor's seat...

  2. AI, like other technologies, is a tool. A nation's technological competency determines how useful it can be.

    It's not just the US; China is also pursuing AI, and you cannot stop that since the people are there. Temporary hardware sanctions (halting the sale of top-tier chips to them) only open up a market for their domestic chip makers to grow.

    The real question is whose AI is more advanced and can thus be better used in real military actions rather than just shows and displays.

  3. AI systems can make Vincennes-like mistakes; AI can hallucinate. (The following is lifted from IBM.)

    AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

    Replies
    1. Another dangerous consequence of AI hallucination is the ever-increasing difficulty of detecting and fixing it when it happens, as it becomes harder to find what's going wrong, and where, the more training data you provide and the larger the algorithm becomes.

  4. I wouldn't be surprised if AI could find something that doesn't exist or, more likely, as we have seen with the recent attack by HAMAS on Israel, if nothing shows up correctly on its "radar", or the signals are so small or just deemed unimportant, a country so reliant on AI could face a massive new Pearl Harbor ... and let's not forget the inevitable GIGO; that won't change with AI!!!!

    Replies
    1. The real danger is that the output, like the Internet or hand calculators, will be accorded unquestioned verisimilitude regardless of the degree of 'garbage in'.

  5. "For example, do we have faith, based on previous experience and testing, that an AI weapon can distinguish between an enemy ship and a civilian one and are we willing to accept that a harmless fishing trawler might be attacked if it means we can sink an enemy ship?"

    This was one of the issues with the Navy's 80s brainworm of using TASM for long-distance strikes. Setting aside the 2-hour flight time, the missile would go after the first radar contact it detected that was large enough.

    LRASM and NSM supposedly have threat libraries so that the onboard IIR seeker can take a snapshot of the target in view and the missile can compare it to the threat library to determine if it is a legitimate target. I'll believe it when it's actually proven, though.

    "Or, we can grant an AI system full authority within a designated geographical area, outside of which the system defaults back to human control. "

    This happened a few years ago and made the headlines in quite a few very sensationalist news pieces. Essentially, a Turkish loitering munition autonomously detected, identified, and engaged a fleeing insurgent fighter. Journalists and normies were up in arms about how this meant AI was out of control, but really, it's no different from setting a free fire zone and telling a human soldier that anything in that zone is hostile and is to be engaged.

  6. AI is another tool that a good Commander will use wisely. It can catch things that tired or biased humans can miss. Would it have argued against invading Iraq, or at least recommended other actions to make the occupation/rebuilding work? It depends on the training, which is continual, BTW. I think the best way to view AI is as another type of staff officer. If a Commander continually refuses the advice, or even shoots down a recommendation or demotes a staff position, pretty soon the careerists stop giving any advice but what the boss wants to hear. AI can be trained the same way. No careerist wants to have a record of something recommending a different COA when they are trying to explain why things didn't work out right.

    Replies
    1. "AI is another tool that a good Commander will use wisely. "

      Absolutely ... in theory. In theory, GPS is another tool that a good Commander will use wisely; however, we've seen that the reality is that we've developed a dependency and lost our ability to navigate and locate without it. Similarly, if we had AI, our commanders would become dependent on it and lose what little ability they had to think for themselves. Heck, we already see that our leadership has lost all ability to rationally analyze, right? How much worse will that be when they have AI relieving them of the necessity to think for themselves?

      " It can catch things that tired or biased humans can miss."

      Absolutely ... in theory. However, in the Port Royal and Guardian grounding incidents, our navigation systems (crude AI as those are) warned us that we were in danger and we ignored the warnings. In the Vincennes incident, our Aegis (a somewhat higher level of AI) correctly noted the absence of danger and yet we ignored the signs. The reality is that we have a definite predilection for ignoring information that does not fit our preconceived notions. We could, of course, insist that commanders always defer to machine logic but there's that dependency issue, again.

      Damned if we do, damned if we don't. The only sure solution is to not use AI and ruthlessly fire/discipline people who screw up on their own (as we quite successfully did in WWII). Of course, then we forego the benefits of AI. What's a military to do ... ?

    2. Be disciplined and practice repeatedly without these tools. But, as you point out, we don't go to the field or realistically exercise our troops, tactics, and equipment.

  7. " . . . are we willing to accept that a harmless fishing trawler might be attacked if it means we can sink an enemy ship?"

    Given how many blue-on-blue or blue-on-green incidents there have been over the years with humans involved, AI might be an improvement.

