Wednesday, July 8, 2020

Artificial Intelligence - Garbage In, Garbage Out

Artificial Intelligence (AI) is the current fad sweeping the military.  The military is feverishly working on vast, interconnected, AI battle management systems.  AI will allow us to vanquish opponents who outgun and outnumber us, or so the fairy tale goes.

There’s just one small problem (aside from the utter lack of firepower in this vision!).  Who programs AI?  Idiots, that’s who!

There is a fundamental principle in computer programming that has been recognized since the first program was written:  GIGO, which stands for Garbage In, Garbage Out.  If you feed a computer garbage input, data, and instructions, the computer will dutifully produce garbage output.  In other words, the output is only as good as the input.
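
To see what GIGO looks like in modern ‘AI’ terms, consider this minimal sketch (the scenario, data, and labels are invented purely for illustration).  A model trained on garbage labels faithfully reproduces them:

```python
# Toy GIGO demonstration; the data and labels are hypothetical.
# Feature: contact speed in knots.  Label: 1 = hostile, 0 = friendly.
from sklearn.tree import DecisionTreeClassifier

X = [[5], [10], [15], [30], [35], [40]]   # observed contact speeds
y_garbage = [1, 1, 1, 0, 0, 0]            # 'expert' labels, exactly backwards (Garbage In)

model = DecisionTreeClassifier().fit(X, y_garbage)

# The model dutifully reproduces the flawed doctrine (Garbage Out),
# and does so in the blink of an eye.
print(model.predict([[8], [38]]))         # -> [1 0]
```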

Before we go any further, let’s consider an analogy.  Say I want to learn calculus.  No matter how many times I observe an idiot trying, and failing, to solve basic arithmetic problems, I can’t learn calculus because my ‘teacher’ didn’t know it and couldn’t demonstrate it.  Similarly, AI can’t learn something that isn’t there and what isn’t there is combat competency.  No one in the US military has it.

So, back to the problem of idiots programming military AI: this is the Garbage In portion.

As we’ve thoroughly documented, our professional warriors are complete morons when it comes to strategy, doctrine, and tactics.  Unfortunately, these are the exact people who will be providing the input and instructions to the AI program.  No, they won’t be doing the actual programming; programmers will do that, but the programmers know nothing about warfare.  They’ll be programming what our so-called military experts tell them.  They’ll be creating an AI that perfectly mimics the ineptitude of our admirals.  Thus, the AI will produce idiotic output.  GIGO.  It will just do it faster than the human idiots could.  Humans take a while to produce stupidity.  AI programs will produce stupidity in the blink of an eye!

Now, there’s a continuum of AI, ranging from the simplistic (‘if you see a certain set of conditions, take the following action’) to true, self-learning, nearly sentient artificial intelligence.
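
The simplistic end of that continuum is nothing more than hand-coded rules, as in the minimal sketch below (the conditions, thresholds, and actions are hypothetical).  Such a system ‘knows’ only what its author typed in:

```python
# The simplistic end of the AI continuum: a hand-coded rule set.
# Every threshold and action here is hypothetical.  The system
# 'knows' only what its (possibly idiot) author typed in.
def recommended_action(contact):
    # 'If you see a certain set of conditions, take the following action.'
    if contact["hostile"] and contact["range_nm"] < 20:
        return "engage"
    if contact["range_nm"] < 50:
        return "track"
    return "monitor"

print(recommended_action({"hostile": True, "range_nm": 12}))   # engage
print(recommended_action({"hostile": False, "range_nm": 75}))  # monitor
```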

Unless we have true self-learning AI, which we don’t, the AI will be no better than the people programming it, and the people programming it are the same people who constantly demonstrate their complete ignorance of warfare.

A true, self-learning AI in the field of warfare can only learn by conducting warfare and observing the results.  Unless we plan to engage in a series of real wars just so our AI can learn, this is a non-starter.

Sure, we can conduct table-top wargames for the AI to observe, but the wargames are conducted by idiots with unrealistic constraints and premises.  We’ve seen plenty of examples of that.  Look what our Marine Commandant learned from the wargames he observed.  Is that what we want an AI to learn from?  An AI raised on what passes for wargames in our military will be an idiot AI.  I can’t watch someone attempt basic arithmetic and magically learn calculus.  Neither can an AI watch idiots conduct fundamentally flawed wargames and learn warfare.  The AI will simply home in on, and optimize for, what we tell it is ‘good’.  The problem is that what we tell it is ‘good’ is horribly flawed.
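
In programming terms, this kind of AI simply maximizes whatever scoring function we hand it.  Here is a toy sketch (the candidate plans and the scoring rule are invented for illustration) of an optimizer dutifully converging on a flawed definition of ‘good’:

```python
# A toy optimizer converging on a flawed definition of 'good'.
# Plans and scoring are hypothetical; the point is that the machine
# optimizes the metric it is given, not the metric we meant.
plans = {
    "massed_firepower":  {"sensor_nodes": 2,  "munitions": 500},
    "networked_sensors": {"sensor_nodes": 60, "munitions": 20},
}

def flawed_score(plan):
    # Wargame-style scoring that rewards networking, not firepower.
    return 10 * plan["sensor_nodes"] + 0.1 * plan["munitions"]

best = max(plans, key=lambda name: flawed_score(plans[name]))
print(best)   # -> networked_sensors, because that is what we scored as 'good'
```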

If I want to program an AI to conduct combat, I need to find a good warrior for the AI to learn from.  This is where it all falls apart.  We don’t have any good warriors!

Garbage In, Garbage Out

25 comments:

  1. There is a simple solution to the problem of implementing AI in a productive way: start from the end user you are trying to help with AI (the sonar operator, the fighter pilot ...), explain to him what AI can do by showing him examples of stuff that works (pattern recognition, for example), then discuss with him how his job can be made easier by "subcontracting" to a computer some of his routines and repeatable tasks. Then you are onto something. If you manage to convince him by building an application that helps him out, I am sure he will ask for more.

    From experience in the industry, a lot of managers don't understand this approach, maybe because it amounts to acknowledging the fact that the person at the bottom of the food chain is an expert in his own field (provided he is good at his job, of course). This is the basis for the way Toyota production line workers are allowed to change things in their job on a daily basis, and it seems to work.

    Replies
    1. "discuss iwth him how his job can be made easier by "subcontracting" to a computer some of his routines and repeatable tasks."

      At the risk of a pedantic semantics discussion, what you're describing is automation rather than AI. Automation simply performs what you refer to as a 'repeatable' task, over and over again. AI, in theory, learns a task and then is capable of improving on it with no other guidance or input.

      Automation is a nice first step on the ultimate path to AI and I think that's what you were suggesting.
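
      Roughly, in code (a minimal sketch; the sonar-style thresholds and numbers below are hypothetical, purely illustrative):

      ```python
      # Automation: the rule is fixed by the programmer and never changes.
      def automated_alert(signal_db):
          return signal_db > 40.0

      # Simple 'AI': the rule adjusts itself from operator feedback.
      class LearningAlert:
          def __init__(self, threshold=40.0):
              self.threshold = threshold

          def alert(self, signal_db):
              return signal_db > self.threshold

          def feedback(self, signal_db, was_real_contact):
              if was_real_contact and signal_db <= self.threshold:
                  self.threshold -= 1.0   # missed a real contact: more sensitive
              elif not was_real_contact and signal_db > self.threshold:
                  self.threshold += 1.0   # false alarm: less sensitive

      alert = LearningAlert()
      print(alert.alert(42.0))        # True: above the initial threshold
      alert.feedback(39.0, True)      # operator confirms a missed contact
      print(alert.threshold)          # 39.0: the rule has adjusted itself
      ```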

  2. I think you might be missing something about "AI", CNO. One AI system/program/supercomputer might (and will) make some mistakes, but when many AI systems become interconnected, they can interact with, correct, and learn from the other AIs, which leads to Skynet. In that respect they will become much like a human chain of command. No one human is error free, and humans become much better at figuring out a problem when other people are also working on the same problem, both above and below any one human's idea. No one person invented anything or made an invention perfect; most built upon earlier inventions and also had associates or other inventors who affected/improved/influenced their invention. I think AI will get to that very shortly. You won't have just a smart drone battle unit out there running on its own AI; it will be networking with as many AIs as are in its network, all talking to each other as well, and adjusting their own programming and output based on all that, at much faster speeds than any human or group of humans can manage.

    It is gonna get interesting. A group of AI battle robots might not be smarter than a group of soldiers/sailors/battleships/airplanes, but their collective intelligence (even if some have some fairly stupid AI) will be a lot faster and will learn quicker.
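
    For what it's worth, a toy sketch of that collective-correction idea, ensemble majority voting (the models and their quirks are hypothetical; it only helps when the errors are independent, since models trained on the same garbage simply vote for the same garbage):

    ```python
    # Toy 'collective intelligence': majority voting among imperfect models.
    # The three classifiers and their quirks are hypothetical.
    from collections import Counter

    def model_a(speed): return "hostile" if speed > 30 else "friendly"  # decent
    def model_b(speed): return "hostile" if speed > 25 else "friendly"  # decent
    def model_c(speed): return "hostile"                                # fairly stupid

    def ensemble(speed):
        votes = Counter(m(speed) for m in (model_a, model_b, model_c))
        return votes.most_common(1)[0][0]

    print(ensemble(10))   # -> friendly: the decent models outvote the stupid one
    ```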

    Replies
    1. "they can interact/correct and learn from the other "AI" , whichs leads to Skynet."

      That is eons beyond our lifetime, so for our purposes we'll stick with the more conventional and current concept of AI.

  3. Minor correction: "You won't have just a smart drone battle unit out there running on its own AI"

    "You won't just have one smart (or dumb) drone battle unit out there running its own AI"

  4. That's not how true AI works; what you're talking about isn't AI, it's just more complex software. No one programs true AI, it learns by itself. You let it analyse as much input as it can get (connect it to the internet) and it'll happily read everything, watch through all the cameras, phones, TVs, etc., and run billions of scenarios to see what makes what happen and the best way to do things or achieve a certain goal. A key benefit is that it isn't subject to a particular perspective.

    Again, I agree with you that what the defense industry is doing now is just marketing BS; this isn't AI. AI is still the right answer, when it comes, but it's not here yet and won't be for decades.

    Replies
    1. "what you're talking about isn't AI, it's just more complex software."

      That's correct, but that's all we can aspire to for the foreseeable future, and that's what the military considers AI. What you're describing is a fantasy level of Terminator sentient intelligence. We can't write a simple database for the military, so I'm pretty confident that a true sentient AI is millennia down the road. So, for the time being, we'll stick to the military's vision and definition of AI.

      @anon. Actually, you really are making CNO's point when you say AI learns from what it sees in the world... was it FB or Google that turned off their AI because, after 24 hours of watching the human internet, it decided its best course of "being" was turning into a racist pedophile porn addict asshole? I think that makes CNO's point about GIGO!!!

      What would an AI learn from US military actions and results over the past 100 years? That war should go on for years and be wasteful?!? We don't need AI for that!

      @NICO: perhaps if the AI had been left on a bit longer, it might have come up with some useful suggestions about why that was the case and what could be done about it.

      There are lots of good reasons to pursue even the current 'AI'. Humans frequently make poor decisions for all sorts of reasons: bias, lack of knowledge, incompetence, conflicting orders, etc. Where the current AI level has potential is in analysing large amounts of data and reacting to it quickly. The key point is to work out where computers do things better than humans (and vice versa) and take humans or computers out of the loop as appropriate, and that divide is certainly not where the military thinks it is now. Exactly the same applies to politicians.

    4. "that divide is certainly not where the military thinks it is now."

      My sense of it is that the military views AI as a way to exert even closer, 'better' micro-management than they do now, and that would be extremely unwise.

    5. If that is truly their mindset (and there is good reason to think it is), then even a 2nd Lt can work out that politicians could use AI to exert closer, 'better' micro-management of the military.

  5. Does anyone know how to turn a bot loose in the comments here to drive home the point? Given the current politics in this country and AI's hand in deepening the division, it's extremely naive to underestimate it as an ally or an opponent. Who cares what the military thinks AI is or isn't.

  6. Spent time with some people in the defense industry; they can't talk about what they're working on, just the regular human problems: meetings dragging on, idiots, incompetent managers, etc. What hit me was somewhat what you're saying: if the simulation or wargaming people don't care, or are just taking the data from the military and making the numbers match, are we so sure our weapons work against the Chinese or Russians? These people are clueless; it's just another job for them. I bet they have no clue about, or interest in, the data provided; it's all top secret anyway, right? So their interest is to make the numbers match and prove their weapon works in the sim against whatever data the military provided. If the data is right, great, but if the data about the enemy is wrong.....

    For me, "AI" could be useful as a different kind or form of chief of staff, somebody you could run scenarios on and not be judged, run different ideas by,etc....where it could get interesting is as your personal AI gets smarter and learns more about the job and the general or admiral its working for, what does it do? Provide good advice to win or do whatever pleases the boss? LOL!

    Seriously though, an example where AI could be useful is against China: as you input the data and desired end result, wouldn't it force the US military to decide what "victory" looks like? I mean, you want the AI to have an end game result that you agree with, right? It would focus the humans' minds on the desired end game.... if not, the AI will decide what victory is and you had better hope we humans agree....

    Replies
    1. I do worry that we still think of defense as war starts / war stops / victory. It needs to be, "If you try to take Taiwan you will never win, you will never be rid of the mess, this will bleed you dry. Also, we will make your own people empathize with the enemy and destroy you from within."

    2. I worry that this is exactly the same in reverse for US dreams of beating China.

    3. @AndyM. Agree, this is where a form of AI gaming could be useful, although we do go back to GIGO: how do we model Chinese reactions? A very loose AI could produce some very wide "human" reactions that could be unrealistic or perfectly possible, but are we really using AI to its full potential or just doing clever war gaming? We know how that is working out for the USMC.... I could see AI giving us solutions other than start/stop or victory/defeat, and that could be useful for policy makers and the military.

  7. Two major thoughts/problems:
    First, I feel that the vision of autonomous machines, whether soldiers, ships, aircraft, etc., running on AI is very far in the future, and it still ignores the fundamental problem we have now, which is fragile networking, and that will always be a problem. The EW struggle is real! AI can and will find its niche military use, but it will be much smaller in scale, say as an assistant to a sonar operator, or as an add-on/upgrade to Aegis. Controlling sub-units or systems is where its speed, narrowly focused, will let it shine.
    Second, AI or not, we're still thinking information processing will win battles or wars. Wrong. It's firepower. Explosives. Ordnance. Our continued focus on megabytes instead of megabooms is folly. The need for everyone to know everything in a combat zone is wasteful and unnecessary. We have not only forgotten what we learned about numbers, combat power, and survivability, but also thrown out the classic "overwhelming firepower" mentality that has been an American hallmark for so long. So, regardless of how well we master the information domain, it's unimportant relative to our ability to shoot/bomb/kill the enemy....

  8. http://kylemizokami.com/?p=909 - The Man in the Gray Weapons Suit. A good read.

    Replies
    1. Good one. Reminds me of a short space story where the pilots pretty much need to live in special space cocoons to fight the aliens, and of the difficulty of getting the pilots to leave the cocoons after living in them for weeks!

  9. In the words of the FT appearing in a recent Navy advert, "Warheads on foreheads." That's what wins wars.

  10. The July issue of Proceedings had an interesting quote from Adm. Winnefeld (Ret.): The overmatch that enabled the United States to overcome the tyranny of distance in the Pacific is fading. According to one commentator, “The Pentagon has reportedly enacted 18 war games against China over Taiwan, and China has prevailed in every one.” [6]

    [6] See Fareed Zakaria, “The New China Scare,” Foreign Affairs (January/February 2020): 52.

    While I personally have a low opinion of Zakaria, I would be quite interested to know what the "In" of these combat models was.

  11. "Similarly, AI can’t learn something that isn’t there and what isn’t there is combat competency. No one in the US military has it."

    But history is replete with competent (and incompetent) commanders, both foreign and domestic. An AI can learn from that history just as our current leaders have done. (And it's the wiser man who learns from the mistakes of others.) Plus, there are dozens, maybe hundreds, of texts and thousands of papers on military theory, from strategy and tactics to logistics and supply, that would help form the basis for teaching an AI.

    Replies
    1. I think you're missing the main point. Unless you believe we can produce true sentient AI (which isn't going to happen for eons to come), we're limited to AI that is just sophisticated programming. We'll program an AI and tell it what to value, what factors to consider, how to weight those factors, and so on. The kind of AI we can produce in any foreseeable future is pretty limited compared to the Terminator AI you seem to have in mind. With a limited AI, I refer you back to the post's points.

      There is zero possibility of an all-learning, all-knowing AI that you seem to have in mind.

      Now, if you want to have fun and imagine a Terminator, sure, go ahead, but that's not even remotely relevant to any practical aspect of our current capability.

    2. "There is zero possibility of an all-learning, all-knowing AI that you seem to have in mind."

      And, as any number of science fiction books and movies have suggested over many years, that's not something we want.

  12. As others have mentioned, I think AI will be more useful as an aide to an ASW operator or a platoon leader looking for a sniper, ambush sites, IEDs, etc. It won't be super smart like a Terminator; what's really needed is more of a giant memory bank with some limited ability to sift data and find patterns. One area where AI could help is with passing down knowledge. Example: Unit A has been patrolling for a year in Iraq. They have acquired plenty of knowledge (the hard way!) and input all the events and locations of ambushes, IEDs, important facts and dates, etc. Unit A is preparing to leave and talks to Unit B about all the aspects and problems they encountered BUT, let's face it, is it possible to impart everything? Now, with AI, Unit A leaves this black box of knowledge to Unit B. How much more useful is having Unit B now patrol with the help of AI and its local database? Imagine the AI staying in Iraq and helping Unit C, Unit D, then Unit E, etc. I think that could be useful and could be achieved in the next few years. It's not true AI, just clever programming with a huge database, but it's probably more useful IMO.
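
    A rough sketch of what that 'black box of knowledge' could look like (every field name, coordinate, and record below is hypothetical, purely illustrative); it's just a local incident database with a proximity query, no true AI required:

    ```python
    # Unit A's accumulated experience as a queryable incident database.
    # All records and coordinates are invented for illustration.
    import math

    incidents = [
        {"type": "IED",    "lat": 33.310, "lon": 44.370, "note": "culvert, at dawn"},
        {"type": "ambush", "lat": 33.480, "lon": 44.400, "note": "rooftops, at dusk"},
    ]

    def nearby_incidents(lat, lon, radius_km=2.0):
        """Return past incidents within radius_km of the patrol's position."""
        def dist_km(lat2, lon2):
            # Crude flat-earth approximation, adequate for short ranges:
            # ~111 km per degree latitude, ~93 km per degree longitude here.
            return math.hypot((lat - lat2) * 111.0, (lon - lon2) * 93.0)
        return [i for i in incidents if dist_km(i["lat"], i["lon"]) <= radius_km]

    # Unit B queries Unit A's hard-won knowledge before stepping off.
    for hit in nearby_incidents(33.320, 44.380):
        print(hit["type"], "-", hit["note"])
    ```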

