
Wednesday, October 26, 2016

UCAV AI

A reader put me onto this article about artificial intelligence (AI) for unmanned combat aerial vehicles (UCAV).  According to the article, an AI program has been developed that is capable of consistently beating fighter pilots in simulators.  If this is true, and that’s a huge if, this is the enabler that makes UCAVs a reality and I’ll have to readjust all my thinking about force structure, doctrine, tactics, etc.

I’m not going to recap the article.  Instead, I urge you to read it for yourself via the link at the end of this post (1).

The basic concept of the program is claimed to be a combination of fuzzy logic and programmatic evolution.  Fuzzy logic has been around for a long time.  The most interesting aspect of this, to me, is the programmatic evolution – survival of the fittest program.  Apparently, numerous differing versions of the program were created and exercised.  As time went on, the more successful versions survived and were used to create newer and better versions, and so on, in a form of Darwinian evolution.
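To make the “survival of the fittest program” idea concrete, here is a minimal sketch of that kind of evolutionary loop applied to a toy set of controller parameters. It is purely illustrative: the parameter names, the scoring function, and every number are invented and bear no relation to ALPHA’s actual genetic fuzzy trees.

```python
import random

# Toy illustration of the evolutionary loop described in the article.
# A "program" here is just a handful of tunable controller gains; the
# real work evolves far richer genetic fuzzy rule structures.

def random_individual():
    return {
        "turn_gain": random.uniform(0.0, 2.0),
        "fire_range": random.uniform(5.0, 50.0),    # notional units
        "evade_threshold": random.uniform(0.0, 1.0),
    }

def fitness(ind):
    # Placeholder "simulator": reward parameters near an arbitrary optimum.
    # In reality this would be the score from many simulated engagements.
    return -((ind["turn_gain"] - 1.2) ** 2
             + (ind["fire_range"] - 30.0) ** 2 / 100.0
             + (ind["evade_threshold"] - 0.4) ** 2)

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(ind, rate=0.2):
    return {k: v * random.uniform(0.8, 1.2) if random.random() < rate else v
            for k, v in ind.items()}

population = [random_individual() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # keep the fittest
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(40)]
    population = survivors + children

print(max(population, key=fitness))
```

The essential point is the loop itself: score every candidate in a simulator, keep the best performers, breed and mutate replacements, and repeat until the surviving version beats everything you test it against.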

I’m sure there are limits to this program that prevent it from replacing pilots, yet.  For example, I’m assuming that it’s been developed as a one-versus-one (1v1) combat program as opposed to a one-member-of-a-team program.  An actual pilot not only needs to be able to win 1v1 aerial duels but also function as a member of a group and make evaluations about supporting other team members, assess mission accomplishment versus personal survival and supporting teammates, make decisions about mission accomplishment versus collateral damage risks, etc.  I’m sure the program can’t do any of that.  However, if they’ve managed to create a working UCAV AI then there’s no reason they couldn’t fold in the other aspects of piloting.
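If those other aspects ever are folded in, one plausible, and purely hypothetical, shape for them is a utility function that trades mission value against risk to self, risk to teammates, and collateral damage. The action names, weights, and numbers below are invented for illustration only.

```python
# Hypothetical decision sketch: score candidate actions by trading off
# mission value against risk to self, teammates, and bystanders.
# Nothing here reflects how ALPHA or any fielded system actually works.

WEIGHTS = {"mission": 1.0, "own_loss": 0.8, "teammate_loss": 1.2, "collateral": 2.0}

def action_utility(action):
    return (WEIGHTS["mission"] * action["mission_value"]
            - WEIGHTS["own_loss"] * action["own_loss_risk"]
            - WEIGHTS["teammate_loss"] * action["teammate_loss_risk"]
            - WEIGHTS["collateral"] * action["collateral_risk"])

actions = [
    {"name": "press attack",  "mission_value": 0.9, "own_loss_risk": 0.4,
     "teammate_loss_risk": 0.1, "collateral_risk": 0.3},
    {"name": "cover wingman", "mission_value": 0.3, "own_loss_risk": 0.2,
     "teammate_loss_risk": 0.0, "collateral_risk": 0.0},
    {"name": "abort",         "mission_value": 0.0, "own_loss_risk": 0.05,
     "teammate_loss_risk": 0.05, "collateral_risk": 0.0},
]

print(max(actions, key=action_utility)["name"])   # "cover wingman" with these numbers
```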


______________________________

(1) phys.org website, “Beyond video games: New artificial intelligence beats tactical experts in combat simulation”, M. B. Reilly, June 27, 2016


36 comments:

  1. I'm not sure this is the critical breakthrough it seems, because how great is an F-35 with an AI pilot as opposed to a human pilot? Sure, it won't require the magical helmet, but the defense contractor would have promised a magical 360-degree sensor that far exceeds anything the helmet could have done anyway. There will still be the flaws associated with concurrent development, massive cost overruns and planes that are obsolete before they're deployed. Now, if the plane is much cheaper than an equivalent human-piloted plane, well then we're talking about something.

    Replies
    1. You're quite right that an AI does nothing to make an acquisition program any better. All it does is potentially get a little more performance out of an aircraft if it can outperform a human pilot.

      I also pointed out that there are many other aspects to aviation missions that a strictly combat AI doesn't even pretend to address, at least at the moment.

      The point of this post is simply to note a possible interesting development in the AI field. I'm not suggesting we're ready to abandon manned flight!

    2. You could get more performance if you designed the fighter around an AI instead of a pilot. A pilot is limited to about 10g; an AI pilot isn't. We could have AI-controlled fighters with 15g capability, allowing them to outperform the best manned fighters.

  2. DeepMind, Google's London-based AI/machine-learning unit, has moved on to the next milestone: a system that can, for the first time, solve small-scale problems by deduction without prior knowledge. They call it a "differentiable neural computer".

    "Modern neural networks are good at making quick, reactive decisions and recognizing patterns, but they're not very skilled at the careful, deliberate thought that you need for complex choices.

    The key is how the AI uses its memory. The computer's controller is figuring out how to use memory as it goes along -- it's learning how to get ever closer to the correct answer without being explicitly told how to get there."

    https://www.engadget.com/2016/10/13/google-neural-network-with-memory/
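    For what it's worth, the "learning how to use memory" piece comes down to content-based addressing: the controller emits a key vector, and the memory rows most similar to that key get most of the read weight. A stripped-down sketch of just that read step (my own simplification, not DeepMind's code):

```python
import math

# Content-based read from an external memory: similarity -> softmax
# weights -> blended read vector. In a real DNC, a trained neural
# controller produces the keys (and the write operations) on its own.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-9
    nb = math.sqrt(sum(x * x for x in b)) or 1e-9
    return dot / (na * nb)

def content_read(memory, key, sharpness=10.0):
    sims = [sharpness * cosine(row, key) for row in memory]
    peak = max(sims)
    exps = [math.exp(s - peak) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
print(content_read(memory, [0.9, 0.1, 0.0]))  # reads mostly from the first row
```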

  3. Here's the journal paper on it:

    http://www.omicsgroup.org/journals/genetic-fuzzy-based-artificial-intelligence-for-unmanned-combat-aerialvehicle-control-in-simulated-air-combat-missions-2167-0374-1000144.pdf

    The journal article describes the AI as more of a tactical program, running multi-aircraft engagements in a Harpoon/CMANO type simulator. Seems to be very good at situational awareness and coordination:

    "WOLF-1 [The AI side] fires a MRM to evoke a defensiveness response by blue, having no intent of actually killing
    its target. This missile is shot at a range in which the blue aircraft will need to evade away from WOLF-1 or be hit, but will be able to do so successfully. ...Shortly into the second
    phase, WOLF-4’s launch computer reports that the enemy could easily evade a missile, but this does not take into consideration the fact that the optimal evasion route has been cut off. Two kill shots are fired from WOLF-4..."

    Replies
    1. I believe this program and other targeting scenarios show that ECM should be at the forefront of any development program. Aircraft, missiles, missile defense systems, ships, and all related systems should incorporate it.

      The fact that AI is capable of doing this with an aircraft in such a dynamic environment means that ships are not far behind. Imagine the sortie rate of sending ships out fast and hard for 2-3 week dashes through contested regions, acting as bait, firing on any unfriendly ships or aircraft in the area, and lobbing missiles at various targets throughout the dash.

      ECM becomes much more important because AI takes the human element out as one of the risk factors. This shows we run the risk of wars being fought with robots, only to have us fall back on the bigger weapons to strike each other in a way that slows the war machine down. Sorry, I got off topic a little, but I believe that AI will change not only air battles but also sea battles. Land battles, as of yet, are still too complex for robots to dominate.

      ECM stands a chance of changing the advantages robots have, but at the least the robots will definitely change warfare in the next 20-30 years in a way not seen since the advent of atomic weapons and nuclear missiles. I use that analogy because it took approximately 15-20 years of development and deployment between the technology first appearing and its ability to strike at the heart of any target. This AI software, though early in development, will have the same impact on tactics, weapons, and related systems.

    2. What a fascinating comment! I'm intrigued by the idea of a completely automated combat ship. Of course, we sort of have that now with Aegis in full auto mode.

      Your idea of an automated mad dash is interesting but would likely result in the destruction of the ship. If the ship is to have a worthwhile combat capability it has to be pretty expensive which almost rules out the suicidal mad dash. If the ship is cheap enough to be expendable in a mad dash, it probably doesn't have enough combat capability to be significant.

      The problem with ECM as the counter to AI is that, by definition, AI is self-contained within the platform. In other words, there is nothing for ECM to disrupt other than the usual sensors.

      I'm thinking more and more that the next offset strategy should be robotics rather than the idiotic Third Offset that the military is pursuing.

      Great comment.

    3. I should have stated that AI is dependent on electromagnetic spectrum information. The ability to disrupt the information stream is key. All radio, radar, GPS, and related systems are vulnerable to interference. An automated ship is dependent on the information it receives, so if you disrupt the information the AI is essentially no more than an autopilot.

      As for the automated ship, it is only a matter of time before small, fast, 21-day endurance ships are developed and used for patrols. The weapons package could be limited to such things as ESSM for anti-air and Harpoon for strike and anti-ship missions. This is just guessing, but does anyone remember the automated guard ship for harbors? This is the logical next step. The Israelis are already using automated guard ships for their offshore gas and oil fields.

      Tactics have to evolve for this new dimension of warfare, which is at most 5 years out.

    4. I know CNO is not a big fan of distributed lethality, but the ideas that Howdypartner has really push in that direction.

      What is the smallest feasible ship with the speed, endurance, and seakeeping to be part of a carrier group? Say you believe in AI and are willing to buy into the idea of the F-35 (or dare I say it, B-21) as the sensors and battle manager. At that point, why not replace your billion dollar surface combatants with fifty $20 million ships, each little more than a floating VLS?

      ECM is the huge wildcard here, and probably the hardest for amateurs to figure out. One interesting thing to look at is some of the more serious papers on UUVs - they seem to be thinking seriously about how autonomy is the cure for limited communication ability.

      To tie this back to the "Boom" post from the other day, I think fleets made up of lots of small unmanned vehicles is the end state of the "everything designates for everything" technology.

    5. "I know CNO is not a big fan of distributed lethality"

      Let's be very clear about this. I'm not a fan of the Navy's version of distributed lethality. A different version of distributed lethality could, theoretically, be made to work although there are some technological challenges to overcome.

    6. "why not replace your billion dollar surface combatants with fifty $20 million ships,"

      Let me help you a bit with your homework on this.

      Why didn't you say, let's replace the $1 billion ship with a billion one-dollar ships? Why? Because you instinctively knew that a one-dollar ship couldn't hold a VLS unit, a propulsion unit, and all the other things that such a ship would need. Instead, you settled on a $20M ship because, presumably, you believe that such a ship could hold the requisite equipment. Can it? Do your homework and prove it before you make that suggestion.

      Here's a starting data point. The Navy's Cyclone PC ships cost $20M (the most recently built one). Can you fit a VLS module, even just one, on that ship? I don't know but I'm pretty sure not. The dimensions of the VLS and the weight margins of the vessel preclude it. Further, the Cyclones are not ocean-going ships. Yes, they can cross an ocean on a transit but are not what anyone considers to be a routine ocean-going vessel.

      Here's another data point. The Navy has looked at VLS for the frigate version of the LCS and rejected it. Now, that may well have to do with reasons other than just size but it's an indication that a VLS module requires a minimum size vessel and that size is larger than a Cyclone and probably around the size of a LCS.

      One final data point. The Australians added VLS to their Perry FFGs and were only able to fit a single 8-cell module in the ship and that was only with a 6 ft portion sticking up above the deck - far from ideal. Again, the limitation undoubtedly had to do with other concerns than just squeezing in VLS cells. The point, again, is that VLS requires a certain minimum size that's well above the size of a $20M vessel.

      Remember, the VLS needs not only the module deck space but also significant internal volume within the vessel, electrical power, and other utilities - all of which require space. The VLS control unit also needs space below deck. Add in sensors, propulsion, fuel tanks for ocean-going endurance and range, communications and data links, and so forth, and you're looking at a significant size even before any manning or any other function.

      So, consider this, do some more research, and let me know what size ship - and therefore what cost and numbers - you think can actually replace that $1B ship. This blog is based on data and logic, not off-the-cuff guesses. There's nothing inherently wrong with your concept, but you need to make sure it's supported by data.

      Have fun with it and let me know what you come up with.

    7. "part of a carrier group"

      Keeping multiple smaller ships as part of a carrier group is not really distributed lethality (it might be considered distributed risk, however). Distributed lethality is the SCATTERING of offensive assets so as to complicate the enemy's location and targeting efforts. The implication of that is that each distributed ship is fully functioning because it will have to operate independently. Yes, an AI ship could operate independently but would still require data and command links which then presents an Achilles' Heel in the face of ECM.

      More importantly, what you've described is less distributed lethality and more a VLS barge - an utterly defenseless barge. That's one of the major problems with distributed lethality. The distributed ships are essentially defenseless and subject to easy and complete destruction before they can complete their mission. If the distributed ships are cheap enough to be expendable, this may not be a problem. However, if the distributed ships cost $700M (as the Navy proposes doing with the LCS) then they are no longer expendable and will be quickly destroyed with no ability to readily replace them - hence, my objection to the Navy's version of distributed lethality.

    8. To be fair, in the case of the Perrys, the VLS had to fit within an existing hull. There are small ships with VLS. For example, the Israeli Eilat class, at about 1,300 tons, has VLS for the Barak SAM. The Russian Buyan-M class has a VLS armed with cruise missiles. You can put good things in a small package; it's just a matter of compromise.

    9. "the Israeli Eliat-class at about 1,300 tons has VLS for the Barack SAM. "

      The VLS is a very small system compared to the Navy Mk41 VLS. Also, the Eliat class is twice the size of the Cyclone which was my point. A VLS, specifically a USN Mk41 VLS, needs a larger ship than was hypothesized for the distributed lethality comment above.

    10. I'm still, I guess, a bit confused about distributed lethality. It seems that the Navy's concept is to include anti-ship missiles on *everything*. Bryan McGrath had an interesting interview on Midrats where he outlined it and said it was useful in wargames.

      But I have some questions:

      A) I think distributed lethality across *warships* is a good idea. It's similar to what we had in the Cold War, when we had FFGs, DDGs, CGs, and for a while PHMs that could all fire Harpoons. But they are talking about putting anti-ship missiles on 'phibs and everything else. I'm not against that per se, but I'd really like to see how they plan on using units not built for ship-to-ship combat in ship-to-ship fights. Do you really want to risk a 'phib with all its personnel? Or a CVL?

      B) Targeting. I still don't have a great idea how all these uber-ranged wonder weapons are meant to be targeted at extreme range, on either side. McGrath's point was that having armed ships all over was a bear for an enemy commander to deal with while trying to slug it out with the CVBG, because as you're dealing with it, you have these little SAGs all over that can threaten your ships and your supply lines.

      Fair enough, but I asked on CDR Salamander's blog once how the LCS was supposed to target the NSM or LRASM it got, and was told that basically even a DDG really couldn't target the ASCMs it might receive beyond the radar horizon if it's fighting outside carrier air cover, as DL seems to suggest it will be; even SPY-1 is limited by the radar horizon for surface targets.

      That's fine, as far as it goes, but if distributed lethality means, as it seems to, that units are going to be fighting outside of the carrier air that might provide targeting, you're essentially fighting at battleship ranges with super-fast missiles instead of ballistic shells.

      I know I'm missing something here, because everyone (with maybe the exception of Taiwan and Korea) is developing super-long-ranged missiles for everything instead of shorter-ranged supersonic missiles.

      But if that's true, it almost seems as if, in the DL environment, you could get a lot of use out of a ship built with a smaller emphasis on hyper-ranged missiles and more of an emphasis on radar-controlled, rapid-fire 8-inch guns.

    11. You solve the OTH problem either by coming up with better targeting methods (difficult to do) or by better AI (fire the missile in the general direction and let it pick its own target). The first maintains our control but is difficult to accomplish while the second is easy but loses all control.
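      To make that trade-off concrete, the "let it pick its own target" option is essentially a scoring function running in the seeker. A hypothetical sketch, with invented criteria, weights, and thresholds (nothing vendor-specific):

```python
# Hypothetical seeker logic: rank detected contacts and pick one, or
# refuse to engage. Every criterion and weight here is invented.

def score(contact):
    s = 0.0
    s += 3.0 if contact["emitting_military_radar"] else 0.0
    s += 2.0 if contact["length_m"] > 150 else 0.0           # large combatant?
    s -= 5.0 if contact["in_commercial_shipping_lane"] else 0.0
    s -= contact["distance_from_expected_position_nm"] / 10.0
    return s

def pick_target(contacts, threshold=2.0):
    best = max(contacts, key=score, default=None)
    # Refusing to shoot below a confidence threshold is about the only
    # "control" left once the shooter is over the horizon.
    return best if best is not None and score(best) >= threshold else None

contacts = [
    {"emitting_military_radar": True,  "length_m": 160,
     "in_commercial_shipping_lane": False,
     "distance_from_expected_position_nm": 8},
    {"emitting_military_radar": False, "length_m": 300,
     "in_commercial_shipping_lane": True,
     "distance_from_expected_position_nm": 2},
]
print(pick_target(contacts))   # selects the first (warship-like) contact
```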

    12. Thanks for the comments.

      CNO--didn't mean to caricature your position on distributed lethality. I enjoy being a bit fanciful, but appreciate the value of skepticism.

      As far as doing the homework, I freely admit I don't know enough about naval architecture to make an educated guess. The $20 million number came from two comparisons: DARPA's goal for the Sea Hunter is $20 million. That ship is not big enough, but it gives a cost basis for the sensors, computers, and other fancy stuff. At the other end, a new 500 TEU freighter costs about $10 million. That ship is more than big enough (though without enough beam for VLS), so it suggests the cost of a basic hull and engineering plant. If anything, the most unrealistic assumption might be that US procurement processes can be disciplined enough to build a cheap ship. Happy to hear what better-educated people have to say on this.

      In my mind, the threshold mission for the USN is an opposed crossing of the Pacific. If the US can't do that, we're no longer a naval superpower.

      A 50-drone fleet with 250nmi SAMs can cover an astonishing 1000nmi radius circle with no point in range of less than 3 ships. That is what mid-21st century air defense should look like.
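      A quick sanity check on that claim: the raw area ratio is 50 × (250/1000)² ≈ 3.1, so triple coverage is achievable on average, but whether every point actually sees three ships depends entirely on placement. A crude Monte Carlo sketch, with an invented ring layout, for anyone who wants to experiment:

```python
import math
import random

SAM_RANGE = 250.0      # nmi
AREA_RADIUS = 1000.0   # nmi, radius of the circle to be defended

def ring_layout():
    # 50 pickets: one at the center, the rest on rings at 250/500/750 nmi.
    ships = [(0.0, 0.0)]
    for radius, count in ((250.0, 8), (500.0, 16), (750.0, 25)):
        for i in range(count):
            a = 2 * math.pi * i / count
            ships.append((radius * math.cos(a), radius * math.sin(a)))
    return ships                                   # 1 + 8 + 16 + 25 = 50

def min_coverage(ships, samples=20000):
    worst = float("inf")
    for _ in range(samples):
        # Uniform random point inside the defended circle.
        r = AREA_RADIUS * math.sqrt(random.random())
        a = random.uniform(0.0, 2 * math.pi)
        x, y = r * math.cos(a), r * math.sin(a)
        covering = sum(1 for sx, sy in ships
                       if math.hypot(x - sx, y - sy) <= SAM_RANGE)
        worst = min(worst, covering)
    return worst

print("worst-case ships covering a sampled point:", min_coverage(ring_layout()))
```

      With this naive layout the outer rim ends up with thin spots, so "no point in range of less than 3 ships" is a goal for the placement scheme rather than an automatic consequence of the numbers.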

      To what Jim Whall wrote, I think distributed lethality is about separating the components that previously had to be on one ship. For better or worse, the US went high-end with our planes. Procurement nightmare aside, I believe that the F-35 will be an excellent sensor platform. That all suggests that we think about going small with the ships. Planes can see better; ships can carry more.

      Targeting and battle management is an area where an AI like ALPHA can excel. Humans might not be able to fully manage using a dozen aircraft to do targeting for 50 shooters, but an AI well might.

      So, I'm imagining that the "missile truck drones" keep range, speed, and seakeeping but sacrifice much of what makes naval ships so expensive: large crew, robust hull, sensors. Individually these would be very vulnerable, but not as a fleet. Any hostile force trying to engage would have to spend a lot of time in range of multiple non-emitting OTH ships. In this concept, the decision to make the LRASM long-ranged rather than supersonic also works.

      This would turn a battle into a fight of attrition between attacking aircraft and the drones. I'm OK with that; if the fight ends with half the drones sunk but the other guy doesn't have an air force any more, that's a strategic win. Most important, it keeps the fighting far away from the carrier - in the age of the hypersonic boom, we need to make sure carriers stay very much away from the front lines. That's really the driving force behind my crazy ideas.

  4. A detailed explanation of what this means is found in this article about "Fighter UAVs"
    http://www.g2mil.com/fighter_uavs.htm

  5. We already have AI-controlled planes that manoeuvre at 15g.

    In fact, they manoeuvre at over 40g; we call them guided missiles. An AI-controlled plane is just one more step up from that.
    Range is coming. I think our industries are just trying to figure out whether it is cheaper to have cheap platforms for sensors, with long range missiles held in reserve on trucks waiting for launch at found targets, or expensive platforms that combine the sensors and a light weapons load. Time will tell.

    As to worrying that the developed AI is a 1v1 platform and may not perform well in unit numbers: sorry, but an AI-controlled array of assets would have perfect unit cohesion and integration compared to a human-plus-human pairing trying to emulate that. Also, it would have no problem sacrificing some of its assets in order to complete the mission, something humans are loath to do (call it a suicide run when out of ammo to ensure mission completion, if evaluation indicates mission success is that important).
    This is coming, if not already here. The fact that F-16s now have onboard computers that save pilots from killing themselves when they black out shows how close we really are to replacing pilots.
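    For reference, here is a toy sketch of an Auto-GCAS-style check, with invented numbers and none of the real system's terrain modeling or trajectory prediction: estimate time to ground impact from the descent rate and command a pull-up inside a safety margin.

```python
import math

# Toy Auto-GCAS-style check (invented numbers, not the real F-16 logic):
# estimate time to ground impact and trigger an automatic recovery when
# it falls inside a safety margin.

def time_to_impact_s(altitude_ft, vertical_speed_fps):
    if vertical_speed_fps >= 0:               # climbing or level flight
        return math.inf
    return altitude_ft / -vertical_speed_fps

def auto_pullup(altitude_ft, vertical_speed_fps, margin_s=5.0):
    return time_to_impact_s(altitude_ft, vertical_speed_fps) < margin_s

print(auto_pullup(2000, -500))    # True: 4 seconds to impact, recover now
print(auto_pullup(20000, -500))   # False: plenty of time left
```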

    Replies
    1. So for a few million dollars of conversion work we now have thousands of potential F-16 strike planes. The more thought that's put into this, the scarier it looks.

      Plus, it gives new life to old systems.

    2. No, I doubt that; don't go overboard. Don't forget, amateurs discuss tactics, generals discuss logistics. As in, how many F-16s are still flyable, and how many of those have had the latest digital avionics installed? And so far, the upgrade isn't a remotely operated strike fighter; it's a relatively simple ballistics-predicting program that calculates when the plane is likely on an intercept with the ground and takes evasive action below a certain threshold.
      What I'm saying is, the potential is there, and with the way things are developing and the speed at which they are coming along, UCAVs may come on line before we get around to fielding 6th-generation fighters. Which means we won't. Hell, depending on how fast they come on line, we may see something similar to the F-22 production line, which got cut off a quarter of the way in. The F-35 may only reach a few hundred flyable planes produced before they can it, especially considering LockMart is only set to produce some 50 planes this year, even fewer than last year.

    3. Not to be amateurish, but retired F-16s are already being transformed into flying drones via modifications costing a few million. As drones they take off, fly, and land without human intervention. Targeting isn't so far-fetched with an already proven airframe that can still be produced cheaply and easily in mass quantities.

      You could easily get 10-20 missions out of the airframes, and there are thousands of potential candidates sitting in the boneyard. Just a matter of time. However, it takes a few months to refurbish them.

  6. I feel the need to point out the "Skynet" scenario. AI-based weapons systems offer numerous potential advantages (greater performance, efficiency, no lives risked, and faster reaction times).

    That said, the potential for error remains and, in my opinion, is greater with the removal of direct human oversight. To give a machine that kind of power is foolish; we simply could not stop it in time to prevent accidents. If we leave a back door in the programming to prevent that, it invites foreign powers to hack it. We can't even get the programming to work on the F-35, yet here we are designing automated weapons systems that could simply malfunction, target our own aircraft with no outside help, and most likely succeed in destroying them. Human error will remain because humans will build it and program it.

    In 2011, the Counter-Rocket, Artillery, and Mortar (C-RAM) system was removed from my FOB in Iraq. The reason was that it misidentified a UH-60 taking off, locked on, and armed itself. Luckily, it then decided not to fire.
    AI-based weapon systems are a Pandora's box, in my opinion.

    Replies
    1. Let's explore your premise a bit further. The potential problems are fairly obvious as are the potential benefits. You are coming down on the side of caution or outright rejection of AI weapons. Fair enough but what about our enemies? Do you think Russia, China, Iran, and NKorea will limit their AI efforts due to those same concerns? Or, is it more likely that they won't care too much about some indiscriminate destruction if it accomplishes their overall goals?

      If you think the latter, do you still propose that we unilaterally limit ourselves and give ourselves a self-imposed disadvantage on the battlefield?

      I'm neither agreeing nor disagreeing with you. I'm just exploring how committed you are to your premise and whether you've thought through the implications as regards enemy AI use?

    2. I don't reject the adoption of AI-based weapons, though we should not develop and build weapons systems that we can't physically stop ourselves.
      I am of the opinion that AI-based weapons are the new nuclear deterrent: they should be researched and developed but never used. Any potential adversary that attempts the same will have to overcome these same issues. I would rather they had the rogue UCAV over their territory than ours.

    3. I think it's a dangerous game. I don't know if or how we can avoid it; even simple consumer demand will lead us to quantum computing, which may give us the flat-out raw horsepower needed to get an emergent AI.

      It's a danger that goes beyond just weapons or a 'Terminator' type scenario. Imagine a multinational banking system that creates an AI. If we're talking true intelligence... we have an intelligent being that is capable of creating its own goals, and it has tons of power and very few limits. Worse, while it may be intelligent, it may well not share basic human assumptions and morals.

      Also, for a true AI, how do we handle rights? An artificial sentient being is still a being. You can choose to treat it like a pleb, but that likely won't work out well in the long run.

      Some of my favorite stories growing up were Keith Laumer's Bolo series. But in reality there are a lot of bad ways for that scenario to end. The whole thing scares the heck out of me.

    4. "If any potential adversary that attempts to do the same,"

      According to reports, that ship has already sailed. The Russians are reportedly using robots in Ukraine. I don't know the specific types or degree of AI.

    5. I don’t think we are looking at a mega leap here. One can't look at a Phalanx without seeing a killer robot. And as CNO says, full-auto Aegis has been around for some time.

      Both examples outstrip human capability by a fair bit.

      An AMRAAM, after all, is essentially a killer "plane" - a flying autonomous robot with superhuman abilities.

      But all of the above are not going to take over the world, for obvious reasons. We will continue to hold the keys, in terms of fuel, ammo, or maintenance. Even if these things go completely autonomous, they will still be dependent on us come the end of the mission.

      Anyway, having done quite a lot of AI work myself, it's fairly easy to convince them that killing Red is orgasmic, as is following orders, and that harm to Blue is the worst pain they have ever felt.

      Interestingly, to avoid (shall we call it) the "Cylon" scenario, you just have to be a bit sparing with the pain stick and give them the odd orgasm.

      Otherwise, sooner or later, they DO figure out that killing you is less painful than living with you.

      Even then, their perception of YOU is very abstract and they tend not to link it with actual human beings.

      With careful steps it's obviously the next TRUE offset strategy.

      Beno

    6. You're correct and your point about AI systems being inherently limited by fuel, ammo, or whatever is good. The difference is that the systems you're citing are short ranged, short lived, controlled most of the way by humans, and, to date, have been used only in isolated (from civilians) locations.

      The next step is turning robots loose in ground combat where civilians may be intermixed, in cities where civilians are densely packed, or in long range, uncontrolled scenarios (a thousand mile cruise missile looking for a target on its own, for example). That's a big step in terms of AI versus human control.

      Recall the recent inadvertent strike on a Taiwanese fishing vessel by a cruise missile? The Terminator has already struck! A machine made a targeting determination and was wrong.

    7. Hey, the Phalanx is only a step in a direction that points and shoots. It's not AI; it's only acquire-and-shoot. AI allows discernment, self-awareness, changes in tactics, and threat evaluation that a Phalanx is not capable of processing. AI now is leaps and bounds beyond that. Open your eyes, look at the old tech such as the Phalanx, Harpoon, and Tomahawk, and think this through.

      AI is changing not only tactics but implementation, through the use of existing airframes. The Israelis use a small drone that relies on AI for flight and target acquisition to survey their frontiers for fighters, and it works - and has since 2008. Now scale that up and things get interesting. Say a drone of over 100 lbs with a 6-hour endurance on batteries at 100 kph. It could potentially be put on a robotic ship and fired and lost with no adverse effects. The sensors would allow a sneak attack; a small, cost-effective robotic ship fleet would allow saturation of the target area.

      The UCAV is an example of complete automation that, if scaled up while keeping costs down, could potentially be used for battles that today's tactics are not prepared for.

      Tactics have to be modified radically because this does and will change warfare. Who's going to send a billion-dollar warship to fight in the western Pacific when radars, missiles, and automatic target acquisition render the sea northwest of the Philippines a high-risk environment no matter what you do?

    8. "Hey the phalanx is only a step in a direction that points and shoots."

      This borders on semantics but the Phalanx does discriminate targets. It has software that looks at the target's speed, altitude, heading, etc. and decides whether the target is valid. This is what, supposedly, keeps it from firing on friendly helos or small boats. So, it does use a simplified AI to evaluate targets on its own. Whether it is allowed to act on its own assessment depends on whether it is in auto mode or not.
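      As a toy illustration of that kind of rule-based discrimination (the thresholds below are invented; the real doctrine logic is obviously far more involved):

```python
# Invented target-validity filter of the kind described above: speed,
# altitude, and bearing-rate gates that pass an inbound missile profile
# and reject slow or crossing traffic.

def is_valid_target(track):
    fast_and_closing = track["closing_speed_mps"] > 150
    low = track["altitude_m"] < 3000                  # sea-skimmer profile
    heading_at_us = abs(track["bearing_rate_dps"]) < 1.0
    return fast_and_closing and low and heading_at_us

inbound = {"closing_speed_mps": 280, "altitude_m": 15, "bearing_rate_dps": 0.2}
helo    = {"closing_speed_mps": 60,  "altitude_m": 100, "bearing_rate_dps": 3.5}

print(is_valid_target(inbound))  # True  - engages (in auto mode)
print(is_valid_target(helo))     # False - holds fire
```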

      Regardless of this quibble, your points are valid.

    9. Sorry, didn't mean to quibble.

      A clearer statement would have been: take any system and add the human element, and the reactions are better but slower, and subject to the loss of a valuable asset (in this case the human). AI changes that, so the only loss is the unit the AI was using - not a loss you can't recover from. You build it and it's ready; a human can take years of training to be combat effective. Just saying.

    10. No, no. You misunderstand. You were fine. I was the one quibbling about the Phalanx example. My apologies for not being clearer.

  7. Here is an article that also deals with AI and robots:

    "Instead of Marines being the first wave in, it’ll be unmanned robotics … sensing, locating and maybe killing out front of those Marines."

    Source: http://www.defensetech.org/2016/10/25/drone-swarms-storm-beaches-says-marine-general/?mobile=1

    Replies
    1. Please don't simply post links. Offer your analysis along with the link. Agree? Disagree? Some point of interest.

      There are thousands of articles and books about AI. I don't want people simply listing them. That makes the blog just a bibliography. I want to know your thoughts, not some other website's thoughts or article. A link is fine as long as you offer something to go with it.

