
Monday, August 27, 2018

Unmanned Thoughts

The military has bought into the unmanned craze hook, line, and sinker.  They’re jumping on unmanned for every possible application with absolutely no thought given to whether it makes sense or is practical on the battlefield.  Unmanned vehicles can certainly make life easier during peacetime but what happens when the enemy starts shooting and unmanned vehicles find their comms jammed and their lifespans measured in minutes?  Well, we’re not going to dig into that.  Instead, here are just a few updated thoughts to help inform our opinions on unmanned vehicles.

Data

Picture swarms of unmanned land, sea, and air vehicles ranging across the battlefield with their various radar, IR, and optical imaging sensors.  We’ll have total battlefield awareness right down to how many buttons on each enemy soldier’s shirt!  I wonder what comes after quadra-giga-tetra-gazilla bytes of data, because that’s what we’ll have.  There won’t be anything we don’t know!  We’ll be unstoppable and unbeatable.

Of course, history, including very recent history, proves this to be completely false.  As noted in the recent post about the Yemen missile attacks on the USS Mason, despite having multiple ships with Aegis, IR, and optical sensors all backed up by satellite coverage and various airborne regional sensors, we don’t even know if any attacks actually occurred!  We had tetratons of data and yet no actual answer. 

How can we have that much data and yet no answers?  Let me ask you, what is the exact width of the lot your house sits on (renters, just play along)?  You have no idea, do you?  And, yet, you had a survey done as part of your purchase of the house (whether you were aware of it or not) so you have the data.  You just didn’t assign it any importance and probably have no idea where those documents/data are now.

You have the data but you don’t have the answer.

Or, consider that after every terrorist act the post-event analysis inevitably reveals that we had all the data points necessary to predict and prevent the event but no one was able to assemble the data and connect the dots.

More data is not the answer – better interpretation is.

A UAV can record images of a hundred fishing-type vessels, but which of those, if any, are carrying terrorists or disguised enemy forces?  Having the data isn’t the answer; interpretation is.  Someone has to interpret the images and decide which, if any, are threats.

Those swarms of unmanned vehicles roaming the battlefield and collecting data are, arguably, just making the problem worse!  We already have more data than we can intelligently interpret and now we’re envisioning more?!

We should not be working on putting more sensors over the battlefield (setting aside the fact that they aren’t survivable); we should be working on applying more interpretation to the data we already collect.

Data without proper interpretation is, at best, a waste of time and effort and, at worst, distracts or misleads us from what’s really important.  So, what’s the point of more UAVs?  We already have more than we can productively use.  We think more UAVs will help but we’re proving on a daily basis that we can’t make good use of what we already have.

UAVs are not the magic observation platforms that so many people believe them to be.


Commander’s Intent

Commander’s Intent is the Holy Grail of warfare - subordinates who can act on their own exactly as the Commander wishes with nothing more than the Commander’s Intent as guidance.  This has been repeatedly attempted throughout history with varying degrees of success.  At its best, Commander’s Intent allows a commander to direct a battle with a minimum of interaction with his subordinates.  Nelson’s guidance at Trafalgar is an outstanding example of this.  At its worst, it produces erratic, unintended actions due to failure to accurately convey and/or understand the intent.  Unfortunately, the latter has proven more likely than the former on the battlefield.

The reasons for failure to accurately convey intent fall on both sides of the commander-subordinate relationship.  Commanders fail to clearly convey their intent and subordinates fail to clearly understand the conveyed intent.

Presumably, we’d like to apply this same philosophy to our interactions with unmanned, autonomous vehicles.  However, if we can’t reliably convey Commander’s Intent to humans, how will we convey it to unmanned, autonomous machines?  How will an autonomous machine interpret and act on a vague statement of intent like “Hold out as long as you reasonably can”?  Will we have to stop the war to write, test, and debug new software every time we want to issue a new statement of intent?

Yet, without some form of intent instructions to an autonomous UAV, we’ll have to “pilot” every UAV and then what have we gained (see Manning, below)?  Currently, UAVs are incapable of “intent” guidance so we do have to pilot them and, perversely, unmanned platforms require more manning than manned ones!


Manning

Unmanned vehicles have been “sold” as reducing overall manning levels, among many other near-magical claims.  The reality, however, is just the opposite.  While we may, indeed, remove the pilot from the cockpit, we don’t eliminate him; he just moves to a different location.  Further, unmanned systems require more manpower to support.  From an Armed Forces Journal article,

Yet the military’s growing body of experience shows that autonomous systems don’t actually solve any given problem, but merely change its nature. It’s called the autonomy paradox: The very systems designed to reduce the need for human operators require more manpower to support them. (1)

“The [remotely piloted aircraft] ... requires much more architecture than, say, an F-16 squadron,” Kwast said.  While the ratio of people to aircraft in manned aviation is roughly 1.5 to 1, he said, it takes about 10 people to operate one UAV at any given time. (2)

Industry’s experience has been the same.  Automated systems may remove the worker from the immediate task but they create legions of new workers to maintain, program, troubleshoot, repair, and modify them.  Automated systems increase overall manning levels, not decrease them.
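To put the quoted ratios in concrete terms, here is a back-of-the-envelope sketch.  The squadron size and the number of simultaneous UAV orbits are illustrative assumptions on my part, not figures from either article.

    # Rough manning comparison using the ratios quoted above (Python).
    # The squadron size and orbit count are assumed, illustrative numbers.
    MANNED_RATIO = 1.5    # people per aircraft, manned aviation (quoted)
    UAV_OPERATORS = 10    # people to operate one UAV at any given time (quoted)

    manned_aircraft = 24  # assumed squadron size
    uav_orbits = 4        # assumed number of simultaneous UAV orbits

    print("Manned squadron:", manned_aircraft * MANNED_RATIO)  # 36.0 people
    print("UAV orbits:     ", uav_orbits * UAV_OPERATORS)      # 40 people

Under those assumptions, just four continuous UAV orbits consume roughly the manning of an entire 24-plane squadron - the autonomy paradox in miniature.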

We saw a closely related example of this phenomenon with the LCS.  While not an unmanned platform, it was designed to operate with a bare minimum crew thanks to a large degree of automation.  The reality turned out quite different.  The number of “crew” required to support and operate an LCS is larger than if the ship were fully manned and the numbers are probably greater than for the older, less automated Perrys that they replaced. 


Conclusion

Unmanned vehicles offer some benefits but they are far from being the panacea that so many, including the military, believe.  The Armed Forces Journal article put it nicely: autonomous systems “don’t actually solve any given problem, but merely change its nature”.  The military’s obsessive pursuit of unmanned vehicles is ill-considered and short-sighted and is distracting the military from larger, more serious issues like maintenance, readiness, numbers, and firepower.



_____________________________________

(1) Armed Forces Journal, “The Autonomy Paradox”, 1-Oct-2011,
http://armedforcesjournal.com/the-autonomy-paradox/

(2) Military.com, “Air Force Wants To Decrease Manning For Its UAVs”, Oriana Pawlyk, 24-Feb-2018,
https://www.military.com/daily-news/2018/02/24/air-force-wants-decrease-manning-its-unmanned-vehicles.html



35 comments:

  1. This reminds me of the argument about unmanned space probes versus manned exploration: one hour of work from an astronaut was equal to the sum total of robotic probes.

    The key to all this is not robot versus human, but how to employ both to best effect.

    GAB

    1. insert "lunar robotic probes"

      GAB

  2. When unmanned systems are autonomous enough to understand intent, they will be too dangerous to use.

    1. The problem with that thought is that it won't stop the Russians/Chinese/Iranians/NKs/terrorists from using them.

      The other problem is that if you believe that - and there's plenty to agree with about that - then why are we pursuing unmanned so hard? Why are we pushing for unmanned combat vehicles if it's an ultimate dead end?

      I'm not arguing with you, just trying to nudge your thought process a little further along.

  3. You are correct that Russia and China will plow ahead with or without us. I view AI as a threat on par with nuclear weapons. We have to invest in it, and keep our hand on the steering wheel.

    On one hand, we made it out of the first nuclear age without things getting out of control. On the other hand, there were a ton of close calls and lucky breaks.

    I do not have any easy answers or sage advice, just a lot of uneasy feelings. Treaties did help keep a lid on nukes (for a while at least), but AI will be orders of magnitude easier to proliferate.

  4. Relative to maintenance and repairs and other non-combat functions at sea, I don't think it will be too long before we see humanoid robots assisting with regular maintenance and repairs on ships at sea. DARPA has an annual robotics challenge with the intent to field robots for humanitarian and disaster relief. They have to demonstrate operating in a complex urban environment and the use of various tools. The Navy wants them for damage control.

    They might also be used in a ship's mess to prepare meals too. While they will need their own set of maintainers, they could free the human crew to focus on other functions.

    As for aerial tanking, the MQ-25, if successful, would make additional Super Hornets available to support combat operations. And, while they too would require maintainers, the need is probably smaller due to the lack of systems (e.g., life support, ejection seat, etc.) required to support a pilot.

    1. "While they will need their own set of maintainers, they could free the human crew to focus on other functions. ...
      "while they too would require maintainers, the need is probably fewer due to the lack systems"

      From my own personal observations over the course of a career where automation, in various forms, became dominant and from the statement in the post,

      "The very systems designed to reduce the need for human operators require more manpower to support them."

      I'm going to repeat, the use of automation (unmanned) results in an increase in manning, not a decrease.

      The immediate, local effect may be a decrease in manning (a pilot is removed from the cockpit, for example) but the overall effect is an increase. Thus, your statements fly in the face of empirical evidence.

      The other aspect of shipboard automation, such as your thought on robotic damage control, is that the ship's crew is relatively fixed. Thus, if we have, say, 10 billets for damage control on a ship and we replace those 10 people with robots but it requires 20 people to service them, we have a net gain of 10 crew and nowhere to put them. The ship does not have 10 additional berths and all the associated support functions.
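      To make that toy arithmetic explicit, here is the same example as a trivial Python calculation (all numbers hypothetical, exactly as above):

          # Hypothetical net manning effect of robotic damage control.
          dc_billets_replaced = 10   # sailors replaced by robots (assumed)
          robot_maintainers = 20     # people needed to service the robots (assumed)

          net_change = robot_maintainers - dc_billets_replaced
          print("Net crew change: %+d" % net_change)   # +10 people, and no berths for them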

      The Navy's Holy Grail is reduced manning and, ironically, the pursuit of robotics may increase manning - the opposite effect from what the Navy wants!

    2. The basis for your argument is a story that was published almost 7 years ago. Yes, there is a learning curve with new technology, but I don't believe it is an absolute that unmanned or automated systems always increase overall manpower requirements.

      In your example of 20 people to 10 robots, that might be the case at first. But, that doesn't mean things can't improve over time, maybe requiring only a few people once enough experience is gained.

    3. "I don't believe it is an absolute that unmanned or automated systems always increase overall manpower requirements."

      Okay ... what do you base this belief on?

      The highly automated LCS has resulted in far more "crew" than the Perrys.

      The AF states that UAVs require more manning.

      As the Marines have added computers and drones to low level units they've had to add additional billets.

      I can go on and on with examples.

      So, what do you base your belief on? This blog is based on facts and data. Give me some data that supports your belief.

    4. "The basis for your argument is a story that was published almost 7 years ago."

      Alright, here's a quote from a military.com article dated 24-Feb-2018 from Lt. Gen. Steven L. Kwast, the commander of Air Education and Training Command.

      "The [remotely piloted aircraft] … requires much more architecture than, say, an F-16 squadron,” Kwast said. While the ratio of people to aircraft in manned aviation is roughly 1.5 to 1, he said, it takes about 10 people to operate one UAV at any given time."

    5. @CNO,

      One interesting aspect of the UAV manning issue is the impact on training and skill sets.

      The medieval knight or longbowman was a supremely effective warrior, but the intensive, life-long training made it difficult to maintain large numbers of them.

      If, and that is a pregnant question, UAVs can be operated by less skilled personnel, we might reap benefits even if there is a net increase in manpower.

      Right now, this is not the case; I suspect a great part of the increased personnel requirement for UAVs is driven by overly restrictive ROE. We court-martial an air crew that does something stupid, but the UAV kill chain demands a lawyer in the cockpit, supported by an army of intel analysts to peel back the fog of war.

      GAB

    6. Excellent aspect that I hadn't really considered!

      We've seen this scenario play out with the Aegis radar system. Initially, the system was maintained by highly trained and experienced contractors but, as the Navy took over complete responsibility, the training and experience levels of the technicians declined markedly and the system suffered fleet-wide degradation that I don't think it has yet recovered from and likely never will. The degree of training for a truly competent Aegis systems tech is tantamount to life-long and the Navy simply is unwilling and/or unable to provide that level of training.

      While it is likely possible to train "anyone" to fly a UAV from point A to point B, the training required to conduct air combat maneuvers effectively is pilot level or beyond (beyond, because the time lag and skewed perspective demand additional training and experience). I don't believe it will ever be possible to effectively operate UCAVs with remote pilots. UCAVs will only be possible with totally autonomous control. Sorry, wandering off topic.

      Great comment!

    7. From the DARPA Robotics Challenge web page:
      "Some disasters, due to grave risks to the health and wellbeing of rescue and aid workers, prove too great in scale or scope for timely and effective human response. The DARPA Robotics Challenge (DRC) seeks to address this problem by promoting innovation in human-supervised robotic technology for disaster-response operations."

      "The goal was to accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impacts of natural or man-made disasters."

      That's not the same sort of automation as replacing crew performing damage control or even making meals. It's not really automation at all; it's an RPV working remotely in a hazardous environment. That's a great goal and has lots of military applications (EOD and CBW come to mind), but it's not going to reduce crew size.

    8. "but it's not going to reduce crew size."

      No, it's not. In fact, it will increase crew size since additional maintainers will be needed to service the robots in addition to "piloting" them.

      One of the interesting aspects of robotic damage control is weight. If a human damage control crew is incapacitated, we can fairly easily move them out of the area so that other crew can escape or re-enter the area to perform damage control. If a robot is incapacitated and weighs too much, it becomes an obstacle, possibly an immovable obstruction, which may cause more problems than it could have solved. We can move a 200 lb man but we can't move a 300/400/500 lb robot that's blocking a door, hatch, or passageway. Of course, I have no idea what a future damage control robot will weigh but, in general, robots are heavy due to their materials of construction. Just something to think about.

    9. ""The [remotely piloted aircraft] … requires much more architecture than, say, an F-16 squadron,” Kwast said. While the ratio of people to aircraft in manned aviation is roughly 1.5 to 1, he said, it takes about 10 people to operate one UAV at any given time."

      From the same Military.com article, Kwast also said, “We’re going to change the game -- I am working with the whole of the Air Force to build a strategy and an architecture that gives us more ISR for less people, for less money.”

      I think there is a little bit of comparing apples-to-oranges going on here. UAVs fly a different kind of mission than fighters. According to the Air Force Times in 2015, fighter pilots fly about 250 hours per year compared to drone pilots flying nearly 900 hours per year. And, as currently designed, drones require a pilot and a sensor operator. If the control system can be redesigned to be operated by a single pilot, that would cut the number of drone operators in half.

      And, if the ratio of people to aircraft in manned aviation is roughly 1.5 to 1, that suggests there are 36 people in a 24-ship fighter squadron. I don't know how many airmen are in a fighter squadron but, between the pilots, maintainers, and operations, I'm sure it's on the order of 150 or so.
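      For what it's worth, here is a rough "people per flying hour" normalization of the figures quoted in this thread (Python). The hours come from the Air Force Times numbers above; treating one pilot as roughly one aircraft's worth of manned flying is my own simplifying assumption.

          # Rough per-flight-hour manning comparison from the quoted figures.
          fighter_hours = 250    # flight hours per fighter pilot per year (quoted)
          drone_hours = 900      # flight hours per drone pilot per year (quoted)
          fighter_people = 1.5   # people per manned aircraft (quoted)
          drone_people = 10      # people per UAV at any given time (quoted)

          print("Manned: %.1f people per 1,000 flight hours" % (fighter_people / fighter_hours * 1000))  # ~6.0
          print("UAV:    %.1f people per 1,000 flight hours" % (drone_people / drone_hours * 1000))      # ~11.1

      Normalized that way, the manning gap shrinks from roughly 6.7:1 to about 1.9:1 - still more people for the UAV, but a much less lopsided comparison.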

    10. " “We’re going to change the game"

      Do you know how many times the military has said that over the years about almost everything? And it never has changed.

      If Kwast "changes the game" then I'll reconsider my position. Until then, I'm good!

    11. "But, between the pilots, maintainers, and operations, I'm sure its on the order of 150 or so."

      Let's be sure to include the cooks who support the pilots and the paymasters and laundry men and ... I'll grant you that the article did not lay out its definition of what it counted as people needed to operate a UAV or a manned aircraft; however, the key point was the 1.5 versus 10 which, presumably, was counted on the same basis so, yes, we are comparing apples to apples.

  5. The US has long suffered from "too much data, not enough analysis". But even having the analysis is still not enough, you then have to act on it.

    Kimmel had been given a report of a destroyer attacking a submarine at the entrance to Pearl Harbor early on December 7th, but he deferred making a decision until he got confirmation of the data. He wasn't really prepared to make a decision and then act on it.

    Likewise in the Philippines, MacArthur and his staff dithered about launching a bomber strike against the Japanese until the question was mooted by air strikes crippling his bomber force.

    The US keeps recognizing this problem, but continues to be unable to solve it. We keep creating new intelligence organizations and leaders, but always wind up with the same problem. It takes too long for the data to get analyzed, sent to a decision maker who then acts on the information and sends direction back down the chain of command.

    1. You're right that the US is, as a general statement, very slow to act, although it's often an individual characteristic. For example, Adm. Halsey issued Battle Order Number One on 28-Nov-1941, ten days prior to Pearl Harbor. He and Enterprise were already at war and taking action (see "Battle Order Number One").

      So, we do have individuals who will act but they have a very hard time getting promoted during peacetime, for obvious reasons.

  6. I think a few points are missed here, mostly on data and intent. One of the major advantages of AI is that it can be trained. It can be trained like elite servicemen except that the training never dies. If one platform is "killed", the training survives. The way AI is trained is through trial and error on a massive scale - millions and millions of trials. For example, we run Red Flag once a year and it gets our pilots a lot of experience. Imagine if we could run it a million times a day, every day, in a virtual environment for less money than it takes to run a manned exercise once. And every time it is run, the two copies of virtual commanders learn how to employ tactics more effectively against each other.

    This is how Google's AlphaGo learned to beat human players. They gave it the rules of the game and let it play against itself until it reached a world-class level. We can run individual scenarios over and over again, weighting things like costs of platforms and munitions, value of various objectives and priorities, etc. Not only can we control intent at a far more fine-grained level than we can with human operators, we can run trial after trial, analyzing results and changing weights until we not only fine tune the AI commander's interpretation of our intent but also fine tune our own intent.

    Then, once we have a match, we run the scenario more. We throw it curveballs: units behaving illogically, more units than it expected, weather that degrades platform performance. And we analyze those variations. And fine tune more.

    And all of this takes data - massive amounts of data. Data is how AI is trained. While we cannot analyze all the dots pointing to 9/11 effectively, that is a data set we can use to train future AI analysts. We know what we saw and we know what happened. The trick is providing enough data sets that an AI can start to spot relevant connections. Now, an AI intelligence analyst is a long way off. That is such a wide problem, and AI doesn't do well with wide problems with millions of variables, just as humans don't. AI will be used for spotting faces in images and finding significant items in satellite imagery far before it is able to take millions of different types of data and form a complete picture. This also requires data, and it's a good thing we're getting it, even if we're not yet completely capable of using it.

    What AI is good at is narrow scenarios - things like "fly this path and eliminate enemy air defenses," where a swarm of high end drones can react instantly and simultaneously to radar emitters, missile batteries, enemy fighters, etc. I think human air tactics will be dead within the next 100 years, probably much sooner. AI is just going to be better suited for it. AI will probably not be as useful for ground troops or at a strategic level until much later. But before long, it will own the air.
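    To make the self-play loop concrete, here is a minimal Python sketch: tabular Q-learning that teaches itself a toy take-away game purely by playing against itself. It is obviously nothing like a combat simulator; it only illustrates the "play yourself until the lessons stick" mechanism described above, and every number in it is an arbitrary toy choice.

        import random
        from collections import defaultdict

        # Toy game: a pile of stones; players alternate taking 1-3 stones
        # and whoever takes the last stone wins.  Optimal play is to always
        # leave the opponent a multiple of 4.
        ACTIONS = (1, 2, 3)
        ALPHA, EPSILON = 0.5, 0.1
        Q = defaultdict(float)   # Q[(pile, take)] = value of the move to the mover

        def legal(pile):
            return [a for a in ACTIONS if a <= pile]

        def choose(pile):
            # Epsilon-greedy: mostly exploit what we've learned, sometimes explore.
            if random.random() < EPSILON:
                return random.choice(legal(pile))
            return max(legal(pile), key=lambda a: Q[(pile, a)])

        def train(episodes=20000, start=21):
            for _ in range(episodes):
                pile = start
                while pile > 0:
                    a = choose(pile)
                    nxt = pile - a
                    # Zero-sum backup: taking the last stone is worth +1;
                    # otherwise our value is minus the opponent's best reply.
                    target = 1.0 if nxt == 0 else -max(Q[(nxt, b)] for b in legal(nxt))
                    Q[(pile, a)] += ALPHA * (target - Q[(pile, a)])
                    pile = nxt

        train()
        # After training, the learned policy leaves multiples of 4 wherever it can:
        for pile in (5, 6, 7, 9, 10, 11):
            print(pile, "->", max(legal(pile), key=lambda a: Q[(pile, a)]))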

    1. " What AI is good at is narrow scenarios."

      That's a very good comment. I don't agree with the fundamental concept but it's still a logical, reasoned, well written comment that furthers the discussion - exactly the kind of comment I look for and love!

      I think the quote I highlighted sums up both the situation and my reservations. The example you provided about Go is apt. With a situation (the game) that has a very small set of rules and never changes, yes, an adaptive AI can be trained to excel. But for situations that have nearly infinite factors and nuances and "rules" that change constantly, the AI simply cannot train to a useful level. Could we plan a single operation, well in advance, and train an AI UAV to run it perfectly? Sure. And if our entire war was that one operation then, wow, we'd be golden! But it's not.

      Consider the WWII Pacific campaign. The Japanese started by not defending the beaches (Guadalcanal), then they defended them robustly, then they abandoned the beaches in favor of inland defenses. They continually changed the "rules".

      You even sort of acknowledge this difficulty with your statement that AI will probably not be as useful for ground troops or at a strategic level.

      The other thing that an AI can't do is call on memory of similar events. Humans do that all the time to shorten the learning curve. We may remember historical events or our own experiences. An AI, once it has "deleted" a version, can't recall that version and what it meant when future conditions change. AI is the epitome of reinventing the wheel, and the middle of an air battle is no time to try to reinvent the wheel. If you can't remember a historic event or experience and learn/adapt on the fly when faced with a change in circumstances, then you die. An AI has nothing to draw on or change - it has to run its programming out to the end.

      You mention learning to stop the next 9/11 but that is exactly why the AI can't succeed. The next 9/11 won't be the same as the last one. For example, who could have predicted shoe bombs? The adaptive AI is the classic example of learning to fight the last war/battle/incident. Yes, facial recognition and other helpful techniques will be useful but that hardly constitutes the type of AI we're talking about.

      Again, let me say that while I disagree with some (not all) of what you said, it was an excellent comment. My agreement is not a requirement!

      I hope you respond and continue this discussion!

      You also hinted, at the very end, at an aspect that has intrigued me (not necessarily in a good way) and that is strategic AI that can analyze geopolitical, financial, economic, military, and cultural factors and devise a strategy or, on the somewhat lower end of the scale, war/battle management. Raytheon is pushing exactly that with their latest battle management software offering and it scares the heck out of me - that our military professionals might be so inept as to take guidance from a software program!

  7. Although Big Data is problematic, it reveals existing problems, rather than creating new ones.
    The increase in communications led to an increase in the control exercised by senior officers. In perhaps the most ridiculous example, the C-in-C of the 2.5-million-strong United States Armed Forces took personal control of a 79-man raid to kill 4 men.
    The "courageous restraint" of Afghanistan saw a 4-star general required to sign off on firing a 60mm HE mortar round.

    If I download the Uber app, I can see every available Uber vehicle for miles around and make a booking. There is no technological reason the same cannot apply militarily, but there are institutional prejudices that lead to the hoarding of information.

    1. "it reveals existing problems, rather than creating new ones."

      I think you're on to something significant and I think (?) I grasp what you're getting at but would you please consider expanding on your idea? I'd love to hear more about it!

    2. More on the drone side.

      Is there any data out there on whether the US has realistically stressed its drone forces? I'm thinking a combination of hacking the control centers and wide-ranging jamming - the kind a near-peer/peer would do, not just flying over the Taliban. My guess: no. I accept classified results but, given the money tossed about to buy an F-35, I am OK with tax dollars ruining first-tier gear that's been set to run by wire and breaking a whole pre-built test center via hacking.

      Even a budget item to that effect would feel good.

      By example, I am excited and impressed at the ability the USN demonstrated with the RIM-174 for surface targeting by sinking a frigate. I would be far happier if the CIWS had been active and the target had a bolted-on SeaRAM as well.

    3. "I am excited and impressed at the ability the USN demonstrated with the RIM-174 for surface targeting"

      Not to burst your bubble but the Standard missile has had an anti-surface mode since back in the 1970s or so. I don't know why the Navy tried to make it sound like some new capability. As a historical example, the USS Simpson and USS Wainwright launched Standard missiles at an Iranian Combattante fast attack craft during Operation Praying Mantis in 1988.

      The Army claims to do some fairly stressful electronic warfare training but I have no details.

      The most stressful exercise that I'm aware of is the Pentagon's "bounty hacker" contests where computer hackers are paid to find holes in various computer system's defenses. It's been quite successful and quite scary to think that amateur hackers could find the number of holes that they do. The Pentagon has paid out hundreds of thousands of dollars in bounties to successful hackers - money well spent!

      I know that about the Standard missile series. Perhaps I might better have said: I am glad to see the ability is still there in the latest missiles. Back in the 70s the Harpoon was still world class but the USN let that die. In any case, what I was trying to point out was the penchant for simple/deterministic testing.

      Thanks for the info on the Pentagon's hacking tests. I'll have fun looking that up.

      Err, sorry, the grammar was a bit choppy in that reply - one should not post and talk to a small child at the same time.

    6. "Thanks for info Pentagon testing on hacking. I'll have fun looking that up."

      If you're not already aware of it, you might want to look into the Navy's Self Defense Test Ship, which is a remote controlled ship used for live/dangerous testing of weapon systems. The current test ship is the ex-USS Paul F. Foster (DD-964). Bafflingly, the Navy has refused to equip a test ship for Aegis/Standard, which is the Navy's core AAW system!

    7. Thank you again. Perhaps they prefer the results of canned tests to finding out how USN weapons would fare against a peer/themselves?

      An aside: in terms of testing, I was delighted to see the USN expand its purchase of the GQM-163A target missiles and that the French used one in a test that worked (that is, they shot it down). Can't ascertain how difficult it was, however.

  8. K.I.S.S. We have this unmanned B.S. because:

    a- the geeks say that unmanned is the way forward and it's technically possible (i.e., the same people who didn't think a manned space program necessary)

    b- It's cheaper (you blow that fallacy up)

    c- Because they (mainly civilians who haven't served) are scared/envious of warriors, and that is what fighter/attack carrier pilots really are... or were in my day.

    But here it is, 20 years in with this drone stuff... and what do we really got? Not much, or not enough... Regardless, continuing on this BS path we will end up not having real "warriors" to control war, just trained geeks/police types. At that point it ain't "war", sportsfans. It just becomes killing, like one of those sci-fi movies or an Omaha hog processing plant.

    No thanks. Your "Brave New World", not mine.

    b2

    1. Okay, I'll put you down as undecided!

      "we will end up not having real "warriors" to control war just trained geeks"

      Now that's a fascinating thought that I hadn't considered. Currently, our CAGs (I know that's not the current term) and carrier Captains are pilots. Where do we get these men when there are no more pilots? Where do we get carrier admirals who know how to use an air wing when there are no pilots?

      You've raised a great point!

  9. I'm not undecided, CNOPs. Drones do have their narrow uses... however, delivering weapons and conducting war autonomously? No.

    Re the warrior ethos requirement - it is a significant point and I bring it up all the time. It's not just a philosophic point; I was one once and this concerns me - leaders come from somewhere. But no, the fools rushed in with gizmos (all take GPS/bandwidth) 20 years ago, before 9-11 even, because they could, yet at the same time we simply cannot seem to design and build an air vehicle that performs as well as its predecessor system... We consider the P-8 a great achievement! We're screwed, it seems. No way out, either.

    More to come with the Boeing award for MQ-25..

    b2

    1. "I'm not undecided CNOPs."

      I know that. That was humor. Your position was crystal clear!

    2. "Drones do have their narrow uses.."

      One of my recurring themes is that I don't see all that much use for drones in a peer war because they're just not survivable.

      What uses, however narrow, do you see for drones?

  10. Suicide recon machines for high surface-to-air threat environments, border surveillance, artillery spotting/BDA roles, etc.
    I have attempted to explain why the organic carrier tanker is not an easy mission to control, EMCON-wise or otherwise. I have failed to explain it adequately because I can't emphasize enough how much human airmanship it takes... I know too much.
    B2

