Artificial intelligence (AI) seems to be the future of
warfare or, at the very least, a major component of it. Heck, we already have it to varying degrees
and have for many decades. What we need
to address is what level of control we cede to AI, under what circumstances, to what extent we allow it to replace human actions, and what degree of ultimate control we maintain over it.
Before we go any further with this discussion, we need to
define what AI is.
At the most simplistic level, AI is nothing more than
machine (programming) logic which takes inputs (for example, sense an enemy),
performs calculations and analysis, and generates outputs (for example, shoot
the enemy) without requiring any human action.
This can be as simple as an air-to-air missile which senses a heat
source (input), calculates an intercept course (calculation), and flies toward
it (output) and then senses the proximity of an object (input) and detonates an
explosive (output). This level of AI is
very basic but very efficient and effective.
We’re all comfortable with this level of AI and have no moral qualms
about using it. Of course, one hopes
that the heat source was enemy rather than friendly although accidents have
occurred.
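As a rough illustration of that input-calculation-output chain, here is a minimal sketch in code. Every name and number is invented for illustration; this is the shape of the logic, not any actual missile's software.

```python
def guidance_step(seeker, airframe, warhead, proximity_m=10.0):
    """One pass of the input -> calculation -> output loop described above."""
    target = seeker.strongest_heat_source()      # input: sense a heat source
    if target is None:
        return                                   # nothing sensed, nothing to do
    course = airframe.intercept_course(target)   # calculation: intercept course
    airframe.steer(course)                       # output: fly toward it
    if seeker.range_to(target) < proximity_m:    # input: sense proximity
        warhead.detonate()                       # output: detonate
```

Note that no human action is required anywhere in the loop, which is the point.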
At the other end of the spectrum is the Terminator (from the
movie series) AI which has all the thinking capability of a human enhanced by
electronic sensors and processing speed.
Currently, our technology lies in between the two
extremes. We have some fairly advanced
input and analysis chaining (conditional algorithms that attempt to consider and
evaluate multiple inputs) leading to condition-based outputs. We do not, however, come anywhere near
Terminator AI.
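A hypothetical sketch of what such conditional chaining might look like, with all inputs and thresholds invented for illustration:

```python
def classify_contact(radar_confidence, ir_signature, iff_response, speed_kts):
    """Illustrative multi-input conditional chain producing a
    condition-based output. All thresholds are made up."""
    if iff_response == "friendly":
        return "hold fire"                       # one input can override the rest
    if radar_confidence > 0.8 and ir_signature > 0.6 and speed_kts > 400:
        return "engage"                          # multiple inputs agree
    if radar_confidence > 0.5 or ir_signature > 0.5:
        return "track and alert human operator"  # ambiguous: defer to a human
    return "ignore"
```

Today's 'advanced' systems are, in essence, deep stacks of this kind of logic, which is a long way from Terminator territory.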
Consider a recent example of flawed AI in which an
auto-driving vehicle was involved in an accident (precipitated by another
human-piloted vehicle) and, after the event, chose to drive to the side of the
road, dragging the injured pedestrian twenty feet and stopping on top of the
person’s leg where the person remained trapped until responders were eventually
able to free them. Even the dumbest human driver would have known not to move until the injured pedestrian was located and clear. This illustrates just how far
we are from true AI even in a situation that an ordinary person would deem
simplistic and with only one viable action:
remain motionless until the pedestrian’s location can be ascertained.[1]
Let’s look at some of the arguments for and against AI and
caveats regarding its use.
Arguments for AI
Accuracy. Human oversight is often detrimental, even harmful.
The Vincennes incident occurred only because humans were ‘in the
loop’. The AI (Aegis) had correctly
identified and assessed the situation but humans came to a different, incorrect
conclusion. Had we allowed the AI to
operate without oversight, the incident would not have happened.
Speed. Human
assessment is too slow for the modern battlefield. When an enemy missile appears on the horizon,
traveling at Mach+ speed, there is no time for human decision making. Only AI can react with sufficient speed. If we’re going to send unmanned ships out
onto the naval battlefield, we need to grant them full authority or we degrade
their effectiveness.
Ethical Disadvantage.
Enemies will ignore collateral damage and unintended consequences. China and Russia, among others, will not
hesitate to turn AI systems loose without regard to civilian casualties or even
friendly fire. Countries that have embraced human wave attacks and the mass murder of their own citizens will not be particularly squeamish about the possibility of unintended lethal effects if it
means they can accomplish their objectives.
If we do not embrace AI we will be at a significant disadvantage.
Arguments Against AI
Dependency. We
run the risk that the use of AI will degrade our innate human abilities. For example, we've seen that the use of GPS has produced a dependency on, even an addiction to, GPS and a loss of our ability to navigate and locate without it, despite having done so for thousands of years prior. This has already been a
factor in multiple ship groundings and collisions.
Similarly, dependence on AI will surely turn our ability to think and analyze into a lost skill.
We’ll come to depend on AI for our thinking and analysis and will be
paralyzed and ineffective in the absence of it.
We’ve all witnessed the phenomenon of younger people who are wholly dependent on calculators or cash registers to determine change. They have zero ability to do
simple arithmetic in their heads.
It hardly requires any foresight to recognize that military
leadership – already an ineffective and flawed group of thinkers – will quickly
become dependent on AI if for no other reason than to absolve themselves of any
hint of responsibility and accountability (blame). Do we really want to cede our thinking to AI
and become just the unthinking, physical hands for a computer program?
Novelty. It is
impossible to anticipate, and program for, every contingency. Thus, at a critical but unexpected moment our
AI may fail (the pedestrian dragging incident, for example). Having become dependent on AI, how would we
even recognize a flawed AI output (garbage in, garbage out)? This is the Internet or calculator
phenomenon. If the Internet or a calculator
says something, it’s assumed to be right.
We’ve lost our ability to evaluate the output for ourselves.
Susceptibility.
AI is just computer programming.
We’ve already seen that any computer or network can be hacked. It would be foolish to depend on something that
can be easily hacked/attacked.
Caveats
If we don’t allow the AI full control, we reduce its effectiveness. Human oversight is simply
too slow to allow an AI system to function at maximum effectiveness. Our enemies will use AI to full
advantage. If we opt not to do the same,
we’ll essentially be fighting with one hand tied behind our back.
Solution
Bounds. We can
maintain control of AI via bounded authority.
In other words, we can turn AI loose with full authority but limit the
time or area of that authority. For
example, we can grant an AI system full authority for the next 24 hours and
then the system defaults back to human control.
Or, we can grant an AI system full authority within a designated
geographical area, outside of which the system defaults back to human
control.
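To make the concept concrete, here is a minimal sketch of a bounded-authority check. This is purely illustrative; the class, the helper names, and the geometry test are all assumptions, not any real weapon system's interface.

```python
from datetime import datetime, timedelta, timezone

class BoundedAuthority:
    """Hypothetical: full autonomous authority only within a time
    window and a designated geographic area."""

    def __init__(self, duration: timedelta, area):
        # Time bound: when it expires, the system defaults back to human control.
        self.expires = datetime.now(timezone.utc) + duration
        # Geographic bound: 'area' is assumed to offer a contains() test,
        # e.g., a polygon over the designated engagement zone.
        self.area = area

    def may_act_autonomously(self, position) -> bool:
        within_time = datetime.now(timezone.utc) < self.expires
        within_area = self.area.contains(position)
        return within_time and within_area

# Usage sketch: grant full authority for 24 hours inside a patrol box;
# outside either bound, the system must request human authorization.
#   grant = BoundedAuthority(timedelta(hours=24), patrol_box)
#   if not grant.may_act_autonomously(current_position):
#       request_human_authorization()
```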
The magnitude of the bounds would be determined by the
degree of ‘faith’ we have in the AI and the degree of risk we’re willing to
accept. For example, do we have faith,
based on previous experience and testing, that an AI weapon can distinguish
between an enemy ship and a civilian one and are we willing to accept that a
harmless fishing trawler might be attacked if it means we can sink an enemy
ship?
____________________________
Friday, October 20, 2023
Ripped from the Headlines
I just read a single headline that blindingly exposed the idiocy
of the US (and the West, in general) military’s fascination and obsession with information,
networks, and artificial intelligence as the basis for future warfighting
capability. From a Newsmax article, this
headline perfectly sums up the situation [1]:

Hamas Attack Shows Limits of AI, Tech for Global Security
Despite the cumulative monitoring by the entire Western
world and, specifically, the intensely focused monitoring by Israel, Hamas
managed to utterly surprise Israel with a massive attack that included
parasails, thousands of rockets, hundreds of vehicles, boats, etc. Despite one of the world’s most extensive and
sophisticated sensor systems of radar, optics, ground vibration sensors,
observation towers, and human intelligence backed by the West’s satellite and
signals intelligence, the Israelis and the West completely missed the
preparations involved in a massive assault by Hamas. All of that high tech, state of the art
(state of the universe?) surveillance focused on one tiny strip of land and we
completely missed the months/years long assault preparations
… and the US wants to base its entire future
military capability on that demonstrably ineffective technology.
Despite all evidence to the contrary, the US military has
placed its bet for future warfare on the pursuit of perfect situational
awareness. Armor has been ignored. Firepower has been relegated to an
afterthought. Logistics is a distant
tertiary concern, if even that. Our
entire concept of future warfare is based on perfect sensing: large, all-encompassing, regional sensor
networks that see everything. Of course,
no one has yet explained how, even if this could be achieved, that would
destroy the enemy. Destruction requires
overwhelming amounts of firepower and we have no interest in firepower. But, I digress.
So, despite the fact that some of the most concentrated and
intense surveillance and data collection the world has ever seen was focused on
a tiny strip of land and failed, utterly, we’re betting we can flawlessly
monitor all of China, the entire East/South China Seas, and all of the surrounding
areas, thereby assuring our victory over a hapless and helpless China? That’s some world-class fantasy, there.
Just to remind ourselves that the Hamas assault was not some
sort of one-off, fluke occurrence, let’s examine some other well known, real
world examples of the failure of perfect situational awareness.
Afghanistan Drone Strike – During the US’ Afghanistan
evacuation, the US executed a drone strike on terrorist leaders in a vehicle
based on perfect observations from a UAV.
The only problem was that the target was actually an innocent family.
USS Mason – Despite their own Aegis radar system,
scores of regional surveillance assets, nearby ship sensors, overhead
satellites, and extensive signals intelligence, the USS Mason falsely detected
three separate missile attacks and launched defensive missiles.
Malaysia Flight 370 – A Malaysian Boeing 777 vanished
from one of the world’s most heavily travelled and monitored regions despite
multiple radars, IFF, an established flight plan, regular communications, and
satellite surveillance.
Vincennes – Despite their own Aegis radar system,
scores of regional surveillance assets, nearby ship sensors, overhead
satellites, and extensive signals intelligence, the USS Vincennes mistakenly
shot down a commercial airliner.
Riverine Boat Seizure – Despite GPS navigation,
regional fleet surveillance assets, and unhindered communications, two riverine
boats obliviously got lost and wandered into Iranian territorial waters where
they were promptly seized.
Port Royal Grounding – Despite GPS navigation,
regional fleet surveillance assets, established charts, automated navigation
software, and visible landmarks, the USS Port Royal got lost and grounded.
McCain and Fitzgerald Collisions – Despite Aegis radar,
navigation radars, EO/IR sensors, regional surveillance assets, and IFF systems
on both the Navy and commercial ships, the destroyers managed to run into
giant, hulking commercial ships in known, well defined, shipping lanes.
Helo Shootdown – In 1994, two F-15C aircraft misidentified
and shot down two US Blackhawk helicopters enforcing a no-fly zone over
Iraq. This occurred despite the
sophisticated radars and sensors on the F-15s as well as concentrations of
regional sensors aimed at the no-fly zone.
I can continue with example after example but these should
suffice to make the point. In each case, overwhelming surveillance technology and situational awareness assets were in place, and yet they failed spectacularly.
Making the failures worse is that none occurred in the face
of cyber or electronic opposition as would be the case in a war. Whatever degree of surveillance success we
enjoy now (none?) will be greatly reduced in a real war when the enemy applies
cyber, electronic, and kinetic attacks against our surveillance and network
systems.
The only possible conclusion is that surveillance technology
is highly unreliable and ineffective.
On a closely related note, we’ve talked at length about the
dependency and vulnerability that inevitably develops when technology replaces
human skill. Discussing the Hamas
assault, a senior Israeli reserve officer clearly pointed out the problem with
technology and dependency:

“We were living in an imaginary reality for years,” …“We became over-reliant on the sophisticated underground barrier, on technology.”[2]

That’s exactly what happens when technology replaces human skill – we become blind, unaware, and dependent and, like any addict, we lose the ability to function and reason.
Of all the things we could possibly base our future military capability on, information, data collection, and networks are the least effective and least desirable.
____________________________

[1] Newsmax website, “Hamas Attack Shows Limits of AI, Tech for Global Security”, https://www.newsmax.com/platinum/hamas-attack-israel/2023/10/19/id/1138867/

[2] Ibid.
Wednesday, September 9, 2020
AI Beats Fighter Pilot - Not Really
You’ve all heard about the recent test which pitted an
artificial intelligence (AI) against a human fighter pilot in a simulator and
the AI won 5 of 5 dogfights. I can’t
tell you how many people I’ve heard use that event as ‘proof’ that AI is more
capable than humans and that we should be moving immediately to pure unmanned
air combat aircraft. Well, the results
are not what they appeared to be. The
link below is to a review of the event by a former pilot and includes the
actual simulator footage of the fights, analysis of the fights, explanations of
the conditions of the fights, and discussion by the AI programmer
representative.
I was stunned by the overwhelming advantage that the test
conditions gave the AI. For example, the
AI was given perfect 360 degree situational awareness and perfect knowledge of
not only its own performance but that of the human pilot’s aircraft, which is utterly unrealistic. Another example is that the AI merely had to have the target in a conical field of view to count as a gun hit – not actually place the pipper on the target. And so on.
The results proved absolutely nothing. I’m not going to go any further with this
because the linked video does a much better and more complete job of analysis
than I can. All I’ll say is that you owe
it to yourself to watch the video. It
will completely change whatever impression you had about the results.
I went into it thinking that I’d see some pretty impressive
AI and I came away from it thinking the event was almost a waste of time for
all parties and was tantamount to the usual staged war games and exercises that
the military is so famous for.
Check out the video at the link below and see for yourself.

Link: https://www.youtube.com/watch?v=ziCQqmEllZo
Monday, September 7, 2020
Artificial Intelligence
Artificial Intelligence (AI) may not mean to the military
what it means to casual observers. Most
of us think of the Terminator when we think of AI. We recognize that’s a far-off fantasy but
that’s our vision. The military, on the
other hand, thinks of AI as a way to assist in more comprehensive
micro-management. They don’t state it in
those terms but even a cursory examination of the military’s goals shows this
to be the case.
____________________________________
“We
believe that the current crop of AI systems today are going to be cognitive
assistance,” he [Nand Mulchandani, acting director of the Joint Artificial
Intelligence Center] said. “Those types of information overload cleanup are the
types of products that we’re actually going to be investing in.” (1)
So, the military seems to view AI as a data ‘clean up’ or
streamlining service which will, of course, be used by remote commanders to
more closely micro-manage their subordinates who are the local commanders.
The military is also intensely interested in micro-managing
the assets in the field by connecting sensors and weapons and exerting control
over them (see, “Command and Control”).
…
DoD’s focus on developing JADC2 [ed., Joint All-Domain Command and Control], a
system of systems approach that will connect sensors to shooters in near-real
time. (1)
AI is also viewed as a common connector between disparate
platforms – whatever ‘platforms’ means in this context – so as to, again,
enable more effective micro-managing.
“JADC2
is not a single product. It is a collection of platforms that get stitched
together — woven together ― into effectively a platform. And JAIC is spending a
lot of time and resources focused on building the AI component on top of
JADC2,” said the acting director. (1)
We see, then, that the military views AI not as an
autonomous, near-sentient, battlefield killing machine but as a tool to aid in
the exercise of even closer and tighter micro-managing. They don’t say it in exactly those words but
that’s what they want. The Admirals and
Generals grew up in a zero-defect environment where you didn’t trust your
subordinates to operate without your moment-by-moment direction. Ship command, for example, was an
excruciating period of time where you, as Captain, prayed every minute that
none of your crew would make a mistake that would result in the dreaded ‘loss
of confidence’ pronouncement from your superiors and the loss of your command
and career. You achieved this by
micro-managing every action aboard your ship.
Not surprisingly, then, today’s Admirals want to apply the same model of
command – meaning micro-management – to large scale Command and Control and
they see artificial intelligence as the way to do it. The more you can see what your subordinates
see, the more closely you can micro-manage them and AI is just the thing to
allow that – so the Admirals believe.
I’ve repeatedly posted and commented about the ills of
micro-managing. There’s no need to
repeat myself. What I’d like to address
is not the self-evident evils of micro-management but, instead, the proper use
of AI in the conduct of a war. What our
Admirals (and civilian leadership!) should be using AI for is not assisting in
micro-managing but, instead, assisting in the formulation of strategy. We should be using AI to identify enemy patterns
of resource use that might indicate strategic vulnerabilities, identify overall
enemy force movements that might presage future operations, assess enemy
leadership performance and patterns that might allow us to predict actions and
decisions, track enemy munitions expenditures to identify weapons usage
patterns and inventories, and so on.
None of this is directed toward telling the individual soldier or sailor
which way to turn and when to pull the trigger.
None of this is directed toward telling the individual ship’s Captain
what course to steer. Admirals and
Generals should have much bigger problems to worry about than setting a course
or monitoring individual soldiers.
[Image: Strategy - Not Micro-Managing]
Properly used, AI has the potential to be a strategic-level
aid for upper command. One of the
difficulties, I’m sure, is that our current – and future – leadership has been ‘raised’
in an environment totally lacking in strategic and operational thought. Thus, it’s undoubtedly difficult, if not
downright impossible, for them to even conceive of using AI for strategic
purposes. They’re simply not wired to
think about strategy. They never had to
before so why would they now? Somehow,
we need to break out of that limited thinking and start thinking strategically
instead of micro-managing. AI can offer
valuable assistance but only if properly applied.
(1)C4ISRNet website, “Pentagon AI center shifts focus to joint war-fighting operations”, Nathan Stout, 8-Jul-2020, https://www.c4isrnet.com/artificial-intelligence/2020/07/08/pentagon-ai-center-shifts-focus-to-joint-warfighting-operations/

Wednesday, July 8, 2020
Artificial Intelligence - Garbage In, Garbage Out
Artificial Intelligence (AI) is the current fad sweeping the
military. The military is feverishly
working on vast, interconnected, AI battle management systems. AI will allow us to vanquish opponents who
outgun and outnumber us, or so the fairy tale goes.
There’s just one small problem (aside from the utter lack of
firepower in this vision!). Who programs
AI? Idiots, that’s who!
There is a fundamental principle in computer programming
that has been recognized since the first program was written: GIGO, which stands for Garbage In, Garbage
Out. If you feed a computer garbage
input, data, and instructions the computer will dutifully produce garbage
output. In other words, the output is
only as good as the input.
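A trivial, entirely invented example of the principle: the arithmetic below is executed flawlessly, yet one garbage input still yields a garbage answer.

```python
def average_range_nm(readings):
    """Dutifully averages whatever it is handed."""
    return sum(readings) / len(readings)

good = [98.0, 101.5, 99.2]       # plausible radar ranges (nm) - invented data
bad  = [98.0, 101.5, -9999.0]    # same data with one garbage reading

print(average_range_nm(good))    # ~99.6 nm - a useful answer
print(average_range_nm(bad))     # ~-3266.5 nm - garbage out, computed perfectly
```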
Before we go any further, let’s consider an analogy. Say I want to learn calculus. No matter how many times I observe an idiot
trying – and failing – to solve basic arithmetic, I can’t learn calculus because
my ‘teacher’ didn’t know it and couldn’t demonstrate it. Similarly, AI can’t learn something that
isn’t there and what isn’t there is combat competency. No one in the US military has it.
So, back to the problem of idiots programming military AI -
this is the Garbage In portion.
As we’ve thoroughly documented, our professional warriors
are complete morons when it comes to strategy, doctrine, and tactics. Unfortunately, these are the exact people who
will be providing the input and instructions to the AI program. No, they won’t be doing the actual
programming – programmers will do that, but the programmers know nothing about
warfare. They’ll be programming what our
so-called military experts tell them. They’ll
be creating an AI that perfectly mimics the ineptitude of our admirals. Thus, the AI will produce idiotic output. GIGO.
It will just do it faster than the human idiots could. Humans take a while to produce
stupidity. AI programs will produce
stupidity in the blink of an eye!
Now, there’s a continuum of AI ranging from the simplistic,
‘if you see a certain set of conditions, take the following action’ to true,
self-learning, nearly sentient artificial intelligence.
Unless we have true self-learning AI, which we don’t, the AI
will be no better than the people programming it and the people programming it
are the same people who are constantly demonstrating their complete ignorance
of warfare.
A true, self-learning AI in the field of warfare can only
learn by conducting warfare and observing the results. Unless we plan to engage in a series of real
wars just so our AI can learn, this is a non-starter.
Sure, we can conduct table-top wargames for the AI to
observe but the wargames are conducted by idiots with unrealistic constraints
and premises. We’ve seen plenty of
examples of that. Look what our Marine
Commandant learned from the wargames he observed. Is that what we want an AI to learn from? An AI raised on what passes for wargames in
our military will be an idiot AI. I
can’t watch someone attempt basic arithmetic and magically learn calculus. Neither can an AI watch idiots conduct
fundamentally flawed wargames and learn warfare. The AI will simply home in on, and optimize
for, what we tell it is ‘good’. The
problem is that what we tell it is ‘good’ is horribly flawed.
If I want to program an AI to conduct combat, I need to find
a good warrior for the AI to learn from.
This is where it all falls apart.
We don’t have any good warriors!
Garbage In, Garbage Out
Thursday, March 5, 2020
Where Are You Shopping?
Of late, I’ve been seeing nothing but a non-stop parade of
proposed products from industry that are the cutting edge (and beyond) of high
technology. I’m seeing exquisitely high
tech, artificial intelligence assisted, battle management, unmanned, cross domain,
synergistic … well, that’s enough
buzzwords strung together. You get the
point. No one is offering the military
basic, simple firepower that costs next to nothing, even though that is likely the better product. Why not? Well, it’s all about where you shop.
____________________________________
On a personal level, if you shop at the local thrift store,
you’re going to see cheap, basic items.
If you shop at the Apple store, you’re going to see the latest
technology stuffed with every function and app that the designers at Apple
could conceive.
If you’re the military and you shop at Lockheed Martin,
you’re going to see AI-assisted battle management computer products. What you’re not going to see is offers of
bulk quantities of mortar shells. Which
one is likely to be of more use on the battlefield, to say nothing of which one
is more likely to work? LM is going to
offer what they know how to do and what can make them the most money. There’s nothing wrong with that. It’s what business is supposed to do. The problem is that if the military doesn’t
shop at the low level, basic firepower stores then they’re never going to see
low level, basic firepower solutions even though those solutions are far more
likely to be useful on the battlefield.
Here’s the kind of product being offered at the store the
military is shopping at.
The
Air Force is launching a next-generation airborne surveillance and command and
control technology intended to successfully synchronize air, ground, drone and
satellite assets onto a single seamless network, service officials said.
ABMS
[Advanced Battle Management System] seeks to harvest the latest
ISR-oriented technologies from current and emerging systems as a way to take a
very large step forward – and connect satellites, drones, ground sensors and
manned surveillance aircraft seamlessly in real time across a fast-changing,
dispersed combat area of operations. (1)
Wow! You can’t get
any more exquisite and less firepower than that!
Wait … I bet we can
do better. How about this,
The
Missile Defense National Team is conducting a feasibility study on connecting
the Missile Defense Agency’s integrated command and control system with the
U.S. Army’s Integrated Air and Missile Defense Battle Command System.
The
team, which is led by Lockheed Martin, consists of Lockheed Martin, Northrop
Grumman, Boeing, Raytheon, General Dynamics and other companies. This team
develops C2BMC, the system that integrates separate elements (Aegis, THAAD,
SBIRS etc.) of the ballistic missile defense system into a global network.
Through C2BMC, commanders can link any sensor, any shooter, at any phase of
missile flight in any region, against any type of ballistic threat. (2)
Outstanding! A global
network linking anything to anything. Is
that pretty much the definition of exquisite?
Is there any possible way to top that?
How about this,
Lockheed
Martin today unveiled its new HI-Vision Air-Space Integration Lab, a
state-of-the-art development and integration facility designed to enable
collaboration with customers and industry partners for future advancements in
Joint command and control (C2) across air and space domains.
“We are focusing our resources within Lockheed Martin to help address our customers’ increasing requirements for horizontally integrated, network- centric solutions,” said Al Smith, Executive Vice President, Lockheed Martin Integrated Systems and Solutions business area. …
Modeled after a functioning Air Operations Center, The HI-Vision lab creates a
collaborative environment where military and industry personnel will address
current and future architectural challenges to achieve greater synergy across
air and space go-to-war C2 systems. The lab will serve as an integration
proving ground for a wide range of current and future systems, including the C2
Constellation, Theater Battle Management Core Systems (TBMCS), E-10A Battle
Management Command and Control (BMC2), Integrated Space Command and Control
(ISC2), Distributed Common Ground Station (DCGS), and Missile Defense National
Team efforts. (3)
Now we’re integrating the totality of air and space and
doing so without any mention of firepower.
Zounds! I’m lightheaded and my
hands are trembling from excitement.
What do all these proffered products have in common? None of them involve actual firepower. You remember what firepower is, right? Yeah, it’s that stuff that actually kills the
enemy.
Another thing they all have in common is price tags that
would make even the US government take notice and gulp.
It’s all about where you shop.
When the Navy wanted a gun for the Zumwalt, did they shop at
the local 155 mm artillery gun maker who’s been cranking out artillery and ammo
for half the world’s armed forces? No, they went to BAE, maker of an exquisite, non-standard, rocket-assisted 155 mm gun, and bought a gun that, despite nominally being the NATO standard 155 mm, couldn’t use any other gun’s ammo. A
gazillion dollars spent and we got nothing out of it.
When the Navy wanted a new stealth fighter, did they go to
Grumman Iron Works (now Northrop Grumman) for the latest version of a rock
solid, basic fighter? No, they went to a
worldwide consortium representing a United Nations of suppliers for an aircraft
so complex that it’s been twenty years in development and is just barely now
entering service and has sustainment costs that even the military has acknowledged are unaffordable.
When the Navy needed a way to kill drones, did they go to the
neighborhood Oerlikon store and look at the basic, 20 mm machine gun (Oerlikon
20mm/85 KAA, for example, capable of 1000 rds/min at a range of 1+ mile) that’s
been a staple on naval ships since the Revolutionary War? No, they immediately jumped on the laser
bandwagon.
When the Navy needed a weapons elevator for their new
carrier, did they go to elevators-R-us?
No, they went to the science fiction, quantum physics store for non-existent,
theoretical electromagnetic pulse elevators.
How’d that work out? Well, I’ll
let you know if and when the Ford elevators ever get working properly.
If you shop at high tech stores you’ll get high tech
products. Unfortunately, that doesn’t
mean high performance products. In fact,
all too often, it means products that flop miserably. With high technology comes a high risk of
failure and a certainty of difficulty in support.
[Image: We Don't Need No Stinkin' Artillery, We Got Data. Let's Buy Ten Of These!]
We can’t blame the high tech companies. It’s what they do. Of course they’ll push high tech
solutions. That’s where the money is and
making money is what they’re supposed to do.
Lockheed isn’t going to suggest binoculars and a note pad for battle
management. They’re going to push bio-matrix,
artificial intelligence battle management, with nodes linked by telepathic
relays using subspace inductive communications and multi-dimensional,
interactive, spatial displays. Sure, the
binoculars and notepad are probably more reliable when the enemy starts raining
artillery shells on you but that’s not what will make Lockheed money. It’s incumbent on the Navy to know what will
really work and not be lured by shiny toys.
We need to stop shopping at the high tech stores and start
shopping at the bargain basement, firepower clearance stores.
(1)The National Interest website, “The Air Force Is Creating a System to Manage the Military's Forces in War”, Kris Osborn, 1-Mar-2018, https://nationalinterest.org/blog/the-buzz/the-air-force-creating-system-manage-the-militarys-forces-24701

Wednesday, October 26, 2016
UCAV AI
A reader put me onto this
article about artificial intelligence (AI) for unmanned combat aerial vehicles
(UCAV). According to the article, an AI
program has been developed that is capable of consistently beating fighter
pilots in simulators. If this is true,
and that’s a huge if, this is the enabler that makes UCAVs a reality and I’ll
have to readjust all my thinking about force structure, doctrine, tactics, etc.
I’m not going to recap the
article. Instead, I urge you to read it
for yourself via the link at the end of this post (1).
The basic concept of the
program is claimed to be a combination of fuzzy logic and programmatic
evolution. Fuzzy logic has been around
for a long time. The most interesting
aspect of this, to me, is the use of programmatic evolution – survival of the fittest program. Apparently, numerous,
differing versions of the program were created and exercised. As time went on, the more successful versions
survived and were utilized to create newer and better versions and so on in a
form of Darwinian evolution.
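For the curious, a generic 'survival of the fittest program' loop looks roughly like the sketch below. This is my own illustrative assumption of the technique, not the actual program from the article, whose internals were not published.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=100):
    """Generic evolutionary loop: score candidates, keep the best,
    refill the population with mutated copies of the survivors."""
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)        # best scorers first
        survivors = population[:pop_size // 4]            # selection
        population = list(survivors)
        while len(population) < pop_size:
            parent = random.choice(survivors)
            child = [g + random.gauss(0, 0.1) for g in parent]  # mutation
            population.append(child)
    return max(population, key=fitness)

# Toy fitness function (a stand-in for 'win the dogfight'): prefer genomes
# near an arbitrary target setting.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```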
I’m sure there are limits to
this program that prevent it from replacing pilots, yet. For example, I’m assuming that it’s been
developed as a one-versus-one (1v1) combat program as opposed to a
one-member-of-a-team program. An actual
pilot not only needs to be able to win 1v1 aerial duels but also function as a
member of a group and make evaluations about supporting other team members,
assess mission accomplishment versus personal survival and supporting
teammates, make decisions about mission accomplishment versus collateral damage
risks, etc. I’m sure the program can’t
do any of that. However, if they’ve
managed to create a working UCAV AI then there’s no reason they couldn’t fold
in the other aspects of piloting.
______________________________
(1)phys.org website, “Beyond video games: New artificial intelligence beats tactical experts in combat simulation”, M.B. Reilly, 27-Jun-2016