AmiGanguli
Junior Member
Re: J-xx
> Unless the AI is intended to cheat, the AI still acts under what is called the fog of war. Otherwise the AI would be acting in a way inconsistent with the fog of war.
You guys are driving me nuts with the whole video game thing. The bottom line is that video games don't rely on sensor data. They _simulate_ the fog of war. Thus video game developers don't have to think about the really hard problems in AI.
When I talk about acting on incomplete information, I'm talking about deciding whether and where there is a threat based on conflicting and incomplete sensor data.
If you could rely on your sensors then you could probably also rely on your data-link, in which case this whole issue is moot. You can just have a human pilot operating the plane remotely.
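To make that distinction concrete, here's a toy sketch (my own invented numbers and weights, not anyone's real system). A game engine has the ground truth and merely masks it; a sensor-driven system only ever sees noisy, conflicting reports and has to infer the truth from them:

```python
import random

# --- Game-style fog of war: the truth exists, it is merely hidden -------
enemy_position = (42.0, 7.0)          # the engine knows this exactly

def game_query(player_pos, sensor_range=10.0):
    """Return the exact enemy position iff it is inside the visibility mask."""
    dx = enemy_position[0] - player_pos[0]
    dy = enemy_position[1] - player_pos[1]
    visible = (dx * dx + dy * dy) ** 0.5 <= sensor_range
    return enemy_position if visible else None   # perfect data, or nothing

# --- Sensor-style fog of war: no truth available, only noisy reports ----
def sensor_reports():
    """Two 'sensors' observing the same target; they will disagree."""
    radar = (enemy_position[0] + random.gauss(0, 2.0),
             enemy_position[1] + random.gauss(0, 2.0))
    irst = (enemy_position[0] + random.gauss(0, 0.5),
            enemy_position[1] + random.gauss(0, 0.5))
    return radar, irst

def fuse(radar, irst, w_radar=0.1, w_irst=0.9):
    """Naive weighted fusion of two conflicting position reports."""
    return tuple(w_radar * a + w_irst * b for a, b in zip(radar, irst))

radar, irst = sensor_reports()
print("radar:", radar, "IRST:", irst, "fused estimate:", fuse(radar, irst))
print("game engine:", game_query(player_pos=(40.0, 5.0)))
```

Everything hard lives in that `fuse` step: how much to trust each report, and when to declare that the contact is real at all. The game AI never has to answer either question.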
> Face recognition is not important for a dogfight AI. Plane recognition is. Currently, IIR missile seekers _are_ already doing plane recognition. MAWS and RWRs are already doing recognition.
Under much more limited conditions than what you would need to make a fully automated fighter work. Once the pilot has determined where the enemy is (with the help of sensors), he can tell the missile to track an object. The reason countermeasures work is because the missiles can be fooled. A human pilot wouldn't be fooled by flares, for example, but a missile can be.
For a missile, being fooled by flares isn't the end of the world. The missile is wasted, but missiles are relatively low cost and you can fire another one. Not so if the fighter itself is a robot and it starts shooting its entire missile payload at a decoy flare.
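Just to put rough numbers on that asymmetry (every figure below is invented, purely for illustration):

```python
# All figures are invented purely for illustration.
p_decoyed    = 0.3      # chance a given shot is spoofed by countermeasures
missile_cost = 0.4e6    # dollars per missile
payload      = 6        # missiles carried
fighter_cost = 80e6     # dollars per airframe

# A fooled missile wastes one round:
print(f"expected waste per shot: ${p_decoyed * missile_cost / 1e6:.2f}M")

# A fooled autonomous fighter can dump its whole payload on decoys and
# then be a defenceless airframe; the worst case is a different league:
print(f"worst case for a fooled robot: "
      f"${(payload * missile_cost + fighter_cost) / 1e6:.1f}M")
```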
> Aircraft maneuvering happens to be something fairly easy for modern computers because it is essentially still computed logic.
Yes, and I've said repeatedly that maneuvering isn't the problem.
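For what it's worth, this is exactly why it isn't: holding or changing a flight state is a classic closed-loop control problem. A textbook PID loop against a toy airframe model (made-up gains and lag, nothing real) already does the job:

```python
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=0.02):
    """One PID update; state carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, (integral, error)

target = 90.0                       # commanded heading, degrees
heading, rate = 0.0, 0.0
state = (0.0, target - heading)     # seed prev_error to avoid a startup kick
for _ in range(500):                # 10 simulated seconds at 50 Hz
    cmd, state = pid_step(target - heading, state)
    rate += (cmd - rate) * 0.1      # crude first-order actuator/airframe lag
    heading += rate * 0.02
print(f"heading after 10 s: {heading:.1f} deg")   # settles near 90
```

Deterministic, well-understood, and easy to verify. That's the part computers are good at; deciding _what_ to maneuver against is the part they aren't.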
> I am sorry to tell you, but human judgement does not participate in ECM, flare, and decoy recognition and discrimination. All of these are strictly automated.
And not accurate enough to rely on. The computers provide hints, but the pilots use their judgment.
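To illustrate why the output can only ever be a hint, here's the rough shape of such a discrimination rule (thresholds completely invented; the real seeker logic is classified and far more elaborate). Any fixed gate can be beaten by a signature that stays inside it:

```python
# Toy counter-countermeasure logic with invented thresholds.
def looks_like_flare(track):
    """Crude kinematic/signature gate for rejecting a decoy flare."""
    if track["intensity_rise"] > 50.0:   # flares bloom almost instantly
        return True
    if track["deceleration"] > 30.0:     # flares slow rapidly in the airstream
        return True
    if track["temp_kelvin"] > 1800.0:    # pyrotechnics burn hotter than exhaust
        return True
    return False

aircraft = {"intensity_rise": 2.0,  "deceleration": 1.5,  "temp_kelvin": 900.0}
flare    = {"intensity_rise": 80.0, "deceleration": 45.0, "temp_kelvin": 2100.0}
sneaky   = {"intensity_rise": 30.0, "deceleration": 20.0, "temp_kelvin": 1500.0}

for name, t in [("aircraft", aircraft), ("flare", flare),
                ("low-signature decoy", sneaky)]:
    verdict = "rejected as flare" if looks_like_flare(t) else "tracked as target"
    print(f"{name}: {verdict}")
```

The "low-signature decoy" sails straight through the gates. A pilot weighs that classification against everything else he knows; a fully autonomous fighter has to live with it.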
> That is straight human error. It was a human who made that decision.
True. The captain appears to have been a nut. But the decisions were made based on information the computers provided. A competent captain would have questioned the data.
... Ami.