Shenyang FC-31 / J-31 Fighter Demonstrator


latenlazy

Brigadier
To be serious, I don't think it will be a huge issue. Establishing settings that define the rules of engagement will eliminate most of the worries. The activities of the machines can be monitored at all times by human operators who can choose to take over if they want to.

One of the biggest worries would be decision making: whether to pull the trigger under certain conditions. Many believe that human operators are better. That may not be the case. We humans make mistakes. We miscalculate and misread situations, especially under stress, and especially when our own lives are threatened. Mistakes then follow, such as friendly fire and collateral damage to civilians.

Sending machines in, on the other hand, could potentially eliminate these issues. Just think about it. You send a few machines in. No human lives on your side are in danger, so you won't be as stressed. Once you mass-produce those AIs, their cost won't be too high either. You can then establish very tight settings for your AIs, which means they would have to be absolutely sure who and what they are shooting at. For instance, the AIs must confirm the identity of a human target with 100% certainty before opening fire, even when their own safety is threatened, thus eliminating friendly fire and reducing collateral damage among opposing civilians. Let's face it: we can never make this kind of demand of our own soldiers. Can you imagine telling your troops that they have to be 100% sure who and what is coming at them before opening fire, even when their own lives are threatened? We can, however, make such demands of AIs.
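As a rough illustration only, something like the following sketch is what those "tight settings" might look like in practice. Every name, threshold, and the human-override hook here is hypothetical, not anything from a real system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical identification report produced by the machine's sensors/classifier.
@dataclass
class Contact:
    identity: str       # e.g. "hostile", "friendly", "civilian", "unknown"
    confidence: float   # identification confidence in [0.0, 1.0]

# The "very tight settings": the machine must be completely certain before firing.
CONFIRMATION_THRESHOLD = 1.0

def may_open_fire(contact: Contact,
                  under_attack: bool,
                  human_override: Optional[bool] = None) -> bool:
    """Return True only if the rules of engagement allow engagement.

    A human operator monitoring the machine can force the decision either way
    via human_override; otherwise the machine applies the fixed rules.
    under_attack is deliberately ignored: unlike a human soldier, the machine
    is never allowed to relax the rules to protect itself.
    """
    if human_override is not None:
        return human_override
    return contact.identity == "hostile" and contact.confidence >= CONFIRMATION_THRESHOLD

# Even while being shot at, an unconfirmed contact is not engaged.
print(may_open_fire(Contact("unknown", 0.97), under_attack=True))    # False
print(may_open_fire(Contact("hostile", 1.0), under_attack=False))    # True
```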

Let's then move a step further. Imagine both sides have AIs fighting for them. Humans sit back and watch the fight from a safe distance, and both sides agree that the outcome of the AI battle determines the outcome of the war. Then no human life is lost.
Humans make very discrete and adaptive decisions based on multiple inputs from circumstance and context. An AI's decision making might end up being less adaptive and more general. How well an autonomous AI makes a decision will largely depend on how sophisticated its decision-making and situational-awareness capabilities are, and that is going to depend on human coders. The big problem here is whether human coders understand and can program the logic of these conditions well enough not to lead to unintended consequences. Maybe this isn't as challenging as we think because of neural nets, but maybe we also don't train our neural nets as well as we think we do, because our training data encodes patterns and characteristics we don't foresee.
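As a toy illustration of that last point, here is a deliberately crude stand-in for a trained model (a nearest-centroid rule rather than a real neural net, with made-up numbers) showing how skewed training data can mislabel an input the data never covered:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: all of the "hostile" examples happen to come from one narrow
# context, so the learned features reflect that context rather than hostility itself.
hostile_train  = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))
civilian_train = rng.normal(loc=[0.0, 0.0], scale=1.5, size=(200, 2))

# A deliberately crude "model": nearest-centroid classification.
centroids = {
    "hostile":  hostile_train.mean(axis=0),
    "civilian": civilian_train.mean(axis=0),
}

def classify(x: np.ndarray) -> str:
    # Pick whichever class centroid the observation is closest to.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# A civilian observed in conditions the training data never covered lands closer
# to the "hostile" centroid and gets the wrong label.
unusual_civilian = np.array([4.0, 4.5])
print(classify(unusual_civilian))   # prints "hostile"
```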
 

vesicles

Colonel
Humans make very discrete and adaptive decisions based on multiple inputs from circumstance and context. An AI's decision making might end up being less adaptive and more general. How well an autonomous AI makes a decision will largely depend on how sophisticated its decision-making and situational-awareness capabilities are, and that is going to depend on human coders. The big problem here is whether human coders understand and can program the logic of these conditions well enough not to lead to unintended consequences. Maybe this isn't as challenging as we think because of neural nets, but maybe we also don't train our neural nets as well as we think we do, because our training data encodes patterns and characteristics we don't foresee.

Like any other weapons platform, AIs would be good for certain situations and bad for others. You don't use the F-22 as a heavy bomber. So eventually we will figure out what kinds of missions are ideal for AIs, like defusing a bomb... We could develop a whole line of AIs specifically for bomb squads.

Another possibility would be crowd control. No human police officer would have to be thrown into an angry mob. A lot of the time, officers are simply trying to protect themselves and make mistakes (like opening fire into the crowd). Instead, AIs and drones with absolutely strict rules of engagement would be sent in. They would be absolutely prohibited from hurting humans, even when they themselves are being damaged. In fact, these AIs wouldn't even need arms and legs, just a block on some kind of wheels, so there would be no chance of them hurting protesters. Like I said before, we simply cannot issue this kind of order to human police officers.
 

latenlazy

Brigadier
Like any other weapons platform, AIs would be good for certain situations and bad for others. You don't use the F-22 as a heavy bomber. So eventually we will figure out what kinds of missions are ideal for AIs, like defusing a bomb... We could develop a whole line of AIs specifically for bomb squads.

Another possibility would be crowd control. No human police officer would have to be thrown into an angry mob. A lot of the time, officers are simply trying to protect themselves and make mistakes (like opening fire into the crowd). Instead, AIs and drones with absolutely strict rules of engagement would be sent in. They would be absolutely prohibited from hurting humans, even when they themselves are being damaged. In fact, these AIs wouldn't even need arms and legs, just a block on some kind of wheels, so there would be no chance of them hurting protesters. Like I said before, we simply cannot issue this kind of order to human police officers.
Yes, but that's a matter of how we choose to use our tools. It requires being sharply cognizant of their properties and limitations. Either way, if there is a failure, it will always involve a problem with human judgement somehow.
 

kwaigonegin

Colonel
To be serious, I don't think it will be a huge issue. Establishing settings that define the rules of engagement will eliminate most of the worries. The activities of the machines can be monitored at all times by human operators who can choose to take over if they want to.

One of the biggest worries would be decision making: whether to pull the trigger under certain conditions. Many believe that human operators are better. That may not be the case. We humans make mistakes. We miscalculate and misread situations, especially under stress, and especially when our own lives are threatened. Mistakes then follow, such as friendly fire and collateral damage to civilians.

Sending machines in, on the other hand, could potentially eliminate these issues. Just think about it. You send a few machines in. No human lives on your side are in danger, so you won't be as stressed. Once you mass-produce those AIs, their cost won't be too high either. You can then establish very tight settings for your AIs, which means they would have to be absolutely sure who and what they are shooting at. For instance, the AIs must confirm the identity of a human target with 100% certainty before opening fire, even when their own safety is threatened, thus eliminating friendly fire and reducing collateral damage among opposing civilians. Let's face it: we can never make this kind of demand of our own soldiers. Can you imagine telling your troops that they have to be 100% sure who and what is coming at them before opening fire, even when their own lives are threatened? We can, however, make such demands of AIs.

Let's then move a step further. Imagine both sides have AIs fighting for them. Humans sit back and watch the fight from a safe distance, and both sides agree that the outcome of the AI battle determines the outcome of the war. Then no human life is lost.

Well, when I said AI, I meant leaving the decision-making to it as well.
As to the outcome, I agree that a computer performing millions of computations per second will likely make fewer mistakes than a human; however, there is the psychological impact of such an engagement, especially when life and death are at stake.

It's not too different from self-driving cars. Would you let your car drive itself even if, say, some scientist showed you analysis and stats saying it's 10x safer than a human driver?
 

Air Force Brat

Brigadier
Super Moderator
In all seriousness, the day will come in the not-so-distant future when we have to decide whether the AI should autonomously fire at human targets based on its own programming and algorithms.

I wouldn't trust that mechanized krap as far as I could
No; I meant internally in the bays. Unless there is clear evidence that the FC-31 cannot carry 2 tons of armament within the weapons bays, there is no reason to brush off a promotional video from SAC.

No, 4 of those won't go in the bays, not gonna happen with the FC-31, or the F-35 for that matter, or PAK-FA or J-20?? NO
 

Air Force Brat

Brigadier
Super Moderator
How did the discussion end up talking about drones and terminators :eek:

I don't know, but I do know I wouldn't let these characters make a decision like that?? To assume that we would allow our machines to fight with other machines is naïve beyond belief; they will be used to "terrorize the op-for", so NO, it's an insane idea!
 

SinoSoldier

Colonel
I wouldn't trust that mechanized krap as far as I could


No, 4 of those won't go in the bays, not gonna happen with the FC-31, or the F-35 for that matter, or PAK-FA or J-20?? NO

The length of the weapons (estimated to be between 4 and 5 meters) should be shorter than the length of the weapons bay (believed to be one third the length of the aircraft). Each bomb weighs 500 kilograms.
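A rough sanity check on those figures (the ~17 m overall length used below is an assumed, commonly cited FC-31 estimate, not a number given in this thread):

```python
# Figures quoted in the thread, plus one explicitly assumed value.
aircraft_length_m = 17.0                  # ASSUMPTION: commonly cited FC-31 length estimate
bay_length_m = aircraft_length_m / 3.0    # "believed to be one third the length of the aircraft"
weapon_length_m = (4.0, 5.0)              # estimated weapon length range from the post
bomb_mass_kg = 500                        # per bomb, as stated

print(f"Estimated bay length: {bay_length_m:.1f} m")                            # ~5.7 m
print(f"Longest weapon fits lengthwise: {weapon_length_m[1] <= bay_length_m}")  # True
print(f"Four bombs total: {4 * bomb_mass_kg} kg")                               # 2000 kg, i.e. the 2 t figure

# Length alone says a 4-5 m weapon could fit; whether four fit at once depends on
# bay width and depth, which none of the quoted figures address.
```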
 

Air Force Brat

Brigadier
Super Moderator
The length of the weapons (estimated to be between 4 and 5 meters) should be shorter than the length of the weapons bay (believed to be one third the length of the aircraft). Each bomb weighs 500 kilograms.

you "might" get two in?? four, I just can't see that, they would have to be line abreast? and theres no way?
 

Air Force Brat

Brigadier
Super Moderator
Well, when I said AI, I meant leaving the decision-making to it as well.
As to the outcome, I agree that a computer performing millions of computations per second will likely make fewer mistakes than a human; however, there is the psychological impact of such an engagement, especially when life and death are at stake.

It's not too different from self-driving cars. Would you let your car drive itself even if, say, some scientist showed you analysis and stats saying it's 10x safer than a human driver?

Here I'm gonna disagree wit ya bro, it's a lot different from self-driving cars... it's an ethical question, and I'm gonna put the kibosh on any such weapon... ISIS is bad enough without any compassion, as kold-blooded killers! NO
 