To be serious, I don't think it will be a huge issue. Establishing settings that define the rules of engagement would address most of the worries, and the machines' activities can be monitored at all times by human operators who can take over whenever they choose.
One of the biggest worries is decision making: whether to pull the trigger under a given set of conditions. Many believe that human operators are better at this. That may not be the case. We humans make mistakes. We miscalculate and misread situations, especially under stress, and all the more so when our own lives are being threatened. That is when mistakes like friendly fire and collateral damage to civilians happen.
Sending machines in, on the other hand, could potentially eliminate these issues. Just think about it. You send a few machines in. No human lives on your side are in danger, so you won't be as stressed. Once you mass-produce those AIs, their cost won't be too high either. You can then impose very tight settings on your AIs, meaning they must be absolutely sure who and what they are shooting at. For instance, the AIs must achieve 100% confirmation of a target before opening fire, even when their own safety is threatened. That would eliminate friendly fire and reduce collateral damage to the other side's civilians. Let's face it: we can never make this kind of demand of our own soldiers. Can you imagine telling your troops that they have to be 100% sure who and what is coming at them before opening fire, even when their own lives are at stake? We can, however, make such a demand of AIs.
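To make the idea concrete, here's a toy sketch in Python of what such a hard-wired rule of engagement might look like. Every name in it is hypothetical; the point is just that the machine's own safety never enters the decision:

```python
# Conceptual sketch only, not a real system. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Contact:
    identified_as_hostile: bool  # positive identification of a hostile combatant
    id_confidence: float         # 0.0 .. 1.0, from the perception system
    civilians_nearby: bool       # any civilians detected near the target

def may_engage(contact: Contact, under_fire: bool) -> bool:
    """Return True only under the strictest rules of engagement.

    Note: `under_fire` is deliberately ignored. Unlike a human soldier,
    the machine is never allowed to relax the rules to protect itself.
    """
    return (
        contact.identified_as_hostile
        and contact.id_confidence >= 1.0   # the "100% confirmation" rule
        and not contact.civilians_nearby   # no collateral damage allowed
    )
```

The asymmetry with human soldiers is right there in the signature: the rule can be set once, in peacetime, and the machine will follow it even while taking fire.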
Let's then move a step further. Imagine both sides have AIs fighting for them. Humans sit back and watch the fight from a safe distance, and both sides agree that the outcome of the AI battle will determine the outcome of the war. Then no human life is lost.