1. How much autonomy in AI is appropriate in war or the military?
2. What exactly is autonomy when it relates to artificial intelligence? When did the military begin implementing AI technology? Have there been any autonomous accidents so far that were life-threatening or detrimental? What are some future technologies the military is currently developing? How many people actually support using these technologies versus traditional human warfare? How many lives have been saved by the use of robots in warfare? Do the benefits of using robots in warfare outweigh the risks? What would happen if the AI were hacked by foreign enemies and turned against us? Should AI be able to make the autonomous decision to take an enemy's life? How much of the DoD's budget is devoted to the growth of AI technology?
3. I do not know much about what types of robots are already used in the military. I have heard of ideas involving drones marching and fighting just as human soldiers do; however, I have not researched this particular claim and therefore have no evidence to believe it is true. I do know that there are drones that are remotely controlled by humans and have few autonomous decisions to make. This is basically all I know about AI in the military. Before I decide whether I am against robots having the ability to make their own decisions when fighting in wars, I would need to do more research. I like the idea of drones in war because they provide safer conditions and protection for human soldiers. I am interested in this topic because I have a future in the military, whether that means fighting on the ground or not. I chose this subject because I believe I can find a lot of information on it and because I am interested in the growing field of artificial intelligence. Since I have interests in both the military and AI, it is beneficial to study these topics and how they relate to each other.
4. I have found substantial information on this topic so far, including scholarly articles, resources from UNCC's library, and various government and military websites. It seems that the Pentagon does have a robotics department and that the implementation of battle robots is expanding. The idea is that in the future, robots will not necessarily have to wait for permission before making a military decision, because waiting would only make the tactical approach slower and less efficient. This is where machine morality comes into effect. I do not think my topic is too broad or too narrow. After condensing my ideas, I think I have found the right topic to result in a good inquiry project. Later in my project, I might expand my inquiry question by exploring the ethics and morality that can be integrated into AI.