Instructions

Mini-Paper

If humans create artificial intelligence, some people worry that it will eventually rise up and take over, as in the Matrix or Terminator movies. No one truly knows whether this is even possible, but if it is, there is another possibility I want us to consider. Before artificial intelligence is advanced enough to harm humans, it is likely that humans themselves will have the opportunity to harm artificial intelligence. My question is: what are our ethical obligations to artificial intelligence?

In answering this question, I want you to refer to specific ideas in the articles for this week. Be sure to touch on the following points:

1. What are our ethical obligations to artificial intelligence?
2. Where does this obligation come from?
3. How does our obligation to artificial intelligence differ from our obligation to a dog? From our obligation to a small child?

Mini-Reply

Post a reply to the paper after yours in the forum. If your paper is the last one - you got it - reply to the first submission in the forum. The reply should be 300-400 words. Start by briefly thanking the author and saying something positive about the paper you are commenting on. Then respond to the paper: this could be a critique, or you could extend the argument, fill in details, or apply one of its ideas in a different context. Finally, end by again saying something positive about the paper and summarizing your comment in a sentence or two.

Be respectful! The goal is to learn from the exchange, not to score gotcha points. Ideally, everyone involved grows from the back and forth.