Artificial Morality
An autonomous machine performs an unethical act. Who takes the blame?
This is the hypothetical premise of the presentation I made yesterday to the directors of the fellowship and to the 11 other fellows. I really drew the short straw with my time slot -- 2 p.m. on a Thursday. In a way, it was nice to go late in the week, because it allowed me time to develop a PowerPoint (having worked solely on the actual paper while in Paris). But as each of the other fellows made their presentations, I became increasingly envious. One by one, the anxiety on their faces gave way to a look of calm relaxation, quiet contemplation, and the obvious desire to go out drinking. Plus, the room was probably 85 degrees when my turn came around. (I wish I could say that the temperature explains my choice of dress, but, well, I do my best work in this outfit.)
The other fellows spoke on a wide range of topics, from the effort to prove precognition in dreams, to the "right of conscience" in healthcare (should a pharmacist have the right to refuse, on religious grounds, a morning-after pill to a rape victim?), to the role of religion in AA, to the search for the nature of consciousness. My piece was a mix of artificial intelligence and moral philosophy. I argued that our progress in robotics is driving us down a dangerous path where "autistic robots" can and will do great harm to humans if left unchecked. Rather than focusing on human-grade intelligence, as AI researchers tend to (and which I think is still a long way off), I argued that we need to make the machines moral. Doing so means deconstructing the nature of ethical decision-making and allowing machines to learn to be moral.
This was all very troubling to a couple of the directors because of the suggestion that many of the commonly perceived prerequisites for morality (consciousness, intelligence, religion) are completely unnecessary. A few of the fellows also jumped on my argument (Kant would roll over in his grave!) and asked some pretty tough questions -- which have helped me think my argument through a bit -- but everyone was complimentary of my idea, my style, and my argument. (We were all encouraging of each other, and deservedly so.) Several said I should turn this idea into a book and take the show on the road. One of them thanked me for shaking things up and said, "It was like you threw a Molotov cocktail into the room." Ha. At least no one fell asleep.
Most importantly, when I was done, one of the administrators walked up and handed me an envelope containing a check for $5,000. One of the other fellows called it the "Italian wedding moment." I'm calling it my 42-inch plasma moment!
2 Comments:
Somewhere in San Francisco, there's a 42-inch plasma with my name on it!
Nice to see that after this intellectual odyssey you remain a deeply spiritual man, ever devoted to the higher things.
I promise only to watch Davey & Goliath on Sunday mornings after church.