Building Moral Brains

Thursday, May 2, 2019 - 15:00 to 17:00
JHL7
There are those who claim that we must morally bioenhance the human in response to existential threats (e.g. climate change and the looming possibility of cognitive enhancement) and a failure of moral will. By moral bioenhancement, they mean intervening in the biology of the human animal in order to make it behave morally and thereby avert these existential threats. In this lecture, I will place the moral bioenhancement literature into conversation with thinkers in the philosophy of technology, exploring questions of human agency and humanism. As Verbeek has noted, we have always built morality into our technologies. What could it mean to build morality into humans? Does this (supposed) duty to build morality into machines and into brains originate in humanism, in human agency, or in the hybrid agency of technological man? What could it possibly mean to build better humans?