
Browsing by study line "Filosofi, ämneslärare"

  • Alakurtti, Jonni (2023)
    In the future we will see increasing collaboration between humans and artificial entities. Many of our jobs will be outsourced to them as they advance and outperform us in various tasks. The rapid progress of these artificial entities raises important ethical questions; one of them is whether we should strive for artificial moral agency, which would allow for ethically inclined robots. Having ethically acting artificial entities not only ensures that their behavior aligns with our norms but also builds trust in their actions and enhances collaboration. It is also a means of protecting humans from the apocalyptic scenarios depicted in science fiction, as morally inclined artificial entities could prevent such outcomes. I focus on the philosophical question of how artificial entities could satisfy the conditions required for moral agency. In this thesis I examine three conditions that entities must satisfy to have moral agency: intentionality, autonomy, and moral responsibility. My argument is based on the premise that, since humans are essentially biological machines operating according to complex algorithms, artificial algorithms that attain a comparable level of complexity meet the criteria for moral agency and should be held accountable for their actions. Therefore, artificial entities could satisfy these conditions and possess moral agency. To identify moral agency in others, I propose that we take the stance of mind-reading: considering whether the actions of others indicate that they have inner states. I claim that artificial entities can reach a state where they display behavior complex enough for us to treat them as moral agents. By displaying complex behavior and expressing it in correlation with their inner states, artificial entities would meet the requirement for identifiable moral agency.
    This thesis argues that artificial entities can be blameworthy, as they can possess authentic inner states that enable self-reflection and responsiveness to reasons. They can project these inner states into the actual world as acts, and do so of their own accord. I also argue, drawing on the case of psychopathy, that moral agency requires neither empathy nor emotions, and that morality can be achieved in other ways. Finally, this thesis offers a new perspective on 'punishment' and how it could enable us to reprimand artificial entities.