
Browsing by Subject "justification"


  • Laamanen, Heimo (2021)
    Faculty: Faculty of Social Sciences
    Degree programme: Master of Philosophy
    Study track: Theoretical Philosophy
    Author: Heimo Laamanen
    Title: Process Reliabilism, Justification in the Context of Artificial Epistemic Agents
    Level: Master
    Month and year: 06.2021
    Number of pages: 84 + 7
    Keywords: Epistemology, justification, process reliabilism, artificial epistemic agent, philosophy of artificial intelligence
    Supervisor or supervisors: Markus Lammenranta and Jaakko Hirvelä
    Where deposited: Helsinki University Library
    Additional information:

    Abstract:
    The main topic of this thesis is justification for belief in the context of AI-based intelligent software agents. This topic deals with issues belonging to the joint domain of the philosophy of artificial intelligence and epistemology. The objective of this thesis is to discuss a form of process reliabilism for the collaborative environment of human beings and intelligent software agents.

    The motivation for the study presented in this thesis stems from the ongoing progress of artificial intelligence, robotics, and computer science in general. This progress has already made it possible to establish environments in which human beings and intelligent software agents collaborate to provide their users with various information-based services. In the future, we will not be aware of whether a service is offered by human beings, by intelligent software agents, or jointly by both. Hence, there are two kinds of information agents, and this gives rise to the following key question: Can an intelligent software agent also be an epistemic agent in a similar way as a human being? In other words, can an intelligent software agent have beliefs and justified beliefs, and more importantly, can it know something? If so, then there is a clear motivation to extend epistemology to include the context of artificial epistemic agents. This, in turn, raises several new questions, such as the following: First, do artificial epistemic agents set any new requirements for epistemological concepts and theories concerning justification? And second, what would be the appropriate theory of justification in the context of artificial epistemic agents?

    First, the reader is provided with the necessary background information through a discussion of the following topics: introductions to epistemology and artificial intelligence; a collaborative environment of human beings and artificial epistemic agents; the concepts of information, proposition, belief, and truth; and scenarios with which the main ideas are clarified and tested. Then, the thesis introduces a form of applied epistemology, including its aim and some requirements for theories of justification set by the development and operation of artificial epistemic agents. Finally, after setting the scene, the thesis explores process reliabilism, including the main objections to it, and proposes an enhancement to process reliabilism so that it better addresses the context of artificial epistemic agents.

    The results are as follows. First, this thesis supports the view that an intelligent software agent can indeed be an artificial epistemic agent capable of having beliefs, justified beliefs, and knowledge. Second, there is a clear motivation to extend the domain of epistemology to include artificial epistemic agents; this extension is a form of applied epistemology that has not yet been discussed much in either epistemology or artificial intelligence. Third, this thesis gives reasons for the supposition that the context of artificial epistemic agents sets new requirements for epistemological theories. And finally, this thesis gives motivations to support the idea that a form of process reliabilism called pragmatic process reliabilism could be the appropriate unified theory of justification for belief in the collaborative environment of human epistemic agents and artificial epistemic agents.