
Browsing by Subject "ethical AI"


  • Hautala, Jaana (2023)
    Artificial Intelligence (AI) has revolutionized various domains of software development, promising solutions that can adapt and learn. However, the rise of AI systems has also been accompanied by ethical concerns, primarily related to the unintentional biases these systems can inherit during the development process. This thesis presents a thematic literature review that identifies and examines existing methodologies and strategies for preventing bias in iterative AI software development. The review employed a formal search strategy with defined inclusion and exclusion criteria, and a systematic process for article sourcing, quality assessment, and data collection. A total of 29 articles were analyzed, yielding eight major themes concerning AI bias mitigation within iterative software development, ranging from bias in data and algorithmic processes to fairness and equity in algorithmic design. Findings indicate that while various approaches for bias mitigation exist, gaps remain: strategies need to be adapted to agile or iterative frameworks, the trade-off between effectiveness and fairness must be resolved, the complexities of bias must be understood so that solutions can be tailored, and the real-world applicability of these techniques must be assessed. This synthesis of key trends and insights highlights these specific areas as requiring further research.
  • Laakso, Atte (2023)
    This thesis conducts a systematic literature review on the ethical issues of large language models (LLMs). These models are a highly topical subject, as both their presence and the demand for them have skyrocketed since the release of ChatGPT, a free-to-use generative language model. The literature review of 116 studies, both conceptual and empirical, identifies 39 recurring ethical issues. The issues range from methodological to fundamental ones, for example "Environmental impacts" and "Biased training data or outputs". These identified issues are analyzed against the Ethics Guidelines for Trustworthy AI (Artificial Intelligence), released by the European Commission’s High-Level Expert Group on AI. The guidelines detail requirements that all trustworthy and ethical AI applications should adhere to, e.g., human agency, transparency, and accountability. All identified issues are mapped to these requirements, and the conclusion is that LLMs face significant challenges relating to each one. The findings indicate that the use of LLMs comes with significant issues, both demonstrated and theorized. While some methods for mitigating these issues are identified, many remain unresolved. One of these unresolved issues is also the most frequently identified: inherent biases in LLMs. Since there is no universal understanding of bias, there is no way to make LLMs seem unbiased to everyone. This thesis collates the current talking points and issues identified with LLMs. It provides a comprehensive, but not exhaustive, list of these issues and shows that there is much discussion on the topic. The conclusion is that more discussion is required but, more vitally, that even more (regulatory) action is needed along with it.