
Browsing by Author "Boyle, Brendan"


  • Boyle, Brendan (2024)
    The linguistic landscape of artificial intelligence (AI) is an established high-interest topic, with AI researchers expressing concern about anthropomorphisms (the use of human terms to describe inanimate entities or concepts) in AI design and discourse (Watson 2019; Salles, Evers, and Farisco 2020; Weidinger et al. 2022; Deshpande et al. 2023). Even the term itself invokes “intelligence” to denote advanced computational capacity. Conceptual Metaphor Theory (CMT) (Lakoff and Johnson 1980) and the Metaphor Identification Procedure (MIP) (Pragglejaz Group 2007) make it possible to name and examine the specific human terms used in a technology journalism context and what these metaphors may convey to readers. For this purpose, a corpus was compiled of AI news from the US technology news site The Verge during the year 2023. Concordance lines were analyzed for anthropomorphic language surrounding a single term: the acronym “AI”. This study sought to answer whether anthropomorphic conceptual metaphors can be identified within AI reporting in a tech news site corpus (RQ1), what source domains are represented among these metaphors and how these metaphors can be categorized (RQ2), and what effects these metaphors may have on their readers (RQ3). The roughly 300 articles examined reveal a pattern of agency, labor, and physicality metaphors. Agency metaphors depict AI in terms of self-determination and autonomy and are further categorized into cognition, behavior, and affect. Labor metaphors cast AI in the role of the laborer, rather than the tool, across a broad variety of use cases. Physicality metaphors apply physical human terms for the structural analogies they provide. Conceptual metaphors that do not fit this categorization are analyzed as miscellanea. This study argues that these anthropomorphic conceptual metaphors can function as effective explanatory analogies but may at times create or propagate misleading notions about AI capabilities.
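
    As an illustration of the concordance-line step described above, the following is a minimal keyword-in-context (KWIC) sketch, assuming plain-text article content and a fixed word window; the function name, window size, and punctuation handling are illustrative assumptions, not the tooling used in the thesis.

    # Minimal KWIC sketch for the acronym "AI", assuming whitespace-tokenized
    # plain text and an arbitrary 8-word context window (both are assumptions).
    import re

    def concordance(text, term="AI", window=8):
        """Return keyword-in-context lines for the bare acronym `term`."""
        tokens = text.split()
        lines = []
        for i, tok in enumerate(tokens):
            # Match "AI" exactly, allowing trailing punctuation such as "AI," or "AI."
            if re.fullmatch(re.escape(term) + r"[\.,;:!?\"')]*", tok):
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                lines.append(f"{left:>60} | {tok} | {right}")
        return lines

    sample = ("The company said its new AI understands user intent and decides "
              "how to respond, while critics argue the AI merely predicts text.")
    for line in concordance(sample):
        print(line)

    Lines produced this way can then be read against MIP, checking whether the verbs surrounding "AI" (e.g., "understands", "decides") have a more basic human meaning than the contextual one.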