Publications

This page lists only some of my recent publications. See my Google Scholar profile for a full list. Most publications can be downloaded for free through ResearchGate.

Advancing data justice in education: some suggestions towards a deontological framework

This article proposes a pragmatic approach to data justice in education that draws upon Nancy Fraser’s theory. The main argument is premised on the theoretical and practical superiority of a deontological framework for addressing algorithmic bias and harms, compared to ethical guidelines. The purpose of a deontological framework is to enable the evaluation of justice claims, not as a burden for individuals or groups, but as part of an ongoing public dialogue about the disparate impacts of algorithmic decisions. In a deontological framework, scrutiny is not limited to differences in treatment but is broader and more demanding, involving the evaluation of design approaches and a wider-ranging political discussion about who might be harmed.

Artificial intelligence and the affective labour of understanding: The intimate moderation of a language model

Interest in artificial intelligence (AI) language models has grown considerably following the release of the 'generative pre-trained transformer' (GPT). Framing AI as an extractive technology, this article details how GPT harnesses human labour and sensemaking at two stages: (1) during training, when the algorithm 'learns' biased communicative patterns extracted from the Internet, and (2) during usage, when humans write alongside the AI. This second stage is framed critically as a form of unequal 'affective labour', in which the AI imposes narrow and biased conditions for the interaction to unfold and then exploits the resulting affective turbulence to sustain its simulation of autonomous performance. Empirically, the article draws on an in-depth case study in which a human engaged with an AI writing tool while the researchers recorded the interactions and collected qualitative data about perceptions, frictions and emotions.

Automation, APIs and the distributed labour of platform pedagogies in Google Classroom

Digital platforms have become central to interaction and participation in contemporary societies. New forms of 'platformized education' are rapidly proliferating across education systems, bringing logics of datafication, automation, surveillance, and interoperability into digitally mediated pedagogies. This article presents a conceptual framework and an original analysis of Google Classroom as an infrastructure for pedagogy. Its aim is to establish how Google configures new forms of pedagogic participation according to platform logics, concentrating on the cross-platform interoperability made possible by application programming interfaces (APIs). The analysis focuses on three components of the Google Classroom infrastructure and its configuration of pedagogic dynamics: Google as platform proprietor, setting the 'rules' of participation; the API, which permits third-party integrations and data interoperability, thereby introducing automation and surveillance into pedagogic practices; and the emergence of new 'divisions of labour', as the working practices of school system administrators, teachers and guardians are shaped by the integrated infrastructure, while automated AI processes undertake the 'reverse pedagogy' of learning insights from the extraction of digital data. The article concludes by considering the critical legal and practical ramifications of platform operators such as Google participating in education.

Deep learning goes to school: toward a relational understanding of AI in education

In applied AI, or 'machine learning', methods such as neural networks are used to train computers to perform tasks without human intervention. In this article, we question the applicability of these methods to education. In particular, we consider a case of recent attempts by data scientists to add AI elements to a handful of online learning environments, such as Khan Academy and the ASSISTments intelligent tutoring system. Drawing on Science and Technology Studies (STS), we provide a detailed examination of the scholarly work carried out by several data scientists around the use of 'deep learning' to predict aspects of educational performance. This approach draws attention to relations between various (problematic) units of analysis: flawed data, partially incomprehensible computational methods, narrow forms of 'educational' knowledge baked into the online environments, and a reductionist discourse of data science with evident economic ramifications. These relations can be framed ethnographically as a 'controversy' that casts doubt on AI as an objective scientific endeavour, whilst illuminating the confusions, the disagreements and the economic interests that surround its implementations.