Plug-and-Play Education: Knowledge and Learning in the Age of Platforms and Artificial Intelligence (2024)


If you know me, you know I am a slow writer and nowhere near as prolific as I should be. Completing and submitting a (short) monograph is a big deal – and a huge relief – for me, so I am sharing the news here. To celebrate, and to inject a bit of life into my dormant blog (things have been busy in the past few months), I decided to post a brief section from the book, hoping it might generate a bit of interest. The extract I chose deals with the notion of “unreasonableness”, which is somewhat related to what Gilles Deleuze and Isabelle Stengers call “stupidity”, that is, a “perpetual confusion with regard to the important and unimportant, the ordinary and the singular” (Deleuze, 1994, p. 190).

Why this passage? I am currently at the 2023 Australian Association for Research in Education (AARE) Conference, and I was lucky to attend a great session from my colleagues Kal Gulson, Sam Sellar and Marcia McKenzie, where the very notion of stupidity was mobilised in the context of a critical discussion of AI in education. I am not sure my use of the word “unreasonable” is completely aligned; perhaps it fails to capture the connotation that Stengers gave to stupidity: a historical condition rather than a state of psychological stupefaction, and an agentic and vibrant form of existence that shapes a distinctive “modus operandi” in the real world. My take on AI as an unreasonable project engages with Whitehead’s notion of speculative reason, which also underpins Isabelle Stengers’ philosophy. The argument touches on another problematic that interests me a great deal, and which is often invoked in the AI debate – broadly and in education: responsibility.

Deleuze, G. (1994). Difference and repetition. Columbia University Press.


The following is a pre-proofs extract from my forthcoming book: Plug-and-Play Education: Knowledge and Learning in the Age of Platforms and Artificial Intelligence. The book will be published by Routledge and will be available in 2024.


The unreasonableness of AI and its bearing on educational responsibility

At the heart of the current enthusiasm for AI lies the profoundly unwarranted tenet that our past technological lives, that is, our multiple entanglements with devices and software over time, represent a reasonable proxy that can inform the development of predictive systems and automated agents. The word “reasonable” is key here, and it should not be confused with “rational”. The data collected from digital platforms, social media, smart devices, and the Internet of Things (IoT) are very much rational proxies with proven predictive power when aggregated at scale. The real question is whether the knowledge extracted from these technological systems is a sound – indeed reasonable – representation of our lived experience, upon which we can build governance structures and autonomous “generative” agents that have real and life-changing impacts. I use the concept of reason here in a Whiteheadian sense (Stengers, 2011), as a process that is not limited to abstracting and generalising from antecedent states but is instead capable of “speculation”, following an upward motion from experience: “what distinguishes men from the animals, some humans from other humans, is the inclusion in their natures, waveringly and dimly, of a disturbing element, which is the flight after the unattainable” (Whitehead, 2020, p. 65).

This may seem like an obscurantist position that clings to a vague notion of human exceptionalism, in the face of mounting evidence that even the most ineffable components of intelligence can now – or will soon – be computationally modelled. Some forms of reinforcement learning, for example, can now incorporate into their reward functions a generalist quest for novelty and intellectual curiosity (see the sketch after the quotation below). Despite these advancements, the argument still stands, since it is not concerned with the ability of models to adapt to unexpected scenarios (i.e., what AI innovation pursues), but with the inclination to be “more” than what our present or past conditions would warrant. This is what “speculative reason” means, and it is closely related to what Hannah Arendt called “enlarged mentality”: the ability to make present to ourselves what was absent in our experience and in the historical data (Arendt, 1998). There is, after all, plentiful evidence that people, when included in certain categories based on their past behaviours, may resist. Indeed, these are the “looping effects” described by Ian Hacking: the overuse of synthetic and arbitrary elements of differentiation may lead to scenarios where people react negatively to being classified: “what was known about people of a kind may become false because people of that kind have changed in virtue of what they believe about themselves” (Hacking, 1999, p. 34). This issue speaks loudly to the “alignment problem” of AI: the challenge of ensuring that AI systems reflect human values and democratic principles. As the best-selling author Brian Christian noted in his book of the same name:

This is the delicacy of our present moment. Our digital butlers are watching closely. They see our private as well as our public lives, our best and worst selves, without necessarily knowing which is which or making a distinction at all. They by and large reside in a kind of uncanny valley of sophistication: able to infer sophisticated models of our desires from our behavior, but unable to be taught, and disinclined to cooperate. They’re thinking hard about what we are going to do next, about how they might make their next commission, but they don’t seem to understand what we want, much less who we hope to become. (Christian, 2020, pp. 328-329)
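To make the earlier remark about reinforcement learning concrete, the following is a minimal, hypothetical sketch of how a “quest for novelty” is commonly operationalised: as an intrinsic bonus added to the external reward, which decays as states become familiar. The class name, the beta parameter and the count-based formula are illustrative assumptions, not a description of any specific system.

```python
# A minimal, hypothetical sketch of curiosity-driven reward shaping:
# an intrinsic novelty bonus is added to the external (task) reward.
# All names and parameter values are illustrative assumptions.
from collections import defaultdict
from math import sqrt

class CuriosityReward:
    """Augments an external reward with a novelty bonus that decays as
    a state is revisited: r_total = r_ext + beta / sqrt(N(state))."""

    def __init__(self, beta: float = 0.5):
        self.beta = beta
        self.visit_counts = defaultdict(int)  # N(state): visits so far

    def reward(self, state, external_reward: float) -> float:
        self.visit_counts[state] += 1
        novelty_bonus = self.beta / sqrt(self.visit_counts[state])
        return external_reward + novelty_bonus

# Usage: a never-seen state earns a larger bonus than a familiar one.
shaper = CuriosityReward(beta=0.5)
print(shaper.reward("state_1", 0.0))  # first visit: bonus = 0.5
print(shaper.reward("state_1", 0.0))  # second visit: bonus ~ 0.354
```

Note that even this engineered “curiosity” remains a backward-looking function of logged visit counts: it rewards deviation from what has already been recorded, which is not the same as the speculative reach after the unattainable that Whitehead describes.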

AI’s inability to reproduce this speculative function of reason (hoping to be more than what we are) is the result of three structural flaws: a) the unsound knowledge platformed systems depend upon (internet data); b) the opacity of the knowledge they produce through their outputs; and c) the political and practical knowledge they impair when they (partly or completely) take over decisional processes. All these problems deserve our attention, but the last one is perhaps the most consequential from an educational perspective. At its core lies a key problem: responsibility.

Responsibility is a central, often overlooked concept in education. The very foundation of this institution reflects a collective act of delegation, as elements of upbringing historically transitioned from the household to informal and formal structures of knowledge transmission and behaviour control. This transition was of course grounded in a great deal of common-sense wisdom about the role of communities, social institutions and relevant “others” in the cognitive and moral development of young ones, and it was supported by psychological and sociological evidence showing that humans seek and cherish shared responsibility because it may increase the collective good. At the very centre of this process of collective delegation was the educator – the teacher, the tutor, the professor, and so forth – engaged in relational and embodied pedagogical practice, which manifests in multiple forms during the life course, is sustained by an evolutionary and biological substratum, and is deeply embedded in linguistic and cultural traditions – not necessarily Western ones.

Out of this rich tradition stems a deontological view of education, understood as a site of great collective and individual (professional) responsibility. Dewey (1930) examined this educational responsibility extensively in his philosophy and identified two overarching aims of education: 1) emancipation: to create environments where young people have an opportunity to break away from the “mental habitudes” (p. 25) they acquired elsewhere; 2) preparation for the future: not by placing young people on a “waiting list” (p. 58) but by valorising their present experiential conditions. These aims are noble but also meaningless if pursued irrespective of the specific social conditions of education. They must be translated into the sphere of pedagogical practice as a distinct form of educational responsibility connected with goal-oriented action. Like farmers, educators are individuals pragmatically engaged in producing an outcome. Both perform certain actions daily, which allow for a degree of discretion but are never arbitrary because they are shaped by the opportunities and constraints of the environment: seasons, pests and fires for the former; human and social variance for the latter. Understanding the contextual conditions in which action occurs is therefore a key dimension of responsibility. Neglecting these conditions may lead to a fallacious and possibly harmful pursuit of a “noble” aim.

Aims mean acceptance of responsibility for the observations, anticipations, and arrangements required in carrying on a function—whether farming or educating. Any aim is of value so far as it assists observation, choice, and planning in carrying on activity from moment to moment and hour to hour; if it gets in the way of the individual’s own common sense (as it will surely do if imposed from without or accepted on authority) it does harm. (Dewey, 1930, p. 112)

This quote takes the argument back to the unreasonableness of AI.

The farming analogy may appear naïve to the cynic’s eye, but it in fact adds a material and “down to earth” dimension to the problem of educational responsibility, configuring it as a reasonable endeavour grounded in space, time and relationships. What happens when educators place their trust in unreasonable systems, where human and social variance has been expunged? The answer, echoing Dewey, is that harms may materialise. In a best-case scenario, automated classifications may prove to be erroneous or biased and thus require constant oversight; in a worst-case scenario, teachers become unable to exercise judgment, as multiple automated systems operate synchronously behind the scenes, impacting upon the sphere of professional work and leading to a fragmentation of responsibility. Drawing on research on the ethics of innovation (van de Poel & Sand, 2021), we can plausibly hypothesise the emergence of the following deficits in contexts where educational decisions can be fully or partly delegated to machines:

  1. Accountability deficits: no one can provide a rationale or take responsibility for an educational decision.
  2. Culpability deficits: no one can be held morally responsible for an educational harm.
  3. Compensation deficits: no one is responsible for restorative measures, financial or otherwise, for an educational harm caused by an automated system.
  4. Obligation deficits: no one is responsible for ensuring that future uses of a platformed system reflect sound educational values.
  5. Virtue deficits: no one is responsible for fostering a culture of responsibility for the actions of a platformed system in education.

A final thought on reasonableness and responsibility

Until now, critical commentary in the social study of data and AI has mostly developed from a recognition of the power exercised through classifications and measurement: “what the categories are, what belongs in a category, and who decides how to implement these categories in practice, are all powerful assertions about how things are and are supposed to be” (Gillespie, 2014, p. 171; cf. Bowker & Star, 1999). The potential impact of AI on human responsibility requires us to apply the same critical lens to another order of phenomena: the partial and error-prone inference logics that underpin current forms of computational intelligence. Bringing these key aspects of computation into view will help us cast a critical light on the uncertainty that permeates AI, from the ways in which neural activations operate automatically to the biases that creep in when models are trained. Heterogeneity is the one ever-present constant here: it is always possible to pick alternative models because, from a purely probabilistic perspective, there are always multiple reasonable courses of action, and the relationship between models and outcomes (classifications and predictions) is constantly updated as new observations become available. Acknowledging and valorising this heterogeneity may be the key to reframing the role of AI in the dynamics of platformed sociality and education in more reasonable terms. For the time being, these are just speculations or, perhaps, aspirations. The current situation is rather different, because the outcomes of AI are unreasonably framed as revelatory and superhuman through the systematic “manufacturing of hype and promise” (Elish & Boyd, 2018, p. 58). A more accurate framing should instead emphasise their plausibility over their accuracy and, more importantly, their dependence on contingent factors: the inference strategies of choice, the structure of the environment, and the likelihood of new incoming observations.
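To illustrate this heterogeneity, here is a toy sketch (with invented numbers, not data from any real system) showing how two candidate models of the same binary outcome can remain comparably plausible after a handful of observations, and how their relative standing shifts as each new observation arrives. The function and the probabilities are assumptions made for illustration only.

```python
# Toy illustration of model heterogeneity: two candidate models of a
# binary outcome stay comparably plausible, and their relative standing
# is updated with every new observation. All numbers are invented.

def posterior_of_a(prior_a: float, p_a: float, p_b: float,
                   data: list) -> float:
    """Posterior probability of model A after observing binary data,
    where model A predicts P(1) = p_a and model B predicts P(1) = p_b."""
    weight_a, weight_b = prior_a, 1.0 - prior_a
    for x in data:
        weight_a *= p_a if x == 1 else 1.0 - p_a
        weight_b *= p_b if x == 1 else 1.0 - p_b
    return weight_a / (weight_a + weight_b)

observations = [1, 0, 1, 1, 0, 1]
for n in range(1, len(observations) + 1):
    print(n, round(posterior_of_a(0.5, p_a=0.7, p_b=0.5,
                                  data=observations[:n]), 3))
# After six observations neither model is decisively ruled out, so the
# "reasonable" classification depends on when we stop observing.
```

In this sense, a prediction is less a revelation than a provisional bet: which model looks best is contingent on the evidence available at the moment of the decision, and it can change with the next observation.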

References

Arendt, H. (1998). The human condition (2nd ed.). University of Chicago Press.

Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press.

Christian, B. (2020). The alignment problem: Machine learning and human values. Atlantic Books.

Dewey, J. (1930). Democracy and education: An introduction to the philosophy of education. Macmillan.

Elish, M. C., & Boyd, D. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs, 85(1), 57-80. https://doi.org/10.1080/03637751.2017.1375130

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-194). MIT Press.

Hacking, I. (1999). The social construction of what? Harvard University Press.

Stengers, I. (2011). Thinking with Whitehead: A free and wild creation of concepts. Harvard University Press.

van de Poel, I., & Sand, M. (2021). Varieties of responsibility: two problems of responsible innovation. Synthese, 198(19), 4769-4787. https://doi.org/10.1007/s11229-018-01951-7

Whitehead, A. N. (2020). The function of reason. Lindhardt og Ringhof.



Comments


helenbeetham:

    Thank you for this thoughtful piece. You may be interested in something I wrote (less philosophically grounded I’m afraid) about embodied responsibility as one of the founding conditions for language, and why language models will never have it (https://helenbeetham.substack.com/p/on-language-language-models-and-writing). I plan to incorporate your use of Arendt into a keynote I’m preparing for a summit on AI in education and ethics, so a particular thank you for that.


    Carlo Perrotta:

      That piece is sooo good Helen. I shall be using it most definitely.

