ChatGPT and reflective writing

The question of whether ChatGPT can write reflectively is of particular interest in tertiary education, where the ability to reflect critically on past experiences is considered an important dimension of authentic learning.

A paper published in the journal Computers and Education: Artificial Intelligence (Li et al. 2023) recently made the remarkable claim that ChatGPT “may be capable of generating high-quality reflective responses in writing assignments administered across different pharmacy courses”. As we wait for the systematic replication of these empirical findings (which we should wholeheartedly encourage regardless of theoretical and critical inclinations), I believe that some scrutiny is already warranted.  

Let me preface my argument by saying that I fully respect the efforts of fellow researchers in this area, and I agree that generative AI’s capabilities are higher than what many of us sceptics would have believed possible. Nonetheless, I find the above claim and the related implications highly problematic, as they unwittingly validate the notion that the sort of reflective writing tasks currently being designed in higher education are meaningful learning opportunities. They are not.

A more realistic claim would be that ChatGPT can effectively reproduce certain forms of formulaic and predictable reflection that have become commonplace in tertiary education. Caveats of this kind matter: they add a much-needed sense of perspective to the current frenzied debate about GenAI and assessment, and they should not be glossed over or taken for granted.

The article by Li et al. draws on Boud’s framework of reflective activity, which was proposed in 1985 (Boud, Keogh, and Walker 2013). However, the ideas therein were so misinterpreted and misused during the following decade that Boud and Walker had to publish a clarification thirteen years later (Boud and Walker 1998). In this later article, they noted that a misguided emphasis on reflection had led…

… some practitioners to translate reflection and reflective practice into such simplified and technicist prescriptions that their provocative features–such as the importance of respecting doubt and uncertainty and distrust of easy solutions–become domesticated in ways which enable teachers to avoid focusing on their own practice and on the learning needs of students. Some applications of reflective practice which we have identified also exceed the bounds of ethical practice and expose learners to activities which are not only personally insensitive, but are not likely to lead to any productive learning (Boud and Walker 1998, 192).

I don’t believe it’s controversial to say that the use of reflective writing and reflective practice in education has changed little since this indictment was made. Indeed, more recent research suggests that students are highly sceptical of forced reflective writing as a form of assessment, viewing it as inauthentic and constrained by unrealistic expectations of affective performance: a maudlin display of professionalism which can be easily simulated for maximum benefit (Birden and Usherwood 2013).

These are the conditions in which the remarkable reflective performance of ChatGPT occurs. We should always keep these conditions in mind when we engage with the still nascent (inevitably tentative) research on generative AI in education, to avoid falling victim to the hype. Unwarranted hype about ChatGPT’s capabilities will further undermine how we conceptualise and value important educational constructs, such as reflective understanding. The problem (which may well be computationally intractable) is that reflection cannot be abstracted away from epistemological and dialogic contexts. Any attempt to foster and then assess it should therefore focus on the situated phenomenology of classroom interaction (the highs and lows of a very contextual experience, as well as the ground rules established at the outset to create a safe climate of mutual trust), rather than on an idealised and trivialised (made up) notion of professional authenticity.

With this more demanding notion of reflection in mind, the surprising effectiveness of ChatGPT in simulating reflective writing is not an achievement for generative AI but, more modestly, an indictment of formulaic and trivialised assessment practices. Any form of contextual reflection that has been codified to death to ensure its replicability, and thus its ‘learnability’, is doomed to fall prey to the combinatorial logic of automated language models.

I really hope we are not moving towards the immensely paradoxical scenario where the “good enough” performance of generative AI is viewed as representative of meaningful human learning. 

References

Birden, Hudson H., and Tim Usherwood. 2013. “‘They liked it if you said you cried’: How medical students perceive the teaching of professionalism.” Medical Journal of Australia 199 (6): 406-409.

Boud, David, Rosemary Keogh, and David Walker. 2013. “Promoting reflection in learning: A model.” In Boundaries of Adult Learning, 32-56. Routledge.

Boud, David, and David Walker. 1998. “Promoting reflection in professional courses: The challenge of context.” Studies in Higher Education 23 (2): 191-206.

Li, Yuheng, Lele Sha, Lixiang Yan, Jionghao Lin, Mladen Raković, Kirsten Galbraith, Kayley Lyons, Dragan Gašević, and Guanliang Chen. 2023. “Can large language models write reflectively.” Computers and Education: Artificial Intelligence 4: 100140. https://doi.org/10.1016/j.caeai.2023.100140.
