EP 36 - Miriam Reynoldson: The Open Letter Shaking Up the AI-in-Education Conversation

11/11/2025 53 min Episode 36

Listen "EP 36 - Miriam Reynoldson: The Open Letter Shaking Up the AI-in-Education Conversation"

Episode Synopsis

In EP 36, John and Jason talk to Miriam Reynoldson of Melbourne, Australia, about the Open Letter from Educators Who Refuse the Call to Adopt GenAI in Education.
See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)
Guest Bio:
Miriam Reynoldson is a learning design specialist, educator, and design facilitator working across higher ed, VET, and professional learning. She is currently completing an interdisciplinary PhD exploring the value of learning beyond formal education in postdigital contexts. Miriam researches and writes about education, sociology, and philosophy, and teaches educational design at Monash University.
You can connect with Miriam at https://www.linkedin.com/in/miriam-reynoldson/ or her blog https://miriamreynoldson.com/
Resources:

The Open Letter: https://openletter.earth/an-open-letter-from-educators-who-refuse-the-call-to-adopt-genai-in-education-cb4aee75
The Library of Babel listserv space: https://lists.mayfirst.org/mailman/listinfo/assembly
The Design Justice Network: https://designjustice.org/
Michelle Miller’s “Same Side Pedagogy”: https://michellemillerphd.substack.com/p/r3-117-september-15-2023-reflection

Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Middle Music: Hello (Chiptune Cover) by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
Miriam Reynoldson EP 36
[00:00:00]
Jason Johnston: Miriam, you are part of an open letter from educators who refuse the call to adopt gen AI in education. Would you, for us, summarize what this letter's about before we get into the details?
Miriam: So, it's a really short letter. It's a 400-word statement that essentially positions a certain stance for educators, in saying, "I choose not to use GenAI to teach, to assess, or to build my course materials. And I do not want to sell these products to students to do their work, either."
John Nash: I'm John Nash here with Jason Johnston.
Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
John Nash: Yeah. We're doing this podcast to let you in on a conversation that we've been having for the last almost three years now about online education. Look, [00:01:00] online learning has had its chance to be great, and some of it is, but a lot still isn't. And so how are we going to get to the next stage?
Jason Johnston: John, that's a great question. How about we do a podcast and talk about it?
John Nash: I think that's a great idea. What do you want to talk about today?
Jason Johnston: Today I'm not sure we've covered this at all. How about we talk a little bit about AI for a change, right?
John Nash: Never
Jason Johnston: That's a joke. Never heard of it.
Well, I'm just very excited today to be talking with Miriam Reynoldson.
We connected on LinkedIn, and she is somebody I just really wanted to have this conversation with around AI. She's an instructor and a student, a learning designer in Melbourne, Australia. Welcome, Miriam. Would you maybe just introduce yourself to our listening audience a little bit?
Miriam: No worries. I am a bit difficult to introduce because I really don't know where I am. I'm kind of juggling multiple identities at the moment and across multiple universities. So, [00:02:00] probably my primary identity in this conversation is mostly my teaching at Monash University. I'm also doing my PhD exploring non-formal learning in digitally mediated spaces at RMIT. I do a little bit of teaching there as well, and I'm also a digital learning design specialist.
Jason Johnston: That's great. Yeah, we connected on LinkedIn, and we'll probably talk a little bit more about how that came about, but a lot of it was around an open letter that you are part of, an open letter from educators who refuse the call to adopt GenAI in education. We'll put the link in our podcast notes if anybody wants to preview it before we get into the conversation. But Miriam, can you talk a little bit first about how this open letter came about, what led you to do it, and who you wrote it with?
Miriam: Yeah. The dirty secret really is that I was having a bit of a chat to a friend [00:03:00] of mine in Ohio, Melanie Dusseau, who was the first signatory on the letter. And she had sent me a link to this letter that had been put together by Literary Hub in the US, a consortium of publishers.
And it was essentially a position from the publishing industry: "We don't support the use of AI to replace our authors, our editors, or any part of the work that we do in furthering human creative expression." And I went to Melanie, "Why don't we have something like this, but for educators?"
And I think she said to me, "Oh yeah, yeah, the Netherlands have just done that." And she sent me another one. And these amazing people, initially out of Radboud University in the Netherlands, had written this incredible, really strongly worded letter presenting a position against the uncritical adoption of AI in academia.
And I went, yeah, yeah, like that, except not just [00:04:00] universities. So, we literally went, yeah, okay, we'll just put something together for like-minded educators who have made the personal choice. We're not going to say we're banning it or anything like that, but just essentially trying to create a space for educators like us who don't feel our voices are being heard.
And I was going away for the weekend, so we kind of just whipped it up over some exchange of messages. Melanie went, "Yeah, yeah, that's great. Let's go up." And it just went up, and then kind of blew up. So, I think we're just butting up against a thousand signatures now. But what's been much more striking to me is the hundreds of messages I've received from educators who are unable to publicly put their names to it but who feel profoundly sympathetic and are struggling with the dissonance and challenges of being faced with mandates to adopt [00:05:00] or encourage students to adopt generative AI tools in their education spaces. So, I think that we're really just trying to create a space where it's safe to speak about how we feel, even if that feeling is not identical to the sentiment in the letter.
Jason Johnston: Of these hundreds of educators that you've talked to, why do you think they feel like they support it but can't publicly support it?
Miriam: It's a profoundly political situation. And we probably don't have enough time for a huge unpacking of global politics. And obviously I'm speaking to people in North America, and I'm sitting here comfortably down under.
Jason Johnston: What's the market like down under? Asking for a friend.
Miriam: In universities, it's absolutely shocking. So, I speak as a learning designer. That's been the vast majority of my career. And for learning designers, this is profoundly difficult [00:06:00] because we don't have our own syllabi, we don't have our own courses and our own ability to determine what our curriculum is going to be.
We're there as support, and we work with academics across universities to guide them, particularly in the technological aspects of the work that they're doing. And so, it generally means being agnostic to a whole range of things, but particularly to the technologies that are being trialed, either by the academics or by the universities that have made partnerships with certain technology companies. That makes it an incredibly political position to have, more so for people in the third space than in academic roles.
But as I'm sure you're both aware, academic freedom is a very fraught concept. And so, we do often self-censor because we're very [00:07:00] conscious of how tight the education job market is.
Jason Johnston: it seems, anyways, yeah, that's a whole thing. It seems like we have educational freedom until we don't have it. Would you, for us just summarize what this letter's about before we get into the details?
Miriam: No worries. So, it's a really short letter. We used a platform that doesn't allow hyperlinking, so there are no references or anything like that. It's a 400-word statement that essentially positions a certain stance for educators, whether they're in K-12, early childhood education, university, community education, professional training, any aspect of education.
Walking in saying, "I choose not to use GenAI to teach, to assess, or to build my course materials. And I do [00:08:00] not want to sell these products to students to do their work, either." It's not about a ban. It's not about preventing students from making their own choices or evaluating the outcomes of those. So, from my perspective as a signatory, not as the author of the letter, I work with my students really closely on their use of generative AI, and I protect and respect their right to use it if they choose that.
Jason Johnston: Great. Well, it's a very well-crafted letter. What are your first questions about this? What would you like to get into?
John Nash: I wanted to get into less of a question and more of a reiteration of something you said, Miriam, which is that it's not a request for a ban. And I think if you read between the lines here, it aligns with this idea that there could be a world where large language [00:09:00] models could be developed and work in a way that we could all agree with if some of these issues were resolved, but they aren't. And the world we live in now looks like what is laid out here in the letter, and therefore, if you agree with these notions, then you should be a signatory to it, I think.
So, I could see how groups, stripes of people, could see this as a salvo against AI. But rather, it's about allowing people to have agency to say that they believe that this track that we're on now is not viable. There could be a viable one later, but right now we sign to say "no." Is that fair?
Miriam: Most certainly. As you were speaking, I was thinking about the concept of a ban and its relationship to legal regulation. As educators, we are not in a space where we [00:10:00] can say outright, "this should be made illegal, and therefore it's banned by default." But I think the trouble with bringing in a legal conversation is that it becomes quite a final and conversation-ending decision. What you're talking about, John, is the potential for a generative AI technology to become valuable and deployed in service of human goals and educational goals.
If we were to ban or to make illegal these technologies, that would foreclose that possibility. I think there's always space for exploration, but I think there's also a critical role for education to play in defending people's choices and values in this moment too.
Jason Johnston: Yeah.
John Nash: I think that's great, Miriam. I really enjoyed looking [00:11:00] at the open letter because, as Jason and I were talking about, I've been coming to a personal reckoning with my own cognitive dissonance around some of these aspects and others in the list. I wholly agree with a lot of these, and so I'm starting to ask myself: what about all these other matters that I think are really important, or that I actually know outright?
I love Timnit Gebru's work. I know that these models are biased. I know all kinds of awful things, and yet I'm asked to think about, and I use, and I work with generative AI. So, I'm starting to really think more carefully about how I want to be in that space.
Miriam: That's really intriguing. So, I know when you initially reached out, Jason, it was probably, what, two months ago? It's been a little while.
John, you're saying you looked at the open letter recently. Is that the first time that you'd had a look at it?
Okay. I'm purely curious [00:12:00] because a lot of people have come to me and said it's really quite strident language. You know, it's really emotive. And I'll be honest with you, I tried to tone it down as much as I could. It's the most fact-based, dispassionate set of concerns, because I was conscious that we were potentially writing for hundreds of people. I thought maybe 17 or something, but you know, who knows?
And I don't want to put emotions in other people's mouths. I simply wanted to state: these are concerns that we have, and they mean we've made this choice. So, it was really surprising to me to find that it produced those kinds of emotions for people. Did you find that, or was it different for you?
John Nash: No, I did not. I did not find it strident, and I appreciate you asking that. 'Cause I was wondering now "why didn't I?"
It reminds me of some of the things that I've signed onto before. Another network, the Design Justice [00:13:00] Network, comes to mind, which has principles around the ways in which designers ought to be thinking about the world and using design to, for instance, I'm looking at them now, "sustain, heal, and empower," and to "center the voices of those who are directly impacted by the outcomes of the design process." So, it's really decentering designers as experts over people, and prioritizing community and others as experts. And that rang true with me in thinking about the importance of valuing human intelligence, the spirit around academic integrity and curriculum development, and honoring students' rights to resist and refuse as well. So, it's looking to others and honoring the voice of others in the process of considering the use of GenAI.
Miriam: I have to say, I remind myself constantly there are no authors in an open letter. I think [00:14:00] it's important to sort of de-center and recognize that everyone is a co-signatory. But those words that resonated with you are Melanie Dusseau's; she was the person who plotted with me on putting those words together. And she comes from a literary and creative writing background. So, yeah, she's able to distill some of those values in very few syllables, which is wonderful.
John Nash: I think it's that not endorsing the automation and exploitation of intellectual and creative labor. Not only in the background of the creative labor that was used to train these models, but also in thinking about how we want to enhance the intellectual and creative work of our students, and therefore, what role generative AI should or should not play, or whether there should be a presence of generative AI in that process at all. And my inclination is less so, to the extent that we want to engender critical thinking on [00:15:00] the part of our learners. I think these are good.
Miriam: It's a question I ask, not because there's a right answer or an answer that I want to hear but I think it's interesting to find what sparks emotion where you perhaps don't imagine there is any, or indeed what emotions those are.
But I also just really appreciate that you looked at it from a specific perspective, and you spoke about design, because one of the things that we absolutely couldn't include in it, I think it's less than 400 words, you know, was that every discipline, every field, every occupation is its own universe of practices and values.
And so, for Melanie, coming from the perspective of poetry and art and creative, like painful, [00:16:00] gut-wrenching human expression, to have that emulated by a text-extruding algorithm of whatever kind seems like some kind of profound violence. And of course, that is not a feeling that you would have if you were looking at this tool and thinking, this is a tool that can support me to crunch enormous volumes of qualitative data that is dispassionate and not associated with human pain or struggle. You know, the context is completely different.
So, I come at it from a perspective of educational design, which I think is obviously a context that we all share. And I look at it thinking, well, I've kind of given it my best shot. I've tried to scope out all of the possible ways of applying generative AI text and visual tools that I can.
And it's [00:17:00] really produced nothing but irritation. So, I'm not going to recommend it to my students who are learning design, because that would be disingenuous. But that's going to be a different story for every educator, every practitioner. It's really difficult to encapsulate.
John Nash: It is. And as I was talking with Jason earlier about this, I have now, in the last, I'd say, even six months, a growing cognitive dissonance around the fact that when I look at number one, "we will not use generative AI to mark or provide feedback on student work," okay, I can get behind that. "Nor design any part of our courses." Oh my God, I'm using it as a partner to design a lot of stuff in my courses. Read number two, "I will not...". Yeah, I do believe that they were unethically developed in many ways. I don't accept the evidence of the sales agenda very well, et cetera, et cetera. So, I got through number one and I kind of cringed a little and I thought, oh boy.
And I don't know what it would [00:18:00] be like for me. I've never been a smoker, but I mean, if I had to quit cigarettes, that would be the designing part of my courses. I think that's where I would go through some withdrawal. And then as I think more about that, I think, well, why would I go through withdrawal?
And I said, well, because I would really put in more of the intellectual, creative labor than whatever partnership generative AI provides for me there. I would be putting in more time. Aha. So that means I'm really not managing my time. I need to look at it as, as a professor and a person with a family, you know, I do teaching, research, and service. And so, if I'm really valuing the intellectual labor and creativity that goes in, I need to not do other things, because the generative AI is probably doing things for me that allow me to do things faster. But is it at quality? So, these are the discussions I'm having in my head. But I think that it's really a thought about what we value in our time. And so, [00:19:00] that's been interesting.
Miriam: That is super, super interesting. Acknowledging my positionality again: I teach really small cohorts of postgraduate students, and I'm not a full-time academic. I'm not dealing with student cohorts of hundreds of students in a class. I don't have that kind of struggle of thinking, how on earth am I going to get through marking week? So, I lack that context and that challenge of "this all has to get done, come hell or high water, and there are tradeoffs that I need to make." Because when marking week comes, I go, oh my God, I finally get my best opportunity to dig into what my students are trying to do. It feels very different.
You know, 15 versus a hundred.
John Nash: Yes. And that's a lovely point, because I think that when some others get to that period, when it's marking period or marking time, they may see that as [00:20:00] drudgery and not an opportunity. It reminds me, we keep referring to someone who's sort of become a friend of the podcast, Michelle Miller, a cognitive psychologist at Northern Arizona University who talks about "same side pedagogy," that we should be on a learning journey together. So, the marking period is an opportunity for us to get to know our students' work better, to become part of the ride with them through their learning journey. But I think that also many things have been set up, and we can talk more about what's happened in online learning and the way assessment is considered and things like that.
It becomes more transactional than a co-learning journey, and then it can be seen as drudgery, and then dread: oh my God, I've got to do the marking period now, and that takes too much time. Can I do something else? Aha. Generative AI. Hmm. Could be alluring. And thanks to the product placement and the product agenda, it can be made to sound alluring.
We were recently talking, in an episode we're about to release, about what instructors are doing to sort of [00:21:00] fake the professor of a course so that the student can get prospective feedback on what the professor might say before turning anything in. So there's all kinds of it. It's Grammarly. Yeah. That's really interesting.
Jason Johnston: We connected on LinkedIn. I was very intrigued by the letter, and I really agree with much of it. In the same generous spirit, I will say that you were entering into this conversation with us on LinkedIn and really opening up the door for people to give some feedback. And I think what invited me into a conversation with you was that I responded something to the effect of: I agree with a lot of these things, but I have a slightly different perspective on some. And you asked me what some of those things were.
I think probably the places, like John, that I [00:22:00] think about where I'm not, and maybe it's just an unwillingness to give up AI, but I also see part of my purpose as a teacher to be educating my students on how to move into a world that is using AI, and to educate them to use it effectively. And when I say effectively, I mean in a way that has a human in the loop. I talk a lot with my students, as well as my staff, about the human-AI-human sandwich, where we start with our own effort and creativity, perhaps we use some AI in the middle in terms of efficiency, and we always come back to the human review at the end to make sure that it's accurate and it's unbiased and all the things.
So, my approach to some of this is probably more of a moderate use, where I'd say, yeah, I agree with some of these things, but I think I'd prefer to be [00:23:00] a solution in the middle of it and to help guide students and people and staff and educators to use it in a way that is thoughtful, versus a full resistance altogether.
But I'm curious what moved you into this space of more of a full resistance to it in education. And what then compelled you to write this letter and to try to move it in this direction?
Miriam: So, I think it's interesting that you suggest quite a common position: that I feel, as an educator, I have a responsibility to support my students to understand. And it's the choice of verbs under that, to support my students to do something, that is really where the cookie crumbles, I think. We have all these amazing debates about [00:24:00] AI literacy and what is actually encompassed within that. And if we put "critical" on the front of it, does that change the contents of our sandwich? To run with the analogy.
I appreciate that your own experience as a user has been different from mine. And I think that we do, as educators, have a responsibility to be honest with our students about what we find valuable, as the people they're learning from.
I don't want to stand up in front of my class and tell them, "well, this is a really amazing tool," when I don't like it and I personally find it to be, like, worse than useless, something that adds time to our workloads and requires a great deal of rework. As an example, I led a project last year across a university seeking ways of incorporating generative AI tools in learning design, across the sort of end-to-end course development [00:25:00] process. And it was a really disappointing project.
Ultimately, we found isolated use cases that worked for a single person but didn't work for the rest of their team, or things that produced one-off resources that wouldn't be edited but needed to be rebuilt from scratch if they ever needed to change.
And we walked away from it going, we were going to produce a toolkit here. Can we actually release this? Are there any tools? And so, having done that work, I personally would find it incredibly disingenuous to stand up and tell my students, "I don't find it useful, but I'm sure there are uses." Because I'm not sure there are uses, not for what I do. And I think that for somebody who has found particularly effective things, it would be disingenuous for them not to say so, because every teacher and our experiences are [00:26:00] different. This is where I think ethics is about decisions and actions that a person makes, not universal principles that should be followed by everyone regardless of who they are and what they do. Can you remind me what other things you were asking?
Jason Johnston: I know, it was a long thing. It was more around this kind of question of, rather than full-on resistance, saying "I will have no part in AI at my school or in my classrooms, and I don't advocate it for my students' use," trying to be, kind of, the change in the middle. And I know even as I say that, it perhaps sounds a little trite to say you're going to be the change in the middle of things. But that really is my position on this: I want to learn and understand all that I can about it. I try to find where it's useful and where it's not useful, and try to advocate for a good use of [00:27:00] it as much as possible.
And within that, like you said, create contextual ethical agreements. I was part of that with the University of Tennessee overall, as well as then contextualizing it into my own team, which is a learning design team, thinking about how we use it, really thinking more about almost an ethics of care, because we have various types of people on our team who do different work. And so, probably the thing that I feel most strongly about in terms of the ethics is having AI replace one of your colleagues. That's a lot of where my ethics come in. So, our instructional designers should not be using it to replace our graphic designers, and our video people should not be replacing our instructional designers with it, and so on.
So, that was my larger question then about full-on resistance: what drew you to [00:28:00] that versus being a change in the middle?
Miriam: Something that I will say, and obviously it doesn't come out in the letter because I'm me, I'm not everyone, is that the letter doesn't say, "I will not allow my students to use generative AI tools," although I think it often reads as though that's the position. It's often been read as "we advocate for a ban," which is absolutely the opposite: the fundamental driving principle of this is about choice. We've made a choice, and we're asking our institutions to support it. The thing is, I teach adults, and I joke, but it's not a joke, that they wouldn't do what I told them if I told them.
Because they can make their own choices. I'm just a person that they meet on their journey. And so, we don't actually restrict the use of generative AI at all in any of their learning or in their [00:29:00] assessment submissions. And that means that I absolutely have a responsibility, and I take it very seriously, to talk about the ways that they are using, or thinking about using, mostly large language models. Mostly it is text that is generated, but we also do some digital design work, so there's some video and visual material that they produce. Sometimes I'm less concerned about the quality of the visuals, although, you know, we are at the moment setting aside all of the fairly glaring issues of plagiaristic practice and data exploitation that make those things happen. But I am extremely concerned by what I see when students use large language models. That's not to say that it's all junk, it's absolutely not. By and large, what I've seen over the last three years, because we've never restricted this, is that the [00:30:00] students who are doing well continue to do well, and the students who are struggling continue to struggle, because they're not able to discern or synthesize quality material from what is extruded from the tools they're using. So that is one indicator for me that using Tool X or not using Tool X is completely immaterial to the skills that we're teaching. So, I then, from my own, again, entirely limited, totally idiographic perspective, say: I'm not going to stop you, but I'm going to ask you to tell me what you're doing, because that helps me provide the most targeted, tailored, meaningful feedback that I possibly can. Because I'm going to help you evaluate your outputs. That's my responsibility. I don't care whether AI was used to produce them. I care about whether they're quality, and whether you [00:31:00] are able to consistently produce quality. That's a different story. That's about my students' choices.
Jason Johnston: That's interesting. And perhaps I didn't read it carefully enough, but from my impression of it, I would have assumed that your syllabus would have a very strong no-AI position. So, for instance, at our institution, the provost has provided three kinds of examples, where one is a strict no-AI approach, one moderate, and then an open one, right? I tend to adopt the moderate one for a variety of reasons, because I want to have more transparency. And I assumed from reading the letter that if I stepped into your classroom, it would look like a strict no-AI policy.
Miriam: Sure. You've got to remember that the letter is not me. There is no "I" in [00:32:00] any of those statements. There's simply a determination for any individual who chooses to sign it: I'm not going to be using it myself. And that's something that I have publicly agreed to, and a lot of people have privately agreed to and reached out to me and told me, "I can't sign it."
"I can't be seen publicly to be saying this." But that is the position I've personally taken. It's entirely a personal decision. It's got absolutely nothing to do with what signatories instruct their students, aside from not selling whatever OpenAI product has signed a partnership with the university.
Jason Johnston: Yeah, and I think that's a place, I don't know what your thoughts on this are, John, but I certainly wholeheartedly agree with the right to resist, in the sense that this is not an inevitable [00:33:00] future, and that in order to be an upstanding and productive faculty member, you don't have to jump on the AI train.
I think that students do need to have awareness, and I think it actually is good for them, whether or not they use it productively in either their classwork or their work. I think it's really good for citizens, all citizens, to understand what's going on here. Some of it is to understand how it can be fairly powerful, that it's getting better, and how much it can mimic human responses. I think those are the things that could be really eye-opening for people, whether or not they choose to use it.
Miriam: Something that I will say is that "it's getting better and better" is a very debatable statement. I'm extremely conscious that the scaling laws are collapsing at this point. By a lot of estimates, we are going backwards in terms of the performance of the leading models. As well, what we're actually seeing is a rise in the multimodal models that are promoting [00:34:00] things like access to producing deepfakes for anyone.
This is embarrassing, but I can't remember the name of the, what is it, Nova model that was recently announced, which will enable children to produce deepfakes of themselves for social media. That really frightens me. And I'll tell you one of the reasons it frightens me: I had a student, and again, all my students are postgraduates, they're mature age, who submitted a deepfake of one of my colleagues.
So, one of my peer teachers, doing a bond dance, in their final project assessment. We found that confounding. I don't think it was submitted in bad faith. I think that student thought it was interesting and that the person marking it would find it amusing. And my colleague went, "I don't even know what to do. It's a failure, but I also do not know what to [00:35:00] do." It becomes incredibly frightening. Something I did also want to add, again, idiography, right, is that my syllabus has always included AI. And when I say always, I mean since we started running the course in 2021, because it is a digital education design program.
And so, we've been including that as a specific component of our curriculum. We have a component that is specifically about the use of not just generative AI, but also other forms of AI, in education design and education systems.
And as you can imagine, learning analytics and big data are a massive part of that. But of course, it's been completely taken over by generative AI tools in the past two years or so. Where I draw the line is inserting "how to prompt engineer" into the fairly limited [00:36:00] time that we have with our students. There's material all over the free internet that enables them to have a play with prompting. And students are much more likely to look for that stuff than to listen to what their teachers are telling them. But they are much more likely to listen to feedback, and to the kind of advice and support that they receive about the subject matter, if they're struggling. So, I think that's where we as educators do need to have a sense of responsibility about what it is we are qualified to teach. I'm a bit of a nerd when it comes to this stuff, but I'm not a computer scientist, not by any means. I'm probably more of a digital sociologist. And I think that while I can share some of my perspectives, I don't want to spend all my students' time lecturing about those things or making them feel bad about their own choices either.
So, I guess that's where I need to be [00:37:00] careful about how much time I choose to spend in class discussing these things, allow them to make their own choices, and then teach them what I'm there
Jason Johnston: Right.
Miriam: To teach them.
Jason Johnston: Yeah. You don't want to be at a, you know, "This class has nothing to do with AI. Okay, now for my first lecture on AI."
John Nash: Hey, we're taking a quick pause here 'cause we're wondering, is this conversation useful?
Because if it is, we'd love it if you'd take a moment to follow the show, so you don't miss any new episodes.
In Apple Podcasts. All you have to do is tap the plus sign on the show page, and in Spotify you tap the follow button.
Jason: Yeah, and also, if you're finding it useful and you are liking this show, we'd really appreciate it if you rated us; it would help us in the algorithms and get us in front of other people.
In Apple Podcasts, you kind of scroll all the way down, you'll find some stars, and that's where you rate.
And then in Spotify podcasts, on the podcast page, there's a three-button menu and you click that and then rate this [00:38:00] podcast.
We would appreciate it.
John Nash: Because the algorithms are run by AI and remember, AI needs to be our friend.
Jason: That's right. We want to do whatever we can to support AI in the hard work that it's doing these days to help move us all along into a better future.
John Nash: It's the hard work AI is doing to make sure that you and I keep talking about AI.
Jason: That's right. That's right.
John Nash: But mostly what we're interested in is whether you are seeing changes in your own work. We're collecting some testimonials. Tell us a story. If the show has influenced your thinking or your practice in any way, you
can share that with us at onlinelearningpodcast.com. We have a link at the top of our page that points to a short form. We'd love to hear from you.
Jason: Yep. Bright yellow letters. 'Cause somehow we ended up on that color. I'm not sure how, but we did.
John Nash: But on its black background, it's accessible. Is that what you're saying?
Jason: Absolutely. Of course it is. John. We're the online learning podcast.
John Nash: We have to be [00:39:00] accessible. Excellent. Alright, back to the episode.
Miriam, I also teach postgraduate adults. I'm in a department of educational leadership, so ostensibly we train teachers to become school leaders, but also, in post-secondary and higher education settings, we have doctoral programs where people aspire to lead inside colleges and universities. And so, I appreciate the group that you're working with.
And also a comment you made: I think you said that with the use of AI, the students who are doing well will do well, and the students who struggle will struggle. It made me think, because I'm in a department with colleagues who think about the life cycle of learners from preschool to post-secondary, and particularly about what happens to learners when they decide to enter university.
I'm wondering then: why is it we have students who enter university who are [00:40:00] still in the stage of struggling, and maybe didn't do so well in high school? We think all the time, actually, about how we might prepare students better for entry so that they're not struggling. And I don't think there's a pat answer for that, but it made me think: when you have a post-secondary adult student who, in ostensibly good faith, creates a deepfake of a peer teacher, I would have wished that they knew that wasn't a great idea. So how might they have been socialized to know that? That's a form of a struggle, in a way.
I mean, it's not an academic struggle, but it's, you know, a critical thinking or a decision struggle. But I'm wondering how we might be supporting our learners along the way to be more prepared.
Miriam: No, I think it's a really, really interesting question, and one that we could have a very long, in-the-weeds session on. But when I think of students struggling at [00:41:00] university, the first thing that comes to mind is a student who lacks the cultural capital that is required to master the hidden curriculum.
Which I know is a cliche, but it's a cliche for a reason. And so, we're often talking about academic writing skills, or we're talking about the particular forms of logic that a teacher is looking for. But then bringing up a case like a student producing something that is really, really ethically questionable and not being able to recognize that it is kind of not on, that is a different kind of struggle, isn't it? That's one where I start to get on my high horse a little bit, forgive me, about our devaluing of the humanities across the entire life cycle of somebody's educational journey. I'm getting more and more into ethics, and I recall, Jason, you spoke about [00:42:00] ethics of care earlier, which, yeah, I kind of had a bit of a squee moment about. I think that it's something that we're terrible at. If we are going to start inserting something broadly into our curricula, it shouldn't be AI literacy. I don't think that there's any way of supporting students other than to center values in our education.
When we're talking, if I shift back to equity, and I shift back to those spaces where cultural capital is lacking: a lot of my students, yes, they're in a postgraduate course, but that doesn't always mean that they've completed an undergraduate program. Sometimes they have been admitted on grounds of a significant amount of prior experience. Sometimes they've been working in an educational space for a significant amount of time. I'm thinking of one student in [00:43:00] particular who told me that they were being given a significant amount of additional support through the university as part of a kind of bridging, so that they were able to do the postgraduate program. And it was absolutely natural that of course they were going to struggle. It was their first time at university. And this is a person who, you know, I couldn't tell you how old they were, I haven't got that information, but I would suggest that they were at least in their fifties. There are all kinds of struggles that someone is going to have that are related to being in a space that is brand new to them, while being at an age where it's assumed that they have really significant levels of understanding of how to conduct themselves, both at university and in an online program. I don't necessarily think that the question is about preparing them so that they're able to fly the moment they get [00:44:00] in, but about holding them while they're there. I really quite like it when my students do badly, because it means they're pushing themselves further than they can go yet. What else is the point of school?
John Nash: Yeah. Thank you for that. One other thing I wondered: I liked looking at the letter, and then I liked looking at a recent post you made on LinkedIn talking about this inevitability argument. You posted, "there are possible worlds where large language models and other big data algorithms are developed in ways we can support, regulated with clarity and values and deployed in service of meaningful goals."
That would be lovely. If that were to come about, would that solve some of the issues that the signatories are signing onto in the letter?
Miriam: In this beautiful utopian imaginary, absolutely. You might notice early on in the letter it says [00:45:00] "current generative AI technologies." And then it moves on to specify, I think we list, particular companies that are behind 99% of the generative AI technologies that are available. And of course, within universities it's rounding up to a hundred percent.
What's available is OpenAI, or spun-off OpenAI, or spun-off Copilot. It's not really about generative AI as a concept. That's a very nebulous concept in the first place; what kind of computing technology is not generative?
But it's about how we are currently responding to what was essentially a sneaky release of a completely illegal product approximately three years ago, a product that a number of other companies already had [00:46:00] in gestation but weren't saying anything about, because they knew that they had stolen millions of copyrighted documents to produce them and were afraid to make those public. OpenAI did it, and they went, maybe the water's warm. I think we can do better than that.
John Nash: I like what you say here. The AI inevitability argument, the throwing up of the arms, "this is inevitable because it's inevitable," is really saying that we have to take these unethically developed tools as they are, and that regulation with clarity and values, deployed in service of meaningful goals, is probably long down the road, if it's ever coming. And so therefore, let's just take it as it is, because the value proposition being put forth by the companies and their agents suggests that [00:47:00] this is going to be good eventually.
"Look, they're trying, aren't they? Look how they're trying," I think it sort of goes, and so therefore, "don't worry, it is inevitable." Thank you for this.
Yeah.
Miriam: Oh, thank you for that. One of the most terrifyingly insidious arguments that we come up against is the notion that, yes, we have reversed our greenhouse gas emissions trajectory; yes, our emissions related to data centers have quadrupled and are rising horrifyingly and steadily as more and more data centers are stood up across the world; but if we keep developing AI, eventually it will get so good, it will solve this problem. And to me, any [00:48:00] artificial intelligence or computer scientist worth their salt is going to be able to refute that one pretty rapidly. I don't know if you read Gary Marcus.
John Nash: Yeah, big fan.
Miriam: It's not an excuse to keep doing it until that happens. It may be a reason to hold out hope. I think, as you asked earlier, John, could we develop systems that resolve and address a lot of these challenges, and could a lot of these concerns then go away? Of course. But we don't keep supporting the slavery while we wait.
So, I think there's space for acknowledging that some people absolutely need to use whatever they possibly can to achieve short-term goals that help us get towards long-term ones. But this is not work in service of that long-term goal. I think it fundamentally does come back to values. What do we want to see? And is what we're doing right now serving that? [00:49:00] And if it's not, what could we do instead?
John Nash: Thank you.
Jason Johnston: As we kind of try to wrap this up, what's the best way for people to either engage with you or the letter, or to further their own thinking on this matter?
Miriam: Sure. I am incurably on LinkedIn, so I'm pretty easy to find there. I think I'm the only Miriam Reynoldson in the world. I do have a blog; again, it's just miriamreynoldson.com. But if people are in the education space in whatever way, teachers, parents, leaders, administrators, librarians, students, anybody who is connected to education and feels this is something they feel strongly about, I have been rallying with some sympathetic folks across the world to pull together a bit of an organized space called the Library of Babel [00:50:00] group. So, I'll share the link to access our listserv, I won't try to quote the URL on audio, and that might be a useful way for people looking to connect, to find the right orientation for them, and to network.
Jason Johnston: Sounds great. Thank you so much. Yeah, we'll get those links from you, to your Substack, to your LinkedIn, and for that group, and we'll make sure we put them on our website. And for those listening, it's onlinelearningpodcast.com.
That's onlinelearningpodcast.com.
Miriam, thank you so much for visiting with us today. I think you've really challenged us and given us a lot of food for thought today. Just a delightful conversation. So, thank you so much.
Miriam: It was a pleasure. I hope I wasn't too ranty.
Jason Johnston: No.
John Nash: Not at all.
Miriam: It's too tempting.
John Nash: Yeah. Miriam, this was a delight. I think that you are part of a good [00:51:00] conversation that's going on that we all need to have. As I said, there's a sort of cognitive dissonance occurring. I'm running into more and more people in my circles who are not interested in bringing generative AI into their world for a number of reasons, whether environmental or ethical. And I think that after almost three years of this sort of juggernaut of "look at where this is going" and "this is inevitable," I'm seeing more and more people think, well, maybe it doesn't have to be inevitable, and there are ways we can think about this more thoughtfully. And so today really helped me solidify some of that thinking. And I thank you.
Miriam: Yeah, I really, really enjoyed it, and I just find that this space is dynamic, decisions are dynamic, our minds can change, and the world changes around us as well. It's exciting that there are more people near me, not [00:52:00] necessarily geographically proximate, but that I'm able to connect with, who resonate with where I'm sitting.
But it's also incredibly valuable to me to have all kinds of conversations with people sitting in all kinds of other places, 'cause otherwise it's just me talking to me.
And, I mean, that sounds amazing to me.
John Nash: Definitely. Miriam, thank you so much.
Jason Johnston: Yeah.
John Nash: It's just been delightful, really. Thank you for staying up late with us.
Jason Johnston: Yes.
Miriam: Likewise. Thank you very much.

END OF TRANSCRIPT
