A.I. in the Classroom with Anna Mills

LISTEN TO THE EPISODE:

 

Learn how to leverage artificial intelligence for teaching and learning.

Apple | Spotify | YouTube

Anna Mills, a writing teacher at the College of Marin, discusses how A.I. changes the rules for coursework and what faculty can do about it. She maps practical ways to use A.I. for learning and feedback while naming the real risks around privacy, security, and unregulated growth. 

Key points:
• What A.I. agents are and why they matter in education 
• Where A.I. can help faculty with OER, accessibility, and quick learning tools 
• Using A.I. for writing feedback inside a human-centered process 
• Building skeptical A.I. literacy through prompting and pushback 
• STEM-friendly uses such as comparing explanations and checking reasoning 
• A peer plus A.I. review workflow with required reflection and metacognition 
• Academic integrity as a "wicked problem" with tradeoffs not perfect fixes 
• Detection as a deterrent when paired with student conversations and trust 
• Course design strategies that increase intrinsic motivation and reduce outsourcing 
• Communities of practice that give faculty time to learn, test, and share 
• Looking toward 2030 with concerns about AGI, guardrails, and valuing the human 

"I like to say, 'A.I. for input and not A.I. for output.' [For example], ask A.I. to explain its reasoning for how it solves a problem, and ask that in two different ways that are contradictory. Then, the student looks at which way makes sense. Do either of them make sense? Using A.I. in ways that push us to be skeptical of it, I think is really valuable for students, especially as it becomes more sophisticated, more plausible. We can bring value by questioning it. You want to prepare for a workplace where students are going work with A.I. You have to be suspicious of A.I. You have to be ready to push back and not accept what it gives you. I favor pedagogical applications that help students practice that and help them build some confidence in relation to A.I." - Anna Mills

About Anna Mills
Anna Mills has taught writing in California community colleges for 20 years and is the author of the widely used open educational resource textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College and the newly released AI and College Writing: An Orientation. Her writing on AI appears in The Chronicle of Higher Education, Inside Higher Ed, Computers and Composition, AIPedagogy.org, and TextGenEd: Continuing Experiments. She serves on the Modern Language Association Task Force on AI in Research and Teaching. As a volunteer advisor, she has helped shape the pedagogical approach of MyEssayFeedback.ai, and she currently serves as co-Principal Investigator for the Peer & AI Review + Reflection (PAIRR) project funded by the California Education Learning Lab.

About Dr. Al Solano
Dr. Al Solano is the Founder and Coach of the Continuous Learning Institute, where he partners with colleges and universities to strengthen student success and equity through sustainable, campus-driven practices. A strong believer in the power of kindness, Al coaches higher education teams using his signature framework—the “Three Cs”: Clarity, Coherence, and Consensus.

With decades of coaching experience, Al has worked directly with more than 50 institutions and trained thousands of educators nationwide. His widely used, practitioner-focused articles on student success strategies, institutional planning and implementation, and educational leadership are embraced by campuses across the country.

Al began his career in K–12 education, later serving in roles at two community colleges. A proud community college transfer student, he earned his bachelor’s degree from Cornell University and a doctorate in education from UCLA.

 

Transcript

Al  

Welcome to the Student Success Podcast. If you work in higher ed and want to learn ways to support students, check out today's episode. Welcome, Anna.

Anna

Thank you. I'm excited to be here.

Al

So tell us a little bit about yourself.

Anna

I've been a community college writing teacher for many years in the San Francisco Bay Area. I got into the open educational resources world and wrote a textbook with other California community college faculty. I was early, before ChatGPT, in discovering that I love being in conversation with other educators, experimenting with these systems to understand what this means for education, and writing about it on social media. So I've gotten involved in various endeavors, including a Modern Language Association task force on AI, and I give a lot of faculty workshops. I've done an AI orientation OER textbook, and I'm a volunteer advisor for a not-for-profit app, MyEssayFeedback. So I've got a lot of projects going. But mostly I'm excited to be both teaching writing at College of Marin and taking part in this kind of intense conversation space that goes beyond my institution, including being on a podcast with you.

Al  

Well, I don't know if you're aware, and I think you're very much the humble type, but you're pretty well known. You have a good amount of followers; people listen to you. I believe you've been featured in The Chronicle of Higher Education and Inside Higher Ed. So it's such a pleasure to have you on. Today's topic is AI, and AI is just becoming so big, so vast, with so much involved in it. I wanted to start off with AI agents, because you've presented on this and written about this. Tell us a little bit about AI agents in education.

Anna  

Agents is a funny term that people use in different ways, but basically there's been this big push from the AI companies to build systems that would not just talk to us but would take action, use tools, and do complex processes for us. They've been trying to do that for a long time, and it's really only since early fall 2025 that they had some systems that worked better and became accessible to more people. Those are the browser agents, so sort of like a chatbot that can do things in your browser for you, right? It can move the cursor and click and fill out forms and schedule things, or research things and summarize them. That became more broadly available, and around the same time we had Claude Code. So we had these agents that got a lot better at coding and making apps and working with files and doing things on our computers for us. And so it became more worthwhile to try to use these systems in those ways. We're at this strange moment when there are some people experimenting with agentic AI, and there are a lot of people who are not, who are still thinking of AI as a chatbot you talk to. It's a little bit dizzying because there are so many possible ways to work with these agentic systems, and there are so many risks to them. They're still not perfect, they make mistakes, and you have to make all these decisions about what to delegate, how much to check it, and what to do about security. It's kind of a wild west moment, I think, in terms of what AI means and how we're using it. I don't know if that seems like a useful broad definition.

Al  

Yeah, thank you. And how are you seeing it in the classroom setting? What are some uses for it, and, dare I say, abuses of it?

Anna  

Absolutely. Well, I think the first way that educators started to think about this was: oh, this means that students can tell a chatbot to do online coursework. It reduces the logistical barrier to cheating, and it actually makes it so that the student wouldn't even have to see what the assignment is. They could say to the chatbot, "I'm logged into Canvas. Go see what my assignment is and do it. Don't ask me questions." And these chatbots from the big companies, like OpenAI's, Google's, and Perplexity's, will most of the time in my testing actually comply. They won't even say, "Oh, I can't take a test." They will just do it. So there's an immediate academic integrity problem that goes beyond what we've already been dealing with. That raises all these questions about how we make that less tempting, or how we stop it, who can stop it, and how we talk to students about it. I don't think that kind of cheating is very common yet. I don't think the awareness has hit among students that this is so accessible and so easy, which, you know, is fine with me. But that's a big, big question that I've been writing about and talking about, honestly trying to push the big companies to tell their systems not to perform work for students. And at the same time, we need the LMS companies on board. We need some technical barriers here that will help us preserve more space for online learning. And we also need to recognize the limits of those barriers, as we do with all AI. There will be some kind of arms race around this. If we're trying to know what's human and what's not, there will be ways that people work around that so AI systems can masquerade as human, and that will be an ongoing struggle in online spaces.
That's the biggest part of the conversation in education right now about agents. There's also a growing interest and excitement among some educators, especially on LinkedIn, in using agentic AI workflows in education. For example, Liza Long built a process for having an agent fix her OER textbook and make it accessible, and it worked pretty well. Another example: Rebecca Forden, who I just saw on the Teaching in Higher Ed podcast, built eight apps for her students to engage with, each one the night before using Claude Code. She's not a coder and had never considered she would do that. So there is a sense of, oh my goodness, there's so much more I could do now with these systems, and maybe I could harness that in creative ways or ways that create major efficiencies for me. People are really just exploring that and starting to share what they're trying. There are a lot of concerns, though, because we really can't use these systems with our institutional data. At this point, there's no safeguard on that. There's no institution saying, "It's fine, come let your agent into your faculty account and let it do some of your work for you." Really, we're not allowed to do that, and there are a lot of reasons we probably shouldn't be. So there are limits on where and how we could use these systems. But I think it's just dawning on us what the power of it is, and educators need to understand what's possible and how people are working with it, because that's the shape of AI literacy to come and the shape of what our students are starting to realize, starting to do, and may be expected to do in the workplace.
So there's another layer of AI literacy that we need, and I do think it depends to some extent on us trying these kinds of workflows ourselves. That's where it gets tricky, because maybe we don't want to try it for ethical reasons or for privacy and security reasons. At this point, it's just worth developing an awareness: oh, I actually could tell the system to make me a spreadsheet of this, then format it in this way, make a handout based on that, and then make a game based on that. It's actually possible to tell systems to do things, and they do them to some credible level. They take actions. It's kind of awe-inspiring and scary. I recognize all the problems with how these systems have been built and the risks involved, and I still think we need to acknowledge that this is quite amazing, even where I think it's terrible. We have to wrap our heads around the fact that this is a lot of power being unleashed.

Al  

I want to go back a little bit to what you discussed in terms of the students, because I have very little faith that we can get the companies to say, hey, when a student logs on and says, "Do this assignment for me," that it's going to say no. Because the reality is, there's a workaround, right? And I shouldn't say "a student," because we can't assume that all students are going to be bad actors. There are always just a few. I think when teachers set boundaries, most students want to learn. I think Jesse Stommel's the one who said, you know, trust students: when you set boundaries, they want to learn. Not all of them want to take the easy route. So set some boundaries, because the workaround is quite easy, right? They can go into any AI and say, "Hey, I'm a self-learner. I'm going through this textbook and it's asking me this. Can you give me the answer?" So that's the workaround right there, right?

Anna  

Yeah, yeah.

Al  

But I think setting some boundaries helps. I've been seeing more and more examples of faculty incorporating AI as a teaching and learning tool. How have you done that in your classroom? And as you've done your speaking engagements, what have you learned from your colleagues about how they're incorporating it?

Anna

Well, I've really focused on AI for writing feedback within a human-centered writing process. Not substituting for the instructor or the peer feedback, but inviting students to engage with writing feedback, and to engage skeptically. So encouraging students to say what resonated and what didn't: What did it get wrong? What do you want from it that you didn't get yet? And encouraging them to prompt it and to push back. I think that's a good way to build those skeptical habits of mind with AI and also support the writing process, while students do their own writing and make their own choices. It helps them see that you can use AI in a lot of different ways. It's not just a replacement for thinking; it can stimulate thinking, and you can choose to use it that way. And I think that's the trend in terms of pedagogical applications: people are looking for ways that AI use might stimulate thinking and learning. I like to say, "AI for input and not AI for output." That's one general paradigm. There's a great collection on aipedagogy.org, where I'm an advisor; it's a really well-constructed, curated collection of activities. It includes things like: ask AI to explain its reasoning for how it solves a problem, and ask that in two different ways that are contradictory. Then the student looks at which way makes sense. Do either of them make sense? Using it in ways that push us to be skeptical of it is really valuable for students, especially as it becomes more sophisticated, more plausible. We can bring value by questioning it. If you want to prepare for a workplace where you're going to work with AI, you have to be suspicious of AI. You have to be ready to push back and not accept what it gives you.
So I really favor pedagogical applications that help students practice that and help them build some confidence, some boldness, in relation to AI.

Al  

I like that, because you're giving them directions on how to use it for learning, right? I was a re-entry community college student. When I was in high school, I worked 40 hours a week, so academics really wasn't my focus. It was 80s New York City; it was more about survival. So I went into the military, came to California, and started at a California community college. Fortunately, I had really fantastic teachers. The Veterans Center had a whiteboard with a list of faculty called "friendlies," faculty the veterans had written down because these people care. We didn't want easy, because we like a good challenge, but these were good teachers who cared. And I had one in particular who really took me under his wing and encouraged me, because I'm originally from New York, to apply to a couple of the Ivy Leagues there. I did, and I got in. So there I am at Cornell, which is super heavy on writing; it's really known as a writing place. They gave us this little book, Strunk and White's The Elements of Style, and they had this amazing writing center. I always went to the writing center before I turned in any paper, and I asked, "All right, tell me my argument. How's this transition?" And the reason I'm mentioning all this is because that's how I use AI for my writing. I have my first drafts, and then, just like I used the writing center, that's how I approach AI: to give me feedback, to challenge me. I think if we just give those boundaries, those directions, to students, they will be good consumers of AI, as opposed to a lot of what I've seen on LinkedIn. You mentioned LinkedIn. There are a lot of posts there that all sound the same, because what I'm seeing is people just saying, "Write this piece for me."
And then what you start to see is, you know, AI has a particular way of writing, and it all sounds pretty much the same, and they didn't take the time to go through that productive struggle, that thinking process, right? I know you're in the humanities and social sciences. Have you heard anything about how AI is being used by your colleagues over in the STEM areas, by any chance?

Anna

Even the AI feedback is really useful for writing across the curriculum. So, feedback on a lab report, or anytime, really; I think it's useful to assign writing for thinking, and many times STEM instructors don't feel like they have the resources to support that or grade it. But if you're inviting students to get feedback on it from AI, and also hopefully to go to the writing center, that supports using writing in those disciplines. And the reasoning example I mentioned applies to STEM. So: here's the answer to this problem; give me three different explanations for how to get there; which one is the right reasoning? You can use it in a Socratic way, in a tutoring style, and hopefully you build in ways to check the tutor, right? Because STEM classes are also about thinking. It's not distinct; it's about explaining your reasoning and understanding the code. So you can certainly use AI in those ways, just as you would for writing feedback. Peer and AI review grew out of UC; they've done some great research there and published it. And also with MyEssayFeedback, this not-for-profit app I mentioned, one of the trends we see is that students are saying, "This is really kind of refreshing and new. I want to use AI in this way. This is very different. I'm not using it to cheat, I'm not using it to replace my thinking, but it is helping me." So I think students, as you said, can really pick up on and latch onto that paradigm shift and see it as a stimulating thinking partner.

Al  

So unpack this PAIRR process (Peer and AI Review + Reflection) for us. Break it down.

Anna  

Okay. So the idea is: let's bring AI feedback in, for AI literacy and for writing support, so for equity in writing instruction, and let's not give up the human. What motivates us is often the human relationship: the instructor feedback, the peer feedback, the actual audience for the writing. So we start with some basic AI literacy readings and videos and a little bit of discussion, so students know what they're dealing with and what some of the ethical issues are. Then, for whatever writing we're assigning, we do peer reviews first, so they get peer feedback, and then they engage with AI feedback. We've done a lot of testing of our feedback instructions, so we have two prompts that we share freely as OER. Students get feedback that's tailored to their assignment and the rubric, and that reflects this whole ethos of nudging their own curiosity, pushing their own thinking, supporting the development of their own voice. They have to chat back at least twice, and they have to reflect on it. So there's a metacognitive piece where they're thinking: What did I get from the peer feedback? What did I get from the AI feedback? What am I actually going to use? What did I not like? And they're writing that down. We also like them to do a reflection when they turn in the final draft: looking back, what feedback did you actually use, and what was actually useful? But basically, it doesn't add that much to the instructional process. It's mostly one more process assignment where they're engaging with the AI feedback, and it's pretty easy for them to do logistically. But it's bringing in AI literacy and putting AI in this very specific place, which is not the AI hype place of "now it's the king of the classroom and it's everything and it's doing it for you."
No, it's something you can use within this process, which is about human conversation, and your thinking, and your choices.

Al

I love it. That's what our students need. Heck, that's what society needs, as everybody's trying to figure out AI and which tools to use. Every other day I find myself watching two or three YouTube videos of just anybody explaining anything, and it's hard because you have to sort through the ones that are just trying to say something, though some are actually quite good. And I have to tell you, it's overwhelming. For example, Claude for me is overwhelming in the good way, but also in the bad way, because now you can do so much with Claude, but do we even have the time? At the same time, you have to figure it out, because every day you can do something new with Claude, right? And then there's this whole Gemini with Gems. One that I'm really fascinated with, and I think has a lot of potential, especially for students with learning disabilities, is NotebookLM. I don't know if you've been on NotebookLM, but it's got this studio, and the things you can do with that studio to convert content into flashcards, quizzes, maps, documents that explain things a little more clearly. NotebookLM, for me, is the one right now that I'm really excited about. I actually put a book I wrote many years ago into it, along with a few Substack articles, and it generated this podcast talking about my content. I found it quite entertaining, but I also learned a lot from how they were discussing my work. It was fascinating. But yeah, I love the fact that you're one of those faculty members looking at this in such a way as to help students navigate it, because I do see some faculty colleagues who have shut it down and have all these detectors that I really don't know how good they are.
I do remember, Anna, when I was in fifth grade, I got this writing assignment and I worked so hard. I was like, you know what? I'm going to put my all into it. I spent the whole weekend on it, and I really worked hard. I wanted to feel proud, right? And the teacher used to post grades for major assignments on the board. It was a long time ago; we don't really do these things nowadays, I hope. Everybody could see everybody's grades, though it was numeric. And then I saw mine, and it had a C. What's this C? And he says, "Oh, you definitely copied." How demoralizing that was. And I have to tell you, after that one incident, I was like, screw it, I'm not trying hard. So I'm wondering: obviously, there are some students who use it in an appropriate way, but how do we know whether they did? And how do we approach that? How do you navigate that, Anna, as a longtime faculty member?

Anna  

I really like this idea of a "wicked problem."

Al  

A wicked problem? Did that term come from Boston or something?

Anna  

Oh, right, though you can't always say "wicked." It's actually the Australians who have so much great academic integrity and AI work, so much great research coming out of there, folks like Phil Dawson. They've talked about it as a wicked problem, where there's no perfect solution. There are better and worse things to do, there's a constant negotiation, and we constantly have to make compromises. So at least not expecting perfection in how we grapple with that question is a little bit of a release to me, because it is so challenging. And I am sort of trying to do both: redesign for intrinsic motivation, build relationships, and incorporate AI in certain ways, and also make it a little less tempting to cheat by making it a little more logistically difficult. It's just really hard. I did go back to using detection, and I wrote about it. I don't use it punitively. I have a conversation with the student. If they can explain the paper and they say they didn't use AI, I just trust them.

Al  

Okay.

Anna  

If they did use AI, I say, "Please rewrite. You haven't done the learning." And it's tricky, because Turnitin has more false positives now than it used to. So, honestly, I don't know. I have conversations with students, and I want to be on their side and believe them, and I go with that, but sometimes we just don't know. We can't necessarily know. So I'm building in more in-class writing. I'm bringing in more video and audio posts where students talk through their writing. I do a lot of process assignments and give a lot of choice about topics, so they're staying motivated. I'm trying to combine all these things, because I think, on the whole, given everything I'm doing, not very many people are turning in work that's not their own and then passing the class. I am talking with people and giving them other chances. And I do think the detection is a deterrent. I have had students who will admit that it was AI, but without the detection score, they wouldn't admit it. It's really tricky. And I feel like those students need that accountability moment. I want those students to learn; I don't want them to skate, to skip it. I want them to have the moment of "I can do this, and my voice does matter, and I can get something out of it." That's where I am: kind of the kitchen sink, all the approaches at once.

Al  

Yeah. See, if only my fifth grade teacher, instead of just assuming it as fact and writing that C for everybody to see, had said, "Hey, listen, this sounds really good. Let me have a conversation with you about this." A conversation where I could have defended it: yeah, I spent my entire weekend on it; I pored over the dictionary for better words. But we never had that conversation, right? So I think it's wonderful that you do that. And it is interesting that detection serves as a deterrent, like you said. It doesn't mean you're always taking it as fact; you understand there are a lot of false positives, but it does have that deterrent effect, right? You mentioned designing for intrinsic motivation earlier. As we start to wind down a little bit, can you unpack that? I'd love to hear about it.

Anna  

Sure. I mean, this is the process I've been engaged in, learning from my colleagues at City College of San Francisco for so many years: thinking about how we connect with students, how we make the classroom the place where they feel like they're owning their own learning, where they're inspired, where they see the point of it. And there are so many great strategies for that, right? In writing instruction, we have teaching process, supporting students in developing their voice without having to sound super academic, letting them choose their topics and explore things they're interested in, and making writing social. Things like social annotation are great, where they're not just doing the reading and taking a quiz but commenting in the margins and replying to each other's comments, things that make what we're doing more fun. Also, just being really explicit about what we're learning with each assignment: What's the purpose? What are we getting out of it? How does it relate to the purpose of the course and to things you might use later beyond the course? I think all of those things should be our first step, and our most powerful step, toward maximizing learning and reducing cheating. And we have a ton of resources; educators have been in this conversation for ages about how to do this. That's a strength we can draw on.

Al   

I work with so many colleges, and I'm really privileged to coach them. I often remind them that our institutions are not only places of learning for students; they should be for those serving students, too. Unfortunately, in higher ed, because of the way it's structured, time is an issue, and it's not built into the system for faculty to meet more regularly. A community of practice, if you will; not a department meeting that's just updates, but actually meeting weekly and grappling with teaching and learning and AI, learning from one another, and continually improving our craft that way. We're not really set up for that. There are pockets of it, but I wish there was more. Are you part of a community of practice? Is that something you're able to do at your campus? And if not, have you seen other places do that?

Anna  

Yeah, absolutely. We do have a community of practice right now where we're coaching each other on teaching projects we've designed for ourselves. We did an AI-themed one last year, where we combined AI workshops with our own projects and a faculty showcase at the end. Lance Eaton has been curating a collection of projects like that, community-of-practice kinds of AI professional development, and I think it's such a powerful model. We need to compensate that time; for faculty, that's key to it. And it's so energizing and delightful to have the time to have those conversations with other faculty and inspire each other and experiment. I think that's also huge in terms of what we bring back to the students. If we are learning and talking with each other and trying things, and not feeling like we have to have the final answer to deliver to the students, but that we're in process with this and have some support, then we can say to the students, "Here's what I'm thinking and trying right now, and here are the discussions I'm having with colleagues." And that opens it up for students to talk about what they're thinking and trying and wondering and worrying about with AI. So I think it's essential, and I've been encouraged by how many programs like that I've seen. I've been invited to give workshops for a number of programs, like in the SUNY system and at College of the Siskiyous and West Valley College, where they had something similar: faculty had a cohort, some compensation, and some autonomy to build their own project and goals, but they were also doing these workshops that guided them, and there was maybe a rubric around the project, the outcomes, and what they were going to share at the end.
I think there are a lot of great examples of that, and we should look to each other's programs and steal from each other.

Al  

Absolutely. So as we wrap up here, Anna: it's 2030. Not even four years from now, almost three and a half. Given how AI is changing almost every day, what does it look like to you then? Or can you not even think that far ahead because it's moving so fast? What are your thoughts?

Anna  

Well, I talked to an old friend who works at Anthropic, and he sincerely expects AGI and vast societal transformation within the next couple of years. I think that's pretty common among people who work in the field and are very serious about it, not just hyping it up. They're saying it's happening too fast, and not enough people realize how much better it's going to get and how transformative that will be. So honestly, I am scared about that. I'm concerned. I don't know exactly how far it will go. I hope it slows down. I hope it doesn't reach the level of superintelligence some are anticipating. But even what we have now is pretty powerful, and we will see improvements to it. We'll see systems, probably with robotics too, that can do most of the things humans can do, or at least simulate them. I'm not expecting systems to be conscious; I don't think there's any reason to think they would be, since we don't understand consciousness and we haven't designed for it. But we're going to have to do really serious thinking about guardrails and boundaries. How do we know when something is human and when it's not, so that we can still have human relationships, and still have humans making our own decisions and leading our own lives? We're going to have to do that in education too. We'll have to say, okay, for certain things we set the AI aside, because we need to know that this part is you. So I think we may start to value the human, and value that transparency, more in at least some areas of life and society, I hope. And we'll try to think about how we keep AI in its place within our own lives and the ways we create meaning, rather than letting it take over and run everything. I think that's the grand challenge.

Al  

So basically we're crossing our fingers we don't get the Terminator.

Anna  

Absolutely. Literally. Well, not literally, but yes.

Al  

By the way, I have to ask, because I'm hearing a noise in the background, like something eating. Do you have a pet, by any chance?

Anna  

What is that? My cat did just make an appearance.

Al  

See, I love animals, so when I heard that in the background, I was smiling, because I'm sure your cat is back there eating while you're doing your podcast, and we can hear it, which is perfectly okay. I think it's beautiful. How old is your cat?

Anna

She's eight.

Al  

Okay, she's eight. Do you have more than one, or just the one?

Anna  

One. And it's amazing how powerful that is. Even on Zoom, doing this whole thing, I'm the same: I want to see the animal, to connect with the animal in the moment. And that's true in the age of AI too. We still value the organic, what's live; that's what's interesting to us. We hold on to that, but then also figure out how AI can support it, how it can help us do more of what we care about.

Al  

Well, Anna, I've learned so much. I really appreciate you participating in the Student Success Podcast, and I'm going to continue to follow all of your good work. Do you have any last comments, tidbits, or anything else you'd like to leave us with?

Anna  

No, I just think it's a great, energizing moment for us as faculty to be in the uncertainty with students and to help them navigate it while being honest about what we know and what we don't. It takes away from that hierarchical sense that we have to have the answers and tell them what to do; instead, we as humans are all facing this great change in society together. In the CSU survey on AI, one of the big findings was that faculty and students were something like 97% aligned in their concern about the future of unregulated AI. We're all wondering about guardrails and regulation. Taking the moment with students to acknowledge that sense of uncertainty and common concern, but also common purpose, and the sense that both faculty and students could participate in shaping that future, is part of our job as educators to bring into any conversation about AI.

Al  

I love that: partnering with our students through the productive struggle. All right, Anna, thank you so much for participating in the Student Success Podcast, and to your cat for also participating a little bit. Thanks a bunch.

Anna  

Thank you for a great conversation. I really appreciate it.

Al  

My pleasure. Thank you.

 

Thank you for listening to the Student Success Podcast. You can subscribe to the show and newsletter via the Continuous Learning Institute link below, and of course on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts.
