In this episode of The Future of Learning Podcast, I sit down with Dr. Tiffany Snyder and Dr. Tasha Bleistein of Indiana Wesleyan University to explore what AI literacy looks like beyond the hype. Rather than focusing on urgency or fear, this conversation centers on faculty agency, institutional culture, and thoughtful change. Together, we unpack how AI literacy can be embedded into courses without overwhelming faculty, how integrity conversations can shift from enforcement to formation, and how institutions can cultivate ecosystems of support instead of one-time training events. From AI-enhanced course refresh institutes to discipline-specific innovation and human-centered policy design, this episode offers a grounded look at how higher education can respond to AI with clarity, care, and courage. If you're navigating AI in your own institution and wondering how to move from resistance to resilience, this conversation is for you. Listen on your favorite podcast channel below👇
Matt Larcin (00:00)
Welcome back to the Future of Learning podcast, where we explore how education is evolving and how we as educators and leaders can evolve with it in thoughtful and human-centered ways. I'm your host, Matt Larcin, and today's conversation sits right at the intersection of AI, faculty sensemaking, and the future of teaching and learning, not from a hype-driven lens, but from lived institutional practice. I'm thrilled to be joined by Dr. Tiffany Snyder and Dr. Tasha Bleistein.
Two leaders whose work I deeply admire for its clarity, care, and intentional leadership in a moment that can feel overwhelming for many educators. Dr. Snyder serves as the Director of Faculty Engagement and Associate Professor at Indiana Wesleyan University, where she leads professional development for faculty across hybrid and online programs on the national and global campus. Dr. Bleistein is the Director of the Future Learning Lab
at Indiana Wesleyan University, where she helps lead initiatives focused on educational innovation, AI literacy, and the ethical use of emerging technologies. In today's episode, we're going to explore how faculty make meaning in this moment, how institutions can support agency rather than anxiety, and how AI literacy can become something lived and embedded rather than imposed. So with that, Tiffany and Tasha, thank you so much for joining me and for being here today. I'm very excited to learn from both of you.
Tasha Bleistein (01:34)
It's great to meet you, Matt, and to be on the podcast with you.
Tiffany Snyder (01:38)
Yeah, thank you.
Matt Larcin (01:40)
Sounds good. So my first question is about framing the AI literacy work you're leading right now. When you talk about AI literacy, what do you mean, and what do you intentionally try not to reduce it to, especially when you introduce it to faculty members?
Tasha Bleistein (02:00)
I think the whole time we're gonna have a struggle of who goes first. Yeah.
Matt Larcin (02:00)
Who wants to go first?
Tiffany Snyder (02:02)
Yeah, we're going to be looking at each other. We do work together a lot, so I think we can do it.
AI literacy terminology is something that Tasha really brought to Indiana Wesleyan University, where we are. So I'll just comment that when ChatGPT 3.5 was released in November 2022, the main focus for that first year and a half to two years was on demystifying the technology. What is it? What isn't it? And trying to reduce fear and create opportunities for faculty to talk. And we did start looking at some AI frameworks that were emerging in the field. But when Tasha arrived at Indiana Wesleyan from her former institution, she really spearheaded us developing our own AI literacy framework. So I'll turn it to her to say what that actually means for us.
Tasha Bleistein (02:51)
Yeah, so I was coming out of being a faculty member directing an online program at another institution, and I was just sitting with this question of how we navigate this time. I looked at some of the existing frameworks and definitions for AI literacy, like the Perkins model. And what we decided is that AI literacy is not just understanding what AI is, but how do you evaluate it? How do you use it? How do you interact with it? And especially because we're a faith-based institution, we look at responsibility and ethics. What does it mean for us as people of faith to interact with AI in a responsible manner? So there are lots of different aspects to it, but that moral responsibility piece sits alongside the question of whether you actually use these tools and use them well.
Matt Larcin (03:42)
Yeah, definitely. So I know you two have been working together a lot. How has your understanding of AI evolved since 2022?
Tasha Bleistein (03:52)
Yeah, I think at first I thought, maybe this is just like digital literacy. We just put it in the back of a syllabus, and no one fully understands what that means, or even if they do, they don't really focus on it in their classes. And I think the reality is that this has shifted substantially what we do on a daily basis in the field of education. So it's not something at the back of a syllabus. It's at the forefront, and it's totally changed the way that we teach and the way that we conceptualize learning. So I think it's gone from, yes, we need to deal with this, to, okay, let's take this as an opportunity to change how we do higher ed. What do you think, Tiffany?
Tiffany Snyder (04:34)
The shift that I've seen from the faculty development role is that, although we started with demystifying and a lot of general talk about AI and digital literacy, like Tasha mentioned, we have now shifted naturally into more discipline-specific conversations and initiatives. At first you can demystify AI by talking to the different schools at your institution about AI literacy.
But at this point, there are so many rich applications within disciplines that we're now able to have AI literacy conversations within very specific schools, like nursing, for example, and say, hey, bring a nursing course to us and let's talk about what AI literacy looks like for you and for your students, making it much more specific and applicable.
Matt Larcin (05:23)
Yeah, definitely. To me, AI literacy is, like Tasha mentioned, similar to the digital literacy we had when the internet first came out, back in the late nineties and early two thousands, almost 30 years ago. But I think it's more serious and very different from that digital revolution, because of how we see AI
Tiffany Snyder (05:33)
Yes.
Matt Larcin (05:46)
versus digital literacy: digital literacy was one thing, and it didn't really evolve over the years, but AI has been constantly evolving over the past three years. As we've noticed, it's affecting all fields, not just the workforce but other fields too, from healthcare onward, in a way digital literacy never really did.
Matt Larcin (06:09)
Yeah, the workforce is being impacted more than ever, plus other fields. Even Hollywood is being disrupted; you can basically create an almost Hollywood-grade movie with AI, we're almost there. So here's the key question: how do you get faculty to see AI as something connected to disciplinary thinking rather than as a separate or external skill?
Tasha Bleistein (06:38)
Tiffany, you wanna start off for this one?
Tiffany Snyder (06:41)
For this one, I took notes. You just commented on how AI literacy is different from digital literacy. And at the same time, there are some core principles that, when we're working with a discipline, we can go back to, principles that are bigger or broader than AI. So the three questions I have here are: where does judgment matter in a discipline? Where does synthesis matter? And where does ethical reasoning matter?

And if they're reflecting on those questions already, then you can ask, well, how does AI intersect with that? And it does intersect. So we try to find some common ground and sense of purpose with the faculty in their discipline: you're already considering these things, now let's talk about how AI intersects with them, and invite them into the conversation. Oftentimes our faculty are the experts in their field.
Even students, we want to invite students in to reflect on those same questions within the discipline. How do you see AI intersecting with these questions in your field? We can begin to shape those conversations together.
Matt Larcin (07:48)
You want to add anything, Tasha?
Tasha Bleistein (07:50)
I think part of our effort is: let's build faculty's AI literacy, and then they will build their discipline-specific AI literacy plan, instead of us trying to guide that so much.
Because I know my field, the field I've come out of, and I can look at what AI literacy means and tell you what I think we should do in that field. But I can't tell you about nursing or healthcare administration or some of these other fields; I can only support faculty in them. So, for example, we have some hackathons that we're starting to build AI literacy. And when we start those sessions, we say, okay, what are graduates facing upon graduation? What is happening in the workforce in your discipline? And let's look at what we think is going to happen five years from now. So how can we start to look at AI literacy in the future for them? And then what do we need to do in our coursework now to get them ready for that?
Matt Larcin (08:46)
One thing that consistently comes through in your work is how intentional you are about pacing and not overwhelming the faculty, which is critical in a moment that already feels overwhelming and disruptive. It's great that you're attending to that. That leads me to my second question.
Tiffany Snyder (09:00)
Yes.
Matt Larcin (09:07)
It's more about faculty agency and change management. When institutions feel pressure to respond quickly to AI, especially right now, when many of them feel behind, how do you create space for faculty agency, reflection, professional development, and judgment instead of urgency-driven compliance? If you can touch a little bit on that.
Tasha Bleistein (09:34)
I think Tiffany leads up faculty engagement and does a really excellent job at it. One of the things we've collaborated on is multiple pathways and meeting people where they are, understanding that faculty are across the spectrum and have needs in different places. So how can we address those needs? I think one of our favorite things we've done is the AI-Enhanced Course Refresh Institute; we're just in the middle of our third 12-week session. So we've had two groups go through and we have a third group going through now. I'll let Tiffany talk about what that is.
Tiffany Snyder (10:12)
Sure. For that institute, we tried to model it on something our faculty are used to seeing: professional learning communities. But historically, with professional learning communities or communities of practice at our institution, there's scholarship at the end. Everyone works together on a publication, or everyone works together on a training module. This is a little different, so we have a new name for it; like I said, it's an institute. We really want applied change at the end. So we say: bring a course, no matter what discipline you're in, a course that's been on the shelf for a while and could use a refresh. And we're going to challenge you, through this 12-week process, with sessions every other week: we'll teach you a little bit about AI, and then we'll give you live working time with our learning technologists to implement changes in that course, aiming for four to five redesigned discussions or activities that intentionally bring in AI literacy. And they've got plenty of time. Like Tasha said, it's 12 weeks, so they're only meeting five or six times across the 12 weeks, and in between they have access to someone from our team for mentoring and coaching around AI literacy and around those changes in their course. So that's been a really exciting effort and a way for them to tackle what they're comfortable with. We have faculty who graduate from the Institute with a course that feels really new and some exciting AI literacy activities, a more sweeping change, but we also have faculty who make just slight changes, and we celebrate both. We just want to see a little bit of growth in any case.
Tasha Bleistein (11:54)
One thing that's been great too is that after the first institute, we had those faculty share with all the other faculty. And I think that really builds up some energy around it.
Matt Larcin (12:07)
The institute sounds like a great approach, because otherwise it can be overwhelming and self-paced, and sometimes faculty need hand-holding and prefer that, especially when working with professionals who are already doing a lot of work on it, like the learning technologists you mentioned, and maybe instructional designers who are already involved. So that's great. But what kinds of tensions do you notice, if any, between institutional expectations and faculty readiness? Did you receive any pushback?
Tiffany Snyder (12:42)
I think those urgencies are different. I'm sure there is some tension, but mostly they're just different. The institutional urgency for us at Indiana Wesleyan's national and global campus, because we were one of the first programs to go online, is that we want to be innovative. So there is an urgency to have an enterprise AI solution, to make big, big changes, and to be out front.
And our faculty have an urgency too, but to be honest, at the beginning that urgency was mostly around students misusing AI: I need you to tell me how to guide them in this. That's their urgency. So they hear the institutional push to be innovative, you know, integrate AI, but they're thinking, what about my student situation from last night, the one I was on the phone about for an hour? So there are totally different urgencies, and we have to address both. That's why anytime we offer something like an institute, there is dedicated time to address what the faculty need, to say, we're going to help you develop clear AI expectations for your course and rework some assignments with AI literacy. We have to give them what they need today as we're pushing them toward creating tomorrow's assignments.
Matt Larcin (13:57)
I'll move to the next one. This basically emphasizes agency, and I think it really shows up in how we talk about curriculum: not adding more work for faculty, but working a little differently with what is already there. That makes the whole difference. You also mentioned embedding AI literacy into courses faculty are already teaching, rather than asking for full-scale redesigns of their courses. That's good, but how do you help faculty identify natural entry points in their work?
Tasha Bleistein (14:34)
So we have a couple of different ways that we do this. One thing we did is look at the data. We pulled recent data and then created bots that produce heat maps telling faculty where their areas of greatest need are. And recently, we've added to our data.
At first, we just used end-of-course surveys, but then we also added LMS data that shows where there are problematic assignments. So in the first week, we're data-informed, and we say, okay, here's what we know about your course. Does this align with what you're feeling and seeing as you go through the course and teach it? That's one starting point. And often they already have in mind some really problematic areas of the course that they want to work on. So we created bots to help them. We give examples from all the different AI literacy levels on our scale, and they're inspiring examples. I think when faculty see them, they're like, wow, that looks like a really interesting assignment, I want something like that in my course. And we try to pick examples from different disciplines so they can get a sense of what that could look like.
They can then plug their existing assignments into the bots, and based on current ideas about what AI literacy looks like in assignments, the bot will create an AI literacy assignment out of what already exists. Of course, they need to iterate and change it. But that helps them go from that sense of, I don't even know where to begin, to, okay, here's a creative way to change what's a problem in the course into something that may now be its strongest piece.
Matt Larcin (16:19)
What LLM did you use for creating the bots?
Tasha Bleistein (16:23)
So we used two. Tiffany, you can answer.
Tiffany Snyder (16:26)
We are using OpenAI's ChatGPT, but we're also using BoodleBox. BoodleBox is a toolkit, so it has access to several different LLMs, and we do have some seat licenses for our full-time faculty. Just to be able to compare multiple options, we're creating the bots in both places. And we've been asking faculty for feedback about their experiences using
Matt Larcin (16:45)
Yeah.
Tiffany Snyder (16:51)
the mirrored bots. Some prefer the ChatGPT ones and some prefer BoodleBox, so we just keep offering both, but... Claude, yes.
Tasha Bleistein (16:57)
Yeah. And we use Claude to run the BoodleBox bots, so they're getting Claude and they're getting ChatGPT, but we could recreate them and run them with a different LLM too.
Matt Larcin (17:09)
So which one did you find the most useful, from your own experience? Or did any faculty give feedback on it?
Tasha Bleistein (17:17)
I find Claude a bit more creative, but our sciences faculty prefer ChatGPT, though preferences are always changing. We created them in Gems as well. Well, I did, for some other trainings, some similar bots, and I found all of them helpful because they give faculty a starting point.
Matt Larcin (17:34)
Claude Code is really good. I've been working with it recently for coding, and I think each LLM is suited to different purposes. For example, I like Google Gemini because of Nano Banana; I think that's the most advanced for images, it can compete with Midjourney.
ChatGPT is more for daily tasks, emails, and things like that. And Claude is more for specific tasks; if you do coding, Claude Code really does a pretty good job. If you know a little HTML, CSS, and JavaScript, you can tweak it, and it gives you pretty accurate output for whatever you ask. Basically it can take any text you give it and turn it into working code, and you can create a website or HTML templates and things like that, so it's really good. So, are there any moments when faculty realized they are already doing a lot more with this than they thought? Or did they give any resistance?
Tiffany Snyder (18:39)
We've had faculty join the Institute, not many, but a few, saying that they joined because they wanted to know what we're saying about AI. Their resistance was so high that they were like, I want to know what these conversations look like, I want to know what you're telling other faculty. They get in there, and by the end, they have some of the richest examples of AI literacy assignments in their courses. And we haven't mentioned this yet, but faculty modeling for each other has been huge, in addition to, you know, we said we pay them. Hearing from each other is very motivating. So anytime at the end of the institute that we can provide opportunities for faculty to share what they learned, even if it was a mindset shift in some cases, it's a huge deal. Faculty love to hear from each other. We've tried presentations where each person who went through the Institute presents on what they did in their course. But probably my favorite thing we've tried is a panel: a whole panel of people who have been through the Institute, with some set questions. That panel represented quite a variety of comfort levels coming into the Institute, but a shared love for the experience coming out of it. And then others get to hear that and be inspired: if so-and-so can do this, I can do this.
Matt Larcin (20:01)
When you use a phrase like "AI-resilient assignments," what kind of mindset shift are you hoping faculty will make? And as the directors leading this at your institution, what are your expectations of them?
Tasha Bleistein (20:18)
I think a lot of the joy has been taken out of teaching in recent years for some faculty, with the COVID disruption and then the AI disruption. Maybe later in this podcast we'll talk about academic integrity, but I think those things have taken away some joy. And the mindset shift we're hoping faculty will have is: this is a fun assignment, I can't wait to see what my students do. What will they create, and how will AI scaffold them to get to a place where their thinking is way beyond what it would have been with just a standard discussion?
I think there are a lot of faculty who've just sat back and used the same assignments for years, and those fail miserably with AI. If it's "read an article and write a five-page paper," well, you're not going to get much that's interesting. It's so disheartening, I think, because it's the same thing over and over again. And now it's not even the same thing over and over; it wasn't even written by your students. If we can totally transform assignments, that's the mindset shift we want them to make. Yeah.
Matt Larcin (21:40)
Can this expected mindset shift be taught, or should it come naturally out of them?
Tasha Bleistein (21:48)
What do you think, Tiffany?
Tiffany Snyder (21:50)
I do think it can be cultivated, and that's where that modeling piece comes in.

I can't even take credit for it. We're just creating spaces for it to happen. But the biggest signs of faculty mindset shift that I've seen have happened because faculty heard from other faculty. So I do think it can be cultivated. It has to do with creating very intentional spaces where you've got some of your lead faculty sharing what they've tried, offering testimonials and examples, and then others hear that, and it just becomes a culture where everyone is learning and growing together.
Matt Larcin (22:24)
Right, yeah, that makes sense. But then how do you help faculty move away from trying to outsmart AI and toward designing much more meaningful learning and teaching in their classes, especially in online education? It's really hard right now, because all these browsers can basically do all the assignments in the learning management systems.
Tasha Bleistein (22:48)
Yeah, one thing we try to focus on is integrity formation, not surveillance. If we can invite students into authentic learning experiences that engage deeply with who they are, articulate why we're here and what the purpose of our courses is, and not overwhelm them with busywork and assignments that don't seem to connect to anything important, I think that's the foundation for those conversations and for getting the kind of work we want out of students. One thing we're really pushing is for faculty to have those conversations: here's the AI guidance, here's why we have it, here's what it means, here's what you're going to learn in this course. If every course starts with that, and every time a main assignment comes up you revisit that conversation and say, hey, this is what we want you to do with this assignment, I think that's foundational to moving away from the AI-resistant mindset of trying to trick them.
Tiffany Snyder (23:51)
In our very first institute, I remember we had the data bot. We give them a bot each time they meet for a synchronous session, so the first week they get a data bot. But it made me laugh, because Tasha developed an AI-resistant-assignment bot. That's not what it's called, but basically you can copy and paste your assignment into it, and it helps generate ideas for delivering that assignment and meeting those learning objectives in a way that is not so easy for AI to accomplish.
But that same week she also developed a bot that was more about dreaming, more about aspiring to an AI literacy assignment. And she basically told the participants: look, I know you want this one, this AI-resistance bot over here. I'm going to hold it behind my back. You have to work with this one first, the aspirational bot, the AI literacy bot, and then you can have the other one too. She was basically making sure they knew: if we give you the AI-resistance bot, you're not going to finish this Institute having only put all of your assignments in there trying to make them AI-resistant. That's not enough. We're going to dream bigger and redesign some things, and then you can have that too. It just makes me laugh. Did I articulate that okay, Tasha? You know, when I'm storytelling, I kind of add in a little bit. Okay. Yes.
Tasha Bleistein (25:05)
No, I like it. We had one faculty member pitting them against each other: this bot says this and that bot says that. So I think we're just trying to scaffold the experience for the faculty, make it fun, and help them get that joy back. And I think by the end, they're there.
Tiffany Snyder (25:13)
Right.
Matt Larcin (25:25)
I know there's been so much disruption in the past five years, starting with COVID and now AI two or three years later. Now it's much more pressing; it's not going to be optional anymore, we have to tackle it, especially those assignments. I think these conversations naturally lead into more questions about, as Tasha mentioned, academic integrity, trust, and expectations. So that's my next question. You've been deeply involved in university-wide trainings around academic integrity and, of course, AI use now. How do you refine or reframe integrity conversations in an academic environment so they feel like teaching opportunities rather than enforcement mechanisms?
Tasha Bleistein (26:08)
Tiffany's area has a mandatory training, and they've tried a new just-in-time approach to it that has rolled out just in the last two months. It's really, I think, a great approach. As part of that, one of the things I partnered with Tiffany on, and she can talk more about the whole training, was AI guidance in every course: clear AI guidance so that students aren't confused about what is and isn't allowed. Our affiliate or adjunct faculty and our full-time faculty should all complete this training and then add that statement. And I think even just that matters, because we see the research: students want to know very clearly what is allowed and what is not. Some academic integrity issues come from a lack of clarity on both sides, so this is forcing both sides to be clear. Then, if we do have those conversations around academic integrity, at least we started from a place of mutual understanding. I'll let Tiffany add more to that.
Tiffany Snyder (27:16)
Yeah. From day one, and I consider day one to be November 2022, when ChatGPT 3.5 was released, there have been people around the institution asking for a policy statement. We need a policy statement! As if the policy statement is going to come in and just save the day for everybody. But instead we really focused for the first couple of years on guiding principles, because to be honest, there's a lot to figure out before you go establishing a policy, which is harder to change.
So we had guiding principles that we used, and then in recent months we decided we were ready for the policy conversation and action to happen. But rather than doing that at the school level, for our entire institution, I mean, as complex as it seems, it really needs to be at the course level, because we have to go to the learning outcomes and objectives. There are certain instances where students need to learn foundational skills, and we might opt for what I would say is the lowest level, if we're scaffolding, level one AI literacy: we don't want you using AI here, because we need you to learn these foundational skills, and this is why AI could actually inhibit your learning here. So being very transparent with students about how AI either supports or could detract from their learning, giving them that why.
In other cases, though, in another course in the same program, we might have an assignment at level three or four on the AI literacy scale, saying: all right students, here's how you are welcome to use AI, because this is what we want you to learn, and we actually think AI can help you learn it. So we are really emphasizing the why and making it a course-level effort to clarify expectations. But that is hard work; doing it at the course level is really hard work, which is why this training module is just-in-time. Within a month or two of being assigned a course at Indiana Wesleyan, faculty receive a module that says: hey, you're about to teach a course. If your course doesn't already have AI guidance language, which we hope it does, here's how to be prepared to provide it. You have to offer this. You've got to clarify expectations for students right off the bat.
Matt Larcin (29:31)
One question on that: do you take a one-size-fits-all approach for courses that don't have AI in their syllabus? Or is it tailored to each academic department or subject?
Tasha Bleistein (29:47)
We give them the scale; it's a five-level scale. And we ask them to reach out to their academic leadership if it's not already in the courses. So we're putting pressure on our academic leaders to make this a standard, and it's worked. Today I was meeting with a faculty lead, and she said, I guess I need to add these to all of our courses, because my adjunct faculty keep asking me what level each course is. That was part of our intention: every course has an assigned level, and there's guidance for all faculty.
Matt Larcin (30:17)
For every course, whether it's undergrad or grad, or a career or certificate course? So you have different guidance for each level of those courses. Okay, yeah, that makes sense. What about academic disciplines? For example, biology compared to business: do you make any distinction
Tiffany Snyder (30:28)
Yes.
Tasha Bleistein (30:28)
And yeah.
Matt Larcin (30:40)
while you're providing these types of resources?
Tasha Bleistein (30:43)
So we offer the AI literacy scale that we personalized to our institution, and then there's AI guidance at the different levels that can go at the start of a course. We're also encouraging them to include it in announcements and other places, like live sessions with their students where they talk about it. But we introduced it as a framework that they can use as is, or personalize to their class. If the discipline is different, we just know that a lot of times you need at least a starting place of common language. We're not trying to police that you must use exactly what we wrote. So if biology wants something different than business, that makes sense.
Tiffany Snyder (31:24)
But they might have the same. So they might choose to alter the language, but they might say, you know, this course, based on where it sits in our program, we're going to invite students to use AI this much, and this is where we want it to stop. And that could be the same in biology as it is in business, because it's written in a way that's not really specific to the assignment. It's more about whether students can use AI to ideate, versus as a collaborative partner where they're intentionally going out and using AI to collaborate on the assignment. It defines what that AI use looks like, and it can be cross-disciplinary, which is exciting.
Matt Larcin (32:07)
Yeah, so you mandate that they put the AI language in their syllabus. And then the course level is the tricky part, I guess, but you still kind of mandate that as well. So how do you support faculty in crafting these AI statements so they make sense and are clear, not scary, for the students? Do you give any one-on-one support, or do you just put the resources out there and say, hey, this is what you can use? Or are you open to meeting with them on a one-on-one basis if they need help? How do you support that?
Tasha Bleistein (32:49)
I think the Institute is one way. We've had good responses to the number of faculty, especially full-time faculty, who've been through these institutes. We are offering an AI guidance hackathon where they can come in and work across a number of courses together, or across a whole program, adding AI guidance in a collaborative online session, so that they can support each other in that. But I also want to point one thing out: our guidance includes a lot of AI literacy embedded within it. So it's things like, we want you to tell us how you used AI and then reflect on how well the AI did. So when faculty see that, it's not just about whether you can or can't use it. It's about how do we build the skills for the future. And that changes at level five. There, students don't have to tell us everything they did. They don't have to share a prompt. This is an innovation level that we're hoping they'll get to. But it's building that academic integrity, that resilience, that sense of who they want to be as human beings, as students and professionals. So that ethical component is throughout. I think that's the only answer there is to those browsers we're talking about: how do we build the students up, how do we help them see who they want to be, and how those everyday decisions make them the kind of person they want to graduate being.
Matt Larcin (34:20)
I will move to faculty development as sense-making. Through the institute initiatives you've been holding since maybe last year or so, and the newsletters and weekly prompts for faculty you're currently leading, those are great and inspiring. You have created multiple spaces for faculty to think together, with peer reviews and all that, and perhaps they are learning together. How do you intentionally design faculty development, or these programs, as collective sense-making rather than skill transmission?
Tiffany Snyder (34:57)
One of the things that we have been charged with in National and Global, and I'm not even going to take responsibility, because we have wonderful leadership in place, supervisors, and we both work in the Office of Academic Innovation. We have a vice president for academic affairs at IWU National and Global who's very supportive. And one of the things that he challenged me with years ago was that faculty development is not an event, it is an ecosystem.
And I want you to always be looking two to three years out, thinking about culture and about change that takes place over time, not just people showing up for an event and filling out their satisfaction survey, or going to the event, having their skills assessed right after, and calling it done. And that can still be challenging even now. Because I remember one of our early AI events; it was exciting, it went better than I expected. A few months later, when we had our next gathering and we were going to talk about AI again, someone commented, oh, I thought we already did AI. And I thought, what do you mean we already did AI, as if it's a one-and-done kind of thing? And I laughed. It's like, no, the ecosystem. So when we're designing things, we just try to think beyond events. It's pathways and it's culture. That's why we do the weekly communications. Even there, sometimes there might be a critique like, what's your ROI on those weekly communications with an AI prompt of the week? And it's like, there doesn't need to be an ROI on that, I understand. Some of that you do intentionally to keep the dialogue alive and to show where the university is going and what you value. So as long as we continue to have that thinking, ecosystem, not event, that's where the sense-making comes into play, rather than a skills checklist.
Matt Larcin (36:48)
How do you help learning continue after a workshop or training? By constantly communicating, keeping the conversation going with the newsletters or the prompt of the week. I think those are innovative approaches, and I'm learning from you at the same time, as we share what we have been doing. I also value the impact of social media, especially LinkedIn, even though maybe not all faculty use social media because of time constraints. Sharing on your department's page and posting the links there can of course reach a broader audience beyond your institution.
Matt Larcin (37:28)
We do monthly faculty newsletters too, you know, highlighting what's going on in the distance education department, what's coming up, mandates. And I'm going to squeeze another question in here: how do you see the upcoming nationwide accessibility mandates that are going to impact basically all courses, especially online ones? Do you see AI enhancing or helping to meet the mandates, or do you think you have a lot more work to do?
Tasha Bleistein (38:04)
That is a big question. Tiffany and I are both part of the AAC&U Institute on Pedagogy and AI that's running right now. I think there are 191 different groups going through it. And this conversation about accessibility and how we're navigating it comes up a lot, so it's really great to be part of a larger network like that. We're workshopping solutions around things like audio descriptive captions, who you're using and how you're using them, or how we get the documents in all of our courses up to date.
We have a little bit more time since we're not a public institution, so we have some more time to be fully compliant, but this is a long and important process. So yes, I think AI can help us. I think there's just a lot of confusion out there among many institutions about how we'll fully become compliant. But the closer we get, the more we can serve students in a way that's meaningful to them. We're on board, and we're trying to work with the partners that are out there; they're offering us AI-infused bots, and we're sorting through and deciding which ones to use. So yes, AI is a big part of it, but we're also in the mix with everyone else trying to figure it out.
Tiffany Snyder (39:17)
It helps to have the group that we're a part of, the AAC&U and the broader community. But also, one thing we could have mentioned at the beginning about what we do at the institution: I'm in Faculty Engagement and Tasha's with the Future Learning Lab, and we have another office that we work very closely with, our Learning Experience Design team. Many of them are passionate about this subject and follow it closely. So, thankfully, I just think the more people you have in the institution who can come together collectively to navigate it, the better. We don't have all the answers yet, but I'm thankful that there are teams of people we'll go through it with.
Matt Larcin (39:58)
Do you have any kind of student disability center at your institution that works with the learning experience design team, or do they handle all of that accessibility work?
Tiffany Snyder (40:11)
We do on our residential campus. Tasha and I, being part of National and Global, are part of our primarily online campus; there are some on-site offerings, but it's primarily online. The full-time folks we have in that area are on the residential campus, so our learning experience design team does take on quite a bit in terms of translating that to our online courses.
Matt Larcin (40:33)
You have also mentioned that you work with faculty across a wide range of disciplines. How do you translate the conversations about AI so that they remain pedagogically meaningful, whether someone teaches, like you said, nursing or theology, since you are a faith-based institution, business, or even the arts?
Tasha Bleistein (40:55)
I think because we are offering so many options in this ecosystem that Tiffany mentioned, all of our options empower faculty. Historically at our institution, you might think that faculty always have a lot of autonomy, but we work in a master course system because we're online. So if they're not the SME, the subject matter expert, maybe they haven't had that much input into their courses, and they're teaching courses that they didn't design.
And one of the things that we've changed across our office, not just the two areas that Tiffany and I are in, is giving faculty a bit more autonomy in that. So we've said in these institutes, you have the editing power. We will help you navigate the LMS if you don't know how to do that at the building site, and we can create really great things that partner with your ideas. But we're giving more power to faculty and saying we want you to take the lead, especially in this area of AI. That really helps because we're honoring their expertise, which standard faculty on a residential campus, which is what I came from, already have; yes, you're the king or queen of your own domain, but that's not what our faculty have had. So in this ecosystem, this next week for example, we're offering a webinar where we look at how they can add games, simulations, studios, or other AI-enhanced creative activities. We'll go through what those are and pedagogically why you would add them. And then we created bots that will transform existing assignments into a game or a simulation, and they get to try those out. And then if they want to add that to the...
Matt Larcin (42:45)
Yeah, that's awesome. I love the games part.
Tasha Bleistein (42:48)
It's great. So H5P, we just trained them on that in our Institute two days ago, where we were saying, hey, here's H5P, if you want to create your own activities with it; that's part of the Creator Plus suite. And we have Lumi, all the Lumi tools, which is Brightspace's AI-enhanced suite. So we're getting the feedback and tutor features and all of them. So, yeah, we're...
We're pretty committed to innovation.
Matt Larcin (43:13)
Too many tools can get overwhelming, because they will require training, right? So it's about keeping the balance. Okay, one thing at a time. But I love what you're doing with the master courses.
Matt Larcin (43:26)
To what extent do you adopt a cross-disciplinary framework, and where do you intentionally tailor your approach to the epistemological or pedagogical norms of specific disciplines at your institution?
Tasha Bleistein (43:40)
I think, so one of our goals as a faith-based institution is faith integration. And so that can provide sort of a common grounding across all of our courses and disciplines where we look at values that are important to us like stewardship and integrity. So that I think helps with some of these conversations.
Matt Larcin (44:01)
Great, thank you for that. So my next one will be about infrastructure and culture at your institution. I know you have helped develop a lot of training pathways, shared language, and data-informed systems. I love backing the trainings with data; then we have some proof to share with them. So how do you think about balancing institutional infrastructure with more trust-based, human-centered faculty support?
Tiffany Snyder (44:28)
Yeah.
Matt Larcin (44:43)
Who wants to take this one first? Maybe Tiffany. Tiffany, you've got this one.
Tiffany Snyder (44:48)
There you go. I'm fine with this one. I've got this one, I think. With infrastructure, I mean, we talked a little bit about how our platforms do matter. They're not always fun to talk about, but we have our LMS. Tasha and I haven't had to do a lot with that because, again, we have a leader who's been advocating for the full package, the full Brightspace package. And that matters; it helps our efforts significantly. So within the past year we got Creator Plus, and just recently we have the Lumi tools. So we are trying to get infrastructure in place that allows us to continue to be innovative. We are researching enterprise solutions with AI. Our full-time faculty have access to BoodleBox, and leadership are communicating regularly about our values, our desire to be innovative as an institution and to move forward together. And so that helps. But I think the work that we're trying to do in the Office of Academic Innovation is to show faculty where their agency comes into play. So these things are happening around you; the last thing we want is for you to be overwhelmed. We just want you to show up with one course and let's see what you can do with that. Really trying to say, look, all of this is there, it's exciting, but we just want to talk to you. So bring us a course and let's see what you can do and feel comfortable with, and watch your confidence grow over the next several weeks, or over the one-hour workshop, whatever it is that we have in front of us. And that's the culture piece, I think. The infrastructure is the resources we have, but the culture is happening at this one-on-one level, within the faculty ecosystem we're working in.
Matt Larcin (46:28)
That's great, thank you for that. That was a great answer. So we are at the top of the hour now, and I will wrap up with the last question. Looking forward, as you think about the future of learning in this AI-saturated space, especially right now, how do you hope this time will reshape higher education in ways that deepen human connection and flourishing, and faculty flourishing, rather than diminishing them?
Where do you see it? Let's look at the future.
Tasha Bleistein (47:03)
That's a great question. I think maybe we can both answer this one. I think the human connection and what it means to be human, we hear that a lot, that there's just such a need in this era of AI. We're going to continue to transform the way that education happens, but how do we help people to connect?
Matt Larcin (47:06)
Yeah, what about you, please?
Tasha Bleistein (47:24)
Connect with knowledge, connect with people, their classmates, their professors, in ways that are meaningful and where they feel like the learning is significant. I mean, those are the big questions that we have to ask in higher education, because I think in a few years, what a classroom looks like, what education looks like, will be so different. But what are the values? What are the things that we can't leave behind?
And I think those are what I hope our office is looking at. You know, we know the changes are coming, even bigger changes. But what can we do in this intermediate time? And how can we be preparing for the future without losing sight of what has historically been best, and how we can translate that into this new age, this new era?
Tiffany Snyder (48:12)
It could be tempting, as the AI tools become more robust and infiltrate higher ed, for faculty to just feel defeated or depleted. But instead, especially if some of the affordances of those technologies make their grading easier, or make it easier to keep tabs on students' engagement, then hopefully they take the time they're saving, the time the AI is assisting them with, and put it into that human investment. People are craving connection so much in our society. The last thing we need is to just say, well, this AI technology is happening to me, but at least it's making this thing easier. No, turn around, take that time you're saving, have the agency, and lean into the student relationships. Keep that connection alive. I don't know exactly what that looks like, but that's what I'm going to be cheering for: the connections we can cultivate with students, the imperfect and beautiful connections that we have with each other. I think we all need them.
Matt Larcin (49:25)
I know, we all need each other to get through this phase, I guess. It's only going to grow; it's technology, so it's always going to evolve anyway. And I see it the same way. I think we all as educators should also focus on the soft skills that are becoming important.
Tasha Bleistein (49:42)
I think working with people like Tiffany and our faculty who find joy in serving others well gives me a lot of hope for the future.
Tiffany Snyder (49:53)
Yeah, the faculty that we talk to, they care about formation, and it's inspiring. No matter what's happening around them and how higher ed has changed over the course of their careers, they really care deeply about formation, and it reminds me why I'm doing what I'm doing.
Matt Larcin (50:12)
That's good. It sounds like you have a great culture with, you know, a collaborative academy and faculty. I think that's the most important thing. I agree with that.
Tiffany Snyder (50:23)
And Matt, you could have just generated this podcast with AI. The fact that you're seeking us out, you're contributing to our hope for the future and human connection by saying, look, I'm going to take two imperfect humans, bring them on the show, and talk about AI. So thanks, yes, thanks for choosing us and not choosing avatars, you know.
Tasha Bleistein (50:29)
Thank not that we call them. They would have
Matt Larcin (50:40)
Of course. I know.
Matt Larcin (50:44)
Definitely. No, I still believe that. We are still educators, and our jobs have not been replaced by AI. As long as they're not, we can still run this type of podcast. I think that's what's most important.
Tiffany Snyder (50:52)
There you go.
Matt Larcin (50:58)
This is what other educators need; sharing is caring, and we are in education too. So I think it's very important to keep these conversations going. Thank you both for a generous and grounded conversation.
And I think what really stays with me is how consistently you return to meaning, trust, and human judgment in your work, even as these tech tools are changing the landscape of education. And the shift will continue in the long term, I think. This conversation also reminded me that the future of learning is
not only going to be about keeping up with AI. It's also about helping faculty, whom we mainly serve, and students make sense of their work, their data, their values, and their responsibilities in this changing landscape. From faculty agency and collective sense-making to AI-resilient assessment and humane integrity practices, I think your work shows everybody that thoughtful change doesn't have to be rushed to be relevant. And for those listening to this episode, I hope they will get some ideas from you and from this conversation: maybe slow down, ask questions of each other, and embed AI literacy, which is important and not optional anymore, the way you're already doing so well. I think this can be an opportunity for deeper learning rather than disruption.
Thanks so much again to both of you for the interview.
Tasha Bleistein (52:52)
Thanks for letting us come. We really enjoyed talking with you.
Tiffany Snyder (52:53)
Thank you.
Yeah, thank you so much.
Matt Larcin (52:56)
Thank you.