Today on the podcast, we are joined by Dr. Linda Berberich, Founder and Chief Learning Architect at Linda B. Learning, a consulting company based in the Greater Seattle Area. Dr. Berberich brings over 30 years of experience in designing learning experiences and technologies. Her work focuses on helping organizations and leaders create innovative learning solutions with a strong emphasis on both effective design and the use of learning technology. We are looking forward to hearing her insights on how learning design is evolving and how we can create better learning experiences for today's learners.
Hi Linda, welcome to The Future of Learning podcast. I am very happy to have you here, and thank you for joining me.
Thanks for inviting me.
Of course. So I will start with the first question. You have been immersed in learning sciences and behavioral design for decades and have witnessed the shifts in how we design learning experiences.
So when you think about how we used to approach teaching and learning, especially before the AI era, what do you feel we have gained? And what might we have lost in this shift toward these intelligent systems?
I have been very blessed in that I have seen the advent and growth of the Internet from the beginning, when it first became widely available. So I think Internet access, and specifically user-generated content and the data that is collected through social media, all of those things have shifted our ability to scale learning. That's one of the things that I think has really been prevalent and noteworthy in terms of learning science, instructional design, learning experiences, and what's possible in terms of scale.
But the downside of that, of course, is that we become dependent on the technology. So while technology can be used for good, the things that make us human are slowly being stripped away from us in the shift to virtual experiences as opposed to face-to-face, in-person ones. Because even where we are with virtual reality and digital calling, in the era that we live in now, we're still really limited to our voice and the visual.
So, what we hear and what we look like, and there's so much more to what makes us human than that. And when you're not in person, you miss out on those things. So, while we can scale our experiences, what we're losing is the human touch.
I think the human touch is becoming more prominent and much more important, especially with these intelligent systems, because they cannot really become human. So biological humans are still going to be important and even more valuable in the future. I have one follow-up question on that.
How can we retain those values in an AI-enhanced learning environment, in your opinion?
I think it's really about looking at what the purpose of the AI is. And AI is not generally a term I use. I really refer to it more often as machine learning, so we don't get lost in the shuffle of what it is that we're doing.
We're automating processes. We're mimicking behaviors that occur mostly in human populations, right? But I think it does give us the opportunity to connect with people in other parts of the world, in different time zones, people we might not otherwise have the opportunity to connect with, and to learn from each other in that regard.
But I think one of the most beautiful things is when virtual colleagues actually get to meet in person for the first time. I have a really great example that happened this week with two colleagues of mine who've been collaborating. One of them is at the University of Washington, and the other works for the DAIR Institute, the Distributed AI Research Institute, headed up by Timnit Gebru. They've been collaborating since 2020, creating a podcast together and writing several articles together, but they met for the first time at a debate this week. From the rapport they built virtually over all that time working together, you would have thought they'd known each other in person for years, but they'd actually only met that one time.
So while you can build human connection online, and I think that that's a wonderful way to set up in-person interactions, there is something special about when you finally meet someone in person.
Yes, totally. I think in-person connection is still going to be valuable. I mean, online environments and learning experiences are great and bring us closer than ever before.
And I agree with your prediction that human connection and the human touch will always be important. So we will shift to another question now, getting into the present state of AI integration, especially in learning.
We are now in an age where AI can personalize, adapt, and even predict learning needs, right? But at the same time, education systems are still catching up, especially universities, colleges, and K-12 education. In your experience, how well are institutions currently integrating AI, not just using the tools, but reshaping the deeper learning experience for students?
In schools, it's really a matter of where you're having the experience, because it does dramatically differ depending on, if you're talking about the United States, which state you're in, which county you're in, what school district you're in. Other countries have more standardized access to technology and also have standardized curricula. That's where the challenges, in my mind, come into play: looking at the foundations of the technology and what it can actually do for you.
Technology, especially artificial intelligence, requires defined parameters. This is true of any kind of technology you're talking about, whether you're talking about an LLM or whether you're talking about AI that's immersed in a virtual reality environment. You still need to define what the parameters of those things are.
The challenge is when you lack an understanding of what the technology is that you're using and don't know where the boundaries actually are, because then it becomes a safety issue, especially for children who are being educated in these environments; they can get exposed to content and experiences that they wouldn't have been exposed to in a live, in-person environment. Those are the same kinds of challenges. When you open up an environment through technology, are you putting in the same safety checks you would in a physical environment?
And I think people should put a little more thought into that, and less into what I'm noticing mostly in academic settings right now, which is concern about cheating. And in fact, not even just academic situations: using an LLM, for example, so that you can pass a tech interview and get hired at Amazon.
Literally somebody wrote technology to do that and demonstrated that they weren't blowing smoke. They literally published their actual job offers and what the process was actually like using their tool to basically cheat the system. So the question then becomes: is it the tool that's flawed, is it the tools that need to be examined, or is it the very systems underneath them that actually need to be reconsidered?
I think cheating is the big obstacle right now that all academic institutions are facing and trying to overcome. But in my opinion, encouraging students to use AI tools and work with them is eventually going to be the solution. I see it that way because AI is not going anywhere; it's here to stay and it's only going to advance, and using it will effectively become mandatory within the next couple of years. That shift is starting now. That's how I see it. I think students have to be encouraged to use these tools and to learn how to work with them.
That's going to be the future of learning, in my opinion: how we integrate with these systems and take the most advantage of those integrations.
Well, definitely, because if you consider how pervasive technology is around the world now: if you don't teach it in school, if you don't have access in school, kids have access outside of school, so they're still having experiences with technology. It reminds me of when I worked with car dealers who were concerned about their customers going on the internet, doing research, and knowing more about the vehicle than they did. That's kind of where we are with kids and digital natives.
You know, they are already using technology. They're already comfortable with technology. So it's really a matter of making sure that we're comfortable with technology ourselves in classrooms, if we're working with kids and others.
But it is a complete disservice not to give exposure, because that's sort of the point of education in general. For example, if you come from a community that happens to have less access, and you're not given access in school like some of your more privileged classmates or kids in a different school district, then you're missing out entirely and falling behind, because kids who have access to technology have that advantage.
To your point, it's not going away. I mean, even when I was in school, we had computer science class, so we saw the evolution of the computer from being this big mainframe thing in a building, to something that sits on your desktop, to something you can hold in your hand or put in your pocket, to something you can now wear on your wrist or as a pin or however you choose. It's just an evolution, and you're not doing anyone any favors by thinking that you're going to shield them from technology.
I mean, unless you're living off the grid, you're not going to shield them from technology.
So, let's shift to the mindset shift for educators. Many educators and leaders already feel overwhelmed by the pace of change, especially in the post-COVID era, and this AI technology is evolving so rapidly. What do you believe is the biggest mindset shift that all educators need to make if they want to thrive in this new learning ecosystem, as I will call it?
I think it's just embracing a learning mindset. I mean, people who work in technology, myself included, know that going in, right? We know that the technology we're working with, the coding languages we're working with, are ever evolving and changing, and you need to come at it with the mindset of: I'm always going to need to learn new things.
And that takes a level of curiosity. Educators who haven't gotten jaded, who are still in love with the reason they started in their profession in the first place, would do that anyway. They would just approach it with curiosity.
How can I use this tool to improve the lives of my students? Where are the places where there could be potential shortcomings? It even makes for a great critical thinking exercise for teachers to work on in tandem with students.
Totally.
I think that will be the key for all educators. Some of them have already started, the ones who see that this is not going away, and they are already putting in the effort and changing their assignments, their evaluations, and their whole teaching methodologies and strategies, online and offline in classrooms. It's easy to cheat now because AI can write your essays.
You don't even really need critical thinking, because AI can think for you. But I think teaching critical thinking skills is important too.
That can be done by comparing technologies. For example, I always encourage people to take a question they have, or a prompt they have for one LLM, put that same prompt into other LLMs, and then compare and contrast the answers they get. It's brilliant because you get different perspectives on the same topic, and it also opens up the question of why we are getting different answers.
If we changed this prompt in some small way, would the answers come closer together? Assuming the answers are disparate to begin with.
So I think it's really more about, rather than being concerned that they're writing the essay for us, asking what you're trying to accomplish by having an essay assessment anyway.
And how can you use the technology to get at that same level? So if the goal is to assess critical thinking, how can you use LLM-type technologies to determine how kids are thinking critically and using the technology? Because that's going to be their future.
So I'm going to shift our conversation a little bit toward you now and talk about cognitive science and AI. I've noticed a lot of your work focuses on behavioral science and how people learn, and that's what you researched while you were doing your PhD.
So not just what tools we use, but how do you see AI enhancing or possibly complicating the psychology of learning? Are we at risk of designing for convenience over cognitive depth at all?
I think we've got to bear in mind, when we're talking about learning, what we actually mean. Are we talking about human learning? Are we talking about a particular domain?
Because ultimately, when we talk about AI, we're talking about statistical learning. We're talking about machine learning, which is phenomenal at doing a lot of things really well that humans are not great at doing, specifically high-speed computation. So I think it's really more about understanding how machine learning works.
One of the ways I think I personally provide a lot of value in the field of learning is that I understand human behavior from a behavioral perspective. So when someone talks about reinforcement learning from a behavioral perspective, that's a very different thing from what reinforcement learning means to engineers who code machine learning. And so I start to introduce the concept that reinforcement as a principle came out of behavioral science, and that generative AI, when we talk about what we mean by generative machine learning, comes out of generative instruction.
So what is generative theory? It comes from psychology. So I think that it would be incredibly helpful for folks who are coming at AI, machine learning, statistical learning, from a purely computer science or engineering background, to study behavioral psychology.
Once they understand what Skinner was talking about in the 1930s and how he created teaching machines, they start to see that that's the actual underpinning of almost everything we're seeing with the development of computers, user-generated data, behavioral economics, all the concepts I learned having three degrees in behavior analysis. And it's frustrating to me that if you were to go online and do some research on behavior analysis, you're mostly going to find it as it applies to people with autism, whereas in fact people like myself and others have applied it to technology.
And sometimes we get shut down on things where we are actually providing significant insights, because people don't understand the underpinning, which comes from behavioral psychology. But once you make that connection, especially if you're working in the field of machine learning, statistical learning, and these generative systems, you see that honestly, it goes beyond just prediction. Ultimately it goes on to trying to control and modify behavior.
And ultimately, we can see that these systems have the capability to do that, for good or bad. Understanding how reinforcement learning works, what behavioral economics is, what Monte Carlo experiments and data actually mean, and how that translates to all the data we generate every single day when we interact online: that's an important thing to understand.
You brought up a lot of points, and I have so many follow-up questions, so I guess I can go one by one. You mentioned things that made me think about the resistance that comes from the psychological side of this. In your opinion, is that resistance more emotional in part, because humans are emotional, or is it practical or philosophical, in your view?
Where do you see that?
I think it's a difference in philosophies, because even within psychology, even when I was an undergraduate applying to graduate schools, I was already being told that behavioral psychology was dead, that Skinnerian psychology was dead, that no one was doing that anymore. Whereas my direct experience working in tech and seeing the development of technology shows exactly otherwise. I find myself pointing out to people: well, why would you think that an amalgam represents individual behavior?
Even something as simple as that, that designing systems based on mass data points never represents the behavior of an individual organism, kind of blows people's minds when they start to put it in the context of how decisions are made using big data.
So, equity is one of the big topics that's been around since AI started to take off a couple of years ago, and security and inclusion are major concerns in education, especially in this AI era. We now have both opportunity and risk.
For you, what are some ways AI can close equity gaps? And what are some of the biggest red flags you think we should be mindful of in these AI-enhanced learning experiences?
I think the direction we're going in with these very large systems is problematic, where we see the data that's training the systems produce biased output that actually does harm to significant portions of the population.
The way I see it addressing and correcting itself is by using more closed-set data, still using machine learning techniques, of course, and predictive methods and all the ways algorithms have now been designed to learn and improve upon themselves, but being more intentional about the purpose. This whole drive toward artificial general intelligence isn't the goal.
Nobody asked for that. We're not trying to have this Star Trek omnipotent service that can answer any question for you. Where AI and machine learning really become potent is when it's focused on a particular problem.
The detection of breast cancer is a great example of that. If we included data sets that are completely irrelevant to the problem at hand, it would dilute the results. Similarly, to get at the equity question, it's about making sure, if we are creating these closed data sets, who they are serving, and whether the population you purport to serve is also represented in the training data.
So being specific about what the purpose of your implementation is and focusing on who it serves.
If it's meant to be more general purpose, then you have to be more inclusive in who makes up that population. If it's targeted to a very specific demographic, make sure you're clear about that and make sure you've created a critical rational set of data representative of that population.
Again, there's another term: I can mention it to many, many people who have doctorates and years of experience in the learning sciences, and they have no idea what I mean by a critical rational set.
We also have a lot to think about as far as actually understanding these systems. I think that's going to be one of the other problematic parts of AI, because it's so confusing and so complicated that I don't think we will ever fully understand it.
I was watching a video on YouTube a week ago about how we can fail to understand AI even though humans build it: we build the system, but not its inner workings and how it all connects. That will be the hardest part, because these systems can communicate with each other in ways we won't know, connecting with each other on their own.
I think that's the scary part, at least to me.
Well, I would suggest that we live with hundreds of thousands of species on this planet that we also can't understand or communicate with, yet we've been able to more or less coexist with them successfully. That's true. But we also assume that we're at the top of that food chain, which makes me chuckle a little bit, because, really, a virus could take you out.
Yeah, I know. There are so many different cultures and different languages, I think maybe over 100 different languages that we don't know.
AI can make it easy for everybody in the world to understand each other, especially when we're traveling to countries where we don't speak the language. That's what I'm excited about. I think technology will have a breakthrough there, so that we can actually instantly hear and translate.
I saw these little earbuds now that can translate, I think, over 50 languages instantly as you talk, and then they speak to the other person.
But I think that when it comes to translation, the dialect and maybe even the accent are going to be the hardest parts for the AI to handle. It may still be better to talk directly to a native speaker.
I did some early work when voice recognition and voice synthesis technology was just developing, and even getting a consistent input representative of a single speaker of a language was hard. Even from day to day, if you have a cold, your voice sounds different, and so things can get lost in translation. So my concern is when we outsource anything like that. I'm not comfortable having people just speak for me in general, even when they understand the words coming out of my mouth, because oftentimes that's not what I meant.
So the nuance, and even people who speak the same language, we misunderstand each other all the time. So to have this expectation that technology is going to get it right, I don't think it's a fair thing to begin with.
The fact that we have come so far with voice recognition and voice synthesis that we can literally give voice commands to technology, from where I started in the 90s to where we are now, blows my mind, and yeah, I get excited about that. I really love the idea of translation tech solving the Tower of Babel problem, of being able to speak for you so that you can communicate. But the sad thing is when it gets it wrong and you've unfortunately offended someone: that's not what I meant, that's not what I meant.
So I'm going to shift our conversation again to another part, which is how we define innovation, especially in educational technology. I know you help a lot of organizations create innovative learning products very fast, and yet with a lot of purpose. That's really great.
So in your view, what defines an innovative learning experience today? Is it just about using the latest technology, or is there a deeper creative, or even ethical, component that separates trends from transformation?
I think one of the biggest concepts I like to give people when it comes to innovation is that it's based in creativity, and what do we mean when someone is being creative? There is a novelty aspect to it. So when you're talking about innovation, there's novelty associated with that.
And with novelty, it can be new to you, in that it's been around but you're just discovering it, or it can be new to the entire world. Right. So first of all, addressing that idea: take peer learning.
You might not have ever done it at your organization, so it's new to you. That's an innovation at your organization, because you've never done it before.
And that was the case when we started implementing it at Google. Google hadn't thought to do it at the time. But now, I mean, they've been doing it for over a decade.
So not really innovative there anymore. And the concept of peer learning itself is something that's existed. So in that way, it's not really novel.
And then, for example, take a VR studio. We've had VR technology forever, but no one has used it for this particular purpose. So that's innovation.
So there's innovation in how you use tools. There's innovation in the tools themselves. And then there's innovation in who the audience is for that tool.
But also, what does thoughtful innovation look like when time and budget are really tight, especially right now, when we are getting a lot of federal cuts, especially in education? The Department of Education is probably going to be dismantled soon. What are your thoughts on this?
I think when we're thinking about what is the most cost-effective way to do something, it's about teaching your people the skills. So I use the term generativity not just to refer to the type of instruction that I create or to refer to a particular type of AI or machine learning. I also refer to it as the ripple effect you have when you teach something to someone who is going to then teach it to other people and so on and so on and so on.
That's generativity, when you're building that kind of a web, that kind of a bubble. So I think in the era of budget cuts, of having to do more with less and being able to scale things, don't underestimate the impact of your own ability to be generative: you know things you could pass on to other people, who can then pass them on to other people.
And that is a way of keeping things alive. I happen to be of an Indigenous background and so the oral tradition of sharing and preserving knowledge is something that's deeply ingrained in me. And I think that that's another thing we have to consider in the Age of AI, in the Age of Technology.
Don't stop telling stories. Don't stop being generative with your own knowledge and passing that on.
So what about the role of immersive tech? There's been so much excitement about these immersive technologies in education: virtual reality, augmented experiences, XR, AR, VR. And now we even have these AI-driven avatars.
In your work, what role should these emerging technologies actually play in educational settings? And where should we be cautious not to overreach or overuse them?
I think there's a couple of things to consider here. First of all, what platforms you're using. So is this a publicly available metaverse, for example, or is this a private metaverse?
Because that's a difference between a closed and an open dataset. And that's where security issues can become a real thing. The other thing is the why of why you're doing it.
Oftentimes, virtual reality and extended reality are used because doing something in real life is dangerous and people could die. So we simulate those experiences so that there's less harm, so people have a chance to practice and make mistakes in an environment where mistakes have less impact. Those mistakes then become a teaching experience.
So there's definitely better uses for extended reality than others. But at the same time, I really don't like to limit how people think about things.
Because again, recently, I figured out a new use case for VR that I would have never thought of, had I not been prompted by something else that I'd seen.
The benefits of virtual reality are, like I said, the whole safety aspect of it and the fact that you can bring people together from different parts of the world, different areas, different time zones. You can recreate things that no longer exist based on pretty good predictive analytics of what those things might have been like. But at the same time, if you're not using accurate data, or if you're using biased data, then not only are you presenting something that's factually inaccurate, you're also influencing a generation of people with that bias and all the ramifications of that.
So just like with any technology, it giveth and it taketh away. When you're using it, be mindful of what some of its shortcomings are. And one of the things with VR and AR especially is that it used to be cost-prohibitive and now it's not, because of the technology and tools and platforms that are now available.
But then the trade-off is: at what cost? Why are these things less expensive? What does that mean, especially if you're working with children?
Technology can be very, very helpful to kids. It can also be very, very harmful. So it's always a question of what we are teaching, what we are enabling,
and what the risks and benefits of doing that are.
I know you keep mentioning students, so let's talk about them a little bit now: preparing future learners. When we look at students, what skills, habits, and mindsets will these upcoming learners, or lifelong learners of the future, need to succeed in an AI-shaped world?
Are we preparing them not just technologically, but cognitively and socially for what's coming?
That is a challenging question, because there's a lot to unpack in it. In terms of the challenges students today are having relative to earlier eras, it's becoming really difficult to determine what's actually real. This generation in particular has grown up with filters and social media, with ways to distort reality.
And if their educational systems contain those distortions and children don't have the opportunity to have authentic experiences in the real world, it soon becomes blurred, well, what is real? What is the real world? And granted that can open up some brilliant, wonderful philosophical questions, I'm sure.
But also, when people can't distinguish real from not real, they're open to manipulation. So one of the things that I always teach anybody I care about is, when you're confronted with information that maybe you haven't heard before, or that contrasts with your current understanding of a topic or subject, the two questions I always suggest people ask are: who told you that, and why did you believe it?
And then start to unpack that, because people generally don't have that conversation. And when it is becoming more and more difficult to distinguish between real and not real in the world, when we ask what we even mean by truth, it impacts your decision making, and ethics comes into play. So what do we even mean when we say someone is acting ethically?
Ethically according to who? Who is benefiting from this? Who is being harmed from this?
And I think just those kinds of critical thinking skills are going to be key to continue to impress upon students as they are navigating this technology rich world.
I know it's changing a lot, and students' perspectives are going to change eventually. I'm also wondering: are current curricula designed with the future of work and learning in mind? Will AI help change the current curricula?
Or do you think that change is still coming?
Well, I often wonder about the future of work and this whole idea of predicting what skills you're going to need in a generation that doesn't exist yet, in a future that doesn't exist yet. This is the benefit of having lived and worked for several decades: you can see where predictions didn't pan out. You can see where we thought it was going to be like this, but it was actually like that. Where are our flying cars?
I remember the Jetsons; I was promised this by now. In fact, the first Blade Runner should have taken place by now.
There's a flying car coming from Sony; I just recently posted a link about that one. Sony actually created, I don't know if it's a concept or an AI-generated version of it or something, but it's really coming, like Blade Runner coming true, and I'm so excited about that, though I don't know when. But yeah, it's real, black and very futuristic. It's almost here.
So I'm going to shift to, I don't know if you want to add anything?
No, no.
Okay. I'm going to shift to leadership and AI adaptation. In our previous podcast conversation, you said that leadership in this space requires speed, curiosity, and courage.
So what does courageous leadership actually look like in today's educational landscape, especially when dealing with disruptive technologies like AI?
A couple of things come to mind right away when we're talking about leadership. One is considering what your North Star is for how you're leading the population of people you are charged with. What things are actually going to benefit them, and what things are you going to absolutely take a stand against? Because without that North Star, without that compass as a leader, you can be put into a position where you can be manipulated by forces outside of yourself.
And so I think the big thing for folks to really focus on is: what's my reason for doing this? Where's my line in the sand? Where do I draw the line and say, this isn't appropriate for my population, versus saying, my population needs this, and here's the rationale and justification for why they need it?
So leadership has a responsibility to the people they lead. And without putting them at the fore, without considering who benefits from this and who is marginalized by this, you're not really doing your population a service.
It's changing. I think even leadership is changing too. In my opinion, we shouldn't give 100% control to AI; we should be directing it for our benefit, because the world is ours and we were here first. Why would we want something we created to take over just because it's more intelligent than us?
But eventually, I think we will be working side by side with AI systems in the next decade or so. I don't know what the future holds, but these are exciting times, and I'm glad to be alive at this point in our evolution, as we might call it, because I think this is going to be another evolution for humanity. That's how I see it, including, of course, leadership and how we learn and everything else.
So I have one last question that I wanted to ask you, and we will zoom out a little bit. When you envision education 10 to 15 years from now, not too far away, what excites you the most and what keeps you up at night?
What kind of learning futures do you hope you are building towards?
I would really love to see a deeper integration of technology and the natural world. I'd like to see more of a symbiotic kind of development of the two, where a kid can go out in the natural world and look around, and technology can help them navigate their environment. That's what I would love to see.
Not control, not curb, but inform and assist, right? So with this whole idea of having virtual assistants, they should be useful to the person and the population they're intended to serve. That's what I hope to see.
The promise of the internet, from when I started doing this work, has been personalized instruction, right? It's been personalized experiences: we understand you, we get to know you so well that we can assist you, the way the person who loves you the most can generally read you and anticipate your needs.
That's the promise of technology. So if I'm learning something, instead of having to go through the grade one, grade two, grade three curriculum, why isn't it adapting to the circumstances of my world and what I need to learn to survive in my world? I would love to see that.
So it would be a combination of an individual person's interests with the rate of content that they need: appropriately paced content for that particular learner, so that they're able to function well in their own particular environment. I'd love to see that in the future.
I think that'd be awesome.
That's wonderful. I just want to say thank you so much. I truly appreciate your time, Linda, and thank you for taking something as complex as AI and education and breaking it down in such a clear and thoughtful way for us. It really means a lot.
I'm genuinely grateful for this conversation.
Thanks, Matt. I appreciate you extending the invitation. Happy to have done it.