
How is AI going to shape the future of work and education? What skills will be needed? What evolution will take place? In this episode, we are joined by AI ethicist and strategist, Rebecca Bultsma. We discuss how to think and talk about AI, as well as the need to “become more human”.
Matt Sterenberg (00:01.292)
Rebecca, welcome to the podcast.
Rebecca (00:04.259)
Thank you so much for having me. I’m so excited to be here.
Matt Sterenberg (00:07.394)
Well, I wanted to bring you on because everyone’s talking about AI. It can sometimes be nauseating, but I’ve been following you and some of the stuff that you’ve been posting, and I find your perspective really interesting. We’ve got a big topic, kind of a tall task ahead of us today. We’re talking about AI and the future of work. So the first thing I want your help with is set the table for us. Like where are we at with
Rebecca (00:29.433)
Mmm.
Matt Sterenberg (00:36.864)
AI’s impact on work today? Like how significantly has it penetrated work, and what are recent grads going to experience today?
Rebecca (00:51.075)
There’s so much to unpack with that to just dive into, but there are some interesting things to think about. I think I just actually read this MIT study that said that 95% of generative AI projects in enterprises and businesses actually showed no measurable impact on profits and losses. So I thought that was so fascinating, because here we’re so worried that it’s going to…
take over all the work that we do. This just shows us that if it’s not implemented in a smart, strategic way, it literally doesn’t matter. It’s not benefiting people in the way that we thought it would. But I think maybe what’s most interesting is this kind of paradox that we’re seeing. Unemployment is kind of low right now, like relatively speaking. And businesses are starting to report, hey, we’re having some productivity gains with this.
But I think it was Reuters that just did a survey, and 71% of the people who responded said they were worried that AI was going to take everybody’s jobs. So there’s this disconnect, I think, between what people think is going to happen and what’s actually happening right now. Because there was another report, and I’m trying to remember where I saw it, I think it was the World Economic Forum, that said they expect 78 million net new jobs
by 2030, which is a ton. Maybe it was 170 million created, with 92 million displaced. So we’re thinking like a net increase of 78 million jobs. And so that is telling us a very different story than how people are actually feeling. And people are closer to what’s happening. So you have to wonder who is right and where is this actually going to end up? And the truth is probably somewhere in the middle, and it’s a slow burn, right? Like, so it’s not
entire jobs necessarily being automated away, although in some cases it is right now. But it’s different parts of everybody’s jobs. And I guess it will just be a matter of if you have one of those jobs that is made up of tasks that can all be taken by AI, or most of them, that will look a little bit different. I think Klarna, the company, they just reported that they had AI that does the work of like 700
Rebecca (03:13.941)
agents. And that’s a little bit scary too. There’s this asymmetry, right? Like the businesses are like, this is going to be great. We’re going to be so much more productive and make tons of money. But that doesn’t mean anything to Sally, who just lost her job at Klarna because AI took her job. So individuals are feeling differently about the future of work and AI than businesses, I think is the headline there mostly.
Matt Sterenberg (03:40.074)
Yeah. Well, I think there are a lot of people that are hesitant or have reservations about it in general. And that’s why I wanted to have this conversation. And there’s so much noise around it that it’s like, what are the actual signals? Like where is this actually something that we should be really skeptical of, or where should we really change our behaviors to accommodate what’s coming? Or is it just
that there’s so much conversation around it that it becomes this much bigger thing? Cause I even see this: every company, if they want to get additional funding, they’re like, we have to say that we do AI, right? You know, like there’s this component where people feel like they have to participate in the conversation to be relevant. And that might be adding to some of the concerns around AI. But you brought up the example of
Rebecca (04:19.095)
Yeah.
Matt Sterenberg (04:34.008)
Klarna eliminating those 700 jobs. Those are kind of entry-level jobs, support agents, service centers, that type of thing. And I think one of the challenges in higher ed today is underemployment. So you go to college, you don’t exactly have the job that you want long term. And if more of these entry-level jobs are going to be eliminated, that adds a little bit of pressure potentially.
The jobs that you go to college for might not be eliminated, right? So does this make the college degree more valuable or less valuable? How do we start to think about a college education? It’s a big question, but I really would be interested in your perspective.
Rebecca (05:06.809)
Hmm.
Rebecca (05:17.123)
I think if we think about it, I wonder if it’s going to be less about front-loading your education than it has been, right? Like you invest six, eight, ten years of your life in your 20s to go to school and front-load your education. And then in theory, it pays off for the rest of your life, right? That’s the model that we’ve been taught. And I just don’t know if that’s going to work anymore, because we’re seeing now, again, I’m
trying to keep all the studies I read straight in my head, but there was a recent study that talked about how 39% of the skills that we use today in our jobs are gonna be basically obsolete by 2030. I think it might’ve been the World Economic Forum. And so that just shows that we’re just gonna have to keep adapting skills and re-skilling for the rest of our lives. And I don’t know that that’s totally new, right? Like…
Matt Sterenberg (06:10.734)
There’s lots of conversations about that now. I think the hardest thing is that higher ed, and education broadly, is not super nimble. You know, they’re not known to be willing to change quickly. They’re not super agile. So yeah, how do we navigate that? And it’s like, what are the new skills? It’s like, all right, you’re eliminating 39% of my skills.
Rebecca (06:27.16)
Yep.
Yeah.
Matt Sterenberg (06:36.588)
What are the ones that I do need that don’t exist today?
Rebecca (06:41.817)
And it’s not even that they don’t exist today. They were just skills that weren’t as important as they maybe were before, or are becoming critically important. We hear this all the time, this critical thinking imperative, right? If we could only gain or invest in one skill, that would probably be it. Not the watered-down version where you just, you know, say that you’re critically thinking when you’re not, like lots of people on social media do. I just mean critical thinking in real-world contexts, like
about AI outputs specifically, and just thinking even more deeply about the information we’re getting, the data we’re reading, the sources, where it’s coming from. I think that we need to be even better at that than we ever have been before. And that needs to be the number one skill people have. However they gain it in college, whatever it is, that needs to be huge. As well as ethics. And I don’t mean the philosophy-level ethics. I mean applied
ethics in real life, being able to spot algorithmic bias or understand data consent or recognize manipulative AI design in real time. So we’re not necessarily learning the fundamental principles of ethics itself, but ethics applied to the world that we’re living in now, and making sure that people understand how important that is. We’ve been kind of like…
willy-nilly about it for a while, like, yeah, social media, fake news, nothing we can do about it. But it’s getting more convincing than ever. We can’t ignore it anymore, and we just have to be smarter about those things. As well as, I would say, emotional intelligence, like as a survival skill. Because AI can’t read the room or understand the nuances, or a lot of the time it struggles with cultural nuances and differences, conflict resolution or active listening or reading body language, right? So
that’s going to be a critical skill. And we learn a lot of those things, for better or worse, I think, in college, right? Like you’re learning how to exist in social situations and think deeply and differently in college. You’re usually interacting with people from places and cultures that you’ve never interacted with before and encountering new perspectives and points of view that challenge your own. And so I think those things that we learn in college will still matter.
Rebecca (09:07.293)
and we’ll just have more of a chance to practice those things and work on things like creative solutions through those critical thinking arenas, maybe. And then still continuing to learn technical skills throughout our lives. Like, I don’t know how old you are, but I remember the internet and email kind of just coming in when I was in college, and…
the internet becoming such a huge part of my career as we went along. A lot of people have been through some of these technology shifts. So it’s just a matter of kind of adapting in that way, understanding what AI is and how it works. But I think we all have to become somewhat of data experts too, which I know sounds weird, but we just have to understand how our data might be used against us, how algorithms operate, why the loyalty card for your supermarket is not
doing what you think it’s doing. That’s not the reason that it’s there. We all have to start understanding and learning these kinds of skills to protect ourselves, I think, in this new AI era.
Matt Sterenberg (10:12.534)
Yeah, AI is getting so good that it is hard to tell the difference. I haven’t shared this with you, but I am actually AI. I forgot to disclose that.
Rebecca (10:21.241)
Well, I could tell, your finger looked a little… Just kidding. I probably wouldn’t know. I am shocked at how good some of this is getting. There have been a few times that I’ve been like, I actually don’t know if this is real or AI. I have a workout app that I like, that I’ve used for quite a while. And just the other day I was like, I wonder if this girl is real.
Matt Sterenberg (10:45.91)
and the instructor’s AI.
Rebecca (10:51.319)
I’m actually not sure. And then I started thinking through, like, it would actually probably make so much sense if you could have an AI personal trainer that looked exactly real. You could just be like, for this week’s workout, demo a lunge, demo this, demo this, and you just program it in and it happens.
Matt Sterenberg (11:06.274)
We need like a secret handshake to let someone know that you’re human or something or like a wink or some sort of.
Rebecca (11:10.763)
It’s funny you bring that up. It’s funny you bring that up. I’m trying to remember what it’s called now. It’s like this orb. Sam Altman actually has a company that’s doing this proof-of-personhood initiative where they basically scan your retina. And everybody at the time was like, that’s insane. This was a few years ago when it kind of hit the news.
I’m trying to remember what it’s called. Someone listening to this is probably screaming it at their phone right now. But I know it’s this orb thing that will do proof of personhood. And at the time it seemed crazy, and now it actually doesn’t, because, yeah.
Matt Sterenberg (11:51.16)
But the thing is, for someone that’s critical or skeptical of AI, the idea that you’re like, well, let’s get around AI by proving that you’re, or let’s prove that you’re not AI by scanning your retina, for those people, they’re like, so you’re going to just give another machine my retinal scan? You know, it ends up in a loop where you’re like, I don’t want to
Rebecca (12:12.773)
Yeah.
Rebecca (12:17.088)
totally.
Matt Sterenberg (12:20.248)
give you anything of my personhood. I don’t feel like I need to prove that I’m a person, but yeah.
Rebecca (12:25.131)
Yeah, well, we’re already so worried about where all our data is going and how it’s being used against us and how it’s kept secure. This would be something else to worry about. And the company, it’s called the Worldcoin project. It’s an orb that scans irises to generate a unique privacy-preserving digital identity called your World ID. And I wonder if, five or ten years from now, we’ll look back and be like,
Matt Sterenberg (12:45.745)
man.
Rebecca (12:49.081)
remember that one time we talked about it and I couldn’t even remember what it was called and now it’s the only way you can get into a concert.
Matt Sterenberg (12:52.94)
Yeah, right. You have to, yeah, you can’t be let into your workplace until you scan your retina or something like that.
Rebecca (13:01.145)
At the same time, I have to say there would be something so freeing about not having to remember any more passwords for the rest of my life. Like just scan my retina and let me into my Gmail account, you know?
Matt Sterenberg (13:13.09)
Yeah, this is an aside, but I was like anti the face unlock of the iPhone, and then it’s like, it’s so convenient.
Rebecca (13:21.081)
I know you would hate going to the Sphere in Vegas. I had to sign a ton of paperwork to go to a concert there, saying that they were using facial recognition and everything on site, and if I didn’t like it, to not go.
Matt Sterenberg (13:34.306)
But I think that’s a good segue, because I feel like there are so many people that have reservations, but yet it’s sort of like, they’re not against everything AI, they’re thinking about where it leads. Like, you can just be like, AI helps me write an email or a blog, or helps me think about, you know, how to solve this complex problem. Or sometimes I’ll use it to just say, hey,
Rebecca (13:49.593)
Mm-hmm.
Matt Sterenberg (14:03.246)
How does this generally work or operate? How are teams generally structured in this environment? And it spits out really helpful things. But I think, okay, this usage is acceptable. It helps me kind of be more efficient at my job, or kind of just rounds out the difficult edges, because it’s got this amazing computational power. But we’re in the infancy. And so it’s like, I’m not scared of what AI is today. I’m scared of what it will become. And I’m constantly feeding it.
Rebecca (14:28.323)
Mm-hmm.
Matt Sterenberg (14:32.704)
And so it’s not necessarily a fear of, and not that these people are thinking in this huge dystopian place, but I think it’s fair to start thinking about those things. And yeah, I think that’s mostly people’s reservation: we don’t know where this is going. No one does. And it seems like we’re going to just keep going until
Rebecca (14:45.079)
Mm-hmm. It is.
Matt Sterenberg (15:01.806)
perhaps it’s too far. Right? Yeah.
Rebecca (15:03.095)
Hmm. And that’s the million-dollar question. That’s a lot of what I think about too. And I think a lot of technology in general is all about trade-offs, right? Convenience for something else. And even ethics in general is, is it what’s best for everybody? If we decide it harms X number of people versus this, is that okay? If we give this up now, are we prepared for what happens later?
Matt Sterenberg (15:13.068)
Yeah.
Matt Sterenberg (15:20.212)
Individual.
Rebecca (15:31.743)
or if we give up this degree of autonomy, or of our information. Now, like I said, scan my retina, but what does that mean down the road? And will I live to regret that? And we just, we don’t know. And that’s, I think, the hardest part: we’re navigating blind. When the internet rolled out, and other major technologies throughout history, there was this really gradual rollout where, you know, it’s the frog in the water. We had a chance to kind of get used to it.
We lived in the country, so it took a long time for us to get high-speed wireless internet. Or I had a Motorola flip phone for a long time while other people had iPhones. With this, it just kind of dropped on society in one day with no instruction manual, and we’ve just been struggling to figure it out ever since. It’s not like there are people who are light-years ahead of us with better versions of this who’ve already figured it out.
Like when something drops, it drops to all of us on the same day, and we’re trying to make sense of it and figure out what we’re willing to trade. And we’re also coming from a position where we don’t have a lot of say or control over what’s happening. Big picture.
Matt Sterenberg (16:38.2)
That’s actually a really interesting point, whereas I could opt out very tangibly of other things, right? I can choose to buy a phone or not buy a phone. I could choose to have a computer. And now if I, you know, call up my credit card company ’cause I want to dispute something, like, I might be talking to someone that is, you know, AI, or I’ll be using a system, and it’s like,
Rebecca (16:48.824)
Mm-hmm.
Matt Sterenberg (17:07.34)
Those things are, yeah, people don’t fully know what they’re participating in. And it feels like that’s part of the scary element, that I can’t just choose to be a laggard. It’s like it’s pulling me along with it, whether I like it or not.
Rebecca (17:19.711)
Mm-hmm. Totally. And we’ve just never done this before. Any other technology that was introduced, the existing laws that we had more or less pertained to it, right, with a few minor tweaks. Like, there weren’t any really dramatic brand-new rules that had to be implemented that were fundamental to our
daily lives. Whereas this, because AI is making decisions, and often bad decisions or biased decisions that impact real people, and we have no laws about anything like that, that’s the scary part, because we’re all part of this giant social experiment. We don’t know how it’s going to go wrong or what the ultimate consequences will be. And if something does go wrong, there’s actually nobody to be held accountable right now.
That’s the problem. With general ethics questions, it was as simple as, you know, the person making the decision, do they factor in this or do they factor in that? Do they factor in what’s best for everybody? Do they make these decisions based on shared values? This is this diffusion of responsibility where, okay, is it my fault because I downloaded ChatGPT, or is it my company’s fault for adopting ChatGPT, or is it OpenAI’s fault, or is it the government’s fault for not regulating
Matt Sterenberg (18:46.166)
Regulating,
Rebecca (18:46.925)
this? Right. So it’s just this diffusion. It’s everybody’s fault and nobody’s fault when something goes wrong, because there are just no guardrails or guidelines. And that’s, I think, the scariest part, because you’re seeing this go wrong a lot of times, in a lot of different ways, now. And everyone’s like, well…
Matt Sterenberg (19:08.64)
Yeah, it’s a different technology in the sense that most other technologies were moving from analog to digital, right? Like email: we used to write letters, and now we’re just doing it electronically.
Rebecca (19:15.449)
Yeah, they just amplified existing human things.
Yeah, like humans would make music, and then there would be electric guitars or mixers that would help humans make music better. Now we have this thing that will just make its own music. And what does that mean? And what does that mean for musicians? And we’ve never had to make a law about machines that make their own music before. And now we have to figure out how things like copyright and trademarks and the future of work are all impacted by that one thing. That’s, I think, what’s tricky.
Matt Sterenberg (19:51.502)
So you talked a lot about ethics and critical thinking, and like, those are the skills that we need to develop. You know, a lot of people that listen to this podcast are in higher ed or K-12, and they’re thinking a lot about AI because they’re being inundated with, how are we gonna adopt and embrace AI? And then also, we need to think about the skills that we’re giving these learners. But I guess for educators,
what do you recommend for them? In terms of, we don’t wanna just not embrace it at all. We wanna embrace it to some degree, right? It’s an amazing, powerful technology. How do we ask the right questions? How do we be skeptical and critical in the right way? What are the big questions we need to be asking? And I think this is important for education in particular because, you know,
this is the development of the people that will ultimately be driving what we do with it next, what questions we’ll be asking. How do we embrace, but also be critical?
Rebecca (20:56.089)
Hmm.
Rebecca (21:01.529)
I think schools are just stuck in this impossible position right now, honestly. They need to be teaching kids to use AI, and understanding how to use AI themselves, but also teaching them to not trust it at all. And that is hard, I think. I’m not necessarily a tech or an AI evangelist. I’m fascinated by it, but…
I’m also a huge skeptic, and I ask a lot of questions, and I think that’s kind of a healthy way to live. I think you have to have some sort of AI fluency slash AI literacy slash whatever we’re calling it this week. Like you have to understand what it is and how it works. You can’t just stick your head in the sand, because it’s happening all around you. Kids are using it. Teachers are using it. It’s just a skill and something that you kind of have to understand. If for no other reason, and I tell teachers this all the time,
take your assignment that you’ve given every year for the last 14 years and give it to AI and say, tell me all the ways that my freshman class might use AI to cheat on this assignment. Right? Use it for nothing other than that. Help me figure out how to reach the outcomes of this assignment, and reconfigure it in a way that AI can’t do it for somebody. Right? So that’s useful. We do need to teach kids how to think
critically about outputs. And I’ve seen a lot of teachers do that in a lot of really, really interesting ways. But I’m fundamentally against AI detectors. I think that that is us just trying to safeguard an archaic system that just isn’t going to work anymore. We’re just trying so hard to keep things exactly the same no matter what,
Matt Sterenberg (22:42.156)
Seems like a losing battle too, right? There’s… yeah.
Rebecca (22:50.489)
instead of adapting. And I’m seeing even some new twists on it, which is funny, because I wrote a whole thing last year about where I thought these AI detectors were going. And companies are just releasing that now. Like, write your essay in this box, and we’re going to film you writing it and keep track of every keystroke, and then note how many edits you made and how long it took you. We’re going to surveil
your entire creative process, and it has to fit in this box.
Matt Sterenberg (23:22.346)
And then there’s the question: when you’re being watched, do you have a different output, right?
Rebecca (23:26.777)
Totally. And if you have to confine your creativity to a box that you know is being surveilled, in a society where we now have endless capabilities for creativity and ways to think and approach problems, we’re not preparing kids for the future. We’re teaching them to literally think within a box. So I hate it. I don’t even want to go down that rabbit hole. So I think that us trying to safeguard the way things have always been is going to be a losing battle. But…
Matt Sterenberg (23:30.552)
Right.
Rebecca (23:55.523)
Critical thinking on steroids needs to be a priority that we’re teaching. We need to implement that into curriculum, teach kids to question everything. That’s gonna be basic survival 101. And side note, we don’t just need to do it for kids. We need to be doing this for adults too, because our grandparents and our parents on social media, the internet wasn’t even around when they were young. So they missed out on fundamental Internet 101. They’re the ones who are sharing crazy stuff on Facebook right now. So,
you know what, we all need some sort of basic level of this: checking logic, checking sources, spotting hallucinations, challenging outputs. So I think that needs to be a critical, like, 101-level course in post-secondary. 100%. You take Algorithmic Bias 101, or Societal Ethics and Technology 101.
Like, that should be a mandatory class. In my mind, there’s a set of courses that we all have to take as soon as we hit our undergrad years in post-secondary. We need to take that, and every teacher needs to take it. I think at work, we safeguard AI tools until you’ve taken a basic course, and once you can pass that course, then you know how to ethically use those tools. And I think we have to think about assessments differently. I really, and I’ve said this from
probably the first week that ChatGPT came out, I think we’ve overcorrected so far as a society to this written culture, right? We text, we email, everything is writing, and we can’t live like that anymore. I see us really going back to a more oral culture where we talk like this. If I have a student, I’m like, I wanna know that you understand the Industrial Revolution. Explain it to me. Give me your arguments, give me some rhetoric.
I will know, if it’s coming out of your mouth, that it’s coming out of your brain and that you understand. I don’t have to surveil your process for that, necessarily. And I think so many kids these days, I have four kids in college, they lack something, not even just because they were COVID kids in high school, but they lack something with basic human verbal interaction. The joke is these kids don’t even like to make their own doctor’s appointments and talk on the phone. They hate talking in person because they grew up
Rebecca (26:11.513)
being able to think everything through in a text and clear it with their friends and everything before they hit send. There was a very, very low margin for error. Whereas when you’re talking to somebody live, so much could go wrong. So I think that we’ll have to go back to something like that, and be continually learning. That 39% of skills, I’m like 99% sure that’s right, I think it was the World Economic Forum: 39% of our skills will be obsolete by 2030.
I don’t think verbal communication skills and emotional intelligence and critical thinking will be among those.
Matt Sterenberg (26:45.666)
Right. We’re still going to be in relationships, right? Like, that’s not going away. So really your point is, let’s not compete on AI’s turf, you know, let’s not compete on their battlefield and try to block them out, or try to get better than AI at this thing. It’s almost like we have to become more human, the things that it cannot do. But that’s like,
Rebecca (26:48.266)
I hope so.
Rebecca (26:58.155)
Yeah.
Rebecca (27:11.993)
100%.
Double down, lean in.
Matt Sterenberg (27:15.916)
Yeah. Yeah. Yeah. That, I think, is the tougher stuff though, right? Like, we all had a teacher that had the easy multiple-choice test, you know, and it was like, what are the right answers? Those are the things that you can find answers for. But I was a history guy. I liked history more. I was never the math person. So yeah.
Rebecca (27:25.158)
it’s messy. Yeah.
Rebecca (27:36.631)
Were those the teachers that changed your life? No. The teachers who changed your life…
But did you love history because you had a history teacher, or a certain teacher, who made it exciting for you, who helped you engage in it, who helped stoke your intellectual curiosity and saw something in you? Those are the things we have to double down on, especially in education, because nobody’s favorite teacher was their favorite teacher because they had easy tests or really great-looking slide decks, right? It’s those human things. And those are the things that inspire people to be better and inspire them to greatness. And I just think
Matt Sterenberg (28:05.869)
Yeah.
Rebecca (28:14.869)
We’ve over-corrected in this standardized-test society, with levels of gatekeeping, what is honors and not honors, and we just have to start rethinking that. And it’s hard, and it’s going to take time, right? Like, it’s unnatural. Schools are kind of failing at all of these things right now while they’re figuring it out. They’re not embracing AI enough to make kids competitive for the future, but they’re also not skeptical enough to make them safe. A lot of them are just pretending like this isn’t happening and totally banning it.
But for every semester that we delay this balance, I think we’re causing harm, to a degree, in not helping kids get ready for the future.
Matt Sterenberg (28:55.458)
Yeah, it’s really interesting, because as I think about, like, what is the ideal graduate, you know, it really becomes someone that knows how to navigate, to use the tools at their disposal for the purposes that fit really well, but who can stand up in front and explain why they’re doing what they’re doing,
Rebecca (29:15.319)
Yeah.
Rebecca (29:18.68)
Hmm.
Matt Sterenberg (29:21.004)
to think critically, solve problems, to develop strategy on their own, to communicate effectively. That’s hard, you know; we all have certain gifts, and that’s difficult. But I do think, you know, there is going to be a push for this. Like, we all have a longing for connection and relationships, and we’ve got to find a way to do that.
Rebecca (29:29.778)
totally. Totally.
Matt Sterenberg (29:48.832)
And I think maybe there will be, not a reckoning, I think that’s a strong word, but I think there is going to be an element where people want to go back to, like, an oral test or some sort of different form of testing, ’cause
this is also a thing where students are like, I’m not using AI to cheat, but all my classmates are. Like, they want a level playing field too. And they want to be able to be proud of their accomplishments. They want to stand out. And the only way to do that is if you have a system that people can trust, yeah.
Rebecca (30:21.303)
Yeah. And it’s a rigged system right now. Like, that’s why AI detectors don’t work, right? The only kids you’re catching with an AI detector are the kids who suck at using AI at this point. You know, anyone who even remotely knows how to leverage AI properly can make it completely undetectable. Plus there’s no way that teachers can even prove it. We’re wasting our time on the wrong things. What is so interesting to me is that this kind of Gen…
Matt Sterenberg (30:32.888)
Yeah.
Rebecca (30:50.801)
Z, Gen Alpha, they are actually some of the most skeptical about AI. As opposed to like people like me, like who are leaning in, like people who are older, like Gen X, elder millennials. I think because we see what can happen in the future of work and we see the implications and we read the news, whereas a lot of these really young kids…
We’re told all through high school, it’s cheating. It’s cheating. Don’t use it. Don’t use it. And they were scared of it and it was blacklisted and they were told that they should not trust it and never use it. And then they were like threatened with it, right? If you use it, you’re going to fail. We’re going to catch you. You’re going to be in trouble. So it’s just, this is kind of the brainwashing that’s happened for that generation who are now scared to use it and think it’s terrible as opposed to more mid-career professionals who are like, this is something, this is going to matter.
We need to learn this skill immediately because this is important. And more people who are in their mid-career are recognizing that than the younger kids, which I think is fascinating. Granted, the younger kids and young adults today, they're also more skeptical of what they see online. I showed some of these robotics things that are coming out from Tesla and from all of these different companies that are at least…
releasing robots. There's actually some pretty amazing ones. And my son's like, that's AI. And I'm like, actually, no, this isn't AI. But they just automatically think everything's AI. They don't trust anything they see online, which I think is kind of a good thing moving into the future. They have that critical eye. Whereas, you know, my mom or your mom or your grandma would be like, did you see those bunnies jumping on the trampoline? That was so great. So it's just kind of interesting, everybody's at such different…
Matt Sterenberg (32:24.214)
Yeah, that’s fine.
Matt Sterenberg (32:33.876)
Yeah, yeah, Yeah.
Rebecca (32:40.217)
places, and what that means for the future of work will be interesting. I am optimistic about this generation. I think, you know, the older generations are always worried about what's going to happen to the younger kids, and the younger generations figure it out. So I am optimistic about that. The people I worry about are more the mid-career people who won't have the chance to course correct or upskill, or entire countries full of people,
like in the Philippines or India, who make their livelihoods working in call centers and places that are going to be widely automated very, very quickly. What happens to all of those millions of people? And what does that mean on a global scale for equality? And what's the economic impact of that long term? Because we're not going to upskill everybody to do something that an AI is going to do. I think maybe the solution has got to be coming from
governments incentivizing businesses to keep people. Like, listen, we know you could save $100,000 if you went all in on AI. We're going to give you $100,000 to keep all the people and teach them to work with AI. So something's going to have to happen, I think, in that way, plus the upskilling, plus people just trying to kind of look out for each other. Because as a society, if we end up in a situation of mass, mass unemployment, that's not going to be good for anybody.
Matt Sterenberg (34:09.002)
Yeah. Well, I don't really want to end there because that's scary. But I'll give you the final word. Is there anything I should have asked you, Rebecca, or anything that you want to bring up that we didn't talk about? And again, check out RebeccaBultsma.com. You're speaking internationally, you're talking with educators and businesses, AI ethics obviously is a theme that's come up, you're very passionate about that, but
Rebecca (34:12.633)
You
Matt Sterenberg (34:37.204)
anything that you want to highlight before we close up today?
Rebecca (34:41.069)
I think it's easy, and I find this happening to myself sometimes too, to kind of get caught up in all the doom and gloom of all the big things that can happen and that are going wrong. And in my mind, I've just tried to break those into three separate buckets. I think we're all living in a situation we didn't ask for. We weren't prepared for this kind of technology to pervade every single circumstance of our life.
There are things that are really out of our hands. We can't control that there's bias in these systems, or that they're being deployed recklessly, or the environmental impact, or what is going to happen with the future of work. We can't control those things. There are things that we can influence. Within our sphere are things like maybe you step up in your company, or in your sector, to be someone who learns and understands these things to help influence policy. But then I think the things we need to focus on are in the bucket of things that are within our control. And that is our
own personal AI literacy. There is absolutely no reason anymore, if you don't know anything about AI, that that should be the case. There are so many free courses and tests. Google just put one out. Anthropic put one out for free. OpenAI has an OpenAI Academy. There are places where you can experiment. These tools are free, and you have to understand them if you want to survive in society.
That sounds cold and calculated, but you have the opportunity to learn, that is within your control, and everything that you learn will be to your benefit. If for no other reason than to understand what's happening, how it's being used in schools, how it can help you, because there is some way that it can help everyone. But we have a personal responsibility to understand it, use it responsibly, and stay informed so that we can move into the future with confidence. And I think just focusing on what we can control,
versus the things we can't, that's what's going to be the most important thing and will help us move forward feeling empowered instead of paralyzed. Because there are a lot of things to worry about in the future, and all we can control is what we do and how we think. And I think knowledge is power, and that's a good place to start.
Matt Sterenberg (36:50.049)
Rebecca, thanks for joining me.
Rebecca (36:52.056)
My pleasure.