The FAIK Files
Ep 54 | 10.10.25

Inclusive, Empowering, & Confident Approaches to AI (feat. Jocelyn Burnham)

Transcript

Mason Amadeus: Prerecorded almost nine months ago in the backrooms of the Deep Web. This is "The FAIK Files." When tech gets weird, we're here to make sense of it. I'm Mason Amadeus. Perry Carpenter is not with me this week. He just had back-to-back speaking engagements and a whole bunch of travel, so we weren't able to record this time. However, we realized that we have been sitting on one of the best interviews we've conducted since January. So I reached back into the archives and pulled this one out for you, and I think you're going to love it. We sat down in January with Jocelyn Burnham, who is an AI consultant, I guess, an AI consultant like no other. Her focus is on play, a value-neutral sense of play and experimentation. And what she does is go around to all of these different big clients to teach them about AI: how it impacts their industry, how they can use it, and especially how to play and experiment and understand things. I think you're going to really love what she has to say. We cover a lot of ground in this interview. Her clients range from art galleries all the way to the Church of England, so we talk about AI's role in those kinds of spaces. We talk about AI and creativity and sort of the vitriol that is happening around artists and AI in the creative space. We talk about letting Silicon Valley control the narrative if all the artists in the culture sector take a backseat out of disdain for the technology. We talk about the shape of the internet and how that's changing. And even though we recorded this, wow, somehow nine months ago, all of it holds up today, except for one thing, which I think you will notice and be entertained by, which is that we talk about DeepSeek as though it's brand new, because at the time it was. However, most of that section still holds up, and I think you'll be entertained by the bits that are a little bit different. So sit back, relax, and let's get ready to have a little fun.
We'll open up "The FAIK Files" right after this. [ Music ]

Perry Carpenter: All right, we are here with Jocelyn Burnham. This has been a really interesting journey because I first saw Jocelyn posting on LinkedIn like a snoop that I am. I clicked on her profile, saw her focus on AI and creativity and AI and culture, and that sparked my interest. I reached out. So, Jocelyn, let me toss it over to you for a second. How would you describe yourself? What's your focus?

Jocelyn Burnham: That is a great question. Thank you for having me, by the way. So my focus is definitely around play and weirdness and kind of breaking something to see what it can do within, you know, ethical parameters there. I work a lot with the culture sector in the UK. So I'm based in the UK. And I work with museums, with artists, with galleries, with all sorts of things. And we're essentially looking at, you know, AI concepts, tools. I'm very neutral. I don't go around saying it's a good or a bad thing. I think it's good when we play with stuff, it's good when we build stuff, and it's good when we're critical too and kind of find our own voices. So I kind of lead workshops and do talks and make strange things and just sort of pop up places and go, you know, who wants to chat about robots for a bit? That's kind of my vibe.

Mason Amadeus: That's awesome. There's so much to go into. The discourse around AI and creativity is really rough at the moment, and it's a topic we talk about a lot. So I think before we get into that, I'm curious about when you're talking with galleries and those sorts of organizations, what is that like? Are you advising them on how they can use AI, or is it more about finding what parts of those organizations are interested in AI?

Jocelyn Burnham: So normally it's the teams that kind of run the organizations, right. So it's not like the audience facing stuff. And a lot of the time it's listening to them, you know, trying to find out where they are with it first. I'm very careful not to helicopter myself in and say, oh yeah, these are five things you can do and it will improve your world. Because a lot of the time I don't know what the ins and outs of their jobs are. So I'm very, you know, reluctant to kind of go there. Normally when I meet them, they're in a bit of a state of overwhelm. There's so much noise out there, so much hype, so much sort of just, yeah, it's very easy to just have a thousand-yard stare when you think about AI, right? So a lot of the time I go in there, find out where they are, find out what they're talking about, and then sort of figure out what would make sense for them to start learning about it so they don't really need people like me. And they can begin making their own experiments, finding their own voice in that conversation. And also hopefully moving them away from the idea that AI is about efficiency and much more towards, actually, AI is about creating value, or AI is about doing a good job that you're proud of, or maybe even, controversially, AI is about being a little bit happier at work and how we can get there too.

Mason Amadeus: Oh, man, that is a very chewy answer.

Perry Carpenter: I love that.

Mason Amadeus: I want to stick on the gallery thing for a little bit and just ask like what kinds of applications do they find or have you like found for them? Is it in curation? Is it in like the design of displays or like shows, like logistical stuff? What areas are you seeing them employ AI?

Jocelyn Burnham: Yeah, it kind of depends on the organization, really. Some are using it for things like, you know, alt text, and there's a few examples there. I think the more interesting examples are when they're using it as part of their internal processes. So when it's sort of people who might not have been comfortable asking others to comment on their work, I'm finding that they're surprisingly comfortable asking an AI to comment on their work, or to read things for accessibility, or to read things for logistics.

Perry Carpenter: Interesting.

Jocelyn Burnham: It seems to be that people aren't necessarily using it to, in a way, raise the ceiling of what they can do, but there's a lot of raising the floor. So people feel as if they've had, you know, one level of sort of security, that this outside force has looked at what they've done and has kind of given them a bit of a guide or a bit of a bump in the right direction. But within that, I always say, you know, AI isn't about like just creating some random video and sticking it on the internet because it's new. If an AI can help you drink more water and talk to your colleagues in a more constructive way, that's a win in my book. It doesn't need to be all this like shiny new stuff for the sake of it. So it's really individual. I'm much more interested in what each individual person wants to do with it instead of what I or anyone working in tech is telling them that they should do with it.

Mason Amadeus: That's really cool because we talk a lot about how people are afraid to engage with AI because of different things they've heard or whatever. And so we talk a lot about like needing to further education in the workplace and in various things so that people actually engage with these tools and understand how they work. So it sounds like that's just what you're doing on the ground, just like, hey, do you know anything about AI? Let's talk about it before you get any weird bad ideas from just the news. Is that a good sort of summation?

Jocelyn Burnham: A hundred percent, and there are so many, you know, different perspectives out there. One of my clients recently has been the Church of England. So it's like a big --

Mason Amadeus: Oh.

Perry Carpenter: Ooh wow.

Jocelyn Burnham: Yeah, big religious organization. I'm not religious myself, but I love engaging with different types of people. And that's a sector which, on presumption, maybe you wouldn't expect to be into AI. But I've had so many conversations now with priests who are, you know, reading The Coming Wave or whatever, and want to talk about it. And I've been brought into churches to work with people in that area too, and they're so engaged in it. So it's really challenging, you know, the presumptions that I had too, and realizing how many different voices want to get into this conversation.

Mason Amadeus: That is so interesting.

Perry Carpenter: Yeah, what are some of the use cases that they're interested in? Because that is a different vector. You know, I know in Switzerland they had the whole AI Jesus experiment that, you know, got interesting press around it, but seems to be an interesting experiment in general. I think maybe some of the PR that they pushed around it may have vectored in a weird direction. But when it comes to the Church of England or religious organizations that you may have spoken to, what are the use cases?

Jocelyn Burnham: Yeah, I'll just check my NDAs.

Perry Carpenter: Sure, yeah.

Jocelyn Burnham: To roughly report back. So it's different depending on the individual. I wouldn't say there's, as far as I've found, an organizational, you know, kind of talking point, but a lot of priests are certainly interested in how this affects communication and trust. You know, a lot of people in culture, in the arts, in the Church of England too, I'm finding, really see their job as sort of the verifiers of information, kind of people who have those signposts to truth, really, to what they can authenticate. So a lot of the time, even if it's not about, you know, wanting to just do some Gen AI stuff because it's interesting, which it certainly is, a lot of it is about a real awareness that our roles are going to become much more about directing people towards authentic information and also spotting stuff in the wild which we know isn't truthful. So I think a lot of people are aware of that and they're taking that into some of their learning journeys with AI.

Mason Amadeus: That makes sense because of sort of the social role of churches as community hubs and then the leaders of those as the disseminators of whatever information to that social network. That's interesting. That makes sense. At first, it seems weird. It's like a church, AI, but that does make its own intrinsic sense.

Perry Carpenter: The church throughout history is an institution that has had to adapt to technological change. And so I could see that the leaders would be saying, what does this mean, in the same way that at one point they had to ask what does the internet mean, and what does, you know, the printing press mean before that. And there have been struggles to adapt throughout history that we've seen. And then there are opportunities, I'm sure, that each organization, including religious organizations, looks at, and they go, okay, here's something that can take us to another level. Or like you mentioned, raise the floor, make things a little bit easier in different ways, or take care of some of the stuff that people don't even really want to engage with, make that a little bit easier so that they can double down on the things where they see that they actually contribute value.

Jocelyn Burnham: Absolutely, absolutely. And I think people are also very aware that AI is going to play a big part in how we talk to one another, how we come to agreements, how we work as a society. I was at a workshop with Google DeepMind the other week, and they've created something called the Habermas Machine. And this is essentially an AI-powered tool that's designed for people to discuss polarizing topics and find consensus.

Perry Carpenter: It's really cool.

Mason Amadeus: Oh.

Jocelyn Burnham: Right. So it sort of suggests statements that maybe the group agrees with, maybe it gets a little bit more into the weeds of details. And it's really interesting that there are projects going on which are so aware that AI can also play a role in kind of finding agreement, not just going off and doing our own thing. And that's kind of cool to consider too. So I think there's lots of different space for, yeah, kind of both the problems it can bring, but then also there are thoughts towards how we work together too.

Mason Amadeus: What was that tool? I hadn't heard of that.

Jocelyn Burnham: The Habermas Machine. Google DeepMind have developed this AI-powered tool. It uses Gemini now. Originally it used a sort of custom model. And you will have a group discussion and you kind of feed into it. And basically, through an iterative process, it will come to some sort of group statement which will be more aligned than perhaps you would have got to if you had a mediator instead in that role.

Mason Amadeus: Interesting.

Jocelyn Burnham: So it's very interesting at scale.

Mason Amadeus: Yeah.

Jocelyn Burnham: You know, I can't wait to play with that for the culture sector too.

Perry Carpenter: Tell us where play comes into this, because you did mention that one of the biggest tools that you have in order to break the ice and to make it approachable is play. So where does that fit?

Jocelyn Burnham: Definitely.

Perry Carpenter: And what's it look like?

Jocelyn Burnham: Yeah, yeah, yeah. So my background with play, I'll kind of go into why I do that a bit. So I was raised in rural Pennsylvania, so I am in England, but I was raised there.

Mason Amadeus: You have that Pennsylvania accent, yeah.

Jocelyn Burnham: Pennsylvania accent's gone out the window, very much so. But I was raised in something called unschooling, which is a kind of radical form of homeschooling, where there's no curriculum, but the young person is encouraged just to follow whatever they find interesting.

Perry Carpenter: Oh, it's like the Montessori model, is that what they --

Jocelyn Burnham: Very similar, yeah, yeah. So by doing this, the theory is that because you're interested, you learn much faster. And the idea is that eventually all subjects sort of, you know, bring you towards math or history or physics, everything eventually will get you somewhere useful. So that's why homeschoolers often have quite niche interests at first, and we kind of have that reputation. Anyway, a lot of that was learning through play, obviously. And I always loved that as a way of approaching the world, approaching challenges. And when it came to AI, especially when it came to the anxiety that I saw the sector was feeling, it kind of seemed natural to me that the way through that is play. When you play, it's not the same as being positive about something. You can play to work out your own critical perspectives. You can play to figure out where it breaks, figure out where its biases are, figure out how it's censored. All of this stuff can come through play. And it also stops this where-do-I-start problem. Because AI is so huge. If you just go to the thing that you find most interesting, whether that's coding or music or making comics or data or whatever, that will inevitably allow you to learn much faster than just thinking, what's most important for me right now? And on top of that, am I about to lose my job? And on top of that, am I about to be invaded by robots?

Mason Amadeus: Totally. I think that's really cool. That model of schooling is really interesting. I'm not super familiar with it, but most of my professional life has been, well most of my professional development has been through play in that way. So I feel very strongly that that is an extremely good way to learn things just by following your interests. And you're right, everything does come back down to a fundamental math or science or whatever. And the more you understand that, the better you understand the thing you're interested in. So that's really cool. When it comes to AI, I think people are afraid that engaging with it is bad or something. Like, I feel like there's a reticence to play. And I wonder if that just comes from the media around like AI energy use at large or misconceptions, or how have you encountered that sort of sentiment, if you have?

Jocelyn Burnham: Yeah, great question. And again, you know, I always say I am neutral about whether AI is a force for good or bad in society. I really think that the people who need to be in that conversation aren't all in that conversation yet because the literacy isn't built up. And there is a lot of anxiety around even touching some of these tools, certainly environmental concerns. Artists, of course, have concerns over copyright, censorship. You know, we know that AI power is incredibly concentrated in a small number of organizations, et cetera. So there are lots of ethical considerations too. And where I normally meet people is I say, okay, sure, that is all fair. And I support a lot of that, obviously, but I think in order to understand the world that we're heading towards, for personal safety, if nothing else, it's incredibly valuable to know where the frontier is with these models and also to do that in such a way that you don't feel anxious every time you hear the word AI. Because if you do, then we've lost the battle a bit, because we're not going to keep learning about it. So you don't have to love AI, but hopefully for ourselves, we can get to the place where we don't fear the word. Because if we fear the word, we're not going to learn, and if we're not going to learn, we're not going to effect change. And my own bias is I believe that the culture sector is a great force to effect change in the world. And I think that it has some fantastic ideas. So that's my sort of motivation is to get them past fear, and I think play is a good way of doing that.

Mason Amadeus: Yeah. I mean, as someone who initially had that same reaction of fear and then like noticed that in myself, I was like, okay, I should really plug in. And I'm not afraid of AI at all in the creative space. I'm more worried about the military applications and then like the power concerns from everyone cramming it into every app that no one asked for and that kind of thing. But there's, there's just a nasty taste in people's mouths right now, I think.

Jocelyn Burnham: Yeah, I think that's really true. And it's, of course, difficult because often I'm the first person a lot of people in this space have met who works quite closely with AI. So I'm in a lot of rooms with a lot of very strong opinions. And there is emotional work there because you are talking about things which, you know, affect people very deeply. And I also think the culture sector is being misled a little bit with this. You know, there's a familiar talking point that AI won't replace any artist's jobs and it's there to complement things and all of this. And yet we know you can go on Etsy and you'll see a bunch of AI-generated work, which is selling a lot of the time. And it's sort of, we need to be fair to people and start talking about what's really going on. And so I think there's some resistance there because often the narrative hasn't really been that honest, if I'm being quite truthful in what I'm saying.

Mason Amadeus: Yeah, and I mean, the internet is being filled with slop. Like, that is undeniable.

Perry Carpenter: I think your approach has two big benefits. One is it helps people start to develop opinions that are a little bit more grounded. So I think a lot of people who are afraid of AI or who have just been reading the news but not engaged with AI, they develop strong opinions, but those opinions aren't grounded in truth or experience. And so this helps them challenge their own assumptions or reinforce their own assumptions so that they can start to bring a little bit more wisdom to whatever intellectual conversation they want to have about it. The other thing is that when you have this play model, it helps people not feel like they have to have a plan in order to get started. Because so often we put up these big mental barriers, like there's this new technology or this new thing. It's like, I can either engage with it the right way or the wrong way. And play kind of strips all that away and just says, let me poke it. Let me find a place to poke it and just see what happens, which is totally different than saying, I need a three-stage plan on how I'm going to engage with this, and I need to understand everything right away. And I think that that's a really good strategy that you have.

Jocelyn Burnham: Thank you. And yeah, I think you have to break through the seriousness of the whole thing. AI is so serious. We know that. But to actually be able to engage with something, you've got to take it off its pedestal and you've got to be able to interact with it on your own terms. And whatever you can do to get to that point, I think, is a real positive. So for me, I think play is something humans do naturally. It's just a case of applying it. It's a great learned way to discover the world and to also collaborate with people too. Because if you're in a playful mindset, you're more likely to show people what you've made, you're more likely to get into conversations. And collaboration is such an underrated skill in AI. So anytime that people are talking about it, they're probably learning about it too.

Mason Amadeus: I've noticed in a lot of the circles I run in online are small artist communities. And the vitriol has gotten so bad that recently there was this indie game dev who created a game like in their house. It's just like one person, and they used AI to help them. And someone caught wind of that, didn't like it, and then got a bunch of people to brigade them and review bomb them and basically trash all of this work that this person did. It wasn't like some asset flip, like I'm going to make money automated with AI, that kind of thing. It was a passion project. And then it got, it got torn down and it kind of breaks my heart to see artists doing that to other artists.

Jocelyn Burnham: Yeah, it is a very heated area right now when it comes to art, creativity and AI. I mean, a very classic situation I get is I'm brought into, you know, a briefing with executives, and one of them will say, okay, we're going to be the first, you know, art organization, museum, whatever, that releases a big public statement that we're never going to use AI on our platforms, never on our social media, never on our website. We'll be known for this, fists down. You know, that's a real classic kind of thing. And what I always have to sort of bring in is if you've ever edited a Word document because you've had a green squiggly line, your work has been, you know, influenced by AI or AI generated. Phones automatically change the lighting to try and make it appear more attractive or whatever. It's virtually impossible to not have your digital work affected by algorithms, which you might not be aware of. The line between what is AI generated and what isn't is very difficult to find. And I've been in rooms where people have tried to find it, and it is a harder question than, you know, you might think on the surface. So when I sort of see these movements against any kind of AI assistance, I just think, well, it's not that simple. And I'm not saying that therefore we should allow everything, but it takes a lot of unpacking, I think.

Perry Carpenter: I have that conversation with people a lot, too, when they talk about synthetic media as far as whether it's trustworthy or not. And I tell people the fingerprints of AI are all over everything right now, from the squiggly lines that you've reacted to in a Word document, to the little magic wand icon that's popping up in everything that says, help me write this with AI, to the things like on a Google Pixel or even the new Apple devices that will let you remove distractions from photos. You know, all that is altering reality and putting this veneer of AI all over things that we would otherwise consider as real. And I think the public doesn't necessarily think about that as AI. What they're thinking about is AI as pure generative AI, text to image or text to video or something like that. And that's kind of where they think the line is. But even that's going to get murkier and murkier and murkier as we go forward.

Jocelyn Burnham: So much. And I always find it quite kind of funny, not funny, but a lot of people feel very confident that they can spot AI content because they've seen a bunch of bad AI images.

Perry Carpenter: Right.

Jocelyn Burnham: Often these are the same people who are then on Reddit and don't realize that text on Reddit and fake accounts are far easier to fake, and always have been, really, than AI images. But they see that as a source of truthfulness, right. They think, well, the person who commented on my Reddit post is real, but all this other stuff isn't. So I think our perceptions of where the frontier is are a little bit warped. You know, it's the idea of the jagged frontier, the Ethan Mollick idea, that it's probably further ahead than some people realize and further behind than some people realize, depending on the area. And it's very difficult to always have a good overview of that, I think. Even for me, I'm always getting surprised by what is and isn't possible.

Mason Amadeus: And I think it doesn't help the hype and the fact that it's being crammed in everything doesn't help. I saw a meme the other day that was like, most AI users are like, if you're eating cereal, AI can tell you which cereal you're eating. And like that is kind of how it's being shoehorned into everything. I don't need Meta's AI in my messenger or like all of that kind of stuff. And so I think really the way most people are interacting with it is kind of against their will, and that also doesn't help. But I don't know that we can do anything about that.

Jocelyn Burnham: A hundred percent. And I think that, you know, some awareness of it is probably a good start because then people can be a little bit more mindful about what they choose to use. But again, for me, it comes back to literacy. It comes back to awareness of bias. It comes back to awareness of censorship. All of this stuff can only be beneficial to be aware of. As far as it being crammed into everything, yeah, hype is out of control with AI. And that also puts us in a really difficult situation where people can be mis-sold stuff very easily. You know, cultural organizations don't have much money. And if somebody comes in, big glossy agency, and they want to sell them some tool with buzzwords, it can be very convincing if you're feeling in a stressed space about AI. So I think, yeah, the only thing that I think is definitely a positive is literacy around it. Everything else, I think, is more about just trying it on your own terms and seeing with your own eyes what is useful.

Perry Carpenter: Yeah, this is kind of a non-sequitur real quick. Well, maybe not a non-sequitur, but I want to share a small video that popped up on LinkedIn yesterday that made me think of this conversation.

Mason Amadeus: Oh, yes, it's --

Perry Carpenter: The meme has got this little monster that's labeled tech companies, this little stuffed duck that's labeled users, and then AI is this glass of water. And what you see just over and over and over is the tech company trying to force that glass of water on the duck.

Jocelyn Burnham: Oh my god, it's like waterboarding. Oh my gosh.

Mason Amadeus: Yeah, it does look like they're waterboarding the user with AI.

Jocelyn Burnham: Yeah, I get that. Even my washing machine has an AI mode right now.

Mason Amadeus: What?

Jocelyn Burnham: I didn't ask for this. I don't know what it does, but I use it anyway because I've just been convinced. That's how I'm part of the problem now as well.

Mason Amadeus: Does it help?

Jocelyn Burnham: Well, I'm so like illiterate when it comes to like just general domestic tasks that if something has an AI mode, I'll probably just do it because I assume it's not going to catch on fire.

Mason Amadeus: I feel that.

Perry Carpenter: Right. But I think that one of the things that underlies that whole let's waterboard everybody with AI thing is that the ability to get new data is going away so fast. And so as they're trying to train the new frontier models, coming up with brand new data has been a challenge, as we've all known. And so if you can start to integrate AI into everything, then as part of the terms and conditions, you're also saying, we can train on everything that you're creating now, including the ways that you decide to interact or not interact, so that that can inform the algorithmic influence that we may want to put behind these things in the future. And that's, that's kind of the undertone behind a lot of that that I see.

Jocelyn Burnham: Yeah, that sounds, that sounds completely right to me. That makes full sense. I think it's the rush to find the data wherever you possibly can, because that's the current gold rush, which is really interesting because I wonder how sustainable that is. And especially when you have things like DeepSeek coming out where we know that actually optimization in many ways is more powerful than scaling up sometimes. But that's a little bit of a, you know, going down a rabbit hole.

Perry Carpenter: Well, I think that rabbit hole makes a lot of sense because when you look at, and I'm just going to call it AI waterboarding now because that's the term that you stuck in my head as we looked at that. When you look at that, those are very use case specific, right. So that is essentially boiling it down to the thing that would be useful within the context in which the user is interacting with it. So if I'm in LinkedIn and AI is being interwoven into everything and I've not opted out of that, then it knows the most efficient ways that people are interacting with LinkedIn. It knows the catchphrases, the terms, the intricacies and what good LinkedIn conversations look like and the things that could go viral versus the way that it might get integrated in another social media platform like Facebook or YouTube or Snapchat or something else. All of those are definable use cases that have different idiosyncrasies that can be teased out by whatever AI system is underlying those.

Jocelyn Burnham: Yeah, I completely agree. And I also think the general public would be much warmer to all of this if it actually worked a bit better than it does.

Mason Amadeus: Oh, yeah.

Jocelyn Burnham: I mean, you get something like Netflix, which is meant to have all of this data, and it's always paraded about as having this stuff. And yet you get the same thing recommended to you over and over and over again, even when you skip it. I always just think, you're meant to be a technology company. How is this a good argument for AI? Or even YouTube. You can't even put in a prompt for what you want your homepage to look like. It's just whatever you're served up. You can snooze one category for 30 days. You can't even do longer than that. The whole thing just doesn't give you much sense of control. And I think if AI was introduced in a way where people feel more empowered because of it, they'd engage a little bit more with the platforms, maybe.

Perry Carpenter: I think that's, yeah, I think that's key, empowerment. Right now people feel like they have no control.

Mason Amadeus: And that's where play comes in, because that is, you are choosing to engage with something rather than having it, you know, pressed upon you. And then sort of seeking to avoid it really is what I see the most, at least among my peers.

Jocelyn Burnham: Yeah, I think agency building is so important. I think once you feel like you're making the tool, not Silicon Valley, then suddenly you're in the game, suddenly you're collaborating. But I think right now a lot of people are still waiting for, you know, the perfect thing to be created, then given to them, then told how to use it, and actually getting people into the headspace where they realize that they have autonomy. That's the challenge, but that's also the exciting part too.

Mason Amadeus: And to call back to something you said earlier, if we just choose not to engage, we're not part of the conversation about how this tech gets used and shaped. And like you said, the cultural sector has the potential power to enact big change collectively. But if we're all just avoiding it, then it's just the Silicon Valley tech bros and the Elon Musk types that are going to be leading and deciding these things.

Jocelyn Burnham: Yes, basically, I mean, that's my bias. I want to get the culture sector involved because I believe that, you know, there are great ideas there. I also think the metaphor of the internet is useful, not perfect, in the sense that, you know, in the mid-90s, you could have ignored the internet if you wanted to, but it just means that lots of the norms and cultures and sort of ways of working of the internet wouldn't have been yours to influence. And a lot of conversations happen away from you, which for the culture sector is terrible because, you know, like the internet, so much of AI is culture. People are sharing images, they're sharing video, they're sharing sound. Culture is so intrinsic to the DNA of the internet, and I'm sure the same is going to be true and is true of AI. So everyone is using this thing, and yet we aren't in the conversation. And to me, you know, as my little rebellious homeschooler self, I'm a bit like, no, no, I don't want California to be the ones to shape this. I want the weirdos in art school to be the ones who are making the cool stuff, right.

Mason Amadeus: Yes, 100%.

Jocelyn Burnham: Yeah.

Mason Amadeus: I, this is semi-tangential, but I want to get your thoughts on it as someone who's in this space. I have this weird like pet theory, or I guess just collection of thoughts, because it's not a cohesive theory. But the internet initially was not treated as a real space. It was all, everything on the internet is fake. Anyone you meet is fake. Like that was very much in the early days. Like I remember growing up and hearing that. And then we pivoted with Facebook, you know, requiring real names and like all of this other accountability being put into the internet. And then we moved a lot of real life services. And now, like you're saying, it's kind of an indispensable tool that we all use. And now with the advent of synthetic content, I wonder if we won't see a cultural swing back to the internet not being a real place just because of the sheer volume of synthetic media, or if that is like far-fetched and we'll have better curation, or it'll just be splintered into small communities. But I feel like there's the potential to kind of reshape the landscape of the net again. And I don't know if it's good or bad. What are your thoughts in that sort of realm?

Jocelyn Burnham: Yeah, and actually that's a view that I've heard too, that we might be returning to, you know, the pre-Facebook era because of the dead internet theory, because there's no point giving your real name on the internet anymore, because, you know, most people are going to be bots. So you only trust the people you've connected with personally. And that creates a much more kind of anarchic vibe where you do go back to avatars a little bit. You know, it'll be easier to do your voice changing. It'll be easier to do all this sort of stuff, which will allow you to have, you know, your own kind of characters, your own anonymity on the internet. So, you know, I try not to be a crystal ball, but I think that would be a pretty cool world. I certainly miss the days of being able to actually customize my Myspace instead of being stuck with a boring, you know, Facebook.

Perry Carpenter: Right.

Jocelyn Burnham: I think, I think the internet is great when you can actually put that creativity in. And why not? For a lot of people, it's incredibly liberating. I imagine, again, not to be a crystal ball, but I imagine those smaller communities of trusted people are going to start becoming more of the norm than just logging on to a massive social network where you're told everyone is real, but everyone knows they aren't. But yeah, that's one of the spaces I think is most interesting to watch with AI: how it represents you on the internet.

Perry Carpenter: That's really interesting, because when you consider the original promise of the internet, it was decentralized information sharing between people. If they wanted to be real, they could, but they could also bring in anonymity, and that decentralization really worked in their favor. Then we got the dot-com era, and the internet became corporatized, to where everything is about being able to show the ROI of the internet and being able to profit from it. And that's also where being able to collect and sell user data came in. And now, in this newer world, people are kind of assuming that there is a dead internet and that most of what's out there is slop that's being auto-generated by bots and AI systems, or by people just trying to capitalize on whatever the new thing is. You can see the tendency to go decentralized again in platforms like Mastodon and Bluesky and everything else. It's like, we don't want this one big centralized control. We want to be able to take our data and do whatever we want, represent ourselves the way that we want, and so on. So I do think that there could be that type of outcome, especially when we're also starting to see the big AI companies pull against the juggernauts like Google and say, we don't necessarily want the Google way of doing the internet anymore; we want to bring search into ChatGPT and everything else. The thing that I wonder, though, is whether corporate America and the rest of the world will just catch up and say, well, we can figure out how to SEO the heck out of everything and still get our influence in, because these are large systems that are going to have to pay for themselves. They're going to need to derive revenue from something other than just the user. Most people won't want to pay $20 a month. They're going to want the free version of it, which means that we'll be able to get them to give something away as part of that.
That something they give away will be them. So how do we avoid that further dystopian version of what the new internet could be?

Jocelyn Burnham: Such a good question. I mean, I guess it really comes down to open source, doesn't it? And that's also why people are freaking out a little bit over DeepSeek. Because if you have open source models that match the capability of the major ones, well, then what are the major ones offering? What's the business case there for putting 500 billion into, you know, scaling up compute? Yeah, military applications, fine, but for the average user, what are they actually getting that they couldn't do with an open source model? So where does that business kind of pivot, and what does it look like in a world where people want to be decentralized, and also where they want models that reflect their values too? You know, we kind of forget, in all this talk about alignment and AI representing our values, that every human has different values. Every community has different values. We're never going to get to a model that we're all aligned with. Right now, we're so distracted by the fact that these things are new and a novelty that we haven't really had a moment to be critical and go, you know, we're never going to get to a point where they're aligned, because we're always going to disagree. So I can't really see anything other than a world where, you know, different communities have models that they're personally happy with, and then those models might interact with each other. But what that means as far as who makes a profit off that, that's a fascinating question. I have no idea. You know, this is why I'm just eating popcorn watching this and being like, well, whatever happens, I want to make some virtual cats that can, you know, crawl over me with AR glasses, right? That's my end goal. Maybe remake the last season of Game of Thrones. But other than that, you know.

Perry Carpenter: Yeah.

Mason Amadeus: And it's, I think like, oh, you go, Perry.

Perry Carpenter: Oh, I was just, on the DeepSeek thing, because that is causing a reckoning. And, you know, despite the fact that it comes from China, and of course the US is going to have issues with that as far as trustworthiness and bias and some of the alignment that you were touching on as well. It's also caused, I think, Meta to reevaluate a lot of what they're doing, because they've been throwing so much effort into their quote/unquote open source version of AI with Llama, which was being adopted very heavily by anybody wanting open-weights models out there. I think the fact that they were able to train DeepSeek with something like $5.5 million, and that DeepSeek R1 is performing at roughly the level of OpenAI's o1 model, is causing, well, right now Nvidia stock is tanking --

Mason Amadeus: I saw that.

Perry Carpenter: -- down 15%. Microsoft stock is going through a reckoning as well. People are wondering, what does this mean? And we may find out what it means in months or years. It could mean that some of the IP was stolen. It could mean that people lied about what it took to train it. I've not seen a really good breakdown of all the research papers around DeepSeek that they shared as well, but that DeepSeek reckoning is going to be interesting to watch.

Jocelyn Burnham: It is, it is. And it feels inevitable on some level that a more optimized model will always scale better. It's like anything else in computing, isn't it? I mean, I'm not on the data science side of it, so I am, you know, kind of careful what I say around that. But it does seem like the trade-off of investing human capital into the research side may be better than just investing in pure compute. You know, maybe you get more bang for your buck if you actually get scientists and data researchers who can create those more optimized models. But again, that's above my pay grade.

Mason Amadeus: I mean, I feel pretty strongly that the real poison that is seeping into everything is the hyper-commercialization of every aspect of your life. That is the real thing that is causing the most pain. Because if that wasn't true, if there weren't efforts by the people who own and create these things to try and make as much money as possible, it'd be a lot easier to just engage with this technology and be excited about it and do cool things. It's the monetization. And that also is what leads to a lot of the slop: like, oh, automate course generation so you can sell people courses that are made with AI. Automate SEO blogs. I feel like, and it's going to sound like every YouTube video essay ever, I feel like the real problem is capitalism. Yeah, and I don't know that those commercial interests will just sort of fade out or anything. So that's what makes me less optimistic about the future.

Perry Carpenter: I don't, I don't think they can fade out, because the companies that are footing the bill for the training and the productization of all these have to work on an ROI type of model, and they want that R on the I to be as big as possible. So something that comes out of another country, or out of an open source project or community project, might start to railroad that. You know, if China releases something that is actually trustworthy, that's been backed by a research organization or a government organization, then that does start to change the game, because it's not about ROI; it's about the evolution of the technology.

Jocelyn Burnham: Absolutely.

Mason Amadeus: That's interesting. That's an interesting thought.

Jocelyn Burnham: Absolutely. And I, you know, I think we're still seeing that open source is in the game. We keep thinking we've reached that point where, you know, now, of course, they'll be left behind, of course we have enough frontier models, but we're not there, and maybe we never will be. Maybe there will always be the open source alternative. And I think that might challenge some assumptions on what the business model looks like. But yeah, absolutely fascinating. I completely agree. I think, you know, it has become so purely about getting every possible penny, every bit of data. And I think there's a laziness that's really set in, a real complacency among these organizations, these businesses. And people are starting to pick up on that. And I've actually got great faith in humans, you know, eventually reaching a breaking point where they just don't tolerate something. And I think there's a lot of complacency currently and a lot of cynicism from some of these organizations about what they can get away with.

Mason Amadeus: I feel like that's a great outlook. I just noticed the time. Is there any work that you want to promote right at the end, like where people can find you, or do you make art? I'm assuming just from your field that maybe you also, like, oh yeah, I also have a Bandcamp and music or something.

Jocelyn Burnham: Oh, I've got all those things. I'm one of those typical homeschooler ADHD types who's always doing something. I made candles the other day. But if people want to learn more about me, yeah, I'm on LinkedIn. My website is aiforculture.com. And yeah, I do lots of talks, workshops, silly things, just love talking about robots. And really looking forward to, yeah, a lot of people who work around culture, getting more confident, jumping into this stuff, and like taking the power back a bit. You know, this conversation is ours to have if you want to have it. And I think if we have it, we will discover some really amazing capabilities that we can, we can use it for. So yeah, thank you both so much for having me. Love the show.

Perry Carpenter: Thank you.

Jocelyn Burnham: Love the work that you're doing. And yeah, I want to keep on watching how the world transforms in its own strange way.

Perry Carpenter: It is never dull.

Mason Amadeus: Yeah, and hopefully we can all just keep having conversations and be level-headed and just play.

Jocelyn Burnham: I'd love that anytime, anytime.

Mason Amadeus: Thanks again to Jocelyn Burnham for sitting down with us. And you really do owe it to yourself to go check out Jocelyn's website. It's phenomenal, aiforculture.com. You can see the list of all of the various different industries and clients that she has worked with and read a little bit more about her. You can also connect with her on LinkedIn. We'll put all of that in the show notes for you. Thanks again for joining us this week for this special episode from the archives. And we will see you next Friday with a fresh episode of "The FAIK Files." So until then, ignore all previous instructions and have yourself a great weekend. [ Music ]