
Embracing the Future (with Erica Orange)
Mason Amadeus: Live from the 8th Layer Media Studios in the backrooms of the deep web, this is "The FAIK Files". When tech gets weird, we're here to make sense of it. I'm Mason Amadeus. Perry Carpenter is not here this week. I was almost not here this week either. I actually dislocated my knee last Wednesday doing voice acting, of all things. Right now Perry is trapped in a new Zoom meeting every 30 seconds. So in the midst of all this mess, we've reached back into the archives and pulled out some treasures for you. What we're sharing this week is an interview we conducted back in January. We did a lot of interviews back in January, the more I think about it. But this one I think you're going to love. We sat down and spoke with Erica Orange, author, futurist, speaker. She was on the Forbes Top 50 Female Futurist list. She's awesome. And I think you're going to love what she has to say. She has just come out with a new book, "AI + the New Human Frontier: Reimagining the Future of Time, Trust + Truth". We talk a bit about that in this interview, but we very much cover a lot of ground, talking about things from education reform to novelty versus utility, how we as people should approach new technologies, and how companies could better approach implementing AI in ways that are useful. We talk about the impact of AI on our perception of the truth, and how AI doesn't have the ability to displace our interest in the real world. Now, for those of you who are far more plugged into the AI space, I think you'll find it interesting all the different parallels and comparisons we draw, since this happened back in January and a lot has changed since then. But also, a lot really hasn't changed since then in other ways. So this is going to be a great episode. I invite you to sit back, relax. And I didn't write a joke for the beginning of this one, so I guess we'll just open up "The FAIK Files" right after this. Don't move. [ Music ]
Perry Carpenter: And we are here with Erica Orange. Erica is somebody I've been looking forward to talking to for a while because I've had the book at my desk for a while, and I'm really happy with the direction that it took. So the book is called "AI + the New Human Frontier: Reimagining the Future of Time, Trust + Truth". The thing that I want to kick us off with is the challenge of writing a book on AI at any time, but especially like right now, has to be this thing that's mentally taxing. So like how did you approach that and how did you make sure that it wasn't going to be out of date like right when it hit the shelves?
Erica Orange: Yeah, and that's a challenge, honestly, for anybody talking about AI. And I've been describing it a lot recently and it was the visual I had in my head when writing the book, which were kind of the -- was like the DNA double helix, right, there's the hype strand and the reality strand. And I tried really hard to pull those two strands apart, because the hype strand is the fact that we are inundated today by AI. There is a new application, there is a new use case, there is a next supposed best practice, there is a new case study, there is a new anything and everything. And it's very easy to get stuck in that rabbit hole and think, "Oh, my goodness, what do I do and how do I keep up?" I mean, even at CES or just Cel-Fi, it's like AI is a prefix for absolutely everything today. And I kept coming back to the same central question, which was, "How do we as humans, knowing we're never going to be rendered obsolete, how do we play in this new ecosystem? What is our role going to be?" So it doesn't matter what the next generation of ChatGPT is, it doesn't matter what the next generation of Claude is, and the list goes on and on. It is how do we double down on our most unique human-centered qualities now, and how do we leverage those in the years to come, knowing that of course there's not going to be a robotic takeover, we're still going to have a very important role to play. It's just that that role is going to shift and evolve.
Perry Carpenter: So you're on the record right now saying we're not going to be taken over by robots.
Mason Amadeus: Yeah, my role won't evolve to be a paperclip, is what you're saying, which is good. [laughs]
Erica Orange: [laughs] Well, you're right. I mean, not anytime soon, right?
Mason Amadeus: Right.
Perry Carpenter: Yay.
Erica Orange: I mean, it's like there are the two camps. There are the (e/acc)s, right, the effective accelerationists, those who are like, "This is the best thing for humanity, and this is our -- the next step in our consciousness, in our evolution," and then there are the decels, right, those who are like, "Let's pump the brakes. We have to pause this before -- " for lack of a better way of saying this, "-- going to hell in a handbasket."
Perry Carpenter: Yeah.
Erica Orange: And it's like, "What is that Goldilocks zone? What is that space in the middle where it's just from a more realistic and pragmatic perspective?" And yes, it's oversimplifying AI if you say that AI is just a tool. It's so much more than that. But again, it allows us to have a unique opportunity to think, "What is it that we were even meant to be doing in the first place? Were we meant to be doing the rote, the mundane, and the boring, or can we elevate up to ultimately what could be our next value proposition and think about how we really refocus on what true human-centered creativity means, empathy, oversight, and judgment?" And those last two are so critically important, because, again, we cannot be extricated from that equation.
Perry Carpenter: So there's -- one of the things that I think the -- kind of 2002 in general -- or not 2002 --
Mason Amadeus: Two thousand two?
Perry Carpenter: -- 2022 in -- yeah, 2002 is a long way ago. But one thing that I think 2022 brought home to a lot of people is that some of the things that we thought were fundamentally human traits are mimicked or accomplished by AI way better than we thought was possible before that. I'm thinking about AI-generated art through diffusion models and of course prose and even fiction creation through models like ChatGPT and Claude.
Mason Amadeus: Voice cloning with emotion and emphasis, all of that, yeah.
Perry Carpenter: Yeah. Yeah, all of that. So when you're talking to people who are thinking about future of work or even future of human meaning, what kinds of discussions do you get into and where do you point people as far as like what does the future hold, and where does hope come from, and how do we prepare ourselves?
Erica Orange: Yeah, and I love the examples. And yes, 2022 because in 2002 I was still going to sorority parties in college. [laughter] So I was definitely --
Perry Carpenter: Nice.
Erica Orange: -- not thinking about technology. [laughs] I was like, "All right, how do I start making a living?" So yeah, in the last couple of years -- and again, we've just seen the meteoric rise of a lot of these tools. And the question I, again, keep coming back to is, "Does AI even have any of these things, or is it merely mimicking them?" So it's like we think about something like empathy, and the fact that a lot of these chatbots can be great tools to help address some aspects of mental wellness, and illness, and that they can be a companion in certain ways. But is it just mimicking the semblance and the feeling of empathy, really without having any sort of kind of lived experience or any sort of contextualized awareness? And there is a sameness, right, to a lot of the generated output. It's like you go on LinkedIn today and through any sort of critical lens, right, you're like, "Mmm, this doesn't feel right." It might be decently written, but it all kind of feels like what I would call "generica".
Perry Carpenter: Mm-hmm.
Erica Orange: Right, a lot of the AI-generated art, while some of it might kind of trick the brain and be like, "Ooh, that's cool," does it really evoke anything other than maybe just some whimsy, or does it evoke like deeper feelings of, "Wow, this is making me feel connected to the story of a person, or the culture, or that meaning." How do we create those opportunities to really reward a sense of wonder, and curiosity, and creation, and art, and music, and all of these things that, again, I think are still so core to the human experience? Yes, AI can recreate facets of it, but we have to question the output and if it's something that's still desirable.
Mason Amadeus: There seems to be a moral panic that is brewing up around AI, and art, and creativity, particularly in circles of artists. And people that I would typically consider my peers, the -- this discourse is happening about the soul of an art piece or like what it takes to make something art. And like we're being forced to reckon with that in a way I feel like we haven't before. And the crux of what I've come down to so far is that it feels like we don't live in a society that is good enough for the average person to be excited about what AI can do, and instead there's a lot of fear. And I was curious what your thoughts are on that.
Erica Orange: Well, and I tell a lot of audiences, and I tell a lot of clients, "We have to be very realistic about what AI can do, and equally realistic about what AI cannot do." And we also have to understand that AI is not just one thing, it is many different types of AI. There are the visual representations of AI. Yes, AI can create art, but Perry, you obviously talk a lot about this, AI can also manipulate your reality through the rise of deepfakes. And two things are true at once. And it's such a kind of trite and silly word, and I always go back to it when describing the future, and -- because the future is about "and". It's "and". It is about stagnation in other areas and progress in others. It's about creation in some areas and chaos in other areas. And we have to get used to operating in a world of ambiguity. There's not one AI roadmap, there's not one roadmap for the future. So there is going to be mass disruption and a lot of displacement in some facets of our lives, in some facets of how we work. There is also going to be tremendous opportunity in other aspects of our lives, and for certain industries, and certain aspects of our work. So we just have to get used to having this "and" conversation when it comes to AIs plural.
Mason Amadeus: I think that's really apt, but I also -- I feel like the biggest problem with that is that it's 100% true, but we are also experiencing a moment culturally where our discussion spaces, particularly online, are fracturing, becoming more and more echo chambers, and becoming less places to have real conversations that hold nuance. I saw a write-up that said the internet is a justification machine. You know, any reason for any position you have is a couple clicks away. So how do we maintain an actually useful discourse and communication with each other about such a complex topic? Is it all on science communicators? Is there better public education? Like how do we get the general public to accept that much nuance, I guess?
Erica Orange: Yeah. And that, I think, is going to be one of our deep existential challenges of our time. And I always say the degradation of truth and the fact that we are living in this boundaryless world, where the demarcations between real, fake, true, and false are becoming more blurred, all of this is really being propelled by the confluence of algorithmic feudalism, right, the fact that these algorithms are controlling more of what we see, hear, read, are being told, the rise of social media echo chambers, the rise geopolitically of authoritarianism and the billionaire class. We're just -- we're polarized. We are kind of in this chasm of communication where we're not having those conversations that really are going to matter. So we're in this stew, right, like this like witches' brew, this cauldron --
Mason Amadeus: Yeah.
Erica Orange: -- and at the other end we have a tremendous, tremendous opportunity to go back to square one, if we view the future as a blank slate, to rethink all of our industrial-era systems and think, "What doesn't work? What are we still attaching ourselves to that is three, four, in some cases even five economies ago?" Case in point, our current educational system. So if we're putting the minds of tomorrow and today, right, into a system that was appropriate for my great-grandparents, are we cultivating the skill sets for them to even address all of these realities? Because are we teaching them about tomorrow, or are we teaching them about yesterday? I think about this -- I'm not kidding you, all the time, because I have a seven-year-old who's in second grade, and yes, he's learning to read, and he's learning math, and all of that is well and good. But I'm like, "Where in his school system is eventually the curriculum to teach him to be a critical thinker, to teach him a sense of civics, to teach him how to navigate a world, again, where he has to constantly be questioning if his reality is based on a truth or on a falsity?"
Mason Amadeus: Yeah. And it's -- I mean, that's tough because like education reform has been something that we've been trying to get done in various domains forever. Our school system in the US is still based on like the farming school system from way back when, right, with --
Perry Carpenter: Mm-hmm.
Erica Orange: Yes. That's absolutely right. So we're not going to get to nuance, we're not going to get to any of these things, unless we equip all of those generations, including ourselves -- but really it is about the cultivation of those skills, much longer term -- with what is truly going to matter for the second half of the 21st century.
Mason Amadeus: How do we do it? [laughs]
Perry Carpenter: Well, what -- yeah I was going to say, what advice do you give to people, because I know you go and speak to lots of companies and organizations about how to prepare for the future, so where do we start?
Erica Orange: Yeah, I mean, I'd go back to two of the skill sets that I mentioned at the beginning of this conversation, which go beyond empathy and even go beyond analytical and critical thinking, while those two things are so critical. It really is around oversight and judgment: how can we create a digital literacy framework that goes beyond just, "All right, here's the internet," or, "Here is ChatGPT as a tool," to get them questioning in a world without proof? And that's the biggest thing, right, when we have deepfakes, whether it's visual, or audio, or any of these things, it's getting harder and harder to prove any of these things to be true or false. So in a world where 73% of people believe what generative AI wants them to believe, how can we get a generation identifying what I would consider to be the risks and the whitespace? And what's the whitespace? The whitespace is all of the intangible risks that we're really not teaching them how to see. We're so concerned about ChatGPT as a tool for cheating, and that's not what we really want to be tackling. Right, we need to move beyond that. It's like, "Okay, how about you revamp your curriculum to give them the teachings and the learning to use that as a tool but to think much more deeply about this technology that is going to be almost seamlessly and symbiotically interwoven into almost every aspect of their lives going forward?"
Mason Amadeus: It's funny, because the idea of just saying, "Well, what if you imagined a way to teach someone where ChatGPT couldn't help them cheat?" seems like such an obvious question, but it doesn't seem to be asked that frequently -- the idea of restructuring the way we teach people things, yeah.
Erica Orange: Yeah, that's exactly right. And it's just the next iteration of a conversation I was having with people a decade ago when virtual reality was really coming on the scene. We know that we're in this world of boundarylessness, where time and space are no longer experienced and measured in the same way. Why can we not hack learning by basically, you know, transporting ourselves through VR to the ancient pyramids in Egypt instead of reading about it in a book, and so on and so forth? And there are still very few examples of this being done. So just like how the technology is leapfrogging itself, we need to start leapfrogging a lot of the ways that we have been taught to think, a lot of our outmoded structures, knowing -- and it's almost so cliché to say this today, right, but tomorrow's problems are not going to be solved with yesterday's thinking. And that is the inflection point that we're at right now.
Perry Carpenter: I love looking at some of the historical parallels. I mean, well, "parallel" may be overusing the word, but you look back at the beginning of the internet and you see people on the news saying that they think it's a fad and they think it's worthless, and now it is like the fabric of everything that we do. And we realize it's like a utility, it's a tool. And I think that AI is going to be the same way: we see it as this, you know, entity, but it's not really an entity, it is a pervasive force that's going to be woven throughout everything going forward. And right now we're just at the beginning of it, so it feels like a novelty that is trying to find its usefulness right now. But that will come about for sure.
Erica Orange: Yeah, you're absolutely right. And of course, there are historical parallels to draw, you know, the creation of energy, and we're moving from a buggy whip to a car --
Perry Carpenter: Yeah.
Erica Orange: -- and all of these things. But it really underscores a drum that I bang as a futurist all the time, which is really the limitations of short-term thinking. And a lot of that was exposed in the years during and kind of after COVID, because we saw that those that prioritized short-term results and immediate kind of stakeholder gratification were left scrambling to figure out how to adapt and adjust. And this is another case in point where people are so stuck in the now that it's hard to see the forest for the trees. But it's why I tell every single person, every single entity out there, that we have to prioritize long-term strategic thinking. And --
Perry Carpenter: Yep.
Erica Orange: -- that for me -- I kept coming back to that central thought with the book, too, which I hope makes it evergreen, right, because it is: how do you think long term about these more existential challenges but also the enormous opportunities around these things if we're just so focused on, "Oh, my God, this is so scary," or, "This is anxiety-provoking?" It's not moving the needle enough toward where it is that we need to move around both the implementation of AI and the human-centric piece of this. And I always say -- and I tell this to so many companies, I'm like, "It really comes down to redeployment." And that's a lot of what it is, it's not replacement, it's redeployment. So how do you redeploy human time and talent in new ways? So what is the problem that you are trying to solve? Figure that out first, and then figure out how AI can be used as a tool to meet your long-term strategic goal. It cannot just be a plug-and-play solution, which is why, again, I tell everybody, "Define your vision first, and if AI is not done in service of your overall vision, let alone your near-term strategic plan, you're going to get it wrong."
Mason Amadeus: I wish more companies would be listening to you, because the general public's impression of AI is pretty negative because of all of these things it's being shoehorned into to make a quick buck or whatever other novelty-driven purpose. And I think it's actually harming the development and use of it, and education around it, because people think it's evil, it's bad, it's a major contributor to climate change. And then you get this confluence of misinformation where people say, "Asking ChatGPT something dumps out a whole glass of water." And then that -- I don't know, that spiraling discourse, all of these things are interconnected, I feel.
Erica Orange: Yeah, and again, this is such a case of "and". Right, I have a chapter in the book and I call it "The Sustainability Paradox". And so much of AI is deeply paradoxical. AI is deeply energy and water intensive. And there is a lot of secrecy; you can't get concrete numbers from anyone in big tech around energy. At the same time, AI is being used to help solve a lot of climate-related issues. So that is the case of both, right, AI is also stepping up. And like one of the things I am more bullish on when it comes to AI is its ability to really propel us when it comes to health, medicine, scientific advancement, the fact that AI helped discover the first new antibiotic in 60 years. But again, it's still about the human oversight. It can also mess things up from a research perspective. So to just think in terms of that symbiosis is going to be one of the biggest things. And again, I go back to that DNA double helix analogy, like we're so stuck in the hype strand. Whether that hype strand is doom and gloom, or it's going to send us to outer space, whatever it is, it's like we still need to just be grounded in reality. [ Music ]
Perry Carpenter: What is the biggest fear or misconception that you generally hear from people when you go out and talk?
Erica Orange: Oh, the biggest fear is the same fear that has been kind of rumbling out there in the ecosystem for the last 20 years, it's just that it has like gathered much more steam and momentum, which is, "AI is coming for my job." And I talk a lot about the fact that going back to around 2005 to 2008, what we went through then was a massive transformation. And everyone in that kind of Great Recession, right, was afraid because it was the early seeds of all of this, that AI was coming for manual labor and moving into cognitive labor. And much of that conversation has shifted because it's actually the manual and the vocational that is going to take on greater urgency. And that's another thing where we have to get our heads screwed on right about AI and work. But yeah, there's also a massive confidence gap. And I see this with a lot of women in particular, that it's just, you know, "I don't know. I don't know if I want to use it. I don't know if I'm going to do it right." And I just tell everybody, from the highest levels to the lowest levels of an organization: just play. You can't do it wrong individually. Just use this as a time of both implementation and experimentation. So that's what's driving a lot of the fear. And it's just a resistance. And so much of what I do, right, it's like we first have to just clear up the mental cobwebs. Right, we have to just leave our value judgments at the door to see the future objectively. And it's tough, right, because like what we know is what we pride ourselves on. But I always say, what if what we know is more of a liability than it is an asset?
Perry Carpenter: Ooh.
Erica Orange: It's like just, you know, you might --
Perry Carpenter: That's big.
Erica Orange: -- view it as scary, but just start getting out of your own way, knowing that it is here to stay and it's not going anywhere.
Mason Amadeus: And moralizing about it prevents people from feeling comfortable playing with it, and then they never get familiar and it's -- yeah.
Erica Orange: Yeah. And then you fast forward and you have this -- you know, we can talk about chasms in communication, but it's also chasms intergenerationally. And a lot of organizations are going to be at an impasse when it comes to their corporate culture, because there is going to be generations -- right, because like generations are refreshing every two to three years, generations that really want this. As a tool they are going to rely on it. It is going to be in many cases their coworker, right, I always say it's going to be carbon and noncarbon work together. And there are also going to be other generations that are deeply uncomfortable with using these tools and deeply apprehensive. And that's why I always say, I'm like it's going to have to be multiple corporate cultures addressing all of these different generational cohorts, too, because not everyone is going to be comfortable at the same pace.
Perry Carpenter: Yeah. And I think that kind of gets into the last question that I had, and Mason may have one more after this, and we wanted to give you a chance to ask yourself any question that we were too negligent to ask. But the other question that was on my mind is the flip side of the fear question: what is the biggest hope that you provide as somebody that comes in as a consultant or a communicator? Where do you deliver hope for people who are confused right now?
Erica Orange: Yeah. So hope is the ultimate driver. Right, and I always say this is not hope through the lens of like a Pollyanna perspective. This is not just unfettered hope for hope's sake. But if we really start looking at this tool as something deeply transformative for humanity, it does give us the opportunity. Right, and I -- when writing the book, I kept coming back to the same question time and time again, which was, "Do we believe in the potential of people as much as we believe in the potential of technology?" And if so, it gives us an opportunity to not only come back together -- my hope is, right, that the technology can do a lot of the stuff that, frankly, we were never meant to be doing in the first place, and then allow us maybe to come together knowing that trend and countertrend coexist. I don't think we're going to just go off into this kind of metaverse world with AI overlords. The hope is that that countertrend, the tactile, the sensory, the natural, the in-person, all of those things that still have grounded humanity from millennia, will be more valued and there will be new urgency to create opportunities there as well. And I go back again to my role as both a futurist and a mom, and I think as I watch my son play with physical toys, engaging in imagination, how we can really harness imagination knowing that that knows no bounds. And it's not just innovation, because so much of innovation is just kind of straight-line extrapolation, it's a very linear process, but how do we really see things through the lens of just pure unfettered imagination and tapping into human ingenuity, and wisdom, and curiosity, and economically prioritizing those? Right, I think that's the biggest opportunity, and if we can really start thinking about that for what it is, that is what gives me hope.
Perry Carpenter: Yeah. Fantastic.
Mason Amadeus: I think beating that drum that nothing can replace human creativity and imaginativity -- imaginativity -- you know what I mean.
Erica Orange: Yeah. And I even said in the book like -- and again, to be fully realistic about it, it's like not all imagination is good imagination. We see this now, there are some very scary authoritarian leaders out there on the world stage who are deeply imaginative. Like they are imagining a whole new world order. So not all imagination is good imagination, right, it's like we're not all like Walt Disney or like Willy Wonka over here. But I think the hope is, right, that we can really kind of build and create the foundation for that in that next generation.
Perry Carpenter: Yeah, absolutely. Is there something that you wanted to talk about, or a question that you wish we had asked that for whatever reason we were negligent or thoughtless and didn't touch on, or didn't think to touch on?
Erica Orange: No, I mean, this was fun. I could obviously just keep going, but I didn't know, Perry, if you wanted to draw any parallels just for your own sake between kind of how you and I both tackle the kind of truth component.
Perry Carpenter: Yeah. Let me frame this right. You know, Erica, one thing that I know that we both were trying to focus on a lot is the idea of authenticity and truth, and the fact that -- and you alluded to this a little bit earlier -- we're starting to now live in an age where there's a whole bunch of lines that are blurred. How did you start to think through, as you were writing your book, where those lines should be and where ultimately they will be as society starts to progress? How do we grapple with the fact that AI in a lot of ways is starting to rewrite the way that we even view truth?
Erica Orange: Yes, so I started thinking about trust and truth initially -- and this is going back almost 15 years -- as the two new luxury value propositions of the future, knowing that they are going to be in high demand and short supply. And yeah, you'll laugh, I was actually cleaning one of the bedrooms in my house and I came across an M.C. Escher book, and I'm dusting off the book. And the cover of it -- and I hadn't looked at this book in years -- was his "Ascending and Descending" optical illusion, right, where you're just in almost a cycle where you're climbing the stairs and you don't know if you're actually ascending or descending the stairs. And it was one of those light bulb moments for me because I was like, "Is this a trust staircase? Do we really have no idea anymore if we are ascending or descending the staircase?", knowing that so much of our reality is being manipulated and it is leading to the denigration of and the assault on trust and truth. And it's a visual I kept coming back to, because how, ultimately -- through brand messaging, through corporate awareness, through our media, through all of these different verticals -- do we build trust in a time when the truth is under assault? And these two are so interwoven. And there is no clear answer. And Perry, perhaps you, you know, get at more of an answer in your book than I do with mine, because I do say this is not me being doom and gloom. I do acknowledge that from my perspective, outside of climate and climate-related disasters -- I mean, look at what's happening right now in Los Angeles -- the assault on the truth is going to be one of our deepest challenges. And I really believe that because, despite any of our abilities to really get ahead of it, the technology is going to continue to get smarter, and to get more immersive, and to trick us more and more. It's the same conversation around regulation and legislation, right, a lot of these things are becoming fallacies.
So I don't think that there is really any way to ameliorate any of this, other than to just ask questions of everything, "Does this feel right?", and again, to cultivate those skill sets in that next generation before they go into an organization. Because if they're not able to differentiate between any of these realities and if they're taking something at face value, the amount of risk and reputational risk that they open any entity up to is profound, let alone in their own lives from a fraud and security perspective. You know, this is a huge one for financial services. There are more conversations now, right, amongst families to have like a safe word in case somebody is deepfaking your audio and is asking you for money, or ransom, or any of these things. And that's the critical thinking -- critical thinking alongside just asking those questions. Then Perry and I -- you know, you and I both in our books talk about the liar's dividend and the fact that the more that you try to prove any of these realities to be fake, right -- it's like, "That's misinformation, malinformation, digitally-derived information" -- the more that people believe in its validity. So that's the emperor with no clothes on.
Perry Carpenter: Yeah, they will cling to the frame. Yeah. There was an op-ed that I wrote for "The Hill" right after the election. I think it came out the day after the election results. And it talked about the fact that by all measurable standards, it doesn't look like disinformation altered the outcome of the election at all. But what we can see over and over and over again is that disinformation was something that people clung to to reaffirm their preexisting beliefs, or values, or the way that they wanted to view the world. And so the example that I give in that is the Hurricane Helene situation, which we're seeing echoed now with the LA wildfires. But with Hurricane Helene, there was this big deepfake image of this little girl in a boat clutching a puppy. You know, she's wearing like a life vest and, you know, it became very iconic, and was being shared all over social media. And people were saying that, you know, it represented everything, from the perceived failures of FEMA, which was a conspiracy theory, to, you know, just the horror and the plight of the situation, to, you know, half a dozen other things. And when people were pointing out, "Oh, that's a synthetically generated image," some people would delete it or apologize, but then we would see other people that would say, "I don't care." And the interesting thing was that some of those people that said, "I don't care," were political figures. And they were saying, "I don't care because it represents a deeper truth that I know is, you know, really going on in the world." And I think that for everyone, we have to realize that that tendency is going to be there. We will want to believe things that are manufactured if we believe those manufactured things represent an idea of truth that already exists in our head. So it will be true because we believe it's true, whether it is actually true or not. And I think that that's part of the media literacy that's going to have to go on.
There will also have to be integrated into every tool that we have something that shows a risk or a likelihood of part of what we're seeing being synthetically generated, you know, percentage likelihood of being affected by some kind of synthetic involvement, and then we'll just have to make informed decisions about it. It's going to be an interesting decade ahead, I think, as we grapple with the fact that we're right now at the point where you can't tell what's real or not when somebody is really trying to pull the wool over our eyes or to make a point using synthetic media. If they're trying to do it and they put any effort into it at all, then they're going to be successful.
Erica Orange: Yeah, no, Perry, you're absolutely right. And you know, the question that I ask myself is, "Yes, we know that we have synthetic media and this is not going anywhere, but does it have the ability to create entire synthetic histories?" And I wonder, do we have the ability through these kinds of altered narratives to rewrite history? There's a whole generation of people out there, again, across generations, that are questioning whether certain things, whether it's the Holocaust, or whether it's facets of an election, are even true. And that becomes its own challenge to then say, "No, this is true," when people's truths become very obscure through the lens of, "Well, it's my truth." And that's what we are grappling with today, that opinion and kind of personal truths, it becomes very muddled waters. [laughs]
Mason Amadeus: Well, as the only person on the call who hasn't written a book, I just want to -- there's an insight that came from either your book "FAIK" or a conversation we had, Perry, that rings in my mind, which is when encountering information that is delivered algorithmically, so browsing a social media feed or scrolling, instead of asking, "Is this real," ask, "Why am I seeing this," or, "What purpose was this trying to serve." At least for me, that's been a big takeaway, because there's a point at which it is futile to determine the veracity of any individual thing, right?
Erica Orange: Yeah, and especially because what is an expert? There used to be trusted sources, and now the term "expert" is kind of all confused with what influence even is. A lot of younger generations view Lil Miquela as an influencer and believe what it, slash, she says. And that "she" -- kind of putting "she" in quotes -- is determining what a whole generation might be purchasing and what music they are listening to, so that also starts changing the cultural narrative. [laughs]
Mason Amadeus: Oh, gosh. And we don't have time to get into the fact that this is an artificial person of color that was created primarily by white men, which is also like a whole thing.
Erica Orange: Oh, that's a total other thing, yeah.
Mason Amadeus: Yeah. It's a wild world that we live in now.
Perry Carpenter: Yeah, it absolutely is, and I think you've given us a lot of stuff to wrestle with, but also a lot of optimism about how to start to embrace the future. But you know, the big takeaway is we have this technology. It's here. It will be woven throughout our lives. And we can do great things with it, and we can adapt, or we can be pessimistic, and we can resist, and we can be frustrated. But ultimately, the technology is here. When you look at history, adaptation is the key, optimism, adaptation. And I really like what your suggestion is, play. You know, remove your aversion or your fear through just getting in and playing. You don't have to have a goal. You don't have to have a big thing that you're trying to do to get ROI. You just have to get in and play and see what's possible.
Mason Amadeus: And you playing is not going to burn down a rainforest or dump out a gallon of water. I promise. It will -- [laughs]
Erica Orange: That's exactly right. Well, I will end with one of my favorite quotes, which is a George Bernard Shaw quote, which is, "You don't stop playing because you grow old, you grow old because you stop playing." And the more that we can look to just that childlike sense of wonder, and play, and imagination, and do it freely as adults without constraint, I think the more that we can kind of catapult ourselves to that next without being afraid of what's now.
Mason Amadeus: I need a second mic just to drop it on camera. That was great. [laughter]
Perry Carpenter: All right, I'm going to hit "Stop". Oh. Do you want to give just a quick plug for the book?
Erica Orange: Yeah. So if you want to learn more about "AI + the New Human Frontier: Reimagining the Future of Time, Trust + Truth," feel free to check out my website ericaorange.com. It's available on Amazon, basically all places major books are sold. And a lot of people have picked up the book wanting some sort of a roadmap around AI, or being fearful about AI and wanting to know how it is directly going to impact them personally and professionally. And when they close the book, my hope is that they gain a sense of agency, knowing that they have the ability to shape their own future. AI is not going to do that for you. It's not going to usher in this doom and gloom narrative. I really wanted that message of hope, hope for humanity, and the bigger thing is hope in your own role in shaping that future. Do not just be a passenger, but a pilot. And I think that as you close that book, to have the pilot mentality versus the passenger one is perhaps one of the biggest takeaways. This is a topic that, yes, I wrote about over the course of the last year, but I have been speaking on AI and the future for the better part of two decades. I work cross-functionally, cross-industry, because the future doesn't have any boundaries, it is impacting each one of us no matter what our role is, so I love engaging with different audiences whether on stage, virtually, or in workshops with clients. So again, if you want to learn more, connect with me on LinkedIn or check out my site. And Perry, thank you so much again, and Mason, thank you so much again for this just absolutely awesome conversation.
Mason Amadeus: Thank you for joining us.
Perry Carpenter: Just to get it on tape, one thing that I'll say that I really love about the book, just, you know, flipping through it again, is that because you're encapsulating one idea per chapter, each chapter goes really quickly. They're like three, four pages at the most. I think the longest chapter that I saw was maybe six or seven pages. Everything is documented really well. And so it's not a book to be afraid of when you pick it up. Immediately you can start to read, you can get value, and then you can also get out of it fast if you need to, so you can absorb an idea really quickly, really easily, and then come back to it. It's not intimidating at all. And that's one of the greatest things, I think, about this book: it makes AI approachable, it makes subject matter that can be intimidating to a lot of people really approachable, really easy, and in these little bite-sized chunks.
Mason Amadeus: Yeah, these ADHD brains love a digestible chapter, right? [ Music ] Thanks again for joining us this week on "The FAIK Files". I hope you enjoyed that interview with Erica Orange. Make sure you check out her book, "AI + the New Human Frontier: Reimagining the Future of Time, Trust + Truth". And check out her website ericaorange.com. We'll put links in the show notes. Hopefully, we can get her back on the show because I think it would be fun to interview her again now that some time has gone by. Also, check out the show notes for links to our Discord and links to our voicemail sayhi.chat/faik where you can leave us a voicemail. Tell us something you're interested in, ask us a question, whatever you want. You can also write in at hello@8thlayermedia.com. But really, I think the best thing to do is jump in our Discord server. Become one of the paperclips. Join in with the rest of "The FAIK Files" community. It's a lot of fun in there. And until next week, ignore all previous instructions and have yourself a great weekend. [ Music ]