John Maeda, author of 'How to Speak Machine'.
Dave Bittner: [00:00:04] Hello everyone, I'm Dave Bittner. In this CyberWire special edition, my conversation with John Maeda. He's a graphic designer, visual artist, and computer scientist, former president of the Rhode Island School of Design, and founder of the SIMPLICITY Consortium at the MIT Media Lab. His newly released book is, "How to Speak Machine: Computational Thinking for the Rest of Us."
Dave Bittner: [00:00:27] But first, a word from our sponsors, McAfee. Ideas don't come for free. Budgets are begged for, long hours are required. The months, maybe even years of research. The sheer human effort of it all. The changes, the revisions, the reworks, the results. The adaptation, the innovation, the collaboration – all lead to the final moment when it pays off. And it's perfect, your company's work. As long as it's not compromised. From device to cloud, McAfee harnesses the power of one billion threat sensors to design security that moves beyond intelligence to insight so you can move beyond optimizing security products to optimizing your security posture, and not just react to threats, but remediate threats that matter. Intelligence lets you respond to your environment. Insights empower you to change it. McAfee – the device to cloud cybersecurity company. To learn more, go to mcafee.com/insights. That's mcafee.com/insights. And we thank McAfee for sponsoring our show.
John Maeda: [00:01:43] Well, I realized that when I talked about design as classical design, the way RISD does design; design thinking, which is basically about Post-it notes, Sharpies, and collaboration; and computational design, which is about anything involving Moore's Law, people would ask me, what is computation? And so I started the book off as a book about design, and I ended up making it a book about computation.
Dave Bittner: [00:02:16] Well, let's explore that. Let me ask you the basic question – by your estimation, what is computation?
John Maeda: [00:02:23] Computation is this material that anyone in cybersecurity knows intimately. It's the cyberspace world that William Gibson described in Neuromancer. It's the Upside Down world in Netflix's Stranger Things. It's this world where a lot of things are happening that the average person cannot imagine or see with their own eyes. And it's a place where computation powers the cloud. It's everything we cannot see that's running everything today.
Dave Bittner: [00:02:58] Is it real?
John Maeda: [00:02:59] Absolutely real. It's real in the way you can feel it through writing code, talking to APIs. It's out there, you just can't see it in one place, because it's pervasive.
Dave Bittner: [00:03:14] I have to say, I really enjoyed reading the book. There are many things that you describe in the book that paralleled my own personal experience coming up through technology – the early days of computing, and 8-bit computers, and all that sort of stuff. One thing that struck me was you pointed out that it's not just the functionality of the code that we find satisfying, that it's the elegance of the code as well.
John Maeda: [00:03:41] Hmm. Well, anyone who creates software has an appreciation for poetry, whether that's the spare use of language or the clever use of language or the unexpected combinations of different parts of language. It's like, wow, that's a beautiful idea expressed as text. And I think that code has the capability to be beautiful in that same way. It's like, wow, it's this small bit of code and it does so much, and it's expressed in a way that makes complete sense, and how did you do that? So there's that kind of aesthetic of code out there.
Dave Bittner: [00:04:22] One of the points that you make in the book is that computers run in a loop and they never get tired. One of the things that struck me about that was I feel like sometimes our brains get caught in a loop, and many times that's in the middle of the night, you know, you wake up and it can be frustrating and even maddening to try to break out of that loop. It just struck me as an interesting contrast between humanity and the computational power of the machines we interact with.
John Maeda: [00:04:52] Wow. Never thought about that. I mean, yes, it's that mistake you made, and like, ugh, I've got to remember it. What happened? I can't believe I did that. I did that? What? And you play it over and over. In that sense, it seems like the nature of our brains is to keep reminding us of a dumb mistake we made so we might not make it again.
Dave Bittner: [00:05:13] (Laughs) Right.
John Maeda: [00:05:15] But I think when a computer loops, it's – it doesn't have a conscience or a consciousness. So it's just like, oh, I'll just start doing this, I'ma do it, I keep doing it, that's all right.
Dave Bittner: [00:05:27] Yeah. You describe in the book when you were a youngster, you know, your first experiences with programming computers in BASIC, an experience that I think many of us share. I know I certainly did. That whole, you know, 10 PRINT "John" 20 GOTO 10. And the feeling that comes over you when you first experience your ability to kind of control the machine.
John Maeda: [00:05:52] Yes. It was a weird feeling that I could tell it to do something and it would do it forever and never stop. But actually, when you mentioned the looping thing, how we sit there in a loop – when you think about machine learning, which uses iteration, optimization, it's basically thinking to itself over and over in a loop, refining itself toward a kind of, "I will not do this again. I will not do this again. I will not do this again." There's a weird parallel to what you described just now.
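The iterative loop Maeda describes can be sketched in a few lines of Python. This is a hypothetical, minimal example of gradient descent (the function and step size are illustrative, not from the book): the machine repeats a loop, shrinking its error a little on each pass.

```python
# Minimal sketch of iterative optimization: repeat a loop, reduce the error,
# the machine's version of "I will not do this again."
def minimize(grad, x, lr=0.1, steps=100):
    for _ in range(steps):       # the loop that never gets tired
        x = x - lr * grad(x)     # step away from the last "mistake"
    return x

# Illustration: minimize (x - 3)^2, whose gradient is 2 * (x - 3).
best = minimize(lambda x: 2 * (x - 3), x=0.0)
print(round(best, 3))  # converges toward 3.0, the minimum
```

After a hundred passes through the loop, the guess has settled almost exactly at the minimum; the "cleaning itself" Maeda mentions is just this repetition at much larger scale.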
Dave Bittner: [00:06:25] One of the things that you bring up, again, is sort of that journey through the early days of computing was the ELIZA computer program, which – another thing that I remember vividly. Do you suppose that a modern computer user would find ELIZA compelling? Have we moved on beyond where that sort of interaction is interesting to people?
John Maeda: [00:06:51] Well, I think ELIZA's the foundation for all these chat bots. And I think we still are fascinated by it when we're talking to Alexa or Google Home or whomever – huh, whomever.
Dave Bittner: [00:07:02] (Laughs)
John Maeda: [00:07:01] And it responds in ways that make sense sometimes and don't make sense other times. So I think it still works. It's a good trick. It's like a magic trick, a parlor trick.
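The "parlor trick" at the heart of ELIZA can be sketched in a few lines: match a keyword pattern, then reflect the user's own words back as a question. This is a hypothetical toy responder, not Weizenbaum's actual script; the patterns and canned replies are invented for illustration.

```python
import re

# Toy ELIZA-style responder: keyword patterns paired with reflected replies.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Echo the captured words back inside a leading question.
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I feel tired today"))  # Why do you feel tired today?
print(respond("Nice weather"))        # Please go on.
```

The trick is that no understanding is involved: the program only recognizes surface patterns and mirrors them, which is why it makes sense sometimes and not others.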
Dave Bittner: [00:07:14] I've heard stories about a family that was making use of Siri. They had a child who had developmental disabilities. And the fact that this child could interact with Siri and that Siri had endless patience, Siri would answer all of the child's questions and never get frustrated, never get tired. And they found it extraordinarily helpful with this child's development.
John Maeda: [00:07:43] Love that. Yeah, well, I never thought about that. Well, actually, another example of that is Paro – it was a robotic seal, a stuffed animal, plush toy-type thing. The toy was made and the toy was a flop. But senior care facilities were buying these on eBay, because for older people to hold on to this baby seal and have it react like it's a real living thing was a big deal. It reduced stress, and no one else would take the time to reduce their stress. So I guess in that sense, some of these robots, these living-seeming systems, can provide infinite, or at least the guise of infinite, attention.
Dave Bittner: [00:08:28] Is there a danger there? I mean, one of the things you point out in the book is sort of keeping grasp on our humanity. If we come to depend on machines for their infinite patience, is there a risk that we lose something there?
John Maeda: [00:08:45] Wow. I was actually thinking about this just a month ago. I wrote something cogent in a blog post, and if I remember what I wrote, it was something to the effect that you can sit on either side of the fence: oh my gosh, this is terrible, the robot has no feeling, how could you do that? Or: does it really matter if the thing doesn't feel, if you feel differently? So it can go both ways.
Dave Bittner: [00:09:13] I'm struck by the notion that you understand someone better when you can speak to them in their native language rather than going through a translator. Where do you think that leaves us when it comes to understanding the machines that we interact with? Do we – the folks who don't need that translation layer, are they at an advantage?
John Maeda: [00:09:39] Oh my gosh. Yeah, great framing. I think anyone who can think technically – anyone who can think about how the system works – can actually do things differently, because they have insider information. It was like just now – I'm in a hotel room, and when the phone rang, I was like, wait a second, what's going on? No, actually, you have an isolated line; you don't have to have that sound in your voice. So when you understand how the system works, you can definitely do things so much better, so much easier.
Dave Bittner: [00:10:19] When it comes to our interactions with the machines we use every day, should computation fall into the background? Should it go unnoticed? Should it not draw attention to itself?
John Maeda: [00:10:31] Oh, wow. I think if you're just paying for the service and you need it to do something, it definitely isn't supposed to be there telling you, here I am. But if anything might be going wrong on your behalf, you do have to have a critical thinking lens attached to it and ask questions about it, and ask the companies as well. Otherwise, you will not get the most value out of what you're renting or purchasing.
Dave Bittner: [00:11:02] I'm interested in your take on what I would perceive as a detachment that can come from programming, from spending a whole lot of time with a computer. I remember as a teenager, in the summertime, when all I had was available time, I would just bury myself in front of my computer, hack away at the keyboard, and come up for air, food, and water only occasionally. Again, I wonder, is there a hazard to this? Do we risk our emotional development, or detachment from friends and loved ones?
John Maeda: [00:11:43] You know, in hindsight, you can think of people who do embroidery, or knit, or who built plastic models back when kids built a lot of plastic models – those are engrossing activities that require attention, and you enter the flow. When you're writing software, it's not dissimilar. And the people who made plastic models, or who do embroidery or knit, they seem to be okay today. The difference with computation is that, if you are making things that affect other people at the scale of hundreds, thousands, millions, you can detach from realizing that you're working not just with numbers, but with people as well. And that may be a different kind of growing-up experience.
Dave Bittner: [00:12:45] Yeah, and that brings up a really interesting point, which is the ability of a limited number of people to have a huge amount of influence on millions or even billions of people. The folks who are designing the algorithms that run systems like Facebook and Google, these large platforms that we've come to depend upon – it seems as though they could have an outsized influence without a whole lot of pushback, relative to the number of people who are making those decisions.
John Maeda: [00:13:18] Absolutely. And I think that is something a lot of folks never considered, because they were making too much money to care about it. And then suddenly, when things happen and people can actually use the platforms themselves to raise these issues, you start having to become aware, become awake. So we're in this weird time where the platform that can wake people up can also shut down that wokeness at the same time.
Dave Bittner: [00:13:51] Hmm.
John Maeda: [00:13:53] And that's why I look at Joseph Weizenbaum and how he considered how ELIZA could be used for harm, so early in its evolution, because he grew up in Nazi Germany and fled it. So he could imagine what people with bad intentions could really do if they had that power. And I think the people in tech didn't consider that a lot. Big tech was like, eh, this is saving the world. You know, do no evil. Whoops, we did that? How is that possible?
Dave Bittner: [00:14:31] Yeah. Isn't that interesting? I mean, that decades ago he was thinking about that...
John Maeda: [00:14:34] Yeah.
Dave Bittner: [00:14:35] ...You know, with the comparatively rudimentary abilities of computers back then, to be able to see that far forward. And when we look at what's playing out today, how forward-thinking he was.
John Maeda: [00:14:48] Absolutely. You know, I mean, the fact that he could imagine that is, I find, mind-boggling.
Dave Bittner: [00:14:56] Mm-hmm.
John Maeda: [00:14:55] But I guess in that era, the era of post-World War II, DARPA could do anything. It was like a whole new world. So maybe people thought differently back then. Oh, actually, someone once told me how he was worried that most of the world's national research labs used to be run by Manhattan Project-era physicists, who all had to deal with the consequences of creating an amazing piece of science that was an amazing weapon. So they brought a moral conscience to their work as research lab directors, and that's gone.
Dave Bittner: [00:15:37] Oh, isn't that interesting.
John Maeda: [00:15:39] Yeah. So I wonder, huh, maybe they all could think like this, and we lost that competency.
Dave Bittner: [00:15:45] You know, we all grow up learning to read and write and we learn basic mathematics. Do you suppose that computational thinking will become a core competency for the generations ahead who are coming up?
John Maeda: [00:15:59] I don't worry about the generations ahead, because I think they grow up computational because of all the tools they use. I'm more worried about the people who are older. If you look at the statistics of the world population, we used to have this population pyramid, with a lot of young people at the base and fewer older people at the top. But all projections into the middle of this century show that it's going to be a population rectangle, which means there will be just as many older people as younger people. And I think if they don't become computationally literate, they may act the way we see today, with a kind of nationalism: let's make it the old way and hold back progress.
Dave Bittner: [00:16:53] Do you suppose that there are certain cultures that, with their backgrounds and their history, that they may be predisposed to take better advantage of computational thinking?
John Maeda: [00:17:04] Wow, that's so interesting. I think the Japanese, if you think about their fascination with robots, are one example of being open to that future, also because they have the aging crisis of too many older people and, like, not enough younger people. So they've adopted robots as the future because they're computationally minded. It's OK. There's no stigma around that.
Dave Bittner: [00:17:32] Near the end of the book, you compare cooperation with collaboration. You contrast the two. Can you describe to us the difference there?
John Maeda: [00:17:42] Yeah, that's one of my favorite insights about organizations. When you cooperate, you don't have to be really codependent – like, okay, I'll do it. When you collaborate, you are codependent. We're going to actually work together. We're going to actually get involved together. So collaboration is such a wonderful thing. It's so much harder. Cooperation is easy. Collaboration is hard.
Dave Bittner: [00:18:10] Do you suppose we're by necessity moving into an era where it's going to require more collaboration?
John Maeda: [00:18:17] Exactly. It's going to require collaboration between us and the computers. It's going to require collaboration between us and the people who control all these technologies, too.
Dave Bittner: [00:18:30] You know, it strikes me that when I think about technology and design and the intersection of the two, particularly online, there is so much bad design out there that when we come across good design, I think we find ourselves delighted. Does that resonate with you?
John Maeda: [00:18:51] I think designers will always say that. (Laughs)
Dave Bittner: [00:18:53] (Laughs)
John Maeda: [00:18:55] Like, uh, Paul Rand – when I interviewed Paul Rand, I asked him this question: what is design? And he said something like, design that is good is rare. You know, the fact is that bad design exists everywhere. He was that kind of iconic designer, Paul Rand...
Dave Bittner: [00:19:22] Right.
John Maeda: [00:19:21] ...And he was basically saying, my design is good, everyone else's is bad – bad design is why you hire me.
Dave Bittner: [00:19:33] (Laughs) Fair enough.
John Maeda: [00:19:36] So designers like to say everything else is bad, my design is good, hire me.
Dave Bittner: [00:19:41] What do you want people to take away from the book? What's the take-home message here?
John Maeda: [00:19:47] I'm hoping that people who are not technically oriented can get more curious about how computation controls so many things while being invisible and powerful, and not to be afraid of it, but to be curious about it.
Dave Bittner: [00:20:06] All right. Well, John, thanks so much for taking the time for us. This is a real pleasure getting to chat with you.
John Maeda: [00:20:12] Thank you, Dave.
Dave Bittner: [00:20:16] Our thanks to John Maeda for joining us. The book is titled, "How to Speak Machine: Computational Thinking for the Rest of Us." Our thanks to McAfee for sponsoring our program. Visit mcafee.com/insights and find out why McAfee is the device to cloud cybersecurity company. For everyone here at the CyberWire, I'm Dave Bittner. Thanks for listening.