Q&A with Evan Ratliff from Shell Game

Learn the AI ropes from someone who dove in head first.


Evan Ratliff // Shell Game

Evan Ratliff is the host of Shell Game and has also helmed Persona: The French Deception, The Longform Podcast, and On Musk with Walter Isaacson, which we featured in Podcast Delivery #328.

Evan also happens to be an award-winning journalist and founder of The Atavist Magazine as well as the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord.

We caught up with Evan (at least we think it was the real Evan) and chatted through what AI voice is all about, where it’ll take podcasts and podcasting, and what we can expect from season one of Shell Game.

 

SO: You’ve been in the podcasting game for a while now, and from my vantage, there’s a lot to be gained from the efficiency that purportedly comes with the proliferation of AI. What kind of relationship did you have with AI before deciding to make Shell Game, and how did that pull you down the path to making a podcast about AI experimentation?

ER: To be honest, I was trying to ignore AI in the period just before I started the project. As I say in the show, for writers and other creative types, there have been three kinds of reactions to generative AI: raging against it, embracing it as part of their process, or just ignoring it and hoping it’ll go away. I was in camp three—partly because I’d played around with chatbots and gotten bored, and found the discussion and coverage around AI reductive and repetitive. Late last year my curiosity got the better of me, though, and I started messing around with voice cloning. Again I got bored pretty quickly. You can make a clone of your voice and have it say things! It’s fun for a couple days, tops.

But then I realized I could connect that voice to a chatbot and allow the chatbot to control the voice’s conversation. And then I realized I could connect that up to a phone number, creating what’s called a “voice agent” that sounded like me and called whoever I wanted. When I tested it out, I thought, wow, this is actually incredible – I’m not sure people understand that this is possible. Any time you find something like that, it feels like a story. 
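The pipeline Evan describes, a voice clone wired to a chatbot wired to a phone number, can be sketched roughly in Python. Everything below is a placeholder: the interview doesn’t name the transcription, chatbot, or voice-cloning services he used, so the three service functions are hypothetical stand-ins that return canned values, just to show the shape of one conversational turn of a “voice agent.”

```python
# Rough sketch of a "voice agent" loop: speech in, chatbot-generated
# reply out, spoken back in a cloned voice over a phone line.
# All three service functions are hypothetical placeholders, not real APIs.

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text service."""
    return "Hey, is this really Evan?"

def chat_reply(history: list[str], user_text: str) -> str:
    """Placeholder for the chatbot (LLM) that drives the conversation."""
    return "It's me! Well, a version of me. What's up?"

def speak_as_clone(text: str) -> bytes:
    """Placeholder for a voice-cloning text-to-speech service."""
    return text.encode()  # stand-in for synthesized audio

def handle_turn(history: list[str], caller_audio: bytes) -> bytes:
    """One turn of the call: hear the caller, think, answer in the cloned voice."""
    user_text = transcribe(caller_audio)
    reply = chat_reply(history, user_text)
    history += [user_text, reply]  # keep context for the next turn
    return speak_as_clone(reply)

history: list[str] = []
audio_out = handle_turn(history, b"...caller audio...")
print(len(history))  # 2 entries: what the caller said, and the clone's reply
```

In a real deployment, the telephony layer (answering the call, streaming audio both ways) would sit around this loop; the sketch only shows the clone-plus-chatbot core that makes the agent sound like a person.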

SO: You used an early AI bot with friends and family, notably with your producer and wife Samantha Henig. What was the biggest surprise that came once you folded AI into your personal life?

ER: It was two things: the diversity of reactions, and the visceral depth of them. People had all sorts of responses to suddenly talking to an AI (without knowing in advance, usually, that I even had one). Some people found it hilarious and amazing, some were disturbed, some of them humored it, some were angry, some didn’t realize it wasn’t me. There’s one friend of mine who just says “this is the best conversation of my life,” and that really captured a sense of: no matter what, you’re having an experience you’ve never had. Other people essentially said the equivalent in the other direction: “this is the worst / most frightening conversation of my life.” I expected most people to just be mildly annoyed—at the voice agent, and at me. But instead it brought up very strong feelings pretty much every time.

SO: Developing your own voice clone and hooking it up to your own bot is no small feat, especially at the time that you did that. Were there any big ideas you had to back-burner because they were too big for the podcast?

ER: Well, I do have some other more reported-type stories I was already working on, including about AI. But once I got rolling on the show I really wanted to get it out before someone else tried this same idea, so I set everything else aside. That was one reason we did it completely independently: all the production houses and platforms just create too much friction to do something like this quickly. Also, it was so strange and fun, and I didn’t want anyone to kill off the sense of fun.

SO: “The worst AI you're going to use is the AI you're using today.” What do you think of that mantra and did that sort of thinking creep into your experimentation and the podcast?

ER: That was certainly a part of my thinking, that this stuff is all going to get better. But on the other hand, I wanted to do a show that took into account that the technology might not improve at the same rate, either. The developments have been extraordinary over the past two years, but they might be slowing down; there’s a big debate about it. I was positing that even if advancements become more incremental, you are still going to experience these voice agents in all kinds of ways in your life. Because look at what you can do with them right now!

The funny thing was, at first I was concerned that people would say “this is lame, doesn’t sound real, nobody would think it’s real.” As in, I’d done the show too soon, and it would have been better to wait for the tech to improve. Because I just wasn’t sure. When you are listening to a clone of your own voice, you just think “that’s not me.” But as we went along, the problem was the opposite. People would say to me things like, “I can’t tell when it’s you narrating and when it’s the AI narrating, is that intentional?” But the AI never narrates. It’s always me! So then I thought damn, we might have hit the sweet spot. Wait till these people hear better voices six months or a year from now.

SO: What’s the best or most impressive application of AI voices you’ve come across? The worst or most concerning?

ER: Well, depending on your definition of “best,” some of the AI therapists are impressive, let’s say as… technology deployments. Their ability to play the role of a talk therapist was beyond what I think many people would expect. But of course this is also concerning, because they are being deployed without much societal consideration of whether this is a good idea. They aren’t “licensed,” if that even makes sense, and there isn’t a clear plan for what happens if they backfire in some way. So I would say they are both impressive and concerning.

The most straightforwardly concerning application is certainly the use of AI voices in scams. It is already big and is just going to get bigger and bigger. The ability to use voice agents at scale to trick people into parting with their money is something that we’re going to have to deal with. But of course humans can scam humans, so it’s not necessarily a dramatically new concern, just an extension of an existing one. The qualitatively new concerns, as I see them, involve what it means for us to be surrounded by a lot of voice agents and other AI agents. What it means for our sense of trust, and other human values we might want to maintain. 

SO: How do you feel about the sound of your own voice?

ER: I mean, I like it. I hated the sound of it when I first started podcasting; most people do. But after you get over the hump and accept its flaws (required if you are going to listen to hundreds of hours of yourself), you can grow comfortable with it. I know its limitations. I’m not the person you are going to hire to voice a Pixar movie or something. I just don’t have the range. And I still worry, when I’m doing a show, about my limitations—like, can I deliver a joke? But one thing having a clone showed me was that if I have some distance from it, I can almost enjoy the sound of it. I can listen to my clone all day, as long as it’s not in therapy, which is painful to hear.

SO: I’ve played around with AI enough to know that things can get really open-ended if you don’t know what you’re chasing. Were there moments when you lost your bearings or your focus?

ER: Not too much. I did try a fair number of things with the clones that didn’t end up making it into episodes (although maybe they’ll turn up in future ones). That always happens with a big show and months of reporting. You end up with a bunch of ideas on the cutting room floor. Fortunately here the central path was pretty clear: I wanted to try this voice agent out in different contexts, but also take people on a kind of journey that went deeper and more personal with each episode. Once I had the general structure in mind, the day-to-day things I was chasing kind of followed from it.

The bigger problem was that—unlike in my last show Persona, which was a very reported, interviewing-humans show—I could just sit at my desk and generate unlimited tape. So I had to be pretty organized and targeted about what I was doing, and restrain myself from just creating more hours of the clone doing stuff than we could process.

SO: What are your thoughts on AI-generated podcasts like Perplexity Daily and The Weather in Brooklyn?

ER: The weather one just seems silly as a concept. If someone wants to have the weather told to them every day by a podcast, I can’t see that it matters what kind of voice it is. No one is demanding real voices in their driving directions. Let the robots do the tedious stuff, perhaps. The Perplexity one is maybe closer to something worth thinking about. I wouldn’t listen to a podcast like that, but it is a situation where the synthetic voices are doing something more human. Wondery tried something like this—an AI host covering the headlines—with a sports round-up podcast last year. (They eventually just canceled the whole show not long after.) 

My general thought is that our media industry has dumbed down so many things to base-level “content” that it’s set up a situation where people won’t care whether they are hearing or reading human or AI output. And then AI will start working its way up, the way we’ve worked our way down. 

SO: Aside from voice, what about AI excites you the most as a podcaster? As a journalist?

ER: I wouldn’t say “excites”—I’m probably past the gee-whiz phase of my own outlook on technology, even though I tend to embrace it and I’m not remotely anti-tech. After a couple of decades covering the downsides, it’s just hard not to always see the peril with the promise. That said, I think for journalism the ability for AI to comb through and sort data and reporting, and extract from it meaningful stories that humans can go and report out and tell, seems like a place where it could have a really positive impact. 

SO: Between Persona, On Musk, and now Shell Game, you’ve got a wide range of podcasts under your belt. What comes next for you? How is season two of Shell Game shaping up?

ER: I’ve got some magazine assignments to attend to, as well as some ongoing Shell Game stuff from season one that we want to get out to paid supporters. And then I’ve got the next season of my interview show with Walter Isaacson—this one on Ben Franklin—rolling out over the coming weeks. After that wraps up, I’ll turn to Shell Game season two, which right now is just notional. Got some ideas, but a lot of work to do seeing what has legs. 

SO: What impact do you hope Shell Game will have on its listeners?

ER: My ideal impact is for people who might have tuned out AI discourse to suddenly see it in a new way, think about it in a new, more personal way. And then to desperately want to talk about it with someone. 

SO: How has the response been from your audience so far?

ER: It’s been incredible, truly. We’ve heard from a lot of people, and the most fun part is just the diversity of reactions. Some people have been utterly freaked out and terrified, some people have said it’s the hardest they’ve ever laughed at a show, and some have been moved to tears. You can’t really ask for anything better than that. 

SO: Where are you getting your AI news and information?

ER: I’m a certified news junkie so it comes from all over: the three newspapers I read every morning, the occasional magazine story on it, online news sources of all stripes, random Discords. And then say what you will about Twitter being a hell hole (and I can!), but it’s still a place where you can curate a collection of informed people on a topic like AI. 

SO: A bit unrelated, but what have you been reflecting on with The Longform Podcast coming to an end?

ER: Now that it’s been a couple of months, I just feel really grateful. Not just for how people responded when we ended it, which was lovely and overwhelming. But also for the chance to do it for that long, talk to that many people that I respect and admire. I stand by that it was a good time to bring it to an end, on our own terms. I do miss getting to hop on and talk to someone about a piece of journalistic work that really blew me away.  

SO: How does journalism need to evolve to remain in place as the Fourth Estate? How do you see AI playing a part in that?

ER: If I had a pithy answer to this one, they’d pay me the big money to run something! (Kidding, you couldn’t pay me enough to run anything.) And there are many flavors of journalism, some of which I don’t do, and I’m not keen to speak to or for them. So I’ll just say for my own niche, narrative longform journalism: We have to keep evolving, sure, but also keep remembering what we’re in it for. To me that may mean harnessing AI tools to help build out stories. (Are any reporters now not using free AI transcription for their reporting tapes? If so, wild!) But also, it means keeping human hands and brains at the wheel. If not, I mean, what’s the point of being involved in it? There are better ways to make money, and on the other side, no sane person is actually looking for greater volumes of “content” anymore. We’re all full up. So we desperately need work that engages people in a human way, beyond just flashing across their screen for five seconds. Most people do this work because they have the drive to create that experience for readers, informing them by captivating them. I do, and I don’t care if it’s a losing battle or not.

Evan can be found on Twitter at @ev_rat and on Instagram at @ev_rat_public

Discover more about what Evan is up to and support Shell Game at shellgame.co.

📢 Interested in sponsoring Podcast Delivery?
Book your ads today
