Software engineering can get to feel a little bit sublime when there's a chance to make a small change that could ripple through a big system. That sense that nothing is ever necessarily stuck, and that there's a series of motions you can make with your fingers on a keyboard to change a whole reality.
Dan Fichter:The big system I think we can change beyond recognition, if we're not afraid to, is our social media landscape: Instagram, TikTok, YouTube, Twitter -- the big giants where even engineers working on them don't have much say over how they work. The change I want to see is not a new startup to unseat them, or, just, the Fediverse or, hey, let's all switch to Bluesky.
Dan Fichter:It's something, instead, that would make the biggest social media platforms work very differently, whether their owners like it or not. It's a public policy idea: a step we can take under the law, or maybe through new law, that the biggest voices in tech policy aren't really talking about. But I think they should be.
Dan Fichter:Because it would take away Elon Musk's megaphone, and all the tech giants' sway over how we see and understand each other, and give us all much more say over how the social platforms work, every time we use them.
Dan Fichter:So I'm grateful to you for hearing this wild idea out, and proud to introduce three guests: three software engineers, three public policy thinkers, and three very kind and charming communicators who think a change widely considered impossible isn't, and could be just the thing to fix a system that's moved so fast, it's broken the way we see and hear each other.
Jonathan Stray:What are you doing? Are you starting a podcast?
Dan Fichter:That's Jonathan Stray, and he graciously sat down with me for this conversation without exactly knowing why.
Dan Fichter:And I'm Dan Fichter, and this is Hucklemelon, and in this first episode of the show, I'm going to be opinionated about the social media giants.
Dan Fichter:I worked with all of them when I was chief technology officer at a company called Moat, that tried to make the social media platforms more transparent, at least to the advertisers paying their bills. I got to see up close how things these tech giants claimed were technologically impossible suddenly became totally possible when big-name advertisers insisted on it.
Dan Fichter:And it got me thinking about how little we ask of them, as a matter of public policy, and how readily we accept their arguments that changes we'd like to see just can't be made. The contrast between what they'll do for advertisers and what they'll tell us can't be done: it's just hard to sit with.
Dan Fichter:And later, working full time at a youth suicide prevention nonprofit, and briefly on the federal government's suicide prevention team, what these companies do to so many people's mental health was always on my mind and led me to want to bring you the ideas you're about to hear.
Dan Fichter:So, anyway, back to these ideas, and back to Jonathan. Jonathan is a senior scientist at the UC Berkeley Center for Human-Compatible AI. And among other research efforts, Jonathan is exploring this big policy idea I want to tell you about. And the working name for this policy idea is 'middleware', and it targets the content selection algorithms that decide what we see on the biggest social media platforms.
Jonathan Stray:As you know, I was a coauthor on -- there was just this big paper that came out of Stanford on the concept, you know, the legal issues surrounding it, the technical issues surrounding it. The idea of middleware is that there should be a level of interoperability where I can decide who I want to run my content selection algorithm, you know, independent of the platform.
Jonathan Stray:Wouldn't it be nice if I could choose from a selection of third-party algorithms to do my content selection? I think it would address many of the concerns that people have, in terms of, you know, monopoly, or oligopolies, in terms of content selection. Let me choose the algorithm for my feed.
Dan Fichter:So, in other words, the big idea behind middleware is that we can keep using the social media platforms we can't really get away from -- the ones our friends are on, and the ones all the content is on -- but we shouldn't have to use those platforms' content recommendation algorithms if we don't want to.
Dan Fichter:We should get alternatives to those algorithms to pick from, from outside parties -- startups, AI companies, newspapers, magazines, university departments, entertainment companies -- a whole range of people and organizations that could do what the algorithm does if we gave them a chance to compete with the algorithm.
Dan Fichter:And that paper Jonathan mentioned is the latest in a series of academic papers on middleware that have come out of Stanford and other research settings in recent years, but have largely flown under the radar of many tech policy thinkers, for reasons I want to get into.
Dan Fichter:I'll do my best to paint a picture of the reshaped social media landscape that middleware proposes we aim for. But first, the status quo: these researchers say that 70% of what we watch on YouTube is picked for us by YouTube's algorithm, and that on TikTok, it's 90%. I haven't seen estimates for Instagram, but in my own Instagram feed -- and I mean the main feed, not the Explore page -- close to half of the posts are recommendations from accounts I don't follow, peeking out and saying 'hi' from among posts from my friends.
Dan Fichter:It's not that we aren't at all in control of how we use social media. We scroll past a lot of what the algorithm picks out; and some of what we see is because we followed the person who posted it, or we searched for it, or we even tried to 'train the algorithm'. It's just that the most natural thing to do on social media is to check out whatever lands in our feeds.
Dan Fichter:And because we do exactly that, more often than we do anything else, the algorithm that decides what's there in our feed when we scroll effectively gets to decide how we see the world and each other when we're on these platforms. 70% or 90% of our time is the algorithm's to do what it wants with, because the moments we have to browse social media -- before falling asleep, or right after waking up, or while crossing a busy street (hopefully not that) -- aren't moments when we tend to ask, what can I search for now so I can go learn something new? They're moments when we surrender to the algorithm, and these moments add up.
Dan Fichter:It's what happens in these moments, which make up most of our time on social media, that middleware could radically change. Because access to middleware would simply mean the ability to decide what shows up when we scroll -- a kind of control we really aren't used to having.
Dan Fichter:We're going to talk about what our feeds could look like if we won that control, and what a huge shot in the arm this could be for creators of alternatives to the algorithm, and how the economics of middleware could be transformative for news organizations and other institutions that could come compete with the algorithm in a way they've never been allowed to.
Dan Fichter:And, of course, we'll talk about the social impact middleware could have. Jonathan studies conflict, and is working on a major study to reduce online polarization on social media. And Marc Faddoul, a guest you'll meet in just a moment, has uncovered Russian influence operations on social media that may have swung world events in antidemocratic ways. The social stakes for middleware are extremely high, and that's why so many of us care about dethroning these profit-driven algorithms that don't mind tearing us apart to keep us engaged.
Dan Fichter:And we'll talk about pitfalls with middleware, which I bet you are already starting to tally up, as I speak. And finally, we'll get into how to make the platforms allow middleware to exist, even as middleware will eat into their numbers, and their profits, and for owners like Elon Musk, the political power that the megaphone they've bought has given them.
Dan Fichter:But I really think it pays to start off with the more personal impact middleware could have, not on strangers scrolling on their phones, but on you. And to do that, I want to introduce another accomplished social media researcher who's joining me today.
Chand Rajendra-Nicolucci:It's weird to be somebody who's studying social media but tries to avoid it as much as possible.
Dan Fichter:That's Chand Rajendra-Nicolucci, an author on several important research papers, including with Ethan Zuckerman, who ran the MIT Center for Civic Media and now teaches at UMass.
Chand Rajendra-Nicolucci:A lot of times, you know, I'll go and use it and feel like, oh, you know, I just wasted an hour, and I'm not sure how that happened. I didn't want that to happen.
Dan Fichter:Chand is going to say a lot more about his everyday experiences with social media, and how they've shaped his policy thinking, and his research. But I also want you to meet our third guest today, Marc Faddoul. And I asked Marc what middleware might mean for him personally, broader social questions aside.
Marc Faddoul:Hi. I'm Marc Faddoul. I'm the director of AI Forensics, which is a nonprofit organization based in Europe, which investigates social media recommender systems and other influential algorithms. And so our mission is kind of to hold big tech platforms accountable to their users and to the law.
Marc Faddoul:I think for me, really, the main gain of this algorithmic pluralism would be to be able to switch between feeds, depending on the time of the day, or my mood. Because I do appreciate and love and want quality journalistic content, and good recommendations to help me discover it. But it's not the only thing I like to watch on the Internet. I also love the silly, light videos which are circulating or getting viral. But I only want a little bit of that, at certain very delineated moments. And then I'm also a passionate climber, and so I can very easily get into watching hours of climbing videos. But, also, I don't want this to come distract me when I'm trying to do something else.
Dan Fichter:Marc's research and ideas are taken very seriously by European legislators and regulators, and that's part of why I'm so happy to have him, in a moment when policymakers outside the US could potentially challenge social media tycoons like Elon Musk in a way our government -- in a moment when our government sort of is Elon Musk. But that's also why I'm grateful to Marc for illustrating how middleware isn't just about geopolitics and struggles for mass influence on the Internet.
Dan Fichter:It's also about a better Internet for people like Marc. YouTube, for someone like Marc, is sort of like a TV that has only one channel, when you scroll through the recommended videos its algorithm has put in your feed. And all Marc wants to do is sometimes be able to change the channel.
Dan Fichter:If middleware weren't that -- if it weren't about making the Internet a little more usable and more interesting for all of us -- if the case for middleware were only about the dangers of today's social media algorithms, it could certainly appeal to parents who are concerned about what kids see when they scroll. But if middleware also makes the Internet better for Marc, and you, and me, when we scroll, the case for it is even stronger. It's even more of a case of mass appeal, which is a pretty vital prereq to making a big policy change happen.
Dan Fichter:So I also want you to hear from someone I didn't interview, but someone who has contributed to policy on middleware, and who also knows how to talk about middleware in relatable, intimate terms.
Francis Fukuyama:I am very pleased to be talking to Daphne Keller. Daphne is a lawyer. She had been an associate general counsel at Google in the past. She's now at the Stanford Law School. She is running a program on platform regulation.
Dan Fichter:That voice is Francis Fukuyama, whom we'll hear from more later. But here, he's introducing Daphne Keller, a scholar Fukuyama interviewed on his YouTube channel two years ago.
Daphne Keller:You know, in a middleware universe, or middleware Internet, maybe you could go to YouTube and say, I don't want these results ranked in the way that YouTube is offering. I instead want to subscribe to the Disney ranking, or the ESPN ranking, or the ACLU ranking, or my church's ranking. And maybe also I have a, you know -- I am a feminist, and I don't want content that seems degrading to women, or I don't want my child to see Disney princesses. So I'm also going to subscribe to an overlay of a block list from a feminist organization that provides that. And, you know, you can imagine having multiple sources that users trust more than they trust Mark Zuckerberg, or the YouTube leadership, and users get to choose those, in lieu of YouTube, as the providers of the ranking and the content moderation.
Dan Fichter:When Professor Keller says ranking, she's using a technical term for what algorithms do when they choose what content you should see when you scroll. So she's saying middleware could make parents a lot less worried about what kids are seeing on social media than Mark Zuckerberg has ever let anyone be. And for grown-ups, it could metaphorically let us flip between channels we're interested in, and away from the choices of the platforms' algorithms.
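Daphne Keller's picture -- subscribe to a ranking, layer on block-list overlays -- composes naturally in software. Here's a minimal sketch in Python of how a client might chain user-chosen middleware; every name here (the posts, the sports-first ranker, the blocklist) is a made-up illustration, not a real platform API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Hypothetical middleware pieces: a third-party ranker and a block-list
# overlay, composed the way Daphne Keller describes.
def sports_first_ranker(posts):
    # A curator's ranking: sports content floats to the top.
    return sorted(posts, key=lambda p: "sports" in p.text, reverse=True)

def blocklist_overlay(blocked_authors):
    # An overlay from, say, an advocacy group the user trusts.
    def apply(posts):
        return [p for p in posts if p.author not in blocked_authors]
    return apply

def build_feed(candidate_posts, ranker, overlays):
    posts = candidate_posts
    for overlay in overlays:   # filters the user subscribed to
        posts = overlay(posts)
    return ranker(posts)       # the ranking the user subscribed to

feed = build_feed(
    [Post("a", "sports highlights"), Post("b", "gossip"), Post("c", "sports news")],
    ranker=sports_first_ranker,
    overlays=[blocklist_overlay({"b"})],
)
print([p.author for p in feed])  # → ['a', 'c']
```

The design point is that ranking and filtering are separable: overlays narrow the candidate pool, and whichever ranker the user subscribed to orders what's left.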
Marc Faddoul:I was trained as a computer scientist.
Dan Fichter:That's Marc again. And I asked him what led him to his work on middleware, and how he sees the tech giants objecting to the idea.
Marc Faddoul:I studied engineering in France, and then I quite soon got more interested in the impact of technology on society rather than building technology on a day to day basis. And so I went to Berkeley, to the School of Information, which has a very transdisciplinary department, whose purpose is basically to study technology and society.
Marc Faddoul:So the other argument that sometimes comes up is that users don't want to take the time to customize or to play with the content that is being recommended. But I think it's actually quite a patronizing statement on behalf of the platform, because I do think that users -- considering the amount of time that they spend on these apps -- really care about what they see there. And it's just about providing interfaces that are usable enough and kind of intuitive enough to empower them to make these choices.
Marc Faddoul:And obviously, right now, the incentive is not really on the platforms to develop these kinds of choices. And so that's why they haven't made them available or deployed these types of features on the apps.
Dan Fichter:So Marc says, if we want middleware, we're probably going to have to make the tech giants give it to us. Bluesky, one of Twitter's rivals, might give it to us voluntarily -- here's Jonathan, on that.
Jonathan Stray:It's somewhat limited. It's not -- you can't really write a complete recommender algorithm for Bluesky at this time. I'm actually talking with the team about that, to see if we can figure out how to change the protocol to allow it.
Dan Fichter:And while we're speaking of Bluesky, Marc is working on an independent philanthropic effort that would keep Bluesky from backing out of this kind of progress in the future.
Marc Faddoul:And so, actually, we are launching this project called Free Our Feeds. You can find FreeOurFeeds.com, whose purpose is to make sure that the underlying protocol of Bluesky, which is the AT Protocol, is not only decentralized in theory, but also in practice. Because currently, it's an open protocol that everyone can build upon, but there's only really one actor that builds on it: it's Bluesky. And so, therefore, if you take control of Bluesky, you take control of the entire infrastructure. And, so, the idea here is to create a foundation that would be publicly governed and a nonprofit that would also contribute to the infrastructure.
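For concreteness, Bluesky's AT Protocol already lets third parties run "feed generators": small servers that answer one question -- which posts, identified by URI, go in this user's feed right now. Here's a rough sketch of the response shape such a server produces, assuming the protocol's getFeedSkeleton endpoint; the post URIs and paging logic are placeholders, not a working feed.

```python
# A minimal sketch of the one question a Bluesky custom-feed server
# answers (app.bsky.feed.getFeedSkeleton): it returns post URIs, not
# post content -- the client fetches and renders the posts itself.
# The URIs below are made-up placeholders.
def get_feed_skeleton(cursor=None, limit=2):
    curated = [
        "at://did:plc:example/app.bsky.feed.post/111",
        "at://did:plc:example/app.bsky.feed.post/222",
        "at://did:plc:example/app.bsky.feed.post/333",
    ]
    start = int(cursor) if cursor else 0
    page = curated[start:start + limit]
    return {
        "feed": [{"post": uri} for uri in page],   # ordered picks
        "cursor": str(start + len(page)),          # where to resume
    }

print(get_feed_skeleton())
```

Note what the curator never touches: hosting, identity, the posts themselves. That separation is exactly the middleware shape -- the curator only supplies the ordering.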
Dan Fichter:But because, for now, we collectively spend so much more time on the huge social platforms that aren't voluntarily considering middleware, unlike Bluesky, the question of forcing change with those tech giants is where I want to focus. So here's Chand, again, explaining his view of the tech giants, and how he became involved in middleware policy.
Chand Rajendra-Nicolucci:Things were shifting from, oh, you know, Google and Facebook are saving the world to, oh, maybe they're having, you know, more harmful impacts on the world, and we should take a little more critical look at them. I ended up working with a law professor on a research project that looked at market manipulation by financial trading algorithms. And it sort of got my feet wet in tech policy and law, and, you know, led me to continue to seek out more opportunities that were interdisciplinary. And eventually, I took the job at the Knight First Amendment Institute after I graduated, and that sort of created, you know, the whole path to where I am now. That's where I met Professor Ethan Zuckerman, who's been my longtime mentor and collaborator.
Dan Fichter:Zuckerman was actually in court last year, suing Meta for interfering with browser extensions that try to override its platforms' recommendation algorithms. And Chand participated in the symposium last spring that the December middleware paper Jonathan contributed to drew from. So you're hearing -- and they are so nice to be talking to someone who hasn't made a podcast before -- from a few of the people in the inner circle of middleware policy.
Chand Rajendra-Nicolucci:I think I struggle with, you know, feeling like I don't have -- necessarily -- control over my interactions with social media. When I'm on Instagram and I'm, you know, scrolling, I don't know, some entertainment gossip, or whatever, that's definitely not something that I want to be spending my time on. Like, in the moment, I am spending time on it. But if I had a way to tell Instagram, don't give me any entertainment gossip on my Explore page, I would gladly take advantage of that, and actually enjoy my experience on Instagram much more.
Chand Rajendra-Nicolucci:There are these really powerful algorithms that are supposedly optimizing for what people want. But on the other hand, people signal again and again that they don't want what social media is giving them. Even if they, you know, are watching it, and there's this whole idea of revealed preferences, where they might say they don't want that, but they really want it, because you can tell they really want it, because that's what they keep, you know, consuming. It's sort of almost paternalistic, and like feeding you junk food, and saying you like the junk food because you don't really have any other option.
Chand Rajendra-Nicolucci:Basically, it's this idea of like, yeah, I might want that, just like I might want sugary foods or other things that I know are bad for me. But what they're missing is that even the folks who regularly use their platforms would come away with a better feeling and experience if they were -- if their intentional preferences were respected more. I talk to people all the time, you know, not just people like me who study social media, but just regular people, who are in the same cycle of deleting Instagram or TikTok from their phone and then redownloading it. You know, liking some of the things that it shows them, but hating some subset of content that it shows them. And, you know, that feeling of not having control over their relationship with and experience on these platforms.
Dan Fichter:So Chand, Marc, and Daphne Keller aren't just talking about dangerous Internet rabbit holes here -- although they study them, and worry a lot about them -- but here, they're talking about the everyday frustration of knowing there is more interesting stuff on the social platforms than what the algorithms feed us, because the algorithms are taking the low-risk, profit-maximizing bet of mostly feeding us junk food.
Dan Fichter:YouTube could expand your horizons a little bit, every time you open it, just like your favorite news site, or museum, or stand-up comedian might. But the YouTube algorithm -- that's not its MO. The promise of middleware is simply to let you very literally put that news site, museum, or comedian in charge of your YouTube feed, to see what they can turn YouTube into.
Dan Fichter:Even Instagram has, somewhere on it, gateways to parts of the world you might really be fascinated by, and art and architecture that might inspire you, and powerful social commentary, and announcements of events near you. But the Instagram algorithm's mission just isn't to guide you to that constellation of discovery and awe, so it typically doesn't.
Dan Fichter:And Twitter, despite the wrecking ball Elon Musk has taken to it, is still a place where well-informed people say what they think, and where people break news, and where primary sources pop out of the ether every single day, even though the Twitter algorithm seems dead set on, just -- well, Marc talked about the tech giants' theory that users don't like to change the settings in their apps. On Twitter -- of course, legally, it's now X, but for the sake of familiarity, I'll call it Twitter -- I follow close to 2,500 people. To follow 2,500 people, I clicked and tapped 2,500 times to try to make Twitter a worthwhile source of news and commentary. But when I open Twitter in 2025 and watch the videos its algorithm has stuffed into my feed, this is what it sounds like.
x.com:Did you hear about the woke gunman in Milwaukee?
x.com:So here's the deal. Candace Owens believes -- well, she knows that the French president is married to a trans woman, and not just any trans woman, but his own father.
x.com:The night of the twenty-seventh? Okay. And how did you end up with him?
x.com:I murdered him.
x.com:And you are acting in the shadows, and you are destabilizing nations, using race wars to do it.
x.com:You guys truly have no idea what a great country looks like. You guys are too wrapped up with your communist bull-, thinking everybody should have the same thing as everyone else. Well, I, for one, don't like that, because I work a lot harder than, say, my neighbor over here, and I have a lot more nice things than he does. And it's because I earn it. That's all you have to do. Nothing should be given to anybody, ever. I don't care.
Dan Fichter:All of that is from accounts on Twitter that I don't follow but that Elon Musk's algorithm wants me to hear anyway.
Dan Fichter:If middleware became real on Twitter, your feed could be totally different. You could tap someone like The New York Times, instead of Twitter's demented algorithm, to fill in your feed. And if you did, reporters and editors at the Times could flag tweets they find interesting, all day, from all over Twitter -- not just their own tweets -- and those tweets could become your main Twitter feed.
Dan Fichter:There would be dozens of other news organizations doing the same thing, and you could swap them in for the Times, or you could mix them in with the Times. Through middleware, you could also get your Twitter feed curated by a mix of think tanks or writers you find interesting, or you could switch over to algorithms from algorithm providers Elon Musk doesn't own, which could include interesting startups, and AI companies, and software foundations. Maybe one of them lets you just say or type what you want more of or less of in your Twitter feed, the way you talk to ChatGPT.
Dan Fichter:All of that would take you way less than 2,500 taps, and Elon Musk would no longer have the final say on what the world looks like through Twitter. Other middleware providers that you could just mix in might try to determine who's a real person versus a bot: an alternative blue check system that might actually be legitimate, weeding a lot of the AI bots out of the comments.
Dan Fichter:On YouTube, you might like what's in your feed more if it's hand chosen by someone you choose: the BBC, or your local library, or even a YouTube creator you like. Creators could also be curators now. They could make money not just on their own videos, but also by curating feeds of YouTube videos that they've scoured YouTube to find and bet you'll like, helping you see hidden gold on YouTube that the algorithm might never have steered you to.
Dan Fichter:And maybe you like TikTok's algorithm, but if you're ever feeling like it has you a little too dead to rights, and like you just want to peek outside your bubble, you could turn the TikTok algorithm off for a sec and try scrolling TikTok with recommendations powered by someone else.
Dan Fichter:And I just want to pause to say, this hopefully enticing fantasy of middleware -- it's not exactly where the academic discourse on middleware has been. The discourse started about four years ago with Francis Fukuyama, the Stanford professor best known for his work on geopolitics in the nineties -- someone whose background is not in tech, or in law, and whose name is more strongly associated with the neocon movement -- though he's since broken with that movement -- than with tech policy.
Dan Fichter:To Fukuyama's credit, he is the one who convened a working group half a decade ago that got the idea of middleware into academic circulation. But the way he describes and pitches middleware might help explain why so many people still haven't heard of middleware -- why the idea hasn't taken off.
Francis Fukuyama:The origin of this was a Stanford working group I established at the beginning of 2020. And as we started to think about what was really bothering people about the platforms, it struck us that the problem was really related to the way that the platforms -- these large platforms; basically just Facebook, Google, and Twitter -- had become the primary means by which Americans communicated politically with one another, and that this was very problematic. And because they were profit-making, they were not making good decisions. They wanted to maximize clicks and user attention, and therefore tended towards more salacious, outrageous material, and were therefore contributing to polarization, and the deterioration of discourse over the Internet.
Francis Fukuyama:I mean, like, the yoga moms all of a sudden adopting QAnon theories because some yoga guru, you know, decided that this was an interesting thing. So that was the origin of our concern.
Dan Fichter:That was four years ago, around when Fukuyama and his colleague wrote about middleware in the Wall Street Journal, and the concept hasn't gotten all that much play outside of academia and think tanks since then. Middleware isn't an idea that Ezra Klein talks about, even though he's got a lot to say about social media and tech policy. Jonathan Haidt isn't talking about it either, as much as the floor is his, in some circles, on public policy towards social media. And Chris Hayes's new book on social media and attention fracking, published in January, doesn't mention middleware. Why not?
Dan Fichter:Well, positioning middleware as a way to protect moms from online propaganda -- that's not a cause I would jump on if I heard that pitch. And even beyond what's wrong with singling out 'yoga moms', it's hard in general to make an idea about changing other people's relationship to media sound like something that's going to get mass appeal. If that's the pitch, it sounds paternalistic -- ironic, because the monolithic algorithms we're pushing against here are themselves paternalistic, as Chand and Marc like to point out. A push to democratize social media through middleware should be something that appeals even to people who are nervous about central control over the media, in all forms.
Dan Fichter:I was actually emailing about middleware with the secretary of the Libertarian Party in a red state. We'd once crossed paths working in tech. And when I asked if I could share some thoughts about middleware with her, here's what she said: sure; send your thoughts over, but I can already tell you why your idea is a mistake. She said -- and I'll quote her -- "even well-meaning regulations can end up suppressing voices or ideas that don't align with the current political narrative. If the government mandates these curation options, it could lead to censorship." Very on brand for a libertarian party leader, but I sent her my thoughts anyway, and she graciously read through what I sent.
Dan Fichter:And then she said, oh, I actually think I like it. Algorithms today do sort of have too much sway over all of us, she said. And she doesn't like that, so she liked my pitch for regulating middleware into existence, unusual as it is for a libertarian to get excited about new regulation.
Dan Fichter:But maybe the most substantive objection to middleware comes from an instinct that if we get to choose who curates our feeds, instead of just leaving it to Elon Musk, and Mark Zuckerberg, and ByteDance, we'll end up retreating further into our corners. We'll tap middleware curators who agree with us on everything we believe, and the door will shut on having our views challenged. We'll become more polarized.
Dan Fichter:Jonathan is helping to run a study out of Berkeley to test whether custom-made middleware that's designed with the intention of reducing polarization can do it. It's a controlled experiment where volunteers agree to use this particular middleware through their browsers.
Dan Fichter:So if the experiment's hypotheses are borne out, that won't prove that people prefer depolarizing middleware. It'll only mean that this kind of middleware reduces polarization if people use it. But it's an interesting study. The study is called the Prosocial Ranking Challenge, which you can read about at rankingchallenge.substack.com or in a Nature article published in March.
Dan Fichter:And Jonathan -- I didn't fully introduce him up top -- has degrees in computer science and journalism, taught a dual master's program in those two fields at Columbia, built software for investigative journalism, and has worked as an editor at the Associated Press. He was also a contributor to the latest academic paper on middleware, which came out in December. And in an interview last year on Dave Troy's podcast, Jonathan said some surprising things about filter bubbles and feedback loops -- about the notion a lot of us have that polarization is happening because we're all consuming media that only affirms our beliefs.
Jonathan Stray:Yeah. So feedback loops are very interesting to me because feedback loops are how you get the whole system to change. So let's talk about the echo chamber. So you mentioned -- or, filter bubble idea -- you mentioned Eli Pariser. Of course, that was his 2011 book where he popularized the idea. People have been talking about it earlier. Nicholas Negroponte was talking about the Daily Me in the nineties. There's a 1950s science fiction book which talks about a similar idea.
Jonathan Stray:And the idea -- I think most people are familiar -- is that these algorithms show you what you like. And if you're a political person, then you only like stuff on your side, and therefore, that's all you ever see, and that reinforces your belief. And then it does the same for the other side. So it's a good theory. An effect like this appears in simulations. Right? You can make very simple models that show effects like this. The challenge for the filter bubble hypothesis is that we don't -- we've now had almost fifteen years of research on this stuff, and we don't really see it empirically.
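The "very simple models" Jonathan mentions are easy to reproduce. Here's a toy simulation -- entirely illustrative, with made-up parameters -- in which an engagement-maximizing recommender always shows the most extreme item on the user's side, and exposure nudges belief toward what's shown. A slight initial lean drifts toward a pole: the filter-bubble effect appearing in simulation, even though, as Jonathan notes, it's hard to find empirically.

```python
import random

# A toy model of the filter-bubble feedback loop: purely illustrative,
# with made-up parameters, not an empirical claim about any platform.
random.seed(0)

belief = 0.1  # a slight initial lean (0 = neutral, -1/+1 = the poles)
for _ in range(50):
    items = [random.uniform(-1, 1) for _ in range(10)]  # candidate posts
    # the recommender shows the most extreme item on the user's side
    same_side = [x for x in items if x * belief > 0] or items
    shown = max(same_side, key=abs)
    belief += 0.1 * (shown - belief)  # exposure pulls belief toward it

print(round(belief, 2))  # the slight lean has drifted well toward the +1 pole
```

The loop is the whole mechanism: what you're shown depends on what you believe, and what you believe depends on what you're shown.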
Dan Fichter:In other words, social media algorithms could reduce polarization if that were their goal, but Jonathan feels there isn't really evidence that they cause polarization in the first place. And none of this really speaks to how moving from centralized algorithms -- where everyone uses the same algorithm, designed and operated by the corporation that owns the platform -- to a decentralized, democratized world of middleware could affect polarization.
Dan Fichter:So I'd start by bringing it back to you. Is a new algorithm or piece of middleware that feeds you nothing but videos bolstering arguments you already agree with on political topics -- is that really what you want your YouTube or Instagram feed to be?
Dan Fichter:It's not really what I want mine to be. And if you feel similarly, maybe we can relax a little about the worry that middleware will be polarizing. If you just want to learn more, laugh more, and hear more from people you know or find interesting on the social apps, maybe you'll find middleware that does that, if middleware becomes a thing.
Dan Fichter:If you're learning a language, and want videos in that language, at an appropriate level in that language, in your feed, middleware from a language learning platform could do exactly that, in a way the YouTube algorithm can't reliably do. Or if you want more anthropology videos than YouTube has been throwing your way, middleware from an anthropologist or museum could make your feed what you want it to be, when you want it to be that. Or if you just want climbing videos, like Marc does, that are picked by fellow climbers, and not by an algorithm that favors clickbait over real gold -- in other words, if you don't want your social media feed to be all politics, all the time, then maybe that's not what other people want out of social media either.
Dan Fichter:And anyone on the right who really does love the rage content -- the Twitter algorithm was made for that. I don't think any kind of new right-wing middleware could hold a candle to the right-wing algorithm Elon Musk has shaped at Twitter. And even for those people, is rage content really all they like? Don't OAN and Newsmax viewers occasionally change the channel and watch Animal Planet? The algorithm doesn't let them do that, but middleware could.
Dan Fichter:To take a worst-case scenario, if someone like Alex Jones ends up making his own middleware, handpicking outrageous tweets for people's feeds, then, sure, some people will eat it up. But, first, I'm not sure an Alex Jones feed would end up being any more outrageous than what the Twitter algorithm is already pushing into those people's feeds. And the Twitter algorithm gets to do it in secret. Advertisers don't get to see which tweets the algorithm is promoting, and media watchdogs have a very hard time figuring out which tweets the algorithm is promoting, let alone sounding the alarm. Twitter tends to sue watchdogs who attempt that, claiming their studies are not representative.
Dan Fichter:Speaking of Alex Jones: in October 2024, a Wall Street Journal study actually found the Twitter algorithm was promoting Alex Jones's tweets to new accounts the Journal's researchers had made. Unlike the Twitter algorithm, Alex Jones can't operate in secret. And if he wants to handpick tweets and offer a curated feed, we'll all get to see what he's up to. And advertisers, researchers, and maybe even new defamation plaintiffs will get a chance to hold Alex Jones accountable in a way the Twitter algorithm is rarely held accountable. And evidently, as that Wall Street Journal study implied, people like Alex Jones get a lot of help from the Twitter algorithm right now in getting popular. So if better thought-out middleware can spring up and gain market share, and the rage-driven algorithms begin to recede a little, the next Alex Jones might never get as big in the first place.
Dan Fichter:But, of course, middleware itself will only get big if the social platforms allow it to exist, and if there's an incentive for independent parties to make really compelling middleware. Jack Dorsey, who co-founded Twitter, spoke about this in February on In Good Company, a podcast by Norges Bank Investment Management.
Jack Dorsey:Right now, most of these Internet companies, and most of these Internet services -- the algorithm is driven towards one thing, which is maximizing impressions to maximize advertising revenue. If I can choose what algorithm filters my tweets and what I get to see and I have, instead of, like, an app store, an algorithm store, where I get to put these on my timeline, then I have a lot more agency in the content and the experience. And I think it ends up creating a much healthier relationship with the technology. And I'm not just speaking about Twitter -- this is for YouTube and for financial technologies.
Jack Dorsey:We are being programmed. These algorithms know our preferences better than us. That's only going to increase. How do we increase agency at the individual level? It's by choice. It's by being able to turn off the algorithm. It's by being able to choose different algorithms. It's by being able to make my own algorithm.
Dan Fichter:And you'll hear him flesh out the potential economics of middleware: why it could be worthwhile to make middleware, if the platforms ever allowed us to.
Jack Dorsey:You can sell the algorithms. You can subscribe to them. You can put more directed topical advertising on these algorithms, because they actually are programmed in a way that you're actually interested in the outcomes.
Dan Fichter:So maybe we could make the tech giants share revenue with middleware providers, or if not, maybe you'd pay a dollar a month to have your Twitter feed curated by someone you trust, like your favorite news outlet.
Dan Fichter:This could be a great deal for news organizations, incidentally, whose staff are already reading Twitter and noticing interesting tweets every day. Getting to put those tweets into a middleware feed that makes revenue -- that could be a real shot in the arm for news organizations.
Dan Fichter:And advertisers might pay a premium to advertise in those feeds. Those feeds would feel a lot more like trusted news outlets than my Twitter feed that you heard a few minutes ago. Would you rather advertise on that, or on a feed curated by CNN?
Dan Fichter:And although we're focusing mostly on the kind of middleware that would curate our feeds, people are theorizing about other kinds of middleware that could modify other aspects of the social platforms. Here's Jonathan again.
Jonathan Stray:You know, some of my colleagues are like, actually, I care less about the algorithm than about the user interface around it. Right? So, you know, I want to hide likes. A lot of people have the idea that, you know, maybe this is producing social comparison. It's affecting our mental health.
Jonathan Stray:Or I want to add a -- you know, Talia Stroud did this research a decade ago now -- if you add a respect button instead of a like button, people are more likely to click that when they disagree -- which, you know, respect means, yeah, I see -- I don't agree with it, but I see your point, and it's well stated and, you know, I acknowledge your dignity as a human being who has different opinions than I do. So, you know -- or you could try different types of misinformation labeling, or you could try, you know, screen time limitations. There's, you know, there's a hundred things you could try. So in principle, you could have middleware both at the content selection level and at the UI level.
Dan Fichter:But everything Jonathan's describing is software, whether it's picking out tweets for you, or doing something else. And I wanted to ask Jonathan: why this exclusive focus on middleware that is software? The original Fukuyama and team paper and last December's paper both describe a range of parties that could make middleware, but they'd all be making software.
Dan Fichter:The idea seems to be, we are in the algorithm age, so people are always going to choose algorithms over hand-tailored feeds of content, because algorithms are so much more powerful at giving us what we want -- because no human being picking out the most interesting videos on YouTube can really know what I find interesting better than a piece of software that's taking the time to get to know me. And that's describing the mass surveillance the social platforms do charitably.
Dan Fichter:I just don't know if that's true -- if algorithms win every time, and if we've stopped caring what other people find important now that we have algorithms surveilling and experimenting on us, trying to reveal our truest preferences.
Dan Fichter:On Twitter, if The New York Times found a hundred tweets a day they think are truly worth reading, would you really not want to take a look at that feed, at least some of the time? Would you really always trust a machine over human judgment? I put this question to Jonathan, and asked, what about that example where New York Times staff get to handpick tweets from all across the Twitterverse that they find interesting?
Jonathan Stray:The more interesting question is if you want that to be personalized. Right? So don't show me The New York Times's selection of interesting stories. Show me what The New York Times's carefully designed algorithm thinks is right for me.
Dan Fichter:And that might happen. The New York Times has engineers and data scientists. But what if I'm more interested in what Times reporters and editors find interesting on Twitter than what any algorithm finds interesting? It wouldn't be the craziest thing in the world if news reporters turned out to have a good eye for news.
Dan Fichter:I think we're missing out by not bringing people and institutions who curate information into the middleware conversation, as real constituents in this policy debate, who might want middleware to become a thing so they can create it, even if they aren't algorithm makers.
Dan Fichter:It's true, as Jonathan says, that our feeds can be incredibly hard to look away from when they're personalized by algorithms that have data on us. But as Chand argued, we don't necessarily like feeling addicted to our feeds. When we have a choice, we often choose a nonaddictive and more satisfying alternative to something that feels like junk food.
Chand Rajendra-Nicolucci:But I also think it might just be a blind spot. I think that people are accustomed to thinking about social media experiences as being driven by algorithms now. And I think that, you know, maybe that sort of default has just imposed itself on the academic and policy discussions around it.
Dan Fichter:And I asked him, how exactly do you wish your own YouTube feed worked?
Chand Rajendra-Nicolucci:For me, my YouTube experience? I mean, honestly, one of the things that first comes to mind is I might turn to the friends of mine who are more prolific YouTube users than me, but are always recommending videos that I should watch. And I always end up enjoying them more than, necessarily, whatever YouTube ends up recommending to me. Like, if I could say to YouTube, hey, you know, let my friends give me 50 videos to watch, or even just show me everything that my friends have watched in the last week, I might like that -- just copy whatever they're watching.
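Chand's wish is simple enough to sketch in code. Assuming, hypothetically, that a platform exposed each friend's recent watch history to middleware -- which no platform does today -- a feed like the one he describes could be built in a few lines: everything his friends watched in the last week, ranked by how many of them watched it. The function name and the shape of the data here are invented for illustration.

```python
from datetime import datetime, timedelta

def friends_feed(histories, now, days=7):
    """Return videos any friend watched within the last `days`,
    deduplicated, ranked by how many friends watched them."""
    cutoff = now - timedelta(days=days)
    counts = {}
    for friend, watches in histories.items():
        # Count each friend at most once per video.
        recent = {video for video, when in watches if when >= cutoff}
        for video in recent:
            counts[video] = counts.get(video, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Hypothetical watch histories for two friends.
now = datetime(2025, 3, 1)
histories = {
    "ana": [("climbing-doc", now - timedelta(days=2)),
            ("old-lecture", now - timedelta(days=30))],
    "ben": [("climbing-doc", now - timedelta(days=1)),
            ("cooking-show", now - timedelta(days=3))],
}
print(friends_feed(histories, now))  # → ['climbing-doc', 'cooking-show']
```

The hard part, of course, isn't the logic -- it's that the terms of service and APIs to get this data don't exist, which is exactly the policy gap this episode is about.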
Dan Fichter:And Chand points out that while some of his friends might spend a lot of time watching videos on TikTok or YouTube, mostly directed by the algorithm, it's different when it comes to picking movies. It's human judgment -- not algorithmic judgment -- that they turn to: often each other's judgment.
Chand Rajendra-Nicolucci:They are going to stuff like Letterboxd or, you know, their favorite online movie critic, or lists of the best 200 movies. And I think, yeah, it's such a great point that, you know, that same principle can be applied to social media more broadly.
Dan Fichter:My friends use Letterboxd too. And when it comes to music, they're often more interested in playlists curated by people they trust. Recommending a movie or a song to a friend -- that brings you closer together. If they like the thing you suggest, you've just done something nice for them, and now you've also got a new thing in common to talk about. It's prosocial. Middleware could help steer us back to prosocial ways of sharing. Here's Marc again.
Marc Faddoul:The data that you train the algorithm on are not quality ratings. They are engagement ratings, basically -- whether the user engaged or not with the video. But it doesn't tell you whether the user liked the video, or -- if you ask them a week later -- whether they wish they had watched that video at all, or would prefer to have spent their time differently. Like, do you remember anything about it? Did it bring any value?
Marc Faddoul:It's a very different question, and it's not a question that algorithms are equipped to answer, because they simply don't have this data. Humans, on the other hand, can use other proxies to determine that, including their own perception, their knowledge or understanding of the other person, what they might find interesting. And so, in fact, I think that human curation is an extremely important and valuable paradigm which is currently underleveraged for content recommendation.
Dan Fichter:And that could be especially true when it comes to a platform like YouTube Kids. YouTube Kids is widely reported to be not all that safe for kids. Precisely because the content on it is not handpicked by people, some really bad stuff can get through the filters. It's been shown to sometimes include sexual and violent videos, and even videos containing instructions for suicide. That wouldn't happen if a middleware provider -- which could be PBS, Nickelodeon, Disney, whoever -- were allowed to handpick YouTube videos they feel are great for kids and give parents those hand-tailored collections.
Chand Rajendra-Nicolucci:I think especially when it comes to kids, parents are really looking for online experiences that they can trust. I think you talked about, like, why couldn't PBS come up with a handpicked playlist for kids on YouTube? I think that's such a great point. And I think that parents would want that -- that they wouldn't necessarily want the uncertainty that comes with an algorithm that PBS built, versus knowing that somebody went through and actually watched and chose all of these videos. They might actually prefer the sort of human and organizational curation.
Dan Fichter:Marc also points to a public media group in Europe that has gone TikTok-like in its user experience, but without using algorithms.
Marc Faddoul:And if I can maybe give another example on this kind of media curation for social media: Arte, which is, if you want, the equivalent of the BBC -- a public service funded half by the French and half by the German governments --
Dan Fichter:-- that's arte.tv --
Marc Faddoul:-- which kind of is a standard of quality public service, has just launched, a couple of months ago, on their app, a TikTok-like feed where you can just scroll through, just like on TikTok, snippets of quality programs that they run, with also the link, if you want to see the full program. But then you see, kind of, the best snippets of these other programs. So I think this new feature of Arte kind of exemplifies the type of quality feed you could have if you put it in the hands of journalists, who are attuned to the culture that users might be looking for, and who can dedicate time and energy to this.
Dan Fichter:At the other end of the Luddite spectrum, one type of middleware that could be really responsive to what you want to see would be middleware that uses conversational AI -- where you could just have an ongoing conversation with ChatGPT, Claude, or Gemini, and give it instructions about what you want, and don't want, to show up in your feeds on TikTok, or YouTube, or the other platforms. It could remember your instructions, and you could always update them, and give it specific feedback in plain English.
Dan Fichter:Here's Chand again. And when he says LLMs, he's talking about large language models like ChatGPT.
Chand Rajendra-Nicolucci:It's actually an area where I'm excited about the rise of generative AI and LLMs. I think that there's potential there for a really simple interface for people to express their preferences, and maybe get to the point where instead of having to go in and adjust a bunch of dials, or check boxes, or whatever, you could just have an LLM on these platforms that you just say, like, hey, I want to see less entertainment content, and it will just go and, based on that, adjust your feed algorithm. And that seems to me like something that could be plausible, you know, in the nearish term.
Jonathan Stray:Yeah. So that's a very exciting idea, which a bunch of people are pursuing. It's an idea sometimes called conversational recommenders. So, yeah, rather than just clicking like or dislike, what if I tell the system in words what I liked or didn't like about that? And then that got added to, let's call it a system prompt that, you know, a large language model uses to evaluate content to decide what to show me.
Dan Fichter:I was actually a little surprised that the middleware paper released in December didn't touch on conversational AI, or conversational recommenders. I think it's a miss to talk about middleware and not draw the connection to things like ChatGPT that so many people use. Again, if middleware hasn't broken through as a mainstream idea, maybe it's because no one's written an op-ed describing how YouTube could be so much better if we were allowed to navigate it using AI. I think a lot of people would tune into that argument.
Jonathan Stray:Right. I think that's coming for sure. There's some technical challenges, and there's some economic challenges. Right now, it's too expensive for a large platform to run a large language model on every single post. Like, large language model-based recommenders are still expensive. But that'll change. And then you have the very interesting possibility of, like you say, programming a recommender, by telling it in language what you like and don't like. And I think that'll be very cool, and I think that'll be very, very useful.
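The loop Jonathan is describing -- feedback in plain words accumulating into a system prompt that a model then uses to score content -- can be sketched in miniature. Everything here is hypothetical: the class name, the crude 'more X' / 'less X' parsing, and especially score(), which is a cheap keyword stand-in for the per-post LLM call he says is still too expensive to run at platform scale.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationalRecommender:
    """Accumulates plain-English feedback into a standing 'system prompt'."""
    likes: list = field(default_factory=list)
    dislikes: list = field(default_factory=list)

    def tell(self, feedback):
        # Naive parsing of "more X" / "less X" -- in a real system, an LLM
        # would interpret free-form feedback instead.
        words = feedback.lower().split()
        if words[:1] == ["more"]:
            self.likes.extend(words[1:])
        elif words[:1] == ["less"]:
            self.dislikes.extend(words[1:])

    def system_prompt(self):
        # What a real implementation would hand to the language model.
        return (f"Prefer posts about: {', '.join(self.likes)}. "
                f"Avoid posts about: {', '.join(self.dislikes)}.")

    def score(self, post):
        # Keyword stand-in for the per-post LLM evaluation.
        text = post.lower()
        return (sum(t in text for t in self.likes)
                - sum(t in text for t in self.dislikes))

    def rank(self, posts):
        return sorted(posts, key=self.score, reverse=True)

rec = ConversationalRecommender()
rec.tell("more climbing")
rec.tell("less politics")
feed = rec.rank([
    "Senate politics roundup",
    "Trad climbing gear review",
    "Cat video compilation",
])
print(feed[0])  # → Trad climbing gear review
```

The design point is that the expensive piece -- evaluating every candidate post against the prompt -- sits in one small function, which is exactly the piece Jonathan expects to get cheap enough to swap a real model into.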
Dan Fichter:And as Marc said, the platforms used to have the excuse, we think users don't really want to mess around with settings in our app in order to get control over the algorithm. But conversational AI sort of takes away that excuse. ChatGPT proves that we're happy to tell a computer more about what we want -- sometimes a lot more -- if we know the computer will do a way better job helping us once it has that input.
Dan Fichter:The social platforms may eventually introduce conversational AI on their own -- their own AI -- but as for independent middleware, and every other form of middleware we've talked about, there's little chance it will happen outside of Bluesky. TikTok, Instagram, YouTube, and the rest -- they certainly don't want to share revenue with middleware providers, and they probably won't allow middleware voluntarily, because they don't want to give up control over what makes their platform so addictive: the algorithm.
Dan Fichter:But what if it turned out that making their algorithm your only option on their platform -- what if that weren't legal? What if there's an argument we haven't really made, but could make, and could prevail in making, that antitrust laws say you can't bundle or tie two independent products together, and force people to use both, and so you can't bundle a recommendation algorithm with a social platform?
Dan Fichter:They really are two separate products. One is sort of like a TV, and the other, a TV channel. TVs don't get to limit which channels we watch. And if they tried to, especially in favor of a channel owned by the TV maker, they'd face trouble in court.
Dan Fichter:The social platforms might argue: there's no antitrust problem here; our algorithm is just a natural feature of our platform, and it's inseparable from our platform.
Dan Fichter:But that doesn't really hold water. Platforms like Flickr and Vimeo host our photos and videos without filling our home screen with recommended content from an algorithm. And most of the largest social platforms didn't have feeds with algorithmic recommendations until years after they launched and became popular.
Dan Fichter:The platforms argue, when challenged about extreme or defamatory content they host, that they are just neutral pipes for their users' content, and that they can't be held responsible for what it says because they aren't publishers. But then they also turn around and argue, in different court cases, that their algorithms have free speech rights and are performing a kind of constitutionally protected expression when they pick out content to stuff into our feeds.
Dan Fichter:The only way both things could be true is if this set of pipes -- the content platform -- is one product, and the megaphone -- the recommendation algorithm -- is another. And antitrust law frowns on tying or bundling two very distinct products together.
Dan Fichter:Even if federal antitrust authorities are unlikely to make that case in the next four years, what if a private party could make the case in court -- someone like The New York Times arguing that the Twitter algorithm sort of works like a news desk, so a real news desk should have a fair shot at competing with it, through middleware?
Dan Fichter:Or what if, separately from all these antitrust arguments, American or European legislators simply said, we want to see an open marketplace of feed curation options on every social media platform, so we're just going to require the platforms to allow middleware, going forward?
Dan Fichter:Marc, as I've said, has a real voice in European tech policy circles. He's earned it. And part of why I'm so grateful he's joined us today is that he has a perspective we don't often hear on how Europe might do exactly that: might require social media companies to allow middleware to exist.
Dan Fichter:Future European legislation might essentially say, YouTube can no longer stop you from making an alternative to the YouTube app, with your own content recommender or marketplace of middleware built in, and giving users direct access to the whole world of YouTube videos through your app. Today, YouTube would get that kind of app shut down overnight. It's a violation of their terms of service to let users see YouTube videos outside the YouTube player. But Marc says that could change, through potential new European regulation that would require 'content interoperability' on social platforms.
Marc Faddoul:I don't think it will be a short-term future, because this type of discussion takes a lot of time to be voted on, especially when it's such a controversial kind of topic, where there will be really strong and intense lobbying.
Marc Faddoul:I don't think we will see this coming from the current US administration, which seems to be quite defensive and protective of its tech giants, now that they have all given allegiance to the administration.
Marc Faddoul:And so I think the European Union is kind of the only other credible regulator that could come up with such a regulation, and hopefully the capacity to enforce it, which is the other big challenge with this type of regulation -- to enforce it, and to actually make these extremely powerful platforms change their practices.
Marc Faddoul:But yes, there will be an opportunity in the European Union to put this on the table again through the Digital Fairness Act. The first discussions on the matter will happen in the next few months, and it's expected to be delivered in the next two or three years. And then we'll see, if this is included, how long it will take for it to be in place.
Marc Faddoul:I can mention another interesting example of interoperability through European regulation, one which is very beneficial for consumers: the USB-C port that you have on your Mac or your iPhone. For the longest time, Apple didn't want to put any kind of standard connectivity on its devices, because they could make money from selling specific chargers. But it was introduced in a European regulation -- Apple and others lobbied extremely strongly against it -- and eventually it passed. And now this is why, worldwide, we have USB-C chargers on every device, which I think every consumer around the world loves.
Marc Faddoul:But I think it's important to remember that regulation, in the end, is sometimes -- often -- the only way to put the users' interests above the commercial interests of the platforms.
Dan Fichter:That kind of regulation is so much likelier to come from Europe than the US, so I really hope you will follow Marc, as his work may be critical to bringing middleware into existence. You can find Marc's information in the show notes, along with Jonathan's, and Chand's, and mine, and I just want to thank all three guests again so much for sharing their ideas with you.
Dan Fichter:And before we close, one last thought: to get these ideas to break through, we need dreamers and communicators like Marc, Chand, and Jonathan to spell them out, in better ways than the discourse has done previously, and to put how people might love middleware at the center of explaining it.
Dan Fichter:And we also need good branding. 'Middleware', as a name -- I think the kids might call it 'mid'. 'Middleware' is a term that actually already means half a dozen other things in software engineering, which is a little confusing. And at any rate, 'middleware' doesn't inherently communicate a kind of revolutionary zeal to change the Internet.
Dan Fichter:Maybe a new name could help. So I want to humbly propose that the alternatives to the algorithm we're hungry for be called 'IOTAs'. An NPR IOTA, a ChatGPT IOTA, a Substacker's personal IOTA -- they'd be giving us something to use Instead Of The Algorithm, so I-O-T-A, IOTA. Just a thought.
Dan Fichter:Please stay tuned for an interview with a big city librarian about what a public library IOTA might look like. And if you want to come talk about IOTAs on your own episode, drop me a line. You can reach me at hucklemelon@gmail.com.
Dan Fichter:Thank you so much for listening, and talk to you soon.