Episode 43: AI, Bias, and Capitalism — The Cost of Our Data
Artificial intelligence (AI) is often framed as a technological breakthrough. But behind the headlines is a deeper question: who owns the infrastructure shaping how we communicate, create, and understand truth?
In this episode of the Art of Citizenry Podcast, we slow down the AI conversation to ask harder questions – not just about what these systems can do, but about who built them, who profits from them, and what we give up by using them. Host Manpreet Kaur Kalra is joined by Vauhini Vara, author of Searches: Selfhood in the Digital Age and a longtime journalist covering Big Tech, to unpack the structural forces behind the AI boom.
Together, we explore how AI is less a neutral technology than a mirror of the economic and ideological forces that built it: a social system shaped by corporate incentives and embedded bias, one that quietly erodes our ability to define truth for ourselves. We also ask what it means for all of us when the infrastructure shaping that truth is privately owned, profit-driven, and constantly learning from us.
This isn't a conversation with easy answers. It's about sitting with complexity, and the uncomfortable reality that opting out is rarely simple.
In this episode, we explore:
AI as a corporate product, not a neutral tool. Large language models (LLMs) like ChatGPT and Claude are expensive to build and maintain, which means they will always reflect the goals, values, and profit incentives of the companies behind them. When we treat AI's outputs as authoritative, we risk outsourcing our sense of truth to privately owned, profit-driven infrastructure.
Language as labor. Writing isn't just output; it's expression, persuasion, and relationship. The monetization of language has made it a target for automation.
The legal landscape: copyright, consent, and accountability. We dig into the growing body of litigation around AI training data, including:
The New York Times v. OpenAI and Microsoft, a landmark copyright infringement lawsuit filed in late 2023.
Bartz v. Anthropic, a case brought by a group of writers against Anthropic, the maker of Claude, alleging that the company used large quantities of scanned and pirated books to train its system.
AI Bias (beyond the obvious). Bias in AI isn't just about harmful outputs. It lives in training data, design decisions, corporate incentives, and moderation policies. We explore selection bias, stereotyping bias, and the ideological bias embedded in how these platforms are trained.
How platforms learn us. Vauhini shares how she deliberately preserves her Google search history and why that archive, while personally useful, also gives Google an intimate and monetizable portrait of her life. The same dynamic applies across Amazon, Meta, OpenAI, and beyond: unless legally restricted (as in the EU), data collection is typically on by default, and most people don't know they can opt out.
From shareholder pressure to alternative funding models. Wikipedia, Firefox, Signal, municipal broadband – we ask whether it's possible to build technology aligned with public benefit rather than private profit. And we sit with the uncomfortable truth that imagining something different is harder when we're operating inside a system already shaped by capitalism's logic.
AI isn’t just a technical system. It’s a social and economic one. The outputs we see reflect the data they’re trained on, the incentives of the companies building them, and the broader political economy of the internet. If we want different outcomes from AI, the conversation must expand beyond engineering fixes to include questions of ownership, accountability, and power.
📌 Art of Citizenry is proudly independent. Support us as we critically explore, challenge, and unravel mainstream narratives by empowering listeners with accessible, nuanced perspectives.
Make a one-time contribution via PayPal: visit.artofcitizenry.com/paypal
Meet Our Guest: Vauhini Vara
Vauhini Vara is the author of Searches: Selfhood in the Digital Age, named a best book of the year by Esquire, Slate, and Publishers Weekly and a winner of the Porchlight Business Book Award. Her previous books are This Is Salvaged, which was longlisted for the Story Prize and won the High Plains Book Award, and The Immortal King Rao, a Pulitzer Prize finalist and winner of the Colorado Book Award. She is also a journalist, currently working as a contributing writer for Businessweek.
“Think of the central conflict, with technology, as being less about humans versus machines than about various groups of humans jockeying for power using machines.” – Vauhini Vara
AI and Capitalism
“AI products are always going to reflect the goals and values of the companies behind them.” – Vauhini Vara
When people interact with systems like ChatGPT, those systems learn, at least in part, from patterns of engagement: what users ask, what they accept, and what they push back on. Over time, that creates feedback loops in which the model increasingly reflects back what a user already believes, rather than challenging it.
“There's a dual approach to language production in which the product is serving the interests of its corporate owner, while using language that is rhetorically customized to be persuasive to you in particular.” – Vauhini Vara
AI Bias
When people talk about “AI bias,” they’re often pointing to one thing: biased outputs. Responses that reproduce racist, sexist, or otherwise discriminatory stereotypes. But what gets labeled as an AI “hallucination” is often something else entirely: the resurfacing of systemic prejudice embedded in data, design choices, and power structures. If we treat AI bias as a technical bug that can simply be patched, we risk repeating a pattern we know all too well: addressing symptoms while refusing to confront the structural conditions that produced the harm in the first place.
Select Articles Referenced in this Episode
Searches: Selfhood in the Digital Age by Vauhini Vara
Vauhini Vara's essay "Ghosts," co-written with GPT-3
Vauhini Vara's New Yorker piece on MFA students and AI-generated prose: What If Readers Like A.I.-Generated Fiction?
Vauhini Vara's Atlantic piece on ideological bias in ChatGPT's guidelines: ChatGPT’s Self-Serving Optimism
Harvard Business Review: When AI Amplifies the Biases of Its Users
Bartz v. Anthropic: I Sued Anthropic, and the Unthinkable Happened
Capitalism: A Global History by Sven Beckert
Key Terms
Large Language Model (LLM): A type of AI designed to process and generate human-like language by identifying patterns in massive amounts of text and predicting what comes next.
Hallucination: In AI contexts, outputs that contain false or fabricated information.
Selection Bias: When AI systems are trained predominantly on data from certain groups, leading to worse performance for underrepresented populations.
Filter Bubble: A feedback loop in which AI systems increasingly reflect back what a user already believes, rather than challenging it. Similar to an echo chamber.
Alternative Tools Run by Nonprofit Organizations
Wikipedia: Volunteer-run encyclopedia operated by the nonprofit Wikimedia Foundation
Firefox: Open-source browser developed by the Mozilla Corporation, a subsidiary of the nonprofit Mozilla Foundation
Signal: Encrypted messaging app run by the nonprofit Signal Foundation
Proton: Encrypted, privacy-focused email platform owned by the nonprofit Proton Foundation
Things Worth Doing After You Listen
Check your Google account settings to see what search history is being stored and decide whether you want to keep it or turn it off.
Review the privacy and data settings in the products you use regularly – Google, Amazon, OpenAI, Meta – since most tracking is on by default.
If you have a retirement account or mutual funds, it's worth knowing whether you hold shares in the companies you're critiquing and whether that changes how you think about accountability.
Support the Podcast
Art of Citizenry is 100% listener-supported. Every contribution helps us continue producing research-intensive episodes and paying our small but mighty team. If this episode resonates with you, consider supporting our work with a one-time contribution via PayPal: visit.artofcitizenry.com/paypal
Thank You for Listening
Please subscribe and leave a five-star review for Art of Citizenry wherever you listen to podcasts. You can also follow along and share your thoughts on Substack: artofcitizenry.substack.com