AI in Publishing: Will It Create Jobs or Merely Shift Power?
It’s a familiar refrain in the tech world: AI won’t steal your job, but will rather “augment” your role, freeing you from the monotony of repetitive tasks to focus on more creative, fulfilling work. The publishing industry, with its labour-intensive workflows and dependence on human creativity, is often held up as an exemplar of this narrative. But is this optimism grounded in reality—or yet another case of tech evangelism glossing over uncomfortable truths?
Let’s start with the premise that AI will “create better jobs.” While it’s true that automation can unlock efficiencies, it’s naïve to assume this process will necessarily benefit workers. In publishing, AI tools can already handle tasks like copy-editing, content curation, typesetting, and even rudimentary writing. These aren’t just “mundane” operations—they’re skilled jobs that have long sustained careers. The notion that AI will “free” workers assumes these workers will seamlessly transition to higher-order roles, but history tells us otherwise. In most industries, automation tends to do the opposite: it consolidates power and wealth among those who control the technology, while displacing workers who are left scrambling for fewer, often more precarious positions.
Publishing companies aren’t charities, and their adoption of AI is less about enhancing employee satisfaction than about achieving cost savings and competitive advantage. AI doesn’t just take over repetitive tasks—it enables companies to scale operations without scaling labour costs. For instance, automated editing tools allow publishers to reduce their reliance on expensive human editors. AI-driven content generation tools, like OpenAI’s GPT models, enable publishers to churn out vast quantities of content without the need for writers or researchers. The result? A leaner workforce, not necessarily a “better” one.
Even the supposed “creative” roles AI may open up—like AI trainers or prompt engineers—come with caveats. These positions will likely demand specialised knowledge and technical skills, effectively barring many displaced workers from re-entering the industry without significant upskilling. And let’s not forget the inherent irony: AI trainers are essentially teaching machines to one day replace their own expertise.
Then there’s the matter of power dynamics. By embedding AI deeper into publishing workflows, the industry risks becoming even more centralised, favouring organisations with the resources to develop or license cutting-edge AI systems. Smaller publishers, freelancers, and independent creators—already struggling to compete in an increasingly commoditised digital landscape—may find themselves further marginalised. The democratisation of publishing, one of the promises of the internet age, stands to be eroded as AI tools favour those with capital to invest.
Privacy and data security, too, are glaring concerns. AI systems in publishing rely heavily on data—whether it’s consumer behaviour analytics to predict trends or datasets to train generative models. This raises troubling questions about how publishers collect, store, and use personal information. For example, many AI systems are trained on publicly available text, but what happens when that text includes sensitive or copyrighted material? The legal and ethical frameworks governing AI use in publishing remain woefully underdeveloped, leaving room for exploitation.
The broader implications for education and literacy are also worth unpacking. If publishing continues down the path of AI-driven content creation, we risk flooding the market with formulaic, algorithmically optimised material. What does that do to the quality of information or the richness of storytelling? Will the next generation of readers grow up consuming content shaped by machine priorities rather than human creativity? And what happens to the writers, editors, and researchers who bring depth, nuance, and cultural diversity to publishing?
Institutions and organisations that depend on published materials, such as schools and libraries, should be asking hard questions. If AI-generated content becomes the norm, are we prioritising cost over quality? Are we inadvertently feeding students content that lacks critical rigour or ethical oversight? And how do we ensure equitable access to publishing tools when AI systems come with hefty licensing fees and proprietary models?
The reality is that AI adoption in publishing isn’t a simple case of job creation versus job elimination. It’s a shift in power structures and priorities, with complex implications for workers, organisations, and society at large. The promise of “better jobs” may be true for some, but for many others, it will mean navigating a landscape where their roles are diminished, their skills devalued, and their opportunities unevenly distributed.
Instead of framing the conversation around whether AI will “steal” jobs, we should be asking: Who benefits from this transformation? Who loses? What safeguards can we implement to ensure workers and creators aren’t left behind? And what kind of publishing industry do we want to build—not just for today, but for future generations? Until we start addressing these questions, the idea that AI will simply make jobs “better” remains little more than a comforting myth.