My guest is Robert Shepherd, an artist, writer and thinker from Edinburgh (by way of Aberdeen) whose essay series ‘Our Ghost’ explores some deep and fundamental questions about artificial intelligence, agency, and what it means to be human.
Robert writes short stories and essays, as well as producing his own unique illustrations and artwork, and the odd bit of Doctor Who fan-fiction. We discuss his perceptive essays on consciousness and AI, his love for the Doctor, and much, much more.
Reading recommendations from Robert Shepherd

First, to get some context on our conversation, read Robert’s excellent essay series, ‘Our Ghost’ — named for the German term ‘zeitgeist’, the defining spirit or mood of a particular period of history, often understood as a synthesis of an era’s ideas and beliefs. Perhaps a good modern analogue for this is the concept of ‘vibes’. Here are some links to the whole series in order:
We start by briefly discussing The Darkest Timeline: Living in a World With No Future, my book published in 2024 by Revol Press. Robert shares some of my opinions on cyberpunk and the dystopian nature of our present moment.
We go on to discuss John Gray’s Feline Philosophy: Cats and the Meaning of Life, which was one of the books that inspired him to start an Instagram account for his illustration work. You can follow that account at robertdraws, and you can find some of his short fiction at his website robertbshepherd.com.
We touch on Susan Blackmore’s The Meme Machine, a 1999 book that develops and builds upon the concept of the ‘meme’ as a transmissible unit of cultural information, originated by Richard Dawkins in his 1976 book The Selfish Gene. In Blackmore’s view, human culture and even our personalities are little more than a ‘memeplex’ — a sophisticated, interlocking, sometimes contradictory bundle of inherited and discovered ideas that ‘infect’ the ‘host’ like a virus.
Blackmore’s is a chilly, mechanistic view of human nature, but one which has become even more important to consider in light of the past 25 years of online culture and behaviour — indeed, the very idea of ‘going viral’ owes much to Blackmore and Dawkins’ theories. The ‘selfish gene’ model of memes has been challenged over the years, largely because it is difficult to falsify — but it remains a powerful metaphor for the way human experience can be influenced, constructed, guided and predicted.
We connect this to Roland Barthes’ essay ‘The Death of the Author’, which explores some of the same themes that Robert highlights in his discussion of the death of God, and what that means for the act of creation, either as an author or as a more general figure of cultural production. Broadly speaking, Barthes challenges the notion that a single figure can be understood to have authored or created a work, suggesting instead that the evolution of ideas is something more akin to the error-infused method of horizontal transmission described by Blackmore and Dawkins. Robert also connects this to some thoughts on anthropomorphism in Terry Pratchett’s book The Science of Discworld, co-authored with science writers Ian Stewart and Jack Cohen.
We go on to discuss the ‘paperclip problem’ — a metaphor for instrumental convergence in artificially intelligent systems, first proposed by Nick Bostrom in a 2003 paper, ‘Ethical Issues in Advanced Artificial Intelligence’. Bostrom writes:
The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result… in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.
Robert connects this to some of Marx’s thoughts in Grundrisse, from the fragment on machines and machine-driven economies. Marx predicted that capitalism would become increasingly mechanised, and he had some salient thoughts on how this might affect workers. He writes (emphasis mine):
… once adopted into the production process of capital, the means of labour passes through different metamorphoses, whose culmination is the machine, or rather, an automatic system of machinery (system of machinery: the automatic one is merely its most complete, most adequate form, and alone transforms machinery into a system), set in motion by an automaton, a moving power that moves itself; this automaton consisting of numerous mechanical and intellectual organs, so that the workers themselves are cast merely as its conscious linkages. In the machine, and even more in machinery as an automatic system, the use value, i.e. the material quality of the means of labour, is transformed into an existence adequate to fixed capital and to capital as such; and the form in which it was adopted into the production process of capital, the direct means of labour, is superseded by a form posited by capital itself and corresponding to it. In no way does the machine appear as the individual worker's means of labour. Its distinguishing characteristic is not in the least, as with the means of labour, to transmit the worker's activity to the object; this activity, rather, is posited in such a way that it merely transmits the machine's work, the machine's action, on to the raw material — supervises it and guards against interruptions.
This presciently illuminates one of the central problems of a mechanised economy driven by artificially intelligent machines — workers cannot (for now at least) own the labour of machines, so by the logic of capital, they can be excluded from a share in its profits and outputs. The question of what workers can do in the face of automation is a spectre that has haunted capitalism (and anti-capitalist writing) since the dawn of the Industrial Revolution. Robert connects this to some criticisms he had of Yuval Noah Harari’s Nexus: A Brief History of Information Networks from the Stone Age to AI. As Robert puts it, previous information networks have had need of humans; artificially intelligent ones might not — which links back to Marx’s point above.
Other books Robert mentions include G.K. Chesterton’s Orthodoxy, a treatise on orthodoxy and heresy, which we discuss in relation to mortality and optimism. We finish with a discussion of Robert’s love for the perennial British science fiction TV show Doctor Who, which he frequently uses as a way to unpack philosophical ideas (for example in this essay from his Substack). In particular we discuss the ‘character’ of B.O.S.S., an artificial intelligence who locks horns with Jon Pertwee’s incarnation of The Doctor in ‘The Green Death’.
Next up: Alex Mazey
My next guest is Alex Mazey, an award-winning performance poet, and contributing researcher on sociology and postmodern theory for the international academic journal Baudrillard Now. We’ll discuss one of my favourite books of recent years, his excellent Baudrillardian treatise on the work of Lil Peep, Yung Lean and others, Sad Boy Aesthetics, published by Broken Sleep, and his more recent work in the fields of culture, poetry and Baudrillard studies. Join us in October!
Last week I published a new essay for Revol Press — a look back at some of the predictions I made in The Darkest Timeline, plus my thoughts on network states, AI, the ‘This Is Fine’ dog meme, and more. Check it out below.
Thanks for supporting Strange Exiles! I have new projects to announce soon, exclusively for subscribers — stay tuned.
-Bram, Glasgow, September 2025
Support my work:
Explore my writing: linktr.ee/bramegieben
Read my book: linktr.ee/thedarkesttimeline
Follow @strangeexiles for updates on Instagram and Twitter
