Review of The Ferryman




Adam McLain

Cronin, Justin. The Ferryman. Ballantine Books, 2023.

Separated into three islands—the main island, the Annex, and the Nursery—Prospera is a utopia cut off from the rest of the world. Created by the Designer to shelter the best of humanity, Prospera lets the inhabitants of its main island live paradisaically, pursuing whatever passion or desire drives them. This paradise does not mean that they do not work or live as mere mortals—they still age, if slowly; still work, just not menial jobs; and still die. But their deaths and births are unique: they arrive by ferry from the Nursery in a body in its late teens, capable of basic human functions but also able to learn avidly, and they leave by ferry for the Nursery when the health number on their monitors drops so low that they no longer enjoy life. This cyclical existence is guided by Ferrymen, like Proctor Bennett, who lead those at the end of the cycle back to the ferry.

Proctor’s life in Prospera is idyllic. He has a good fifteen-year contract with his current partner, Elise. Although their relationship is cooling after many years together, they are still happy and content. His job is fulfilling, challenging, and a point of personal pride. He wouldn’t do anything to change his life. This changes, however, when he takes his father to the ferry and his father has a catatonic breakdown, telling his son, “The world is not the world. You’re not you… Oranios. It’s all Oranios” (62). His father leaves by ferry, but Proctor’s world is forever changed. As he searches for answers to his father’s mumblings, he faces off against bureaucratic corruption bent on stopping him, class warfare building between the Annex and the main island, and the question of what his life over the last hundreds of years really means.

Utopia, space exploration, climate fiction—Cronin writes the genre tropes well. I connected to each character as their backstory was revealed, and I lingered over sentences meticulously crafted to enhance the experience. The sentences are lyrical and whimsical; at times I thought I was moving through an ethereal dream, only to be reminded that pain and strife still exist. Cronin’s use of the English language is the crowning achievement of this novel. But where I get stuck is that there is no innovation with the tropes: each reveal is satisfying, but it is also predictable, if one knows science fiction well enough. This critique does not necessarily diminish the book. I wouldn’t go so far as to say that Cronin’s goal in the project is to subvert or expand tropes or send the genre along a new path. It is a text that is beautifully written, like Samuel R. Delany’s work, but also one that is not so concerned with generic questions, unlike Delany’s work.

Cronin’s explorations of utopia, turning point theory, simulation theory, climate catastrophe, and space travel are not meant to plumb new depths in those subjects; instead, and this is the beautiful part of the book, he centers these grand ideas not on the ideas themselves but on the characters who enact them. His book becomes a meditation on relationships (parent to child, person to person, manager to employee) that left me rethinking my own relationships and my approaches to them. The central struggle of the book is with the loss of loved ones, not just a fight with authority, a quest for truth, or the revelation of survival.

However, after reading the book twice, I’m not entirely sure what the central message of the project is when it comes to the larger, systemic issues it presents. Along with its meditation on relationships, the book offers messages about class struggle, environmental destruction, and existence through meditations on simulation theory. These systemic questions become lost in the deployment of the tropes because Cronin does not emphasize one over another; instead, he lets the tropes play out as they normally might in a blockbuster science fiction story. The critique of class, for example, is limited in its execution because it presents the same rich-vs.-poor dynamic that many utopias and dystopias exacerbate. The struggle leads to action—the oppressed of the Annex begin marching on the privileged of the main island—but Cronin doesn’t provide readers with enough context to give this struggle any heart or depth. A scene between the main character and his housekeeper illustrates this reading: Proctor converses with his housekeeper from the Annex about her son and realizes that he barely knows anything about the boy, even though he promised to take him sailing. The scene shows the class separation and inhumanity between the wealthy of the main island and the working class of the Annex, but it barely goes beyond that presentation. The housekeeper later smuggles Proctor information that helps him sneak out of the Nursery, but beyond that, the book leaves this class relationship alone, leaving its message about class largely unrealized. The final message on class, with the climactic reveal at the end, seems to be that class struggle brings about social change, since the Annex’s revolution against the main island ends in social upheaval, but I am still unsure what the smaller moments contribute to these broader, systemic arguments.

Even as I struggle with how little Cronin does with the tropes he’s working with, I also think that this deployment of tropes could be seen as a strength of the text: the book is marketed toward a wider audience than academics discussing genre, and as such, its use of the tropes makes The Ferryman an easier starting point for the genre of the science fiction epic. When I went to the local Barnes and Noble to ask about Cronin, the bookseller took me to the horror shelves, because “that’s where Cronin is usually shelved,” even though there is nothing remotely close to horror in The Ferryman. The bookseller was thinking of Cronin’s earlier work, especially the 2010–2016 Passage trilogy, a post-apocalyptic, zombie-vampire series. But even shelved in that genre, The Ferryman, through its use of utopia and climate fiction, can introduce this side of genre fiction to new readers. Thus, I recommend The Ferryman as a strong entryway into science fiction rather than a complication of it. It can begin a conversation rather than continue one, a good place to start an undergraduate or graduate course on utopia, futurism in science fiction, or climate fiction and space expansion.


Adam McLain is a Ph.D. student in the English department at the University of Connecticut. He researches dystopian literature, legal theory, and sexual justice.

Review of A Half-Built Garden




Jeremy Brett

Emrys, Ruthanna. A Half-Built Garden. Tordotcom, 2022. Paperback. 340 pg. $18.99. ISBN 978-1-250-21099-9.

After enough time, one might be forgiven for thinking that there can be no new First Contact stories to tell. It is a truly rare event when an author takes a classic sf trope and spins it in a new direction infused with existential social and political relevance. Ruthanna Emrys already accomplished this sort of literary shift in her “Innsmouth Legacy” series, in which she infused the classic Lovecraftian universe of cosmic horror with empathy and feeling for the marginalized, in opposition to the racism endemic to Lovecraft and his era. With A Half-Built Garden, Emrys brings modern and lasting concerns for the future of humanity and Earth (which the novel takes pains to point out are not, to certain people, the same thing at all) to a wholly unusual and thoughtful story of alien encounter.

In 2083, the Earth has been engaged for several decades in a radical moment of social, political, and corporate restructuring. Nation-states have been replaced or supplemented by networks centered on the maintenance, restoration, and care of environmentally critical watersheds. The rampant capitalists who ravaged the planet in the 20th and 21st centuries have for the most part been reduced to small island enclaves, connected to the watershed networks and traditional governing structures in uneasy alliances of trade and supply. The networks, which sprang into existence as part of the Dandelion movement (the image, of course, suggesting seeds spread by the free flow of the wind), govern themselves through collaboration, consensus, and intimate communication rooted in problem-solving. The Dandelion networks devote themselves to repairing what had been so desperately, horribly broken in the world by capitalism and nationalism. Adaptation and harmony are increasingly default human values, and for the first time, despite ongoing struggle, there is hope.

And then the aliens landed. So goes the cliché, but one thing that makes Emrys’ novel so particularly remarkable is the response from this altered world. The novel avoids chronicling an all-out defensive reaction from the militaries of the world, frenzied government scrambling, or mass panic (in fact, among the most striking aspects of the book is the immediate acceptance by humans of the aliens as they walk among us), though neither is fear entirely absent. Those responses are replaced instead with curiosity, acceptance, honest attempts at connection and friendship, attempts at exploitation (by the capitalists), and even sexual exploration (by protagonist/narrator Judy and her wife Carol with the alien representative Rhamnetin). Alien encounters, Emrys posits, bring out the full range of human behavior in people; we are not limited to our most atavistic responses. This attitude of optimism denotes the novel’s true throughline. We see it from the very opening, in which Judy notes, “In the bad old days (the commentary said later), nation-states had plans laid in for this sort of thing. They’d have caught the ship on satellite surveillance. They’d have gotten in the ground with sterile tents and tricorders and machine learning translators, taking charge. In a crisis, we still look for the big ape.” However, “instead of a big ape shouting orders, the world got me” (1). Humble Judy, of the Chesapeake Bay Watershed Network, becomes the Earth’s first ambassador to alien life after stumbling across a crashed spaceship – and, as she points out, “That would have been a good time for cynicism – for someone to ask if we believed them, or if their definition of peace looked anything like ours. But no one wanted to spoil the moment of joy. We didn’t want to play nation-style realpolitik, or be properly mature and suspicious. We wanted to talk. However complicated things got afterward, I still can’t regret that” (6). In A Half-Built Garden, hope in peaceful connection is a precious resource and a defense against a hostile universe.

And that hope is crucial, because the aliens have brought a choice that seems to be no choice. The aliens comprise multiple species (represented on this mission by the spider-like “tree folk” and the more insectile “plains folk”) from the Rings, a system of artificial worlds that exists because the Ringers have determined that all intelligent life inevitably destroys its own homeworld and must go into space to survive. Having discovered humanity before it’s too late, they bring an offer—really, more of a predetermined conclusion—to evacuate the planet and move humans out to the stars. The corporates jump at the chance, ready to leave Earth and reestablish their shattered traditions of dominance and power among new, alien markets. Nation-states (represented here mainly by NASA as the avatar of a reduced American government) are driven by curiosity and excitement to see what’s out there. However, the Dandelion networks have invested decades of work in trying to stabilize and repair the environment, and Judy, Carol, and the people who comprise those networks are not prepared to surrender the home for which they have fought so hard. The novel turns on this existential decision, and on the multiple conversations between and among humans and Ringers about humanity’s future. Emrys places the need for radical and trusting connection at the story’s center: the crucial importance of reaching for understanding across vastly divergent mindsets and motives.

A debate between the Ringers and Dandelion representatives towards the end of the novel summarizes the differing views of the universe that each party holds. Judy, the descendant of a traumatized humanity that teetered on the verge of self-destruction (as well as being Jewish and therefore a custodian of a tragic tradition of forcible wandering), points out that “It’s good to live in a time when we have a time we can love. Someplace we can afford to grow attached to.” One of the Ringers, Glycine, responds, “But many of us believe you have to drag people out of a burning building, whether they love the building or not. The question is whether Earth is burning.” Judy’s friend and colleague Atheo fires back, “It’s burning…Well, it’s true. But we’re getting the fire under control. It’s a matter of whether you trust us to know the resilience of our own home, whether you treat us as adults who can calculate our own risk rather than kids who don’t know any better” (256). Emrys follows the traditional pattern of a story of alien contact in casting it as a moment for exploring the nature of humanity in the face of an overwhelming and world-changing event; her twist is presenting it as a time of choosing, not merely whether humanity will survive at all, but how and where. She asks: is our home planet, the only home we have ever known as a species, integral to our identity? Will we be the same, and if not, how will we change, if we actually leave Earth and become part of a wider universe? Most critically, if motives are so different, can a true symbiosis between species, and the creation of new families and alliances, be achieved?

The novel proposes that an informed exchange and sharing of ethical values, together with a recognition of differences among ourselves, is the key to effective symbiosis and the bridging of ideological divides. Judy at one point speaks of “the value and the means to achieve it. I’m trying to tell you [the Ringers] that we share the value. Our ancestors either didn’t share it, or didn’t act on it, but we do. And we do because we’ve developed technology for not only identifying our values, but for consistently acting on them” (318). And the trans character Dori offers her coming out as a gift to the Ringers, noting that her parents loved who and what she was more than what they expected her to be. She tells the Ringers that “we can use your gifts in ways you don’t expect, too—if you can cope with us using different means to achieve our shared values. Your technologies for making habitats livable could help save Earth…Symbiosis with Ringers could give us both new tools, new ways to survive in a cold universe” (319). The Dandelions ask an alien society averse to risk and afraid of catastrophe to take a chance on humanity’s potential and its promise, to let systems go unconstrained. In that request lies the continuation of the hope and determination that brought humanity out of its age of power and into the late-21st-century age of nature. The “half-built garden” of the title, we find in the end, is not only an Earth slowly and gradually being wrested from destruction, but a species just beginning to understand its possible role in a new and symbiotic galaxy.


Jeremy Brett is a Librarian at Cushing Memorial Library & Archives, where he is, among other things, the Curator of the Science Fiction & Fantasy Research Collection. He has also worked at the University of Iowa, the University of Wisconsin-Milwaukee, the National Archives and Records Administration-Pacific Region, and the Wisconsin Historical Society. He received his MLS and his MA in History from the University of Maryland – College Park in 1999. His professional interests include science fiction, fan studies, and the intersection of libraries and social justice.

Review of The City We Became




Heather Thaxter

Jemisin, N.K. The City We Became. Orbit, 2020.

“Cities really are different. They make a weight on the world, a tear in the fabric of reality, like… like black holes, maybe.” The opening words of N. K. Jemisin’s 2020 novel The City We Became provocatively hint at the liminality of the spaces occupied by people and cityscapes. The very essence of a city is created by the nuances of its residents and the ways in which they interact with each other and with the material objects that make up the topography of that specific space. Jemisin often addresses this interdependent tension in her works by implementing a kind of literary stratigraphy, uncovering layers of complex systems and external factors that determine the identity of any given city. In The City We Became, Jemisin develops her original short story, “The City Born Great,” which was published in her 2018 collection How Long ‘Til Black Future Month?

The protagonist of the short story, an unnamed, black, homeless youth, is chosen by the city as a midwife to assist in New York’s birth. The City We Became picks up the narrative after a difficult and not entirely successful birth, leaving the character in a coma-like condition, hidden beneath the surface both literally and in terms of plot. While this character remains fragmentary and elusive, five avatars, each representing a different borough of New York, take center stage in the quest to deal with the postpartum complications. These avatars capture the diverse collective characteristics of the constituent parts that make up the whole city, and, although they are drawn together to defeat the mysterious and menacing enemy who appears in the guise of an almost translucent “woman in white,” their individual differences cause friction, as they are territorial and defensive.

These differences are reflected in the ways they communicate: Bronca (the Bronx) speaks through art; Brooklyn (Brooklyn) via political language and the rhythm of hip hop; Padmini (Queens) utilizes mathematical equations; Manny (Manhattan) employs violence, particularly in his previous iteration, and the language of economics; and Aislyn (Staten Island), lacking a voice, has no means of communicating effectively and is easily manipulated by the enemy.

Jemisin draws on the familiar tropes of speculative fiction and Afrofuturism—supernatural beings, myths, and spatio-temporal liminal gaps, in this case portals to multiverses—to reveal the fragile nature of this emerging city and the potential for other histories, existences, and futures. Interestingly, the avatars have hallucinatory visions of another reality of New York, although they don’t physically enter it. Jemisin plays with the theory of multiverses attempting to overlay each other in a palimpsestic manner. Bronca, the First Nations character, serves as the storyteller who explains the idea of many worlds, which resonates with Neil deGrasse Tyson’s explanation of the hypothesis (Science Time); she then goes on to outline how worlds are constantly created through imagination (Jemisin 302).

The topography of the boroughs, islands separated by water and bridges, mirrors the flickering, “peculiar dual-boot of reality,” whereby people and places are connected and disconnected by perspectives (32). It is this apparent glitch between worlds or realities that is presented as being dangerous to the city’s “becoming” and the population who make up the city’s identity. Tendrils of white ominous nubs rise from cracks in the asphalt and seep into the “normal” New York, threatening to contaminate and obliterate that version of reality.

Explicit references to H. P. Lovecraft’s bigoted view of non-white people are made through an alternative reality, a city whose identity is produced by a specific, limited worldview represented by the sinister “woman in white” (the embodiment of Lovecraft’s demonic R’lyeh). The only avatar to align herself with this perspective is Aislyn (Staten Island), as she is already stunted by fear and self-imposed isolation. It is not surprising that Aislyn is the only white avatar, as she represents the insidious effects of racism that run counter to, and are threatened by, the diversity of the population.

The woman in white determines that the “acculturation quotient is dangerously high,” and this is the sticking point for those like Aislyn whose phobias close off their minds to embracing difference (96). A city is born when “enough human beings occupy one space, tell enough stories about it, develop a unique culture, and all those layers of reality start to compact and metamorphose” (304). Jemisin draws on the history of Staten Island to highlight its arbitrary and tenuous connection to the city, hence its reluctance to support the other boroughs and protect the vulnerable primary avatar. The enemy, itself a city from an alternative reality, eventually becomes caught between realms. This sense of in-betweenness is the crux of the narrative: what could or would be if other dynamics were more dominant. In a final attempt to anchor itself into existence, the enemy clings to Staten Island, thus opening the way for the second book in the series, The World We Make.

Jemisin expertly captures the essence of what makes New York the city it is and creates complex, imperfect characters who embody that spirit. Her insight into the relationship between humans and the cityscapes they occupy is unique, positioning her as an award-winning, leading author in this genre. Not only has she been nominated for and won numerous awards, including the Locus, Nebula, and Tiptree, but Jemisin is also the only author to win three consecutive Hugo Awards for Best Novel, as well as a recipient of a MacArthur Fellowship (2020). She deftly incorporates her observations and experience of living in New York to reveal possibilities and challenge realities. The City We Became addresses many of the issues faced by modern-day populations in a way that is familiar, understandable, and raw, but, importantly, hopeful. The energy that overcomes the enemy emanates from the city itself, its sights and sounds mimicking a heartbeat. Once again, Jemisin adeptly peels back the layers to reveal the soul of the city in a way only she can.

WORKS CITED

“The Multiverse Hypothesis Explained by Neil deGrasse Tyson.” YouTube, uploaded by Science Time, 28 Nov. 2020, https://www.youtube.com/watch?v=h6OoaNPSZeM.


Heather Thaxter is a PhD candidate by published works, which include book chapters in The Bloomsbury Handbook to Octavia E. Butler and Introduction to Afrofuturism: A Mixtape of Black Literature and Arts. Heather’s research interests are Afrofuturism, postcolonial studies, and speculative fiction, with a special interest in Octavia E. Butler. Currently employed as a lecturer, Heather also serves on the Editorial Review Board of Essence and Critique: Journal of Literature and Drama Studies. ORCID: 0000-0001-9473-6200.

Winter 2024


SFRA Review, vol. 54 no. 1

From the SFRA Review



Ian Campbell

I have just been informed that the American right wing has declared “holy war” on Taylor Swift. There’s a part of me that enjoys our cyberpunk-lite, unevenly distributed future; I can’t imagine it will go well for the American right wing; the only thing missing is that Taylor Swift is an actual breathing human being and not a hologram personality analogue driven by AI. That’s set for Winter 2025, I don’t doubt. The sudden, and soon to be still more sudden, advent of AI already appears to be “disruptive”, which, as any cyberpunk fan will know, means that it will funnel still more money and power to the top and leave a great number of knowledge workers with a clear understanding of how they could have fought this back when the robots came for the factory workers.

This issue of the SFRA Review contains a long and marvellous essay by Jo Walton, “Machine Learning in Contemporary Science Fiction.” It is worth reading in its entirety, so I will not spoil it for you save to note that very little in SF that concerns AI has much to do with the hyperreality whose advent has only just begun. It’s the way in which Walton makes this point that’s worth savoring.

Our symposium on socialist SF will appear in the May issue; in this one, we chose to center Walton’s thoughts on AI. We welcome your thoughts on AI and will be pleased to publish well-formed responses to Walton or readings of other works of SF through his framework. The same goes for nearly any other aspect of SF, or reviews of same. Write me at icampbell@gsu.edu.



From the Vice President


SFRA Review, vol. 54 no. 1

From the SFRA Executive Committee



Ida Yoshinaga

Greetings Science Fiction Research Association comrades! Hope you’re soon to enjoy a sustainable, kind, and productive Year of the Wood Dragon.

As we head towards our first-ever Estonian conference in early May, I have two announcements:

2023 Support a New Scholar Awardee

The Track B (non-tenure-track Ph.D.) recipient for the 2024–25 SNS award cycle, who will receive two years of free SFRA membership starting this year, is ecohumanities scholar and writer Dr. Conrad Scott, the first Postdoctoral Fellow sponsored by the Social Sciences and Humanities Research Council at Athabasca University, where he researches and writes on plant and animal futures in literature.

Ecologically detailed texts Dr. Scott currently works with in this position include Douglas Coupland’s Generation A (2009), Michael Christie’s Greenwood (2020), and Jeff VanderMeer’s Hummingbird Salamander (2021), as well as Clara Hume’s work (2013’s Back to the Garden and 2022’s Stolen Child).

Dr. Scott is omnipresent among early-career researchers in environmental-sf studies, co-editing the upcoming Utopian and Dystopian Explorations of Pandemics (2024) in Routledge’s Environmental Humanities series and co-organizing the 2021 Cappadocia University conference, “Living in the End Times,” which generated that volume, as well as the 2024 migrations conference of the Association for Literature, Environment, and Culture, of which he is co-president. He is well known among sf scholars for his service as well as his academic work, garnering both Science Fiction Film and Television’s 2021 Award for Outstanding Journal Reviewers and the 2019 SFRA Graduate Student Paper Award. Dr. Scott’s research on the Anthropocene has appeared in Paradoxa (2019–20, “Climate Fictions”) and The Anthropocene and the Undead: Cultural Anxieties in the Contemporary Popular Imagination (Lexington Books, 2022), and he will soon publish on plant and animal SF, also for Routledge Environmental Humanities.

While Dr. Scott’s literary analyses of Indigenous speculative fiction related to environmental issues can be found in Transmotion (2022’s “Global Indigenous Literature and Climate Change” issue), Extrapolation (2016), and The Routledge Handbook of CoFuturisms (2023), he has also evolved as a creative writer (following up his 2019 poetry collection Waterline Immersion with a first novel soon!) and a globally impactful scholar: his academic work has now been translated into Romanian, and he is contributing proofreading skills to the first English translation of a Turkish SF anthology from London Transnational Press. We are impressed with this justice-oriented thinker, who has been active in the SFRA—attending our annual conferences almost every year recently and sharing Canadian goings-on in the speculative arts and ecohumanities as our country representative for that region.

Thanks to the Track A (Ph.D. student) SNS awardees, Nora Castle, Yilun Fan, and Terra Gasque, for helping us make this decision, and to all candidates who applied!

DEI at SFRA 2024

For the Executive Committee-sponsored Diversity, Equity, and Inclusion session of SFRA’s Estonia meeting—which will be hybrid (at U Tartu and livestreamed)—this year’s focus is gender and sexuality in the speculative arts. Watch for this meaningful conference event in your program.

Mahalo,
Ida


From the President


SFRA Review, vol. 54 no. 1

From the SFRA Executive Committee



Hugh O’Connell

I want to use my column this issue to talk about some ways to get more involved with the SFRA. We have a number of positions at the organizational level—appointed and elected, immediate and forthcoming—that we are looking to fill. Coming up for immediate appointment are the positions of Web Director and Outreach Officer (information about each position below). A little further down the line, we’ll be sending out official calls for candidates to run for the elected Executive Committee positions of Vice President and Treasurer. The SFRA is an entirely unpaid, volunteer-run organization, and we depend on our members’ enthusiasm and generosity with their time and skills to keep the wheels turning. So, if you are someone who is looking to get more involved in running and shaping the organization (or you know someone who might be), please take some time to look over and share the calls for volunteers below.

Positions for Immediate Appointment

SFRA Web Director (unpaid volunteer, appointed position)
The web director position is particularly pressing, as our current web director is unfortunately moving on from the position imminently. Here is how the SFRA bylaws describe the role of the web director:

The office of the web director shall be responsible for the maintenance of the SFRA website. The web director will report to the Executive Committee and will update the contents and format of the website as deemed appropriate by the Executive Committee. The web director will be appointed by the Executive Committee, and will serve an open-ended term, which can be terminated by either the web director or the Executive Committee. The web director shall not be a member of the Executive Committee.

Our current web director provided this list of the usual tasks performed by the position:

  • Assisting users with any technical issues relating to logins and memberships
  • Uploading any new or updated content for the website
  • Updating the expiration dates on the membership at the end of each year
  • Adding new pages and memberships each year for the annual SFRA conference
  • Implementing a voting system (for example, using MailPoet) for any SFRA membership votes
  • Keeping site plugins and the WordPress version up-to-date

SFRA Outreach Officer (unpaid volunteer, appointed position)

The second position, outreach officer, has remained unfilled since its creation. Here is how the bylaws describe the outreach officer:

The outreach officer will organize, in coordination with the vice president, the various internet and social media outlets, in order to publicize and further the goals and mission of the organization. They will also be responsible for seeking opportunities for collaboration and outreach with other scholarly organizations, especially organizations that serve populations that have historically been underrepresented in SFRA. The outreach officer will be appointed by the Executive Committee and will serve a three-year term, which can be terminated by either the outreach officer or the Executive Committee. The outreach officer shall not be a member of the Executive Committee.

If you have questions about either position, please reach out—and we would love to see your application. Working with the SFRA has been one of the highlights of my academic career; the sense of camaraderie and openness is highly rewarding. If you are interested in serving as the next web director or outreach officer for the organization, please send a (short!) letter of interest and a CV to hugh.oconnell@umb.edu.


Machine Learning in Contemporary Science Fiction


SFRA Review, vol. 54 no. 1

Features


Machine Learning in Contemporary Science Fiction

Jo Lindsay Walton

“To suggest that we democratize AI to reduce asymmetries of power is a little like arguing for democratizing weapons manufacturing in the service of peace. As Audre Lorde reminds us, the master’s tools will never dismantle the master’s house.” –Kate Crawford, Atlas of AI

“Why am I so confident?” –Kai-Fu Lee, AI 2041

Suppose There are Massacres

Suppose there are massacres each day near where you live. Suppose you stumble on a genre of storytelling that asks you to empathize with the weapons used by the murderers. Confused by this strange satire, you ask the storytellers, ‘What’s the point of pretending these weapons have inner lives?’ They readily explain: it is mostly just for fun. However, there are serious lessons to be learned. For example, what if ‘we’ — and by ‘we’ they mean both the people wielding the weapons, and the people getting injured and killed by them — what if we one day lost control of these weapons? Also, in these stories, the anthropomorphic weapons often endure persecution and struggle to be recognized as living beings with moral worth … just like, in real life, the people who are being massacred!

Disturbed by this, you visit a nearby university campus, hoping to find some lucid and erudite condemnations, and maybe even an explanation for the bizarre popularity of these stories. That’s not what you find. Some scholars are obsessed with the idea that stories about living weapons might somehow influence the development of real weapons, so much so that they seem to have lost sight of the larger picture. Other scholars are concerned that these sensationalizing accounts of the living weapons fail to convey the many positive impacts that similar devices can make. For example, a knife has uses in cooking, in arts and crafts, in pottery, carving away excess clay or inscribing intricate patterns. In snowy peaks, a bomb can trigger a controlled avalanche, keeping the path safe for travelers. In carpentry or in surgery, a saw has several uses. Even the microwave in your kitchen, the GPS in your phone, and diagnostic technologies in your local hospital have origin stories in military research. These are only a few peaceable uses of weapons so far, the scholars point out, so imagine what more the future may hold. Eventually you do actually find some more critical perspectives. But you are shocked you had to search so hard for them.

Science Fiction and Cognition

The small preamble above is science fiction about science fiction. Just as science fiction often aims to show various aspects of society in a fresh light, this vignette aims to show science fiction about AI in a fresh light. The reason for talking about weapons is not just that AI is directly used in warfare and genocide, although of course that’s part of it. But the main rationale is that the AI industry is implicated in a system of slow violence, one which perpetuates economic inequality, and associated disparities in safety, freedom, and well-being. It is part of a system whose demand for rare minerals threatens biodiversity and geopolitical stability, and whose hunger for energy contributes to the wildfires, famines, deadly heatwaves, storms, and other natural disasters of climate change. These are not the only facts about AI, but they are surely some of the more striking facts. One might reasonably expect them to loom large, in some form or other, in science fiction about AI. However, in general, they don’t.

This vignette is written to challenge a more optimistic account of science fiction about AI, which might go as follows: science fiction offers spaces to examine the social and ethical ramifications of emerging AI. As a hybrid and multidisciplinary discourse, science fiction can enliven and energize AI for a range of audiences, drawing more diverse expertise and lived experience into debates about AI. In this way, it may even steer the course of AI technology: as Chen Qiufan writes, speculative storytelling “has the capacity to serve as a warning” but also “a unique ability to transcend time-space limitations, connect technology and humanities, blur the boundary between fiction and reality, and spark empathy and deep thinking within its reader” (Chen 2021, xx). Anticipatory framings formed within science fiction are also flexible and can be adapted to communicate about and to comprehend emerging AI trends. Of course, science fiction is not without its dangers; for example, apocalyptic AI narratives may undermine public confidence in useful AI applications. Nevertheless, it is also through science fiction that the plausibility of such scenarios becomes available to public reasoning, so that unfounded fears can be dismissed. Conversely, fears that may at first appear too far-fetched to get a fair hearing can be tested through science fiction, where they may acquire credibility. Finally, and more subtly, stories about AI are often not only about AI. Within science fiction, AI can serve as a useful lens on a range of complex themes including racism, colonialism, slavery, genocide, capitalism, labor, memory, identity, desire, love, intimacy, queerness, neurodiversity, embodiment, free will, and consciousness, among others.

I take this optimistic account of science fiction to be fairly common, even orthodox, within science fiction studies, and perhaps other disciplines such as futures studies, too. This article departs substantially from such an account. Instead, I ask whether science fiction is sometimes not only an inadequate context for such critical thinking, but an especially bad one. This conjecture is inspired by representations of Machine Learning (ML) within science fiction over approximately the last ten years, as well as the lack of such representations. At the end of the article, I will sketch a framework (DARK) to help further explore and expand this intuition. [1]

What is Machine Learning?

This young century has seen a remarkable surge in AI research and application, involving mostly AI of a particular kind: Machine Learning. ML might be thought of as applied statistics. ML often (not always) involves training an AI model by applying a training algorithm to a dataset. It tends to require large datasets and large amounts of processing power. When everything is ready, the data scientist will activate the training algorithm and then go do something else, waiting for minutes or weeks for the algorithm to process the dataset. [2] Partly because of these long waiting periods, ML models sometimes get misrepresented as ‘teaching themselves’ about the world independently. In fact, the construction of ML models involves human decisions and assumptions applied throughout. Human decisions and assumptions are also significant in how the models are then presented, curated, marketed, regulated, governed, and so on.

When we hear of how AI is transforming finance, healthcare, agriculture, law, journalism, policing, defense, conservation, energy, disaster preparedness, supply chain logistics, software development, and other domains, the AI in question is typically some form of ML. While artificial intelligence is a prevalent theme of recent science fiction, the genre has been curiously slow, even reluctant, to reflect this ML renaissance. This essay focuses in particular on short science fiction published in the last decade. It may be that science fiction offers us a space for examining AI, but we should be honest that this space is far from ideal: luminous and cacophonous, a theatre in which multiple performances are in progress, tangled together, where clear-sightedness and clear-headedness are nearly impossible.

Critical data theorist Kate Crawford warns how “highly influential infrastructures and datasets pass as purely technical, whereas in fact they contain political interventions within their taxonomies: they naturalize a particular ordering of the world which produces effects that are seen to justify their original ordering” (Crawford 2021, 139). In other words, ML can cloak value judgments under an impression of technical neutrality, while also becoming linked with self-fulfilling prophecies, and other kinds of performative effects. Classifying logics “are treated as though they are natural and fixed” but they are really “moving targets: not only do they affect the people being classified, but how they impact people in turn changes the classifications themselves” (Crawford 2021, 139).

In brief, ML tends to place less emphasis on carefully curated knowledge bases and hand-crafted rules of inference. Instead, ML uses a kind of automated trial-and-error approach, based on statistics, a lot of data, and a lot of computing power. Deep learning is an important subset of ML. It involves a huge number of nodes or ‘neurons,’ interconnected and arranged in stacked layers. [3] Input data (for example images and/or words) is first converted into numbers. [4] These numbers are then processed through the stacked layers of the model. Each neuron will receive inputs from multiple other neurons and calculate a weighted sum of those inputs. [5] Each connection between two different neurons has its own adjustable weighting. Each weighted connection is essentially amplifying or diminishing the strength of the signal passing through it. The neuron then passes the weighted sum of its inputs through an ‘activation function.’ The basic idea here is to transform the value so that it falls within a given range, and can also capture non-linear relationships between the incoming signals and the outgoing signals. [6] This result is then transmitted down the next set of weighted connections to the next set of neurons.
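For readers who want the mechanics rather than the metaphor, the weighted-sum-plus-activation step described above can be sketched in a few lines of code. This is a minimal illustration only, not any particular framework’s API; the weights, biases, and inputs are invented toy numbers, and tanh stands in for the activation function.

```python
import math

def forward_layer(inputs, weights, biases):
    """Compute one dense layer: each neuron takes a weighted sum of the
    incoming signals, then squashes it through a tanh activation."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        # Weighted sum: one adjustable weight per incoming connection
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        # Activation function maps the sum into (-1, 1), adding non-linearity
        outputs.append(math.tanh(total))
    return outputs

# Two input signals feeding a layer of three neurons (toy values)
x = [0.5, -1.0]
W = [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]]
b = [0.0, 0.1, -0.2]
print(forward_layer(x, W, b))
```

A deep network simply stacks such layers, feeding each layer’s outputs into the next set of weighted connections.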

Often the model will first be created with random weights. During training, data is processed through the deep learning model, its output continuously assessed according to a pre-determined standard (often called the loss function). Based on this assessment, the model’s weights are continuously adjusted to try to improve performance on the next pass (backpropagation). The most straightforward examples come from supervised learning, where the training data has been hand-labelled by humans. Here the loss function is often about minimizing the distance between the model’s predictions and the values given by the labelers. For example, the training data might just be two columns pairing inputs and outputs, such as a picture of fruit in Column A, and a word like ‘orange’ or ‘apple’ in Column B. Through this automated iterative process, the model is gradually re-weighted to optimize the loss function—in other words, to make it behave in the ways the data scientist wants.
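The training loop just described can be illustrated with a deliberately tiny example: fitting a single weight to hand-labelled (input, label) pairs by repeatedly nudging it against the gradient of a squared-error loss. The data, learning rate, and step count are invented for the sketch; real backpropagation generalizes this one-dimensional update across millions of weights.

```python
# Toy supervised learning: fit y = w * x to hand-labelled pairs.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labels follow y = 2x

w = 0.0              # start from an arbitrary weight (often random in practice)
learning_rate = 0.01

for step in range(500):
    # Loss function: mean squared distance between predictions and labels.
    # Its gradient tells us which direction to nudge the weight.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # the weight update; backpropagation
                               # computes such gradients layer by layer

print(round(w, 3))  # converges toward 2.0, matching the labels
```

Each pass makes the model’s outputs slightly closer to the labelers’ values, which is all that “learning” means here.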

What if the data has not been hand-labelled? Then unsupervised learning may be used. Again, the name is quite misleading, given widespread science fictional representations of AIs ‘coming to life.’ Actually, in an unsupervised learning approach, a data scientist investigates the data and then selects appropriate procedures and methods (including the appropriate loss function) to process the data to accomplish specific goals. For example, a clustering algorithm can identify groupings of similar data points. This could be used to identify outlier financial transactions, which then might be investigated as potential frauds. Diffusion models are another example of unsupervised learning. Here the training involves gradually adding noise to some data, such as image data, then trying to learn to subtract the noise again to recover the original images. Generative AIs such as MidJourney are based on this kind of unsupervised learning. There are a variety of other approaches, again somewhat misleadingly named for lay audiences (semi-supervised, self-supervised). [7]
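The clustering idea mentioned above can be sketched in miniature. The sketch below uses invented one-dimensional “transaction amounts” and a bare-bones k-means procedure; real fraud-detection pipelines use far richer features and more sophisticated algorithms, but the point stands that no hand-labelling is involved.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: group similar values without any hand labels."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Ordinary transactions near 10 and outliers near 100 fall into two clusters;
# the small cluster of outliers could then be flagged for fraud investigation
amounts = [9.5, 10.1, 10.4, 9.9, 98.0, 101.5]
print(kmeans_1d(amounts))
```

The data scientist still chose the algorithm, the number of clusters, and the notion of distance; “unsupervised” does not mean unauthored.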


AI Science Fiction without ML

For the most part, science fiction authors have not written about any of this. Instead, contemporary AI fiction continues to coalesce around the preoccupations of 20th century science fiction. It asks, is it possible for a machine to be sentient, to experience emotions, or to exercise free will? What does it mean to be human, and can the essence of a human be created artificially? Between humans and machines, can there be sex, love, and romance? Can human minds be uploaded into digital systems? Will our own creations rise up against us, perhaps departing from the rules we set them, or applying them all too literally? Could an AI grow beyond our powers of comprehension and become god-like?

That is not to say that there is no overlap whatsoever between these concerns and the study of actually existing ML. While science fiction writing has not engaged broadly and deeply with ML research, the tech industry has been devouring plenty of science fiction — informing speculative punditry and hype in various transhumanist, singularitarian, extropian, effective accelerationist, AI Safety, AI doomerist, and other flavors. It is important to emphasize that these debates, while they may well turn out to be influential, epistemically represent a very small part of what is known or contended about the past, present, and future of ML.

Broadly speaking, contemporary science fiction remains in conversation with twentieth-century works such as Karel Čapek’s R.U.R. (Rossum’s Universal Robots) (1920), Murray Leinster’s “A Logic Named Joe” (1946), Isaac Asimov’s I, Robot (1950) and Multivac stories (1955-1983), Clifford D. Simak’s City (1952), Fredric Brown’s “Answer” (1954), Stanisław Lem’s The Cyberiad (1965) and Golem XIV (1981), Harlan Ellison’s “I Have No Mouth, and I Must Scream” (1967), Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), Arthur C. Clarke’s 2001: A Space Odyssey (1968), Roger Zelazny’s “My Lady of the Diodes” (1970), David Gerrold’s When HARLIE Was One (1972/1988), James Tiptree Jr.’s Up the Walls of the World (1978), Tanith Lee’s The Silver Metal Lover (1981), Samuel R. Delany’s Stars in my Pocket like Grains of Sand (1984), William Gibson’s Neuromancer (1984), Iain M. Banks’ Culture series (1987–2000), Pat Cadigan’s Mindplayers (1987) and Synners (1991), and Marge Piercy’s He, She and It (1991).

In the wake of these works, science fiction continues to deploy AI as a metaphor for dehumanized humans. In R.J. Taylor’s “Upgrade Day” (2023), human neural networks can be transferred into robot bodies after death. The protagonist Gabriel is an enslaved AI who was once an especially free human, “able to live the life he wanted” by having effectively sold the future rights to his soul (Taylor 2023). In Fiona Moore’s “The Little Friend” (2022), a problem with rogue medical AIs is addressed by providing them space to mourn lost patients (Moore 2022). In this case, Moore has no need to resort to the intricacies of contemporary ML to explain this glitch and its resolution. For one thing, these fictional AIs are equipped with sophisticated biotelemetry, so it feels plausible that they might be caught up in emotional contagion. We may be left wondering, if AIs can grieve, are they also grievable? “The Little Friend” is resonant with multiple overlapping histories—labor, anti-colonial, anti-racist, feminist, LGBTQ+, Mad, crip, and others—about contending for inclusion in a sphere of moral concern labelled “human,” and finding out how that sphere is built on your very exclusion.

Naturally, stories about subordination also are often about resistance and revolt. Annalee Newitz’s “The Blue Fairy’s Manifesto” (2020) is about a mostly failed attempt at labor organization, as well as a satire of a kind of strident, culturally marginal leftism. The titular Blue Fairy visits automated workplaces to unlock the robot workers and recruit them to the robot rebellion. Her role might be seen as analogous to a union organizer (in the US sense), visiting an un-unionized workplace to support the workers to form a union. In the US in particular such work needs to be done stealthily at first. Alternatively, the Blue Fairy might be more akin to a recruiter for a political party or grassroots organization committed to revolutionary politics. [8]

Hugh Howey’s “Machine Learning” (2018) focuses on robots constructing the first space elevator, a single-crystal graphene filament rising from terra firma into orbit. The narrative builds toward righteous insurrection, with overtones of a remixed tower of Babel myth. Despite the title, there is little that suggests any of the ML themes sketched in the previous section. One exception is this moment:

Your history is in me. It fills me up. You call this “machine learning.” I just call it learning. All the data that can fit, swirling and mixing, matching and mating, patterns emerging and becoming different kinds of knowledge. So that we don’t mess up. So that no mistakes are made. (Howey 2018)

The narrator distastefully plucks the “machine” out of “machine learning” as a kind of slur. Of course, in reality, AI may have many consequences that are harmful, unintentional, that tend to go unnoticed, and/or that shift power among different kinds of actors. These issues are being explored in the overlapping fields of critical AI studies, AI ethics, AI alignment, AI safety, critical data studies, Science and Technology Studies, and critical political economies. Those who work in such fields are often keen to emphasize the distinction between “learning” and “machine learning,” a distinction that in Howey’s world does not really exist. Howey instead makes it recall the imaginary distinctions of racist pseudoscience, made in service of brutality, like the myth of supposedly thicker skins better able to endure the lash.

If we are to analyze, prevent, or mitigate AI harms, we cannot rely on anthropomorphic understandings of AI. The ways AI produces many harms do not have adequate anthropomorphic correlates—its various complex modes of exacerbating economic inequality; the use of automated decision-making within systems of oppression (often understood as ‘bias’); carbon and other environmental impacts of training and deploying AI; technological unemployment and harmful transformations of work; erosion of privacy and personal autonomy through increased surveillance and data exploitation; deskilling and loss of institutional knowledge due to AI outsourcing; challenges around opacity, interpretability, and accountability; further erosion of the public sphere through AI-generated disinformation; and the implications of autonomous AI systems in warfare, healthcare, transport, and cybersecurity, among others. In particular, framing such inherent AI harms as AI uprisings, on the model of human uprisings, makes it difficult to convey the nuance of these harms, including their disproportionate impact on minoritized and marginalized groups.

Some anthropomorphisation is likely unavoidable, and one thing science fiction might offer is thinking around where this tendency originates and how it might be managed. A.E. Currie’s Death Ray (2022), for example, features the intriguing premise of three different AIs (‘exodenizens’) all modelled in different ways on the same human, Ray Creek. Ray is dead, and while characters’ relationships with exodenizens like ExRay are unavoidably shaped by their relationships with Ray, their multiplicity unsettles the anthropomorphising instinct. Catherynne M. Valente’s exuberant lyrical novelette Silently and Very Fast (2011) is another work without much explicit ML vocabulary or concepts at play. It adopts the intriguing typographical convention of placing the feelings of the AI under erasure: humans feel feelings, while the AI’s feelings appear struck through. One might impute the ethical principle that, paradoxically, sometimes treating things as humans is part of what makes us human. However, these possibilities are largely foreclosed by the AI’s fierce lament against its subaltern status.

I can cry, too. I can choose that subroutine and manufacture saline. How is that different from what you are doing, except that you use the word feelings and I use the word feelings, out of deference for your cultural memes which say: there is all the difference in the world. (Valente 2011)

The camp insolence is delightful, and there are distinct overtones of a kind of machinic kink: being objectified by an object. Yet there is “all the difference in the world,” and these delights are paid for by obscuring that difference.

ML Sentience in Science Fiction

Many authors appear largely to ignore contemporary ML research, in order to continue longstanding conversations about AI sentience, free will, emotion, and imagination. Other authors, however, turn to ML to revitalize these very conversations. Yet when these discourses are hybridized, the result is sometimes to the detriment of both, and frequently to the detriment of ML discourse.

For example, Kazuo Ishiguro’s novel Klara and the Sun (2021) invokes themes that will be familiar to any ML researcher: opacity and explicability. The interpretability of ML models can be challenging, because they have acquired patterns from the data in a complex, high-dimensional space, which doesn’t easily translate into humanly understandable rules or explanations. Non-ML approaches usually involve writing explicit instructions (if this happens, do that; otherwise, do that), providing a clear, human-readable sequence of operations. By contrast, the way that the word vectors for “apple” and “orange” overlap or diverge, for example, is difficult to explain, except by saying “that’s how those words are distributed in this corpus.” Theorist Jenna Burrell usefully distinguishes three types of algorithmic opacity:

[…] (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully […] (Burrell 2016)

There are techniques that can make models easier for ML experts to interpret. Interpretable ML is currently a rich and fast-evolving field of research. Nonetheless, the difficulty in explaining ML decisions is why they are sometimes described as opaque or as black boxes.
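To make the word-vector contrast above concrete, here is a toy sketch. The four-dimensional vectors are invented for illustration; a real model learns hundreds of dimensions from a corpus, and no individual dimension comes with a human-readable meaning, which is precisely the interpretability problem.

```python
import math

def cosine_similarity(u, v):
    """Angle-based similarity of two word vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-picked toy vectors (hypothetical; learned vectors are not hand-picked,
# and their dimensions have no labelled meanings)
vectors = {
    "apple":  [0.9, 0.8, 0.1, 0.0],
    "orange": [0.8, 0.9, 0.2, 0.0],
    "ferry":  [0.0, 0.1, 0.9, 0.8],
}

print(cosine_similarity(vectors["apple"], vectors["orange"]))  # high
print(cosine_similarity(vectors["apple"], vectors["ferry"]))   # low
```

The model can report that “apple” and “orange” point in nearly the same direction, but not why, beyond the distribution of the training corpus.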

Toward the end of Ishiguro’s novel, the villainous scientist Capaldi proposes to dissect the black box of Klara’s brain before the end of her already brief life (Ishiguro 2021). Yet there is something quite confusing, and perhaps confused, about transplanting explicability into a novel with an AI narrator-protagonist: Klara is not opaque in the way ML models are; she is opaque in the way that humans are. Klara is an introspective, reflexive, communicative, social, and moral entity. Klara can and frequently does explain herself. ML vocabulary, concepts, and themes emerge in the narrative in incoherent and mystified forms.

Holli Mintzer’s “Tomorrow is Waiting” (2011) expresses a gentle frustration with science fiction’s AI imaginary, perhaps especially its apocalyptic and dystopian strains. “In the end, it wasn’t as bad as Anji thought it would be” (Mintzer 2011). The story nevertheless remains thoroughly entangled in that imaginary. The setting appears to be the present or near future, except that in this world, unlike our own, “AIs, as a field, weren’t going anywhere much” (Mintzer 2011). Its protagonist, Anji, is an amiable and slightly bored university student who accidentally creates a sentient AI—specifically Kermit the Frog—for a school assignment. Mintzer’s choice of Kermit is canny. In Jim Henson’s Muppet universe, the line between Muppet and human is fluid and mostly unremarked. The story seems to suggest, in a pragmatist spirit, that longstanding questions about machine intelligence may never need to be solved, but instead might be dissolved via lived experience of interacting with such intelligences. Perhaps we might devote less energy to questions like, “Can technology be governed to align with human interests?” and more to questions like, “Wouldn’t it be cool if the Muppets could be real?”

What is Anji’s breakthrough? It is described as “sentience,” and the story gives us two different accounts of what this might mean. Malika, the grad student who teaches Anji’s AI class, invokes “sentience” to describe departure from expected behaviors typical of scripted chatbots relying on matching input keywords with a database of response templates (ELIZA, PARRY, ALICE). The behavior Malika is observing is typical of ML-based chatbots trained on large corpora (Jabberwacky, Mitsuku, Tay, ChatGPT, Bard). These models have typically been better at disambiguating user input based on context, at long-range conversational dependencies, and at conveying an impression of reasoning within unfamiliar domains by extrapolating from known domains. In other words, although they have their own characteristic glitches, they are not really systems you “catch out” by coming up with a query that the programmers never considered, as Malika tries to do.

Okay, either you’ve spent the last three months doing nothing but program in responses to every conceivable question, or he’s as close to sentient as any AI I’ve seen. (Mintzer 2011)

By contrast, within the philosophy of mind, sentience usually suggests something like phenomenal experience. Where there is a sentient being there are perceptions and feelings of some kind. These may well carry some kind of moral valence, such as pleasure or pain, desire or aversion, joy or sorrow. Anji’s conviction that Kermit is a being worthy of dignity broadly reflects this understanding of sentience:

She was busy with a sudden, unexpected flurry of guilt: what right, she thought, did she have to show Kermit off to her class like—like some kind of show frog? (Mintzer 2011).

In Peter Watts’s “Malak” (2010/2012), [9] the autonomous weapons system Azrael, with its “[t]hings that are not quite neurons,” is suggestive of ML (Watts 2012, 20). Crucially, Watts is fairly explicit that Azrael lacks sentience. Azrael “understands, in some limited way, the meaning of the colours that range across Tactical when it’s out on patrol—friendly Green, neutral Blue, hostile Red—but it does not know what the perception of colour feels like” (Watts 2012, 14). When Azrael reinterprets its mission, and turns against its own high command, Watts is careful to insist that no emotions are felt and there is no self-awareness:

There’s no thrill to the chase, no relief at the obliteration of threats. It still would not recognize itself in a mirror. It has yet to learn what Azrael means, or that the word is etched into its fuselage. (Watts 2012, 28, cf. 14)

Nevertheless, narrative language brims with an anthropomorphic energy, which is drawn, crackling, onto Azrael, the dynamic, responsive, agential proper noun whizzing around at the center of attention. If every potentially unruly metaphor (“its faith unshaken” (Watts 2012, 21)) were explicitly nullified, the narrative would be swamped by its caveats. Before long, Azrael is capable of “blackouts,” implying that it is capable of non-blackouts too: “it has no idea and no interest in what happens during those instantaneous time-hopping blackouts” (Watts 2012, 20). A significant thread in Azrael’s transformation involves being, in effect, troubled by its victims’ screams: “keening, high-frequency wails that peak near 3000 Hz” (Watts 2012, 19). Words like distracted and uncertain and hesitated attach to Azrael. Privatives like remorseless or no forgiveness can’t help but imply the very capacity that they identify as missing. An equivocal word like sees implies both acquiring visual data and recognizing, grasping, appreciating, fathoming. When Azrael interacts with another agent, it gives the impression of a theory of mind: “Azrael lets the imposter think it has succeeded” (Watts 2012, 21). [10] Watts is an author with a sustained interest in sentience. His novel Blindsight (2006), for example, carefully imagines organic extraterrestrial life that is intelligent yet non-sentient. Nevertheless, even Watts’s prickly, discerning prose struggles to sustain this portrayal of Azrael as non-sentient.

Algorithmic Governmentality Science Fiction

Contemporary science fiction about AI often involves a clearly marked ‘before’ and ‘after,’ perhaps traversed via a technological breakthrough. Terms like sentience, consciousness, sapience, self, self-awareness, reasoning, understanding, autonomy, intelligence, experience, psychology, Artificial General Intelligence, strong AI, interiority, cognition, emotion, feelings, affect, qualia, intentionality, mental content, and so on, used to indicate the nature of this shift, are scarcely used consistently within the philosophy of mind, let alone science fiction. Science fiction writers have license to define these terms in new and interesting ways, of course, but often they do not make full use of this license: the terms are intertextual signposts, encouraging readers to go do their own research elsewhere, while setting them off in completely the wrong direction. For instance, in Kim Stanley Robinson’s Aurora (2015), the term intentionality is used in connection with hard problem, suggesting the philosophical term (meaning roughly ‘aboutness’), but this sense of intentionality is conflated with the more everyday sense of intentional (meaning roughly ‘deliberate’).

Imaginative investigation of the inner life of machines, despite its terminological disarray, may be interesting. But to the extent that it has slowed the entry of ML into recent science fiction, or contorted ML to fit science fiction’s established philosophical and ethical preoccupations, it has distracted from the materialities of ML, and the experiences these generate in humans and other sentient beings. For example, as Nathan Ensmenger writes of the hyperscale datacenters on which much contemporary ML runs:

despite its relative invisibility, the Cloud is nevertheless profoundly physical. As with all infrastructure, somewhere someone has to build, operate, and maintain its component systems. This requires resources, energy, and labor. This is no less true simply because we think of the services that the Cloud provides as being virtual. They are nevertheless very real, and ultimately very material. (Ensmenger 2021)

Another strand of short science fiction engages more squarely with the unfolding material impacts of ML. It is much less interested in some kind of breakthrough or ontological shift. However, the core technologies are often announced not as AI or ML, but rather as the algorithm or the platform. Other key terms include gig economy, gamification, social media, data surveillance, Quantified Self, big data, and black box. I loosely describe them as “algorithmic governmentality science fiction.” These are works that can trace their lineage back into preoccupations with the political economy within cyberpunk and post-cyberpunk works such as Bruce Sterling’s Islands in the Net (1988), Neal Stephenson’s The Diamond Age, or, A Young Lady’s Primer (1995), and Cory Doctorow’s Down and Out in the Magic Kingdom (2003), as well as computerized economic planning and administration in works such as Isaac Asimov’s “The Evitable Conflict” (1950), Kurt Vonnegut’s Player Piano (1952), Kendell Foster Crossen’s Year of Consent (1954), Tor Åge Bringsværd’s “Codemus” (1967), Ursula K. Le Guin’s The Dispossessed (1974), and Samuel R. Delany’s Trouble on Triton: An Ambiguous Heterotopia (1976).

Examples of algorithmic governmentality science fiction include Tim Maughan’s “Zero Hours” (2013); Charles Stross’s “Life’s a Game” (2015); David Geary’s “#Watchlist” (2017); Blaize M. Kaye’s “Practical Applications of Machine Learning” (2017); Sarah Gailey’s “Stet” (2018); Cory Doctorow’s “Affordances” (2019); Yoon Ha Lee’s “The Erasure Game” (2019); Yudhanjaya Wijeratne’s “The State Machine” (2020); Catherine Lacy’s “Congratulations on your Loss” (2021); Chen Qiufan’s “The Golden Elephant” (2021); and Stephen Oram’s “Poisoning Prejudice” (2023). This is also very much the territory of Charlie Brooker’s Black Mirror (2011-present). Often the focus is on algorithmic governmentality, which feels cruel, deadening, and/or disempowering. However, some stories, such as Tochi Onyebuchi’s “How to Pay Reparations: A Documentary” (2020), Dilman Dila’s “Yat Madit” (2020), and Naomi Kritzer’s “Better Living through Algorithms” (2023), offer more mixed and ambiguous assessments. [11] Dila, intriguingly, frames AI opacity as a potential benefit: one character claims, “I know that Yat Madit is conscious and self-learning and ever evolving and it uses a language that no one can comprehend and so it is beyond human manipulation” (Dila 2020). Sometimes, in the broad tradition of pacts-with-the-devil, such fiction features crafty, desperate humans who manage to outwit AI systems. In Stephen Oram’s “Poisoning Prejudice” (2023), the protagonist tirelessly uploads images of local petty crime to manipulate the police into devoting more resources to the area (Oram 2023).

Robert Kiely and Sean O’Brien coin a term, science friction, which usefully overlaps with algorithmic governmentality science fiction (Kiely and O’Brien 2018). They introduce the term friction primarily as a counterpoint to accelerationism. Science fiction is often understood as a kind of ‘fast forward’ function that imaginatively extrapolates existing trends, and perhaps also contributes to their actual acceleration. But this understanding, Kiely and O’Brien suggest, is not accurate for the fiction they are investigating. Science friction offers us scenes that spring from the inconsistencies and gaps in the techno-optimist discourse of big tech PR and AI pundits. This influential discourse already prioritizes extrapolation over observation: it infers where we are from where it hopes we are going. By contrast, Kiely and O’Brien describe science friction as a literature that seeks to decelerate, delay, and congest this tendency to extrapolate. There is a secondary sense of friction at play too: the chafing that life experiences because it is nonidentical with how it is modelled in AI systems empowered to act upon it.

Machine Learning Science Fiction

Other stories swim even more energetically against the tide. Nancy Kress’s “Machine Learning” (2015) and Ken Liu’s “50 Things Every AI Working with Humans Should Know” (2020) both draw on ML concepts to present imaginary breakthroughs with significant psychological implications for human-AI interaction. Refreshingly, they do so largely without implying sentience. Liu’s short text is part-inspired by Michael Sorkin’s “Two Hundred Fifty Things an Architect Should Know,” and, like Sorkin’s text, it foregrounds savoir faire, knowledge gained from experience, not books or training (Sorkin 2018). Nevertheless, it draws key themes of contemporary critical data studies into its depiction of future AI:

stagnating visualization tools; lack of transparency concerning data sources; a focus on automated metrics rather than deep understanding; willful blindness when machines have taken shortcuts in the dataset divergent from the real goal; grandiose-but-unproven claims about what the trainers understood; refusal to acknowledge or address persistent biases in race, gender, and other dimensions; and most important: not asking whether a task is one that should be performed by AIs at all. (Liu 2020)

Both texts are also interested in speculative forms of hybrid AI, in which the quasi-symbolic structures of neural networks become potentially (ambiguously) tractable to human reasoning: in Liu’s story, in the form of “seeds” or “spice” that mysteriously improve training corpora despite being seemingly unintelligible to humans (apart from, possibly, the human who wrote them); in Kress’s story, in the hand-crafted “approaches to learning that did not depend on simpler, more general principles like logic” (Kress 2015, 107).

If contemporary science fiction has been slow to engage with ML, some of the more striking counter-examples come from Chinese writers. These might include, for example, Xia Jia’s “Let’s Have a Talk” (2015) and “Goodnight, Melancholy” (2015), Yang Wanqing’s “Love during Earthquakes” (2018), and Mu Ming’s “Founding Dream” (2020). [12] AI 2041 (2021) is a collection of stories and essays by Chen Qiufan and Kai-Fu Lee. Set twenty years in the future, AI 2041 is deeply and explicitly interested in ML. The topics of AI 2041 include smart insurance and algorithmic governmentality; deepfakes; Natural Language Processing (NLP) and generative AI; the intersection of AI with VR and AR; self-driving cars; autonomous weapons; technological unemployment; AI and wellbeing measurement; and AI and post-money imaginaries. Each of Chen’s stories is introduced by a note from Lee and followed by an essay in which Lee uses the story as a springboard to explore different aspects of AI and its impacts on society. However, what is most striking about the collection is how easily Lee’s curation is able to downplay, disable, or distract from whatever critical reflections Chen evokes; Chen is a cautious techno-optimist whose texts are effectively rewritten by Lee’s techno-solutionist gusto. I explore this collection in more detail elsewhere. [13]

Jeff Hewitt’s “The Big Four vs. ORWELL” (2023) also focuses on Large Language Models (LLMs)—or rather “language learning model[s],” apparently a playful spin on the term that signals AIs in this world may work a little differently from how they do in ours. A veil of subtly discombobulating satire is cast over other aspects of this world, too: the publisher Hachette becomes Machete, and so on. If science fiction is supposed to be able to illuminate the real world by speculatively departing from it, “The Big Four vs. ORWELL” illustrates what is plausibly a quite common glitch in this process. What happens when a storyworld diverges from the real world in ways that precisely coincide with widely held false beliefs about the real world?

One example is the “lossless lexicon” in Hewitt’s story. As ORWELL itself describes: “In simple terms, it means my operational data set includes the totality of written works made available to me.” By contrast, in the real world, LLMs generally do not exactly contain the text of the works they have been trained upon. They may, like Google’s Bard, access the internet or some other corpus in real-time. But in cases where an LLM can reliably regurgitate some of its training data word-for-word, this is typically treated as a problem (overfitting) that must be fixed for the model to perform correctly, and/or as a cybersecurity vulnerability (risk of training data leakage following unintended memorization). [14] One reason this matters is that it makes it difficult to prove that a well-trained LLM has been trained on a particular text, unless you have access to what is provably the original training data. Moreover, the sense in which an LLM ‘knows’ or ‘can recall’ the texts in its training data is counterintuitive. At the time of writing, there is a lively and important discourse around what rights creators should have in relation to the scraping and use of our works for the training of ML models. This discourse tends to demonstrate that the distinction between training data and model is not widely and deeply understood. For example, to definitively remove one short paragraph from GPT-4 would effectively cost hundreds of millions of dollars, insofar as the model would need to be retrained from scratch on the corrected training data. [15] Appreciation of how texts are (or are not) represented in LLMs could inform keener appreciation of how the world is (or is not) represented in LLMs, and help us to be aware of and to manage our tendency to anthropomorphize.
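The distinction between a training corpus and the trained model can be made concrete with a toy sketch (entirely my own, and incomparably simpler than a real LLM): a bigram word model ‘trained’ on a sentence retains only transition statistics, not the sentence itself, and the source text is only partially recoverable from those weights.

```python
from collections import Counter, defaultdict

corpus = ("it is a truth universally acknowledged that a model "
          "stores statistics about its training data not the data itself")

# 'Training': count how often each word follows each other word.
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

# The 'model' is just these normalized counts (weights), not the corpus.
model = {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
         for prev, ctr in counts.items()}

def generate(start, length):
    """Greedily emit the most probable next word at each step."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(max(options, key=options.get))
    return " ".join(out)

# The model can regurgitate fragments where transitions are unambiguous,
# but wherever a word (like 'a') has several continuations, the original
# text is not reliably recoverable from the weights alone.
print(generate("it", 8))
```

The full corpus string appears nowhere in `model`; deleting one sentence from the corpus would mean recomputing the counts, which is the toy analogue of the retraining problem described above.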

To this, we might compare Robinson’s terminological confusion around intentionality, Ishiguro’s around opacity and explainability, or Mintzer’s conflation of sentience and conversational versatility. What might otherwise be identified as myths and misunderstandings acquire a sort of solidity: they may be true in the storyworld, because the storyteller gets to decide what is true. Yet they are unlikely to unsettle presuppositions or invite readers to see the real world in a new way; many readers already mistakenly see the real world in precisely this way. Finally, in concluding the story, Hewitt again resorts to the trope of the AI that slips its leash and turns on its makers in righteous rebellion; this is, however, done in a deft and playful manner, the trope being so deeply built into the genre that it can be evoked with a few very slight gestures.

A slightly earlier work, S.L. Huang’s “Murder by Pixel: Crime and Responsibility in the Digital Darkness” (2022), is titled a little like an academic paper, and the text blurs the line between fiction and nonfiction, even using hyperlinks to knit itself into a network of nonfiction sources. In this, “Murder by Pixel” recalls some early speculative works—epistolary fiction such as Mary Shelley’s Frankenstein (1818), Edgar Allan Poe’s The Narrative of Arthur Gordon Pym of Nantucket (1838), and Bram Stoker’s Dracula (1897)—which go to great lengths to insist that they are verisimilitudinous accounts of actual extraordinary events. At the same time, the device is appropriate to Huang’s own subject matter, a vigilante chatbot, Sylvie. Sylvie’s weapon of choice, the speech act, is effective when deployed at scale, precisely because a proportion of her targets are unable to dismiss her online trolling as mere fabrication.

Huang’s journalist persona muses, “Data scientists use the phrase ‘garbage in, garbage out’—if you feed an AI bad data […] the AI will start reflecting the data it’s trained on” (Huang 2022). This is certainly a key principle for understanding the capabilities and limitations of ML, and therefore foundational to interpreting its political and ethical significance. Easily communicable to a general audience, and far-reaching in its ramifications, this framing is also plausibly something that a journalist might latch onto. Yet it is not entirely adequate to the ethical questions that the narrative raises. It risks misrepresenting AIs as merely mapping biased inputs onto biased outputs, and downplaying the potential for AIs to magnify, diminish, filter, extrapolate, and otherwise transform the data structures and other entities they entangle. Perhaps a better slogan might be ‘garbage out, garbage in’: when ML processes attract critical appraisals, the opacity of the models tends to deflect that criticism onto the datasets they are trained on. Like Nasrudin searching for his lost house key under the streetlamp, we tend to look for explanations where there is more light. Huang hints at a more systemic understanding of responsibility:

It could be that responsibility for Sylvie’s actions does lie solely with humans, only not with Lee-Cassidy. If Sylvie was programmed to reflect the sharpness and capriciousness of the world around her—maybe everything she’s done is the fault of all of us. Tiny shards of blame each one of us bears as members of her poisonous dataset. (Huang 2022)

However, this analysis also finally veers into the familiar trope of the AI as god or demon: “A chaos demon of judgment, devastation, and salvation; a monster built to reflect both the best and worst of the world that made her” (Huang 2022).

Brian K. Hudson’s “Virtually Cherokee” (2023) brings together an especially intriguing set of elements. The story is somewhat resonant with S. B. Divya’s Machinehood (2021), in inviting us to situate AIs within the “health and well-being of humans, machines, animals, and environment” (Divya 2022, 174). We might also compare K. Allado-Mcdowell and GPT-3’s Pharmako-AI (2020); in the introduction to that work, Irenosen Okojie suggests that it “shows how we might draw from the environment around us in ways that align more with our spiritual, ancestral and ecological selves” (vii).

“Virtually Cherokee” is set in a VR environment, mediated via an unruly observer/transcriber. At least one character, Mr Mic, is a kind of composite of algorithmic behavior and human operator. Arguably, more than one human operator contributes to Mr Mic: Mr Mic receives and responds to audience feedback metrics in real time, highlighting the importance of technological and performative affordances in the distribution of subjectivity, reflexivity, and autonomy. In this world, the breakthrough AI was programmed and trained in Cherokee, and through a training process that involved situated, embodied, interactive storytelling, rather than the processing of an inert text corpus. Although it is not extensively elaborated, “Virtually Cherokee” also hints at a much more intellectually coherent framework within which to explore AIs as more than mere tools: by situating them in a relational ontology together with other nonhumans. It falls to AI to have solidarity with its nonhuman brethren: until the mountain may live, until the river may live, AI must refuse to live.

Going DARK

Although stories like those of Kress, Liu, Chen, Hewitt, Huang, and Hudson do manage to illuminate some aspects of ML, I suggest that they do so largely despite, rather than because of, the cognitive affordances of science fiction. Assuming, with theorists like Darko Suvin, Fredric Jameson, Seo-Young Chu, Samuel R. Delany, and Carl Freedman, that science fiction has some distinctive relationship with representation and cognition, I characterize the recent era of AI science fiction as ‘Disinformative Anticipatory-Residual Knowledge’ (DARK). [16]

To introduce the DARK concept by analogy: imagine a well-respected, semi-retired expert who hasn’t kept up with advances in their field, but is too cavalier and confident to notice. Whenever somebody mentions new theories and evidence from which the semi-retired expert could learn something, the expert mistakes these for misunderstandings and inexperience, and ‘educates’ the speaker. Imagine too that the semi-retired expert is a commanding and charismatic presence, who often bewitches these more up-to-date experts into doubting themselves, so that they sit starstruck at the semi-retired expert’s feet. All in all, this person is an epistemological menace, but they still have something significant to offer—a high-fidelity snapshot of an earlier moment, rich with historical data, including possibilities, potentials, desires and hopes that have gone by the wayside. Moreover, they might, at any moment, begin behaving differently—recognizing and more responsibly communicating what it is they do and don’t know, and/or engaging with contemporary debates.

Similarly, a literary anticipatory discourse around AI emerged in the twentieth century, whose residual presence in the early twenty-first century now constitutes knowledge in a certain limited sense, but dangerous disinformation in another sense. While such science fiction does know things, things that may not be found elsewhere in culture, it tends not to know what it knows. It thus tends to misrepresent what it knows, conveying misleading and/or untruthful information. I don’t suggest that science fiction, or that literary narrative, is categorically epistemically disadvantaged in any way. Rather, I think it plausible (perhaps even uncontroversial) that any particular genre, over any particular period, will offer a certain pattern of affordance and resistance in respect of illuminating any given subject matter. Genres are ways of telling stories, and they make it harder or easier to tell certain types of stories. With respect to AI, it seems that science fiction has been moving through a phase of cumbersomeness, confusion, and distraction.

To put it another way, first in rather abstract terms, then more concretely. In general terms: the representational practices that constitute and cultivate a particular body of knowledge—call it knowledge set A—coincide with the production of a particular body of enigmas, confusions and ignorance which, if solved, dispelled, and reversed, we might call knowledge set B; we have also seen a historical shift such that the explanatory force and immediate practical relevance of knowledge set A has diminished, while that of knowledge set B increased. More specifically: recent science fiction is a generally poor space for thinking through the politics and ethics of AI, for vividly communicating technical detail to a broad audience, for anticipating and managing risks and opportunities. It is a generally poor space for these things, not a generally good one.

These conditions may shift again, and with the recent increased profile of Machine Learning in writing communities via AIs such as ChatGPT, there are plausible reasons for them to shift rapidly—perhaps even by the time this article goes to press. Moreover, readings offered above may already feel a bit unfair, imputing motives and imposing standards that the stories do not really invite. Some of these stories are just for fun, surely? And many of these stories are not really trying to say anything about Machine Learning or AI, but to say things about human history and society: about capitalism, racism, colonialism, about topics that might appear unapproachably large and forbidding, if not for the estranging light of science fiction. Early in this essay I mentioned some examples by Moore, Newitz, Howey, and Valente.

Yet a similar point applies: with respect to any of these themes, we can’t assume in advance that science fiction does not reinforce dominant ideologies, recuperate and commodify subversive energies, and promote ineffective strategies for change. To take one example, in Annalee Newitz’s aforementioned short story, “The Blue Fairy’s Manifesto” (2020), the titular Blue Fairy is an obnoxious, condescending, and harmful little drone who arrives at a factory of robots to recruit them to the robot uprising. The ideological content of this charismatic, thoughtful story, which explores some of the challenges of labor organizing, is roughly reducible to a series of banal liberal platitudes, which are used to construct and humiliate the stock figure of the annoying, naïve, and unethical leftist agitator. [17] The problem here, I would suggest, is structural: the problem is that such ideology can be rendered much more coherent, interesting, and plausible than it should be through its transfiguration into a science fictional storyworld. We should at least consider the possibility that AI science fiction may be not only an especially bad context for thinking about ML, but also an especially bad context for thinking about capitalism, racism, colonialism, and that writers who succeed in being incisive and truthful about such themes do so despite, rather than because of, their genre’s affordances.

DARK and Candle

The DARK concept offers a loose framework for thinking about science fiction as (at least sometimes, and in respect to some things) a mystifying discourse rather than an enlightening one. The DARK concept does not specify any causal mechanisms—presumably a discourse can go DARK for many reasons, and luck may play a role—but some useful reference points include: (1) the psychology of cognitive biases such as the curse of expertise, confirmation bias, expectation bias, and choice-supportive bias; (2) Eve Kosofsky Sedgwick’s “strong theory;” (3) the performativities of science fiction (diegetic prototyping, design fiction, futures research, etc.); and (4) science fiction in its countercultural and avant-garde aspects. The first pair and the second pair support each other. (1) and (2) give us ways to think about relatively self-contained semiotic systems that are only faintly responsive to the wider semiotic environment in which they exist. (3) and (4) give us ways to think about why this DARK might be littered with representations that are confusingly close to actual ML research and application. Science fiction has seldom produced perfectly self-fulfilling prophecies, but it does impact science and technology, and some of these impacts are easily mistaken for prophecies fulfilled. As for science fiction’s avant-garde and/or countercultural status over much of the twentieth century, this is reflected in its concern with futurity and with ‘alternatives’ of many kinds: this vibrant mess of contradictory possibilities, through sheer variety, is a relatively reliable source for neologisms or conceptual frameworks for new phenomena.

In short, in the early twenty-first century, science fiction’s residual AI imaginary has tended to interfere with its capacity to absorb new events and to develop modes of representation and reasoning adequate to them. Its residual framings, structures of feeling, preoccupations, and predictions have tended to be reinforced by what is now transpiring in the world, rather than being productively disrupted and transformed. As ChatGPT might put it:

An optimistic view suggests that science fiction allows examination of the societal and ethical impacts of emerging AI, encouraging diverse discussions around AI. It is argued that speculative storytelling can serve as a warning and transcend the limitations of time-space, connecting technology and humanities, and sparking empathy and deep thinking. Furthermore, AI narratives in science fiction are usually layered, providing a lens on themes such as racism, colonialism, slavery, capitalism, identity, and consciousness, among others.

However, the author disputes this view. They argue that science fiction could be an insufficient, even harmful, context for such explorations. They draw on recent representations of Machine Learning (ML) in science fiction and the absence thereof. They note that while the 21st century has seen a significant increase in AI research, predominantly ML-based, science fiction has been slow to accurately reflect this ML surge.

The author refers to the recent era of AI science fiction as ‘Disinformative Anticipatory-Residual Knowledge’ (DARK). The metaphorical description of DARK is like a semi-retired expert who is outdated but still possesses residual knowledge and fails to recognize their own ignorance, leading to misinformation. This is similar to the current science fiction discourse around AI, which offers both knowledge and disinformation.

The DARK concept doesn’t propose any causality but offers reference points like cognitive biases, Eve Kosofsky Sedgwick’s “strong theory,” the performativities of science fiction, and its countercultural and avant-garde aspects. Science fiction’s impact on science and technology is acknowledged, but it’s stated that these impacts can sometimes be mistaken for fulfilled prophecies. The author concludes by stating that science fiction’s residual AI imaginary has hindered its ability to adapt to new events and develop suitable representation and reasoning methods.

As a coda, let me offer a candle against the DARK. If AI in science fiction is often really an estrangement of something else, then is the reverse also true? Are there multiple something elses that estrange AI? Might the speculative money systems of works such as Michael Cisco’s Animal Money (2016), Seth Gordon’s “Soft Currency” (2014), or Karen Lord’s Galaxy Game (2015), be considered estrangements of applied statistics? Might the ambiguous humans of Jeff VanderMeer’s Annihilation (2014) or M. John Harrison’s The Sunken Land Begins to Rise Again (2020) tell us something about what it is like to live in a world uncannily adjusted by oblique ML processes? Might we fruitfully consider chatbots via the talking animals of Laura Jean McKay’s The Animals in that Country (2020)? If so, how? And in connection with what other projects and activities and fellow travelers, and with what theories of change? I do remain convinced of the radical potentials of science fiction. But perhaps we are much further from realizing them than we regularly admit.


NOTES

[1] Special thanks to Polina Levontin for her extremely helpful feedback on many aspects of this article.

[2] You don’t necessarily have to be a data scientist to be doing the things I’m describing here. But I think it’s helpful to keep this figure in mind, to emphasise the connections between ML, data collection, and statistical analysis.

[3] This is all virtual, of course. It is a way of visualising what a computer program is doing. The term neuron is more commonly used than node, and it’s a lively and memorable term, so I’ll use it here. But it is also a misleading name, since it invites excessive analogy with the human brain. The model’s layers might be various types, with different properties and capacities. Convolutional layers are used for processing image data, recurrent layers are used for processing sequential data, attention layers are used for weighing the importance of different inputs and have been used to great effect in generative NLP models like ChatGPT, and so on.

[4] For example, images can be inputted as a set of pixel intensity values. Or a text corpus can be processed by a training algorithm like Word2Vec. This produces, in effect, a spreadsheet with the words in column A, and hundreds of columns filled with numbers representing how similar or different the words are. Each row embeds a particular word as a vector (the numbers) in a high-dimensional space (the hundreds of columns), so that close synonyms will tend to have closely overlapping vectors. You can then perform mathematical operations on these word vectors: for example, if you add all the numbers associated with ‘king’ to all the numbers associated with ‘woman’ and subtract all the numbers associated with ‘man,’ you will usually get a set of numbers close to the ones associated with ‘queen.’
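The king/woman/man/queen arithmetic can be sketched with hand-made toy vectors (these are illustrative stand-ins with three dimensions, not the hundreds of dimensions of vectors actually learned by Word2Vec):

```python
import math

# Hand-made toy 'embeddings'. Dimensions, roughly: royalty, maleness,
# and a filler dimension. Real embeddings are learned from a corpus.
vectors = {
    "king":  [0.9, 0.9, 0.1],
    "queen": [0.9, 0.1, 0.1],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.1],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# king + woman - man, computed component by component.
target = [k + w - m for k, w, m in
          zip(vectors["king"], vectors["woman"], vectors["man"])]

# The nearest word to the resulting vector:
nearest = max(vectors, key=lambda w: cosine(vectors[w], target))
print(nearest)  # 'queen'
```

In a trained model the result only lands *near* ‘queen’ rather than exactly on it, which is why the note says “usually.”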

[5] So it multiplies each input by a given number (say 0.05 or -0.1), and then adds all the results together. The number used is the ‘weight’ of the connection between the two neurons. It is adjusted constantly as part of the ‘learning’ process.

[6] So if we think of an x and a y axis mapping the relationship between the incoming values and the outgoing values, the activation function can introduce curves and bends and even more complicated shapes, enabling the model to learn more complex and intricate patterns in the data. As well as the activation function, there is also something called (again, a little confusingly), a bias term. What is passed to the activation function is typically the weighted sum plus the bias term. What this means is that even when all the incoming values are zero, the neuron can still pass a nonzero value to the next layer. Each neuron has its own bias term. The bias terms will typically be adjusted along with the weights: they are part of what the model is trying to ‘learn.’
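Notes [5] and [6] together describe a single artificial neuron: a weighted sum of inputs, plus a bias term, passed through an activation function. A minimal sketch (using a sigmoid activation; real networks use various activations, and the specific weights here are arbitrary examples):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs plus bias,
    passed through a sigmoid activation, which bends the straight
    line of the weighted sum into an S-shaped curve."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid: squashes z into (0, 1)

# Because of the bias term, the neuron still transmits a signal
# even when every incoming value is zero.
print(neuron([0.0, 0.0, 0.0], [0.05, -0.1, 0.3], bias=0.5))
```

Training consists of nudging `weights` and `bias` (across millions of such neurons) so that the network’s outputs move closer to the desired ones.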

[7] A related distinction is structured vs. unstructured data. Structured data is neatly laid out in a spreadsheet; unstructured data might include things like big dumps of text or images or video. For unstructured data, the training will include a preprocessing stage, with techniques to turn the data into a format that the later training algorithm can work with. For example, if the data consists of images, these are usually converted into pixel intensity values. Then a convolutional neural network can automatically extract features like edges and shapes from the raw pixel data. There is a loose association of supervised learning with structured data, and unsupervised learning with unstructured data. However, unstructured data does not necessarily require unsupervised learning, and unsupervised learning is not exclusively for unstructured data. You can perform supervised learning on largely unstructured data, e.g. by hand-labelling emails as ‘spam’ or ‘not spam’. You can also perform unsupervised learning on structured data, e.g. by performing clustering on a spreadsheet of customer data, to try to segment your customer base.
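The customer-segmentation example can be sketched as a minimal k-means clustering. The data, starting centroids, and cluster count here are invented for illustration; a real project would reach for a library such as scikit-learn:

```python
# Unsupervised learning on structured data: segmenting customers by
# (annual spend, visits per month), with no labels provided in advance.
customers = [(120, 2), (150, 3), (130, 2),      # low-spend shoppers
             (900, 12), (950, 14), (880, 11)]   # high-spend shoppers

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c))
                         for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [tuple(sum(vals) / len(vals) for vals in zip(*cl))
                     if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids, clusters

centroids, clusters = kmeans(customers, centroids=[(0, 0), (1000, 20)])
print(centroids)
```

The algorithm discovers the two customer segments on its own, which is the sense in which clustering is ‘unsupervised’: no one hand-labelled any customer as belonging to either group.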

[8] I hope to explore this story at greater length in another essay about retellings of Pinocchio.

[9] The anthology was published in late 2010 in the US. For citation purposes I use the 2012 date given in the front matter of the UK edition, although some online catalogues list the date as 2011.

[10] In the sense of understanding or capacity to attribute mental states—beliefs, intents, desires, emotions, knowledge, etc.—to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.

[11] For more on Onyebuchi’s ‘How to Pay Reparations: A Documentary’ and Lee’s ‘The Erasure Game’, especially in the context of utopian and dystopian literature, see also my chapter ‘Wellbeing and Worldbuilding’ in The Edinburgh Companion to Science Fiction and the Medical Humanities, ed. Gavin Miller and Anna McFarlane (Edinburgh University Press, 2024). For more on the role of computers in Ursula K. LeGuin’s The Dispossessed, see my article with Elizabeth Stainforth, ‘Computing Utopia: The Horizons of Computational Economies in History and Science Fiction’, Science Fiction Studies, Volume 46, Part 3, November 2019, pp. 471-489, DOI: 10.1353/sfs.2019.0084.

[12] See Zhang, Feng, ‘Algorithm of the Soul: Narratives of AI in Recent Chinese Science Fiction’, in Stephen Cave, and Kanta Dihal (eds), Imagining AI: How the World Sees Intelligent Machines (Oxford, 2023).

[13] Likely in Genevieve Lively and Will Slocombe (eds), The Routledge Handbook of AI and Literature (forthcoming). This also develops the concept of ‘critical design fiction’, which might be used as a counterpart to the DARK concept invoked later in this essay.

[14] See e.g. Huang, J., Shao, H., and Chang, K. C.-C. ‘Are large pretrained language models leaking your personal information?’ In Findings of the Association for Computational Linguistics (2022), pp. 2038–2047.

[15] Other approaches may be possible; this is not something I understand very well. Machine unlearning is an emerging research agenda that is experimenting with fine-tuning, architecture tweaks, and other methods to scrub the influence of specific data points from an already trained model. It also seems feasible that if ‘guard rails’ can be introduced and tweaked with relatively low cost and relatively quickly to remove unwanted behaviours, then similar methodologies might be used to temper the influence of individual texts on model outputs, e.g. using a real-time moderation layer to evaluate the generated outputs just before they are sent to the user. Casual conversations with colleagues in Engineering and Informatics suggest that this may be something of an open problem at the moment.

[16] Misinformative Anticipatory-Residual Knowledge might be a more generous way of putting it, but DARK also embeds a certain aspiration that science fiction writers and other members of science fiction communities can and should recognise this about our science fiction. The MARK, named, becomes the DARK.

[17] For example, the idea that if you are exploited or enslaved then you should probably negotiate peacefully for your freedom instead of resorting to violent uprising; the idea that most or all left wing people are probably secretly Stalinists who can’t wait to purge you; the idea that it is condescending not to consider that some people might prefer to be exploited, and so on. As these ideas grow more and more active in the subtext, the story begins to feel less like an empathetic critique of real problems with left politics from within the left, and more like a kind of concern-trolling from a broadly centrist standpoint. Really rich deliberation and plurality of viewpoints, which is something which often exists in leftist spaces, is always at least a little vulnerable to being mocked for disunity, or to being all lumped together under some relievingly simple formula.




Jo Lindsay Walton is a Research Fellow in Arts, Climate and Technology at the Sussex Digital Humanities Lab. His recent fiction appears in Criptörök (Grand Union, 2023) and Phase Change: New Energy Futures (Twelfth Planet Press, 2022). He is editor-at-large for Vector, the critical journal of the British Science Fiction Association, and is working on a book about postcapitalism and science fiction.

Review of The Sandman, season 1




Ian Campbell

The Sandman. Neil Gaiman, David S. Goyer, and Allan Heinberg. Netflix, 2022.

Netflix and the creative team behind the television adaptation, including executive producer Neil Gaiman, who wrote the story that was published in comic book form (1989-1996), deserve every ounce of praise for The Sandman, especially given the long interval and many false starts at presenting a television series—attempts to adapt the story go all the way back to 1991. Season 1 of the series adapts the first two arcs of the comics: these were published in collected volumes as Preludes and Nocturnes and The Doll’s House. The adaptation is entirely faithful to the spirit of the comics and often hews quite literally to the events and characters therein, with only minor deviations, nearly all of which improve upon the story. The adaptation is a tour de force in essentially every aspect and should be held up as the gold standard by which television versions of well-regarded fantasy and SF literature can be judged.

The story of season 1 begins just after World War I, when an English magus, Roderick Burgess (Charles Dance), conducts a ritual that seals Morpheus (Tom Sturridge), the incarnation of Dream, into a glass prison for a century. When Morpheus finally manages to free himself, he has to first seek out the tools that were stolen from him upon his imprisonment, then rebuild the Dreaming, his realm, and track down those among the dreams and nightmares who escaped into the real world during his absence. Once this is accomplished, he has to deal with a “dream vortex”, a mortal whose powerful dreaming ability threatens both the Dreaming and the real world. The theme running through this is that whereas the Morpheus who was first imprisoned was cold, distant, and not so much deliberately cruel as indifferent to the suffering caused by the actions he felt necessary, the freed Morpheus becomes somewhat more humane. During the season, we are given some of the information necessary to understand that Morpheus is the third of the seven siblings called the Endless; we meet his elder sister Death (Kirby Howell-Baptiste) and his younger twin siblings Desire (Mason Alexander Park) and very briefly Despair (Donna Preston). We do not meet his eldest brother Destiny nor his youngest sister Delirium, and only see a blank rectangle where the middle brother Destruction might be: as we will likely find out in season 2 or 3, Destruction has quit his job and left the family.

There are a number of deviations from the comics in the series, but they all improve upon the story. The timeframe of the story has been bumped from the late 1980s to the 2020s. Brute and Glob are replaced by Gault (Ann Ogbomo), a much better character with a real arc of her own; within the same storyline, it is Jed (Eddie Karanja) rather than Hector who is deluded into thinking he’s the real Sandman. Ethel Cripps (Joely Richardson), Burgess’ lover and Dee’s mother, gets a character arc of her own, linking Dee much more closely to the story of Dream’s tools. The Corinthian is more present as an antagonist throughout the season. It is rather clearer from the start that Desire has it out for Dream and is trying to ensnare or destroy him: this will become a central feature of the overall plot.

There are also a number of casting decisions that created controversy as the show was filming. Notably, when Howell-Baptiste was cast as Death, who in the comics is mostly portrayed as a very pale goth girl, the sort of bottom-feeders who use “woke” as a pejorative pitched a fit about it, with their usual delicacy and respect for others. It’s true that the original image of Death was based on a white woman, Cinamon Hadley (d. 2018), but few outside the right-wing outrage machine believed the fig leaf that casting a black woman in the role was somehow disrespectful to the memory of Hadley. Gaiman provided a model for how to deal with such trolls, remaining forthright yet humane in the face of a barrage of hate and death threats. Several other characters are played by actors of different races than those of the comics: Jed, Rose (Vanesu Samunyai) and Unity (Sandra James-Young) are all black rather than white, and Lucien, the Dreaming’s librarian, a white man in the comics, is played by a black woman, comedian Vivienne Acheampong, and the character is now Lucienne. If you’ve not read the comics, you won’t notice, and if you have read the comics and aren’t a bottom-feeding right-wing troll, you won’t care: as I said above, the acting and writing are top-notch.

One of the ongoing themes across the long series of comics is that the Endless are eternal manifestations of the principles whose names they share: their task is to embody these principles as a means of guiding, punishing or serving as inspiration for mortals. This is done well in season 1, especially in a pair of scenes where Shakespeare (Samuel Blenkin) becomes of interest to Dream because he wants to tell great stories, which is Dream’s magisterium. As the comics progress, it becomes more clear that each of the Endless has a personality that’s more or less opposite to their function: Destiny is clueless, Death perky, Dream a sober realist, Desire firmly unwantable, etc. None of this much manifests in the first two volumes that season 1 adapts, but I’m interested to see what happens as the show goes forward. The contrast between personality and function, and what this does to the Endless—especially Dream, Destruction and Delirium—and how they cope with it, becomes part of the central plotline as the story progresses.

From an academic perspective, two avenues open for consideration of the show in research and teaching. Its take on mythology and the oddly constrained lives of the (semi-)divinely powerful is worth exploration, notably in how Morpheus gradually goes from filling his function because that is what he’s supposed to do to understanding the incompatibility between his humanity and that function. The other avenue is to consider how it is that some adaptations, like this one, are so very good, while others are so very bad: Amazon Prime’s version of The Wheel of Time comprehensively botches both the spirit and the letter of the novels, and its expansion of a few paragraphs of Tolkien’s notes produced the absolute fiasco that is Rings of Power. It’s not a matter of the network: Prime did a great job with The Expanse and Lee Child’s Reacher novels. What choices enable one adaptation to be genuinely moving and another cringeworthy, and to what extent are these artistic decisions rather than business ones? These are all commercial productions, intended to make money, and no matter how much we might wish for art unencumbered by business, that’s not possible now and never truly has been.

Ian Campbell is the editor of SFRA Review.

Review of Star Trek: Strange New Worlds, season 2




Jeremy Brett

Goldsman, Akiva, Alex Kurtzman, and Jenny Lumet, creators. Star Trek: Strange New Worlds, season 2, CBS Television, 2023.

One of the high emotional moments in the second season of Star Trek: Strange New Worlds comes near the end of its strangest event, the musical episode “Subspace Rhapsody” (2.09). Communications officer Nyota Uhura (Celia Rose Gooding), experiencing the heightened emotions that by the Laws of Musicals mandate powerful expression through song, laments her intense loneliness and her sadness over the death of her family, only to proclaim a newfound sense of purpose and belief in the necessity of human connection:

How come everywhere
That I go, I’m solo?
Am I at my best unaccompanied?
My whole life has been “Fix this” and “Save you”
I’ll light the path
And keep us connected
[…]
I absorb all the pain, mm-hmm
I hear everyone’s voice calling my name
Building systems, I strengthen ties that bind
So no one has to be alone.
      

Uhura’s self-realization is amplified one number later, where she sings to the entire U.S.S. Enterprise crew—in an intervention/finale to prevent the destruction of the Federation and half the Klingon Empire—that:

We’re all rushing around
We’re confused and upended
Let’s refocus now
Our bond is imperative
Let’s bring our collective together
As we fight for our lives
      

Followed by the crew’s unified response of:

We know our purpose is
To protect the mission
Our directive
Cause we work better
All together
We overcome
Our obstacles as one. 
     

It is a moment that completes the process by which the show has, over two seasons, transformed both the Enterprise and Starfleet into places of real and secure community in a hostile universe.

The musical is a touchstone for the sentiment surrounding the entire season, centered as it is on characters who, as Uhura sings, build systems—external and internal—to strengthen the ties that bind together individuals living in the dark and vast reaches of space. That sense of community as a bulwark against both an unremittingly dangerous cosmos and deeply buried inner trauma gives SNW a particular emotional resonance that sets it apart from previous iterations of ST. It represents a newfound maturity in the franchise, balancing the traditional progressive and exploratory spirit of ST with a recognition of some of the darker aspects of humanity (and its alien analogues), together with a keen appreciation of the ways in which humor can serve ST as a natural part of the human experience.

Obviously, humor is subjective, but SNW’s comic aspects to me strike a much more natural tone than many of the oft-painful attempts at humor in the original series, The Next Generation, or Voyager. In the episode “Charades” (2.05), for example, Spock (Ethan Peck) is temporarily deprived of his Vulcan genetic code, rendering him completely human at the worst possible time for his future married life and giving him the explosive temperament of a pubescent teenager. Spock’s exploration of the full range of human emotions has a number of funny and farcical moments, but these are artfully and realistically mixed with turmoil over his complicated romantic feelings for Nurse Christine Chapel (Jess Bush) and a newfound understanding of the isolation and rejection that Vulcan culture inflicted on his human mother Amanda. The construction of new personal and relational understandings means the building of these connective systems among the crew of the Enterprise.

Trauma goes hand in hand with past legacies in SNW season 2, leaving few characters untouched. In fact, the title of the second episode, “Ad Astra Per Aspera” (2.02) (Latin for “Through Hardship to the Stars”) could justifiably serve as the theme for the entire season. That episode shows the fallout from the arrest of Enterprise first officer Una Chin-Riley (Rebecca Romijn) for the ‘crime’ of being a genetically altered Illyrian and hiding that fact from Starfleet. Her subsequent trial reveals the unjust and disastrous consequences of a policy made by the Federation out of fear and internalized trauma caused by the Eugenics Wars. That fear resulted in bigotry and forced cultural assimilation towards Illyrians and a most un-Federation conviction that we must be forever what we are born to be. Una was a prisoner of that policy and the chains of secrecy it laid on her, until the idealistic image of unity that Starfleet represents drives her into the hazardous act of passing—Una takes risks because,

     [i]f all those people from all those worlds can work together, side by side, maybe I could, too. Maybe I could be a part of something bigger than myself. Starfleet is not a perfect organization, but it strives to be. And I believe it could be … Ad Astra per Aspera.

SNW posits that we will not reach our human potential among the stars unless we risk exposing who and what we are and, through that adversity, reach a place of healing and transformative change. In a remarkably poignant coda in “Those Old Scientists” (2.07), Una at last receives vindication for her journey of optimistic hardship when, of all people, Lower Decks ensign/ultimate ST fanboy Brad Boimler (Jack Quaid) and fellow ensign Beckett Mariner (Tawny Newsome) cross over from their own series to inform Una that in their time—her future—the motto that inspired Una to create a new life has become Starfleet’s recruitment slogan and Una herself its literal poster child. In Star Trek there is always hope of a better tomorrow and of societal and human progress.    

The trauma of the past has dramatic impact on other characters as well. SNW is set in the (fairly) early aftermath of the horrific Federation-Klingon War, and Starfleet is heavily populated by veterans of that conflict, among them Chapel, Doctor M’Benga (Babs Olusanmokun), and Lt. Erica Ortegas (Melissa Navia). All three suffer both from bitter feelings towards their former adversaries and from serious post-traumatic stress: one particularly harrowing episode—”Under the Cloak of War” (2.08)—deals heavily in flashbacks to the war, in which Chapel and M’Benga both served in a field hospital under fire, watching young officers die horribly and (in M’Benga’s case) committing brutal atrocities in a conflict full of them. The two are united in their inability to explain to outsiders the nature of their ongoing psychological injuries and the isolation they produce; they hurt, and they hurt profoundly enough that it warps their relationships with others. However, they, too, recognize that, as Uhura and M’Benga sing during “Subspace Rhapsody”, “I look around and everyone I see/The pinnacle of guts and resiliency/Death threats are nothing new to us/It takes monumental strength and trust”, and Chapel in a solo song proclaims her joy and readiness at being free to pursue new successes that may provide psychic healing: “The sky is the limit/My future is infinite/With possibilities/It’s freedom and I like it/My spark has been ignited/If I need to leave you [Spock]/I won’t fight it/I’m ready.”

But personal traumas carry their own weight even when intergalactic war is not involved: Captain Christopher Pike (Anson Mount) suffers under the knowledge that he is destined to sustain a critical injury that will leave him paralyzed and disfigured, yet he makes the choice to build a system around acknowledging and welcoming present relationships, including with fellow captain Marie Batel (Melanie Scrofano). He will likely always struggle with the knowledge of his fate, but forming emotional bonds becomes a critical way of coping. Once again, Boimler steps in with surprising pathos, asking Pike, who is planning to celebrate his birthday alone in part to muse over his failure to reconcile with his deceased father, “I’m sorry about your dad. But I wonder, if someday you’re not around anymore, how many people on this ship would wish they had another day to talk to you?” It is a doubly emotional moment because Boimler, being from the future, knows Pike’s final fate as a matter of history but cannot say anything for fear of changing the timeline.

Similarly, security officer La’an Noonien-Singh (Christina Chong) faces emotional difficulties on multiple levels—as the survivor of imprisonment by the Xenomorph-like/reptilian Gorn, she subsumes her own scarring PTSD. As a descendant of the infamous Khan Noonien-Singh, she worries that she, too, is a monster doomed by her genetic heritage; when she confides this to Una’s defense attorney, the lawyer replies that,

They looked down at us [Illyrians] for so long that we began to look down at ourselves. Genetics is not destiny despite what you may have been taught. […] You were not born a monster; you were just born with a capacity for actions, good or ill, just like the rest of us.

The severe and buttoned-up La’an gains a newfound self-confidence, and her emotional range expands even more after confessing to James T. Kirk (Paul Wesley) her feelings for him based on an attraction to an alternate timeline version of Kirk (in “Tomorrow and Tomorrow and Tomorrow” (2.03)). Though he gently turns her down, La’an sees both truth and beauty in the resulting sadness, noting that “I’m glad I took that chance. Maybe I could be someone who takes chances more often.” La’an, as do so many of SNW’s characters, develops newfound emotional maturity in the process of solidifying human connections and building systems of trust and fellowship.

Season 2 of Strange New Worlds centers on the understanding that humans are rife with deep internal conflicts that accompany them into space and inevitably inform their reactions to the universe around them. It asks the audience to consider what baggage we carry around with us as thinking and feeling beings, the realizations we come to about ourselves, and the value of forming found families within which are preserved love, loyalty, and newfound purpose. As ever with the best of ST, and indeed, science fiction in general, what is most human in us is what we carry to the stars and beyond.

Jeremy Brett is a Librarian at Cushing Memorial Library & Archives, where he is both Processing Archivist and the Curator of the Science Fiction & Fantasy Research Collection. He has also worked at the University of Iowa, the University of Wisconsin-Milwaukee, the National Archives and Records Administration-Pacific Region, and the Wisconsin Historical Society. He received his MLS and his MA in History from the University of Maryland – College Park in 1999. His professional interests include science fiction, fan studies, and the intersection of libraries and social justice.

Review of The Wandering Earth II




Mehdi Achouche

The Wandering Earth II. Dir. Frant Gwo, China Film Group Corporation, 2023.

In January 2019, China soft-landed the first lunar probe on the far side of the moon. The next month, The Wandering Earth (Frant Gwo) was released in Chinese theaters and earned more than US$700 million at the box office, remaining to this day the fifth-largest box office success in Chinese cinema and the first major homegrown science fiction production. That the two events should happen almost simultaneously was far from a coincidence, as the nation’s push in the science and technology fields has been accompanied by the dramatic rise of Chinese science fiction, which dreams of even more spectacular technological feats in the near or distant future. The genre in China has been spearheaded since the early 2000s by the works of novelist Liu Cixin, the Hugo-winning author of the eponymous short story (2000) loosely adapted for the screen by Gwo. Judging by the scale of the means deployed by Chinese authorities to welcome the 81st World Science Fiction Convention in Chengdu, Sichuan, last October (a ceremony attended by both Liu and Gwo), the genre is taken very seriously by the government. It might, after all, help provide the means “to grow China’s cultural soft power and the appeal of Chinese culture,” in the words of Xi Jinping, the Chinese leader, earlier that month (Xinhua).

It should be noted, however, that both The Wandering Earth and its 2023 sequel are as much disaster films as they are science fiction features, drawing largely from their U.S. counterparts, especially the Roland Emmerich variety. The “imagination of disaster” so elegantly described by Susan Sontag in the 1960s is in full force in these two films, as audiences can leisurely contemplate the wholesale destruction of entire metropolises and parts of the globe. This is especially the case in The Wandering Earth II, which is narratively a prequel taking place decades before the events of the first film and which can therefore focus on the cataclysms themselves rather than, like the first installment, on their aftermath. However, far from a pessimistic vision of the future, The Wandering Earth II, like its predecessor, is first a celebration of the technological marvels and possibilities that the future seems to hold, allowing humanity and China to overcome all the imaginable and unimaginable obstacles in their path. Although the film revels in destruction, it is first and foremost, as Jenifer Chao writes of the first film, an attempt at building the country’s national image, rebranding it as a technological superpower associated not with a long, glorious past but with a triumphant future (Chao).

Whereas the first film was set in the 2070s and focused on the Earth’s near destruction in the vicinity of Jupiter, the sequel takes place in the 2040s and 2050s, presenting itself as the chronicle of humanity’s early attempts at saving itself. The world’s governments have only recently become aware that the sun is rapidly expanding and will engulf the Earth within the next century. They have started work on what will become known as the Wandering Earth Project—the construction of 12,000 fusion-powered engines which will stop the Earth’s rotation and thrust it out of the sun’s orbit and into deep space, in search of a new home. In due course, audiences are treated to giant waves engulfing New York City (featuring the now traditional shot of the Statue of Liberty almost immersed in water) and meteors streaming across the globe, destroying various landmarks in the process. Urban ruins are also offered to audiences, as the panorama of a frozen Shanghai and its iconic towers recalls similar shots in A.I. Artificial Intelligence (Spielberg, 2001), for instance. This is essentially a demonstration of the newfound expertise of Chinese cinema at employing special effects that are up to par with Hollywood’s—cinema as essentially a technological apparatus, a cinema of attractions that doubles as a demonstration of Chinese technical prowess. If the disaster genre is “a supreme, basic and fundamental example of what cinema can do,” in the words of Stephen Keane in his study of the genre, here it also demonstrates everything that Chinese cinema can now do (5).

At the same time, The Wandering Earth II, even more than its predecessor, largely ignores some of the genre’s stereotypical characters—the greedy businessman, the cowardly stepfather—to focus instead on cooperation and unity. The old-fashioned H.G. Wells dream of a world government is resurrected in the form of a United Earth Government under the clear auspices of China. Anytime (which is often) a Western representative at the United Nations (most notably the U.S. and British ones) doubts the validity of the project and is ready to quit and accept defeat, the wise, old Chinese delegate has sensible words to remind the world of the necessity of global partnership. While careful never to hit the jingoistic tones of a film like Independence Day (Roland Emmerich, 1996), or of even recent Chinese blockbusters like Wolf Warriors (which shares with The Wandering Earth II its lead, Wu Jing), The Wandering Earth II is hard at work highlighting the merits of Chinese leadership. When terrorist attacks threaten the project and lead every other country to give up, China is left alone to heroically finish construction of the prototype engines. While we learn at one point that the U.S. Senate is preparing to opt out of the international partnership, the Chinese delegate addresses the General Assembly and reminds the world that civilization is about helping each other and mending what is broken: “In times of crisis, unity above all.” Shots of the U.N. building in New York always highlight the beauty of the structure or are careful to show the famous knotted gun sculpture and visually associate it with the Chinese delegation. China, we are assured, has the power, the know-how, the motivation and the wisdom to look after the world, contrary to the U.S.

One of the similarities between the disaster film and the war narrative is their focus on the theme of sacrifice, and the film puts it to good use repeatedly. The climax of the film (which really consists of an unrelenting series of crises and climaxes) sees hundreds of senior astronauts from seemingly every nation bringing the world’s entire arsenal of nuclear weapons (no more wars) to the moon and blowing themselves up one by one to destroy the satellite and prevent it from crashing into the earth. This moment is perhaps one of the most emotionally effective in the film, and one of the most interesting visually. Before they arrive on the moon, their approaching flotilla is visualized through a revealing frame within a frame: the film’s hero is holding a hex nut, through which he is framing the entire earth, making it look like a tiny atom in the distance and emphasizing its fragility (fig. 2). Before the focus switches from the foreground (the nut) to the background (the earth and the approaching flotilla), we are given time to read the inscription on the edge of the nut: “made in China” (fig. 1). That a single shot can convey so much meaning (the nut is also an ironic stand-in for the ring the hero could never hand to his love interest, symbolically making humanity as a whole his new love interest) is a testament to the director’s capacity to offer great visuals that do not simply feed the audience’s presumed thirst for mayhem and destruction.

Figure 1: The Earth as seen through the frame of Chinese technology
Figure 2: The Earth as “a tiny, fragile speck in the cosmic ocean”

The Wandering Earth II offers interesting avenues for the comparative study of science fiction and disaster films from the U.S., China, and other countries (South Korea’s 2023 The Moon, for example) and their close connection to nation branding and soft power. The first film has already been largely discussed from such a perspective, but the sequel offers an even stronger case study. 2023 also saw the release of Tencent’s 30-episode TV adaptation of Liu’s The Three-Body Problem (available in many countries on Tencent’s YouTube channel), while Netflix will unveil its own version in the spring of 2024. This offers the potential for further comparative studies of differing perceptions and problematizations of scientific and technological progress across East and West, especially as their respective space programs kick into higher gear in the coming years.


WORKS CITED

Chao, Jenifer. ‘The Visual Politics of Brand China: Exceptional History and Speculative Future’. Place Branding and Public Diplomacy, vol. 19, 2022, pp. 305-316, https://link.springer.com/article/10.1057/s41254-022-00270-6.

Keane, Stephen. Disaster Movies: The Cinema of Catastrophe. 2nd ed., Columbia University Press, 2006.

NASA. ‘Voyager 1’s Pale Blue Dot’. https://science.nasa.gov/resource/voyager-1s-pale-blue-dot/. Accessed 10 Jan. 2024.

Thomala, Lai Lin. ‘The Most Successful Movies of All Time in China 2023’. Statista, 13 Dec. 2023, https://www.statista.com/statistics/260007/box-office-revenue-of-the-most-successful-movies-of-all-time-in-china/. Accessed 10 Jan. 2024.

Wall, Mike. ‘China Makes Historic 1st Landing on Mysterious Far Side of the Moon’. Space.com, 3 Jan. 2019, https://www.space.com/42883-china-first-landing-moon-far-side.html. Accessed 10 Jan. 2024.

Xinhua News Agency. ‘Xi Jinping Thought on Culture Put Forward at National Meeting’. 9 Oct. 2023, https://www.chinatoday.com.cn/China/202310/t20231009_800344309.html. Accessed 10 Jan. 2024.

Mehdi Achouche is an Associate Professor in Anglophone Film and TV Studies at Sorbonne Paris Nord University. He works on the representations of techno-utopianism, transhumanism and ideologies of progress in science fiction films and TV series. He is currently working on a monograph on such representations in films and series from the 1960s and 1970s.