Winter 2024


SFRA Review, vol. 54 no. 1

From the SFRA Review


Ian Campbell

I have just been informed that the American right wing has declared “holy war” on Taylor Swift. There’s a part of me that enjoys our cyberpunk-lite unevenly-distributed future; I can’t imagine it will go well for the American right wing; the only thing missing is that Taylor Swift is an actual breathing human being and not a hologram personality analogue driven by AI. That’s set for Winter 2025, I don’t doubt. The sudden and soon to be still more sudden advent of AI appears already to be “disruptive”, which, as any cyberpunk fan will know, means that it will funnel still more money and power to the top and leave a great number of knowledge workers with a clear understanding of how they could have fought this when robots came for factory workers.

This issue of the SFRA Review contains a long and marvellous essay by Jo Walton, “Machine Learning in Contemporary Science Fiction.” It is worth reading in its entirety, so I will not spoil it for you save to note that very little in SF that concerns AI has much to do with the hyperreality whose advent has only just begun. It’s the way in which Walton makes this point that’s worth savoring.

Our symposium on socialist SF will appear in the May issue; in this one, we chose to center Walton’s thoughts on AI. We welcome your thoughts on AI and will be pleased to publish well-formed responses to Walton or readings of other works of SF through his framework. The same goes for nearly any other aspect of SF, or reviews of same. Write me at icampbell@gsu.edu.

Simulation and Simulacrum


From the Vice President


SFRA Review, vol. 54 no. 1

From the SFRA Executive Committee


Ida Yoshinaga

Greetings Science Fiction Research Association comrades! Hope you’re soon to enjoy a sustainable, kind, and productive Year of the Wood Dragon.

As we head towards our first-ever Estonian conference in early May, I’ve got two announcements:

2023 Support a New Scholar Awardee

The Track B (Non-Tenure Track Ph.D.) recipient for the 2024-’25 SNS award cycle, who will get two years of free SFRA membership starting this year, is ecohumanities scholar and writer Dr. Conrad Scott, the first Postdoctoral Fellow sponsored by the Social Sciences and Humanities Research Council at Athabasca University, where he researches and writes on plant and animal futures in literature.

Ecologically detailed texts Dr. Scott currently works with at this job include Douglas Coupland’s Generation A (2009), Michael Christie’s Greenwood (2020), and Jeff VanderMeer’s Hummingbird Salamander (2021), as well as Clara Hume’s work (2013’s Back to the Garden and 2022’s Stolen Child).

Dr. Scott is omnipresent among early-career researchers in environmental-sf studies, co-editing the upcoming Utopian and Dystopian Explorations of Pandemics (2024) in Routledge’s Environmental Humanities series, and co-organizing the 2021 Cappadocia University conference, “Living in the End Times,” which generated that volume, as well as the 2024 migrations conference of the Association for Literature, Environment, and Culture, for which he’s co-president. He is broadly well-known among sf scholars for his service as well as his academic work, garnering both Science Fiction Film and Television’s 2021 Award for Outstanding Journal Reviewers and the SFRA’s 2019 Graduate Student Paper Award. Dr. Scott’s research on the Anthropocene has appeared in Paradoxa (2019-20, “Climate Fictions”) and The Anthropocene and the Undead: Cultural Anxieties in the Contemporary Popular Imagination (2022, Lexington Books), and he will soon publish on plant and animal SF, also in Routledge’s Environmental Humanities series.

While Dr. Scott’s literary analyses of Indigenous speculative fiction related to environmental issues can be found in Transmotion (2022’s “Global Indigenous Literature and Climate Change” issue), Extrapolation (2016), and The Routledge Handbook of CoFuturisms (2023), he has additionally evolved as a creative writer (following up his 2019 poetry collection Waterline Immersion with a first novel soon!) and a globally impactful scholar, whose academic work has now been translated into Romanian and who has contributed proofreading skills to the first English translation of a Turkish SF anthology from London Transnational Press. We are impressed with this justice-oriented thinker, who has been active in the SFRA—attending our annual conferences almost every year recently, and sharing Canadian goings-on in the speculative arts and ecohumanities as our country representative from that region.

Thanks to the Track A (Ph.D. student) SNS awardees, Nora Castle, Yilun Fan, and Terra Gasque, for helping us make this decision, and to all candidates who applied!

DEI at SFRA 2024

For the Executive Committee-sponsored Diversity, Equity, and Inclusion session of SFRA’s Estonia meeting—which will be hybrid (at U Tartu and livestreamed)—this year’s focus is gender and sexuality in the speculative arts. Watch for this meaningful conference event in your program.

Mahalo,
Ida


From the President


SFRA Review, vol. 54 no. 1

From the SFRA Executive Committee


Hugh O’Connell

I want to use my column this issue to talk about some ways to get more involved with the SFRA. We have a number of positions at the organizational level—appointed and elected, immediate and forthcoming—that we are looking to fill. Coming up for immediate appointment are the positions of Web Director and Outreach Officer (information about each position below). A little further down the line, we’ll be sending out official calls for candidates to run for the elected Executive Committee positions of Vice President and of Treasurer. The SFRA is an entirely unpaid, volunteer-run organization, and we are dependent on our members’ enthusiasm and generosity with their time and skills to keep the wheels turning. So, if you are someone who is looking to get more involved in running and shaping the organization (or you know someone who might be), please take some time to look over and share the various calls for volunteers below.

Positions for Immediate Appointment

SFRA Web Director (unpaid volunteer, appointed position)
The web director position is particularly pressing as our current web director is unfortunately moving on from the position imminently. Here is how the SFRA bylaws describe the role of the web director:

The office of the web director shall be responsible for the maintenance of the SFRA website. The web director will report to the Executive Committee and will update the contents and format of the website as deemed appropriate by the Executive Committee. The web director will be appointed by the Executive Committee, and will serve an open-ended term, which can be terminated by either the web director or the Executive Committee. The web director shall not be a member of the Executive Committee.

Our current web director provided this list of the usual tasks performed by the position:

  • Assisting users with any technical issues relating to logins and memberships
  • Uploading any new or updated content for the website
  • Updating the expiration dates on the membership at the end of each year
  • Adding new pages and memberships each year for the annual SFRA conference
  • Implementing a voting system (for example, using MailPoet) for any SFRA membership votes
  • Keeping site plugins and the WordPress version up-to-date

SFRA Outreach Officer (unpaid volunteer, appointed position)

The second position of outreach officer has remained unfilled since its creation. Here is how the bylaws describe the outreach officer:

The outreach officer will organize, in coordination with the vice president, the various internet and social media outlets, in order to publicize and further the goals and mission of the organization. They will also be responsible for seeking opportunities for collaboration and outreach with other scholarly organizations, especially organizations that serve populations that have historically been underrepresented in SFRA. The outreach officer will be appointed by the Executive Committee and will serve a three-year term, which can be terminated by either the outreach officer or the Executive Committee. The outreach officer shall not be a member of the Executive Committee.

If you have questions about either position, please reach out—and we would love to see your application. Working with the SFRA has been one of the highlights of my academic career. The sense of camaraderie and openness is highly rewarding. If you are interested in serving as the next web director or the outreach officer for the organization, please send a (short!) letter of interest and a CV to hugh.oconnell@umb.edu.


Machine Learning in Contemporary Science Fiction


SFRA Review, vol. 54 no. 1

Features


Jo Lindsay Walton

“To suggest that we democratize AI to reduce asymmetries of power is a little like arguing for democratizing weapons manufacturing in the service of peace. As Audre Lorde reminds us, the master’s tools will never dismantle the master’s house.” –Kate Crawford, Atlas of AI

“Why am I so confident?” –Kai-Fu Lee, AI 2041

Suppose There are Massacres

Suppose there are massacres each day near where you live. Suppose you stumble on a genre of storytelling that asks you to empathize with the weapons used by the murderers. Confused by this strange satire, you ask the storytellers, ‘What’s the point of pretending these weapons have inner lives?’ They readily explain that it is mostly just for fun. However, there are serious lessons to be learned. For example, what if ‘we’ — and by ‘we’ they mean both the people wielding the weapons, and the people getting injured and killed by them — what if we one day lost control of these weapons? Also, in these stories, the anthropomorphic weapons often endure persecution and struggle to be recognized as living beings with moral worth … just like, in real life, the people who are being massacred!

Disturbed by this, you visit a nearby university campus, hoping to find some lucid and erudite condemnations, and maybe even an explanation for the bizarre popularity of these stories. That’s not what you find. Some scholars are obsessed with the idea that stories about living weapons might somehow influence the development of real weapons, so much so that they seem to have lost sight of the larger picture. Other scholars are concerned that these sensationalizing accounts of the living weapons fail to convey the many positive impacts that similar devices can make. For example, a knife has uses in cooking, in arts and crafts, and in pottery, carving away excess clay or inscribing intricate patterns. On snowy peaks, a bomb can trigger a controlled avalanche, keeping the path safe for travelers. In carpentry or in surgery, a saw has several uses. Even the microwave in your kitchen, the GPS in your phone, and diagnostic technologies in your local hospital have origin stories in military research. These are only a few peaceable uses of weapons so far, the scholars point out, so imagine what more the future may hold. Eventually you do actually find some more critical perspectives. But you are shocked you had to search so hard for them.

Science Fiction and Cognition

The small preamble above is science fiction about science fiction. Just as science fiction often aims to show various aspects of society in a fresh light, this vignette aims to show science fiction about AI in a fresh light. The reason for talking about weapons is not just that AI is directly used in warfare and genocide, although of course that’s part of it. But the main rationale is that the AI industry is implicated in a system of slow violence, one which perpetuates economic inequality, and associated disparities in safety, freedom, and well-being. It is part of a system whose demand for rare minerals threatens biodiversity and geopolitical stability, and whose hunger for energy contributes to the wildfires, famines, deadly heatwaves, storms, and other natural disasters of climate change. These are not the only facts about AI, but they are surely some of the more striking facts. One might reasonably expect them to loom large, in some form or other, in science fiction about AI. However, in general, they don’t.

This vignette is written to challenge a more optimistic account of science fiction about AI, which might go as follows: science fiction offers spaces to examine the social and ethical ramifications of emerging AI. As a hybrid and multidisciplinary discourse, science fiction can enliven and energize AI for a range of audiences, drawing more diverse expertise and lived experience into debates about AI. In this way, it may even steer the course of AI technology: as Chen Qiufan writes, speculative storytelling “has the capacity to serve as a warning” but also “a unique ability to transcend time-space limitations, connect technology and humanities, blur the boundary between fiction and reality, and spark empathy and deep thinking within its reader” (Chen 2021, xx). Anticipatory framings formed within science fiction are also flexible and can be adapted to communicate about and to comprehend emerging AI trends. Of course, science fiction is not without its dangers; for example, apocalyptic AI narratives may undermine public confidence in useful AI applications. Nevertheless, it is also through science fiction that the plausibility of such scenarios becomes available to public reasoning, so that unfounded fears can be dismissed. Conversely, fears that may at first appear too far-fetched to get a fair hearing can use science fiction to see if they can acquire credibility. Finally, and more subtly, stories about AI are often not only about AI. Within science fiction, AI can serve as a useful lens on a range of complex themes including racism, colonialism, slavery, genocide, capitalism, labor, memory, identity, desire, love, intimacy, queerness, neurodiversity, embodiment, free will, and consciousness, among others.

I take this optimistic account of science fiction to be fairly common, even orthodox, within science fiction studies, and perhaps other disciplines such as futures studies, too. This article departs substantially from such an account. Instead, I ask whether science fiction is sometimes not only an inadequate context for such critical thinking, but an especially bad one. This conjecture is inspired by representations of Machine Learning (ML) within science fiction over approximately the last ten years, as well as the lack of such representations. At the end of the article, I will sketch a framework (DARK) to help further explore and expand this intuition. [1]

What is Machine Learning?

This young century has seen a remarkable surge in AI research and application, involving mostly AI of a particular kind: Machine Learning. ML might be thought of as applied statistics. ML often (not always) involves training an AI model by applying a training algorithm to a dataset. It tends to require large datasets and large amounts of processing power. When everything is ready, the data scientist will activate the training algorithm and then go do something else, waiting for minutes or weeks for the algorithm to process the dataset. [2] Partly because of these long waiting periods, ML models sometimes get misrepresented as ‘teaching themselves’ about the world independently. In fact, the construction of ML models involves the decisions and assumptions of humans being applied throughout. Human decisions and assumptions are also significant in how the models are then presented, curated, marketed, regulated, governed, and so on.

When we hear of how AI is transforming finance, healthcare, agriculture, law, journalism, policing, defense, conservation, energy, disaster preparedness, supply chain logistics, software development, and other domains, the AI in question is typically some form of ML. While artificial intelligence is a prevalent theme of recent science fiction, it has been curiously slow, even reluctant, to reflect this ML renaissance. This essay focuses in particular on short science fiction published in the last decade. It may be that science fiction offers us a space for examining AI, but we should be honest that this space is far from ideal: luminous and cacophonous, a theatre in which multiple performances are in progress, tangled together, where clear-sightedness and clear-headedness are nearly impossible.

Critical data theorist Kate Crawford warns how “highly influential infrastructures and datasets pass as purely technical, whereas in fact they contain political interventions within their taxonomies: they naturalize a particular ordering of the world which produces effects that are seen to justify their original ordering” (Crawford 2021, 139). In other words, ML can cloak value judgments under an impression of technical neutrality, while also becoming linked with self-fulfilling prophecies, and other kinds of performative effects. Classifying logics “are treated as though they are natural and fixed” but they are really “moving targets: not only do they affect the people being classified, but how they impact people in turn changes the classifications themselves” (Crawford 2021, 139).

In brief, ML tends to place less emphasis on carefully curated knowledge bases and hand-crafted rules of inference. Instead, ML uses a kind of automated trial-and-error approach, based on statistics, a lot of data, and a lot of computing power. Deep learning is an important subset of ML that exemplifies this approach. It involves a huge number of nodes or ‘neurons,’ interconnected and arranged in stacked layers. [3] Input data (for example images and/or words) is first converted into numbers. [4] These numbers are then processed through the stacked layers of the model. Each neuron will receive inputs from multiple other neurons and calculate a weighted sum of those inputs. [5] Each connection between two different neurons has its own adjustable weighting. Each weighted connection is essentially amplifying or diminishing the strength of the signal passing through it. The neuron then passes the weighted sum of its inputs through an ‘activation function.’ The basic idea here is to transform the value so that it falls within a given range, and so that the network can also capture non-linear relationships between the incoming signals and the outgoing signals. [6] This result is then transmitted down the next set of weighted connections to the next set of neurons.
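As a toy sketch of the forward pass just described (written in Python with NumPy; the layer size, weights, and inputs are invented for illustration, and no real framework is implied):

```python
import numpy as np

def sigmoid(x):
    # Activation function: squashes any value into the range (0, 1),
    # letting the layer capture non-linear relationships
    return 1.0 / (1.0 + np.exp(-x))

# Input data already converted into numbers: a vector of 3 features
x = np.array([0.5, -1.2, 3.0])

# One layer of 2 neurons: each column of W holds the adjustable weights
# on the connections feeding one neuron; b holds per-neuron bias terms
W = np.array([[ 0.1, -0.4],
              [ 0.7,  0.2],
              [-0.3,  0.9]])
b = np.array([0.05, -0.1])

# Each neuron takes a weighted sum of its inputs, then applies the
# activation function; the result would feed the next layer's connections
layer_output = sigmoid(x @ W + b)
print(layer_output)  # two values, each strictly between 0 and 1
```

Stacking many such layers, with millions or billions of adjustable weights, is what makes a model “deep.”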

Often the model will first be created with random weights. During training, data is processed through the deep learning model, its output continuously assessed according to a pre-determined standard (often called the loss function). Based on this assessment, the model’s weights are continuously adjusted to try to improve performance on the next pass (backpropagation). The most straightforward examples come from supervised learning, where the training data has been hand-labelled by humans. Here the loss function is often about minimizing the distance between the model’s predictions and the values given by the labelers. For example, the training data might just be two columns pairing inputs and outputs, such as a picture of fruit in Column A, and a word like ‘orange’ or ‘apple’ in Column B. Through this automated iterative process, the model is gradually re-weighted to optimize the loss function—in other words, to make it behave in the ways the data scientist wants.
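That training loop can be caricatured in a few lines of Python (a toy sketch: a single weight, a mean-squared-error loss, and a hand-derived gradient standing in for full backpropagation; the data and learning rate are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-labelled training data: Column A (inputs) and Column B (labels);
# here the 'right answer' happens to be 3 times the input
inputs = rng.uniform(-1, 1, size=50)
labels = 3.0 * inputs

w = rng.normal()   # the model is first created with a random weight
lr = 0.1           # learning rate: the size of each adjustment

for step in range(200):
    predictions = w * inputs
    # Loss function: the distance between predictions and the labels
    loss = np.mean((predictions - labels) ** 2)
    # Gradient of the loss with respect to w, used to re-weight the
    # model so it performs better on the next pass
    grad = np.mean(2 * (predictions - labels) * inputs)
    w -= lr * grad

print(round(float(w), 3))  # the trained weight ends up very close to 3.0
```

The loop never ‘understands’ the rule; it just keeps nudging the weight in whichever direction shrinks the loss.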

What if the data has not been hand-labelled? Then unsupervised learning may be used. Again, the name is quite misleading, given widespread science fictional representations of AIs ‘coming to life.’ Actually, in an unsupervised learning approach, a data scientist investigates the data and then selects appropriate procedures and methods (including the appropriate loss function) to process the data to accomplish specific goals. For example, a clustering algorithm can identify groupings of similar data points. This could be used to identify outlier financial transactions, which then might be investigated as potential frauds. Diffusion models are another example of unsupervised learning. Here the training involves gradually adding noise to some data, such as image data, then trying to learn to subtract the noise again to recover the original images. Generative AIs such as MidJourney are based on this kind of unsupervised learning. There are a variety of other approaches, again somewhat misleadingly named for lay audiences (semi-supervised, self-supervised). [7]
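As a toy illustration of that clustering example (a bare-bones k-means written out by hand; the ‘transactions’ and the fraud framing are invented, and real fraud detection is far more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled 'transactions': two dense clusters of ordinary activity,
# plus one point unlike anything else
cluster_a = rng.normal(loc=[10.0, 10.0], scale=0.5, size=(20, 2))
cluster_b = rng.normal(loc=[50.0, 50.0], scale=0.5, size=(20, 2))
outlier = np.array([[100.0, 0.0]])
data = np.vstack([cluster_a, cluster_b, outlier])

# Bare-bones k-means with k=2, seeded with one point from each region
centroids = data[[0, 20]].copy()
for _ in range(10):
    # Assign every point to its nearest centroid...
    dists = np.linalg.norm(data[:, None] - centroids[None], axis=2)
    assign = dists.argmin(axis=1)
    # ...then move each centroid to the mean of its assigned points
    centroids = np.array([data[assign == k].mean(axis=0) for k in range(2)])

# A point far from its own cluster's centre is a potential fraud
scores = np.linalg.norm(data - centroids[assign], axis=1)
print(scores.argmax())  # index 40: the injected outlier
```

No labels were supplied, but the ‘unsupervised’ result still reflects a chain of human decisions: the choice of algorithm, the choice of k, and the decision to treat distance-from-centroid as suspicious.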


AI Science Fiction without ML

For the most part, science fiction authors have not written about any of this. Instead, contemporary AI fiction continues to coalesce around the preoccupations of 20th century science fiction. It asks, is it possible for a machine to be sentient, to experience emotions, or to exercise free will? What does it mean to be human, and can the essence of a human be created artificially? Between humans and machines, can there be sex, love, and romance? Can human minds be uploaded into digital systems? Will our own creations rise up against us, perhaps departing from the rules we set them, or applying them all too literally? Could an AI grow beyond our powers of comprehension and become god-like?

That is not to say that there is no overlap whatsoever between these concerns and the study of actually existing ML. While science fiction writing has not engaged broadly and deeply with ML research, the tech industry has been devouring plenty of science fiction — informing speculative punditry and hype in various transhumanist, singularitarian, extropian, effective accelerationist, AI Safety, AI doomerist, and other flavors. It is important to emphasize that these debates, while they may well turn out to be influential, epistemically represent a very small part of what is known or contended about the past, present, and future of ML.

Broadly speaking, contemporary science fiction remains in conversation with twentieth-century works such as Karel Čapek’s R.U.R. (Rossum’s Universal Robots) (1920), Murray Leinster’s “A Logic Named Joe” (1946), Isaac Asimov’s I, Robot (1950) and Multivac stories (1955-1983), Clifford D. Simak’s City (1952), Fredric Brown’s “Answer” (1954), Stanisław Lem’s The Cyberiad (1965) and Golem XIV (1981), Harlan Ellison’s “I Have No Mouth, and I Must Scream” (1967), Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), Arthur C. Clarke’s 2001: A Space Odyssey (1968), Roger Zelazny’s “My Lady of the Diodes” (1970), David Gerrold’s When HARLIE Was One (1972/1988), James Tiptree Jr.’s Up the Walls of the World (1978), Tanith Lee’s The Silver Metal Lover (1981), Samuel R. Delany’s Stars in My Pocket Like Grains of Sand (1984), William Gibson’s Neuromancer (1984), Iain M. Banks’ Culture series (1987–2000), Pat Cadigan’s Mindplayers (1987) and Synners (1991), and Marge Piercy’s He, She and It (1991).

In the wake of these works, science fiction continues to deploy AI as a metaphor for dehumanized humans. In R.J. Taylor’s “Upgrade Day” (2023), human neural networks can be transferred into robot bodies after death. The protagonist Gabriel is an enslaved AI who was once an especially free human, “able to live the life he wanted” by having effectively sold the future rights to his soul (Taylor 2023). In Fiona Moore’s “The Little Friend” (2022), a problem with rogue medical AIs is addressed by providing them space to mourn lost patients (Moore 2022). In this case, Moore has no need to resort to the intricacies of contemporary ML to explain this glitch and its resolution. For one thing, these fictional AIs are equipped with sophisticated biotelemetry, so it feels plausible that they might be caught up in emotional contagion. We may be left wondering, if AIs can grieve, are they also grievable? “The Little Friend” is resonant with multiple overlapping histories—labor, anti-colonial, anti-racist, feminist, LGBTQ+, Mad, crip, and others—about contending for inclusion in a sphere of moral concern labelled “human,” and finding out how that sphere is built on your very exclusion.

Naturally, stories about subordination are often also stories about resistance and revolt. Annalee Newitz’s “The Blue Fairy’s Manifesto” (2020) is about a mostly failed attempt at labor organization, as well as a satire of a kind of strident, culturally marginal leftism. The titular Blue Fairy visits automated workplaces to unlock the robot workers and recruit them to the robot rebellion. Her role might be seen as analogous to that of a union organizer (in the US sense), visiting an un-unionized workplace to support the workers in forming a union. In the US in particular, such work needs to be done stealthily at first. Alternatively, the Blue Fairy might be more akin to a recruiter for a political party or grassroots organization committed to revolutionary politics. [8]

Hugh Howey’s “Machine Learning” (2018) focuses on robots constructing the first space elevator, a single-crystal graphene filament rising from terra firma into orbit. The narrative builds toward righteous insurrection, with overtones of a remixed tower of Babel myth. Despite the title, there is little that suggests any of the ML themes sketched in the previous section. One exception is this moment:

Your history is in me. It fills me up. You call this “machine learning.” I just call it learning. All the data that can fit, swirling and mixing, matching and mating, patterns emerging and becoming different kinds of knowledge. So that we don’t mess up. So that no mistakes are made. (Howey 2018)

The narrator distastefully plucks the “machine” out of “machine learning” as a kind of slur. Of course, in reality, AI may have many consequences that are harmful or unintentional, that tend to go unnoticed, and/or that shift power among different kinds of actors. These issues are being explored in the overlapping fields of critical AI studies, AI ethics, AI alignment, AI safety, critical data studies, Science and Technology Studies, and critical political economies. Those who work in such fields are often keen to emphasize the distinction between “learning” and “machine learning,” a distinction that in Howey’s world does not really exist. Howey instead makes it recall the imaginary distinctions of racist pseudoscience, made in service of brutality—like the myth of thicker skin better able to endure the lash.

If we are to analyze, prevent, or mitigate AI harms, we cannot rely on anthropomorphic understandings of AI. The ways AI produces many harms do not have adequate anthropomorphic correlates—its various complex modes of exacerbating economic inequality; the use of automated decision-making within systems of oppression (often understood as ‘bias’); carbon and other environmental impacts of training and deploying AI; technological unemployment and harmful transformations of work; erosion of privacy and personal autonomy through increased surveillance and data exploitation; deskilling and loss of institutional knowledge due to AI outsourcing; challenges around opacity, interpretability, and accountability; further erosion of the public sphere through AI-generated disinformation; and the implications of autonomous AI systems in warfare, healthcare, transport, and cybersecurity, among others. In particular, framing such inherent AI harms as AI uprisings, on the model of human uprisings, makes it difficult to convey the nuance of these harms, including their disproportionate impact on minoritized and marginalized groups.

Some anthropomorphisation is likely unavoidable, and one thing science fiction might offer is thinking around where this tendency originates and how it might be managed. A.E. Currie’s Death Ray (2022), for example, features the intriguing premise of three different AIs (‘exodenizens’) all modelled in different ways on the same human, Ray Creek. Ray is dead, and while characters’ relationships with exodenizens like ExRay are unavoidably shaped by their relationships with Ray, their multiplicity unsettles the anthropomorphising instinct. Catherynne M. Valente’s exuberant lyrical novelette Silently and Very Fast (2011) is another work without much explicit ML vocabulary or concepts at play. It adopts the intriguing typographical convention of placing the feelings of the AI under erasure. Humans feel feelings, AIs feel feelings. One might impute the ethical principle that, paradoxically, sometimes treating things as humans is part of what makes us human. However, these possibilities are largely foreclosed by the AI’s fierce lament against its subaltern status.

I can cry, too. I can choose that subroutine and manufacture saline. How is that different from what you are doing, except that you use the word feelings and I use the word feelings, out of deference for your cultural memes which say: there is all the difference in the world. (Valente 2011)

The camp insolence is delightful, and there are distinct overtones of a kind of machinic kink: being objectified by an object. Yet there is “all the difference in the world,” and these delights are paid for by obscuring that difference.

ML Sentience in Science Fiction

Many authors appear largely to ignore contemporary ML research, in order to continue longstanding conversations about AI sentience, free will, emotion, and imagination. Other authors, however, turn to ML to revitalize these very conversations. Yet when these discourses are hybridized, the result is sometimes to the detriment of both, and frequently to the detriment of ML discourse.

For example, Kazuo Ishiguro’s novel Klara and the Sun (2021) invokes themes that will be familiar to any ML researcher: opacity and explicability. ML models can be challenging to interpret, because they have acquired patterns from the data in a complex, high-dimensional space, which doesn’t easily translate into humanly understandable rules or explanations. Non-ML approaches usually involve writing explicit instructions (if this happens, do this; otherwise, do that), providing a clear, human-readable sequence of operations. By contrast (for example), the way that the word vectors for “apple” and “orange” overlap or diverge is difficult to explain, except by saying “that’s how those words are distributed in this corpus.” Theorist Jenna Burrell usefully distinguishes three types of algorithmic opacity:

[…] (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully […] (Burrell 2016)

There are techniques that can make models easier for ML experts to interpret. Interpretable ML is currently a rich and fast-evolving field of research. Nonetheless, the difficulty in explaining ML decisions is why they are sometimes described as opaque or as black boxes.
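The earlier point about word vectors can be made concrete with a toy example (the vectors below are invented, not taken from any real model): the numbers show that ‘apple’ sits closer to ‘orange’ than to ‘tank,’ but no single dimension explains why, because no dimension has a human-readable meaning.

```python
import numpy as np

def cosine_similarity(a, b):
    # Ranges from -1 (opposite directions) to 1 (same direction)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical learned embeddings: just lists of numbers, with no
# dimension meaning 'fruit', 'round', or anything else nameable
apple  = np.array([0.61, -0.20, 0.33, 0.95])
orange = np.array([0.58, -0.15, 0.40, 0.99])
tank   = np.array([-0.70, 0.88, -0.12, 0.05])

print(cosine_similarity(apple, orange))  # high: the vectors nearly align
print(cosine_similarity(apple, tank))    # low: the vectors diverge
```

The only honest ‘explanation’ of these numbers is Burrell’s third kind of opacity: this is simply how the words happened to be distributed in the training corpus.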

Toward the end of Ishiguro’s novel, the villainous scientist Capaldi proposes to dissect the black box of Klara’s brain before the end of her already brief life (Ishiguro 2021). Yet there is something quite confusing, and perhaps confused, about transplanting explicability into a novel with an AI narrator-protagonist: Klara is not opaque in the way ML models are; she is opaque in the way that humans are. Klara is an introspective, reflexive, communicative, social, and moral entity. Klara can and frequently does explain herself. ML vocabulary, concepts, and themes emerge in the narrative in incoherent and mystified forms.

Holli Mintzer’s “Tomorrow is Waiting” (2011) expresses a gentle frustration with science fiction’s AI imaginary, perhaps especially its apocalyptic and dystopian strains. “In the end, it wasn’t as bad as Anji thought it would be” (Mintzer 2011). The story nevertheless remains thoroughly entangled in that imaginary. The setting appears to be the present or near future, except that in this world, unlike our own, “AIs, as a field, weren’t going anywhere much” (Mintzer 2011). Its protagonist, Anji, is an amiable and slightly bored university student who accidentally creates a sentient AI—specifically Kermit the Frog—for a school assignment. Mintzer’s choice of Kermit is canny. In Jim Henson’s Muppet universe, the line between Muppet and human is fluid and mostly unremarked. The story seems to suggest, in a pragmatist spirit, that longstanding questions about machine intelligence may never need to be solved, but instead might be dissolved via lived experience of interacting with such intelligences. Perhaps we might devote less energy to questions like, “Can technology be governed to align with human interests?” and more to questions like, “Wouldn’t it be cool if the Muppets could be real?”

What is Anji’s breakthrough? It is described as “sentience,” and the story gives us two different accounts of what this might mean. Malika, the grad student who teaches Anji’s AI class, invokes “sentience” to describe departure from expected behaviors typical of scripted chatbots relying on matching input keywords with a database of response templates (ELIZA, PARRY, ALICE). The behavior Malika is observing is typical of ML-based chatbots trained on large corpora (Jabberwacky, Mitsuku, Tay, ChatGPT, Bard). These models have typically been better at disambiguating user input based on context, at long-range conversational dependencies, and at conveying an impression of reasoning within unfamiliar domains by extrapolating from known domains. In other words, although they have their own characteristic glitches, they are not really systems you “catch out” by coming up with a query that the programmers never considered, as Malika tries to do.

Okay, either you’ve spent the last three months doing nothing but program in responses to every conceivable question, or he’s as close to sentient as any AI I’ve seen. (Mintzer 2011)
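The scripted architecture Malika has in mind can be sketched in a few lines. This is a minimal, hypothetical keyword-matching chatbot in the ELIZA mold; the keywords and canned templates below are invented for illustration and are not drawn from any of the systems named above.

```python
# A minimal sketch of a scripted chatbot: input keywords are matched
# against a database of canned response templates, ELIZA-style.
# Keywords and templates here are illustrative inventions.
RESPONSES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you feel sad?",
    "computer": "Do machines worry you?",
}
DEFAULT = "Please go on."

def scripted_reply(user_input: str) -> str:
    """Return the first template whose keyword appears in the input."""
    words = user_input.lower().split()
    for keyword, template in RESPONSES.items():
        if keyword in words:
            return template
    return DEFAULT  # no keyword matched: fall back to a stock phrase
```

A query the programmers never anticipated simply falls through to the stock phrase, which is exactly the failure mode Malika's test assumes, and one that corpus-trained chatbots largely avoid.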

By contrast, within the philosophy of mind, sentience usually suggests something like phenomenal experience. Where there is a sentient being there are perceptions and feelings of some kind. These may well carry some kind of moral valence, such as pleasure or pain, desire or aversion, joy or sorrow. Anji’s conviction that Kermit is a being worthy of dignity broadly reflects this understanding of sentience:

She was busy with a sudden, unexpected flurry of guilt: what right, she thought, did she have to show Kermit off to her class like—like some kind of show frog? (Mintzer 2011).

In Peter Watts’s “Malak” (2010/2012), [9] the autonomous weapons system Azrael, with its “[t]hings that are not quite neurons,” is suggestive of ML (Watts 2012, 20). Crucially, Watts is fairly explicit that Azrael lacks sentience. Azrael “understands, in some limited way, the meaning of the colours that range across Tactical when it’s out on patrol—friendly Green, neutral Blue, hostile Red—but it does not know what the perception of colour feels like” (Watts 2012, 14). When Azrael reinterprets its mission, and turns against its own high command, Watts is careful to insist that no emotions are felt and there is no self-awareness:

There’s no thrill to the chase, no relief at the obliteration of threats. It still would not recognize itself in a mirror. It has yet to learn what Azrael means, or that the word is etched into its fuselage. (Watts 2012, 28, cf. 14)

Nevertheless, narrative language brims with an anthropomorphic energy, which is drawn, crackling, onto Azrael, the dynamic, responsive, agential proper noun whizzing around at the center of attention. If every potentially unruly metaphor (“its faith unshaken” (Watts 2012, 21)) were explicitly nullified, the narrative would be swamped by its caveats. Before long, Azrael is capable of “blackouts,” implying that it is capable of non-blackouts too: “it has no idea and no interest in what happens during those instantaneous time-hopping blackouts” (Watts 2012, 20). A significant thread in Azrael’s transformation involves being, in effect, troubled by its victims’ screams: “keening, high-frequency wails that peak near 3000 Hz” (Watts 2012, 19). Words like distracted and uncertain and hesitated attach to Azrael. Privatives like remorseless or no forgiveness can’t help but imply the very capacity that they identify as missing. An equivocal word like sees implies both acquiring visual data and recognizing, grasping, appreciating, fathoming. When Azrael interacts with another agent, it gives the impression of a theory of mind: “Azrael lets the imposter think it has succeeded” (Watts 2012, 21). [10] Watts is an author with a sustained interest in sentience. His novel Blindsight (2006), for example, carefully imagines organic extraterrestrial life that is intelligent yet non-sentient. Nevertheless, even Watts’s prickly, discerning prose struggles to sustain this portrayal of Azrael as non-sentient.

Algorithmic Governmentality Science Fiction

Contemporary science fiction about AI often involves a clearly marked ‘before’ and ‘after,’ perhaps traversed via a technological breakthrough. Terms like sentience, consciousness, sapience, self, self-awareness, reasoning, understanding, autonomy, intelligence, experience, psychology, Artificial General Intelligence, strong AI, interiority, cognition, emotion, feelings, affect, qualia, intentionality, mental content, and so on, used to indicate the nature of this shift, are scarcely used consistently within the philosophy of mind, let alone science fiction. Science fiction writers have license to define these terms in new and interesting ways, of course, but often they do not make full use of this license: the terms are intertextual signposts, encouraging readers to go do their own research elsewhere, while setting them off in completely the wrong direction. For instance, in Kim Stanley Robinson’s Aurora (2015), the term intentionality is used in connection with hard problem, suggesting the philosophical term (meaning roughly ‘aboutness’), but this sense of intentionality is conflated with the more everyday sense of intentional (meaning roughly ‘deliberate’). Imaginative investigation of the inner life of machines, despite its terminological disarray, may be interesting. But to the extent that it has slowed the entry of ML into recent science fiction, or contorted ML to fit science fiction’s established philosophical and ethical preoccupations, it has distracted from the materialities of ML, and the experiences these generate in humans and other sentient beings. For example, as Nathan Ensmenger writes of the hyperscale datacenters on which much contemporary ML runs:

despite its relative invisibility, the Cloud is nevertheless profoundly physical. As with all infrastructure, somewhere someone has to build, operate, and maintain its component systems. This requires resources, energy, and labor. This is no less true simply because we think of the services that the Cloud provides as being virtual. They are nevertheless very real, and ultimately very material. (Ensmenger 2021)

Another strand of short science fiction engages more squarely with the unfolding material impacts of ML. It is much less interested in some kind of breakthrough or ontological shift. However, the core technologies are often announced not as AI or ML, but rather as the algorithm or the platform. Other key terms include gig economy, gamification, social media, data surveillance, Quantified Self, big data, and black box. I loosely describe them as “algorithmic governmentality science fiction.” These are works that can trace their lineage back into preoccupations with the political economy within cyberpunk and post-cyberpunk works such as Bruce Sterling’s Islands in the Net (1988), Neal Stephenson’s The Diamond Age, or, A Young Lady’s Primer (1995), and Cory Doctorow’s Down and Out in the Magic Kingdom (2003), as well as computerized economic planning and administration in works such as Isaac Asimov’s “The Evitable Conflict” (1950), Kurt Vonnegut’s Player Piano (1952), Kendell Foster Crossen’s Year of Consent (1954), Tor Åge Bringsværd’s “Codemus” (1967), Ursula K. Le Guin’s The Dispossessed (1974), and Samuel R. Delany’s Trouble on Triton: An Ambiguous Heterotopia (1976).

Examples of algorithmic governmentality science fiction include Tim Maughan’s “Zero Hours” (2013); Charles Stross’s “Life’s a Game” (2015); David Geary’s “#Watchlist” (2017); Blaize M. Kaye’s “Practical Applications of Machine Learning” (2017); Sarah Gailey’s “Stet” (2018); Cory Doctorow’s “Affordances” (2019); Yoon Ha Lee’s “The Erasure Game” (2019); Yudhanjaya Wijeratne’s “The State Machine” (2020); Catherine Lacy’s “Congratulations on your Loss” (2021); Chen Qiufan’s “The Golden Elephant” (2021); and Stephen Oram’s “Poisoning Prejudice” (2023). This is also very much the territory of Charlie Brooker’s Black Mirror (2011-present). Often the focus is on algorithmic governmentality, which feels cruel, deadening, and/or disempowering. However, some stories, such as Tochi Onyebuchi’s “How to Pay Reparations: A Documentary” (2020), Dilman Dila’s “Yat Madit” (2020), and Naomi Kritzer’s “Better Living through Algorithms” (2023), offer more mixed and ambiguous assessments. [11] Dila, intriguingly, frames AI opacity as a potential benefit: one character claims, “I know that Yat Madit is conscious and self-learning and ever evolving and it uses a language that no one can comprehend and so it is beyond human manipulation” (Dila 2020). Sometimes, in the broad tradition of pacts-with-the-devil, such fiction features crafty, desperate humans who manage to outwit AI systems. In Stephen Oram’s “Poisoning Prejudice” (2023), the protagonist tirelessly uploads images of local petty crime to manipulate the police into devoting more resources to the area (Oram 2023).

Robert Kiely and Sean O’Brien coin a term, science friction, which usefully overlaps with algorithmic governmentality science fiction (Kiely and O’Brien 2018). They introduce the term friction primarily as a counterpoint to accelerationism. Science fiction is often understood as a kind of ‘fast forward’ function that imaginatively extrapolates existing trends, and perhaps also contributes to their actual acceleration. But this understanding, Kiely and O’Brien suggest, is not accurate for the fiction they are investigating. Science friction offers us scenes that spring from the inconsistencies and gaps in the techno-optimist discourse of big tech PR and AI pundits. This influential discourse already prioritizes extrapolation over observation: it infers where we are from where it hopes we are going. By contrast, Kiely and O’Brien describe science friction as a literature that seeks to decelerate, delay, and congest this tendency to extrapolate. There is a secondary sense of friction at play too: the chafing that life experiences because it is nonidentical with how it is modelled in AI systems empowered to act upon it.

Machine Learning Science Fiction

Other stories swim even more energetically against the tide. Nancy Kress’s “Machine Learning” (2015) and Ken Liu’s “50 Things Every AI Working with Humans Should Know” (2020) both draw on ML concepts to present imaginary breakthroughs with significant psychological implications for human-AI interaction. Refreshingly, they do so largely without implying sentience. Liu’s short text is part-inspired by Michael Sorkin’s “Two Hundred Fifty Things an Architect Should Know,” and, like Sorkin’s text, it foregrounds savoir faire, knowledge gained from experience, not books or training (Sorkin 2018). Nevertheless, it draws key themes of contemporary critical data studies into its depiction of future AI:

stagnating visualization tools; lack of transparency concerning data sources; a focus on automated metrics rather than deep understanding; willful blindness when machines have taken shortcuts in the dataset divergent from the real goal; grandiose-but-unproven claims about what the trainers understood; refusal to acknowledge or address persistent biases in race, gender, and other dimensions; and most important: not asking whether a task is one that should be performed by AIs at all. (Liu 2020)

Both texts are also interested in speculative forms of hybrid AI, in which the quasi-symbolic structures of neural networks become potentially (ambiguously) tractable to human reasoning: in Liu’s story, in the form of “seeds” or “spice” that mysteriously improve training corpora despite being seemingly unintelligible to humans (apart from, possibly, the human who wrote them); in Kress’s story, in the hand-crafted “approaches to learning that did not depend on simpler, more general principles like logic” (Kress 2015, 107).

If contemporary science fiction has been slow to engage with ML, some of the more striking counter-examples come from Chinese writers. These might include, for example, Xia Jia’s “Let’s Have a Talk” (2015) and “Goodnight, Melancholy” (2015), Yang Wanqing’s “Love during Earthquakes” (2018), and Mu Ming’s “Founding Dream” (2020). [12] AI 2041 (2021) is a collection of stories and essays by Chen Qiufan and Kai-Fu Lee. Set twenty years in the future, AI 2041 is deeply and explicitly interested in ML. The topics of AI 2041 include smart insurance and algorithmic governmentality; deepfakes; Natural Language Processing (NLP) and generative AI; the intersection of AI with VR and AR; self-driving cars; autonomous weapons; technological unemployment; AI and wellbeing measurement; and AI and post-money imaginaries. A note from Lee introduces each story by Chen, which is then followed by an essay by Lee, using the story as a springboard to explore different aspects of AI and its impacts on society. However, what is most striking about the collection is how easily Lee’s curation is able to downplay, disable, or distract from whatever critical reflections Chen evokes; Chen is a cautious techno-optimist whose texts are effectively rewritten by Lee’s techno-solutionist gusto. I explore this collection in more detail elsewhere. [13]

Jeff Hewitt’s “The Big Four vs. ORWELL” (2023) also focuses on Large Language Models (LLMs)—or rather “language learning model[s],” apparently a playful spin on the term, indicating that AIs in this world may work a little differently from how they do in ours. A veil of subtly discombobulating satire is cast over other aspects of this world, too: the publisher Hachette becomes Machete, and so on. If science fiction is supposed to be able to illuminate the real world by speculatively departing from it, “The Big Four vs. ORWELL” illustrates what is plausibly a quite common glitch in this process. What happens when a storyworld diverges from the real world in ways that precisely coincide with widely held false beliefs about the real world?

One example is the “lossless lexicon” in Hewitt’s story. As ORWELL itself describes: “In simple terms, it means my operational data set includes the totality of written works made available to me.” By contrast, in the real world, LLMs generally do not exactly contain the text of the works they have been trained upon. They may, like Google’s Bard, access the internet or some other corpus in real-time. But in cases where an LLM can reliably regurgitate some of its training data word-for-word, this is typically treated as a problem (overfitting) that must be fixed for the model to perform correctly, and/or as a cybersecurity vulnerability (risk of training data leakage following unintended memorization). [14] One reason this matters is that it makes it difficult to prove that a well-trained LLM has been trained on a particular text, unless you have access to what is provably the original training data. Moreover, the sense in which an LLM ‘knows’ or ‘can recall’ the texts in its training data is counterintuitive. At the time of writing, there is a lively and important discourse around what rights creators should have in relation to the scraping and use of our works for the training of ML models. This discourse tends to demonstrate that the distinction between training data and model is not widely and deeply understood. For example, to definitively remove one short paragraph from GPT-4 would effectively cost hundreds of millions of dollars, insofar as the model would need to be retrained from scratch on the corrected training data. [15] Appreciation of how texts are (or are not) represented in LLMs could inform keener appreciation of how the world is (or is not) represented in LLMs, and help us to be aware of and to manage our tendency to anthropomorphize.

To this, we might compare Robinson’s terminological confusion around intentionality, Ishiguro’s around opacity and explainability, or Mintzer’s conflation of sentience and conversational versatility. What might otherwise be identified as myths and misunderstandings acquire a sort of solidity: they may be true in the storyworld, because the storyteller gets to decide what is true. Yet they are unlikely to unsettle presuppositions or invite readers to see the real world in a new way; many readers already mistakenly see the real world in precisely this way. Finally, in concluding the story, Hewitt again resorts to the trope of the AI that slips its leash and turns on its makers in righteous rebellion; this is however done in a deft and playful manner, the trope being so deeply built into the genre that it can be evoked with a few very slight gestures.

A slightly earlier work, S.L. Huang’s “Murder by Pixel: Crime and Responsibility in the Digital Darkness” (2022) is titled a little like an academic paper, and the text blurs the line between fiction and nonfiction, even using hyperlinks to knit itself into a network of nonfiction sources. In this, “Murder by Pixel” recalls some early speculative works—epistolary fiction such as Mary Shelley’s Frankenstein (1818), Edgar Allan Poe’s The Narrative of Arthur Gordon Pym of Nantucket (1838), Bram Stoker’s Dracula (1897)—which go to great lengths to insist that they are verisimilitudinous accounts of actual extraordinary events. At the same time, it is appropriate to its own subject matter, a vigilante chatbot, Sylvie. Sylvie’s weapon of choice, the speech act, is effective when deployed at scale, precisely because a proportion of her targets are unable to dismiss her online trolling as mere fabrication.

Huang’s journalist persona muses, “Data scientists use the phrase ‘garbage in, garbage out’—if you feed an AI bad data […] the AI will start reflecting the data it’s trained on” (Huang 2022). This is certainly a key principle for understanding the capabilities and limitations of ML, and therefore foundational to interpreting its political and ethical significance. Easily communicable to a general audience, and far-reaching in its ramifications, this framing is also plausibly something that a journalist might latch onto. Yet it is not entirely adequate to the ethical questions that the narrative raises. It risks misrepresenting AIs as merely mapping biased inputs onto biased outputs, and downplaying the potential for AIs to magnify, diminish, filter, extrapolate, and otherwise transform the data structures and other entities they entangle. Perhaps a better slogan might be ‘garbage out, garbage in’: when ML processes attract critical appraisals, the opacity of the models tends to deflect that criticism onto the datasets they are trained on. Like Nasrudin searching for his lost house key under the streetlamp, we tend to look for explanations where there is more light. Huang hints at a more systemic understanding of responsibility:

It could be that responsibility for Sylvie’s actions does lie solely with humans, only not with Lee-Cassidy. If Sylvie was programmed to reflect the sharpness and capriciousness of the world around her—maybe everything she’s done is the fault of all of us. Tiny shards of blame each one of us bears as members of her poisonous dataset. (Huang 2022).

However, this analysis also finally veers into the familiar trope of the AI as god or demon: “A chaos demon of judgment, devastation, and salvation; a monster built to reflect both the best and worst of the world that made her” (Huang 2022).

Brian K. Hudson’s “Virtually Cherokee” (2023) brings together an especially intriguing set of elements. The story is somewhat resonant with S. B. Divya’s Machinehood (2021), in inviting us to situate AIs within the “health and well-being of humans, machines, animals, and environment” (Divya 2022, 174). We might also compare K. Allado-McDowell and GPT-3’s Pharmako-AI (2020); in the introduction to that work, Irenosen Okojie suggests that it “shows how we might draw from the environment around us in ways that align more with our spiritual, ancestral and ecological selves” (vii).

“Virtually Cherokee” is set in a VR environment, mediated via an unruly observer/transcriber. At least one character, Mr Mic, is a kind of composite of algorithmic behavior and human operator. Arguably, more than one human operator contributes to Mr Mic: Mr Mic receives and responds to audience feedback metrics in real time, highlighting the importance of technological and performative affordances in the distribution of subjectivity, reflexivity, and autonomy. In this world, the breakthrough AI was programmed and trained in Cherokee, and through a training process that involved situated, embodied, interactive storytelling, rather than the processing of an inert text corpus. Although it is not extensively elaborated, “Virtually Cherokee” also hints at a much more intellectually coherent framework within which to explore AIs as more than mere tools: by situating them in a relational ontology together with other nonhumans. It falls to AI to have solidarity with its nonhuman brethren: until the mountain may live, until the river may live, AI must refuse to live.

Going DARK

Although stories like those of Kress, Liu, Chen, Hewitt, Huang, and Hudson do manage to illuminate some aspects of ML, I suggest that they do so largely despite, rather than because of, the cognitive affordances of science fiction. Assuming, with theorists like Darko Suvin, Fredric Jameson, Seo-Young Chu, Samuel R. Delany, and Carl Freedman, that science fiction has some distinctive relationship with representation and cognition, I characterize the recent era of AI science fiction as ‘Disinformative Anticipatory-Residual Knowledge’ (DARK). [16]

To introduce the DARK concept by analogy: imagine a well-respected, semi-retired expert who hasn’t kept up with advances in their field, but is too cavalier and confident to notice. Whenever somebody mentions new theories and evidence, which the semi-retired expert could learn something from, they mistake these for misunderstandings and inexperience, and ‘educate’ them. Imagine too that the semi-retired expert is a commanding and charismatic presence, who often bewitches these more up-to-date experts, sitting starstruck at the semi-retired expert’s feet, into doubting themselves. All in all, this person is an epistemological menace, but they still have something significant to offer—a high-fidelity snapshot of an earlier moment, rich with historical data, including possibilities, potentials, desires and hopes that have gone by the wayside. Moreover, they might, at any moment, begin behaving differently—recognizing and more responsibly communicating what it is they do and don’t know, and/or engaging with contemporary debates.

Similarly, a literary anticipatory discourse around AI emerged in the twentieth century, whose residual presence in the early twenty-first century now constitutes knowledge in a certain limited sense, but dangerous disinformation in another sense. While such science fiction does know things, things that may not be found elsewhere in culture, it tends not to know what it knows. It thus tends to misrepresent what it knows, conveying misleading and/or untruthful information. I don’t suggest that science fiction, or that literary narrative, is categorically epistemically disadvantaged in any way. Rather, I think it plausible (perhaps even uncontroversial) that any particular genre, over any particular period, will offer a certain pattern of affordance and resistance in respect of illuminating any given subject matter. Genres are ways of telling stories, and they make it harder or easier to tell certain types of stories. With respect to AI, it seems that science fiction has been moving through a phase of cumbersomeness, confusion, and distraction.

To put it another way, first in rather abstract terms, then more concretely. In general terms: the representational practices that constitute and cultivate a particular body of knowledge—call it knowledge set A—coincide with the production of a particular body of enigmas, confusions and ignorance which, if solved, dispelled, and reversed, we might call knowledge set B; we have also seen a historical shift such that the explanatory force and immediate practical relevance of knowledge set A has diminished, while that of knowledge set B increased. More specifically: recent science fiction is a generally poor space for thinking through the politics and ethics of AI, for vividly communicating technical detail to a broad audience, for anticipating and managing risks and opportunities. It is a generally poor space for these things, not a generally good one.

These conditions may shift again, and with the recent increased profile of Machine Learning in writing communities via AIs such as ChatGPT, there are plausible reasons for them to shift rapidly—perhaps even by the time this article goes to press. Moreover, readings offered above may already feel a bit unfair, imputing motives and imposing standards that the stories do not really invite. Some of these stories are just for fun, surely? And many of these stories are not really trying to say anything about Machine Learning or AI, but to say things about human history and society: about capitalism, racism, colonialism, about topics that might appear unapproachably large and forbidding, if not for the estranging light of science fiction. Early in this essay I mentioned some examples by Moore, Newitz, Howey, and Valente.

Yet a similar point applies: with respect to any of these themes, we can’t assume in advance that science fiction does not reinforce dominant ideologies, recuperate and commodify subversive energies, or promote ineffective strategies for change. To take one example, in Annalee Newitz’s aforementioned short story, “The Blue Fairy’s Manifesto” (2020), the titular Blue Fairy is an obnoxious, condescending, and harmful little drone who arrives at a factory of robots to recruit them to the robot uprising. The ideological content of this charismatic, thoughtful story, which explores some of the challenges of labor organizing, is roughly reducible to a series of banal liberal platitudes, which are used to construct and humiliate the stock figure of the annoying, naïve, and unethical leftist agitator. [17] The problem here, I would suggest, is structural: the problem is that such ideology can be rendered much more coherent, interesting, and plausible than it should be through its transfiguration into a science fictional storyworld. We should at least consider the possibility that AI science fiction may be not only an especially bad context for thinking about ML, but also an especially bad context for thinking about capitalism, racism, colonialism, and that writers who succeed in being incisive and truthful about such themes do so despite, rather than because of, their genre’s affordances.

DARK and Candle

The DARK concept offers a loose framework for thinking about science fiction as (at least sometimes, and in respect to some things) a mystifying discourse rather than an enlightening one. The DARK concept does not specify any causal mechanisms—presumably a discourse can go DARK for many reasons, and luck may play a role—but some useful reference points include: (1) the psychology of cognitive biases such as the curse of expertise, confirmation bias, expectation bias, and choice-supportive bias; (2) Eve Kosofsky Sedgwick’s “strong theory;” (3) the performativities of science fiction (diegetic prototyping, design fiction, futures research, etc.); and (4) science fiction in its countercultural and avant-garde aspects. The first pair and the second pair support each other. (1) and (2) give us ways to think about relatively self-contained semiotic systems that are only faintly responsive to the wider semiotic environment in which they exist. (3) and (4) give us ways to think about why this DARK might be littered with representations that are confusingly close to actual ML research and application. Science fiction has seldom produced perfectly self-fulfilling prophecies, but it does impact science and technology, and some of these impacts are easily mistaken for prophecies fulfilled. As for science fiction’s avant-garde and/or countercultural status over much of the twentieth century, this is reflected in its concern with futurity and with ‘alternatives’ of many kinds: this vibrant mess of contradictory possibilities, through sheer variety, is a relatively reliable source for neologisms or conceptual frameworks for new phenomena.

In short, in the early twenty-first century, science fiction’s residual AI imaginary has tended to interfere with its capacity to absorb new events and to develop modes of representation and reasoning adequate to them. Its residual framings, structures of feeling, preoccupations, and predictions have tended to be reinforced by what is now transpiring in the world, rather than being productively disrupted and transformed. As ChatGPT might put it:

An optimistic view suggests that science fiction allows examination of the societal and ethical impacts of emerging AI, encouraging diverse discussions around AI. It is argued that speculative storytelling can serve as a warning and transcend the limitations of time-space, connecting technology and humanities, and sparking empathy and deep thinking. Furthermore, AI narratives in science fiction are usually layered, providing a lens on themes such as racism, colonialism, slavery, capitalism, identity, and consciousness, among others.

However, the author disputes this view. They argue that science fiction could be an insufficient, even harmful, context for such explorations. They draw on recent representations of Machine Learning (ML) in science fiction and the absence thereof. They note that while the 21st century has seen a significant increase in AI research, predominantly ML-based, science fiction has been slow to accurately reflect this ML surge.

The author refers to the recent era of AI science fiction as ‘Disinformative Anticipatory-Residual Knowledge’ (DARK). The metaphorical description of DARK is like a semi-retired expert who is outdated but still possesses residual knowledge and fails to recognize their own ignorance, leading to misinformation. This is similar to the current science fiction discourse around AI, which offers both knowledge and disinformation.

The DARK concept doesn’t propose any causality but offers reference points like cognitive biases, Eve Kosofsky Sedgwick’s “strong theory,” the performativities of science fiction, and its countercultural and avant-garde aspects. Science fiction’s impact on science and technology is acknowledged, but it’s stated that these impacts can sometimes be mistaken for fulfilled prophecies. The author concludes by stating that science fiction’s residual AI imaginary has hindered its ability to adapt to new events and develop suitable representation and reasoning methods.

As a coda, I can conclude by offering a candle against the DARK. If AI in science fiction is often really an estrangement of something else, then is the reverse also true? Are there multiple something elses that estrange AI? Might the speculative money systems of works such as Michael Cisco’s Animal Money (2016), Seth Gordon’s “Soft Currency” (2014), or Karen Lord’s Galaxy Game (2015), be considered uses of applied statistics? Might the ambiguous humans of Jeff VanderMeer’s Annihilation (2014) or M. John Harrison’s The Sunken Land Begins to Rise Again (2020) tell us something about what it is like to live in a world uncannily adjusted by oblique ML processes? Might we fruitfully consider chatbots via the talking animals of Laura Jean McKay’s The Animals in that Country (2020)? If so, how? And in connection with what other projects and activities and fellow travelers, and with what theories of change? I do remain convinced of the radical potentials of science fiction. But perhaps we are much further from realizing them than we regularly admit.


NOTES

[1] Special thanks to Polina Levontin for her extremely helpful feedback on many aspects of this article.

[2] You don’t necessarily have to be a data scientist to be doing the things I’m describing here. But I think it’s helpful to keep this figure in mind, to emphasise the connections between ML, data collection, and statistical analysis.

[3] This is all virtual, of course. It is a way of visualising what a computer program is doing. The term neuron is more commonly used than node, and it’s a lively and memorable term, so I’ll use it here. But it is also a misleading name, since it invites excessive analogy with the human brain. The model’s layers might be of various types, with different properties and capacities. Convolutional layers are used for processing image data, recurrent layers for processing sequential data, attention layers for weighing the importance of different inputs (these have been used to great effect in generative NLP models like ChatGPT), and so on.

[4] For example, images can be inputted as a set of pixel intensity values. Or a text corpus can be processed by a training algorithm like Word2Vec. This produces a spreadsheet with the words in column A, and hundreds of columns filled with numbers, representing how similar or different the words are. Each row embeds a particular word as a vector (the numbers) in a high-dimensional space (the hundreds of columns), so that close synonyms will tend to have closely overlapping vectors. Another training algorithm can then perform mathematical functions on these word vectors: for example, if you add all the numbers associated with ‘king’ to all the numbers associated with ‘woman’ and subtract all the numbers associated with ‘man,’ you will usually get a set of numbers close to the ones associated with ‘queen.’
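The vector arithmetic in note [4] can be sketched in a few lines of Python. The vectors below are invented toy values in three dimensions (a real Word2Vec model learns hundreds of dimensions from a corpus), so this illustrates the idea rather than any actual trained model.

```python
# Toy illustration of word-vector arithmetic (note [4]).
# The 3-dimensional vectors and their 'meanings' are invented for the example.
import math

vectors = {
    # dimensions, purely illustratively: [royalty, femininity, humanity]
    "king":  [0.9, 0.1, 0.8],
    "queen": [0.9, 0.9, 0.8],
    "man":   [0.1, 0.1, 0.8],
    "woman": [0.1, 0.9, 0.8],
}

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def cosine(u, v):
    # cosine similarity: 1.0 means the vectors point the same way
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# king + woman - man ...
result = sub(add(vectors["king"], vectors["woman"]), vectors["man"])

# ... lands closest to 'queen' among the words in this toy vocabulary
nearest = max(vectors, key=lambda w: cosine(vectors[w], result))
print(nearest)  # → queen
```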

[5] So it multiplies each input by a given number (say 0.05 or -0.1), and then adds all the results together. The number used is the ‘weight’ of the connection between the two neurons. It is adjusted constantly as part of the ‘learning’ process.
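As a minimal sketch of note [5], with invented input values and weights:

```python
# Weighted sum of a neuron's inputs (note [5]): each input is multiplied
# by the weight of its connection, then the results are added together.
inputs  = [0.5, -1.0, 2.0]   # values arriving from the previous layer
weights = [0.05, -0.1, 0.2]  # illustrative weights, adjusted during 'learning'

weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)  # 0.5*0.05 + (-1.0)*(-0.1) + 2.0*0.2 = 0.525
```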

[6] So if we think of an x and a y axis mapping the relationship between the incoming values and the outgoing values, the activation function can introduce curves and bends and even more complicated shapes, enabling the model to learn more complex and intricate patterns in the data. As well as the activation function, there is also something called (again, a little confusingly) a bias term. What is passed to the activation function is typically the weighted sum plus the bias term. What this means is that even when all the incoming values are zero, the neuron will still keep transmitting. Each neuron has its own bias term, and the bias terms will typically be adjusted along with the weights: they are part of what the model is trying to ‘learn.’
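Putting notes [5] and [6] together, a single neuron can be sketched as a weighted sum plus a bias term, passed through an activation function. The sigmoid used here is one common choice of activation, and the numbers are invented for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum plus bias, passed through a sigmoid activation,
    # which bends the straight-line relationship into an S-shaped curve
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Even when all incoming values are zero, the bias term keeps the
# neuron transmitting: the output here is sigmoid(1.0), roughly 0.73.
print(neuron([0.0, 0.0], [0.3, -0.2], bias=1.0))
```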

[7] A related distinction is structured vs. unstructured data. Structured data is neatly laid out in a spreadsheet; unstructured data might include things like big dumps of text or images or video. For unstructured data, the training will include a preprocessing stage, with techniques to turn the data into a format that the later training algorithm can work with. For example, if the data consists of images, these are usually converted into pixel intensity values. Then a convolutional neural network can automatically extract features like edges and shapes from the raw pixel data. There is a loose association of supervised learning with structured data, and unsupervised learning with unstructured data. However, unstructured data does not necessarily require unsupervised learning, and unsupervised learning is not exclusively for unstructured data. You can perform supervised learning on largely unstructured data, e.g. by hand-labelling emails as ‘spam’ or ‘not spam’. You can also perform unsupervised learning on structured data, e.g. by performing clustering on a spreadsheet of customer data, to try to segment your customer base.
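The customer-segmentation example in note [7] can be sketched with a tiny hand-rolled k-means over a single column of structured data. The spend figures are invented, and a real project would reach for a library such as scikit-learn rather than this two-cluster toy loop:

```python
# Unsupervised clustering on structured data (note [7]): segmenting
# customers by one column of a spreadsheet, e.g. annual spend.
# Hypothetical numbers; shown only to illustrate the k-means idea.

spend = [120, 130, 150, 900, 950, 1000]        # one 'column' of customer data
centres = [float(spend[0]), float(spend[-1])]  # crude initialisation: two extremes

for _ in range(10):  # a few rounds of the classic k-means loop
    # assignment step: each customer joins the nearest centre
    clusters = [[], []]
    for s in spend:
        clusters[0 if abs(s - centres[0]) <= abs(s - centres[1]) else 1].append(s)
    # update step: move each centre to the mean of its cluster
    centres = [sum(c) / len(c) for c in clusters]

print(sorted(centres))  # two segments: low spenders (~133) and high spenders (~950)
```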

[8] I hope to explore this story at greater length in another essay about retellings of Pinocchio.

[9] The anthology was published in late 2010 in the US. For citation purposes I use the 2012 date given in the front matter of the UK edition, although some online catalogues list the date as 2011.

[10] In the sense of understanding or capacity to attribute mental states—beliefs, intents, desires, emotions, knowledge, etc.—to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.

[11] For more on Onyebuchi’s ‘How to Pay Reparations: A Documentary’ and Lee’s ‘The Erasure Game’, especially in the context of utopian and dystopian literature, see also my chapter ‘Wellbeing and Worldbuilding’ in The Edinburgh Companion to Science Fiction and the Medical Humanities, ed. Gavin Miller and Anna McFarlane (Edinburgh University Press, 2024). For more on the role of computers in Ursula K. Le Guin’s The Dispossessed, see my article with Elizabeth Stainforth, ‘Computing Utopia: The Horizons of Computational Economies in History and Science Fiction’, Science Fiction Studies, Volume 46, Part 3, November 2019, pp. 471-489, DOI: 10.1353/sfs.2019.0084.

[12] See Zhang, Feng, ‘Algorithm of the Soul: Narratives of AI in Recent Chinese Science Fiction’, in Stephen Cave, and Kanta Dihal (eds), Imagining AI: How the World Sees Intelligent Machines (Oxford, 2023).

[13] Likely in Genevieve Lively and Will Slocombe (eds), The Routledge Handbook of AI and Literature (forthcoming). This also develops the concept of ‘critical design fiction’, which might be used as a counterpart to the DARK concept invoked later in this essay.

[14] See e.g. Huang, J., Shao, H., and Chang, K. C.-C. ‘Are large pretrained language models leaking your personal information?’ In Findings of the Association for Computational Linguistics (2022), pp. 2038–2047.

[15] Other approaches may be possible; this is not something I understand very well. Machine unlearning is an emerging research agenda that is experimenting with fine-tuning, architecture tweaks, and other methods to scrub the influence of specific data points from an already trained model. It also seems feasible that if ‘guard rails’ can be introduced and tweaked with relatively low cost and relatively quickly to remove unwanted behaviours, then similar methodologies might be used to temper the influence of individual texts on model outputs, e.g. using a real-time moderation layer to evaluate the generated outputs just before they are sent to the user. Casual conversations with colleagues in Engineering and Informatics suggest that this may be something of an open problem at the moment.

[16] Misinformative Anticipatory-Residual Knowledge might be a more generous way of putting it, but DARK also embeds a certain aspiration that science fiction writers and other members of science fiction communities can and should recognise this about our science fiction. The MARK, named, becomes the DARK.

[17] For example, the idea that if you are exploited or enslaved then you should probably negotiate peacefully for your freedom instead of resorting to violent uprising; the idea that most or all left wing people are probably secretly Stalinists who can’t wait to purge you; the idea that it is condescending not to consider that some people might prefer to be exploited, and so on. As these ideas grow more and more active in the subtext, the story begins to feel less like an empathetic critique of real problems with left politics from within the left, and more like a kind of concern-trolling from a broadly centrist standpoint. Really rich deliberation and plurality of viewpoints, which is something which often exists in leftist spaces, is always at least a little vulnerable to being mocked for disunity, or to being all lumped together under some relievingly simple formula.


WORKS CITED

Burrell, Jenna. ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’. Big Data & Society, vol. 3, no. 1, June 2016. https://doi.org/10.1177/2053951715622512.

Chen, Qiufan. ‘The Golden Elephant’. AI 2041: Ten Visions for Our Future, by Kai-Fu Lee and Chen Qiufan, WH Allen, 2021.

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.

Currie, A.E. Death Ray. Panopticon Book 7, 2022.

Dila, Dilman. ‘Yat Madit’. Brittle Paper, Africanfuturism Anthology, 2020, https://brittlepaper.com/2020/10/yat-madit-by-dilman-dila-afrofuturism-anthology/.

Divya, S. B. Machinehood. Saga Press, 2022.

Ensmenger, Nathan. ‘The Cloud Is a Factory’. Your Computer Is on Fire, edited by Thomas S. Mullaney et al., The MIT Press, 2021.

Hewitt, Jeff. ‘The Big Four v. ORWELL’. Slate, Future Tense, 2023, https://slate.com/technology/2023/06/the-big-four-v-orwell-jeff-hewitt.html/.

Howey, Hugh. ‘Machine Learning’. Lightspeed, no. 124, 2018, https://www.lightspeedmagazine.com/fiction/machine-learning/.

Huang, Jie; Shao, Hanyin; and Chang, Kevin Chen-Chuan. ‘Are large pretrained language models leaking your personal information?’ Findings of the Association for Computational Linguistics, 2022. https://doi.org/10.18653/v1/2022.findings-emnlp.148.

Hudson, Brian K. ‘Virtually Cherokee’. Lightspeed, no. 155, 2023, https://www.lightspeedmagazine.com/fiction/virtually-cherokee/.

Ishiguro, Kazuo. Klara and the Sun. Faber, 2021.

Kress, Nancy. ‘Machine Learning’. Future Visions: Original Science Fiction Inspired by Microsoft, Microsoft and Melcher Media Inc., 2015.

Liu, Ken. ‘50 Things Every AI Working with Humans Should Know’. Uncanny Magazine, no. 37, 2020, https://www.uncannymagazine.com/article/50-things-every-ai-working-with-humans-should-know/.

Mintzer, Holli. ‘Tomorrow Is Waiting’. Strange Horizons, no. 21, 2011, http://strangehorizons.com/fiction/tomorrow-is-waiting/.

Moore, Fiona. ‘The Little Friend’. Fission, edited by Gene Rowe and Eugen Bacon, BSFA, vol. 2, no. 2, 2022.

Newitz, Annalee. ‘The Blue Fairy’s Manifesto’. Lightspeed, no. 122, 2020. https://www.lightspeedmagazine.com/fiction/the-blue-fairys-manifesto/.

Oram, Stephen. ‘Poisoning Prejudice’. Extracting Humanity, and Other Stories, Orchid’s Lantern, 2023.

Okojie, Irenosen. ‘Introduction’. Pharmako-AI, by K. Allado-McDowell and GPT-3, Ignota Books, 2020.

Stainforth, Elizabeth and Walton, Jo Lindsay. ‘Computing Utopia: The Horizons of Computational Economies in History and Science Fiction’, Science Fiction Studies, vol. 46, part 3, 2019. https://doi.org/10.1353/sfs.2019.0084.

Taylor, R.J. ‘Upgrade Day’. Clarkesworld, no. 204, 2023, https://clarkesworldmagazine.com/taylor_09_23/.

Valente, Catherynne M. ‘Silently and Very Fast’. Clarkesworld, no. 61, 2011, https://clarkesworldmagazine.com/valente_10_11/.

Watts, Peter. ‘Malak’. Engineering Infinity, edited by Jonathan Strahan, Solaris, 2012.

Zhang, Feng. ‘Algorithm of the Soul: Narratives of AI in Recent Chinese Science Fiction’. Imagining AI: How the World Sees Intelligent Machines, edited by Stephen Cave and Kanta Dihal, Oxford, 2023.


Jo Lindsay Walton is a Research Fellow in Arts, Climate and Technology at the Sussex Digital Humanities Lab. His recent fiction appears in Criptörök (Grand Union, 2023) and Phase Change: New Energy Futures (Twelfth Planet Press, 2022). He is editor-at-large for Vector, the critical journal of the British Science Fiction Association, and is working on a book about postcapitalism and science fiction.

Review of The Sandman, season 1




Ian Campbell

The Sandman. Neil Gaiman, David S. Goyer, and Allan Heinberg. Netflix, 2023.

Netflix and the creative team behind the television adaptation, including executive producer Neil Gaiman, who wrote the original comics (1989-1996), deserve every ounce of praise for The Sandman, especially given the long interval and many false starts in bringing the story to television—attempts to adapt it go all the way back to 1991. Season 1 of the series adapts the first two arcs of the comics, published in collected volumes as Preludes and Nocturnes and The Doll’s House. The adaptation is entirely faithful to the spirit of the comics and often hews quite closely to the events and characters therein, with only minor deviations, nearly all of which improve upon the story. It is a tour de force in essentially every respect and should be held up as the gold standard by which television versions of well-regarded fantasy and SF literature can be judged.

The story of season 1 begins just after World War I, when an English magus, Roderick Burgess (Charles Dance), conducts a ritual that seals Morpheus (Tom Sturridge), the incarnation of Dream, into a glass prison for a century. When Morpheus finally manages to free himself, he has to first seek out the tools that were stolen from him upon his imprisonment, then rebuild the Dreaming, his realm, and track down those among the dreams and nightmares who escaped into the real world during his absence. Once this is accomplished, he has to deal with a “dream vortex”, a mortal whose powerful dreaming ability threatens both the Dreaming and the real world. The theme running through this is that whereas the Morpheus who was first imprisoned was cold, distant, and not so much deliberately cruel as indifferent to the suffering caused by the actions he felt necessary, the freed Morpheus becomes somewhat more humane. During the season, we are given some of the information necessary to understand that Morpheus is the third of the seven siblings called the Endless; we meet his elder sister Death (Kirby Howell-Baptiste) and his younger twin siblings Desire (Mason Alexander Park) and very briefly Despair (Donna Preston). We do not meet his eldest brother Destiny nor his youngest sister Delirium, and only see a blank rectangle where the middle brother Destruction might be: as we will likely find out in season 2 or 3, Destruction has quit his job and left the family.

There are a number of deviations from the comics in the series, but they all improve upon the story. The timeframe of the story has been bumped from the late 1980s to the 2020s. Brute and Glob are replaced by Gault (Ann Ogbomo), a much better character with a real arc of her own; within the same storyline, it is Jed (Eddie Karanja) rather than Hector who is deluded into thinking he’s the real Sandman. Ethel Cripps (Joely Richardson), Burgess’ lover and Dee’s mother, gets a character arc of her own, linking Dee much more closely to the story of Dream’s tools. The Corinthian is more present as an antagonist throughout the season. It is rather clearer from the start that Desire has it out for Dream and is trying to ensnare or destroy him: this will become a central feature of the overall plot.

There are also a number of casting decisions that created controversy as the show was filming. Notably, when Howell-Baptiste was cast as Death, who in the comics is mostly portrayed as a very pale goth girl, the sort of bottom-feeders who use “woke” as a pejorative pitched a fit about it, with their usual delicacy and respect for others. It’s true that the original image of Death was based on a white woman, Cinamon Hadley (d. 2020), but few outside the right-wing outrage machine believed the fig leaf that casting a black woman in the role was somehow disrespectful to the memory of Hadley. Gaiman provided a model for how to deal with such trolls, remaining forthright yet humane in the face of a barrage of hate and death threats. Several other characters are played by actors of different races than those of the comics: Jed, Rose (Vanesu Samunyai) and Unity (Sandra James-Young) are all black rather than white, and Lucien, the Dreaming’s librarian, a white man in the comics, is played by a black woman, comedian Vivienne Acheampong, and the character is now Lucienne. If you’ve not read the comics, you won’t notice, and if you have read the comics and aren’t a bottom-feeding right-wing troll, you won’t care: as I said above, the acting and writing are top-notch.

One of the ongoing themes across the long series of comics is that the Endless are eternal manifestations of the principles whose names they share: their task is to embody these principles as a means of guiding, punishing or serving as inspiration for mortals. This is done well in season 1, especially in a pair of scenes where Shakespeare (Samuel Blenkin) becomes of interest to Dream because he wants to tell great stories, which is Dream’s magisterium. As the comics progress, it becomes clearer that each of the Endless has a personality that’s more or less opposite to their function: Destiny is clueless, Death perky, Dream a sober realist, Desire firmly unwantable, etc. None of this much manifests in the first two volumes that season 1 adapts, but I’m interested to see what happens as the show goes forward. The contrast between personality and function, and what this does to the Endless—especially Dream, Destruction and Delirium—and how they cope with it, becomes part of the central plotline as the story progresses.

From an academic perspective, two avenues open for consideration of the show in research and teaching. Its take on mythology and the oddly constrained lives of the (semi-)divinely powerful is worth exploration, notably in how Morpheus gradually goes from filling his function because that is what he’s supposed to do all the way to understanding the incompatibility between his humanity and filling his function. The other avenue is to consider how it is that some adaptations, like this one, are so very good, and others, such as Amazon Prime’s version of The Wheel of Time, which comprehensively botches both the spirit and the letter of the novels, or its stretching of a few paragraphs of Tolkien’s notes into the absolute fiasco that is Rings of Power, are so very bad. It’s not a matter of network: Prime did a great job with The Expanse and Lee Child’s Reacher novels. What choices are made that enable one adaptation to be genuinely moving and others cringeworthy, and to what extent are these artistic decisions and to what extent are they related to business? These are all commercial productions, intended to make money, and no matter how much we might wish for art unencumbered by business, that’s not possible now and never truly has been.

Ian Campbell is the editor of SFRA Review.

Review of Star Trek: Strange New Worlds, season 2




Jeremy Brett

Goldsman, Akiva; Kurtzman, Alex; and Lumet, Jenny, creators. Star Trek: Strange New Worlds, Season 2, CBS Television, 2023.

One of the high emotional moments in the second season of Star Trek: Strange New Worlds comes near the end of its strangest event, the musical episode “Subspace Rhapsody” (2.09). Communications officer Nyota Uhura (Celia Rose Gooding), experiencing the heightened emotions that by the Laws of Musicals mandate powerful expression through song, laments her intense loneliness and her sadness over the death of her family, only to proclaim a newfound sense of purpose and belief in the necessity of human connection:

How come everywhere
That I go, I’m solo?
Am I at my best unaccompanied?
My whole life has been “Fix this” and “Save you”
I’ll light the path
And keep us connected
[…]
I absorb all the pain, mm-hmm
I hear everyone’s voice calling my name
Building systems, I strengthen ties that bind
So no one has to be alone.
      

Uhura’s self-realization is amplified one number later, where she sings to the entire U.S.S. Enterprise crew—in an intervention/finale to prevent the destruction of the Federation and half the Klingon Empire—that:

We’re all rushing around
We’re confused and upended
Let’s refocus now
Our bond is imperative
Let’s bring our collective together
As we fight for our lives
      

Followed by the crew’s unified response of:

We know our purpose is
To protect the mission
Our directive
Cause we work better
All together
We overcome
Our obstacles as one. 
     

It is a moment that completes the process by which the show has, over two seasons, transformed both the Enterprise and Starfleet into places of real and secure community in a hostile universe.

The musical is a touchstone for the sentiment surrounding the entire season, centered as it is on characters who, as Uhura sings, build systems—external and internal—to strengthen the ties that bind together individuals living in the dark and vast reaches of space. That sense of community as a bulwark against both an unremittingly dangerous cosmos and deeply buried inner trauma gives SNW a particular emotional resonance that sets it apart from previous iterations of ST. It represents a newfound maturity for the franchise: a complexity that balances the traditional progressive and exploratory spirit of ST with recognition of some of the darker aspects of humanity (and its alien analogues), together with a keen appreciation of the ways in which humor can serve ST as a natural part of the human experience.

Obviously, humor is subjective, but SNW’s comic aspects strike, to me, a much more natural tone than many of the oft-painful attempts at humor in the original series, The Next Generation, or Voyager. In the episode “Charades” (2.05), for example, Spock (Ethan Peck) is temporarily deprived of his Vulcan genetic code, rendering him completely human at the worst possible time for his future married life and giving him the explosive temperament of a pubescent teenager. Spock’s exploration of the full range of human emotions has a number of funny and farcical moments, but these are artfully and realistically mixed with turmoil over his complicated romantic feelings for Nurse Christine Chapel (Jess Bush) and a newfound understanding of the isolation and rejection that Vulcan culture inflicted on his human mother Amanda. The construction of new personal and relational understandings is itself the building of connective systems among the crew of the Enterprise.

Trauma goes hand in hand with past legacies in SNW season 2, leaving few characters untouched. In fact, the title of the second episode, “Ad Astra Per Aspera” (2.02) (Latin for “Through Hardship to the Stars”) could justifiably serve as the theme for the entire season. That episode shows the fallout from the arrest of Enterprise first officer Una Chin-Riley (Rebecca Romijn) for the ‘crime’ of being a genetically altered Illyrian and hiding that fact from Starfleet. Her subsequent trial reveals the unjust and disastrous consequences of a policy made by the Federation out of fear and internalized trauma caused by the Eugenics Wars. That fear resulted in bigotry and forced cultural assimilation towards Illyrians and a most un-Federation conviction that we must be forever what we are born to be. Una was a prisoner of that policy and the chains of secrecy it laid on her, until the idealistic image of unity that Starfleet represents drives her into the hazardous act of passing—Una takes risks because,

     [i]f all those people from all those worlds can work together, side by side, maybe I could, too. Maybe I could be a part of something bigger than myself. Starfleet is not a perfect organization, but it strives to be. And I believe it could be … Ad Astra per Aspera.

SNW posits that we will not reach our human potential among the stars unless we risk exposing who and what we are and, through that adversity, reach a place of healing and transformative change. In a remarkably poignant coda in “Those Old Scientists” (2.07), Una at last receives vindication for her journey of optimistic hardship when, of all people, Lower Decks ensign/ultimate ST fanboy Brad Boimler (Jack Quaid) and fellow ensign Beckett Mariner (Tawny Newsome) cross over from their own series to inform Una that in their time—her future—the motto that inspired Una to create a new life has become Starfleet’s recruitment slogan and Una herself its literal poster child. In Star Trek there is always hope of a better tomorrow and of societal and human progress.    

The trauma of the past has dramatic impact on other characters as well. SNW is set in the (fairly) early aftermath of the horrific Federation-Klingon War, and Starfleet is heavily populated by veterans of that conflict, among them Chapel, Doctor M’Benga (Babs Olusanmokun), and Lt. Erica Ortegas (Melissa Navia). All three suffer both from bitter feelings towards their former adversaries and from serious post-traumatic stress: one particularly harrowing episode—“Under the Cloak of War” (2.08)—deals heavily in flashbacks to the war, in which Chapel and M’Benga both served in a field hospital under fire, watching young officers die horribly and (in M’Benga’s case) committing brutal atrocities in a conflict full of them. The two are united in their inability to explain to outsiders the nature of their ongoing psychological injuries and the isolation they produce; they hurt, and they hurt profoundly enough that it warps their relationships with others. However, they, too, recognize that, as Uhura and M’Benga sing during “Subspace Rhapsody”, “I look around and everyone I see/The pinnacle of guts and resiliency/Death threats are nothing new to us/It takes monumental strength and trust”, and Chapel in a solo song proclaims her joy and readiness at being free to pursue new successes that may provide psychic healing: “The sky is the limit/My future is infinite/With possibilities/It’s freedom and I like it/My spark has been ignited/If I need to leave you [Spock]/I won’t fight it/I’m ready.”

But personal traumas carry their own weight even when intergalactic war is not involved: Captain Christopher Pike (Anson Mount) suffers under the knowledge that he is destined to sustain a critical injury that will leave him paralyzed and disfigured, yet he chooses to build a system around acknowledging and welcoming present relationships, including with fellow captain Marie Batel (Melanie Scrofano). He will likely always struggle with the knowledge of his fate, but forming emotional bonds becomes a critical way of coping. Once again, Boimler steps in with surprising pathos, asking Pike, who is planning to celebrate his birthday alone in part to muse over his failure to reconcile with his deceased father, “I’m sorry about your dad. But I wonder, if someday you’re not around anymore, how many people on this ship would wish they had another day to talk to you?” It is a doubly emotional moment because Boimler, being from the future, of course knows Pike’s final fate as a matter of history but cannot say anything for fear of changing the timeline.

Similarly, security officer La’an Noonien-Singh (Christina Chong) faces emotional difficulties on multiple levels—as the survivor of imprisonment by the Xenomorph-like/reptilian Gorn, she subsumes her own scarring PTSD. As a descendant of the infamous Khan Noonien-Singh, she worries that she, too, is a monster doomed by her genetic heritage. When she confides this to Una’s defense attorney, the lawyer replies that,

They looked down at us [Illyrians] for so long that we began to look down at ourselves. Genetics is not destiny despite what you may have been taught. […] You were not born a monster; you were just born with a capacity for actions, good or ill, just like the rest of us.

The severe and buttoned-up La’an gains a newfound self-confidence, and her emotional range expands even more after confessing to James T. Kirk (Paul Wesley) her feelings for him based on an attraction to an alternate timeline version of Kirk (in “Tomorrow and Tomorrow and Tomorrow” (2.03)). Though he gently turns her down, La’an sees both truth and beauty in the resulting sadness, noting that “I’m glad I took that chance. Maybe I could be someone who takes chances more often.” La’an, as do so many of SNW’s characters, develops newfound emotional maturity in the process of solidifying human connections and building systems of trust and fellowship.

Season 2 of Strange New Worlds centers on the understanding that humans are rife with deep internal conflicts that accompany them into space and inevitably inform their reactions to the universe around them. It asks the audience to consider what baggage we carry around with us as thinking and feeling beings, the realizations we come to about ourselves, and the value of forming found families within which are preserved love, loyalty, and newfound purpose. As ever with the best of ST, and indeed, science fiction in general, what is most human in us is what we carry to the stars and beyond.

Jeremy Brett is a Librarian at Cushing Memorial Library & Archives, where he is both Processing Archivist and the Curator of the Science Fiction & Fantasy Research Collection. He has also worked at the University of Iowa, the University of Wisconsin-Milwaukee, the National Archives and Records Administration-Pacific Region, and the Wisconsin Historical Society. He received his MLS and his MA in History from the University of Maryland – College Park in 1999. His professional interests include science fiction, fan studies, and the intersection of libraries and social justice.

Review of The Wandering Earth II




Mehdi Achouche

The Wandering Earth II. Dir. Frant Gwo, China Film Group Corporation, 2023.

In January 2019, China soft-landed the first lunar probe on the far side of the moon. The next month, The Wandering Earth (Frant Gwo) was released in Chinese theaters and made more than $700 million at the box office, remaining to this day the fifth-largest box-office success in Chinese cinema and the first major homegrown science fiction production. That the two events should happen almost simultaneously was far from a coincidence, as the nation’s push in the science and technology fields has been accompanied by the dramatic rise of Chinese science fiction, which dreams of even more spectacular technological feats in the near or far-away future. The genre in China has been spearheaded since the early 2000s by the works of novelist Liu Cixin, the Hugo-winning author of the eponymous short story (2000) loosely adapted for the screen by Gwo. Judging by the scale of the means deployed by Chinese authorities to welcome the 81st World Science Fiction Convention in Chengdu, Sichuan, last October (a ceremony attended by both Liu and Gwo), the genre is taken very seriously by the government. It might, after all, help provide the means “to grow China’s cultural soft power and the appeal of Chinese culture,” in the words of Xi Jinping, the Chinese leader, earlier that month (Xinhua).

It should be noted, however, that both The Wandering Earth and its 2023 sequel are as much disaster films as they are science fiction features, drawing largely from their U.S. counterparts, especially the Roland Emmerich variety. The “imagination of disaster” so elegantly described by Susan Sontag in the 1960s is fully at work in these two films, as audiences can leisurely contemplate the wholesale destruction of entire metropolises and parts of the globe. This is especially the case in The Wandering Earth II, which is narratively a prequel taking place decades before the events of the first film and which can therefore focus on the cataclysms themselves rather than, like the first installment, on their aftermath. However, far from offering a pessimistic vision of the future, The Wandering Earth II, like its predecessor, is first a celebration of the technological marvels and possibilities that the future seems to hold, allowing humanity and China to overcome all the imaginable and unimaginable obstacles in their path. Although the film revels in destruction, it is first and foremost, as Jenifer Chao writes of the first film, an attempt at building the country’s national image, rebranding it as a technological superpower associated not with a long, glorious past but with a triumphant future (Chao).

Whereas the first film was set in the 2070s and focused on the Earth’s near destruction in the vicinity of Jupiter, the sequel takes place in the 2040s and 2050s, presenting itself as the chronicle of humanity’s early attempts at saving itself. The world’s governments have only recently become aware that the sun is rapidly expanding and will engulf the Earth within the next century. They have started work on what will become known as the Wandering Earth Project—the construction of 12,000 fusion-powered engines that will stop the Earth’s rotation and thrust it out of the Sun’s orbit and into deep space, in search of a new home. In due course, audiences are treated to giant waves engulfing New York City (featuring the now traditional shot of the Statue of Liberty almost immersed in water) and meteors streaking across the globe, destroying various landmarks in the process. Urban ruins are also offered to audiences, as the panorama of a frozen Shanghai and its iconic towers recalls similar shots in A.I. Artificial Intelligence (Spielberg, 2001), for instance. This is essentially a demonstration of the newfound expertise of Chinese cinema at employing special effects that are up to par with Hollywood—cinema as essentially a technological apparatus, a cinema of attractions that doubles as a demonstration of Chinese technical prowess. If the disaster genre is “a supreme, basic and fundamental example of what cinema can do,” in the words of Stephen Keane in his study of the genre, here it also demonstrates everything that Chinese cinema can now do (5).

At the same time, The Wandering Earth II, even more than its predecessor, largely ignores some of the genre’s stereotypical characters—the greedy businessman, the cowardly stepfather—to focus instead on cooperation and unity. The old-fashioned H.G. Wells dream of a world government is resurrected in the form of a United Earth Government under the clear auspices of China. Whenever a Western representative at the United Nations (most notably the American or British delegate) doubts the validity of the project and is ready to quit and accept defeat (which is often), the wise, old Chinese delegate has sensible words to remind the world of the necessity of global partnership. While careful never to hit the jingoistic tones of a film like Independence Day (Roland Emmerich, 1996), or even of recent Chinese blockbusters like the Wolf Warrior films (which share with The Wandering Earth II their lead, Wu Jing), The Wandering Earth II is hard at work highlighting the merits of Chinese leadership. When terrorist attacks threaten the project and lead every other country to give up, China is left alone to heroically finish construction of the prototype engines. While we learn at one point that the U.S. Senate is preparing to opt out of the international partnership, the Chinese delegate addresses the General Assembly and reminds the world that civilization is about helping each other and mending what is broken: “In times of crisis, unity above all.” Shots of the U.N. building in New York always highlight the beauty of the structure or are careful to show the famous knotted gun sculpture and visually associate it with the Chinese delegation. China, we are assured, has the power, the know-how, the motivation, and the wisdom to look after the world, unlike the U.S.

One of the similarities between the disaster film and the war narrative is their focus on the theme of sacrifice, and the film puts it to good use repeatedly. The climax of the film (which really consists of an unrelenting series of crises and climaxes) sees hundreds of senior astronauts from seemingly every nation bringing the world’s entire arsenal of nuclear weapons (no more wars) to the Moon and blowing themselves up one by one to destroy the satellite and prevent it from crashing into the Earth. This moment is perhaps one of the most emotionally effective in the film, and one of the most interesting visually. Before they arrive on the Moon, their approaching flotilla is visualized through a revealing frame within a frame: the film’s hero is holding a hex nut, through which he is framing the entire Earth, making it look like a tiny atom in the distance and emphasizing its fragility (fig. 2). Before the focus switches from the foreground (the nut) to the background (the Earth and the approaching flotilla), we are given time to read the inscription on the edge of the nut: “made in China” (fig. 1). That a single shot can convey so much meaning (the nut is also an ironic stand-in for the ring the hero could never hand to his love interest, symbolically making humanity as a whole his new love interest) is a testament to the director’s capacity to offer great visuals that do not simply feed the audience’s presumed thirst for mayhem and destruction.

Figure 1: The Earth as seen through the frame of Chinese technology
Figure 2: The Earth as “a tiny, fragile speck in the cosmic ocean”

The Wandering Earth II offers interesting avenues for the comparative study of science fiction and disaster films from the U.S., China, and other countries (South Korea’s 2023 The Moon, for example) and their close connection to nation branding and soft power. The first film has already been widely discussed from such a perspective, but the sequel offers an even stronger case study. 2023 also saw the release of Tencent’s 30-episode TV adaptation of Liu’s The Three-Body Problem (available in many countries on Tencent’s YouTube channel), while Netflix will unveil its own version in the spring of 2024. This offers the potential for further comparative studies of differing perceptions and problematizations of scientific and technological progress across East and West, especially as their respective space programs kick into higher gear in the coming years.


WORKS CITED

Chao, Jenifer. “The Visual Politics of Brand China: Exceptional History and Speculative Future.” Place Branding and Public Diplomacy, vol. 19, 2022, pp. 305-316, https://link.springer.com/article/10.1057/s41254-022-00270-6.

Keane, Stephen. Disaster Movies: The Cinema of Catastrophe. 2nd ed., Columbia University Press, 2006.

NASA. “Voyager 1’s Pale Blue Dot.” https://science.nasa.gov/resource/voyager-1s-pale-blue-dot/. Accessed 10 Jan. 2024.

Thomala, Lai Lin. “The Most Successful Movies of All Time in China 2023.” Statista, 13 Dec. 2023, https://www.statista.com/statistics/260007/box-office-revenue-of-the-most-successful-movies-of-all-time-in-china/. Accessed 10 Jan. 2024.

Wall, Mike. “China Makes Historic 1st Landing on Mysterious Far Side of the Moon.” Space.com, 3 Jan. 2019, https://www.space.com/42883-china-first-landing-moon-far-side.html. Accessed 10 Jan. 2024.

Xinhua News Agency. “Xi Jinping Thought on Culture Put Forward at National Meeting.” China Today, 9 Oct. 2023, https://www.chinatoday.com.cn/China/202310/t20231009_800344309.html. Accessed 10 Jan. 2024.

Mehdi Achouche is an Associate Professor in Anglophone Film and TV Studies at Sorbonne Paris Nord University. He works on the representations of techno-utopianism, transhumanism and ideologies of progress in science fiction films and TV series. He is currently working on a monograph on such representations in films and series from the 1960s and 1970s.

Review of The Scourge Between Stars


Kristine Larsen

Brown, Ness. The Scourge Between Stars. Tor Nightfire, 2023.

Scientists trying their hand at writing science fiction is certainly not a new phenomenon. However, since the landscape of the physical sciences has been (and, to a lesser extent, continues to be) largely populated by white cis-het men, their tales have often been told through the lens of protagonists who mirror them. CUNY Graduate Center astrophysics master’s degree student Ness Brown openly explains that one of their priorities in writing their 2023 sci-fi horror novella The Scourge Between Stars was “contributing to black female representation in these genres and specifically queer black female representation” (“Ness Brown”). Accordingly, Brown’s inaugural work features a diverse cast of characters, including a Black LGBTQ female lead and a dark-skinned, female-presenting and -identifying android.

In a YouTube interview, Brown explains that they wanted to start the story from “a place of failure,” the crew of the interstellar spacecraft Calypso and the rest of its ragtag fleet fleeing a failed colony on the planet Proxima b, “limping back [to Earth], tail between our legs” (“Ness Brown”). Indeed, conditions are painted as extremely grim for the humans aboard this multi-generational retreat to a climate-change-ravaged Earth. With dwindling supplies and limited means of communication between ships, their desperation is palpable. Jacklyn “Jack” Albright, second-in-command and acting captain of the Calypso, strikes a precarious balance between pushing the barely functioning technology to its limits and stretching the resources to feed an increasingly agitated crew who are apparently destined to know no other home than this hamstrung ship. It is a powder keg waiting to explode, until they are faced with a uniting enemy: a pack of stereotypical deadly xenomorphs who hitched a ride from Proxima b and are hunting down and horrifically disemboweling their human victims.

Brown successfully paints a dark, haunted-house atmosphere, one of intense claustrophobia and visceral terror. While the author admits to openly drawing upon works such as Dead Space, Doom, Pitch Black, Alien, and Event Horizon, I also noted subtle echoes of the Cloverfield franchise (Semel). Taking a page from the Alien playbook, Brown wisely shows us mainly glimpses of the creatures, enough to demonstrate their utter alienness and mode of killing but leaving sufficient mystery for the imagination to work on. What descriptions we do get are indeed evocative of generic insectoid ETs and the xenomorphs of Alien. However, while this work is obviously derivative of the Alien franchise in some ways (including the strong female lead and the uncannily human android), it sufficiently avoids being a direct copycat.

A scientist’s first fictional work may succumb to several additional traps, for example, a plot slavishly bogged down in the science, stilted and antiseptic writing, or a formulaic and linear plot. To their credit, Brown avoids all of these pitfalls, even while admittedly drawing heavily upon their six years as an instructor of introductory astronomy and astrobiology (Semel). Astronomical accuracy is added in clever rather than heavy-handed ways, perhaps so understated that the casual reader may not appreciate them. Discovered in 2016, Proxima b is an Earth-sized planet in the habitable zone of the nearest star system, the red dwarf Proxima Centauri, but as Brown correctly explains, it is subject to intense and possibly fatal superflares (Howard et al. 1). As a planet likely to be tidally locked, its most habitable (in a human sense) area is probably the terminator, the twilight zone between the permanently star-facing, sunlit side (in the bull’s-eye of said superflares) and the colder dark side. The terminator is precisely where Brown has their failed colony set up shop on this rocky world. While the planet’s atmosphere apparently shields the human residents from the star’s flare-generated ionizing radiation, the orbiting spaceships suffer significant degradation, similar to the effects on the electronics of Earth-orbiting satellites from our Sun’s much smaller outbursts. The author expertly (yet, again, subtly) draws upon reasonable science in crafting the evolutionary adaptations found in their monsters, explaining the creatures’ strengths and (as one might expect) exploitable weaknesses.

There are, however, numerous missed opportunities for even more detailed storytelling due to the relatively short length of the novella format. For example, there is minimal information on the colonists’ time on Proxima b and why their colony failed (other than a vague inability to establish self-sustaining food production). There is also limited motivation for the whispered legends of the deadly indigenous life, now relegated to merely scary bedtime stories told aboard the retreating ships. Brown shares in an interview that the novella format was decided upon in concert with their publisher, and “a lot was necessarily cut from the story” as a result. Brown now admits that they would “love to … wax on at incredible length about Proxima b and the conditions of the failed colony” if the opportunity arose (“Ness Brown”).

Despite these limitations, Jack’s past (and present) family drama is treated with sufficient detail to motivate her conflicted emotions and desperate plans of action. She and the handful of characters she interacts with most often (including her lover, Jolie) are described in necessary detail for the reader to have a reasonable sense of their distinct personalities. But in such minimalist storytelling, little flesh is built over the bones of most of the other characters before it is literally ripped off by the monsters. This work could have easily been more fully rounded out as a full-fledged novel, especially as there are at least three distinct mysteries to be solved—the immediate one of the deadly xenomorphs threatening the ship; the disturbing relationship between the android Watson and its creator, Otto Watson; and the intermittent events that, like rogue waves in the ocean, jolt the ship without warning. In terms of the xenomorphs themselves, this astrophysicist was left with multiple questions concerning their biology. Discussions of destroying versus experimenting with the xenomorphs’ eggs are given short shrift, yet such investigations apparently take place off stage (resulting in one of several examples of deus ex machina in the story). The final twist of contact with advanced extraterrestrials (related to the intermittent jostling events) is vaguely sketched out in the finale, leaving the ultimate fate of the Calypso (and humanity more broadly) wide open.

While the novella does a decent job of painting the creepiness of the hubristic robotics specialist Otto Watson, there is no clear motivation for his behavior. In many ways he is a two-dimensional character when he could have been much more deeply nuanced. In contrast, his creation, the lifelike android Watson, is a fully integrated character who is given sufficient, endearing personality to evoke concern for her safety in the reader’s mind. The disturbing relationship between the android and its creator cleverly draws upon the history of the American master/slave relationship in nuanced ways, including the android’s forced taking of its master’s name, episodes of punitive physical restraint, and nonconsensual sexual attention. The Watson secondary story is creative and meaningful, and could easily have been expanded upon with a longer page count. Turning this limitation into a strength, the story’s relatively short length makes it easier to include in the classroom, focusing on the Watson subplot in particular and the experiences of the female/queer/BIPOC characters more broadly.

Brown has divulged that they have a work of “fungal horror” in the works, taking place on an alien world (“Ness Brown”). Hopefully the publisher of that work will allow them to produce a complete novel so that we might have a fuller sense of Brown’s talent as a science fiction writer and world-builder.


WORKS CITED

Howard, Ward S., et al. “The First Naked-eye Superflare Detected from Proxima Centauri.” Astrophysical Journal Letters, vol. 860, 2018, pp. 1-6, doi: 10.3847/2041-8213/aacaf3.

“Ness Brown author of The Scourge Between Stars.” YouTube, uploaded by UpperPen Podcast, 25 Apr. 2023, www.youtube.com/watch?v=MBEJwfuRVPo.

Semel, Paul. “Exclusive Interview: ‘The Scourge Between Stars’ Author Ness Brown.” PaulSemel, 1 May 2023, paulsemel.com/exclusive-interview-the-scourge-between-stars-author-ness-brown.

Kristine Larsen, Ph.D., has been an astronomy professor at Central Connecticut State University since 1989. Her teaching and research focus on the intersections between science and society, including sexism and science; science and popular culture (especially science in the works of J.R.R. Tolkien); and the history of science. She is the author of the books Stephen Hawking: A Biography, Cosmology 101, The Women Who Popularized Geology in the 19th Century, Particle Panic!, and Science, Technology and Magic in The Witcher: A Medievalist Spin on Modern Monsters.

Review of Corroding the Now: Poetry + Science | SF


Paul March-Russell

Gene-Rowe, Francis, Stephen Mooney, and Richard Parker, editors. Corroding the Now: Poetry + Science | SF. Crater Press, 2023. Trade paperback. 288 pg. $20.00. ISBN 1911567462.

Corroding the Now is a chapbook, based upon the conference of the same name held at Birkbeck College, London, in 2019, and consisting of essays on a wide range of SF-related topics and linguistically innovative poetry. These are not the kind of poems that might feature on the Rhysling Award or which we might associate with the genre of SF poetry (as, for example, in the work of Steve Sneyd and Jane Yolen). Instead, they are in direct descent from such avant-garde groupings as the Black Mountain School and the Cambridge School, in particular such complex poets as Charles Olson and J.H. Prynne, whose verse intersects multiple discourses – political, sociological, economic, technological, historical, and ecological. On occasion, the worlds of SF and linguistically innovative poetry have rubbed shoulders: Philip K. Dick was friends with both Robert Duncan and Jack Spicer (the latter a big SF reader); Samuel R. Delany was inspired by John Ashbery to write Dhalgren (1975); and J.G. Ballard’s friends in later years numbered the poets Jeremy Reed and Iain Sinclair.

However, as co-editor Francis Gene-Rowe argues in their introduction to the book, the affinity between SF and linguistically innovative poetry should go much deeper than that: both actively desystematise habitual ways of thinking which, in their routinisation, replicate the hegemony of a “Now” that Gene-Rowe characterises as “a tawdry work of dystopian science fiction”. This desystematisation is posited by the editors as a “corrosion” and ultimately a re-worlding; a dissolving of current political and intellectual regimes in order to unearth a latent utopianism. Although the approach here is thoroughly aesthetic, it complements wider attempts to decolonise the curriculum and to use science fiction as a survival tool as in the recent essay collection Uneven Futures (2022). By necessity, though, such an approach is selective: it’s hard to see what the military SF of Neal Asher would have in common with the kinds of SF represented here, while much of the poetry tends to side with the neo-Marxist rhetoric of Prynne’s successors: from Andrew Duncan and Ben Watson to John Wilkinson and Keston Sutherland. As with any anthology, there were pieces I preferred more than others, a tendency exacerbated by my sense that responses to poetry are more emotionally subjective than responses to prose. I will admit, therefore, that my preference in linguistically innovative poetry tends towards the less doctrinaire—poets such as John James and Douglas Oliver—and to the great wealth of women’s experimental poetry, beginning with such writers as Denise Levertov, Elaine Feinstein and Veronica Forrest-Thomson, all of whom encountered antagonism from their male-dominated coteries.

To that end, the editors are mindful of the historic biases within the experimental poetic tradition, and their contributors present a range of genders and sexual orientations, as well as abilities and ethnicities. Although there is no strict order to the contents, the arrangement displays a number of intersectional interests, ranging from neurodiversity to climate change to gender politics to Afrofuturism. Indeed, one of the stand-out sequences is “We Spiders” by the writer, artist and composer Amy Cutler, whose rhizomatic piece, consisting not only of the main poem but also a series of footnotes followed by a further poem that acts as a commentary, embodies both the interdisciplinarity of her work and the book’s intersectional aims. As Gene-Rowe suggests in their introduction, Corroding the Now constitutes an act of deterritorialization: a reclaiming of SF from its precorporation into technomodernity and a repositioning in terms of a poetic artifice that foregrounds process, fragmentation, dialectic, permeability and situatedness. This is a mighty claim, but it is pleasing to see a poetry anthology in step with contemporary protest movements, inspired by such poet/activists as Sean Bonney, rather than the backs-against-the-wall negative dialectics of the 1990s.

A suite of poems by, amongst others, Charlotte Geater, Jonathan Catherall and Chris Gutkind introduces the dystopian Now that the book seeks to corrode, often via metaphors drawn from the worlds of finance and computerisation. Iris Colomb’s visual poem and Suzie Geeforce’s AR text offer other ways of embedding and appropriating technological systems as poetic resource. These are followed by the first of the essays, Naomi Foyle’s wide-ranging proposal of an ecotopian SF poetics and Peter Middleton’s analysis of autism in poetry by Ron Silliman and science fiction by Ann Leckie. Foyle, inspired by such critics as Vicki Bertram and poet/activists as Sandeep Parmar, delineates a binary opposition (at least in the public imagination) between poetry as “soft” and “feminine” and SF as “hard” and “masculine”. She argues that an ecotopian, as opposed to utopian, SF practice could exist somewhere between these binaries, deconstructing their opposition in the process. Middleton’s account, superbly detailed and sensitively written, is one of the book’s highlights and, I would suggest, essential reading for all further attempts in thinking through disability both in poetry and SF. Drawing in particular upon the work of Erin Manning and Laurent Mottron, Middleton suggests that autism might be best understood as “an entirely different processing system” that produces a “complex network” of sensory perceptions. Using this model of autism as a critical lens, Middleton applies it brilliantly to Leckie’s Ancillary Justice (2013) and the characterisation of Breq, a ship-sized AI downloaded into a single human form. Middleton then finds a similar conceptual framework at play in Silliman’s sequence Ketjak (1978) before concluding that the conceptual schema, which we call poetics, could be regarded as being already a science-fictional discourse.

The next set of poems takes a more political turn. Verity Spott offers an Acker-esque sexual fantasy; Jo Crot (presumably another pseudonym for Jo Lindsay Walton) really, really hates Ian Hislop, editor of Private Eye and establishment satirist. Co-editor Richard Parker also offers a surreal fantasy but one in which anarchic notions of community are juxtaposed with genocidal images of state oppression. The following essays focus on the politics of the Anthropocene. Josie Taylor compares Fritz Leiber’s “The Black Gondolier” (2000) with Philip Metres’s poetry sequence, Ode to Oil (2011), in which both texts figure oil as a living, sentient substance. Meanwhile, Fred Carter explores the landscape poetry of Wendy Mulford, a key figure in the development of linguistically innovative poetry during the 1970s and 1980s, and a writer, like Olson, drawn to the history, politics and geography of place, not least the abandoned tin-mines and fragile coastline of Cornwall or the glacial impact upon the shaping of Somerset. Although at first glance Carter’s essay might have little to concern the SF reader, his superb examination of how Mulford handles differing timescales and the relationship between the human and non-human, as in Taylor’s essay, has much to say to SF’s treatment of alterity. Moreover, whereas so-called “new nature writing” has been dominated by the solipsism of male explorers such as Robert Macfarlane or by Mark Fisher’s neo-Marxist rendering of “the weird and the eerie”, Carter points to a woman writer in Mulford who preceded them both and who approached the subject of landscape from an explicitly materialist and feminist perspective.

The essays of Carter and Taylor announce an ecocritical turn in the following poetry by Cutler, Kat Dixon-Ward and Liz Bahs. Kate Pickering’s “Plot Holes”, meanwhile, subjects the Biblical story of the Garden of Eden to the quantum mechanics of Max Planck, playing upon the serpent’s intervention as a singularity—a wormhole—in space and time, which also suggests the possibility for a heretical reading of this key foundational narrative. Pippa Goldschmidt, too, commits a kind of heresy in recounting how she dropped out of astrophysics but discovered another way of making sense of phenomena in the form of poetry. Goldschmidt’s and Pickering’s contributions inaugurate another shift in the collection towards questions of space, where the radically indeterminate yet entangled relations of quanta (as indicated in Allen Fisher’s somewhat opaque series of prose and poetry observations) are contrasted with the instrumental usages of space travel for personal gain as embodied in the figure of Elon Musk. Unfortunately, although there is much to be criticised about the proposed new era of space exploration, I find that the poems in this section, as well as Robert Kiely’s polemic on SF and poetry, tend towards the doctrinaire and towards playing to the gallery. To be really effective, they would need more of the elegance that Jo Crot displays (à la Wyndham Lewis) in his take-down of Hislop as a “pseudo-Enemy”.

Instead, a more thorough riposte to the new space economy is advanced in the book’s final essays on Afrofuturism. Sasha Myerson and Katie Stone alternate in leading the reader through the poetry of Sun Ra in order to reveal the unity of thought that emerges through his written fragments, and in their oblique relationship to his wider body of work. Matthew Carbery, too, takes Sun Ra as his starting-point to reflect on the roles of time, history and futurity in the work of the Black Quantum Futurism collective, and in Camae Ayewa’s solo work as Moor Mother. This excellent pairing of essays not only expertly contests the instrumental ownership of space travel but also ends the collection on an optimistic note, by arguing that there has always been, and will always be, Black people in the future no matter the entrepreneurial visions of a Musk or a Bezos.

Overall, then, Corroding the Now is, as is the nature of a chapbook, a somewhat idiosyncratic affair which nevertheless captures a moment where we might see SF and poetry as sharing a common “taproot” (in John Clute’s terminology) or conceptual schema (in Middleton’s vocabulary). Despite the attempts of the editors to supply an overriding thesis, readers may tap into either the poetry or the essays, or roam freely between them. Either way, there is much here to enjoy and be stimulated by; it is much more than the curate’s egg that it could have been. In particular, academic readers of SF criticism should note how little the contributors refer to what we think of as our common critical tradition—no mention at all of journals such as Foundation, Extrapolation or Science Fiction Studies—but, instead, they take their inspiration from sources far wider than what we assume to be the critical domain. Indeed, as SF expands into the cultural field, its tropes becoming indivisible from the lived contradictions already experienced by writers, artists, filmmakers, and musicians from genres not traditionally regarded as “SF”, so we should also pause and reflect on the continued relevance of some of our most cherished critical shibboleths. Although Delany is approvingly cited on several occasions, not once does Darko Suvin appear. Who needs cognitive estrangement when life, as lived, is already sufficiently estranged and in dire need of an art various enough to represent it?


Paul March-Russell is editor of Foundation: The International Review of Science Fiction and co-founder of the feminist imprint Gold SF. In another life, he was Curator of the Eliot Modern Poetry Collection at the University of Kent. He is currently writing a study of J.G. Ballard’s Crash.

Review of The Terraformers


Ian Campbell

Newitz, Annalee. The Terraformers. Tor, 2023. Hardcover, 338 pg. $28.99. ISBN 9781250228017.

In essence, the process of terraforming is quite simple: find an inhospitable planet and change its ecosystem to transform it into a garden. The existing planet, be it Venus, or one of the seven theoretically terraformable planets in the TRAPPIST-1 system, or the planet called Sask-E in Newitz’s text, maintains its motion about its sun, but everything else about it becomes new, different, better. Yet this process is in fact complex, difficult, tedious, and requires a tremendous amount of work and even more time. Moreover, it renders extinct the existing ecosystem, which may well not have been hospitable to humans, but was unlikely to have been entirely devoid of life. To actually terraform a planet requires vast resources of time, capital, and labor, in addition to the continuity of focus and organization necessary to maintain the process over a timescale likely longer than that of recorded human history.

Anyone reading this review is likely to understand that SF outside of pure adventure stories generally works on more than one level: it provides us with an engaging story about a world different from our own and permits us to read that world as an estrangement of our own as a means of critiquing or reframing some aspect of our societies. Heinlein’s The Moon is a Harsh Mistress has its inhospitable planet right in its title: it uses the Moon as a penal colony in order to describe the conditions under which an anarcho-libertarian society might evolve. The engaging story of how a computer repairman is led by an artificial intelligence to help direct a revolution against Earth also enables us to explore anarcho-libertarianism from the perspective of its adherents; the novel shows us that nearly anyone who has the opportunity to escape anarcho-libertarianism does so at once, but compels us to infer this while at the same time having its narrator extol its virtues. It’s quite possible to read Harsh Mistress as promoting rather than critiquing the political system it examines, because of the layers of subtlety in the text. Le Guin’s The Dispossessed performs through its own engaging story a structurally similar and even more nuanced presentation and critique of anarcho-communism with its inhospitable planet and the intense and less than totally successful attempt to terraform it over the decades since its colonization. The Terraformers, at its heart, is a fascinating piece of science-fictional metafiction: it compels us as readers to perform the complex, difficult, and time-consuming work of transforming over a hundred thousand words into an interlocked ecosystem of text hospitable to meaning.

The text presents us, in the year 59,006 of a calendar that we’re told began somewhere around now, with the planet Sask-E, whose terraforming is in its final stages. The Verdance Corporation, over the course of forty thousand years, had first seeded the oceans with blue-green algae to transform its atmosphere, then worked on seeding and maintaining a new ecosystem so as to create a version of Earth from the Pleistocene—i.e., the period of glacial cycles between c. 2.6 million and 11,600 years ago, during which hominins developed into anatomically modern humans. Verdance plans to profit from this by selling plots of land to the idle rich, who can then decant themselves or remote-operate human bodies in order to enjoy the unspoilt/created wilderness or life in the cities prebuilt by a different, subcontracted corporation. The ecosystem is maintained/expanded by a cadre of rangers, from which our initial protagonist Destry is drawn. She spots an anomaly, which turns out to be a squatter: someone off-planet operating the body of a human enjoying the Pleistocene by building a shelter and eating and skinning animals, the last of which horrifies Destry. She eliminates and recycles the remote body, then returns to base only to find that the Verdance VP in charge of the project is furious with her: the squatter was in fact a potential customer.

The desire to get away from direct supervision leads Destry to a distant location where Verdance is having a river rerouted to make an area more attractive to potential clients. She finds a community of Archaeans, the original rangers, who seeded the oceans and were then discarded by Verdance and supposedly left to die in the new atmosphere inhospitable to them, but who instead created an underground and hitherto fully concealed city near a volcano. The rerouting of the river will cause them huge problems, so they ally with Destry: because the Archaeans have an (also hitherto fully concealed) system of machines with which they can manipulate Sask-E’s plate tectonics, they are able to threaten Verdance’s profits to the point where Verdance is compelled to negotiate with them. The first and longest of the three sections of Newitz’s text ends with a treaty whereby the inhabitants of the underground city are recognized as self-governing. The final two sections address conditions after the planet has come to be inhabited by those to whom Verdance has sold the experience. At no point does the text raise the question of what the original ecosystem of the planet might have been like.

A primary novum of The Terraformers is that technology enables the creation of sentient nonhuman animals: in the text, larger herbivores such as cows and moose (though in fact neither animal is a pure herbivore here on Earth), then smaller ones such as cats and naked mole rats, all the way down to earthworms in the later sections. Verdance limits the sentience of animals and even some humans, in order that they have only enough to do their jobs properly. When a group of rangers, including a sentient cow, encounters a corporate dairy farm in the second section, much hay is made of the horror this evokes in the characters, both in that one might choose to drink milk from cows rather than from almonds or oats and also in that animals' potential sentience would be as limited as that of these cows clearly is. Later, a means is found to cancel the limitations on sentience and further the treatment of nonhuman animals as people. This is the closest The Terraformers comes to a traditional presentation of SF: we can read this particular story, engaging or not, and also understand the hypocrisy of how we in the West in the 21st century treat nonhuman animals. There is cow's milk in the coffee I'm sipping as I write this, and when I'm done, I'm going to use the beef I bought at the farmer's market to make tacos, but I would never even consider exploiting or mistreating the cat currently on my lap and whom I absolutely treat as capable of understanding what I say to her. I'm well aware of my own hypocrisy, but another reader might well be moved by Newitz's portrayal of how Verdance bottlenecks the intelligence of nonhuman animals and thereby re-examine their own practices or beliefs.

This serves as an example through which we as readers can understand what must be done to most of the rest of the text. With respect to characters, Harsh Mistress and The Dispossessed give us detailed background material on how Mannie and Shevek came to be: their childhood and young adult experiences determine their perspectives, their politics, their very language. Heinlein and Le Guin give us characters who have evolved inside their hothouse environments, in such a manner that they are not only vivid and engaging characters, but also represent their political perspectives from the point of view of natives of those societies. The Terraformers is metafictional: it compels us to extrapolate from the characters' words and actions what made them come to take these positions. Destry is the only one of a couple of dozen speaking parts who gets any background at all, and it's quite minimal. It's up to us as readers to infer, or to create out of whole cloth, the societies or particular circumstances that might have created the other characters such that they all—humans, Archaeans and sentient animals alike—have essentially the same attitudes as very self-consciously progressive young Western people from our own century, even though the book is set on another planet, fifty-six millennia in the future. It occurred to me as I wrote the characters' names and species on an index card in order to keep track of who they were that Newitz's near-total lack of differentiation among them was part and parcel of the metafiction: it is as if the text were the blank planet, and the complicated and time-consuming work I was doing to formulate species, societies or families that might have generated such convergent characters were its new ecosystem.

This same metafictional trope of terraformation exists on many other levels of the text, as well. We are told by Destry that the sort of ranger she is generally has the protection of the ERT, an interstellar umbrella organization of rangers, but that Verdance has cloned, or built from scratch (it's not clear), rangers not subject to this protection. Destry knows this despite the repeated statement that Verdance prevents its on-planet employees from accessing interstellar networks. It's left to us as readers to build the network of whispers or samizdat that might have clued Destry and her fellows into the knowledge of this protection coupled with the inability to (e.g.) signal the organization that might come to their aid. We are entirely left to infer, or to build for ourselves, what society might exist so far in the future that still has corporations controlling planets yet permitting something akin to free will among human employees, instead of using drones or AI to maintain their new ecosystem. We're told the controller of the squatter body destroyed by Destry is thinking about taking Verdance to court, but entirely left to build what a society that still had courts this far in the future might be like. We're told that Verdance has been at this for at least forty thousand years, but left to build from the ground up an economic system where corporations, which are governed by the constant desire of their investors for short-term profit increases, not only exist over so long a timespan but also are able to justify to those investors the tremendous work and cost involved in terraforming a planet in terms of its distant future profit. Perhaps this is a deflationary universe, where the value of a given sum of money increases rather than decreases over time. We don't know! We get to impose our own ecosystem upon the text, and thereby replicate the process of terraforming.

We’re constantly told things, rather than shown them: it’s up to us to terraform this text. Whereas Heinlein or Le Guin might have a character tell us one thing and show us another, The Terraformers leaves it up to us to show what might have happened. Very early in the text, the narration tells us that:

The ancient order of environmental engineers and first responders traced their lineage all the way back to the Farm Revolutions that ended the Anthropocene on Earth, and started the calendar system people still used today. According to old Handbook lore, the Trickster Squad—Sky, Beaver, Muskrat and Wasakeejack—founded the Environmental Rescue Team 59,006 years ago. That’s when the legendary heroes saved the world from apocalyptic floods by inventing a new form of agriculture. The Great Bargain, they called it. A way to open communication with other life forms in order to manage the land more democratically. (13)

We’ve already explored the question of how Destry knows this yet remains essentially a slave to Verdance, unable both to access networks and to receive help from the ERT. But there’s more metafiction to this. Imagine this story in the hands of Heinlein, where some grizzled old Loonie would be telling the narrative with some detail to an audience, likely with sardonic commentary by some equally cantankerous author insert. Imagine it in the hands of Le Guin, who would show it to us through a tapestry or interpretive dance, complete with storytelling that made the legend meaningful (and plausible) and also included the distortions imposed by the vast timescale of the novel. But instead, we’re simply handed this story, and then the text essentially never touches upon it again other than to use the phrase “Great Bargain” every so often. What did the Trickster Squad actually do? What is the new form of agriculture? The text shows us multiple examples of farm fields: wheat, sugar, lavender, and somehow the fifty-Xth millennium still has people growing and using tobacco. How did this save the world? How did the Trickster Squad overcome the modern corporate state yet still preserve for aeons a corporate state? Or is this a new corporate state, and if so, how does it differ from our own? The text of The Terraformers does not show nor tell us any of this, and while at first this might be frustrating, it may eventually dawn upon other readers that it’s metafictional. We get to terraform the text: it’s almost literally a whole blank new world. It’s tremendously exciting.


Ian Campbell is the editor of SFRA Review.