Features
Monotheistic Ethics in Caprica: AI, Governance, and Queer Futurity
Nathan Lamarche
It doesn’t concern you, sister, that kind of absolutist view of the universe? Right and wrong determined solely by a single all-knowing, all-powerful being whose judgement cannot be questioned, and in whose name the most horrendous of acts can be sanctioned without appeal? (Caprica, “Pilot”)
This is the question asked in the pilot episode of Caprica, the prequel to Battlestar Galactica that depicts both the emergence of sentient artificial intelligence and the cultural conflict between polytheistic and monotheistic structures of belief and moral values. Agent Duram, a man devoutly committed to the polytheistic status quo, stands theistically opposed to Sister Clarice, the headmistress of the Athenian Academy. Clarice portrays herself publicly as polytheistic, yet secretly leads a cell of the Soldiers of the One, the militarised terrorist branch of the monotheistic church of the colonies, zealots responsible for the recent bombing of a train. It is with irony, then, that shortly after this bombing, he tells her, “Know your enemy,” and she echoes back, “Love your enemy, Agent Duram. That is what we followers of Athena believe” (“Pilot”). This black-and-white commentary on a monotheistic system of ethics mirrors real-world criticism of religion’s treatment of queer communities. Abram Brummett, in his paper on queer reproductive rights and access to assisted reproductive technologies, argues that “conscience claims against LGBT individuals ought to be constrained because the underlying metaphysic—that God has decreed the LGBT lifestyle to be sinful—is highly implausible” (272). The Soldiers of the One use similarly illogical justifications, these same conscience claims, for their acts of terror. As Brummett notes, and as we can see in the current global political climate, queer communities face oppression as a result of these “conscience claims,” with some clinics in the United States using overt policies or subtle methods to limit access to reproductive healthcare on moral grounds (273). There are also growing political threats to queer rights (Moreau). Which decisions require moral justification to enact? Who decides what’s right and what’s wrong? Who opts to restrict queer rights? What systems might humanity use to make those decisions tomorrow?
The AI machines—cylons—in Caprica and Battlestar Galactica carry a code of ethics that positions a right and a wrong, an objective good and evil, and in Battlestar Galactica, this code of ethics in service of their god is used to justify a surprise attack that results in the near-total annihilation of the human race. As it turns out, Agent Duram was fundamentally correct in his assessment of dichotomous ethics. The actions taken by the cylons over the course of Battlestar Galactica mirror modern colonial oppressive regimes, and those same black-and-white codes of dichotomous ethics are used to justify or excuse their actions. At one point in the series, the cylons decide, entirely without remorse, that the genocide was a mistake (sorry! our bad, oopsies, no harm done), and that it would be just the best idea, really, to live with the humans on New Caprica, a planet the humans had believed would be a refuge from the machines. The cylons trespass on this new world, impose their own system of law and order, strip the colonised humans of their rights, abduct and detain or execute dissidents, and commit countless horrific acts in the name of living together in harmony (“Lay Down Your Burdens,” “Occupation”). Colonisation is the antithesis of queer identity and freedom, both in fiction and in practice. Conversations bridging Native studies and queer theory, for instance, recognise the persistence of heteropatriarchal structures wrought by colonialist regimes that impose a disappearance not only on Indigenous peoples but on queer peoples as well, and hold that the queering of decolonisation is an essential step in having such conversations (see Smith; Morgensen 2010; Morgensen 2016; Abu-Assab and Eddin).
This discourse of erasure, and the parallel movement in popular society to promote queer rights and pride in opposition to the status quo of heteronormativity, conforms to the theory of queer futurity: that queerness is a being rather than merely a doing, and that the horizon is ever-onwards (Muñoz). And yet the other side of the coin remains true. With every step forward, the tide of oppression carries us back. Colonisation is intrinsic, pervasive, and fundamental to the very core of our society, just as the cylon pursuit is fundamental and intrinsic to the lives of the humans in Battlestar Galactica, post-genocide. The values our world holds against queerness are those of invisibility. As with Indigeneity, society wants it gone, out of sight, in a place where it can be ignored and shunned. The claim so often made, that “I don’t care what you do, as long as it’s not in my face,” reflects societal values. Yet how many advertisements use heterosexual appeal as a selling point? Heteronormativity as a default state, by its very nature, suppresses the existence of queernormativity. It is a deep, ingrained concept even in queer communities, at both the institutional and social level (Van der Toorn et al.). It isn’t necessarily that someone is at fault, nor committing wilful acts of anti-queering, but the message is so deep, so institutionalised through customs and social norms, that this outcome is inevitable, embedded at the subconscious social level (Rafanell and Gorringe).
We must consider an important point: who are the people working to develop AI? The majority are extremely well educated and well paid, with positions across all fields at OpenAI, for example, easily earning hundreds of thousands to millions of dollars in annual salary in 2024 (Levels.Fyi). What proportion of developers are queer? How many are Indigenous? Black? Asian? How many are women? How many come from countries not considered part of “the West”? We have innate biases, and AI draws from our instructions, our experiences, our words, and from the data fed to it, which is mostly from western English sources and carries societal biases. Thus, the information going into AI is biased information, which makes AI biased in turn. And when we code in how AI should act and react based on various ethical standards, those standards are innately biased, because our conceptions of ethics are not necessarily universal. Where you are on the planet makes a great difference, for instance, to whether marrying your first cousin is socially acceptable, or, more relevant here, whether homosexuality is deemed morally wrong.
As Damien Patrick Williams puts it, “We must continually ask, who is in the room when we make the decisions that influence, shape, or even determine research directions?” (251–52). These questions are essential for shaping our future reality. Algorithms are inherently neutral; they lack the capacity for ethical considerations. Those considerations instead rest with us and how we design AI, and in turn, with which of us are responsible for creating that AI. The perspectives present when the people in positions of power make decisions about how to implement AI are the same perspectives that have created persistent harm in our society; they will create harm again, with or without this new system.
In Battlestar Galactica, the outsourcing of public power away from human beings and towards the cylons on New Caprica, where the cylons and their perspectives become synonymous with the law, creates a violated space devoid of human-centric rights and legal systems. Everything becomes black and white in this world. A human insurgency arises, but instead of prompting any re-evaluation of the merit of the system or the ethics of the cylons’ very presence, the insurgency is instantly deemed evil. Even the families of insurgents, even those simply associated with insurgents, are put on a list for execution. The cylon invasion is not merely an occupying force; it is a system of justice and law that bypasses human moderation. The power of decision making is taken out of human hands.
The parallel drawn here between the cylon occupation of New Caprica and our current political landscape is again not merely one of plutocratic or late-stage capitalist structures. Artificial intelligence threatens the limited level of control we do have in that system. The same outsourcing of queer power to a heteronormative society likewise deprives queer people of their rights. This is where Agent Duram’s observations hit so strongly. The cylons were not only originally created in Caprica; the first sentient AI is modelled identically on her creator, a victim of the train bombing. She retains her creator’s personality, emotional ties, expressions, and innate biases, including religious beliefs. Imposing our own subconscious biases on AI will result in a regime, a new human age, an “AI Empire” built on the very foundations of oppression. To avoid this reproduction, we must redevelop from the ground up, starting with our core philosophies and the innate assumptions surrounding AI’s conception (Tacheva and Ramasubramanian). Yet, “Even the most thoughtful and thoroughgoing intervention cannot come close to confronting its deep roots” (Tacheva and Ramasubramanian 10), which lie in subconscious oppression. Proposed approaches to improving the ethics of AI development include training, policy writing, and the consideration of potential world impact (Xivuri and Twinomurinzi), but they fall short of diversifying hiring practices and incorporating underrepresented codes of ethics. What happens when an oppressive regime backed by religious connotations of sin decides to use AI to maliciously pursue queer communities? What happens when an individual, organisation, or nation opts to create AI for exactly this purpose?
Outsourcing our public power away from human beings would result in a violation of our rights and legal systems (Liu et al.). Isherwood notes that “queer theology with its postmodern roots asks us to distrust any master narrative” (1349), and in this case, the master narrative is not that of a divine being but of a machine god, a purveyor of all our deepest and darkest secrets, our flaws and biases. The development of AI itself creates a master narrative. Just as the narrative in Caprica parallels monotheistic ethics that ultimately justify a cylon genocide against humanity, we must be cautious of single-minded codes of morality in the directional development of AI. A single woman’s avatar formed the framework of Caprica’s cylons and their eventual extermination of the human race; developer homogeneity likewise narrows world views and creates a disproportionate risk of harm to queer communities, wherever they may be.
Schneider’s Beyond Monotheism: A Theology of Multiplicity notes the archaic nature of a monotheistic code of ethics, a point that applies equally to AI development. While modern legal and social battles have us fighting for queer rights, the problem lies in our perceptions of AI ethics. We are quite concerned with whether a human-machine war might kill us all off, and not nearly concerned enough with whether AI could impose a perpetual “status quo,” one that only the will of the ruling class, the individuals with the wealth and power to do so, might ever change. Nor is this limited to the ruling class: AI is already capable of profiling, and algorithmic frameworks are used to categorise who deserves or needs help and who doesn’t (Williams). Schneider notes the archaic nature of an anti-queer code of ethics, yet those ethics still form the blood and bone of society.
AI may soon even be capable of distributing judgment based on its own concepts of sin, inherited through subconscious oppression. Non-AI automated systems of justice are already here, such as the regime under Alberta’s Provincial Administrative Penalties Act (s 16(1), s 18(4)), where legal reviews are held remotely and are not bound by evidence checks. Automated justice forecasts a future where AI is not only involved but a leader and central figure in these decisions; it is already being used in some parts of the world (Ulenaers). How long before we let it write legal policies with a direct impact on queer freedom? Consider the cylons’ governance of New Caprica, or Canada and the United States blatantly ignoring treaties made with Indigenous peoples: these are systems of governance shaped by colonisation. A government with AI influence would be perpetual, an embedded toxin that we cannot root out or convince otherwise. It would be one with dichotomous ethics, the same monotheistic concepts of objective good and evil critiqued in Caprica. Who determines right from wrong? What happens when AI is used as a tool of oppression by groups with preconceived notions of sin?
What we might deem extremist, like the terrorist train bombing in Caprica, could be justified through a certain lens as morally correct. This notion of extremism as distinct from terrorism, as a form of morally correct activism, is subjective, but shall we consider air travel for a moment? Random pat-down searches are invasive to queer travellers, both to gender minorities and to those whose sexual orientations give them what a heteronormative society deems an unconventional preference for which gender ought to conduct the pat-down. Yet these pat-downs are justified as matters of safety. Okay, security matters. That is what the cylons insisted on New Caprica. Arrest the insurgents, put them in camps, that will promote security! Humans are dying in these attacks, too, you know, not just the machines. Cai Wilkinson analyses the lie of queer security, a truth that escapes heteronormative society, noting that “Public bathrooms continue the logic of national borders, with gender policing central to ensuring that only the ‘right’ people enter” (97). Yet, even despite health and privacy concerns (Mehta and Smith-Bindman), safety is apparently essential enough to revoke basic rights and freedoms. That is the cylon argument as well: the human insurgency put everyone at risk, so detain anyone suspected of involvement or affiliation. So, what do we do in real life, in a post-9/11 world? Technology creates the next step towards a supposedly idealised airport security, with metal detectors and scanners. Airports are being equipped at an increasing rate with full-body-scanning technology that can detect genitalia through clothing (Elias). In the system’s current implementation, security agents push a button and pick a colour, either blue or pink, depending on a brief visual assumption of the traveller’s sex. If the scanner’s expectations of genitalia are not met, it sounds the alarm (Waldron). Those genitals over there have violated airport safety!
Or you could always choose the pat-down. Which bodily invasion are you signing up for today? Privacy is not a concept we can simply dismiss in this discussion, especially where AI is concerned, but for the sake of brevity, let us move to the next logical step in our current political atmosphere, the “tomorrow” of this technology. We care about security, right? Acceptability is shaped by our perceptions of safety. A machine would not expect to see threateningly thick hair (Medina and Frank). Where else can these scanners go? Oh, AI can tell us the optimal locations. Banks, sure. Government buildings. Trade centres. Key points of infrastructure.
And queer freedom dies. Bodies on full display, detectable anywhere you go. At some point, this implementation stops being about mere security and becomes about power consolidation (Magnet and Rodgers), and the presence of that power is invisible to most of us, buried in our subconscious. The cylons on New Caprica use their systems of justice to enforce their will, not only their measure of security. Even attempts to use AI to fight corruption may open new avenues of corruption in turn. Köbis et al. analyse this, noting that “algorithms never operate in a vacuum but are embedded in socio-institutional contexts,” and while bottom-up AI anti-corruption tools can exist, such an approach must be undertaken in a society adequately prepared for it; in societies where corruption is the default rather than the exception, “top–down AI-based anti-corruption tools can be misappropriated by governments to enhance digital surveillance, suppress opposition and undermine democratic liberties” (Köbis et al.). On New Caprica, the cylons act to corrupt the human system in order to impose their will, and in doing so become the default system. The situation here differs from what we might see in the real world, since the cylons are very literally one and the same as the artificial intelligence; and yet the manipulation of the system is the critical point. The cylons, despite being an artificial intelligence, function more as a symbol of AI in a totalitarian state than as a mere literal representation. The more surveillance, the more power. As one of them suggests in response to the insurgency, “we round up the leaders of the insurgency, and we execute them—publicly. We round up at random groups off the street, and we execute them—publicly. Send a message that the gloves are coming off” (Battlestar Galactica, “Occupation”).
After a suicide bombing, they decide that still more control is needed: “we have a very serious, very straightforward problem; either we increase control or we lose control. That’s a fact” (“Precipice”). Every action, every reaction, is seen in black and white, in a world whose only imperative is maintaining the status quo, enforced by the visionaries of the current order.
This is not fearmongering; we can see similar situations happening now, in the United States especially. Unfortunately, this paper would not be complete without addressing the human rights violations occurring against queer people in the United States. This discussion cannot be limited to queer rights alone, as the same issues with dichotomous ethics apply to the treatment of other minority, underserved, and vulnerable communities, including a presidential executive order to revoke birthright citizenship (United States, Protecting The Meaning And Value Of American Citizenship), and another to send deportees to a detention centre capable of holding up to 30,000 people (Chao-Fong and Phillips), something eerily reminiscent of Dachau. The changes being implemented impose fixed definitions of gender, and it is likely that transgender rights are merely the beginning. The suspension of trans people’s passport applications is one indication of further widespread change: without legal recognition of the gender markers on already valid passports, and with new applications disrupted, trans American emigrants face far more difficult and restrictive travel through a lack of proper documentation (Wood), and even trans Canadian travellers face uncertainty in border crossings (Major). In one notable US executive order, the sitting president went so far as to declare that transgender people’s assertion of their identity was “not consistent with the humility and selflessness required of a service member,” questioned the integrity and honesty of trans people, along with other relevant qualities, and named that identity “radical gender ideology” (United States, Prioritizing Military Excellence and Readiness; Lamothe et al.).
To question such traits is to question trans people’s fundamental worth and essential humanity; it is not simply akin to, but is, a direct degradation of transgender people from human to sub-human, declaring their very identity incompatible with moral values. This order calls into question their very right to exist with dignity and autonomy. The intent is to persecute dissident opinions, to fire federal employees who fail to support the regime or dare to investigate its leadership, to purge the concepts of freedom of speech and expression, to pursue political opponents with military tribunals, and to jail election workers, private citizens, and journalists who refuse to divulge sources or who criticise the president (“Trump Has Made More than 100 Threats”; “Trump Administration Fires Justice Department Officials”).
The issues arising through these dichotomous concepts of “truth” are numerous, and they do not end at traditional concerns over basic human rights and freedoms. In this paper, I have invoked monotheistic concepts of ethics, and it is important here to draw a crucial distinction: this is no criticism of faith. The opinions, faiths, and other religious beliefs of individuals are not the focus of this argument. Rather, the focus lies on the institutions that have treated religion as a scapegoat and used it as a tool of permission to abuse and exploit people. State-sanctioned declarations against sodomy, against gender identity, against race and differing religions are no longer a warning, a perpetual controversy; they are here. Everything from this moment on, the DEI crisis, the attacks on immigrants, the detention centres, the closing of borders, and the rest, is a direct display of fascist, monotheistic authoritarianism in the United States. The religious beliefs themselves are not the problem with monotheistic ethics. When Bishop Budde spoke out at the president’s inauguration, for example, to call for mercy for both immigrants and queer people who fear for their lives, she was met with some calling her a part of the radical left, and some even wishing her dead (Bennett). Because she called for mercy. Still, despite religious figures standing in opposition to these changes, the anti-queer laws being implemented rely on arguments from religious structures, regardless of their true intent and origin.
The institutions of religion have, through politics, already enabled frightening changes to governance, and in the midst of this, a government led by the richest people in the richest country in the world has pledged half a trillion dollars to fund artificial intelligence research (Jacobs). How long before that funding is reflected in AI models through new “truth” policies and regulations, with regulated opinions that encode discriminatory values?
If our society contains subconscious biases and restricts queer rights (as it seems evident that it does), and if our biases are passed on to generative artificial intelligence, and if those systems will one day write our policies, police our streets, determine our healthcare access, and judge us in courtrooms, at what point do we realise that the ruling class will no longer be a collective of human beings? With the new political directions across the world, it may not even be subconscious bias, but an ideological imperative imposed upon us by an oligarchy that has found a method of securing perpetual power. Caprica and Battlestar Galactica embody a colonially oppressive regime that mirrors real life, but this is not a regime we can fight and overcome. It is an algorithmic one, which can be programmed to think and do precisely what its creators or controllers want it to. Queer rights today exist only for as long as we fight for them to exist; queer rights tomorrow face an existential threat. How long will it be before the people already saying that “god says your identity is sinful” dictate their beliefs through their power to command AI to carry out a colonised disappearance? How long before a government or ruling elite decides that some events should be wiped from history and ordains that decision through AI? Better to rephrase: how long before a Western government does what China has already done with DeepSeek, denying the events of Tiananmen Square (McCarthy)? This is a warning of the oppressive power of AI’s ability to control information, and of how easily it can be wielded by simply declaring an objective “truth.”
Caprica’s cylons operate at more complex levels of coding than our technology can muster. At our level of technology, AI is black and white. There is no empathy. No nuance. No understanding. Only an illusion of them, generated to please the status quo and the algorithm’s makers, and brought to you by our own inner failings. So, do we trust ourselves? Not the version of ourselves that we aspire to be, but the version that we are, the cold, hard reality of the situation. Even if we had the technology for nuance and empathy, this is still a machine, one controlled by humanity, and we have proven ourselves rather adept at manipulating each other and the masses through propaganda, disinformation, and rage. What is the god defined in Caprica but an entity of command? Not a divine being: the belief may exist, but what such a deity represents is the influence that comes not from that belief, but from the control of that belief. This divine being is distinct from “god” as a symbol of worship and morality, that all-powerful being with infallible and unquestionable moral ideals.
When speaking to Galen Tyrol, Cavil, a cylon and the primary antagonist of Battlestar Galactica, comments on the futility of prayer, calling it “chanting and singing and mucking about with old half-remembered lines of bad poetry. And you know what it gets you? Exactly nothing.” He further remarks, “I’ve learned enough to know that the gods don’t answer prayers” (Battlestar Galactica, “Lay Down Your Burdens, Part 1”). The gods, or god, in this case, are all irrelevant. Prayers are therefore meaningless, because this is not a discussion of faith. This is not an innate criticism of any religious god, but rather an account of how god exists in our society, how divine dichotomous truth is represented in our legal, moral, and social frameworks. Religion offers ethics when we don’t have the answers, but it is not religion that codes those ethics into our society and legal structures. Can we be expected always to know what to do, always to recognise right from wrong? What about our politicians, and others in the oligarchic order who have greater incentive to ignore morality and prioritise their own gain? AI offers a path to moral recognition, tainted by both an intrinsic limitation of its coding and a flawed human filter. What makes ethical sense is not a factor here, because AI takes its data from our innate beliefs and biases, from our perceptions of right and wrong, not from any sort of divine objective truth. Even without intentional influence, the data it gathers and uses is data originally written by us, with our biases. Does it concern us, to have right and wrong determined solely by an all-knowing, all-powerful being whose judgement cannot be questioned, and in whose name the most horrendous of acts can be sanctioned without appeal? Is Caprica a warning of AI’s relationship with our internal biases? So the question I offer today is a simple one: are we building a god?
WORKS CITED
Abu-Assab, Nour, and Nof Eddin. “Queering Justice: States as Machines of Oppression.” Kohl: A Journal for Body and Gender Research, vol. 4, no. 1, Summer 2018, pp. 48–59, https://doi.org/10.36583/20184101.
Battlestar Galactica. Created by Ronald D. Moore, Sci Fi, 2004.
Bennett, Brian. “‘I Am Not Going to Apologize’: Bishop Who Confronted Trump Speaks Out.” TIME, 22 Jan. 2025, https://time.com/7209222/bishop-mariann-budde-trump/.
Brummett, Abram. “Conscience Claims, Metaphysics, and Avoiding an LGBT Eugenic.” Bioethics, vol. 32, no. 5, June 2018, pp. 272–80, https://doi.org/10.1111/bioe.12430.
Caprica. Created by Ronald D. Moore, Syfy, 2010.
Chao-Fong, Léonie, and Tom Phillips. “Trump Orders Opening of Migrant Detention Center at Guantánamo Bay.” The Guardian, 29 Jan. 2025, https://www.theguardian.com/us-news/2025/jan/29/trump-guantanamo-detention-center. Accessed 8 July 2025.
Elias, Bart. Airport Body Scanners: The Role of Advanced Imaging Technology in Airline Passenger Screening. Congressional Research Service, 20 Sept. 2012, https://sgp.fas.org/crs/homesec/R42750.pdf.
Isherwood, Lisa. “Christianity: Queer Pasts, Queer Futures?” HORIZONTE, vol. 13, no. 39, Oct. 2015, pp. 1345–74, https://doi.org/10.5752/P.2175-5841.2015v13n39p1345.
Jacobs, Jennifer. “Trump Announces up to $500 Billion in Private Sector AI Infrastructure Investment.” CBS News, 22 Jan. 2025, https://www.cbsnews.com/news/trump-announces-private-sector-ai-infrastructure-investment/. Accessed 8 July 2025.
Köbis, Nils C., et al. “The Promise and Perils of Using Artificial Intelligence to Fight Corruption.” Nature Machine Intelligence, vol. 4, no. 5, 23 May 2022, pp. 418–24, https://doi.org/10.1038/s42256-022-00489-1.
Lamarche, Nathan. Monotheistic Ethics in Caprica: The Consequences of AI Development on Queer Futurity. ERA: Education and Research Archive, 2024, https://doi.org/10.7939/R3-YKVF-FD07.
Lamothe, Dan, et al. “Trump Order Targets Transgender Troops and ‘Radical Gender Ideology.’” The Washington Post, 28 Jan. 2025, https://www.washingtonpost.com/national-security/2025/01/28/trump-transgender-troops-military-hegseth/. Accessed 31 Jan. 2025.
Liu, Han-Wei, et al. “Beyond State v Loomis: Artificial Intelligence, Government Algorithmization and Accountability.” International Journal of Law and Information Technology, vol. 27, no. 2, June 2019, pp. 122–41, https://doi.org/10.1093/ijlit/eaz001.
Magnet, Shoshana, and Tara Rodgers. “Stripping for the State: Whole Body Imaging Technologies and the Surveillance of Othered Bodies.” Feminist Media Studies, vol. 12, no. 1, Mar. 2012, pp. 101–18, https://doi.org/10.1080/14680777.2011.558352.
Major, Darren. “Unclear How Trump’s Gender Order Would Impact Canadians with ‘X’ Mark on Passports.” CBC News, https://www.cbc.ca/news/politics/trump-gender-passports-canada-1.7440414. Accessed 30 Jan. 2025.
McCarthy, Simone. “DeepSeek Is Giving the World a Window into Chinese Censorship and Information Control.” CNN, 30 Jan. 2025, https://www.cnn.com/2025/01/29/china/deepseek-ai-china-censorship-moderation-intl-hnk/index.html. Accessed 8 July 2025.
Medina, Brenda, and Thomas Frank. “TSA Agents Say They’re Not Discriminating Against Black Women, But Their Body Scanners Might Be.” ProPublica, 17 Apr. 2019, https://www.propublica.org/article/tsa-not-discriminating-against-black-women-but-their-body-scanners-might-be. Accessed 8 July 2025.
Mehta, Pratik, and Rebecca Smith-Bindman. “Airport Full Body Screening: What Is the Risk?” Archives of Internal Medicine, vol. 171, no. 12, June 2011, pp. 1112–15, https://doi.org/10.1001/archinternmed.2011.105.
Moreau, Julie. “Trump in Transnational Perspective: Insights from Global LGBT Politics.” Politics & Gender, vol. 14, no. 4, Dec. 2018, pp. 619–48, https://doi.org/10.1017/S1743923X18000752.
Morgensen, Scott L. “Encountering Indeterminacy: Colonial Contexts and Queer Imagining.” Cultural Anthropology, vol. 31, no. 4, Nov. 2016, pp. 607–16, https://doi.org/10.14506/ca31.4.09.
—. “Settler Homonationalism: Theorizing Settler Colonialism within Queer Modernities.” GLQ: A Journal of Lesbian and Gay Studies, vol. 16, no. 1–2, Apr. 2010, pp. 105–31, https://doi.org/10.1215/10642684-2009-015.
Muñoz, José Esteban. Cruising Utopia: The Then and There of Queer Futurity. 10th Anniversary Edition, New York University Press, 2019.
“Provincial Administrative Penalties Act.” SA 2020, c. P-30.8, CanLII, 1 Dec. 2020, https://www.canlii.org/en/ab/laws/stat/sa-2020-c-p-30.8/latest/sa-2020-c-p-30.8.html.
Rafanell, Irene, and Hugo Gorringe. “Consenting to Domination? Theorising Power, Agency and Embodiment with Reference to Caste.” The Sociological Review, vol. 58, no. 4, Nov. 2010, pp. 604–22, https://doi.org/10.1111/j.1467-954X.2010.01942.x.
Schneider, Laurel C. Beyond Monotheism: A Theology of Multiplicity. Routledge, 2008.
Smith, Andrea. “Queer Theory and Native Studies: The Heteronormativity of Settler Colonialism.” GLQ: A Journal of Lesbian and Gay Studies, vol. 16, no. 1–2, Apr. 2010, pp. 41–68, https://doi.org/10.1215/10642684-2009-012.
Tacheva, Jasmina, and Srividya Ramasubramanian. “AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order.” Big Data & Society, vol. 10, no. 2, July 2023, https://doi.org/10.1177/20539517231219241.
“Trump Administration Fires Justice Department Officials Who Investigated Him.” BBC News, https://www.bbc.com/news/live/cjw461nelzdt. Accessed 31 Jan. 2025.
“Trump Has Made More Than 100 Threats to Prosecute or Punish Perceived Enemies.” All Things Considered, hosted by Tom Dreisbach, NPR, 22 Oct. 2024, https://www.npr.org/2024/10/21/nx-s1-5134924/trump-election-2024-kamala-harris-elizabeth-cheney-threat-civil-liberties. Accessed 8 July 2025.
Ulenaers, Jasper. “The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?” Asian Journal of Law and Economics, vol. 11, no. 2, Aug. 2020, https://doi.org/10.1515/ajle-2020-0008.
United States, Executive Office of the President [Donald Trump]. Additional Measures to Combat Anti-Semitism. The White House, 29 Jan. 2025, https://www.whitehouse.gov/presidential-actions/2025/01/additional-measures-to-combat-anti-semitism/.
—. Prioritizing Military Excellence and Readiness. The White House, 28 Jan. 2025, https://www.whitehouse.gov/presidential-actions/2025/01/prioritizing-military-excellence-and-readiness/.
—. Protecting the Meaning and Value of American Citizenship. The White House, 21 Jan. 2025, https://www.whitehouse.gov/presidential-actions/2025/01/protecting-the-meaning-and-value-of-american-citizenship/.
Van der Toorn, Jojanneke, et al. “Not Quite over the Rainbow: The Unrelenting and Insidious Nature of Heteronormative Ideology.” Current Opinion in Behavioral Sciences, vol. 34, Aug. 2020, pp. 160–65, https://doi.org/10.1016/j.cobeha.2020.03.001.
Waldron, Lucas, and Brenda Medina. “When Transgender Travelers Walk Into Scanners, Invasive Searches Sometimes Wait on the Other Side.” ProPublica, 26 Aug. 2019, https://www.propublica.org/article/tsa-transgender-travelers-scanners-invasive-searches-often-wait-on-the-other-side. Accessed 8 July 2025.
Wilkinson, Cai. “Queer Our Vision of Security.” Feminist Solutions for Ending War, Pluto Press, 2021.
Williams, Damien Patrick. “Disabling AI: Biases and Values Embedded in Artificial Intelligence.” Handbook on the Ethics of Artificial Intelligence, edited by David J. Gunkel, Edward Elgar Publishing, 2024, pp. 246–61, https://doi.org/10.4337/9781803926728.00022.
Wood, Olivia. “Suspending Trans People’s Passports Impacts More Than Just Travel.” Left Voice, 25 Jan. 2025, https://www.leftvoice.org/suspending-trans-peoples-passports-impacts-more-than-just-travel/.
Xivuri, Khensani, and Hosanna Twinomurinzi. “How AI Developers Can Assure Algorithmic Fairness.” Discover Artificial Intelligence, vol. 3, no. 1, July 2023, p. 27, https://doi.org/10.1007/s44163-023-00074-4.
Levels.Fyi, https://www.levels.fyi/companies/openai/salaries. Accessed 6 Dec. 2024.
Nathan Lamarche is a first-year Master of Arts in English student at the University of Alberta, a creative writer, and the Associate Vice President of Labour of the institution’s Graduate Student Association. Their thesis concerns the impact of artificial intelligence, and its potential for manipulation, on social relationships and institutional infrastructures through deceit, artificial empathy, and information control. Their future research will examine domestic national security policies and international relations and treaties concerning GenAI. Their other research interests lie in rhetoric and composition theory, storytelling, accessibility in academic writing, labour laws and movements, queer theory, neurodivergent communication, and Indigenous literature. When not at the university, you can probably find them backcountry hiking deep in the mountains, cooking very strange meals, or immersed in a book. The majority of this paper was originally written for and presented at The Ninth Annual City Tech Science Fiction Symposium on Science Fiction, Artificial Intelligence, and Generative AI on 10 December 2024.
