Let’s Talk About Midnight Mass

[THERE ARE SPOILERS BELOW! IF YOU CARE ABOUT THAT SORT OF THING, STOP NOW. YOU’VE BEEN WARNED!]

K and I recently finished Midnight Mass on Netflix. I enjoyed it–this time of year I’m always in the mood for some horror fiction and there’s a lot out there that just isn’t good (I also recently watched Gretel & Hansel, which was mildly interesting but really just doesn’t merit a post).

Much has already been said about the series’ approach to religion, but rather than respond to the thoughts of others (many of which I’ve found cogent and insightful even where I may not agree with them), I thought I’d write my own instead.

Communion and Vampirism

Let’s first address the elephant in the room, shall we? Midnight Mass is certainly not the only piece of fiction to have made an association between vampirism and Communion. The Vampire: The Masquerade roleplaying game played with this idea, and Biblical legend has perhaps always played a part in the various cultural ideas of vampirism–after all, if you have a Christian worldview and also believe in the existence of vampires (as was somewhat broadly the case almost to the 20th century and still has its holdouts), you have to figure out how the two ideas mesh. Various possibilities have been put forth in religious folklore–Cain, Lilith, etc.

The accusation that the “love feasts” of early Christians involved the literal eating of flesh was made by the Romans (probably either as cynical propaganda or a credulous misunderstanding of the new religion’s rites), but Christianity doesn’t stand alone in this regard–the “blood libel” against the Jews throughout the medieval period represents a much more serious and lasting accusation than that against Christians. If you’re unfamiliar, the “blood libel” is a long-running tradition of belief that Jews were actually eating Christian babies and children, or at least killing them and using their blood. It shouldn’t need to be said, but: this was an outright anti-Semitic lie perpetuated out of a cultural need for a culpable “other” and as justification for pogroms against Jews and their exile–persecution with financial motivations as much as socio-religious ones.

For purposes of this post, though, I’m less interested in historical beliefs and more interested in the seemingly natural association humans draw between Communion and vampirism. In other words: what does it mean to “eat the body of Christ” and “drink the blood of Christ?” This will not be a thorough discussion of the theologies of Communion, but rather some general thoughts on the matter.

The first question raised, of course, is whether the terms are intended to be literal or figurative. The Catholic doctrine of transubstantiation takes the meaning literally–and this, of course, is part of the reason that Midnight Mass works with Catholic liturgy and theology in a way that just wouldn’t track the same for a Protestant theology holding that the meaning of Communion is symbolic and commemorative.

The doctrine of transubstantiation is a difficult one at best. On the one hand, it means a direct confrontation with the belief that you are literally eating your Savior (and the necessary follow-up question of “why?”). On the other, it creates the additional problem of what happens to the body of Christ once you’ve ingested it, requiring a doctrine of “untransubstantiation,” because it would be improper to defecate your Lord and Savior. Yes, that’s funny, and I giggle, too, but it’s the sort of corner that theology can back itself into sometimes. I am less inclined to believe that this is a matter of the foolishness of early theologians and more inclined to believe that it is simply a matter of the limitations of the human mind as it struggles with divine mystery. There’s just really no way to definitively determine the question of transubstantiation, so doctrine on the subject must be based on other theological assumptions rather than logic applied to the question itself.

As a Methodist, I belong to a tradition that denies transubstantiation and views Communion as a sacrament, but one that serves as a reminder of grace and divine action rather than a regular miracle. Maybe that sits well with me because of my own skepticism (where, of course, skepticism is the exercise of intellectual analysis before coming to a conclusion rather than taking an answer entirely on faith–or, conversely, denying a possibility outright). This is because I think that the metaphor of Communion is two-fold: on the side of the supplicant, the metaphor is one of spiritual sustenance embodied in reference to literal sustenance. Jesus states in the Gospels that he is the source of the living water and that he is the bread of life, but we do not take these statements to mean, literally, that Jesus was made of water or of bread. Nevertheless, the meaning is clear–God is the sustainer of all things, whether that’s the coherence of reality itself or the strength of the individual soul.

The metaphor on Christ’s side–body and blood–serves as a metaphor for sacrifice. “No greater love has a man than this: that he lay down his life for his friends.”

Does that qualify as a mixed metaphor? Maybe, but I’d chalk it up to Chesterton’s argument that Christianity overcomes “problems” of contradiction by “combining furious opposites, keeping them both, and keeping them furious.” Hence, perhaps, the love of some Christian theologians for “both/and” as the answer to apparent contradictions.

If we view Communion in this light, the comparison to vampirism breaks down immediately. There is no predation or consumption on or of one party by the other, but two different ways of looking at the meaning of the same event, both of which are simultaneously true if not directly compatible. For me, personally, this is where I find the argument for a commemorative Communion more convincing than the argument for transubstantiation; not in the rejection of the possibility of miracle but in preference for the meaning that most fits with my understanding of Christianity as a whole.

None of this is to discount the possibility of a personal, existential and mystical encounter with God through the act and ritual of Communion, regardless of your theological view of the sacrament.

Critique of Religion

Much has been made of the character of Bev Keane as a vehicle for critique of religion in general and Christianity in particular. Rightly so: she is the main villain and a truly horrible person. But I’d argue (as others have done) that the critique demonstrated by her character is not a critique of religion itself, but of the use of religion–and equally applicable to the misuse of any philosophy or system of belief adhered to without any doubt or humility. That could just as easily be aggressive atheism, materialist science, the social-Darwinist tenets of neo-capitalism, political beliefs or, in a slightly less dangerous and much more amusing version, fandom.

It is not the substantive belief (i.e. Christianity) that makes Bev Keane evil. The story provides Christian characters antithetical to such a reading. Think in particular of Annie Flynn, who first offers a verbal rebuke (through the lens of Christianity) to Bev Keane and then lays down her life for the benefit of others in that ultimate expression of love meant to counterbalance the evil Keane has worked. If you want to argue that the fact that Annie doesn’t actually die undercuts her sacrifice, I have two responses. First, there are some consequences that are worse than death–especially to a Christian who believes in the promise of eternal life. Becoming whatever she became after she transitioned into undeath would not have been a welcome prospect. Second, that does not undo the terror that must be overcome to willingly slit one’s own throat and experience what followed.

Instead, there are two possibilities for explaining Bev Keane’s evil, and both are infuriating and ubiquitous in humanity. The first possibility is that her position and the use of her faith serve only to fulfill the petty desires of the small-minded: something to control, something to feel superior to, something to set you apart for special praise. The second is that she has allowed her convictions to stand in the way of her compassion. This is the behavior that causes Jesus to rebuke the Pharisees so many times in the Gospels, to call them “whitewashed sepulchers.”

I would argue that all genuine faith (regardless of creed or theology) must begin from a place of humility and an acceptance of love for others as the deciding factor in all moral questions. It is humility that keeps us from the surety and pride in our own ideas that allow us to justify hurting others in the interests of “purity of doctrine.” It is love that guides us not to hurt others for our own gain. That Jesus demonstrates these points time and time again is one of the most convincing aspects of Christianity to me, personally. At the same time, regardless of doctrine, I cannot conceive of a good God who would not appreciate a person who follows these practices, regardless of the specifics of their theology.

Erin Greene’s Speech

Here’s the problem that I have with the narrative and the arguments it makes: Erin Greene’s “I am that I am” death speech. Now, to be completely forthright and honest, I’m biased against the argument made by this speech in the first place, so take it as you will (which may be not at all). Here’s a transcript of the monologue so that it is fresh before you:

“Speaking for myself? Myself. My self. That’s the problem. That’s the whole problem with the whole thing. That word: self. That’s not the word, that’s not right, that isn’t — that isn’t. How did I forget that? When did I forget that? The body stops a cell at a time, but the brain keeps firing those neurons. Little lightning bolts, like fireworks inside, and I thought I’d despair or feel afraid, but I don’t feel any of that. None of it. Because I’m too busy. I’m too busy in this moment. Remembering. Of course. I remember that every atom in my body was forged in a star. This matter, this body, is mostly just empty space after all, and solid matter?

It’s just energy vibrating very slowly and there is no me. There never was. The electrons of my body mingle and dance with the electrons of the ground below me and the air I’m no longer breathing. And I remember there is no point where any of that ends and I begin. I remember I am energy. Not memory. Not self. My name, my personality, my choices, all came after me. I was before them and I will be after, and everything else is pictures picked up along the way. Fleeting little dreamlets printed on the tissue of my dying brain.

And I am the lightning that jumps between. I am the energy fighting the neurons. And I’m returning. Just by remembering, I’m returning home. It’s like a drop of water falling back into the ocean, of which it’s always been a part. All things, a part. All of us, a part. You, me, and my little girl, and my mother, and my father, everyone who’s ever been. Every plant, every animal, every atom, every star, every galaxy, all of it. More galaxies in the universe than grains of sand on the beach.

That’s what we’re talking about when we say God. The one. The cosmos, and its infinite dreams. We are the cosmos dreaming of itself. It’s simply a dream that I think is my life, every time. But I’ll forget this. I always do. I always forget my dreams. But now, in this split second, in the moment I remember, the instant I remember, I comprehend everything at once. There is no time. There is no death. Life is a dream. It’s a wish. Made again and again and again and again and again and again and on into eternity. And I am all of it. I am everything. I am all. I am that I am.”

The first thing I take issue with is that the speech exists at all. If you’re going to spend an entire series deconstructing religion and the problems that arise within it, I find it disingenuous to substitute your own argument for cosmological truth in the final act–it just makes everything that came before a strawman for knocking down, a rhetorical sleight-of-hand to lend strength to a belief about fundamental reality just as unprovable as the ones you’ve spent the rest of the story questioning. Given that the rest of the narrative raises questions about how we judge the leaps of faith we willingly make–or are called to make by others–trying to answer the question only cheapens it. The more honest approach is to leave the question open: we don’t know for sure what ultimate reality is or what happens when we die, no matter how deeply we believe in the answer provided by one faith or another, so let’s start from a place of compassion towards others and humility in our understanding of self.

For this same reason, this speech is entirely unnecessary and overreaches. The only satisfying answer to the questions raised by the story lies within our lived lives, not our expectations of the afterlife. How our faith causes us to treat people in the here and now is the primary focus of any theological argument made by the show, so why suddenly go beyond that?

[Aside: I’d also note that this is the same focus that Jesus takes in the Gospels–he spends much less time (but not none) discussing the nature of the afterlife or resurrection, because (I think) however God has (or has not) structured any life to come, anything more than the hope of it is a distraction from the lives we lead now. Jesus has much more to say on how we ought to conduct ourselves in our present lives; I’d argue the central theme of his teachings is a revelation of how creation operates (or should operate) so that we can use that knowledge now.]

Here’s where, if the approach taken by Erin’s speech appeals to you, you may really want to leave off. I think it’s only fair to deconstruct that argument about the nature of reality in the same way the show does for other religious ideas. Here we go.

The speech begins with a denial that the self exists, but continues to speak in the first person. This is a problem that I have with any theological argument that asserts that denial of the self and re-assimilation to an undifferentiated whole is the purpose or end of existence. First, because this is, effectively, death. If you do not believe in an afterlife, that’s fine; this concept will work for you. But it is incompatible with the idea that we continue to exist after the assimilating event–if you continue to exist, you are, by necessity, a self.

More important, if you are arguing that the self is only an illusion (as does Greene in her monologue, as do some forms of Buddhism), who is making the argument? You have no internal consistency when you argue that there is no such thing as a self and then make a bunch of statements as assertions made by yourself. This is the same problem with the materialist arguments that “there is no self, there is only the illusion of self because consciousness is a non-functional byproduct of firing neurons” (something that Greene alludes to herself) or that we lack free will because “we’re just bags of chemicals.”

Erin’s cosmology leads to nothing morally superior to Christianity or any other philosophy or theology–it is not exempt from being misused. If I am everything and everything is me, I can justify doing whatever I want for my own power, because it’s all me anyway. If my actions only hurt myself, there is no one but me who can truly complain about anything I do, even if it seems to hurt part of me–I have the right to hurt myself as an autonomous being. Bev Keane could find ways to work with this kind of solipsism with no more difficulty than she has in justifying herself through Christianity.

I’m going to sidestep the hubris of deciding that one is God, not to mention the absurdity of denying the existence of the self and then claiming such an expansive definition of self.

That said, I do believe that this philosophy is particularly apt for a horror story…if the point of the philosophy is existential terror. Really think about what Erin is arguing about her existence–she continually “forgets” and believes that she’s a self, has experiences, comes to find out she’s not a self and it has only been a “dream,” then forgets that dream and goes through the process ad infinitum. This is a cycle of believing that there is meaning in existence and then finding that there is none. It is a masturbatory universe playing with itself because there’s nothing else to do. Without variety, without self, without memory, without relationship, where can meaning be found?

Between Riley Flynn and Erin, what I really see motivating their beliefs is a desire for oblivion, a desire for the end of suffering. That’s understandable from a certain perspective; given enough suffering, the will to continue to exist in the face of pain and despair will eventually abate. I’d like to say I think of the Book of Job when I think of this, but really I think of the narrator in Fight Club: “On a long enough time line, the survival rate for everyone drops to zero.” This is a desire for escape, a desire simply to stop suffering. Given Riley and Erin’s experiences in life, I see why such a belief would be appealing. And maybe that’s all we get at the end of life, a ceasing to exist that alleviates all pain–but that also denies any of the joys of existence. I have only my faith to say otherwise.

But that is, in fact, part of why I have faith. I want to believe that there is an ultimate meaning to existence, that we exist in the creation of an omnipotent and beneficent God who wants the highest joy for each of us when all is said and done in this world. No joy that ends can be the highest joy, so it stands to reason that eternal life is necessary (though not sufficient) to the abundant life Jesus promises us. Instead of having a hope to one day escape the bad, I would rather have something more–a hope for being complete in the good.

That faith and hope make me a better person. Yes, they help me to suffer more patiently. Yes, they help me to be generally happier. But they also help me to strive to create meaning, in both life and art. They help me to love others and to push for that abundant life here and now (what, after all, is eternity but an unending “now?”). They help me to do good. This kind of faith isn’t a crutch; it’s a ladder.

It’s possible that Erin’s explanation of reality is the correct one; I lack the knowledge and experience to say anything conclusive on the matter. But I also see no reason, theological or practical, to live one’s life with such a belief. I, for one, will continue to set my faith on something higher.

Conclusions

If you watched this show and felt that it singled out Christianity for special treatment (I think there’s an argument that it went softer on Islam, but it’s also true that that may only be a matter of space in the story and the fact that it is Monsignor Pruitt and his church that is the focus), I’d ask you to consider why that might be. There is, as I’ve mentioned above, the strange relationship between Communion and vampirism. But I’d argue that that’s not it. Instead, I’d argue that this is a matter of the times in which we find ourselves and of the nature of American Christianity (painted unfairly with the broadest possible brush, of course).

In the past few years, we’ve had conservative Christians call Obama the antichrist, act as if Trump were the Second Coming (a thought so antithetical to me that I have a physical reaction upon writing it), call the Covid-19 vaccine a sign of the End Times, use their faith as an excuse for not showing compassion to their fellow man (again with the vaccine, and I’ve written previously about the use of faith as an excuse given by child placing agencies to discriminate within the Texas foster and adoptive care systems), support fascist undercurrents and spread lies about our government, make arguments against equality, and so on and so on. The litany of offenses would be a long one indeed, and this is nothing new.

Given these stances and their effect on believers and non-believers alike, they should be subject to scrutiny and criticism. It should be a matter for every honest believer, regardless of their specific beliefs, to introspectively question the rightness of their theological positions out of a desire to truly live faithfully–entrenched tradition and interpretations of doctrine originating in very different historical contexts should be especially subject to this process. Not because we have changed for the better, necessarily, but because the interpretations that arose in one context may be influenced by that context just as ours affects our interpretation. The argument that progressive Christians are trying to “change the Bible” because of changes in culture willfully ignores the fact that all interpretation is subject to human limitation and the influence of culture on the mind. By having a greater diversity of interpretations, we may be able to make comparisons and weigh arguments to find something closer to the truth.

Those who’ve read my blog for a while know that one of the primary focuses in my religious writing is to argue against the fundamentalist and conservative interpretations of Christianity that I believe grossly miss the meaning of the faith–and create barriers to others in considering what true Christianity is about by creating an image of the faith that is repulsive to those who feel that compassion and love, not fear and hatred, is the message of a good God, regardless of the specific faith. In that sense, Midnight Mass makes a strong and valuable point–we have a moral obligation to consider whether our religious beliefs lead to good things or bad, lead us to make the world better or to make it worse. When it’s the latter, is it really fair to resort to divine mandate theory–that because God said it, it’s true and moral? Or should we believe in a God that does not ask us to hurt others for vainglory?

UbiWorld (a “kind-of” Far Cry 6 Review)

In the midst of some (sporadic) writing, running a Brancalonia/D&D game, and preparing to open back up for another foster placement, I’ve been playing Far Cry 6. I have completed the main story and done most, but not all, of the side missions.

I’m a fan of the series, having played them since 2. But it’s a guilty pleasure, really–I don’t see the settings or stories of the games as particularly enthralling (despite Giancarlo Esposito playing his signature bad-guy role in 6, I think the story of 5 was more compelling–probably because it played upon personal interests (the morbid fascination with cults) and fears (the increasingly dangerous idea of what constitutes “patriotism” in the U.S.)). For Far Cry 6, I’ve mostly been enjoying the mindless fun of the gameplay, the beauty of the environments, and the exploration element.

As I’ve done so, a realization has started to sink in–Ubisoft’s really only been making one game for a while. Far Cry 6 is most similar (I’d argue) to Ghost Recon: Wildlands (which I loved), but the latest Ghost Recon entries, Far Cry games, and Assassin’s Creed games are basically the same thing with some minor gameplay differences and some reskins for setting.

I understand that that’s a good business move–all of these franchises perform well financially, consumers pretty much know exactly what they’re going to get with a new version in any of those franchises, and going back to the same well of systems and mechanics certainly lowers production costs (or at least so I’d assume).

Being a person who loves RPGs (of which there is some element in these games), tactical shooters (in the non-Assassin’s Creed lines), and game-world exploration (at the core of all of them), I do look forward to new entries in each genre. But I think that the narrative efforts in each new game come out much like any copy of a copy of a copy: always a little less clear, always a little less useful, always just “less” than the one before. Ghost Recon: Breakpoint, while a really interesting idea for a setting, was simply less compelling than Wildlands, and Far Cry 6’s narrative certainly pulls less emotional weight than 5’s.

Something else both Breakpoint and Far Cry 6 have in common is their use of famous actors for the main villains (Jon Bernthal and the aforementioned Giancarlo Esposito, respectively, both actors I really like). The problem has nothing to do with the actors themselves–it’s that the use of the actors seems to have been an excuse for not creating more interesting and vibrant villains in the first place.

This has me on two tangent thoughts. First, what would an Ubisoft game that drew on the best elements of each of these related games look like? From Ghost Recon, I’d take the realistic weapons (in designs and performance) and the plausible tech (drones, NVGs, thermals). From Far Cry, capturing bases and strategic points, side missions about fleshing out characters and narrative rather than mechanics, treasure hunts, and takedown systems (for both people and vehicles). I think I’d rather keep skill-based character development over a gear-based system like Far Cry 6’s. If I remember correctly, Far Cry 2 had weapon jams–I’d bring those back. Suppressor overheating is a cool idea for a game, but the way it’s treated by Far Cry 6 is really only as realistic as the “Hollywood quiet” suppressors in just about any video game.

On this note, there was some very interesting commentary (way back) on video game weapon design from one of the developers of Rainbow 6: Vegas (also an Ubisoft game). The designer giving the commentary explained that they first developed the weapons to be as realistic as possible, but then modified them from that starting place to conform more with popular conceptions of weapons–the knockdown of a shotgun blast, the quiet of a suppressor, etc.

But the second, more important thought is about what the next evolution of these types of games should be. The gameplay is fun; I’m partial to shooters and to open worlds. While there could be some additional improvements to gameplay (as described above), the place where we need some real improvement–for these games to feel like they’re not just reskinned rehashes of the same old, same old–is the narrative.

Here, I have two subpoints. The first is that we need more interesting narratives. Far Cry 6, like the other games, has its moments of emotional pull. It is a revolution, after all, and the true cost of a revolution, so far as I can tell (never having been part of one), is in the lives it takes or otherwise changes irrevocably. We need more personal stories. I’ve grown bored with the weird and quirky, but ultimately shallow, characters. Mr. Esposito does a fine job with his role until the very end, but the writers could have given him so much more to work with. And, while some may care for the crazy companions in Far Cry 6, I do not. As is my wont in just about all of my fiction, I want more nuance, more complexity. And along with that complexity, I want some agency.

What the UbiWorld games really need is to be removed from a “playground” experience where you merely ride the rides and placed into a participatory narrative. You should have to make choices that have tough consequences, should have multiple opportunities to change the story in a major way (what if Dani joined with Castillo?), and the way that missions are approached should have a consequence as well. Getting extra resources for taking over a base without setting off an alarm just doesn’t cut it anymore.

While we’re at it, let’s throw in some random events in each playthrough and some systems that combine to make for emergent gameplay. I am convinced that a great part of the success of Sea of Thieves is the emergent nature of its gameplay. The friends with whom I play that game don’t talk about the Tall Tale missions; they talk about that time when something incredible and unexpected happened through a combination of interactions with other human beings and the (random) procedural generation of the game.

I’m not saying that UbiWorld games should be massively multiplayer (though it’s a thought worth experimenting with, I suppose), but the ability of a game to generate unique (or at least particularized) experiences for different players should become a regular aspect of electronic games.

My overall experience with Far Cry 6 is that, if you like Far Cry games specifically, or UbiWorld games in general, you’re probably going to enjoy the time you spend with it. But it left me with a desire for something more: a true evolution of this style of game, one that builds upon a strong foundation and makes it into something truly amazing.

(Review) Cyberpunk 2077: This Isn’t the Future I Ordered

[I started to write this review back in mid-January, but I got distracted by life events and other writing projects and have only now come back around to finishing it.]

[WARNING: SPOILERS INCLUDED IN THIS ARTICLE.]

I waited a few weeks before I picked up my copy of Cyberpunk 2077. My brother had been playing since release day on a stock Xbox One and swore up and down he wasn’t having massive crashes or game-breaking bugs. So, about the start of the new year, I plunked my creds down and unlocked that deluge of bytes and bits that, a short time later, coalesced into the game on my Xbox One X. It seems only fitting to get the game through such a method, though I didn’t manage to find a way that I could download the game straight into my brain. Some of the things I was promised by the Cyberpunk of my youth are yet to come to fruition.

I played through most of the available content, having fewer than half-a-dozen side missions left and about as many of the available NCPD gigs. In that time, the game hard crashed fewer than ten times and, between the system’s assertive autosaving and my own constant backups, I never lost more than five minutes of playtime when a crash happened. I lost much larger chunks of play when AC: Valhalla crashed on me, which happened with less frequency than Cyberpunk crashes, but not by much.

I only noticed one other major glitch while playing: once I equipped the Mantis Blades, they would never retract, even when I switched weapons, and continued to take up a good half of the screen. The issue resolved when I switched back to the monowire cyberweapon instead, and I didn’t try the Mantis Blades again during my playthrough. There were a few minor visual bugs or errors–such as being unable to pick up certain (very low value) items that had been marked as pick-upable. Overall, the game played smoothly, was pretty to look at on a few-years-old Samsung HD flat screen, and didn’t suffer from the litany of problems I’d been led to expect. The game actually convinced me that upgrading to the Xbox Series X might not be as imminent a necessity as I’d previously thought. Your mileage may vary.

A subsequent second full playthrough (and about half of a third) had me see most of the rest of the game’s side missions, with fewer crashes or issues each time–thanks to consistent updates by CD Projekt Red.

Let’s Talk About Sex

Let’s talk about the ugly first and get it out of the way: Cyberpunk 2077 decided to resort to gimmick and shock value in its treatment of sexual issues. The range of gender presentations that had been promised in the character builder was lacking at best. Instead, you can pick your penis size, or have a vagina. None of the choices matters, and there’s really no purpose to them. I don’t mind sex and romance relationships being part of the story lines of video games–I’m a generally hard person to offend, so those things merely being there doesn’t incite me to anger. That said, I’m not sure that I’ve ever come across a romance system (or dialogue) or a “sex scene” in a video game that didn’t make me feel awkward and uncomfortable. You can find elsewhere a deep discussion of some of the sexualized gimmicks and mistakes made by the game designers. For my part, what I really want to comment on is the missed opportunity here, with the clumsiness of the shock-value choices made by the developers underscoring the lack of thought given to their approach. I’m not interested in the debate over whether sexual topics should have been omitted from the game altogether; with regard to such issues, my first question is always “what does the inclusion accomplish for the story?” While the answer in Western media is often that it’s included only to pique the prurient interests of the audience, I also stand amazed, like G.R.R. Martin and others, that American society in general is simultaneously so uncomfortable with sexual issues and so comfortable with the graphic depiction of violence.

Cyberpunk, as a genre, provides us with warnings not just about technology used without regard for ethical considerations, but also about the commodification of everything human by ultra-capitalist systems. While the former is certainly an increasing worry for modern society, the latter is the far more pressing issue in my mind. After making it through the widespread disaster that was Texas’s (lack of) preparedness for winter storms last week–which to my mind clearly demonstrates the problem with allowing profit-driven private interests to trump public welfare (as does the system of pharmaceutical development in the U.S. and its effects on the current pandemic)–the increasing dangers of a society caught in a death-spiral propelled by the veneration of capitalism above all other ideologies feel close to home. So, when Cyberpunk resorts to using sex and nudity only as window dressing, instead of commenting on the increasing commodification of sex and human desire, I honestly feel a little cheated of what could have been a meaningful narrative, one that could pull Cyberpunk 2077 from entertaining game into the realm of participatory literature. Even the plotline with Evelyn and her fate does little more than provide plot points without much consideration of what it means to be a “doll” sacrificing personal identity to satisfy the needs of others (sexual or not), and the plots that revolve around Clouds likewise use serious sexual issues as a backdrop without making full use of their narrative potential.

You Get What You Give

I read another review of Cyberpunk 2077 that criticized the lack of defined personality for V, complaining that The Witcher had you play a character with a defined personality for whom you still had meaningful choices to make and further lamenting that V’s personality can swing psychopathically based on the whims of the player. I’d like to respond to that evaluation and, since it’s my blog, I will. My kneejerk reaction to this sentiment is that the critic needs to play more roleplaying games (pen and paper, preferably) to appreciate a video game in which you have the opportunity to create a personality for your character without having that personality defined for you. I, for one, would rather play a protagonist I get to design for myself rather than play someone else’s character in a story. If the character comes across as inconsistent, that’s on the player more than the designers, because you have opportunities in Cyberpunk to make consistent character choices. If, on the other hand, you approach every dialogue option from the perspective of yourself staring at a screen where you have an avatar to wander around in making choices according to your every whim, of course you’re going to end up with an inconsistent character. Feature, not bug, in my book.

That said, not all of the character choices have enough effect in the game to be meaningful. Some aspects work well without changing the storyline much or at all–the developing relationship between V and Johnny can be cathartic, dramatic and satisfying on its own (though this is undercut somewhat by having a “secret” end-mission option based on your relationship score with Johnny, causing a split between immersively playing a character and meta-gaming the program). Otherwise, though, many of the choices are too limited in effect to truly be felt. Yes, some choices will open up romantic relationships, some will allow for different end-game missions and resolutions to the main plot, and a very select few will have a later result (freeing Brick in the initial confrontation with Maelstrom may have a later effect if you play through all of Johnny’s missions), but many follow the pattern of “let them say whatever they want so long as they do what you want them to.”

Maybe I’m chained to my existentialist leanings, but it seems that there’s a lot of Cyberpunk’s story and main character that only bears the meaning you personally create for it. Just like the ambiguity of life in general, that could be immensely freeing and satisfying or terrifying and ennui-inducing. Or both at once.

Gameplay

I played my first playthrough on the “normal” difficulty setting, increasing the difficulty for each subsequent playthrough after I’d grokked the game’s systems and idiosyncrasies. My first character ended up as a sort of generalist, my second a street samurai foregoing any hacking for a Sandevistan and later a Berserk module, my third going full Netrunner.

The game is devastatingly easy, even on the highest difficulty setting, for netrunner characters. One reviewer compared netrunners to wizards in fantasy settings, with programs approximating spells. I think that’s relatively true, especially because the programs work in ways that are especially “gamey” and unrealistic. If you’re going to implant yourself with cyberware, you’re not going to allow that cyberware to be wirelessly-enabled for any punk with a computer to hack into, and you’re probably going to invest in a decent firewall as well. Systems aren’t going to be designed with such blatant faults in them that you can electrocute or overheat the user. So yes, the hacking in Cyberpunk is essentially magic.

The pure combat approach, even with a good deal of stealth, is much more difficult, especially on higher difficulties. Without the ability to hack cameras, you have to be especially careful. Attacks must be carefully planned so that you’re not overwhelmed. I kind of think that this was the most enjoyable approach to the game, though, both for the pleasure of the gameplay itself and the satisfaction of achievement. There’s something thrilling about beating a machinegun-wielding punk to the punch while swinging a katana, and the gunplay in Cyberpunk 2077 is pretty good, too–and I love a good tactical shooter.

Another exploit to use or avoid is finding the Armadillo mod blueprint. I don’t think that there’s any Technical skill requirement on being able to craft the Armadillo mod at any rarity level–the rarity level of each one you make is just randomized–and few materials are required to make them. If you keep to clothes with multiple mod slots and fill them all with level-appropriate Armadillo mods, you can maintain an Armor rating sufficient at any given level to feel nearly invulnerable.

The game lacks some of the exploration elements you might expect in an open-world RPG; you’re not going to find as many of the sorts of locations that tell their own little stories like you would in Fallout or Elder Scrolls. But the side jobs are interesting–some of them more interesting than the main story, I think–and searching them out, as well as the NCPD hustles, fills some of the gap.

Substance and Style

The feel of Cyberpunk 2077 is the feel of 80’s sci-fi in its setting, tone and dressing. On the one hand, that’s fitting; cyberpunk was born in the 80’s. But it’s also been more than 30 years since the end of that decade. Technology and culture have changed. Our cultural fears and suppositions have evolved. World events have shown us that, while the danger of megacorporations is real, it might not be so melodramatic as we expected. We’ve had Brexit, the War on Terror, the War on Drugs, the realization (by us–often willingly–ignorant white folk) that racial injustice has never been overcome, the resurgence of far-right terror groups and white nationalists, and the shift of widespread economic fears from Japan to China. But we’ve also seen some things change for the better–green energy is innovating and being taken seriously, a majority of the world (if a slight one) believes in the reality of climate change and the moral obligation to do something about it, and technology has provided for democratization and methods of social resistance as much as domination.

Bear in mind that the original Cyberpunk RPG setting took place in 2013; the most popular version of the game was set in 2020. Moving the timeline forward (either all the way to 2077 as in the video game or to the 2050’s as in the tabletop Cyberpunk Red) raises the question–why hasn’t anything really changed? Yes, Mike Pondsmith and the other members of the creative teams of both projects did hard work in balancing a setting that feels at once like nostalgic Cyberpunk and just a bit different. That’s a difficult line to walk, so I’ll admit that my comments here should really be applied to the cyberpunk genre in general and not to the Cyberpunk setting specifically, in any of its guises.

But I’m ready for cyberpunk as a whole to grow up, to evolve with us. It’s insufficient to continue to dwell on the cyberpunk of the early years–though we must acknowledge a debt to Pondsmith, Gibson, Stephenson and the early fathers of the genre. Where’s a cyberpunk for my middle years, one that includes all the myriad shades of gray endemic to any genre born from noir, but that also includes some dashes of color here and there, that gives us a gritty optimism, reasons to fight the evil in the world to preserve the good, reasons to do more than only survive?

Maybe I need to read more cli-fi and other developments out of the cyberpunk genre. My own fiction writing, while fantasy in genre, takes a number of cues from cyberpunk–but that’s not quite what I’m talking about either. Where’s the wise old cyberpunk that’s introspective in new ways? I’m seriously asking–if you’ve found it before me, drop me a line!

So, while enjoying the neon retro-future that Cyberpunk 2077 offers, I’m also left wanting something more.

Conclusions

I enjoyed Cyberpunk 2077 well enough to return to play it with different character builds, and it’s definitely reminded me of my nostalgia for the cyberpunk genre. I think that it’s the gameplay, though, that did it for me more than anything. The narratives have their clever points, drama and empathy-invoking aspects, but if you’re looking for storytelling on quite the same level as The Witcher, you’re not going to find it here. Maybe it was just too much hype for its own good. Maybe too many promises that didn’t make it into the release build. Or maybe it promised us a world we’ve already left behind in our hearts and minds.

Review: The Queen’s Gambit

Note: This review is only about the TV Series. I haven’t read the book and currently don’t intend to.

I liked this TV series. I’m a little upset that I did.

Don’t get me wrong, there’s a lot to like in the series. Anya Taylor-Joy plays the role of Elizabeth Harmon beautifully, with a subtlety of expression and nuance of character far more mature than those of many older actors. The cinematography, likewise, is intoxicating, well shot, full of dream-like color. The music suits the period and theme while providing a nostalgia for those who lived through the 60’s or, like K, who were raised on the songs of the era.

More than anything, the series builds an ethereal, mystical view of chess, depicting the tension in every move, the complexity of possibilities, the focus and forethought of the players as well as their emotional investment in one seamless package that would entice anyone to take up the game. I think that it’s this mystique that made the show so enjoyable for me.

But, at the same time, I found the storytelling to be disappointing. The show plods along from plot point to plot point in formulaic structure. Following genre and convention in the structuring of a story isn’t a bad thing–formal structures in writing have been adopted because they work, and in the commercial setting of TV shows and filmmaking, not following recognizable structure may be fatal to ever getting a first read of your work by someone with the authority to make a script a full production.

The Queen’s Gambit follows structure dutifully, though–dispassionately, focused more on going through the proper motions than on making them mean something. It is the difference between the dancer who is technically proficient and the one whose motions tell you a story that stirs the soul. If we’re going to be specific, the problem is that Elizabeth Harmon’s lows are never low enough. Without giving too much away, she suffers some significant obstacles in her path–some of them truly tragic–and yet we’re never given enough time with any of them to let them sink in, nor are we ever shown them affecting Beth in a deep (or even realistic) way.

Beth’s most significant flaws magically heal themselves in time for the climax. The people she’s spent time using and then pushing away all return to loyally serve her in her time of need, with no real explanation for the change of heart. What should have been a central struggle for the character–her addiction to barbiturates and alcohol–is simply set aside when the time is right. Only Taylor-Joy’s face gives us any indication of a struggle over giving up the addiction–the script gives us about five seconds of film to turn around a character problem developed over episodes of the series. We’re given multiple instances of Beth indulging in her addiction, but only the flipping of a switch in being rid of it.

That’s why I feel bad about enjoying the series. The writing was passable for the most part, but sorely lacking in some of the most important aspects of story. When the climax is a foregone conclusion, you lose the drama, the catharsis, that causes us to immerse ourselves in story in the first place.

What we are left with is not a period piece or a character study, not a bildungsroman or hero’s journey, but a story about chess. The characters are merely present to show us the details–social, technical, emotional–of the game. They become pawns themselves in the writer’s moves, shadowing a game someone else played to perfection a long time ago. Pieces moving across a ceiling with dreamlike precision.

Assassin’s Creed Valhalla – a Strange Nostalgia

I haven’t quite finished the game yet, but I’m far enough in I think I can give a good review. Here it is.

First, the ugly. Feel free to skip these minor rants if you’d like.

I have a love/hate relationship with the Assassin’s Creed games. I love the historical aspects of them: running around in reconstructions of places I’ve studied but can never truly visit, hearing at least a passable effort at ancient spoken languages (the Old English of Valhalla being the one I’m most familiar with, as it happens), and living an adventure–if overblown and grandiose–in another time. But I hate the framing device in which all of the Assassin’s Creed story takes place. If there weren’t so many people out there trying to peddle some version of historical belief in ancient aliens (an idea I find to be demeaning to historical peoples and often invoked as a matter of racism), I might not mind it in my fantasy games. But there are, and I do.

I’m also not a huge fan of the use of Templars and Assassins as factions for what is (at least in part) supposed to be a “good versus evil through history” struggle. Both factions are too nuanced and problematic for such use, and employing them in such a way, I think, plays too much into the conspiracy theories about them. From the narrative perspective, it’s sloppy writing to resort to them. From the historical perspective, it’s dangerous pseudo-revisionism thinly guised by fantasy. At best, their use makes unintended assertions about history that, while placed in a fictional environment that logically has no bearing on actual history, blends enough of the semblance of history into the setting to make that easy to forget. This is only partially side-stepped by the fact that the factions we’re dealing with in this game are the “Hidden Ones” and the “Order of Ancients,” the precursors, respectively, of the Assassins and Templars.

So, I try to skip through those parts of AC games (though not all the Order hunting–I’m not a philistine) and focus on the “historical” portion of the games. Thankfully, the historical portions are by far the greater part, and I’ve only really had one cut-scene of the present-day “Animus” framing device in many hours of play.

Gripe #2: Assassin’s Creed Valhalla has no singlehanded swords for player use. Given that the early medieval sword (those that fall under Petersen’s typology rather than Oakeshott’s) is an iconic image of the Viking, it is nothing short of a travesty that they are missing from the game. This is exacerbated by several factors: (1) many enemies use a single-handed sword, so the assets and animations are at least partially present, and the “why can’t I just pick one up” question looms large; (2) you are given several ahistorical two-handed swords to use; (3) it’s just such an obvious oversight.

A further comment about the two-handed swords (with the caveat that I’ve mostly been using one in the game): my supposition is that the choice not to include dedicated one-handed swords arose out of a perk that allows you to use large weapons in a single hand (thus pressing the two-handed sword into service as a one-handed sword). Yes, it’s a video game, but that choice strikes me as dumb anyway. From a mechanical standpoint, it reduces the value of the choice of weapons, sacrificing realism for “cool” value a bit over the line for my taste (which I admit is a personal matter). From a historical perspective, it pushes the problem of the lack of historicity even further.

You see, there really weren’t two-handed swords in the 9th century (when the game takes place). There are several reasons: first, the metallurgy of the time was not a precise science by any means, and making a durable blade of two-hander length wasn’t likely enough to succeed to be worth it. Viking blades, like katana, were created through the “pattern-welding” process of steel-making, which relies in turn on “forge welding.” In forge welding, several strips of metal are heated until they begin to fuse and then wrapped and twisted together into a cohesive whole, where the flaws of any one original piece of metal are hedged by the presence of the other pieces. Because of the differing carbon content in the finished piece, a blade could be acid-etched to reveal the patterns in the twisted metal. The result is what the Vikings purportedly called “the serpent in the steel” and is often mistaken for Damascus steel.

There are a handful of photos sometimes claimed to be of archeological finds of two-handed swords, but these photos make their argument based on the length of the grip. That itself is problematic for two reasons: (a) these photos are not of complete weapons in usable condition, and it’s difficult (perhaps impossible) to know how much of what is being touted as space for a second hand is actually the blade’s tang, which would have extended into the pommel rather than serving as grip; (b) without full provenance and scholarly descriptions of these blades, the photos aren’t really that helpful anyway. The second and third reasons two-handers weren’t common are related to the style and nature of early medieval warfare.

Valhalla never demonstrates this (missing some interesting mechanics, I think), but battles in the 9th century (and surrounding centuries) were largely fought based on the shield wall (as they had been since ancient times, with the Romans and Greeks before). For the shield wall to work, your shield is responsible for protecting part of your body, but also part of the body of the man standing beside you. That means that everyone in the rank needs to carry a shield. That leaves no place for two-handed swords.

There are anecdotes about brave warriors moving in front of their shield wall, exposing themselves and demonstrating that bravery, while throwing spears, collecting the gear of a fallen enemy, or undertaking other exploits, but it is the fact that this is extraordinary behavior, not common behavior, that makes these descriptions part of sagas (with parallels in Celtic literature and, probably, other cultures’ tales of the same period).

The two-handed sword largely (but not solely) developed in the high and late middle ages for a single reason–plate armor. The reliability of plate armor meant that a shield became unnecessary as a weapon of war, and that new weapons were needed to confront the threat. The acute-pointed, two-handed blades of the late 14th and the 15th century were a response to changes in armor, allowing a weapon that could be “half-sworded” to find the chinks in an opponent’s plate at close range and that could be wielded with greater speed, power and precision generally.

There is debate (and perhaps some consensus that the answer is “no”) as to whether a single-handed sword can break through the riveted maille used by Vikings and Anglo-Saxons. Even if it can’t, though, the force exerted by a blade hitting maille can break bone and cause significant internal injury (of course, a padded gambeson was worn under the maille to help resist this). Regardless, the single-handed sword (as well as spears and axes) was largely seen as sufficient to address this problem (or the metallurgy issue trumped all in preventing two-handed swords).

Okay, enough of that.

My third issue really has nothing to do with the game proper, so I’ll keep it short. I am concerned about the idea of the “modern Viking.” I’m seeing an increase in clothing brands using that kind of terminology (on the clothes or in advertising) to solicit buyers in the tactically-minded, survivalist, or militia-type categories. This disturbs me because: (1) Vikings were not people to be emulated; (2) our society has no place for the kind of behavior for which Vikings are seemingly idealized; and (3) identifying oneself in such a way (except for a very small minority of people, perhaps) is not realistic. Even where it may be realistic, I’m not sure that it’s healthy. It’s essentially saying “I’m someone who thinks violence is the best answer.” I cannot disagree more. Alright, that’s done and done.

Now, what do I actually think about Assassin’s Creed Valhalla? A few things, in fact. Is it fun? Yes. Is there a lot of content to play through if you want it? Yes. Is it a beautiful game? Yes. If you liked AC Origins or AC Odyssey will you enjoy it? Absolutely.

All of that said, I have some reservations about Valhalla as an “Assassin’s Creed” game. This game has added some great elements to enhance the Viking side of things, but I think that this comes at the cost of the “Assassin’s Creed” heritage. The Raiding mechanic (in which your longboat crew assists you in attacking and pillaging monasteries to steal supplies and materials used to build and enhance your own settlement) is fun and, at least on a stereotypical level, emblematic of our ideas of Vikings. Likewise, references to holmgangs, weregilds and althings help immerse one in the Viking and Anglo-Saxon cultures. The reliance on tales of Ragnar Lodhbrok, though, may lean too heavily on the recent History Channel series (which, ironically, isn’t usually that great in its historicism, preferring in both documentary and fictional programming to serve entertainment over accuracy).

As an admission, I’m playing on “Normal” difficulty. I tell myself that this is because I don’t want to devote the additional time required to play at a harder difficulty level, but you’re free to substitute whatever rationale or psychology you’d like. On normal difficulty, you quickly have little reason to resort to stealth, as you become powerful enough to wade into even the most heavily-guarded fortresses and take out everyone without breaking a sweat. Very Viking saga, yes, but not very assassin-y.

Overall, the game has a lot more in common for me with The Witcher 3 (although less well-written, less complex, and generally less interesting than my travels with Geralt) than with the early AC games. Gone are the desperate roof-top escapes from guardsmen in a world where everyone is inexplicably a parkour master. Gone are the hit-and-run tactics. Gone is the aching for the time when you unlock the second hidden blade to take out those pesky pairs of door guards. Do I really miss those things? I miss the Florence of AC 2 and the pirate shenanigans of Black Flag, but I’m not sure I miss the stealth gameplay as a whole. It is, though, notably deficient. Again, a higher difficulty mode may sufficiently remediate that problem–at the expense of no longer feeling like a powerful Viking warrior in a saga. But, given my complaints about historical accuracy above, maybe I’m just not someone easy to please, and the fault lies more with me than with the game. As you know from my last review, I just came off of playing Watch Dogs: Legion, so maybe I’ve been stealth game-played out for a little while. Or maybe that’s just not my style of game, much as I’d like to think it is.

But there is an aspect of the game that leaves all of the rest by the wayside and has kept me coming back to sink hour after hour into it: the setting itself. If you're a frequent reader of the blog, you know that my own historical study has more to do with the late medieval and early-modern periods than the time of the Vikings and Anglo-Saxons. But I took a semester of Old English in grad school; I've read Beowulf, The Dream of the Rood, and The Battle of Maldon, as well as some of the sagas and Norse mythology. I know enough not to think of the 9th century as a "dark age."

As with both Origins and Odyssey, the ways in which the culture, art and architecture of the setting are brought to life leave me in awe. In addition to the pure pleasure of dwelling in the setting for a while–what I'd argue is the game's biggest draw–it's actually helped me discover and think about some flaws in my own historical conceptions.

Some of these are part of our general culture, I think–our movies and books tend to conflate the material culture of the late medieval period–knights in shining (plate) armor, palace-like fairy-tale castles, etc.–with oversimplified cultural concepts derived more from the late Viking age and the early medieval period.

Over the Thanksgiving weekend, in parallel to playing Valhalla, I spent some time re-reading The One Ring roleplaying game books (impressed again at how well this system in particular captures the feel of Tolkien's world without layering on other fantasy ideas and fandoms) and watching the Hobbit trilogy with K (we also got halfway through LotR, but some unexpected demands–mostly work in my case and football in hers–prevented the completion of the second trilogy). They reminded me how much Tolkien's world should be conceptualized in light of the Anglo-Saxon world rather than later medieval ideas. The armored characters should be in maille, not plate, wielding Carolingian or Viking-style weapons rather than later-medieval ones. The Rohirrim embody the Anglo-Saxon feel within the films fairly directly (aside from having stirrups and cavalry), but that aesthetic, or riffs upon it, should extend far further. I wonder whether–and hope that–the impending Lord of the Rings reboot will follow that tack.

Since the films were released, Tolkien's Children of Hurin, relying as it does on elements of Kullervo from the Kalevala in the story of Turin Turambar, has served as a reminder that Middle-Earth belongs more to the early medieval period than to the late in terms of material culture and style.

That, ultimately, is what I've come to love about AC Valhalla: it makes me feel nostalgia for a period of time that, I've discovered, I find far more enthralling and fascinating than I previously knew. I guess I'm going to have to start looking for a Great Course on the Vikings and Anglo-Saxons, or go back to reading Tolkien and the Norse sagas.

Maybe this isn’t the kind of review you were looking for–with its diatribes and digressions, that’s perfectly understandable. But I’d like to conclude by saying that I think the praise I’ve given here, that the game immerses one in an amazing historical milieu, is about the best I can give. Except that, if you haven’t played The Witcher 3, for God’s sake, go play that first. Then you can play Valhalla. On the other hand, if you’ve never played an Assassin’s Creed game, Valhalla makes for an easy entry point, if one that won’t prepare you for the early titles in the series.

Review: Watch Dogs: Legion – Good Timing?

I picked up Watch Dogs: Legion on something of a whim, if I'm to be honest. I played the first one but passed on the second. What piqued my interest and put me over the edge was the fact that there is no "chosen one" central character: you recruit your resistance from the general populace to fight the forces that have overtaken near-future London.

I probably spent as much time recruiting characters to DedSec as I did actually playing through the story. Certainly, I devoted much more time to recruitment than I did to side missions–about halfway through the game, I decided I just wasn't interested enough in them to spend that much time playing.

The situation in London is bleak at the beginning of the game: a terrorist group calling itself Zero-Day (or maybe led by someone calling themselves Zero-Day–this wasn't quite clear to me) carries out a spate of synchronized bombings across London, prompting the city to turn over authority largely to a private military company, ironically named Albion. At least its leader isn't named "Arthur."

This puts London in a condition that represents some of my worst fears for the direction the U.S. is headed. I should mention that my father lived outside of London for about two years while I was in high school, so I spent a good deal of time in the city and, being too young to drive in the States, I learned to navigate the Tube long before I learned how to navigate Houston's congested highways. So, in my mind, there's a personal connection between London and my own experience that perhaps made its familiar places (I always knew I'd gotten myself lost in the West End when I found myself walking between the adult-themed shops of Soho) resonate strongly with my present concerns.

If you’d like it laid out for you, here are some of the aspects of the collapse of London’s (the country as a whole is rarely mentioned) democracy in the game: Albion patrols the streets in armored personnel carriers, armed with the kit expected of a warfighter, not a peace officer (blurred as that line is in the U.S. right now). Normal people are stopped and harassed as the already-prevalent camera system and the personal data captured by our smart devices turn London into a surveillance state. The vestiges of British democracy–the Home Office, the Parliament, etc., still exist, but only to provide cover for the authoritarian leanings of those really pulling the strings (the game explains that Parliament has been suspended and that the Queen–no indication of which Queen that is, mind you–has not been seen for some time since the bombings). Albion is disappearing its detractors left and right, the news stories that come up in your feed are often manipulated propaganda rather than reporting with integrity, and the current administration has formed unofficial alliances with the city’s largest criminal organization to facilitate its ends.

This is the situation in which your resistance hacker collective is formed. In today's day and age–not just in the U.S. but in Europe and Britain, where the specter of conservatism dangerously flirting with fascism and/or populism raises its frightening head as well–there is a definite catharsis to be had for players needing to sublimate the angst they feel at the current political climate into imaginary action. I count myself among those players.

That’s why the recruitment missions feel so powerful–the need to bring in allies of similar mindset, who confirm and justify your beliefs that there’s something wrong with the current situation that calls for action, even of the direct and aggressive variety–is something many of us feel right now, whether or not that’s a reasonable mindset.

There are plenty of reviews talking about how cool it is to search out the various abilities (or weapons) different characters have as you build your team; I’ll acknowledge that aspect of the system but not dwell on it.

I will mention that the game has an option for permadeath for your operatives, and I can’t imagine playing the game without this option. The consequences, the drama of recruitment and selection of a particular character, make the whole system of having no single protagonist worth it; if you can’t lose the characters you recruit, that system loses much of its narrative weight. I lost about a half-dozen characters in my playthrough, most of them being “specialist” operatives with better skills and equipment than the average recruit: I lost an anarchist (one of the best character “classes” if you’re focusing on less-lethal tactics), a spy (my particular favorite character), a professional hitman (I thought that an amateur hitman was just a murderer, but, lo and behold, I did later recruit an “amateur hitman”), a deputy director of the Met, and a few others. Their losses–especially in otherwise successful story missions–were keenly felt, and that was the point, wasn’t it?

Otherwise, the gameplay was nothing unexpected for a GTA/Assassin's Creed/Watch Dogs/etc.-style of game. Less free-running and more hacking, but otherwise in line with expectations. Admittedly, I played the game on "normal" difficulty, which, despite my losses, was probably easier than I should have selected for optimal enjoyment. If you liked the previous Watch Dogs games, you'll like the way this one plays.

Ultimately, the game’s narrative was less satisfying than I’d initially expected. I called the nature of Zero-Day a mile away, and the plot points of the missions hit a little too hard on the tropes and cliches of the genre: the THEMIS idea essentially rehashed Philip K. Dick’s The Minority Report, the Skye Larson plot played out the typical mad scientist trope (while sidestepping all of the actually-interesting philosophical and practical issues of mind-uploading by making her a monster), and Mary Kelley played an unnuanced criminal mastermind the likes of which have starred in many a poor detective story. The most emotional point of the story’s ending is immediately undone after the credits roll. Part of me liked that, but it was a cheap happiness to be sure.

Fortunately, the nature of the game itself, rather than the plot, brought some nuance with it. As with Watch Dogs 2 (so I'm told), the game pushes you toward a less-lethal approach to combat. You can only unlock less-lethal weapons for your characters (some recruits come with lethal weapons, but that's the only way to get them), and even the "takedown" animations that show a neck being broken or the hitman garroting a victim to death are revealed to be less-lethal attacks in the game's treatment of them.

As a brief digression, I found the distribution of lethal weapons on recruitable characters–especially in London–to be ridiculous. It’s at least plausible that the Spy has a silenced pistol, or that the Professional Hitman comes with a pistol and assault rifle, but that’s not the half of it.

One of the first people I passed in the game was a "Tourist" with an M249 light machine gun. I chalked it up to satire of Americans, but then I also added to my potential recruits list a Chef with the same weapon. And then a University Researcher with a silenced pistol. As it turned out, the number of people casually packing heat in dystopian London–heavy weapons, no less–mystified me.

But that aside, the game's push toward less-lethal weapons made me continually ask myself about the morality of using lethal weapons in the fight. This is particularly where I wished I'd set the difficulty to a higher level. As it stood, there were many missions where I could send in a Professional Hitman and run-and-gun my way through Albion personnel, pausing to hack only when necessary. I wished that the difficulty had been higher so that the hero fantasy of blasting one's way through faceless neo-fascist bad guys without a care in the world might have been less accessible, along with all of its accompanying problems. But, ignoring the moral question within the game, I continued to ponder the point at which armed resistance becomes an acceptable approach–it is never a "good" approach. As I've written elsewhere, I don't think violence can ever truly overcome evil–only delay it–and that thought reverberated for me as I confronted my programmed "enemies."

It was certainly the fact that the setting of the game resonated with current fears and concerns about the future of the U.S. that led me to all of these thoughts, and it was morality and politics that traveled through my brain while playing the game far more than any consideration of privacy or technology issues. Even now, as I write this review, I’m continually refreshing the AP’s report on 2020 election results, full of some hope for the presidential results but mostly dread at the stark divide in my nation, the number of people who seem to value their own economic prosperity (manufactured as that may be) over ideas of democracy, justice, equality, or any of the other things I see as the ideals that justify the messiness and difficulty of our political system.

I’d better quit while I’m ahead. Or at least before I’m too far behind. I’ll conclude with this: I enjoyed playing through Watch Dogs: Legion, but it was far from an amazing experience. More important, I came away from this game wondering (in all sense of the word) how the cyberpunk stories and games of my youth seemed to be more prophetic year after year. As much as I enjoy playing games like Shadowrun, or Deus Ex, or Watch Dogs, that’s not a direction I would consciously chose. Which, in turn, made me a little embarrassed to play this game after all, feeling like I was turning my angst to video games rather than getting “out there” and doing something that might help incite meaningful change in the world. Do I feel like that’s even possible, or have I turned to a game like this because I’m beginning to feel powerless? Or is the coincidence of this game’s release with the 2020 election simply a serendipitous synchronicity of memes and fears as to put me in existential angst?

I don’t think any of that was what Watch Dogs’ creators intended it to be. But for me, that was my Watch Dogs: Legion: a self-inflicted reverie about my place in and responsibilities to the world. As I look back at this article, weird as it turned out to be, I think it reflects the course of my experience with the game–a journey from light-hearted escapism into contemplating much tougher questions and concepts. Was that worth my sixty bucks? Maybe.

Review: The Sparrow

I know; I’m a little late to the game if I’m reviewing a book that’s twenty-five years old. But I’m excited about it enough that I really don’t care about that.

So, we’re gonna talk about Mary Doria Russell’s The Sparrow, an exposition of theodicy wrapped in a sci-fi tale that’s secretly a bildungsroman of sorts. If you’re not a theology nerd, “theodicy” is the word for the study of the problems of evil and suffering. In Christianity, in particular, this problem might be more specifically phrased as “If God is all-powerful and entirely good and loving, why does God allow evil and suffering in the world? Why do these things happen to seemingly good people?”

Job is my favorite book of the Old Testament, in part because it addresses this very question and gives us the best answer I think can be had for it. When God appears to Job at the end of the poem, God's answer to Job's questioning is to tell Job that he cannot understand the answer. It's too complex, too nuanced, for the human brain to comprehend in all its depths. The ultimate answer God gives that humans can understand is "Trust me." Faith–faith that God is sovereign over all things, that God is love and intends ultimate good for God's creation–and hope that everything will one day be clear, that suffering and evil will be conquered fully after having served their purposes (as inscrutable to us as those purposes may be), is the answer. It is, admittedly, an answer that I find at once entirely frustrating and comforting. It's not my job to solve the problem of evil and suffering; it's my job to respond to evil and suffering in the way that God has instructed me.

Part of the brilliance and beauty of Russell’s book–and only part, mind you–is that she takes the same approach. There is no attempt to answer the question of suffering, only an attempt to hold it in her hands and turn it at all angles for the reader to view, to experience in part, all of its manifest complexity and difficulty. There are no apologies here, no arguments, only an investigation of the issue that is by turns beautiful and terrifying, humbling and infuriating.

I don’t want to give too much of the plot away, but I’ve got to at least tell you what the book is about, right? All of that investigation into theodicy is not exposition or diatribe, it is examined through the experiences and humanity of the characters.

The Sparrow tells of the aftermath of a first-contact mission put together in secret by the Society of Jesus to the planet of Rakhat, discovered by the Arecibo facility in Puerto Rico in 2019, when the astronomy equipment there picks up radio signals that turn out to be the singing of the indigenous peoples of Rakhat.

Only priest and linguist Emilio Sandoz survives the mission; the handful of clergy and layperson companions that accompany him to Rakhat do not. The time dilation of space travel, the reports of the second, secular mission to Rakhat, and reports from the first missionaries themselves seem to tell the tale of a horrific fall from grace and into depravity on the part of Sandoz. The story jumps back and forth between the Jesuit interviews with the recovered Sandoz (in an attempt to discover the truth of the reports and, hopefully, salvage something of the Jesuit reputation after the reports of the missionary journey have decimated it), the first discovery of Rakhat and the synchronicity that brought Sandoz and his companions into the mission in the first place, and the events that actually unfolded on Rakhat. These separate narratives meet, as it were, at the climax of Sandoz’s telling of his story.

That main thread, and its analysis of theodicy, contrasted with the modern missionaries’ own thoughts about their relationship to the 16th century missions of the Jesuits to the “New World”, form the core of the text, but Russell’s writing of the missionary characters, their backgrounds, their feelings, their developing relationships to one another, their thoughts about their places in Creation as they confront their missionary (or priestly) status, provides just as much literary joy and human insight as the “mystery” that frames all of these subplots.

This is, after all, a sci-fi story (one for which Russell won the Arthur C. Clarke Award), and great attention is paid to the physiology and culture of the peoples of Rakhat, to the methods of space travel (the missionaries convert a mined-out asteroid into their spaceship) and to the believable physics of the story. At the same time, those elements never get in the way of the narrative; no time is lost on long exposition about the nature of technologies or theories of culture and alien psychology. These run seamlessly throughout the text, woven in with the unfolding plot instead of interrupting it.

The writing itself is beautiful, jealousy-inducing for an aspiring writer such as myself. The blend of a familiar, practical tone with clever description and amusing turns of phrase reveals the intelligence and imagination of the mind behind this tale in an ever-delightful manner. The pacing and plotting of the story are an example of mastercraft in that aspect of the art, something especially apparent to me as I struggle with revising the plotting and pacing of my own fledgling work.

I must also express a debt of gratitude to my wife for bringing me to read this book. It's one she first read–and told me about–almost a decade ago. It sounded interesting, but I must not have been paying close enough attention to her explanations, because this is a book that fits my own interests uncannily well. Only when she announced that she was going to read it again, now that her experiences in ministry and seminary had sharpened her abilities to appreciate the tale, did I agree to read it alongside her. As I must often admit, she was right all along. I should've read it the first time she told me to. So should you.

Darwinism Doesn’t Exist in Star Wars (A comment on the Mandalorian)

Warning: (Minor) spoilers ahead.

As I’ve said, holidays are for faith, for family–and for Star Wars. I indulged in two of the three yesterday, binging the first four episodes of The Mandalorian (which I’d held back from watching for just this occasion) with my dad.

Part of me still expects to see Clint Eastwood’s face when the Mandalorian finally removes his mask given the laconic gunslinging of the titular character and the show’s rigid–maybe too rigid–adherence to the tropes of the western genre.

It’s a fun show, if a little simplistic. The fights have plenty of eye-candy (though also a lot of flaws for those of us with some knowledge of the way of the gun) and the plot paces along quick enough to leave the gaps in logic behind before you think too much about them. In that way, it’s classic Star Wars, though part of me also feels that this story could take place in any space opera setting and has Star Wars grafted on as fan-service more than being a story deeply embedded within the Star Wars universe–though this is perhaps my watching with a too-critical eye rather than a reasonably critical one. Did I say that the show is fun? I can’t say that enough–if you want something fun to watch and/or need a Star Wars fix, The Mandalorian will fit the bill nicely.

But I’ve mainly put this post here to rain on the parade of “Baby Yoda” memes and paraphernalia. Yes, the kid is super-cute. Yes, he’s very endearing. Yes, his antics are highly amusing. And yes, the Star Wars nerd in me is very excited to learn more about Yoda’s species (even if our reference to the character has been relegated to “Baby Yoda” because neither the character nor the species has yet been given a name). The problem, though, is that I don’t believe in Baby Yoda beyond his (her?) status as McGuffin and marketing ploy by Disney (one that is sure to be extremely successful, I’m sure).

Here’s why this post is placed in both the “Fiction” and the “Fatherhood” portions of the blog: I’m now six months into fathering Hawkwood and Marshal. It’s been tough, which was not unexpected but which doesn’t change the fact that it’s tough. I haven’t written too much about it on the blog lately as I’m still struggling through and sorting out feelings myself, and while I’m usually willing to parse through my thoughts and feelings publicly (at least insofar as the blog’s readership qualifies this a truly “public”), these I feel it’s more appropriate to play closely to the chest for the time being.

Suffice to say, though, as I know all parents do, I have times when I ask myself, "How much longer is it going to be like this? How much more can I take?" There are redeeming moments that take the edge off of that frustration, but managing it sometimes feels like a full-time job. On top of an actual full-time job, the task of raising and caring for the children, staying closely connected with K, writing on my novel and the blog, and making time for some other hobbies, it's a lot.

And that’s why I find Baby Yoda such an unbelievable character. I can accept a species that lives for 900+ years. But one that remains a toddler for at least fifty years? Nope. A species cannot survive such a catastrophic development–though perhaps it explains why there are so few of Yoda’s kind in the galaxy and why those there are seem to be possessed of boundless patience and Zen-like stoicism.

Yes, Baby Yoda is extremely well-behaved for a toddler (at least so far as I’ve seen), though also possessed of a stubborn streak characteristic of the age. I don’t expect to see a scene where, furious, the Mandalorian throws his helmet on the ground (forgetting his oath, of course), utters a string of profanities and wonders why he ever made the decision to become Baby Yoda’s protector in the first place. It would be in keeping with the tropes of the category of story that’s being told here, it might be the deepest characterization of the Mandalorian we get, and it might be the most verisimilitude we could expect to see in a Star Wars story. But we won’t get it.

Now, I don’t want to bring up the shame of midichlorians again, but I can’t help but compare the idea of a creature that stays an infant for more than five decades to that level of storytelling gaffe. I know, I know, we’re talking about a setting that includes easy faster-than-light travel, stories following relatively unnuanced workings of Campbell’s “Hero’s Journey”, the Force and many other elements that openly defy credulity and beg the kind of willing suspension of disbelief that is part and parcel of the enjoyment and success of the setting. Even so, it’s often (for me, at least) the attention to verisimilitude in the details that paves the way for the greater fantastical elements of a setting. For example, this is, I think, what makes Max Brooks’ World War Z so wonderful–if you can accept zombies, the rest of the stories within play out thoughtfully and believably, making the acceptance of zombies a low price of admission.

To see that Darwinian evolutionary forces sometimes simply don't exist in Star Wars undermines that willing suspension of disbelief–I enjoyed watching the show in spite of this, but I spent an inordinate amount of time while viewing wondering how a toddler could survive fifty years of being a toddler, what kind of saintly parents would be necessary to make such a system work, what benefit there might be to having a creature mature so slowly, etc., etc., ad nauseam.

Just me?

Post-Run Thoughts on Shadowrun 6th Edition

A ways back, when I did my set of posts on Shadowrun characters, I promised that I’d be doing a system review in the near future. I’m not sure that this qualifies, formally speaking, as a review, but I am going to share what I think about the system after having run a few sessions now.

As you might have gathered, I was excited about the 6th Edition system when it first appeared. I like the idea of Edge as something more like Fate points, and that, in theory, it supplants the need for the endless lists of modifiers in previous editions of Shadowrun. I was much more forgiving than most about other complaints about the system–particularly the problems with the first printing of the rulebook. A lot of that, really, is likely due to the fact that I got my copy in PDF, which had already been updated with errata by the time I read it, coupled with the fact that my familiarity with Shadowrun led me to naturally assume things that were not originally included in the rulebook–things like how much Essence you start with.

Character creation is more or less what you remember it being from the 3rd, 4th or 5th editions, with the Priority System and some amount of Karma to round things out (IIRC, more Karma than was standard in previous editions). I have developed some gripes with character creation and advancement, though.

First, I’ve noticed what I think are some balance issues in the Priority Table. Because spells cheap in Karma cost and the Adjustment Points granted by the Metatype selection on the Priority can be used to increase your Magic reason, there’s actually very little reason to choose the higher-tier selections for Magic use compared to selecting a lower tier, using your adjustment points to increase Magic, your Karma to buy some extra spells, and having more Skill and Attribute Points.

Second, I personally think that there's too much of a gap between the tiers for Skills and Attributes–I'm not sure that you can really create a viable character if you choose Priority E for Skills or Attributes.

With the amount of Karma available at chargen, it also strikes me that Physical Adepts may be more powerful than characters augged for a similar role. As an Adept, you can take three levels of Initiation, take extra Power Points at each level, and start with nine Power Points.

These balance issues would likely be resolved by using an entirely Karma-based character creation system with some limits on how much Karma can be spent where and how.

But that leads to another issue: Attributes and Skills cost the same amount of Karma to increase, but the Attributes themselves are not of equal value. Agility, for instance, applies to a lot more skills than most of the other Attributes. And, given that raising an Attribute–any Attribute, I think–increases your effective rating in more than one skill, charging the same Karma for an Attribute point as for a Skill point is problematic.

Character generation is one thing, and the more I look at it, the more the shiny new facade falls away, revealing cracks in the plaster underneath. The more damning issue, though, is that while the new use of Edge is a great idea in theory, it doesn’t really simplify things in play very well.

Now, instead of tracking lots of little modifiers, I have to track different pools of Edge, make sure I'm distributing Edge appropriately (without sufficient guidance from the rulebook), make sure my players are tracking and spending Edge, and keep in mind all of the different basic Edge spends and special Edge actions available. The worst part, though, is how unnatural the system feels in play. With Fate or Cortex Plus/Prime, the economy and use of points is relatively straightforward and intuitive after a short time playing. Here, though, I'm supposed to hand out Edge points when a dice pool modifier feels more appropriate and then use those Edge points at a later time–when they may not really feel appropriate or connected. Worse, I'm not sure it really simplifies much. Sure, I don't need to track how many rounds were fired in the last turn to calculate a recoil modifier for this turn, but a simplification of the amount and types of modifiers or the use of an advantage/disadvantage system would do much more with less.
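If you want to see why a plain dice-pool modifier feels so much more immediate than deferred Edge, a little back-of-envelope probability helps. Here's a quick Python sketch–my own, not anything from the rulebook–that assumes only the long-standing Shadowrun convention that each d6 in a pool "hits" on a 5 or 6:

```python
from math import comb

def chance_of_hits(pool: int, needed: int) -> float:
    """Probability of rolling at least `needed` hits on `pool` d6,
    where each die hits on a 5 or 6 (a 1-in-3 chance per die)."""
    p = 1 / 3
    return sum(
        comb(pool, k) * p**k * (1 - p) ** (pool - k)
        for k in range(needed, pool + 1)
    )

# What a +/-2 swing in the dice pool does to a "3 hits needed" test:
for pool in (8, 10, 12):
    print(f"{pool} dice vs. threshold 3: {chance_of_hits(pool, 3):.0%}")
```

Running that gives roughly 53%, 70%, and 82%. In other words, a two-die modifier moves a moderately difficult test by more than ten points of probability, and the player feels it on the very roll it applies to–exactly the immediacy that gets lost when the benefit is banked as Edge to be spent on some later, unrelated roll.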

There are other places where the attempt at a more narrative approach to Shadowrun feels less than fully realized. Spells are a huge example here. SR6 attempts to simplify spells somewhat by adding variables that can be applied to a spell (such as expanding its area of effect or adding additional damage) rather than requiring the choice of a Force level. But that system could have been used to require far fewer spells and give sorcerers far more flexibility, and the opportunity is lost. A few examples: (1) allow the caster to modify the base spell to touch or area of effect, eliminating the need for different spells with the same general effect but different minor parameters; (2) allow the caster to modify Illusion spells to affect technology rather than having two separate spells; (3) allow the caster to add on the additional Heal spell effects rather than making her use spell selections for six different minor variations.

This is the second time recently I've come across a system that I think I like on reading but don't in practice–the previous being the new edition of 7th Sea, where I find the core mechanic more limiting and cumbersome than freeing. I guess that means that designers are taking more risks to push mechanics in new directions than has perhaps been the case in the past, but with mixed results for major titles. I see some influence from Dogs in the Vineyard in 7th Sea–Dogs being a game I love from a design perspective but would probably never run. As a smaller title, though, Dogs' price of admission easily covers exposure to its innovative mechanic, whereas the greatly heightened production value of 7th Sea means a much higher buy-in. But that's a discussion for a different time.

In running the past few sessions of Shadowrun, I've admittedly been ignoring much of the RAW–using dice pool modifiers when they seem more appropriate, simplifying hacking rolls, etc. I don't think, as a GM and within the art of running a game, that there's anything wrong with that, so long as what's being done is consistent and allows the character stats to have a comparable effect on results to what they would have using RAW. But that's not a good sign in terms of game design, and I'm finding myself sorely tempted to go back to Fate or Cortex to run the game. Alas, I can foresee the groans from my players at the time lost learning and going through SR6 chargen only to change to a simpler system a few games in, so I'm not sure I'll try to make that sale.

Without getting overly technical or formal in reviewing the system, what I’m finding is (for me personally, your style of running games may achieve a completely different result) that the system is encumbering my running of the game more than facilitating it, giving me too many mechanics when I want fewer, and not enough when I could use a little more. As much as I’d like to continue liking the SR6 system, at the end of the day, I’m not sure that there’s a worse conclusion I can come to.

As I’ve hinted at in other posts, I’m really not a fan of D&D, because it doesn’t lend itself to the types and styles of games I like to play. But it’s well-loved because, despite its relative complexity (and I think it’s fair to say it’s really middle of the road as far as that goes), it supports a certain type of gameplay and approach. I’d argue that the OSR has so much support for exactly the same reason, though that approach is somewhat different from D&D 5e and at least partially stoked by nostalgia.

Shadowrun remains one of my favorite RPG settings, so I’ll probably continue to buy the books to keep up with setting material, but that doesn’t mean I’ll feel great about doing so.

Can I make SR6 work for a long campaign? Yes; yes I can. Will I feel like I’m fighting with the system all the way through? Probably.


Learning from Game of Thrones

[SPOILER ALERT: This post presupposes familiarity with the sweep of the Game of Thrones TV series, with a focus on the final season. If you're sensitive to having narrative spoiled for you and haven't watched everything yet, don't read.]

It would hardly be original of me to spend a post simply lamenting this last season of Game of Thrones, despite my desire to do so. Instead, I'm going to spend some time pointing out what I think are some lessons to be learnt by aspiring writers (in any medium) from the recent failures of the show.

To preface that, though, I need to exhibit some due humility. The greatest lesson to be taken from the recent episodes is that good writing is difficult, no matter who you are. It is, as with so many things, far easier to criticize than to create. D.B. Weiss and David Benioff, and the other writers who contributed throughout the show's run, have managed to create something for widespread public consumption. At this point, I have not. I feel it's only appropriate to bear that in mind and take what I have to say with a pinch of salt as we continue (though ultimately, I hope that the weight of my arguments, rather than the status of the people involved, carries the day in this discussion).

Show Don’t Tell

It’s one of the commonly-touted pieces of advice given to writers. Don’t use boring exposition when you can just as easily let the audience get the necessary information from context or from being immersed in the setting and story. Don’t explain the inner thoughts of the characters when we can understand them just as well by how the characters act and speak.

This is especially true of visual media–which is why Industrial Light & Magic and Weta Workshop have been able to do such wonderful things for defining setting in films and TV, and why concept art is such an important aspect of designing for those media (and even for the written word)!

So, for me, Game of Thrones’ after-the-show talks with the showrunners pointed out a key problem. When you have to explain what you were trying to get at in an episode after the episode, you haven’t written the episode well enough to stand on its own. When you smugly assume that everyone got exactly what you’re talking about while watching, you’re adding insult to injury.

This is largely a result of rushing the storytelling. Without time enough to lay all of the necessary groundwork to explain events and occurrences within the show, you’re going to have to either let the audience create their own explanations or hand the explanations to them elsewhere. The lesson here: make sure you’re taking the right amount of time to show what you need to show so that you don’t have to tell later.

To be clear, this is a general rule, and general rules can always be broken in good writing–if done well and only when appropriate. It is possible to have key events happen “off stage” and describe them later or to play with the relation of key information in other ways, but these decisions must be made carefully and deliberately. My recommendation is to start with a “more is more” approach when writing and then employ a “less is more” approach when editing. It’s easier (I think) to lay it all out and refine by cutting out the dross than to realize your narrative isn’t complete and then struggle to fill in gaps–I’ve been there!

Here are some specific examples from Season 8 of this being an issue: the tactics employed at the Battle of Winterfell and Daenerys' sudden change in the attack on King's Landing. This lesson could just as easily be called "Timing is everything" or "Don't Rush" (the latter of which is probably the cause of most of Season 8's mistakes).

Reversals of Expectations: There’s a right way and a wrong way.

The showrunners made a great deal out of “defying audience expectations” in Season 8. Defying audience expectations is a key technique in good narrative, but there’s more nuance to it than that. The technique, properly employed, has two parts: (1) give the audience a twist that they don’t see coming AND (2) set up the narrative so that, in retrospect, that twist feels somehow inevitable.

This is not a game of "gotcha!" Good writers do not play with twists and surprises simply because it's something to do. Good writers use twists to increase tension, to remind us that, as in life itself, the unexpected (but often foreseeable) occurs, and to create drama.

A good surprise must satisfy multiple demands in addition to the two basics mentioned above. The twist must follow the internal consistency of the setting–it should defy expectations of plot, but not of the personality and character of the actors or the rules (spoken or unspoken) of the setting itself. It must have sufficient groundwork laid in the story; without this the “twist” feels random and unmoored from the themes and scope of the rest of the narrative.

In “gritty” fiction, there will be times when bad fortune or ill luck interjects itself into the story, times when both readers and characters are left wondering “is there a meaning to all of this, or is everything that happens just random?” But those types of events only work when explained by coincidence and happenstance–they must truly be strokes of bad luck. When we’re talking about the choices made by characters, there must be believable motivation and a way for the character to justify the action–even if we don’t agree with the logic or morality of that justification.

The example that undoubtedly comes to mind here, as above, is Daenerys' sudden decision to kill everyone in King's Landing. There is some building-up in her early story arc (following Martin) of the idea that Dany might not be the great savior everyone hopes she will be. She is a harsh mistress to the Masters of the cities of Slaver's Bay, willing to commit atrocities in the name of "justice." But this moral ambiguity (strongly based in the character of historical figures in similar situations) is not the same as the desire for justice slipping into a desire for power and control to implement that justice. That story arc certainly works (it is the rationale behind Morgoth and especially Sauron in Tolkien's world), but we need a solid background for such a morally-repugnant act as the mass murder of innocents. We are given the groundwork for her eventual "fall" into a person willing to use harsh means to achieve her idealistic ends, but not for her to do what she did. This failure to lay the proper foundation for her sudden change leaves it feeling like, as some commenters put it, "a betrayal of her character." This leads us to the next point.

Internal Consistency versus Authorial Fiat

For me, the greatest issue I took with Season 8, the thing that left such a bad taste in my mouth, was my belief that the showrunners decided what would happen and then shoehorned in all of the details to get them to those decisions. Euron's sudden (and nonsensical) appearance before an undefended Targaryen fleet and his ability to quickly slay a dragon, compared with his powerlessness before the one remaining dragon at King's Landing, is only one example here. Having Arya kill the Night King (which had been "decided early on") is another. And just about all of Episode 6.

One of the great joys of writing (in my mind, though I hear this with some frequency from other writers) is when a story takes on a life of its own. What you thought would happen in your story gets suddenly left behind because of the momentum the story has accrued, the logic of the setting, the narrative and the characters within it. We find ourselves mid-sentence, suddenly inspired (in as true a sense as that word can be used) with the thought, "That's not what happens–this character would do X instead! Which means Y needs to change!" All of a sudden, you're going somewhere better than you were originally headed, somewhere truly rewarding to write and for your audience to read or see.

This is the result of a dialectic that forms between the moving parts of the story. The narrative, the dramatic tensiveness of the story, the themes and motifs, the characters involved and the conditions established by the setting; the gestalt of these elements becomes something that lives and breathes, something greater than the mere sum of its parts.

Pigeonholing the plot forces it to become stilted, forced and (worst of all) didactic. Dead and mechanical. This is, in part, the difficulty with story “formulae.” There are narrative structures that provide a general framework for certain types of genres or stories, but following the formula with nothing else results in something unsatisfactory.

Here, though, my suspicion is that the problem was more a matter of fan-service and a slavish devotion to defying expectations than rote adherence to fantasy-story formulae.

One of the things that made the Song of Ice and Fire books and the Game of Thrones TV show so popular, so gripping for the audience, was that they pulled more from medieval chronicle than fantasy yarn for their structure. The story is about the world and the group of characters as a whole in a way that is bigger than any of the constituent characters, that survives the misfortunate end of any one (or more) of them. This left no character safe, and allowed for real surprises that contradicted expectations of narrative structure rather than expectations based on the internal logic of the harsh, unforgiving setting and culture(s) in which the story takes place. The internal logic, then, drives the defiance of expectations, rather than forced twists being inserted into the plot at the author's whim.

In fantasy in particular, internal consistency is the golden rule. In settings where magic is real, where dragons may soar in the skies and burn down the enemies of a proud queen, we are required to suspend disbelief. Of course. But we can manage that suspension of disbelief only when there is a reward for doing so and the obstacles that might prevent us are removed from our path. Magic is a wonder to behold in the truest sense, but it fizzles and dies when it appears that the magic in a setting does not follow certain rules or structure (even if we don't fully understand those rules or that structure). If the magic is simply a convenient plot device that conforms like water to whatever shape the author needs or desires, then it fails to carry wonder or drama. Drama constitutes the ultimate reward for the suspension of disbelief–allow yourself to play in a world with different rules from our own, and the stories you find there will satisfy, amaze, entertain and tell us truths about our own world, even if it is very different. But without internal consistency, there can be little meaning. Without meaning, narrative is nonsense.

Season 8 lacked this internal consistency on many levels, from the small (the much-discussed "teleportation" around Westeros) to the glaring (battles predetermined by plot rather than by the forces and characters that participated in them).

But the greatest issue I took with Season 8's (lack of) internal consistency was the ending. The sudden decision of the assembled nobility of Westeros that, in effect, "Yay! Constitutional monarchy from now on!" seemed far too after-school special to me. For a story where people's personalities, desires and miredness in a culture of vengeance and violence long proved the driving factor, you need far more of an internal story arc for a sudden commitment to the peaceful resolution of issues to be believable. The characters would have to reject their entire culture to do so, rather than rationalizing how the culture was correct all along (which is what much more frequently happens in real life). I can see such a decision for Tyrion and for Jon. For Sansa and Arya, I cannot. And why Yara Greyjoy and the new Prince of Dorne wouldn't likewise declare independence, I cannot say.

In short, I just don’t think that the narrative satisfactorily supports the actions taken by the ad-hoc council of Westerosi nobles in the final episode.

When a Narrative Fails Your Narrative

Why did putting Bran on the throne fall flat in the final episode? Tyrion gave an impassioned speech about how stories are what bind people together and create meaning (something with which I wholeheartedly agree, as an aspiring fantasy author and aspiring existential Christian theologian) and then made an argument about the power of Brandon's story.

Wait, what? You lost me there. What was the power of Brandon's story? Yes, it started strong, and he did do some amazing things–crossing north of the Wall, becoming the Three-Eyed Raven (whatever the hell that means), surviving his long fall from the tower at Winterfell. But, given his role in Season 8, I'm not sure that any of that mattered. He played virtually no role at the Battle of Winterfell (at least that we mortals could see), the narrative of his role as Three-Eyed Raven was left impotent and undeveloped at the end of the series, and of those with decision-making authority in Westeros, few had any direct experience with the Three-Eyed Raven, the White Walkers or the Battle of Winterfell. To them, the whole thing is just a story made up by the North.

For narrative to be effective, we must be able to use it to find or create meaning. Bran’s story is too jumbled a mess without a climax or denouement for us to be able to piece much meaning out of it. In fact, we’re left wondering if it meant anything at all.

Since the idea to put him on the throne relies on the meaning of his story, the act of crowning him itself becomes meaningless; we can find no internally-consistent basis for supporting making him king (other than that he can't father children) and no meta-narrative logic for the event either. This is exacerbated by the fact that Bran earlier tells us that he doesn't consider himself to be Bran anymore. Without continuity of character, narrative loses meaning.

Thus, the finale fails because it relies on a sub-narrative that has failed. It is a common trope for fantasy fiction to use other stories (often legends) from the setting’s past to convey meanings and themes for the main narrative (Tolkien does this, Martin himself does, Rothfuss does as a major plot device in The Kingkiller Chronicles); writers looking to follow suit need to make sure that any “story-within-a-story” they use itself satisfies the necessities of good storytelling, or one is only heaping narrative failure upon narrative failure. The effect, I think, is exponential, not linear.

What the Audience Wants and What the Audience Needs

Several of my friends who are avid fans of the show and the books, before the final episode, expressed their feelings about the uncertain ending in terms of “what they could live with.” This was often contrasted with both their hopes for what would happen and their expectations of what would happen.

There’s been much talk (even by myself) about the showrunners performing “fan-service” in this season, whether through the “plot armor” of certain characters or the tidy wrapping up of certain narratives.

The claim that the showrunners made plot choices in order to please the audience has set me thinking about these types of choices on several fronts. On the one hand, GoT rose to prominence in part precisely because of G.R.R. Martin's seeming refusal to do any "fan-service." That communicates to me that there is a gulf between what readers want from a story and what they need in order to feel satisfied by it.

We can all recognize that there are stories that don’t end happily, either in general or for our most-beloved characters, that nevertheless remain truly satisfying and meaningful narratives for us, ones that we return to time and again.

So, should giving the audience what they want (or, to be more accurate, what we think they want) be a consideration for the writer? There is no simple answer to this question. The idealistic writer (like myself, I suppose) might argue that crafting a good story–which is not the same as a story that gives the audience exactly what it wants–is more important than satisfying tastes. On the other hand, the publishing industry has much to say about finding the right “market” for a book, and knowing what kind of stories will or won’t sell. For the person who needs or wants to make a living as an author, playing to those needs may be a necessity. Even if income isn’t a concern, there’s still something to be said for what the audience organically finds meaningful as opposed to what the author seeks to impose as the meaning and value of the story.

I just want to point out this tension as something that the final season of Game of Thrones might help us think about, not something for which I have any answers, easy or otherwise. When the final books in the series are released (if that ever happens), maybe there will be some fertile ground for exploration of these ideas. Of course, the intent of the various creative minds on all sides of this collection of narratives may remain forever too opaque for us to glean any true understanding of the delicate relationship between author, craft and audience.

Conclusion

I, like many of you I suspect, was left profoundly unsatisfied by the final season of Game of Thrones–the ending of a story I've spent years being attached to–and my frustration is further stoked by the knowledge that the showrunners could have had more episodes to finish things the right way instead of rushing to a capricious and arbitrary ending.

That said, the failures of the season (not to mention the great successes of previous seasons) provide many lessons for us would-be authors.

What do you think?