What Writers (and Roleplayers) Need to Know about Swordplay, Part II: Swords

For the introduction to this series, click here.

We ought to start with the Queen of Battle, oughtn’t we? By this, I mean the sword, of course.

Weights and Measures
Let’s get the most glaring error out of the way first: swords were not heavy, nor were they clumsy. You will still hear even some historians claiming that swords weighed 20 pounds or more; this is hogwash.

If you’re able, do a quick test. Get your beefiest friend and a weightlifting barbell (the big one, for benchpressing). These typically weigh 15 to 20 pounds. Ask your meatloaf friend (without calling him that) to try to swing the barbell like a sword. Stand back and prepare to laugh. The results should be slow, clumsy and obviously ridiculous.

The average one-handed sword (an “arming sword”) of the medieval and Renaissance periods likely weighs between one-and-a-half and three pounds. The average two-handed sword (what is properly called a “longsword,” by the way) usually weighs between two-and-a-half and three-and-a-half pounds, give or take. If you’ve taken some time to watch videos on YouTube, now maybe you’ll understand how they’re able to move so fast and so agilely–we’ll return to this.

Where did we get the idea that swords are so heavy? Bad scholarship is the likeliest reason. The heaviest swords actually used of which I’m aware are the zweihanders (the “true two-handers”) used by the Landsknecht mercenaries. These could weigh between six and eight pounds and could be six feet from tip to pommel.

First, it’s important to know that this was a very specialized weapon (see my next point below). By the early 16th century, when this weapon came into use, Europe had (debatedly, at least) undergone a “military revolution.” Gone were the shieldwalls and rough battle lines of the medieval period, replaced by professional or semi-professional soldiers who spent more time drilling in formations and maneuvers than on the manual-of-arms for their weapons. The standard was the use of large pike formations protecting musketeers or archers (the Spanish “tercio” is a prime example of this). With their (very) long pikes and the ability to maneuver and angle their weapons together, a pike formation proved very difficult to assault.

The zweihander was one tactical response to this problem. If you look at the weapon, you’ll see a long grip followed by the crossguard and a typically long-and-blunt ricasso (the base of the blade coming from the crossguard). Some examples had this section wrapped in leather and/or topped by parierhaken (parrying hooks). The design will help you to understand the use.

Gripped as a sword, with both hands on the hilt, the weapon could deliver powerful swings, excellent for knocking pike points out of the way, or potentially even cutting them off (there is no agreement on this point).

Once you’re inside the length of the pike, the pike becomes mostly useless to its user. The pikeman would need to drop his pike and draw whatever shorter weapon he had to hand. The user of the zweihander, however, only had to position his off hand on the ricasso and he suddenly had a weapon that performed more like a short spear than a heavy sword. Advantage doppelsöldner (as these men were called). By gripping the blade itself with one hand, the doppelsöldner could even simply push pikes up and hold them out of the way while his compatriots slid into the pike formation to do the dirty work.

This was especially dangerous work, and doppelsöldners (literally “double soldiers”) were probably so called because they received double pay.

Over time, as the tactics of warfare continued to evolve, the zweihander became less and less useful. It retained, however, some significance as a symbol of certain military units, and versions intended only to be carried in parade were created. Without the care given to weight and balance that a functional sword requires, these became quite heavy. When antiquarians of the 19th century rediscovered them, they assumed that the parade swords they’d found were actual weapons of war and marveled at the strength necessary to wield them.

If you’d like to take a more scientific approach, let’s look to physics. Force equals mass times acceleration, and the energy a blow delivers scales with the square of its speed but only linearly with its mass. So, all other things being equal, a lighter weapon swung faster compares very favorably with a heavier weapon swung slower. Medieval minds may not have had the equations, but they were smart enough to look at the evidence and draw a conclusion. Add to this the fact that you have to actually hit your target to do any damage, and the usefulness of a faster weapon becomes doubly apparent.
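To put rough numbers on that, here is a back-of-the-envelope sketch; the masses track the figures above, but the tip speeds are illustrative assumptions rather than measurements:

\[ F = ma, \qquad E_k = \tfrac{1}{2}mv^2 \]

\[ E_{\text{arming sword}} \approx \tfrac{1}{2}(1.4\,\mathrm{kg})(16\,\mathrm{m/s})^2 \approx 180\,\mathrm{J} \qquad \text{versus} \qquad E_{\text{parade piece}} \approx \tfrac{1}{2}(4\,\mathrm{kg})(8\,\mathrm{m/s})^2 = 128\,\mathrm{J} \]

If the heavier blade can only be swung at half the speed, the light, lively sword delivers more energy to the target and recovers faster besides.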

A Sword is a Tool
Like all weapons, a sword is a tool, albeit one with a macabre purpose. Understanding that goes a great distance toward understanding swordplay, I think. Two parts are particularly important. First, the pressure applied increases inversely with the area over which the force is applied. This is the entire purpose of a blade–the edge reduces the area over which force is applied, focusing and concentrating it over a small space. This is why all bladed weapons are useful–they concentrate the force applied to the target, hopefully shearing and cutting through it.
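In formula terms (the numbers here are purely illustrative assumptions, chosen only to show the scale of the effect):

\[ P = \frac{F}{A} \]

\[ \frac{200\,\mathrm{N}}{10\,\mathrm{cm}^2} = 200\,\mathrm{kPa} \qquad \text{versus} \qquad \frac{200\,\mathrm{N}}{10\,\mathrm{mm}^2} = 20\,\mathrm{MPa} \]

The same push, delivered along a narrow line of edge contact rather than a flat bar, produces a hundredfold increase in pressure, which is the difference between a bruise and a cut.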

Second, a sword is a lever, again a tool to amplify the effort exerted by the user. The longer the lever, the greater the amplification: the portion of the blade near the tip moves fastest and accelerates fastest, making the cutting area near the tip the most dangerous part of the weapon.
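A quick way to see this (the swing timing below is an assumption for the sake of illustration): a point on a rotating blade moves at

\[ v = \omega r \]

so if the hands sweep the sword through a quarter circle in about a tenth of a second, \( \omega \approx (\pi/2)/0.1\,\mathrm{s} \approx 16\,\mathrm{rad/s} \), and a point 25 cm from the hands moves at roughly 4 m/s while the tip, a meter out, moves at roughly 16 m/s.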

This covers the most basic design purposes behind the weapon, but there is much more. Tools are often improved incrementally over time, and we see that with swords in the historical record, from early bronze weapons to the carbon steel of the medieval and Renaissance sword or with the addition of a hilt capable of blocking an enemy blade.

Some tools are generic, able to perform multiple tasks passably, but not excelling anywhere. Others are specialized, becoming more effective at limited tasks to the detriment of other capabilities. Bear in mind that at all points of human history, there is also an “arms race” between the capabilities of weapons to cause injury and the capabilities of armor to stop injury.

Swords evolved over time in relation to the armor available. Just two examples: first, the two-handed sword did not become a common weapon until the advent of more-effective armors in the transitional period of the 14th century, as we see progress toward true plate armor: brigandine and “coats of plates,” the addition of plates to protect joints and limbs, etc. When one could more reasonably rely on one’s armor to stop a blow, a shield became a less necessary item (as we’ll discuss later, a shield should really be thought of as a weapon, not armor), freeing a hand for a longer, weightier weapon, which in turn provided more advantage against that same armor than a one-handed sword.

The second example: as plate armor became more common, a different approach to the design of swords was necessary. Cutting is typically ineffective against plate armor; this is partially a matter of its rigidity and resistance to cutting, but also a matter of its design–plate armor is designed to deflect a blow, directing the force of the attack in a way less harmful to the wearer, rather than to simply stop the blow. The result of this was blades with more acute points. Much fighting in plate armor, at least with swords, results in grappling, with the combatants grabbing the blade of their own sword with one hand (called “half-swording,” and yes, this can be done without injury) and aiming to maneuver the point of the weapon through the gaps between plates. Harnessfechten is truly terrifying stuff, with the end results as often as not being achieved through grappling itself (the breaking of limbs and such) or through close work at the half-sword or with the dagger.

Swords also changed as firearms altered the types and amount of armor worn, becoming lighter and developing (though not solely) into the rapier and later smallsword. Both of these, the rapier and smallsword, are excellent examples of the very-specialized sword; we’ll discuss rapiers in detail shortly.

What does this mean for the writer and/or roleplayer (especially a GM)? If you’re describing a sword, or determining what kinds of swords are likely to be found in your setting, you’d be well-advised to do some research into sword typology and the types of swords that existed in various time periods. Think about the relationship between your setting and its rough historical equivalents and–especially–what kind of armor is available and how that would affect sword designs and styles. There’s not necessarily a need to make mechanical distinctions between variant sword types in the gaming realm, though you certainly can if you lean heavily simulationist (or gamist, I suppose), but it will help to visualize the setting.

There are some other storytelling opportunities here–if yours is a setting with ancient and magical weapons and armor (like most games of D&D, for instance), think about how that ancient weapon may differ in appearance and design from the ones made in the setting’s present. Do ancient swords of power look more like 9th-century Viking swords than the more acutely pointed 15th-century-style swords used by most people? Would the sword be less effective against “modern” armor (whatever that may be in your setting) except for the magic within it?

A side note here–as in our own historical record, the development of sword types was not solely a linear progression. Multiple sword designs competed with one another, or performed different functions, in the same period. Changes in sword morphology did not occur simultaneously over all geographic locales, and the evolution of any weapon involves some amount of discovery, forgetting, uneven development or acceptance, throwbacks, etc.

Like any invention, the discovery of the technology itself is far from the only factor involved in the “success” or acceptance of the technology. Cost, societal and cultural views, changing needs, and many other factors may cause some technologies never to be fully realized despite the fact that they perform better than alternatives.

Additionally, because weapons are tools, context is important. The comparison of European swords and Japanese swords during their respective feudal periods provides a good example. The katana is not an inherently “better” weapon than the European longsword; of course the reverse is also true. The two weapons developed in, and made sense in, different contexts.

While I’m not as well-read in Asian history as I am in western history, my understanding is that the katana’s design is a very specific response to several factors in Japan. Primary among these was the reduced availability of quality materials from which to produce reliable, weapons-grade steel. Two conditions flowed from this: plate armor did not develop or see broad usage in Japan as it did in Europe, so the importance of acutely-pointed weapons that could be used against enemies in a wide range of armors (including that “white metal” plate armor) did not exist in the same way in Japan as in Europe–the needs to be fulfilled by the weapon were different. Likewise, the resources available with which to make weapons in Japan necessitated different techniques in sword-forging, and the katana (and its variants, which I believe are as diverse as European weapons) represented the best balance of effective weapon and (relative) ease of manufacture. Some exquisite weapons were made in both locales. Both, I’m sure, also saw a number of subpar weapons created because of lack of skill, the demands of semi-mass production, the corner-cutting of greedy manufacturers, or the penny-pinching of those who commissioned the weapons.

Making Swords was Difficult
The medieval and Renaissance periods did not have access to modern metallurgy. The field of chemistry was in its infancy, and though the understanding of metals and their properties certainly improved over the centuries in question, smithing metal was as much art as science during the medieval and early modern periods.

A sword is made of carbon steel, which is iron alloyed with carbon to achieve the desired properties. If you’re at your local Renaissance Faire and someone is trying to sell you a sword made out of stainless steel, it is a cheap display piece. If that’s what you want it for, no worries. But if you want something you could actually swing, test cut with, or safely use for WMA, you need carbon steel.

Those physical properties change based on the amount of carbon in the steel, and the properties required of a good sword are quite specific. The sword needs to be able to take and hold a good edge (which I understand is something of a metallurgical “sweet spot”). It needs to be hard, but not brittle, and the blade needs to be able to flex rather than to be perfectly rigid. There are some variations on these needs based on the sword design, of course, but those facts are generally true.

Here’s the problem: medieval and early modern smiths had no way to accurately gauge the carbon content of steel. They had to develop an intuition for the right amount of carbon, and smiths developed, even before our time period, techniques for controlling carbon content (relatively, if not exactly). One such technique was to create the billet for the sword from individual layers of iron and carbon-containing metallic strips, heating them together and combining them to get a steel with a semi-controlled carbon content; this is called “pattern welding.” Viking swords were commonly made this way, with the pattern of the mixed steel visible in the blade or fuller when acid etching revealed the “serpent in the blade,” as it was called.

By the Tudor period, other techniques were available for increasing carbon content in steel but, admittedly, I don’t remember the specifics well enough to describe them now.

The important thing to note is that making swords required special knowledge and skill–this is not something just any blacksmith could do. Basic economic theory tells us that the more specialized knowledge and skill a product requires, the lower the supply and the higher the price the commodity commands. This is true of swords. While it’s very difficult to determine the actual costs of swords at various levels of quality or design, I would note that, in many of the medieval laws requiring the ownership of certain arms and armor, the weapon required of most men was a spear, not a sword.

We also have some evidence of out-of-date styles of sword continuing to see use despite the changes in the “modern” design of the weapons. This likely indicates, and there is some corroborating evidence in the historical record, that swords might be passed down in a family as heirlooms because of the value they had and relative difficulty of acquiring a newer weapon. Sometimes the blade was kept and the weapon’s fittings were changed.

On the other hand, there is much evidence of schools of swordplay becoming available to the (paying) public by the 16th century–we have a number of woodcuts showing training in just such a setting. This means that, for the burgeoning middle class, the acquisition of swords and time and money enough to learn their use was not out of reach. While swords were not nearly as common as they are often portrayed, neither were they rare.

To analogize to the modern period, I think we’d be well served by thinking about military-style rifles. A lot of them are made by governments for warfare, and they don’t simply disappear once the war is over. An AR-15 in the United States might run $500 for a very basic model and range into the thousands of dollars for higher-end versions. According to CNN, 40% of Americans do not have $400 available to them in the event of an emergency, so at least 40% of people probably couldn’t come up with the money to purchase such a weapon without taking on serious financial risk. I would imagine that for another fair percentage, the acquisition could be made only by saving over time, financing, or stretching the budget. Bear in mind that credit did not work during the medieval and Renaissance periods the way it does in modern times (which is not to say that there was no lending or borrowing of money or other extension of credit, but the ease of access to credit was far lower). And of course, there are some people who could afford to arm an entire town or county.

So, in writing or roleplaying, think about the social status and wealth of a character when determining whether that person owns a sword. Most peasants and desperate folk won’t–they’re more likely to use something simpler, less expensive and easier to acquire–a spear, an ax, a knife, etc. As we’ll discuss shortly, using a sword is not easy and requires significant training, so most peasants wouldn’t have had the free time (or resources) to study swordplay properly, even if they could acquire a sword.

As with the other sections, these are guidelines to think about, choices that must be made after reasonable consideration, not strict rules to be slavishly followed. Some societies or cultures by their nature will have a higher focus on producing weapons and putting them into the hands of the populace. Switzerland’s famous status as a “neutral” nation is not simply a matter of its refusal to intervene in the affairs of foreign nations, but also of the fact that mandatory military service and weapons training (members of the military store their weapons at home!) make it a nightmare for any would-be invader.

You Couldn’t Just Wear a Sword Anywhere
The systems of law enforcement and public safety were not so clearly defined, structured or regulated as they are now, but they became more so over the medieval period and into the Renaissance. As we discussed above, because a sword was not the commonest or most affordable of commodities, it was also a status symbol–as social mobility increased somewhat after the Black Death and especially into the Renaissance (though still nothing like modern social mobility), more and more people wanted to show off their success by wearing one.

As is ever the case, those who held power didn’t want to share power or prestige with others and made concerted efforts to hold the lower classes down. One of these efforts was the creation of sumptuary laws. Sumptuary laws were concerned with how a person could and could not dress based upon their social status and wealth–you had to have a certain annual income to be legally able to wear ermine (a popular type of fur), for instance. This also extended to the wearing of weapons.

More than that, though, wealthy aristocrats had good cause to fear the peasantry–they largely enjoyed their wealth and status on the backs of those less fortunate, as the German Bundschuh movements and frequency of peasant revolts (England in 1340, 1381 and 1450, France in 1358 and 1382, Friuli in 1511, the German Peasants’ War in 1525-26, just to name a few) attest. The aristocracy didn’t really want their peasants to be well-armed.

But the simple matter of public safety was also a concern, and the view that “an armed society is a polite society” (the quip is actually Heinlein’s, though Machiavelli did argue for an armed citizenry) was certainly not held by all. We know that the wearing of weapons was specifically permitted for travelers and pilgrims of the lower classes (because of the threat of brigandage and banditry, of course).

Many towns and cities had restrictions on the length of blade that could be carried inside their limits, though the specifics varied widely by time and place, and exceptions seem to have almost always existed based on social class or social function.

This is to say that, contrary to common D&D tropes at least, people (at least by the Renaissance) didn’t often walk around in full armor and festooned with weaponry–that made people nervous and attracted attention. People were restricted from wearing weapons in certain settings, and social norms played a role as well.

Bear in mind that different levels of armedness were permissible in various situations. Wearing a dagger or knife was rarely forbidden, and it was common for the nobility to wear a sword (though more commonly of a lighter “civilian” design such as an “espada ropera” or rapier) in social settings where combat was not expected. The wearing of armor in particular (when not in an official capacity requiring it) advertised that you were looking for trouble.

The types of weapons–even swords–carried also varied by social status. I’ll give an example about what certain weapons communicated later by looking at the gang fight scene at the beginning of Shakespeare’s Romeo and Juliet. For now, I’ll just give this example–in late medieval and early-modern England, the retainers of a nobleman who were commoners but who were allowed to carry weapons by virtue of their service in the nobleman’s retinue were most commonly armed with a simple single-handed sword and a buckler. They were commonly referred to as “sword-and-buckler” men and the term “swashbuckler” derives from the practice of letting one’s buckler (hung from the belt) clash against one’s (sheathed) sword as one walked, advertising armedness with a good dash of bravado. For various reasons, but among them armed clashes between groups of retainers, laws restricting the size and makeup of liveried retainers were a common feature of this period. That they were issued with such frequency most likely indicates trouble in enforcing them–or at least a high level of concern with the problem.

And if good fiction is any indication, there’s a lot of good drama to be had when a character is caught without his armor or the weapon he’d prefer to use to defend himself. I’m certainly of the mind that this should be pursued in both “conventional” fiction and roleplaying–don’t let your characters carry an arsenal whenever and wherever they feel like it!

Using a Sword is Difficult
We’ll talk about the actual features of swordplay in the next Part, but for now, let me expound briefly on why swords are difficult to use.

A sword is not a club. That seems obvious, but think about the fact that the edge must actually contact the target for a sword to maximize its effect. Not only that, but the edge must contact the target at an appropriate angle to have an optimal effect. “Edge control” is one of the first difficult tasks faced by a student of the sword.

Then there’s the whole “not-cutting-yourself” thing. You want power and acceleration behind each swing of the blade, but you also need to control the blade after it has missed, struck its target, or been deflected. While moving. While trying not to be hit by your enemy. I have seen or heard of injuries requiring emergency medical attention and stitches during test-cuttings. If you’re not familiar, a “test cutting” is the practice of cutting a stationary object with a sharp blade. You’ll find many videos of test cuttings performed on water bottles and rolled tatami mats. I have attended and participated in test cuttings on animal carcasses (if it matters to you, the animal was not killed for the purpose of the test cutting–and certainly not during it!–so this was a matter of making the most of the carcass. If you are offended by this, I certainly understand, and there’s a perfectly reasonable question and conversation to be had there).  These are the most controlled environments in the use of a sword that you could hope to have–and yet people still manage to hurt themselves. Factor in all the fighting stuff and you have some serious concerns.

The body mechanics of the movement of the sword, whether the transition between one “guard” (a manner of holding the sword ready for use) and another, the transition from one attack to another, or from attack to defense and vice versa, are not always intuitive until you build muscle memory. The options for how to respond to any given blade contact are myriad. You can move, you can grapple your opponent, you can act “on the bind” by pressing your blade against theirs, you can counterattack; and all of these approaches have a number of decisions to make within them. Without getting too far into the “how” of swordplay in this Part (already very long!), let’s take a brief look at the questions involved in choosing to grapple: where will you grip the opponent? Where will you move as you close to grapple, and how will you orient your body to theirs? In what directions will you apply force as you grapple? What is your goal: to disarm, to break a limb, to buffet the enemy with fists and elbows, to throw them or trip them? As with all hand-to-hand martial arts, it takes time and practice to understand the theory behind these choices, more to develop the skill to implement them, and even more to be capable of making and implementing split-second decisions about these techniques in the heat of combat. Add a blade, which is dangerous to both you and your opponent, and it becomes clear, I think, that a blade is more difficult to use than a club (though many of the same techniques can be employed, really).

The idea that a character will pick up a sword and suddenly be effective with it (at least against a capable opponent) is dubious at best. Keep this in mind when structuring narrative.

What is a Rapier and How is it Different?
As one of the easiest examples of how widely swords can differ in their morphology and function, let’s look at the rapier versus other types of sword.

As an introductory note, it must be stated that research about the rapier is somewhat difficult, as the usage of language in historical sources does not make the strict categorical distinction between the rapier and other types of swords that modern scholars and WMA enthusiasts tend to make. This is partially a result of the fact that the rapier evolved over a fairly long period, with a number of very different designs and approaches during that period.

As the fighting manuals consider them, rapiers are swords (very) heavily focused on the thrust over the cut (though some treatises do make use of cutting techniques). Modern scholars debate whether those swords called “rapiers” that are alluded to as also cutting should truly be referred to as rapiers (under modern categorization) or should be placed in the same category as “cut-and-thrust” swords or in the more ambiguous category of “sword-rapiers.”

The rapier developed starting in the early 16th century and continued to see significant use into the 17th, when it began to be supplanted by the smallsword (a lighter, shorter variant, essentially).

Generally, a rapier has several distinguishing features. First, it is a one-handed sword. Second, it has a thinner blade than other sword types, and that blade is often more rigid, strengthening the thrust while sacrificing some of the flex that is useful for “winding and binding” with the blade (see the next Part). Third, rapier blades tend to be quite long, and they grew longer as the design developed. Fourth, rapiers have increasingly complex hilts (over the course of their development), starting with simple rings built into the crossguard so that the index finger may be wrapped over the crossguard (next to the sword’s ricasso). This allows greater control over the thrust, while again sacrificing some authority in cutting. Ricasso rings and complex hilts were not used only for rapiers, however; they also appear on the “cut-and-thrust” swords (as modern scholars call them), which have wider, often acutely pointed blades that favor the thrust but still allow for strong cutting. This style of grip is still emulated in certain grips for modern fencing epees.

The most “extreme” rapier designs had hexagonal or octagonal blade cross-sections, almost like a piece of sharpened rebar (albeit much better balanced). These weapons were clearly designed only to thrust; their cross-sections did not allow for holding an edge.

While a “standard” rapier design is difficult or impossible to pin down, the weapon’s function is not. As a lighter weapon (compared to other swords), the rapier was easier and more comfortable to carry (provided that the length was not absurd). The use of the thrust allows the fighter to maintain greater distance from the opponent, as well as obviating the need to draw the weapon away from the opponent to prepare a swing. The downside of this is that resorting only to the thrust makes it very difficult to hold multiple attackers at bay at once (already a very difficult thing). But the lack of a need to swing proved especially useful in the often-cramped streets and alleys of Renaissance cities, where there may not have been room to swing a cutting sword at all.

Despite being a thrusting weapon, the rapier does not appear to have been effective against, or intended to be used against, an opponent in armor. Against an unarmored opponent, however, the weapon is truly deadly–in one of the aforementioned test-cuttings I attended, I witnessed a (quality) replica rapier, lightly tossed underhand, sink to the hilt in a slab of deer meat. As we’ll see later on, the reputation of the weapon in its own time (at least in England) was that it was especially deadly compared to other weapons.

Combine the effectiveness of the weapon in urban settings and the convenience of carrying it with its lack of effectiveness in group combat (bear in mind that in the press of battle you may not have room to pull back a weapon for a thrust and, in a strange inversion of the alley, a cutting weapon may prove more useful), and you have a weapon very well suited to daily self-defense and to the duel, but not to military purposes.

In the next Part, we’ll take a look at how swords are actually used. After that, we’ll look at medieval/Renaissance armor and some common misconceptions held by roleplaying games and some fantasy writers. I’ll conclude with a sort of bibliography, including books for further reading and even some roleplaying games that really get swordplay “right.”

What Writers (and Roleplayers) Need to Know about Swordplay, Part I: Introduction and Sources

I’m far from the first person to pick up this subject, but I continue to hear so many mistakes made about medieval and Renaissance weapons, armor and combat that I feel that the subject merits continued treatment.

To begin, let me set out my bona fides: My undergraduate degree is a B.A. in History, focusing on Medieval and Renaissance History; my senior thesis was written on Henry VIII’s use of Arthurian Legend as propaganda and included research at the British Library and National Archives. My master’s degree is in English, focusing again on medieval and Renaissance literature; my master’s thesis was about the use of particular weapons and fighting styles in Shakespeare’s works, 16th century English fighting manuals and adventure pamphlets as a method of establishing identity, particularly national identity.

I was a sport fencer in high school (which hardly counts for anything in this field, unfortunately, except in the rudiments of all hand-to-hand combat: distance, timing and footwork) and spent my college years as a member (and later study-group leader) of the Association for Renaissance Martial Arts (ARMA), having broken a rib and an eardrum in separate sparring engagements during that time (no, I did not get stabbed in the ear). Though I did not continue my affiliation with them after I left my original study group, I have continued to practice and develop my skills as a swordsman, off and on. I have experience sparring at full contact with padded weapons carefully designed to emulate the weight and balance of actual swords, with wooden “wasters” and with blunted steel. I’ve studied and worked through Ringeck, Talhoffer, Fiore dei Liberi, Silver, Swetnam and others, including some sword-and-buckler work in the I.33 and some rapier work with Agrippa, Di Grassi and Saviolo. My primary experience (in practice) is in the two-handed longsword, single-handed swords of the “cut-and-thrust” variety (mostly with dagger or a free secondary hand), the rapier (again with dagger or free hand), and grappling/dagger work. I have used a shield sometimes, mostly a buckler, and all of my experience is in the realm of “blossfechten,” that is, fighting without (plate) armor. I’ve seen a number of demonstrations of fighting in plate (harnessfechten) and understand the theory, but have no direct experience there.

The subject of “medieval and Renaissance martial arts,” “historical European martial arts” (HEMA) or “Western martial arts” (WMA) has become an area of increasing interest in Western culture over the past thirty years, though I’d still venture to say that the subject, as both research field and martial art form, remains in the early stages of reconstruction.

If you want to see what swordplay looks like, I recommend you go to YouTube and look for clips under the search terms in the preceding paragraph, with particular attention to some of the European competition clips. Each year, it seems, there are more competitions, the “sport” becomes more like other sports (with organizations, sponsors, etc.), and the competitors seem to have greater skill. This will give you some perspective on the topics we’ll cover in this series.

How did we get to a point where, after such a far remove from the times when these weapons were actually employed, we can begin to understand and reconstruct their usage?

There are museum pieces and archeological records of course, which give us much of our information on actual weights and designs of swords, armor and other weapons. The Royal Armouries in Leeds, England has, as far as I’m aware, one of the very best collections of early modern armaments anywhere in the world. Bear in mind, though, that we find far fewer pristine examples than we do pitted and degraded ones, and this is more the case the farther back in time we look.

For the technical aspects of medieval arms and armor, I would refer the student to begin with the works of Ewart Oakeshott, with the caveat that his interpretations are not the final say in the matter, that even he was unsure about some of his classifications, and that there has been much debate and revision of his ideas since.

We also have art and literature but, as we’ll see, the interpretation of these alone can sometimes lead to misunderstandings (that continue to defy correction)!

For the actual reconstruction of the martial “arts” of the medieval and early modern periods, we have written instructions, generally referred to as “fechtbuchs” (fight books). The earliest of these of which I’m aware is the Royal Armouries’ I.33 sword-and-buckler manuscript, probably written sometime around the turn of the 13th century to the 14th.

We continue to see handcrafted manuals, but the invention of Gutenberg’s printing press in 1439 led to the “mass” production of printed manuals as well. These texts were put down by masters of arms, those teachers of the craft who had earned sufficient fame and demonstrated sufficient skill, and they probably served both as instruction manuals and as advertisements. A list of Western martial arts manuals may be found (on Wikipedia) here.

In the modern age, groups like ARMA and many others–local schools like any Eastern martial arts dojo are popping up in U.S. cities all the time–take these manuals, translate them into English (or, more likely, purchase translated copies), and then work through the examples and instructions within to figure out what is intended and what actually works. Many of the fechtbuchs are illustrated, though there’s often some debate over whether the pictures accurately reflect what is referenced in the text. If modern artists’ renderings of concept firearms are any indication, some artists understood what they were depicting well enough to be accurate, but many, perhaps most, did not.

The amount of scholarly attention to this field grows a little year by year, but it is still (as far as I’m aware) the focus of only a small handful of professors and professional scholars. Sydney Anglo’s The Martial Arts of Renaissance Europe remains one of the best scholarly surveys of historical fighting manuals available; it is approaching its twentieth anniversary.

If you get on Amazon, you can now find instructional manuals like those for any martial art, the authors having purportedly gone through earlier texts to recreate the skills and portray them in modern language and pictures.

As a person writing or roleplaying in a setting with swords and armor, should you feel compelled to join some reenactment group or martial arts collective participating in the HEMA/WMA world? It certainly wouldn’t hurt to have the experience. In this series, though, I’m going to try to cover the essentials of what you need to know so that you’re at least not making any glaring errors.

To continue to the next Part in this Series, click here.

Some Thoughts on Writing: Plotting and Pantsing

There is no One True Way to write; anyone who tells you there is is a fool and/or trying to sell you something.

That said, I’d like to share some of my thoughts on writing fiction, particularly on plotting and “pantsing,” in this post. Maybe it will be useful for someone. Maybe someone will disagree with me in the comments and that will be useful for someone.

To me, writing fiction is really two things inextricably bound together. The first is storytelling, by which I mean the structural aspects of the craft: story structure and flow (and therefore “plot”), pacing, character creation and motivation and all of the building blocks of a story. The second is style, the actual selection of words, grammar, syntax, etc. The storytelling never becomes a story without the application of words; but the words themselves never amount to anything without a structure and purpose to them.

Both of these things are always difficult and often frustrating. I’ll liken this to exams in law school: those people who weren’t nervous about the test didn’t understand how much there was to know in that field and just how complex its operations were. You could only be blasé about it through ignorance. Perhaps there is someone in this wide world who is naturally, innately and intuitively creative and powerfully-minded enough that this doesn’t apply to them, but I doubt it.

In my writing of late, both those projects that have ultimately “failed,” or at least not come to immediate fruition, and those that I have “finished” in a relatively complete form or on which I continue working, I’ve noticed that most of my instances of “writer’s block” occur because of the intersection of the two elements of writing fiction. I get hung up when I’m in the middle of a sentence and the plot has brought me to the necessity of naming a new character. I struggle when I rewrite the same damn sentence over and over because I’m trying to make it sound right and figure out where the story is going next. In short, writer’s block ambushes me when I’m pantsing, trying to split attention between both necessary aspects of the craft.

So, I have moved to a more “structured” approach to writing, one that finds good analogy perhaps in other, more tangible art forms. In painting, you don’t put paint to canvas until you’ve done sketches to create the basics of the image or prepared the canvas. Working on story structure is like that, with the actual writing being the painting, of course.

My approach has thus become one of meticulous plotting before beginning the actual writing. I start with the broad strokes, major plot points and characters, then parse the story out into scenes, then plot out each scene. While sometimes tedious and certainly time-consuming, this allows me to make my adjustments to structure and plot lines, to make sure callbacks, foreshadowing, etc. are all properly placed and linked, and to develop or edit side plots before doing so would cause me to do a lot of re-writing. Once all of this is done, the focus can shift almost entirely to the craft and style of the writing, since everything else will already be signposted.

This is not to say that pantsing has no value, either in general or to me. Pantsing can be a great way to get the creative juices going, and it’s how I ended up with the basis of the novel I’m currently working on–a pantsed short story grew into 20,000 words that now provide the core plot of a larger, more complex story.

But much of the skill of storytelling lies in making sure that there is a place for everything, everything is in its place, and all the pieces fit together seamlessly. Story structure and plotting are the writer’s carpentry; it makes sense here to measure twice and cut once just as it does with lumber–so far as that analogy can be pushed, at least.

I’ve been thinking about this quite a bit both while working on my in-progress novel and while reading Joe Abercrombie’s stuff (I’ve just started Best Served Cold right after concluding the First Law trilogy). It’s been apparent to me that, while much of the punch comes from Abercrombie’s style of writing, the combination of that with masterful structure and plotting is what makes his novels so enjoyable.

An admission: I’m writing this in procrastination of working on the novel itself. It’s perhaps best I turn to that now.

Update 6-18-19

The blog’s been quiet for a little over a week and, as usual, I like to explain myself a little bit when that happens.

The Writing
I had planned to wait until NaNoWriMo to start working on finishing a novel again, but those plans have been happily dashed. What started as a short-story ended up as a twenty-thousand-plus-word text, one that needed a lot of work on pacing and a lot of filling in of details. So it just made sense to turn it into a novel. That’s been the bulk of my writing time in the past few weeks, both in the original story and now in plotting out the novelization.

Plotting is almost complete and I’ll begin the writing (and re-writing) proper shortly. If posts to the blog are sporadic over the next short while, that’s what’s going on.

This novel does not yet have a name, but I’ve also already got broad-stroke plans for three sequels; two of which will likely be part of an initial trilogy and the last of which (which also started as a short-story that expanded out of hand) will likely be the start to a second trilogy. The story is set in Avar Narn (of course) and is something of a noir story. Saying it feels in some ways like a bastardized mix of noir fairy-tale and dark fantasy is pretty close to the mark, I think.

If you’ve followed the blog for a while, you may remember that, in 2017, I began work on a different Avar Narn novel, tentatively called Wilderlands. I fully intend to finish that novel, potentially as part of a trilogy between the two trilogies discussed above (the characters in the current novel and Wilderlands are entirely different), but it’s on the proverbial back burner for now.

Kiddos
Any day now, K and I could get the call that brings kids into our lives again. We’re going a little crazy with the wait, to be honest. But, when it happens, you’ll see the Fatherhood section of the blog come to life again.

Shadowrun
If you’ve been keeping a weather eye out for RPG news, you’ll know that a Sixth Edition of Shadowrun has been announced for release this summer, boasting a “streamlined” ruleset.

Shadowrun: Anarchy disappointed me greatly in its failure to translate some of the most fun things about the Shadowrun universe into a more narrative-focused design. As such, while I’m excited to see a new edition, I’m not sure that it’s going to provide what I think the rules need to really present a modernized and excellent take on the game’s design. So, look out for two things: (1) continued posts for my Cortex Plus/Prime hack of Shadowrun (which may end up remaining the sweet spot for me for playing games in the Shadowrun universe), and (2) a thorough review of the Shadowrun Sixth Edition Rulebook when I get my grubby hands on it.

Theology
More to come and soon.

Learning from Game of Thrones

[SPOILER ALERT: This post presupposes familiarity with the sweep of the Game of Thrones TV series, with a focus on the final season. If you’re sensitive to having narrative spoiled for you and haven’t watched everything yet, don’t read.]

It would be hardly original of me to spend a post simply lamenting this last season of Game of Thrones, despite my desire to do so. Instead, I’m going to spend some time pointing out what I think are some lessons to be learnt by aspiring writers (in any medium) from the recent failures of the show.

To preface that though, I need to exhibit some due humility. The greatest lesson to be taken from the recent episodes is that good writing is difficult, no matter who you are. It is, as with so many things, far easier to criticize than to create. D.B. Weiss and David Benioff, and the other writers who contributed throughout the show’s run, have managed to create something for widespread public consumption. At this point, I have not. I feel it’s only appropriate to bear that in mind and take what I have to say with a pinch of salt as we continue (though ultimately, I hope that the weight of my arguments, rather than the status of the people involved, carries the day in this discussion).

Show Don’t Tell

It’s one of the commonly-touted pieces of advice given to writers. Don’t use boring exposition when you can just as easily let the audience get the necessary information from context or from being immersed in the setting and story. Don’t explain the inner thoughts of the characters when we can understand them just as well by how the characters act and speak.

This is especially true of visual media–which is why Industrial Light & Magic and Weta Workshop have been able to do such wonderful things for defining setting in films and TV, and why concept art is such an important aspect of designing for those media (and even for the written word)!

So, for me, Game of Thrones’ after-the-show talks with the showrunners pointed out a key problem. When you have to explain what you were trying to get at in an episode after the episode, you haven’t written the episode well enough to stand on its own. When you smugly assume that everyone got exactly what you’re talking about while watching, you’re adding insult to injury.

This is largely a result of rushing the storytelling. Without time enough to lay all of the necessary groundwork to explain events and occurrences within the show, you’re going to have to either let the audience create their own explanations or hand the explanations to them elsewhere. The lesson here: make sure you’re taking the right amount of time to show what you need to show so that you don’t have to tell later.

To be clear, this is a general rule, and general rules can always be broken in good writing–if done well and only when appropriate. It is possible to have key events happen “off stage” and describe them later or to play with the relation of key information in other ways, but these decisions must be made carefully and deliberately. My recommendation is to start with a “more is more” approach when writing and then employ a “less is more” approach when editing. It’s easier (I think) to lay it all out and refine by cutting out the dross than to realize your narrative isn’t complete and then struggle to fill in gaps–I’ve been there!

Here are some specific examples from Season 8 of this being an issue: the tactics employed at the Battle of Winterfell and Daenerys’ sudden change during the attack on King’s Landing. This lesson could just as easily be called “Timing is everything” or “Don’t Rush” (the latter of which is probably the cause of most of Season 8’s mistakes).

Reversals of Expectations: There’s a right way and a wrong way.

The showrunners made a great deal out of “defying audience expectations” in Season 8. Defying audience expectations is a key technique in good narrative, but there’s more nuance to it than that. The technique, properly employed, has two parts: (1) give the audience a twist that they don’t see coming AND (2) set up the narrative so that, in retrospect, that twist feels somehow inevitable.

This is not a game of “gotcha!” Good writers do not play with twists and surprises simply because it’s something to do. Good writers use twists to increase tension, to remind us that, like life itself, the unexpected (but often foreseeable) occurs in narrative, and to create drama.

A good surprise must satisfy multiple demands in addition to the two basics mentioned above. The twist must follow the internal consistency of the setting–it should defy expectations of plot, but not of the personality and character of the actors or the rules (spoken or unspoken) of the setting itself. It must have sufficient groundwork laid in the story; without this the “twist” feels random and unmoored from the themes and scope of the rest of the narrative.

In “gritty” fiction, there will be times when bad fortune or ill luck interjects itself into the story, times when both readers and characters are left wondering “is there a meaning to all of this, or is everything that happens just random?” But those types of events only work when explained by coincidence and happenstance–they must truly be strokes of bad luck. When we’re talking about the choices made by characters, there must be believable motivation and a way for the character to justify the action–even if we don’t agree with the logic or morality of that justification.

The example that undoubtedly comes to mind here, as above, is Daenerys’ sudden decision to kill everyone in King’s Landing. There is some building-up of her story arc in the early narrative (following Martin) that Dany might not be the great savior everyone hopes that she will be. She is a harsh mistress to the Masters of the cities of Slaver’s Bay, willing to commit atrocities in the name of “justice.” But this moral ambiguity (strongly based in the character of historical figures in similar situations) is not the same as the desire for justice slipping into a desire for power and control to implement that justice. That story arc certainly works (it is the rationale behind Morgoth and especially Sauron in Tolkien’s world), but we need a solid background for such a morally-repugnant act as mass murder of innocents. We are given the groundwork for her eventual “fall” into a person willing to use harsh means to achieve her idealistic ends, but not for her to do what she did. This lack of laying the proper foundation for her sudden change leaves it feeling like, as some commenters put it, “a betrayal of her character.” This leads us to the next point.

Internal Consistency versus Authorial Fiat

For me, the greatest issue I took with Season 8, the thing that left such a bad taste in my mouth, was my belief that the showrunners decided what would happen and then shoehorned in all of the details to get them to those decisions. Euron’s sudden (and nonsensical) appearance before an undefended Targaryen fleet and ability to quickly slay a dragon compared with his powerlessness before one remaining dragon at King’s Landing is only one exemplar here. Having Arya kill the Night King (which had been “decided early on”) is another. And just about all of Episode 6.

One of the great joys of writing (in my mind, though I hear this with some frequency from other writers) is when a story takes on a life of its own. What you thought would happen in your story gets suddenly left behind because of the momentum the story has accrued, the logic of the setting, the narrative and the characters within it. We find ourselves mid-sentence, suddenly inspired (in as true a sense as that word can be used) with the thought, “That’s not what happens, this character would do X instead! Which means Y needs to change!” All of a sudden, you’re going somewhere better than you were originally headed, somewhere truly rewarding to write and for your audience to read or see.

This is the result of a dialectic that forms between the moving parts of the story. The narrative, the dramatic tensiveness of the story, the themes and motifs, the characters involved and the conditions established by the setting; the gestalt of these elements becomes something that lives and breathes, something greater than the mere sum of its parts.

Pigeonholing the plot forces it to become stilted, forced and (worst of all) didactic. Dead and mechanical. This is, in part, the difficulty with story “formulae.” There are narrative structures that provide a general framework for certain types of genres or stories, but following the formula with nothing else results in something unsatisfactory.

Here, though, my suspicion is that the problem was more a matter of fan-service and a slavish devotion to defying expectations than rote adherence to fantasy-story formulae.

One of the things that made the Song of Ice and Fire books, and the Game of Thrones TV show, so popular, so gripping for the audience, was that they pulled more from medieval chronicle than fantasy yarn for their structure. The story is about the world and the group of characters as a whole in a way that is bigger than any of the constituent characters, that survives the misfortunate end of any one (or more) of them. This left no character safe and allowed for real surprises, surprises that contradicted expectations of narrative structure rather than expectations based on the internal logic of the harsh, unforgiving setting and culture(s) in which the story takes place. The internal logic, then, drives the defiance of expectations, rather than forced twists being inserted into the plot by the author’s whim.

In fantasy in particular, internal consistency is the golden rule. In settings where magic is real, where dragons may soar in the skies and burn down the enemies of a proud queen, we are required to suspend disbelief. Of course. But we can manage that suspension of disbelief only when there is a reward for doing so and the obstacles that might prevent us are removed from our path. Magic is a wonder to behold in the truest sense, but it fizzles and dies when it appears that the magic in a setting does not follow certain rules or structure (even if we don’t fully understand those rules or that structure). If the magic is simply a convenient plot device that conforms like water to whatever shape the author needs or desires, then it fails to carry wonder or drama. Drama constitutes the ultimate reward for the suspension of disbelief–allow yourself to play in a world with different rules from our own and the stories you find there will satisfy, amaze, entertain and tell us truths about our own world, even if it is very different. But without internal consistency, there can be little meaning. Without meaning, narrative is nonsense.

Season 8 lacked this internal consistency on many levels, from the small, like the much-discussed “teleportation” around Westeros, to the glaring, like battles being predetermined by plot rather than by the forces and characters that participated in them.

But the greatest issue I took with Season 8 in its (lack of) internal consistency was the ending. The sudden appearance of the nobility of Westeros to decide, “Yay! Constitutional monarchy from now on!” seemed far too after-school special to me. For a story where people’s personalities, desires and miredness in a culture of vengeance and violence long proved the driving factor, you need far more of an internal story arc for a sudden commitment to peaceful resolution of issues to be believable. They would have to reject their entire culture to do so, rather than rationalizing that the culture was correct all along (which is what much more frequently happens in real life). I can see such a decision for Tyrion and for Jon. But for Sansa and Arya, I do not. And why Yara Greyjoy and the new Prince of Dorne wouldn’t likewise declare independence, I cannot say.

In short, I just don’t think that the narrative satisfactorily supports the actions taken by the ad-hoc council of Westerosi nobles in the final episode.

When a Narrative Fails Your Narrative

Why did putting Bran on the throne fall flat in the final episode? Tyrion gave an impassioned speech about how stories are what bind people together and create meaning (something with which I wholeheartedly agree as aspiring fantasy author and aspiring existential Christian theologian) and then made an argument about the power of Brandon’s story.

Wait, what? You lost me there. What was the power of Brandon’s story? Yes, it started strong, and he did do some amazing things–crossing north of the Wall, becoming the Three-Eyed Raven (whatever the hell that means), surviving his long fall from the tower at Winterfell. But, given his role in Season 8, I’m not sure that any of that mattered. He played practically no role at the Battle of Winterfell (at least that we mortals could see), the narrative of his role as Three-Eyed Raven was left impotent and undeveloped at the end of the series, and of those with decision-making authority in Westeros, few had any direct experience with a Three-Eyed Raven, the White Walkers or the Battle of Winterfell. To them, the whole thing is just a made-up story by the North.

For narrative to be effective, we must be able to use it to find or create meaning. Bran's story is too jumbled a mess, without climax or denouement, for us to piece together much meaning from it. In fact, we're left wondering if it meant anything at all.

Since the idea to put him on the throne relies on the meaning of his story, the act of crowning him itself becomes meaningless; we can find no internally consistent basis for making him king (other than that he can't father children) and no meta-narrative logic for the event, either. This is exacerbated by the fact that Bran earlier tells us that he doesn't consider himself to be Bran anymore. Without continuity of character, narrative loses meaning.

Thus, the finale fails because it relies on a sub-narrative that has failed. It is a common trope for fantasy fiction to use other stories (often legends) from the setting's past to convey meanings and themes for the main narrative (Tolkien does this, Martin himself does, Rothfuss does as a major plot device in The Kingkiller Chronicle); writers looking to follow suit need to make sure that any "story-within-a-story" they use itself satisfies the necessities of good storytelling, or they are only heaping narrative failure upon narrative failure. The effect, I think, is exponential, not linear.

What the Audience Wants and What the Audience Needs

Several of my friends who are avid fans of the show and the books, before the final episode, expressed their feelings about the uncertain ending in terms of “what they could live with.” This was often contrasted with both their hopes for what would happen and their expectations of what would happen.

There’s been much talk (even by myself) about the showrunners performing “fan-service” in this season, whether through the “plot armor” of certain characters or the tidy wrapping up of certain narratives.

The claim that the showrunners made plot choices in order to please the audience has set me thinking about these types of choices on several fronts. On the one hand, GoT rose to prominence in part precisely because of G.R.R. Martin's seeming refusal to do any "fan-service." That communicates to me that there is a gulf between what readers want from a story and what they need to feel satisfied by the story.

We can all recognize that there are stories that don’t end happily, either in general or for our most-beloved characters, that nevertheless remain truly satisfying and meaningful narratives for us, ones that we return to time and again.

So, should giving the audience what they want (or, to be more accurate, what we think they want) be a consideration for the writer? There is no simple answer to this question. The idealistic writer (like myself, I suppose) might argue that crafting a good story–which is not the same as a story that gives the audience exactly what it wants–is more important than satisfying tastes. On the other hand, the publishing industry has much to say about finding the right “market” for a book, and knowing what kind of stories will or won’t sell. For the person who needs or wants to make a living as an author, playing to those needs may be a necessity. Even if income isn’t a concern, there’s still something to be said for what the audience organically finds meaningful as opposed to what the author seeks to impose as the meaning and value of the story.

I just want to point out this tension as something that the final season of Game of Thrones might help us think about, not something for which I have any answers, easy or otherwise. When the final books in the series are released (if that ever happens), maybe there will be some fertile ground for exploration of these ideas. Of course, the intent of the various creative minds on all sides of this collection of narratives may remain forever too opaque for us to glean any true understanding of the delicate relationship between author, craft and audience.

Conclusion

Like many of you, I suspect, I was left profoundly unsatisfied by the final season of Game of Thrones as the ending to a story I've spent years being attached to, and my frustration is further stoked by the knowledge that the showrunners could have had more episodes to finish things the right way instead of rushing to a capricious and arbitrary ending.

That said, the failures of the season (not to mention the great successes of previous seasons) provide many lessons for us would-be authors.

What do you think?

 

Nootropics for Writers

Disclaimer: I am not a doctor; this post is not intended to be medical or nutritional advice. It is only a description of some of my own experiences. “Dietary supplements” like the ones discussed herein are insufficiently regulated by the Food and Drug Administration or other agencies and there are no serious standards for the protection of consumers or for claims made by manufacturers. I highly recommend that you consult with medical professionals before making a decision to use any supplement, chemical or “herbal treatment.” Proceed at your own risk.

I don’t, as a rule, take drugs that are not prescribed for me or available over-the-counter for the short-term remedy of mild conditions. As I’ve expressed elsewhere on this blog, I suffer from clinical depression due to a chemical imbalance in my brain. It is well controlled under my current pharmaceutical regimen, and I have no desire to threaten that careful balance. I have never used an illegal substance and have no desire to start. I don’t smoke.

That said, if you tell me there’s a way to make myself a more productive writer, you can bet I’m going to investigate. While I’m passionate about writing, my brain tends to work in short bursts rather than long slogs and I personally find that much-vaunted “flow state” elusive more often than not.

As a writer of speculative fiction (mostly fantasy but with an interest in science fiction as well), I happened to come across the idea of “nootropics” while doing research into ideas and theories about human enhancement.

As best I can tell, there is a subculture devoted to the use of nootropics evolving, one that partially overlaps with the more general "maker" and "biohacker" subcultures. You will find myriad sites and forums where advocates compare their personal "stacks."

It starts with something we are all experientially aware of: some substances seem to have positive cognitive effects when administered in proper (and safe) doses. Caffeine is the most common and widely used of these substances, it seems, and it is in fact a part of most nootropic “stacks.”

The list of nootropics is relatively long, ranging from things like ginkgo biloba to hardcore prescription-strength drugs like modafinil, used by some militaries as an alternative to amphetamines. Some of the substances touted for nootropic qualities act individually, while others supposedly provide greater effects when combined with other nootropics.

Most of those experimenting with nootropics (and I think it’s still safe to say that all nootropic usage is experimental at present given the lack of strong scientific support for usage or for most of the substances put forward) develop a “stack.” The “stack” consists of a collection of nootropics to be taken together, in hopes of maximizing effect.

For those who would rather not compile information and develop a stack for themselves, there are several commercially-available stacks such as Qualia.

Because I am not recommending that anyone use these substances, I’m not going to detail the particular ones I used to develop a stack for myself to see if there was anything to this whole idea.

But I will report my experiences. Over a handful of trials of the same stack (spread out over time–none of the substances in my stack, with the possible exception of caffeine, is supposed to be addictive, but I'm trying to err on the side of caution), I have experienced greater focus and even what I'd call a "flow state."

I would describe the most immediate effect I experienced as increased focus combined with maintained situational awareness. This is an odd sensation, but not unpleasant. While writing after using nootropics, I did experience increased word counts (as a measure of productivity) and longer periods over which I could sit and focus on writing, which is different from my typical experiences. So, yes, I did experience what I would call noticeable improvement in cognitive function, particularly for the purpose of writing productivity.

HOWEVER, I have a number of reservations as well. First, I cannot be sure that what I experienced is anything more than a placebo effect. My evaluation is, after all, entirely subjective. Further, I cannot be sure that nootropics were the direct cause of the increased productivity–I've been simultaneously and very consciously working on developing my writing focus and discipline. In an age of continuous partial attention, I suspect that my difficulties writing for long periods or focusing for extended writing sessions are a matter of bad habits rather than chemical brain states. Along with this, I have to question whether there are better, non-chemical means of achieving the same effect. Is it possible that meditation, mindfulness exercises, actual exercise and other practices could be used to do the same thing? I don't know for sure, but I suspect that the answer is "yes." After all, the brain is a highly sophisticated organ, one we do not understand nearly as well as we'd like to think in this information age. I think it's probable, likely even, that there are ways of tapping into the body's and brain's natural ability to increase focus and creativity that do not rely upon the introduction of foreign substances.

So, if the question raised by this post is "do nootropics help writers write?" then the most definitive answer I can give is "maybe." While I did experience some gains in productivity (going from writing about 1,500 words in a sitting to writing 2,500), I can't be sure of causation in any logical or scientific way. And I can report that there are times when I naturally match the productivity I experienced without the need for a collection of horsepill-sized supplements. Further, there is no good information on the long-term effects of nootropics, and that alone should be concerning.

Given my lack of medical background, I’m not qualified to make a recommendation about the use of nootropics. Even though I personally experienced perceived cognitive enhancement through their use, I’d highly recommend that other strategies–development of habits, regular exercise, a routine where writing occurs at the time you naturally feel most creative and focused, careful curation of the writing space to be inspiring and free from distraction, etc.–be implemented before even considering nootropics as an aid to the writer’s craft.

When it comes down to it, we all want something for nothing. We writers all want some panacea, some magic trick that makes us brilliant authors without having to face difficulties. Combine that with the myth of the suffering artist, that we must either be crazy or despairing to be creative, and it's easy to see why nootropics might be an enticing idea for the aspiring writer.

But in the struggle with the craft, the wrestling with turning images, thoughts, ideas and emotions into words of power on a page, therein lies the true magic. For that, there are no shortcuts, no miracle drugs, no ways but the hard way. And at the end of the day, isn't that one of the reasons we feel so driven to do it?

Temptation

This post is something of a confession; prepare yourself. It’s nothing so tantalizing as a comment about the temptation of drugs or sex; it’s about another insidious temptation with which society often plies us. Lately, I’m feeling its pull more strongly, it seems.

That temptation is the one of comparison. You know the one. It’s the one that gnaws at your soul a little, whispers doubts in the back of your mind, every time you open up a social media platform. You see people living their “best lives” and–even though you consciously know that 99% of what you see posted is manufactured and exaggerated, conveniently glossing over those problems, dilemmas, failures and weaknesses that everyone has and no one really wants to share–you still wonder, “Am I not doing as well as everyone else?” “Am I just not as good?”

I’m no exception, and lately I’m thinking about this much more than I’d like to. Part of it is a function of age: I’m thirty-five, fast closing in on thirty-six. But I can’t really lay the blame on that, because it’s just another measure I’m using for comparison.

I, like many people from upper-middle-class suburban backgrounds, was raised on a steady regimen of the importance of achievement. Explicitly or not, I was taught to weigh value based on achievements reached, things accomplished. To add to that, I fell into the belief (though I can’t, admittedly, say that anyone drilled it into me) that real achievers achieve things early and often.

This was an easy thing to satisfy when I was younger and in school. I maintained consistently high grades, took all of the advanced placement classes available to me and entered my first semester of college with forty-seven hours of credit already under my belt. I spent the next decade or so earning degrees, tangible (kind-of) certifications of achievement.

Now I’m much farther removed from academia, and I’ve become much more responsible for intrinsically maintaining my sense of self-worth.

And therein lies the battle. I have very consciously chosen certain ideals and values to live by, ideals and values inspired by my faith and my idealism, ideals and values about which I am convicted and passionate.

Sometimes, those values are counter-cultural. A significant part of my personality is the value I place on my independence. Combined with my moral compass, that has very much influenced my career path as a lawyer. Those choices are not without consequences. One of my wisest friends once said, "you're only as free as you're willing to accept the consequences of your actions." Living up to that statement is truly living without fear, and it's something that has resonated with me ever since I first heard it.

So–most of the time–I'm perfectly content with the career choices I've made. I work in a small firm with two partners who are like family, and I have great independence in how I do my work and for whom I work. This has given me a lifestyle balance that truly fits with who I am, and I often tell people that I wouldn't be happy lawyering if I were working for someone else.

But it also means that there are consequences. Balancing my broader life goals against my career and placing my moral values first when working mean that I sometimes turn down work that might be lucrative or that I perform my work in ways that place income as a secondary concern. I don’t take on new clients when I don’t believe that I can achieve anything for them; I don’t bill my clients for every little thing; and I don’t charge the exorbitant fees I sometimes see other attorneys charging.

I feel those choices every time I look at my bank account. Don’t get me wrong, I make a decent living and my practice grows with each passing year–it turns out that being honest and capable actually is a good business model! I’m happy to accept the consequences of those choices; I’ve found in the past few years that I need far fewer material things to be happy than I thought I did, and I have mostly disdain for the pursuit of wealth, power and status.

Imagine my surprise, then, when I was scrolling through Facebook over the weekend and happened across a post by a couple I went to law school with and felt a pang of jealousy. Here’s the strangest part: my jealousy was about the background of the picture, about their kitchen. I’ll be very excited to see K’s reaction when she reads this, because she knows me well and knows how little stock I typically put in the size and fanciness of a person’s home.

Of course, my feelings weren't really about the kitchen. They were the result of doubting my own adequacy in light of the financial success this couple presumably enjoys. These feelings were really about asking myself whether I'm good enough according to a standard I don't believe in and actually reject!

I don’t want a house like theirs. I don’t want the type of life consequences that are attached to such a choice (which is not intended to be a judgment of their choices, simply a statement that that is not the path for me). But it doesn’t matter who you are, that temptation will reach its ugly tendrils into each of us at some point, if not regularly.

When it comes down to it, though, career achievement is the place where the temptation of comparison to others is easiest for me to bear. I'm very proud of how I conduct my business and uphold my values, and of the fact that I try to practice the Christian ideals I so often discuss on this site. Again, that's not intended to be a judgment on others, just a matter of trying to keep my own hypocrisy to a minimum.

The two other temptations I frequently feel to compare with others hit closer to home. The first of these is about parenthood; the second: my writing.

Those of you who have followed this blog for some time, or perused it in depth, or who know me personally, know that K and I plan to foster to adopt, and that we’re again waiting for a placement of kids. That’s difficult enough as it is, but we’re quickly approaching a time where it seems that we’re the only ones without children. One of my partners at the law firm has two; the other is expecting his first this Fall. My (younger) sister is pregnant with her first (and I am very happy about this and excited for her!) and I’ve got several siblings and cousins–many of whom are younger than me–who already have children as well.

I know better than to think of having children as a matter of achievement, really I do. But the fact that I have to write that here is revelatory in and of itself, is it not? And I know that K and I are not the only ones to deal with such comparisons with others–not by a long shot.

For me, my writing is where this temptation cuts deepest. If I can discern any sort of divine calling for myself, it lies in writing fiction and theology. If there is a personal pursuit about which I am truly passionate, it is writing. If there is a single most-powerful, non-divine source of my sense of self-worth, it is in my writing.

I’ll make a true confession by way of example, so get ready for some vulnerability on my part: This past weekend Rachel Held Evans died. She was an outspoken writer for progressive Christian values and, even in her short life, accomplished much in service of Christian faith and demonstrating to the unchurched (and perhaps millennials in particular) a Christianity that rejects fundamentalism, embraces the Gospel truth of love and reminds us that Christ calls us to pursue an agenda of social justice that does not rely on identity politics, a rejection of immigrants, or fear. (Here is one article with some information if you’re not familiar with her).

To my shame, I have to admit that, in addition to the sincere sorrow I feel at her passing, I was awash in a sense of unreasonable jealousy. She was only a little older than me and already had five published books! Obviously, my feelings of inadequacy have nothing to do with her; they’re really about me questioning myself, worrying that maybe I just don’t have what it takes.

I told myself that I'd get my first major work published before I turned 40. As that time slips ever closer, I find myself often looking up other authors' ages when they were first published. I can say that I understand that their lives aren't mine, nor should they be. I can write that I know that the value of a writing originates in the writing itself, not in how old the author was at the time of creation.

And that knowledge, I think, is where the truth will out. Particularly in my theology, I talk about the importance and beauty of ambiguity. I also admit the difficulty we naturally have with the ambiguous. And let this post be evidence that I don’t stand above that difficulty; I’m not free from that struggle.

There are no easy ways to judge the value of a writing, whether fiction or non-fiction. Style is so highly varied and individual, the myriad ways in which a story might be told so dependent upon the consciousness in control of the tale, that there can be no single measuring stick. And yet, we humans like to have some certainty, even if that certainty is artificial and illusory. For me, age at the time of first publication offers a tangible standard of measure, a seemingly meaningful comparison (though I know in my heart it is not).

Again, the craziest part about falling into self-doubt by making such comparisons is that I intellectually do not value them! In my fiction, I follow after Joss Whedon: "I'd rather make a show that 100 people need to see than one 1,000 people want to see." At this point in my writing, I'm not sure that I can do either yet, but the point is that I'm more interested in deep connections with a smaller group of people than in appealing broadly in a commercially viable way. The same goes for my theology–I'd rather write something that resonates deeply and inspires just a few people to legitimate faith, that gives even a single person permission to practice Christianity in a way that isn't "one-size-fits-all," than establish some great presence in the history of theology.

As I've mentioned on this blog before, I'm not even sure that I'm interested in traditional publication avenues right now. I'd love to be able to make a living writing, to devote all of my time to it, but not at the cost of having to cater to publishers or to what will be successful on the current literary market. My self-comparisons with published authors, though, make me wonder whether all of this idealism is simply cover for a fear of failing. "Know thyself," the oracle says. "I'm trying!" I complain in response.

Ultimately, the temptation to compare ourselves comes from a positive place–we want to be meaningful, to be creators of meaning and to live lives where others can easily recognize meaning. That is a natural and divine thing. It’s where we let society tell us that meaning must look a certain way that we go wrong, where we try to make someone else’s meaning our own that we lose ourselves. Perhaps that is what Jesus means when he warns us about the temptation of the world, what Paul is alluding to when he warns us not to be “conformed to this world.”

What I do know is that I am passionate about writing, and in particular I'm passionate about writing speculative fiction and easily-accessible theology. I'm working on the discipline to match that passion, and with every passing day I'm probably coming to understand the art and craft of writing just a little bit better–not that anyone ever truly masters it. Those things need no comparisons to be true, to be inspiring, to be fulfilling. So why look beyond them? As with so many things, it's easier to know what to do than to actually do it.

How do you cope with such temptations? Having read the blogs of some of my dear readers, I know that there is insight out there, meaningful stories to share. If you’ve got one, comment, or post a link to a post on your blog, or send me a message!

Post Script: Having talked about my struggles with writing, maybe it would be useful to give a short update on where that writing stands:
(1) Children of God: This is the tentative title of my first theological book. I've had about 75% of a first draft finished for several years now, but it needs a rewrite from the beginning and I need to set aside the time to do that.
(2) Wilderlands: This is the first Avar Narn novel I've seriously set to work on. The first draft is about 40%-50% complete. I'm starting to feel an itch to return to the story; I'm not sure whether I'll do that soon or wait until NaNoWriMo this year (which is how it started). It needs to be finished, and the portion already written needs some significant rewrites.
(3) Unnamed Story of Indeterminate Length: This is an almost-noir-style story set in Avar Narn and what I've been working on most recently. I had envisioned it as a short story, but it's already swelled to 16,000 words and I'm not finished. I'll be sending it to some volunteers for review and advice on whether it should be left as a novella, cut down significantly, or expanded into a novel. I've got several other "short stories" in mind with the same major character, so this could end up being a novella set, a collection of short stories, or a novel series. I've also got an unfinished novella-length story with the same character that I may return to while this one is under review. If you'd like to be a reader, send me a message.
(4) Other Avar Narn Short Stories: I’ve got several other short story ideas I’m toying around with, but I’m trying not to add too many other projects before I make substantial progress on the above.
(5) Avar Narn RPG: I have a list of games to spend some time with and potentially steal from for the rules here, but I’m mostly waiting to get some more fiction written to elaborate the setting before continuing seriously here. I’m occasionally working on additional worldbuilding and text that could fit in an RPG manual.
(6) The Blog: Of course, more blog posts to come.

 

Introduction to Dark Inheritance (A Warhammer 40k Wrath & Glory Campaign)

(This is the 4th of seventeen posts remaining in my 200 for 200 goal. If you enjoy what I do on this blog, please share and get your friends to follow!)

I have obliquely referenced that I am working on a large-scale campaign for the new Warhammer 40k Roleplaying Game, Wrath & Glory, that I have titled Dark Inheritance. The depth and breadth of this campaign have made it the focus of my writing time lately and, while it’s still far from finished, I’m ready to share at least a summary of the campaign (safe for both GMs and players) with you. Here it is:

Campaign Summary

"The year is 12.M42. In the time since the Great Rift, the Rogue Trader captain Eckhardt Gerard Sigismund Immelshelder has operated his ship, the Righteous Obstinance, in a multitude of schemes to generate wealth and power. He is quite secretive, but often whispered about in gossip throughout the Gilead System. Rumors abound that he and his crew have been able to navigate the Warp despite the lack of the Astronomican's light, even successfully penetrating the Cicatrix Maledictum and returning safely. Of course, there is no proof of any of this.

What is known is that Immelshelder has developed significant interests, business and otherwise, throughout the Gilead system. To what end is again the subject of many whispers but little substance. He is the distant relative of a noble family on Gilead Prime and the last of his own family.

One of the players will play the eldest child of the noble family on Gilead Prime related to Immelshelder. The other players' characters will represent other members of the noble household, retainers, or allies and confidants of the aforementioned noble character. When the campaign begins, the characters are gathered celebrating a reunion–members of the Astra Militarum are home on leave, those friends who have ventured to other planets in the Gilead System have returned to visit Gilead Prime, and the noble household has gathered its closest allies and its honored retainers.

But this party is interrupted by the sudden appearance of Inquisitor Amarkine Dolorosa, who bears strange tidings. Immelshelder and his closest companions have been assassinated. As a friend of Immelshelder and a person of power and stature within the Gilead System, Dolorosa has taken it upon herself to settle the Rogue Trader's affairs. Therefore, she comes with both gifts and commands. Immelshelder's will grants the Righteous Obstinance, his Warrant of Trade, and all of his other assets to the eldest child of the noble family. This character had met Immelshelder a handful of times but did not know him well. Dolorosa promises she'll provide what assistance she can to see that the noble scion settles into the life of a Rogue Trader as easily as possible.

In confidence, she explains that she also expects the newly-minted Rogue Trader's help in finding and bringing Immelshelder's killers to justice. Even with allies like the other player characters, can the young noble survive being thrown into the shark pool of Gilead politics and the web of allies and enemies that led to Immelshelder's demise? If they survive, will they bring Immelshelder's killers to justice? How many 'favors' will Amarkine Dolorosa expect as fair exchange for her assistance?"

Additional Info for the Campaign

Dark Inheritance has been structured into three acts, with each act composed of numerous adventures playable in nearly any order (as the characters pursue various leads and clues toward the final revelations and conclusions of each Act and, ultimately, the campaign). At present, I anticipate that each act will require ten or more gaming sessions (of 2 to 3 hours each) to complete.

Also included are subplots that can play out over the course of all three Acts as the GM sees fit (and as makes sense given the actions of the characters in various places). It is my intention that the Campaign provide months, if not a year, of Wrath & Glory gaming.

Some Notes on Writing the Campaign (and Microsoft OneNote)

I'm using OneNote (for the first time) to write and organize the campaign. In the past, I've used Lone Wolf Development's Realm Works to organize campaign materials, but I'm finding OneNote to be more intuitive and much more efficient. Yes, Realm Works has additional features and functionality over OneNote specific to the needs of the RPG campaign-writer, but–in all honesty–I'm not going to spend the time to learn all of the details of that functionality. For me, OneNote's ability to let me focus on the writing, with just enough tools for organization and hypertextuality to order everything for maximum efficiency, provides exactly what I need.

I tend to write fiction with what I'm going to call the "accretion approach." What I mean by this is that I begin with the barest ideas for a story: Dark Inheritance started as a combination of a Rogue Trader-type game with an idea for using a Warhammer voidship to tell a haunted-house, sins-of-the-father type of story influenced by games like The Room series, the old Alone in the Dark games, Darkest Dungeon and numerous other tales (Lovecraft and the gothic horror of Clark Ashton Smith, among others) and films (The Skeleton Key comes to mind). From that basis, I begin to add more ideas and details–some that flow directly from the premise and others that at first seem discordant. After the basics of each new idea are added, I go through and modify other elements of the story (characters, plot devices and points, etc.) to account for the new material. Often, ripple effects from these changes beget the next set of ideas to be incorporated, until the basic story begins to take full narrative shape and the details come more and more into focus. OneNote has proved a godsend as a tool for this approach.

For some fiction writing (particularly the novel I'm working on), I very much like Literature & Latte's Scrivener program. In some ways, though, OneNote is a stripped-down version of this (without functionality such as auto-compiling scenes and chapters, etc.), and I wonder if, for me, a more minimalistic approach might actually be better.

For Dark Inheritance, OneNote allows you to export the “binder” as just that–a PDF of linked pages in a binder sort of format. Unless I find something more efficient than that, Dark Inheritance will eventually appear for the public’s use in such a format.

I am preparing to playtest the campaign with at least two different groups in the new year (as at least Act I becomes fully playable). If you'd like to help me with playtesting, please send me a message–I could certainly use the help and feedback!

 

200 for 200

WordPress tells me that, in the roughly two-and-a-half years since I started this blog, I’ve posted 182 posts (this will be 183). Considering my goal has been a minimum of one post a week (even though sometimes posts come in bursts following periods of silence rather than on a regular schedule), I’m pretty proud of that.

But I aspire to more, so I’m setting a goal for myself, one with which I very much need your help! Here it is: I want to have 200 followers through WordPress by the time I hit 200 posts. I currently have 137 WordPress followers, so that’s 63 new followers in the next 17 posts.

If you like what I do here and want to help me reach a wider audience (and perhaps be motivated to do even more), here’s what you can do: (1) invite your friends and followers to come take a look at the blog and follow if they like what they see; (2) repost your favorite posts from this blog on your blog; (3) “like” articles and posts that you, well, like; (4) comment on posts; (5) send me a message about what you like (or don’t) and what you’d like to see more of; (6) generally tell your friends.

Here's what you can expect to see in some of those next 17 posts: at least two new theology posts I'm working on, one of which is called "Is God's Will General or Specific?" and the other of which is titled "Jesus' Anti-Apocalyptic Message;" a review of the Wrath & Glory RPG; some preliminary notes on the Dark Inheritance 40K Campaign I'm currently writing; some more notes on the development of the Avar Narn RPG; at least one Avar Narn short story.

That certainly doesn’t cover 17 posts, so I’m free to take some suggestions or requests.

All it takes is clicking a few buttons to help me reach more people; please take a little time to spread the word!

Types of Evil (or at Least Antagonists)

This post could just about as easily be a theological one, but since I’ve come to these ideas in working on Avar Narn, I figured they’re better suited to being addressed to the writers out there–anyone who wants to extrapolate into the realms of spirituality and morality is welcome of course.

As an opening, let me first say that it is difficult to write an "evil" character, whether major antagonist or supporting character. It's difficult because few things in the world are black and white, so a character that isn't nuanced in his/her morality isn't believable in stories that intend to maintain verisimilitude. In an obviously allegorical, mythological or moralistic tale, there's a lot more leeway for capital "E" evil characters. But that has its own bag of tropes and expectations that I'm not going to address here.

Instead, I'm going to try to put together a few general categories of character types we might describe as "evil." I think we (myself included) are quick to use terms like "bad guys" when we mean "antagonist" in the literary-criticism sense of the term. That's probably something we should all be careful of. That said, on to some gross oversimplification that I hope will nevertheless prove useful:

(1) Capital “E” Evil
This is the character who just wants to watch the world burn, who enjoys inflicting suffering for suffering’s sake, who exists to malign and misuse everything around him for the sake of just that.

As such, this should also be the rarest kind of evil in fiction, because it's the hardest kind to get right. I think that there are two subtypes worth considering here.

The first is cosmic evil–that kind of supernatural evil that is unknowable in its reasoning or motivation. Think Lovecraftian horror. We sidestep the major problem here by positing that we just can’t understand this evil. It just is. Particularly in fantasy, we can often get away with this, but it requires special suspension of disbelief or extra worldbuilding to swing. Even then, we’ve created a de facto villain that is really only interesting in an existential sense.

The second type is the corrupted individual. What we need, I think, to make this work is a believable backstory. Nobody begins that way, so we need an explanation of what suffering the person has gone through to mold him into this type of character.

This runs two ancillary risks, however. The first is that in describing said backstory, we humanize the character to the point that he no longer really fits into the Capital "E" Evil category. The second is that we turn our story into an analysis of the nature of evil. That said, such an analysis can be an enthralling type of tale, particularly if the "evil" character is the protagonist of the story.

(2) Mistaken Beliefs
This subgroup comprises those characters who honestly believe that they are doing the right thing while they commit atrocities the rest of us would find blatantly evil.

There are plenty of real-life examples to draw upon here to make the argument concisely. Take the Islamic State, for example. Adherents to this would-be theocracy believe that they are practicing true Islam while murdering the innocent. This is an extreme case, but the same dynamic can be found in any radical or fundamentalist religious group–Christians who kill doctors who perform abortions, for example. If you truly believe that God (or the gods) demanded something, and that the demand makes it right, it's easy to justify your actions.

Next, think of the person gripped by psychosis such that they are driven by an irrational belief that they cannot bring themselves to disavow. This is a particularly moving type of antagonist because they are driven by an affliction and not by their own agency–we can't actually morally blame those who aren't in control of themselves. This gives us a good opportunity to explore our "hero's" approach to evil–is she only interested in ending threats or is she interested in redemption? What does she do when that redemption isn't something she can achieve?

There are plenty of "lower magnitude" mistaken beliefs that make interesting villains. Les Misérables' Javert is an excellent example–a man so overcommitted to his idea of "justice" that he cannot allow himself any mercy. This type of extremism in belief is all around us–just listen to how some people think we should fight the "War on Terror" or what we should do to criminals.

We can also extend this to what in the law we would consider a "mistake of fact." Take, for instance, an antagonist who believes that the protagonist is a villain who must be stopped. Yes, the antagonist's belief is untrue, but if it were true, would we think of the antagonist as a "good guy"?

A brief aside here: what if the protagonist is acting immorally? Watching a character spiral out of control is a heck of a dramatic ride, and testing a character's willingness to act as he says he believes is a classic conflict to explore.

Mistaken identity (along with the particular case of being falsely accused) is one of the great archetypal plots, one which fits directly into the mistake of fact.

(3) The End Justifies the Means
This is a commonly-used type of antagonist, perhaps because it's so relatable. The constant moral choice that faces all of us in life is whether we'll sacrifice our values to get what we're after. The only difference here is one of scale. For the sake of drama, the means to achieve the end must be dire–the determination of life and death, or the fates of many. For what profiteth it a man to gain the whole world but lose his soul?

One of my favorite examples of this type of evil is the Operative from Serenity. The Operative is a man who accepts that he does evil things, but he is sincere in the belief that it will bring about a better galaxy (which perhaps makes him fall under (2) as well). In fact, he views his sins as a form of sacrifice–he does the unspeakable so that others don't have to. There is a sort of nobility to his principles, even if they are ultimately wrong. And, for those of you who prefer your characters to wear capes rather than carry swords, Batman isn't far off here, either. In fact, I'd say that Batman and the Operative have far more in common than we should be comfortable with if we're going to call one "hero" and the other "villain."

Speaking of Batman, most vigilantes fit into this category. Because we love it when the bad guys get theirs, even when they get it in a way that requires a sacrifice of our values, this type of character can be a popular protagonist as well–think of the Punisher.

I would wager that most of our favorite anti-heroes fall into this category as well–it’s their beliefs and the willingness to risk for those beliefs that make them heroes, but the way they go about pursuing the fulfillment of those beliefs that adds the “anti-.”

(4) Honor and Identity
This is perhaps a subcategory of “Mistaken Beliefs,” but it’s a significant-enough subtype that it deserves its own treatment.

People do evil things in the name of maintaining honor all the time. As a student of history–and particularly of the medieval and Renaissance periods–the first examples that pop into my mind are the duel and the vendetta. I've recently read a book called Mad Blood Stirring: Vendetta and Factions in Friuli during the Renaissance, which reinforces the connection for me. But Renaissance Italy is not the only honor culture known for the tit-for-tat systemic murder that defines vendetta–the Hatfields and McCoys come to mind in slightly more recent history.

And, of course, we could discuss "honor killings" in certain Middle Eastern or South Asian cultures (though, to be fair, the Napoleonic Code treated as excusable a husband's killing of an unfaithful wife and her lover, and even in American law a killing is often reduced from murder to manslaughter when a husband kills his spouse after finding her "in flagrante delicto.")

Honor cultures and actions taken under the justification of defending one’s honor are typically about maintaining a sense of identity–either one of purity or of strength (or perhaps both). The ideology of the honor culture says that if one does not maintain honor, one will be viewed as weak and will be taken advantage of by the rest of the culture.

And defending one’s sense of identity is a strong motivator, one that can create fascinating internal conflict, because it can be the conflict between internal belief and external pressures of society. For instance, “I believe that I should show mercy, but my culture tells me that I am not a man if I do not take vengeance.” Powerful stuff.

Honor, of course, is not the only identity-related factor that can lead a character to become "evil" or antagonistic. The need to belong to something greater than oneself is a fundamental human motivation, one that can lead to similar conflict between the will of the individual and the will of the group. Is there a story about gangs that doesn't include this plotline? What about cults and religions (which takes us back to (2))?

(5) Cross-Purposes and Limited Resources
I don’t have to explain that characters don’t have to possess malicious intent to be antagonists. The world has a habit of pitting humans against each other by its very nature–or at least tempting us to work against instead of with one another.

The core of successful narrative is conflict, and all it takes to create it is characters who want things that are opposed (or, even better, mutually exclusive).

This suits certain types of stories especially well–the noir and anything else that might be considered "gritty" immediately come to mind. The story doesn't need to be one of moralistic pedantry, though one must be careful not to let ambivalence about morality become relativism (at least I'm going to moralize on that point).

The Game of Thrones novels come to mind, as do Abercrombie's First Law books. The political intrigue inherent to both puts POV characters at odds with one another, certainly giving us occasional "villains," but not as a central theme of the stories.

But this type of conflict does not just suit the morally-ambiguous; it plays well to analysis of morality. I'm going to turn here to my favorite atheist philosopher (and one of my favorite storytellers), Joss Whedon. He's been quoted as saying, "If nothing we do matters, the only thing that matters is what we do." As an existentialist theologian, I see this freedom to create meaning when meaning is not thrust upon us as a core concept (but not one we'll discuss here). Likewise, when there's no clear "good and evil," we must judge the morality of the characters by the choices that they make. This can, of course, be easily combined with all but (1) above.

The conflict within a character of wanting to do the right thing, but perhaps being unwilling to pay the cost to do so, is a conflict we can all relate to. I'm inclined to argue that there is nothing in the craft of fiction so real as this. If you want your writing to have that air of verisimilitude, remember that readers will suspend disbelief for a lot of things when the characters seem lifelike and complex to them. That's not an excuse, though, to write fiction that is sloppy in everything except the characters.

That, I think, is why I’m personally drawn to “gritty” stories. They’re rich with meaning.

(6) Inanimate Evil

I include this mostly as a footnote because it needs little explanation. This is the classic "(wo)man versus nature" story, where an uncaring and unresponsive natural force (the elements, for example) forces the protagonist into a struggle to survive.

Conclusion

This list is, of course, not exhaustive. Each category has subcategories and nuances to be explored (and isn’t that one of the great joys of writing?). More general categories could be appended to this. When I think of them, I’ll post an update. I’m also inclined to write more about creating the types of characters that fit into (5), or at least stories of ambivalent morality–that is, dispassion on the part of the narration about moral judgment, leaving such a task to the reader. For now, this seems sufficient.