June 16, 2024

Extreme D&D DIY: adventures in hypergeometry, procedural generation, and software development (part 2)

In part 1, we developed a mathematical model to determine the timeline of key events in a modified version of the D&D adventure “Gates of Oblivion”, in which the player characters must repel an incursion into their home world by Lacc, the vast, shadowy City of Monoliths. This time, we’ll examine another key part of running that adventure: the maps.

The need for maps

As I noted last time, every aspect of a well-designed adventure should contribute to the adventure’s feel. In the case of “Gates of Oblivion”, the players should have a strong experience of Lacc as impossibly vast, maybe even infinite; the broad avenues (silent except for the grinding sound of great masses of stone and the distant moans of the unquiet dead) and the great monoliths of black stone should seem to stretch endlessly in all directions. From the text of the module:

The sky overhead is a featureless black, and ebon monoliths loom on all sides, divided by broad avenues. The avenues are layered with bones, many of them humanoid, though the bones of dragons and other monstrous beasts are visible as well. Some of the monoliths flicker with ghostly radiance, which does little to illuminate the darkened expanse. The blackened vault of heaven seethes with scraps of shadow, and the bone-strewn avenues are choked with tenebrous shapes that flit about without direction and purpose. In some places, vast mounds of rubble or dunes of pale sand obscure the bones of the avenue.

Lacc is a haunting and ever-shifting environment. Strings of glowing runes dance across the surfaces of the monoliths, while half-seen shapes twist and flutter across the featureless sky. Pools of sludgy black water gather inside the skulls of giant beasts, and the breeze carries the laments of countless tortured souls. Still, despite the shadows that flit through its streets, most of Lacc is desolate and empty. Though it is not truly infinite, a man would die of old age before he could cross the city on foot.

Very atmospheric!

But there’s a problem. While the module does include several combat maps (one for the interior of Xin’s tower, and one each for the immediate vicinity of the three magical Gates), they are tiny, describing, in total, less than half a square mile of area. Meanwhile, we are given a well-stocked random encounter table for Lacc, and a solid chunk of the adventure consists of traveling through the city, encountering various things, people, and creatures (many of which encounters are likely to resolve via combat).

Of course, these encounters may be resolved in the “theater of the mind” style, with the DM simply describing the environment, positioning, distances, etc.; but, for one thing, that is distinctly less fun for any players who enjoy tactical combat challenges (and if your players do not, then it’s not entirely clear why you’re running an 18th-level D&D 3.5 adventure), and for another thing, using “theater of the mind” style for encounters that take place in a strange, alien environment (as opposed to something like “a forest road”) can create a feeling of unreality and dissociation from the game world, which severely detracts from the play experience.

In many other cases similar to this one, a DM might find it easy enough to quickly sketch a map of a very small section of a very large, abstractly specified environment (whether that’s a city or an extensive cavern system or whatever else), just big enough for a combat encounter… but, for one thing, that is unusually tricky in this case due to the sizes and distances involved (see below); and for another thing, being able to see the edges of the map creates a claustrophobic feeling for the players (“I want to move 100 feet that way” “Uh… that’s off the map, so…” “Eh, never mind, I’ll just… kinda… stay in this area…”), which is precisely the opposite of what we want here. (Of course the DM can improvise the off-the-map areas, but then that just makes things feel unreal again: if the DM is making the place up as he goes along, that’s pretty much the same thing as the place not really existing in any meaningful sense.)

What to do? Well, recall what we know: Lacc is vast, most of it is empty, and it’s filled with endless miles upon miles of basically the same sort of thing: bone-strewn avenues and black monoliths. Here, by the way, is what we are told about the dimensions of both of those things:

The avenues of Lacc range from 50 to 400 feet wide. Any street narrower than 100 feet wide is invariably rough terrain, as mounds of rubble and heaps of bones make progress difficult. A typical monolith is a block of black stone 200 feet wide, 200 feet long, and 600 feet tall, with a pyramidal crest (though they come in all shapes and sizes).

This is a setup that practically screams “procedural generation”.

With a simple generator, we can create, in advance, multiple (arbitrarily many, in fact) very large maps of big (i.e., a dozen miles or more across) chunks of Lacc. At any point in the adventure when the PCs are traveling through the city and encounter something interesting (i.e., something with which the encounter is worth playing out, rather than merely describing), the DM can pick a spot somewhere in the middle of one of these maps and set the encounter there. The players will then find that their characters can move even large distances in any direction, and not find themselves “off the map”—in other words, the players will feel like the characters’ surroundings really exist, can be described concretely, aren’t just being made up on the spot, etc. This sort of thing will go a very long way toward making Lacc feel like a real place. And that will, in turn, contribute to making the city feel like (ironically) a fantastic place—eerie, alien, haunting.

There’s also another major advantage of having large combat maps (i.e., that cover a large area and can support tactical-scale movement across substantial distances): it broadens the tactical scope of combat by enabling a wider variety of movement and positioning options, long-range attacks, etc. to be used in play. (Consider, for example, how rare it is to find a tactical-scale map that covers an area large enough that it is possible for two opponents to be placed on the map in such a way that they are too far apart to hit each other with a fireball. 600 feet is just 120 grid squares on a standard 5-foot-square grid, but even that evidently turns out to be too much—when did you last see a combat map that was more than 120 grid squares across?1)

(Note that what we’re mapping here is a straightforwardly two-dimensional street layout. The hypergeometry described in the previous post is not relevant here; it’s not Lacc’s incursion into the Prime Material Plane that we’re concerned with, this time, but the landscape of the city itself, once the player characters make their way into it; and—locally, at least—Lacc will be experienced by the PCs as having the same sort of basic geometry as any other city. Of course, the DM is free to create various weird effects and consequences of Lacc’s alien nature, which may be linked to the hyperdimensionality we previously discussed—but that is beyond the scope of this post.)

Our procgen should be very simple. It should generate a layout of streets and monoliths—nothing else. (If the DM wishes to place specific things in some spots of the map, that is trivial to do, even on the fly; it’s generating the basic layout that’s the tedious and not-doable-spontaneously part, so that’s what we’re focusing on.)

Must we DIY?

Before we proceed, a seemingly reasonable question might be: why should I need to write a custom procgen for this? Surely there are innumerable D&D map generators out there already? Indeed there are, but I have been able to find none which can do, or can easily be configured to do, something that is both so simple and so specific. (This, I find, is a pattern which recurs often, whenever one determines to really do something the right way, and refuses to accept compromises for the sake of convenience.)

Let’s look at a couple of examples:

None of these are even approximately what we need here. None can be configured to output a simple rectilinear grid, with arbitrary (and quite large) dimensions, consisting of streets of a certain range of widths with rectangular buildings of a certain range of sizes. It’s such a simple thing, and yet—none of these city map generators have anything even sort of like this capability. They’re just not designed to do what we want, which, again, is both extremely simple (all the graphical “frills”, all the ornamentation, all the styling details offered by all of the above generators—it is all absolutely worthless for our purposes) and very specific (any deviation whatsoever from the requirements makes the results useless, because Lacc is not a generic vaguely-medieval city with some houses and castles and such—it is the City of Monoliths, and not anything else).

There’s nothing for it: we have to write our own.

In the next section, I’ll talk about the reasoning behind the algorithm we’ll use. Afterwards, I’ll give a high-level explanation of the algorithm, and comment on some design decisions. (There’s a link to the complete code at the end of the post.)

Developing the algorithm

First, we should define some key parameters. One pair of key parameters is the permissible range of dimensions for the footprint of monoliths. (That is, we care about their width and length, not their height; of course we can also generate random height values if we wish, but since the height of a monolith does not in any way affect the heights of neighboring monoliths, no special algorithm is needed for this. We may return to this point later.)

The module text quoted above informed us that a typical monolith is “200 feet wide, 200 feet long … (though they come in all shapes and sizes)”. Let us now pick a reasonable range—say, that length and width should vary, independently and uniformly, from 150 feet to 600 feet, in 50-foot increments. (As we’ll see shortly, the 50-foot increment of variation will permit us to represent the map in a very efficient way.) We can alter the values of these parameters later, of course.

The other key parameter is the set of permissible street widths. This is also given in the module text: “The avenues of Lacc range from 50 to 400 feet wide.” (We will once again assume that street widths vary in 50-foot increments.) However, for the first iteration of our algorithm, we will actually ignore this variability, and only include 50-foot-wide streets. (We will relax this restriction later, of course.)
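Taken together, these choices give us a convenient unit convention, which we can sketch as follows (the constant and helper names here are illustrative, not the generator’s own):

```javascript
// One map unit represents 50 feet; all key dimensions vary in 50-foot
// increments, so every measurement on the map is a small integer.
const FEET_PER_UNIT = 50;

const feetToUnits = feet => feet / FEET_PER_UNIT;

// Monolith footprints: 150–600 feet on a side → 3–12 units.
const minMonolithFootprint = feetToUnits(150);  // 3
const maxMonolithFootprint = feetToUnits(600);  // 12

// Street widths: 50–400 feet → 1–8 units.
const minStreetWidth = feetToUnits(50);   // 1
const maxStreetWidth = feetToUnits(400);  // 8
```

Working in these units keeps the data structures compact and the arithmetic exact.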

We are ready to proceed.

Divide & conquer

One obvious approach might be the following:

  1. Start with one available lot (i.e., rectangular region of map not divided by any streets running through it), spanning the whole map.
  2. Grab a lot from the set of available lots.
    • If that lot is already no greater, in both dimensions, than the maximum size of a monolith (i.e., if both the lot’s length and its width are within the permissible ranges for those parameters), set it aside as claimed.
    • Otherwise, divide the lot by placing a street such that the lot is cut along its shorter axis (i.e., we want to divide a long and narrow lot into two lots of a more equitable aspect ratio, rather than dividing it into two equally long but even narrower lots).
      • Place this dividing street not necessarily equidistantly from the two ends of the lot, but at a uniformly-randomly selected position, picked in 50-foot increments from the permissible positions, which are only those positions that result in the two lots resulting from the division both being no smaller than the minimum permissible monolith size. (In other words, we do not want to end up with any lots that are too small to fit a monolith.)
      • The ends of the street should not extend beyond the boundaries of the lot.
    • For each of the two lots created by the division, if the lot is already the appropriate size (by the same criteria as above), set it aside as claimed. Otherwise, place it back into the set of available lots.
  3. If there are still lots available, repeat step 2.
  4. All lots should now be claimed. Place a monolith on each.
  5. Done!

The code (leaving out assorted boilerplate, utility functions, etc.) is below. (Note that the distance measurements are in units of 50-foot increments, i.e. a lot width of 3 represents a lot 150 feet wide, etc. This convenient uniformity is the advantage of enforcing 50-foot increments in variation of key proportions.)

Click to expand code block
L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12
};

L.lots = [ { x:      0,
			 y:      0,
			 width:  L.gridWidth,
			 height: L.gridHeight
			 } ];
L.claimedLots = [ ];

function newStreetDividingBoundingBlock(block) {
	let streetWidth = 1;
	if (block.width > block.height) {
		return {
			width:  streetWidth,
			length: block.height,
			orientation: "v",
			x: (  block.x 
				+ L.minMonolithFootprint 
				+ rollDie(block.width - 2 * L.minMonolithFootprint - streetWidth)),
			y: block.y
		};
	} else {
		return {
			width:  streetWidth,
			length: block.width,
			orientation: "h",
			x: block.x,
			y: (  block.y 
				+ L.minMonolithFootprint 
				+ rollDie(block.height - 2 * L.minMonolithFootprint - streetWidth))
		};
	}
}

function lotIsClaimable(lot) {
	return (   lot.width  <= L.maxMonolithFootprint
			&& lot.height <= L.maxMonolithFootprint);
}

function newLotsAfterStreetDividesLot(street, lot) {
	if (street.orientation == "h") {
		let topLot = {
			x:      lot.x,
			y:      lot.y,
			width:  lot.width,
			height: street.y - lot.y
		};
		let bottomLot = {
			x:      lot.x,
			y:      street.y + street.width,
			width:  lot.width,
			height: lot.y + lot.height - (street.y + street.width)
		};

		return [ topLot, bottomLot ];
	} else {
		let leftLot = {
			x:      lot.x,
			y:      lot.y,
			width:  street.x - lot.x,
			height: lot.height
		};
		let rightLot = {
			x:      street.x + street.width,
			y:      lot.y,
			width:  lot.x + lot.width - (street.x + street.width),
			height: lot.height
		};

		return [ leftLot, rightLot ];
	}
}

function subdivideLots() {
	while (L.lots.length > 0) {
		let lot = L.lots.pop();
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			newLotsAfterStreetDividesLot(newStreetDividingBoundingBlock(lot), lot).forEach(newLot => {
				if (lotIsClaimable(newLot)) {
					L.claimedLots.push(newLot);
				} else {
					L.lots.push(newLot);
				}
			});
		}
	}
}

subdivideLots();
addMonoliths(L.claimedLots);

And here’s one sample result (click to zoom in):


Figure 1. (Click to zoom in.)

Not bad! This map depicts a 60,800 foot × 49,100 foot region of Lacc. (That’s 11.5 miles × 9.3 miles, or 12,160 × 9,820 standard 5-ft. grid squares.) This is already quite large enough to create the illusion of a vast, unbounded space—and we can create as many such maps as we like.

The generator defaults to creating a map exactly as big as the browser viewport (i.e., the above map was generated by loading the demo in a browser window 1216 pixels wide by 982 pixels tall, resulting in a map 1216 units × 982 units in size; remember that 1 map unit here represents 50 feet), but that’s no intrinsic limitation; we can just as easily set the grid dimensions to something much larger—say, 10,000×10,000. That does, of course, make the code take correspondingly longer to run, and creates a correspondingly larger output file (warning: 48 MB SVG file). (A ~100-fold increase, in both cases, as both run time and output size scale linearly with area, the latter due to the fixed average monolith size and thus an expected monolith count proportional to area—the 1216×982-unit map above has ~18,000 monoliths, while the 10,000×10,000-unit map has ~1,500,000 monoliths.) A map of those dimensions represents a region of Lacc that is ~94.7 miles square, a.k.a. 100,000 standard grid squares to a side.
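The arithmetic behind those scaling claims is easy to check (a standalone sketch, using the approximate monolith count quoted above):

```javascript
// Both run time and output size scale linearly with map area: the average
// monolith footprint is fixed, so the expected monolith count is
// proportional to area.
const smallMapArea = 1216 * 982;      // viewport-sized map, in 50-ft units
const largeMapArea = 10000 * 10000;   // the larger map

const areaRatio = largeMapArea / smallMapArea;

const observedMonolithsSmall = 18000;  // approximate count from the map above
const estimatedMonolithsLarge = observedMonolithsSmall * areaRatio;

// areaRatio ≈ 83.7 (hence the roughly 100-fold increase);
// estimatedMonolithsLarge ≈ 1.5 million.
```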

Not bad, as I said… but not great. The obvious flaw is that we only have one width of street represented, whereas what we’d like is a reasonable mix: a few great thoroughfares that run across great lengths of the map, some decently-wide connecting avenues between them, a bunch of medium-sized streets, and a whole lot of short little (well, “little” by the standards of the City of Monoliths) alleys.

Random street widths

We might first think to just make a minimal tweak to the above algorithm that gives each placed dividing street a random width (omitted sections of code are unchanged):

Click to expand code block
L = {
	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.01,
		4: 0.09,
		2: 0.25,
		1: 0.65
	}
};

function randomStreetWidth() {
	let dieSize = 100;
	let roll = rollDie(dieSize);

	let minRoll = dieSize + 1;
	for (let [ width, probability ] of Object.entries(L.streetWidths)) {
		minRoll -= probability * dieSize;
		if (roll >= minRoll)
			return parseInt(width);
	}
}

function newStreetDividingBoundingBlock(block) {
	let streetWidth;
	do {
		streetWidth = randomStreetWidth();
	} while (streetWidth > (Math.max(block.width, block.height) - L.maxMonolithFootprint));

	if (block.width > block.height) {
		return {
			width:  streetWidth,
			length: block.height,
			orientation: "v",
			x: (  block.x 
				+ L.minMonolithFootprint 
				+ rollDie(block.width - 2 * L.minMonolithFootprint - streetWidth)),
			y: block.y
		};
	} else {
		return {
			width:  streetWidth,
			length: block.width,
			orientation: "h",
			x: block.x,
			y: (  block.y 
				+ L.minMonolithFootprint 
				+ rollDie(block.height - 2 * L.minMonolithFootprint - streetWidth))
		};
	}
}

Notice that we’ve specified the distribution of street widths that our algorithm should sample from when attempting to place a street: 400-ft.-wide (a.k.a. a width of 8 units) streets—1%; 200-ft.-wide—9%; 100-ft.-wide—25%; 50-ft.-wide—65%. (The resulting distribution of street widths in the generated map will be somewhat different from this—more skewed toward narrower streets, to be precise—because the widths of dividable lots at each iteration will increasingly limit the average maximum width of a street that can fit in a lot.)
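We can verify that the sampling scheme reproduces these percentages with a quick standalone experiment (the seeded PRNG and rollDie here are illustrative stand-ins for the generator’s own utilities):

```javascript
const streetWidths = { 8: 0.01, 4: 0.09, 2: 0.25, 1: 0.65 };

// Park–Miller “minimal standard” PRNG, seeded for reproducibility.
let seed = 12345;
const random = () => (seed = (seed * 48271) % 2147483647) / 2147483647;
const rollDie = n => Math.floor(random() * n) + 1;

// Same sampling logic as randomStreetWidth() above.
function randomStreetWidth() {
	let dieSize = 100;
	let roll = rollDie(dieSize);

	let minRoll = dieSize + 1;
	for (let [ width, probability ] of Object.entries(streetWidths)) {
		minRoll -= probability * dieSize;
		if (roll >= minRoll)
			return parseInt(width);
	}
}

// Tally 100,000 samples; expect roughly 65% / 25% / 9% / 1%.
let counts = { 1: 0, 2: 0, 4: 0, 8: 0 };
for (let i = 0; i < 100000; i++)
	counts[randomStreetWidth()]++;
```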

Unfortunately, the result is not what we want:


Figure 2. (Click to zoom in.)

This looks wrong. The problem here is that wider streets should, in general, be longer than narrow streets; having broad avenues that go for only a tiny distance is weird—you never see this sort of thing in a real city. To be precise, what we want is for a street to be able to cut across any orthogonal-orientation street that is narrower than it. So the broadest avenues could span the whole map, while most of the small alleys would run only between the two nearest larger streets.

(Actually, is that quite right? Hold that thought; we’ll come back to it. For now, let’s take for granted the desideratum described in the previous paragraph.)

Drawing lots

Let’s try a different approach. Instead of iterating over lots and subdividing them, we will do the following:

  1. Randomly select a street width and an orientation.
  2. Selecting from only those lots in which a street of the selected width and orientation can fit (without resulting, after dividing the lot, in the creation of any lot that’s too small to contain a monolith), pick a random such lot.
    • If no lot can be found that fits a street of the selected width and orientation, go back to step 1.
  3. Place the street in the selected lot (at a random position within the lot, selecting from only those positions that satisfy the constraint described in the previous step). (This will divide the lot into two lots.)
    • For each of the two newly created lots, check whether the lot is ready to have a monolith placed on it. If so, set it aside as claimed. Otherwise, place it back into the set of available lots.
  4. If any available lots remain, repeat from step 1.

Click to expand code block
L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.02,
		4: 0.08,
		2: 0.25,
		1: 0.65
	}
};

function newLotsAfterStreetDividesLot(street, lot) {
	if (street.orientation == "h") {
		let topLot = {
			x:      lot.x,
			y:      lot.y,
			width:  lot.width,
			height: street.y - lot.y,
			left:   lot.left,
			right:  lot.right,
			top:    lot.top,
			bottom: street.y
		};
		let bottomLot = {
			x:      lot.x,
			y:      street.y + street.width,
			width:  lot.width,
			height: lot.bottom - (street.y + street.width),
			left:   lot.left,
			right:  lot.right,
			top:    street.y + street.width,
			bottom: lot.bottom
		};

		return [ topLot, bottomLot ];
	} else {
		let leftLot = {
			x:      lot.x,
			y:      lot.y,
			width:  street.x - lot.x,
			height: lot.height,
			left:   lot.left,
			right:  street.x,
			top:    lot.top,
			bottom: lot.bottom
		};
		let rightLot = {
			x:      street.x + street.width,
			y:      lot.y,
			width:  lot.right - (street.x + street.width),
			height: lot.height,
			left:   street.x + street.width,
			right:  lot.right,
			top:    lot.top,
			bottom: lot.bottom
		};

		return [ leftLot, rightLot ];
	}
}

function lotIsClaimable(lot) {
	return (   lot.width  <= L.maxMonolithFootprint
			&& lot.height <= L.maxMonolithFootprint);
}

function getRandomStreetWidth() {
	let dieSize = 100;
	let roll = rollDie(dieSize);

	let minRoll = dieSize + 1;
	for (let [ width, probability ] of Object.entries(L.streetWidths)) {
		minRoll -= probability * dieSize;
		if (roll >= minRoll)
			return parseInt(width);
	}
}

function getRandomOrientation() {
	return (rollDie(2) - 1) ? "h" : "v";
}

function getRandomLot(width, orientation) {
	let suitableLots = L.lots.filter(lot => {
		if (orientation == "h") {
			return lot.height >= 2 * L.minMonolithFootprint + width;
		} else {
			return lot.width >= 2 * L.minMonolithFootprint + width;
		}
	});

	if (suitableLots.length == 0)
		return null;
	else
		return suitableLots[rollDie(suitableLots.length) - 1];
}

function getRandomPointInLot(lot, width, orientation) {
	if (lot == null)
		return null;

	if (orientation == "h") {
		return {
			x: lot.x + Math.floor(lot.width / 2),
			y: lot.y + L.minMonolithFootprint + rollDie(lot.height - 2 * L.minMonolithFootprint - width)
		};
	} else {
		return {
			x: lot.x + L.minMonolithFootprint + rollDie(lot.width - 2 * L.minMonolithFootprint - width),
			y: lot.y + Math.floor(lot.height / 2)
		};
	}
}

function placeRandomStreet() {
	if (L.lots.length == 0)
		return false;

	let width = getRandomStreetWidth();

	let orientation = getRandomOrientation();

	let lot = getRandomLot(width, orientation);
	let position = getRandomPointInLot(lot, width, orientation);

	if (position == null)
		return true;

	let length = orientation == "h" 
				 ? lot.width 
				 : lot.height;
	if (orientation == "h") {
		position.x = lot.x;
	} else {
		position.y = lot.y;
	}

	L.lots.remove(lot);
	newLotsAfterStreetDividesLot({
		width:  width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	}, lot).forEach(newLot => {
		if (lotIsClaimable(newLot)) {
			L.claimedLots.push(newLot);
		} else {
			L.lots.push(newLot);
		}
	});

	return true;
}

while(placeRandomStreet());

addMonoliths(L.claimedLots);

Figure 3. (Click to zoom in.)

We’re making progress! We’ve still got a lot of those short, wide streets, though…

Unbounded extension

Perhaps we can simply extend each street we place past the bounds of the lot we’re placing it in, all the way to the edges of the map? Of course, when placing a street, we will have to divide not only the lot we picked out, but all the other lots which that street will intersect. Like so (as before, unchanged code is omitted):

Click to expand code block
function placeRandomStreet() {
	if (L.lots.length == 0)
		return false;

	let width = getRandomStreetWidth();

	let orientation = getRandomOrientation();

	let lot = getRandomLot(width, orientation);
	let position = getRandomPointInLot(lot, width, orientation);

	if (position == null)
		return true;

	let length = orientation == "h" 
				 ? L.gridWidth 
				 : L.gridHeight;
	if (orientation == "h") {
		position.x = 0;
	} else {
		position.y = 0;
	}

	let newStreet = {
		width:  width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	};
	let lotsAfterSubdividing = [ ];
	for (let lot of L.lots) {
		if (streetIntersectsLot(newStreet, lot)) {
			lotsAfterSubdividing.push(...(newLotsAfterStreetDividesLot(newStreet, lot)));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}
	for (let lot of L.claimedLots) {
		if (streetIntersectsLot(newStreet, lot)) {
			lotsAfterSubdividing.push(...(newLotsAfterStreetDividesLot(newStreet, lot)));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}	
	L.lots = [ ];
	L.claimedLots = [ ];
	for (let lot of lotsAfterSubdividing) {
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.lots.push(lot);
		}
	}

	return true;
}

while(placeRandomStreet());

addMonoliths([ ...L.lots, ...L.claimedLots ]);

Figure 4. (Click to zoom in.)

Hmm. Not quite.

Top-down planning

Let’s revisit the question of which streets should be able to cut across which other streets. We said before that wider streets should be able to cut across narrower streets, but not vice-versa; but what does that actually mean? If street A intersects street B, then street B by definition also intersects street A. Yet there’s clearly something wrong with both figure 4 and figure 2 (and, to a slightly lesser extent, figure 3), which has to do with what streets intersect what others.

Recall what we said we want to see: “a few great thoroughfares that run across great lengths of the map, some decently-wide connecting avenues between them, a bunch of medium-sized streets, and a whole lot of short little alleys”. So, the widest streets should run all the way across the map, but a small alley should not run all the way across the map. One way to think about this is to imagine the city layout being planned in stages: first we place the widest streets, then we place slightly narrower streets in the lots between those widest streets (but never crossing the boundaries of those lots), then we place even narrower streets in the lots that remain, etc., until we’ve reached the narrowest street type, which we can continue placing until we’ve subdivided all the lots down to our desired size.

The problem with implementing this approach algorithmically is that having started placing streets of width 8, say, we don’t know when to stop—how many such streets should we have? When can we say “that’s enough streets of width 8, let’s now start placing streets of width 4”? Since we’re placing streets in order of width, we can’t rely on random sampling from a defined probability distribution to give us (an approximation to) the street width frequency distribution we’re going for.

However, perhaps we can cheat, by means of a simple heuristic. The code that produced figure 3 results in ~23,000 streets at a grid size of 1216×982 (that’s in width units representing 50 feet each, remember, not feet or grid squares). Let’s round that down to 20,000, and then simply create as many streets of each type as we estimate there should be, given our desired street width frequency distribution. (That distribution will not match the probability distributions we’ve been using in the previous code samples, because there, the width of a placed street was constrained by the average available lot size. Since that won’t meaningfully be the case any longer, we’ll have to adjust the frequencies significantly downwards for the larger street widths, e.g. there certainly cannot be 200 or 400 streets of width 8 on a 1216×982 map!)

A bit of tweaking of the frequencies, and we have (omitted code unchanged, as before):

Click to expand code block
L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.00002,
		4: 0.00028,
		2: 0.00470,
		1: 0.99500
	}
};

function placeRandomStreet(streetWidth) {
	if (L.lots.length == 0)
		return false;

	let width = streetWidth;

	let orientation = getRandomOrientation();

	let lot = getRandomLot(width, orientation);
	let position = getRandomPointInLot(lot, width, orientation);

	if (position == null)
		return true;

	let length = orientation == "h" 
				 ? lot.width 
				 : lot.height;
	if (orientation == "h") {
		position.x = lot.x;
	} else {
		position.y = lot.y;
	}

	let newStreet = {
		width:  width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	};

	let lotsAfterSubdividing = [ ];
	for (let lot of L.lots) {
		if (streetIntersectsLot(newStreet, lot)) {
			lotsAfterSubdividing.push(...(newLotsAfterStreetDividesLot(newStreet, lot)));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}
	L.lots = [ ];
	for (let lot of lotsAfterSubdividing) {
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.lots.push(lot);
		}
	}

	return true;
}

let totalNumStreets = 2E4;
let streetWidthsInReverseOrder = [ ...(Object.keys(L.streetWidths).sort((a, b) => a - b)) ].reverse();
for (let width of streetWidthsInReverseOrder) {
	width = parseInt(width);
	let numStreetsOfThisWidth = L.streetWidths[width] * totalNumStreets;
	for (let i = 0; i < numStreetsOfThisWidth; i++)
		placeRandomStreet(width);
}

addMonoliths([ ...L.lots, ...L.claimedLots ]);

Figure 5. (Click to zoom in.)

We’re making progress!

We’ve got a bunch of un-subdivided lots in there, though (the large blocks of black). Those definitely exceed our defined monolith size ranges. Now, we could adjust our anticipated total street count upwards, and tweak the defined street width frequency distribution… but that seems like a very fragile solution (even more so than the heuristic we’re already using).

A mixed approach

For a more robust fix, let’s bring back the approach we started with (which generated the map shown in figure 1): subdividing lots until all lots are suitably small. We will do this after we have placed all the streets as per the previous code sample:

Click to expand code block
L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.00002,
		4: 0.00028,
		2: 0.00470,
		1: 0.99500
	}
};

function placeRandomStreet(streetWidth) {
	if (L.lots.length == 0)
		return false;

	let width = streetWidth;

	let orientation = getRandomOrientation();

	let lot = getRandomLot(width, orientation);
	let position = getRandomPointInLot(lot, width, orientation);

	if (position == null)
		return true;

	let length = orientation == "h" 
				 ? lot.width 
				 : lot.height;
	if (orientation == "h") {
		position.x = lot.x;
	} else {
		position.y = lot.y;
	}

	let newStreet = {
		width:  width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	};

	let lotsAfterSubdividing = [ ];
	for (let lot of L.lots) {
		if (streetIntersectsLot(newStreet, lot)) {
			lotsAfterSubdividing.push(...(newLotsAfterStreetDividesLot(newStreet, lot)));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}
	L.lots = [ ];
	for (let lot of lotsAfterSubdividing) {
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.lots.push(lot);
		}
	}

	return true;
}

function newStreetDividingBoundingBlock(block) {
	let streetWidth = 1;
	if (block.width > block.height) {
		return {
			width:  streetWidth,
			length: block.height,
			orientation: "v",
			x: (  block.x 
				+ L.minMonolithFootprint 
				+ rollDie(block.width - 2 * L.minMonolithFootprint - streetWidth)),
			y: block.y
		};
	} else {
		return {
			width:  streetWidth,
			length: block.width,
			orientation: "h",
			x: block.x,
			y: (  block.y 
				+ L.minMonolithFootprint 
				+ rollDie(block.height - 2 * L.minMonolithFootprint - streetWidth))
		};
	}
}

function subdivideLots() {
	while (L.lots.length > 0) {
		let lot = L.lots.pop();
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.streets.push(newStreetDividingBoundingBlock(lot));
			newLotsAfterStreetDividesLot(L.streets.last, lot).forEach(newLot => {
				if (lotIsClaimable(newLot)) {
					L.claimedLots.push(newLot);
				} else {
					L.lots.push(newLot);
				}
			});
		}
	}
}

let totalNumStreets = 2E4;
let streetWidthsInReverseOrder = Object.keys(L.streetWidths).sort((a, b) => a - b).reverse();
for (let width of streetWidthsInReverseOrder) {
	width = parseInt(width);
	let numStreetsOfThisWidth = L.streetWidths[width] * totalNumStreets;
	for (let i = 0; i < numStreetsOfThisWidth; i++)
		placeRandomStreet(width);
}

subdivideLots();

addMonoliths([ ...L.lots, ...L.claimedLots ]);

Figure 6. (Click to zoom in.)

Definitely starting to get somewhere.

One problematic behavior that this algorithm has is that it often places parallel wide streets much closer to one another than is plausible. You can see many cases of this in the preceding two samples, e.g.:


Figure 7. (Click to zoom in.)

Breathing room

So let’s explicitly correct for this. We’ll introduce a “repulsion factor”, which will prevent streets from being too close to each other. Streets will repel one another in proportion to width—specifically, the width of the narrower of two streets. Thus, a narrow street can be close to a narrow street, or to a wide street; but two wide streets must be further apart. Each time we pick a random location to place a street, we’ll check whether it’s too close to another street, according to this repulsion factor; if so, we’ll try again with another random location.
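As a formula, the rule is: required gap = max(minimum monolith footprint, narrower street's width × repulsion factor). A minimal sketch of just that computation (`minimumGap` is a hypothetical helper name, not part of the generator; the constants match the configuration used below):

```javascript
// Minimum empty space required between two parallel streets, per the
// repulsion rule: proportional to the narrower street's width, but
// never less than the minimum monolith footprint.
// (minimumGap is a hypothetical helper, not part of the generator.)
function minimumGap(widthA, widthB, minMonolithFootprint, streetRepulsionFactor) {
	let repulsion = Math.min(widthA, widthB) * streetRepulsionFactor;
	return Math.max(minMonolithFootprint, repulsion);
}

// With a footprint of 3 and a repulsion factor of 10:
minimumGap(1, 1, 3, 10);  // two alleys may sit 10 cells apart
minimumGap(8, 8, 3, 10);  // two wide avenues need 80 cells between them
minimumGap(1, 8, 3, 10);  // an alley may still run 10 cells from an avenue
minimumGap(1, 1, 3, 2);   // the footprint acts as a floor when repulsion is weak
```

Note that the narrower width governs the gap, which is what lets small alleys weave freely between the great avenues.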

And so (unchanged code omitted, as before):

L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.00010,
		4: 0.00090,
		2: 0.00900,
		1: 0.99000
	},

	streetRepulsionFactor: 10
};

function streetsBoundingLot(lot) {
	let boundingStreets = L.streets.filter(street => {
		if (street.orientation == "h") {
			return (   (   street.y + street.width == lot.top
						|| lot.bottom == street.y)
					&& (   street.x < lot.right
						&& street.x + street.length > lot.left));

		} else {
			return (   (   street.x + street.width == lot.left
						|| lot.right == street.x)
					&& (   street.y < lot.bottom
						&& street.y + street.length > lot.top));
		}
	});

	let streets = {
		all: boundingStreets
	};
	for (let boundingStreet of boundingStreets) {
		if (boundingStreet.y + boundingStreet.width == lot.top)
			streets.top = boundingStreet;
		if (lot.bottom == boundingStreet.y)
			streets.bottom = boundingStreet;
		if (boundingStreet.x + boundingStreet.width == lot.left)
			streets.left = boundingStreet;
		if (lot.right == boundingStreet.x)
			streets.right = boundingStreet;
	}

	return streets;
}

function placeRandomStreet(streetWidth) {
	if (L.lots.length == 0)
		return false;

	let width = streetWidth;

	let orientation = getRandomOrientation();

	let lot = getRandomLot(width, orientation);
	let position = getRandomPointInLot(lot, width, orientation);

	if (position == null)
		return false;

	let boundingStreets = streetsBoundingLot(lot);
	if (orientation == "h") {
		let spaceAbove = position.y - (boundingStreets.top.y + boundingStreets.top.width);
		let spaceBelow = boundingStreets.bottom.y - (position.y + width);

		let minSpaceAbove = Math.max(L.minMonolithFootprint, 
									 Math.min(width, boundingStreets.top.width || width)    * L.streetRepulsionFactor);
		let minSpaceBelow = Math.max(L.minMonolithFootprint,
									 Math.min(width, boundingStreets.bottom.width || width) * L.streetRepulsionFactor);

		if (   spaceAbove < minSpaceAbove
			|| spaceBelow < minSpaceBelow)
			return false;
	} else {
		let spaceLeft = position.x - (boundingStreets.left.x + boundingStreets.left.width);
		let spaceRight = boundingStreets.right.x - (position.x + width);

		let minSpaceLeft = Math.max(L.minMonolithFootprint, 
									Math.min(width, boundingStreets.left.width || width)   * L.streetRepulsionFactor);
		let minSpaceRight = Math.max(L.minMonolithFootprint,
									 Math.min(width, boundingStreets.right.width || width) * L.streetRepulsionFactor);

		if (   spaceLeft  < minSpaceLeft
			|| spaceRight < minSpaceRight)
			return false;
	}

	let length = orientation == "h" 
				 ? lot.width 
				 : lot.height;
	if (orientation == "h") {
		position.x = lot.x;
	} else {
		position.y = lot.y;
	}

	L.streets.push({
		width:  width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	});

	let lotsAfterSubdividing = [ ];
	for (let lot of L.lots) {
		if (streetIntersectsLot(L.streets.last, lot)) {
			lotsAfterSubdividing.push(...(newLotsAfterStreetDividesLot(L.streets.last, lot)));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}
	L.lots = [ ];
	for (let lot of lotsAfterSubdividing) {
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.lots.push(lot);
		}
	}

	return true;
}

let totalNumStreets = 2E4;
let streetWidthsInReverseOrder = Object.keys(L.streetWidths).sort((a, b) => a - b).reverse();
for (let width of streetWidthsInReverseOrder) {
	width = parseInt(width);
	let desiredNumStreetsOfThisWidth = L.streetWidths[width] * totalNumStreets;
	let numStreetsOfThisWidth = 0;
	if (width > 1) {
		while (numStreetsOfThisWidth < desiredNumStreetsOfThisWidth) {
			if (placeRandomStreet(width))
				numStreetsOfThisWidth++;
		}
	} else {
		for (let i = 0; i < desiredNumStreetsOfThisWidth; i++)
			placeRandomStreet(width);
	}
}

subdivideLots();

addMonoliths(L.claimedLots);

Figure 8. (Click to zoom in.)

Not too bad.

We’ve definitely reached the point where we can generate maps (in as great a quantity as we please) which have the property that they locally look plausible at pretty much any point, and offer a variety of types of street configurations (from great highways with smaller avenues branching off them, to haphazardly-arranged warrens of alleys). In other words, this is already usable for the purpose of picking an arbitrary location somewhere in the middle of the map and setting an encounter there (which is what we wanted in the first place).

Still, there are some problems. One obvious problem is that the algorithm is fragile and contains multiple “magic numbers” which must be tweaked in an ad-hoc manner to produce desired results—an expected street count (which doesn’t even end up matching the street count in the resulting map; the map shown in figure 8, for example, has ~15,000 streets, not ~20,000), and a carefully tweaked expected street width frequency distribution (which also doesn’t end up matching the actual frequency distribution of streets in the resulting map). The former, in particular, has to be adjusted for different map sizes (though perhaps we could pick a reasonably suitable number automatically as some function of map size…). Robust and flexible, it ain’t.

Even setting aside implementation details, though, something about this map (and this is true of basically all maps created by this version of the algorithm) looks wrong—unnatural, somehow. This is not necessarily a problem for our very specific motivating purpose (using one or more such maps as settings for random encounters while traveling through the City of Monoliths)… but, as with our other design decisions so far, it would be better if every aspect of the design contributed to the feel of the adventure, while avoiding any visibly artificial-seeming elements.

Clumping

Why does the map shown in figure 8 look vaguely wrong? On close inspection, we can see that it has two odd properties. First, this algorithm has a tendency to subdivide some parts of the map many times (producing regions with many streets of intermediate width and heterogeneous sizes of “blocks” between them), while leaving other parts of the map relatively uniform (producing large regions with only small alleys, and no intermediate-width streets crossing them).

This is easier to see if we pause the generator in the middle of the process, before any streets of width 1 have been placed. Here’s one map with only streets of width 2 or greater:


Figure 9. (Click to zoom in.)

And here’s a map with only streets of width 4 or greater:


Figure 10. (Click to zoom in.)

Whether this effect is good or bad by itself is probably a question of aesthetics (although in combination with the next quirk, it’s definitely bad). But why does it happen? Actually, that’s a pretty straightforward mathematical consequence of how the street-placement algorithm works. Remember that when placing a street, we select a random point at which to (attempt to) place the street by first picking an un-subdivided lot at random, with the distribution across lots being uniform and—importantly—not weighted by lot size. Now suppose that we place two streets, like this:


Figure 11. (Click to zoom in.)

We now have three lots—and each of them has an equal chance to be selected for attempted street placement! The top two lots together cover a smaller area than the bottom lot, and the top-left lot covers a smaller area than the top-right lot—which means that, when selecting a random lot to place a street in, that street is more likely to be placed closer to the top edge of the map than to the bottom edge, and more likely to be placed closer to the left edge of the map than to the right edge. And the more streets we place in some region of the map, the more lots we create there, and the more likely it becomes that further streets will be placed in that region, etc. Of course, the relative likelihood that a street will be placed in some part of the map levels off once the repulsion effect and the minimum lot size rule start to prevent streets from being placed in more and more of the randomly chosen points in that part of the map… but by then, most or all of the widest and intermediate-width streets have been placed, and the skew in the distribution of street widths by map region is already in place.
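A toy calculation (with made-up areas, not taken from the generator) makes the bias concrete—selecting uniformly per lot gives a small lot the same chance as a large one, whereas selecting uniformly per unit of area would not:

```javascript
// Three lots, as in figure 11: two smaller lots on top, one large lot
// below. (The areas are invented for illustration.)
let lots = [
	{ name: "top-left",  area: 1000 },
	{ name: "top-right", area: 3000 },
	{ name: "bottom",    area: 6000 }
];
let totalArea = lots.reduce((sum, lot) => sum + lot.area, 0);

for (let lot of lots) {
	let perLot  = 1 / lots.length;       // how the generator selects: uniform per lot
	let perArea = lot.area / totalArea;  // how a uniform random point would select
	console.log(`${lot.name}: ${(perLot * 100).toFixed(0)}% per-lot vs ${(perArea * 100).toFixed(0)}% per-area`);
}
// The top-left lot receives a third of all placement attempts despite
// covering only a tenth of the map—so the already-subdivided corner
// keeps accumulating streets.
```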

Whither crossroads?

The other odd property of maps generated by this algorithm can be seen if we examine the map in figure 8 closely. Take a look at the intersections between streets. Notice anything odd?

There are almost no X-intersections (a.k.a. crossroads).

There are plenty of T-intersections, of course. But very few where two streets cross and both of them continue in both directions—and only one such intersection involving streets of width greater than 1!

This is, actually, pretty weird. Real cities tend to have plenty of X-intersections; their near-total absence from this map is a large part of what makes it look vaguely artificial. The way this algorithm is written means that narrower streets can never cut across wider streets… but that turns out to mean that the reverse is also true, with the result that streets don’t cut across one another at all. (The few existing X-intersections are created by coincidence—each consists not of two streets, but of three, with two coincidentally-collinear streets meeting, at the same point, a third which is orthogonal to the other two.)
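If we wanted to verify this quantitatively, a crossroads test might look like the following sketch (`isXIntersection` is a hypothetical helper, not part of the generator, though it uses the same street fields: `x`, `y`, `width`, `length`, `orientation`):

```javascript
// A true X-intersection: two orthogonal streets that each continue
// past the crossing in both directions.
// (isXIntersection is a hypothetical helper, not part of the generator.)
function isXIntersection(a, b) {
	if (a.orientation == b.orientation)
		return false;
	let h = a.orientation == "h" ? a : b;  // the horizontal street
	let v = a.orientation == "h" ? b : a;  // the vertical street
	return (   h.x < v.x                        // h starts left of v…
			&& h.x + h.length > v.x + v.width   // …and ends right of it,
			&& v.y < h.y                        // v starts above h…
			&& v.y + v.length > h.y + h.width); // …and ends below it
}

let avenue = { orientation: "h", x: 0,  y: 10, width: 2, length: 40 };
let cross  = { orientation: "v", x: 20, y: 0,  width: 2, length: 40 };
let tee    = { orientation: "v", x: 20, y: 10, width: 2, length: 30 };

isXIntersection(avenue, cross);  // true: continues on all four sides
isXIntersection(avenue, tee);    // false: a T-intersection (starts at the avenue)
```

Counting pairs of streets that satisfy this predicate would confirm the near-total absence of crossroads on the generated maps.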

Another way to look at the resulting city layout is that there is no way out of a region of the map except by traversing increasingly wide avenues. In effect, there are no “take the local streets” routes as alternatives to the major thoroughfares. This has the effect of making the map less interesting on a large scale. (Again, this is not relevant for the very specific purpose of using the map to run disjoint encounters, each of which is localized to a small sub-region; however, it would be preferable if these maps we are generating could stand up to a bit more prodding and “off-label use” without their Potemkin facades collapsing.)

Robert Moses would approve

Let’s take a third crack at the idea that wide streets should be able to cut across narrower streets, but not vice-versa. Consider what happens if we once again place streets not in descending width order, but randomly (as in figure 3), allowing a street to extend past the ends of the lot we placed it in (unlike in figure 3), but not necessarily all the way to the edges of the map (unlike in figure 4). Instead, we will extend each street, in either direction, only until it hits another street of the same width or wider (or the edge of the map).2

The full code (sans boilerplate / utility functions):

L = {
	gridWidth: window.innerWidth,
	gridHeight: window.innerHeight,

	minMonolithFootprint: 3,
	maxMonolithFootprint: 12,

	streetWidths: {
		8: 0.02,
		4: 0.08,
		2: 0.25,
		1: 0.65
	},

	streetRepulsionFactor: 10,
};

L.lots = [ { x:      0,
			 y:      0,
			 width:  L.gridWidth,
			 height: L.gridHeight,
			 left:   0,
			 right:  L.gridWidth,
			 top:    0,
			 bottom: L.gridHeight
			 } ];
L.claimedLots = [ ];
L.streets = [
	{ width: 0,
	  length: L.gridWidth,
	  orientation: "h",
	  x: 0,
	  y: 0
	  },
	{ width: 0,
	  length: L.gridHeight,
	  orientation: "v",
	  x: 0,
	  y: 0
	  },
	{ width: 0,
	  length: L.gridWidth,
	  orientation: "h",
	  x: 0,
	  y: L.gridHeight
	  },
	{ width: 0,
	  length: L.gridHeight,
	  orientation: "v",
	  x: L.gridWidth,
	  y: 0
	  }
];

function newLotsAfterStreetDividesLot(street, lot) {
	let newLots = [ ];

	if (street.orientation == "h") {
		let topLot = {
			x:      lot.x,
			y:      lot.y,
			width:  lot.width,
			height: street.y - lot.y,
			left:   lot.left,
			right:  lot.right,
			top:    lot.top,
			bottom: street.y
		};
		let bottomLot = {
			x:      lot.x,
			y:      street.y + street.width,
			width:  lot.width,
			height: lot.bottom - (street.y + street.width),
			left:   lot.left,
			right:  lot.right,
			top:    street.y + street.width,
			bottom: lot.bottom
		};

		if (topLot.width    * topLot.height    > 0)
			newLots.push(topLot);

		if (bottomLot.width * bottomLot.height > 0)
			newLots.push(bottomLot);
	} else {
		let leftLot = {
			x:      lot.x,
			y:      lot.y,
			width:  street.x - lot.x,
			height: lot.height,
			left:   lot.left,
			right:  street.x,
			top:    lot.top,
			bottom: lot.bottom
		};
		let rightLot = {
			x:      street.x + street.width,
			y:      lot.y,
			width:  lot.right - (street.x + street.width),
			height: lot.height,
			left:   street.x + street.width,
			right:  lot.right,
			top:    lot.top,
			bottom: lot.bottom
		};

		if (leftLot.width  * leftLot.height  > 0)
			newLots.push(leftLot);

		if (rightLot.width * rightLot.height > 0)
			newLots.push(rightLot);
	}

	return newLots;
}

function lotIsClaimable(lot) {
	return (   lot.width  <= L.maxMonolithFootprint
			&& lot.height <= L.maxMonolithFootprint);
}

function getRandomStreetWidth() {
	let dieSize = 100;
	let roll = rollDie(dieSize);

	let minRoll = dieSize + 1;
	for (let [ width, probability ] of Object.entries(L.streetWidths)) {
		minRoll -= probability * dieSize;
		if (roll >= minRoll)
			return parseInt(width);
	}
}

function getRandomOrientation() {
	return (rollDie(2) - 1) ? "h" : "v";
}

function getRandomLot(width, orientation) {
	let suitableLots = L.lots.filter(lot => {
		if (orientation == "h") {
			return lot.height >= 2 * L.minMonolithFootprint + width;
		} else {
			return lot.width >= 2 * L.minMonolithFootprint + width;
		}
	});

	if (suitableLots.length == 0)
		return null;
	else
		return suitableLots[rollDie(suitableLots.length) - 1];
}

function getRandomPointInLot(lot, width, orientation) {
	if (lot == null)
		return null;

	if (orientation == "h") {
		return {
			x: lot.x + Math.floor(lot.width / 2),
			y: lot.y + L.minMonolithFootprint + rollDie(lot.height - 2 * L.minMonolithFootprint - width)
		};
	} else {
		return {
			x: lot.x + L.minMonolithFootprint + rollDie(lot.width - 2 * L.minMonolithFootprint - width),
			y: lot.y + Math.floor(lot.height / 2)
		};
	}
}

function getRandomPoint(width, orientation) {
	return getRandomPointInLot(getRandomLot(width, orientation), width, orientation);
}

function getBoundingBlockAtPoint(point, options) {
	options = Object.assign({
		streetWidth: 1,
		orientation: null
	}, options);

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >= options.streetWidth 
		   	)
		&& street.orientation != options.orientation
	);

	let compareFunction = (a, b) => {
		if (a.orientation == "v") {
			if (a.x < b.x)
				return -1;
			if (a.x > b.x)
				return 1;
			return 0;
		} else {
			if (a.y < b.y)
				return -1;
			if (a.y > b.y)
				return 1;
			return 0;
		}
	};

	let nearestStreetLeft, nearestStreetRight, nearestStreetAbove, nearestStreetBelow;
	let blockLeft, blockTop, blockRight, blockBottom;

	let lotAtPoint = L.lots.find(lot => 
		   lot.left   <= point.x
		&& lot.right  >= point.x
		&& lot.top    <= point.y
		&& lot.bottom >= point.y
	);
	if (lotAtPoint == null)
		return null;

	if (options.orientation == "h") {
		nearestStreetLeft = crossStreets.filter(street => 
			   street.x <= point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).last;
		nearestStreetRight = crossStreets.filter(street => 
			   street.x > point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).first;

		blockLeft = nearestStreetLeft.x + nearestStreetLeft.width;
		blockRight = nearestStreetRight.x;

		blockTop = lotAtPoint.top;
		blockBottom = lotAtPoint.bottom;
	} else {
		nearestStreetAbove = crossStreets.filter(street =>
			   street.y <= point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).last;
		nearestStreetBelow = crossStreets.filter(street => 
			   street.y > point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).first;

		blockTop = nearestStreetAbove.y + nearestStreetAbove.width;
		blockBottom = nearestStreetBelow.y;

		blockLeft = lotAtPoint.left;
		blockRight = lotAtPoint.right;
	}

	return {
		x:      blockLeft,
		y:      blockTop,
		width:  blockRight - blockLeft,
		height: blockBottom - blockTop,
		left:   blockLeft,
		right:  blockRight,
		top:    blockTop,
		bottom: blockBottom
	}
}

function streetIntersectsLot(street, lot) {
	if (street.orientation == "h") {
		if (   street.x < lot.right
			&& street.x + street.length > lot.x
			&& street.y < lot.bottom
			&& street.y + street.width > lot.y)
			return true;
	} else {
		if (   street.y < lot.bottom
			&& street.y + street.length > lot.y
			&& street.x < lot.right
			&& street.x + street.width > lot.x)
			return true;
	}

	return false;
}

function placeRandomStreet() {
	if (L.lots.filter(lot => lot.width * lot.height > 0).length == 0)
		return false;

	let width = getRandomStreetWidth();

	let orientation = getRandomOrientation();

	let position = getRandomPoint(width, orientation);
	if (position == null)
		return true;

	let boundingBlock = getBoundingBlockAtPoint(position, {
		streetWidth: width,
		orientation: orientation
	});

	if (boundingBlock == null)
		return true;

	let length = orientation == "h" 
				 ? boundingBlock.width 
				 : boundingBlock.height;

	if (orientation == "h") {
		position.x = boundingBlock.x;
	} else {
		position.y = boundingBlock.y;
	}

	L.streets.push({
		width: width,
		length: length,
		orientation: orientation,
		x: position.x,
		y: position.y
	});

	let lotsAfterSubdividing = [ ];
	for (let lot of [ ...L.lots, ...L.claimedLots ]) {
		if (streetIntersectsLot(L.streets.last, lot)) {
			lotsAfterSubdividing.push(...newLotsAfterStreetDividesLot(L.streets.last, lot));
		} else {
			lotsAfterSubdividing.push(lot);
		}
	}
	L.lots = [ ];
	L.claimedLots = [ ];
	for (let lot of lotsAfterSubdividing) {
		if (lotIsClaimable(lot)) {
			L.claimedLots.push(lot);
		} else {
			L.lots.push(lot);
		}
	}

	return true;
}

while(placeRandomStreet());

addMonoliths(L.claimedLots);

And the result:


Figure 12. (Click to zoom in.)

Hmm. Something has gone wrong here. On closer inspection, we can see that most of what at first looks like very wide streets is actually just narrower streets that run directly adjacent to each other (or even overlap) for some part of their lengths. But how can this be, given that we explicitly check for proximity of a street to the edges of the lot that it’s placed in? Well, that’s the problem, actually: we check for proximity to the edges of the lot a street is placed in, but we’ve just decided to extend streets in both directions past the ends of the lots they’re placed in—and we aren’t checking for proximity to the edges of those other lots that the street will pass through!

Breathing room, redux

So we’ll add an explicit check for that (as usual, omitted code is unchanged):

function getBoundingBlockAtPoint(point, options) {
	options = Object.assign({
		streetWidth: 1,
		orientation: null
	}, options);

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >= options.streetWidth 
		   	)
		&& street.orientation != options.orientation
	);
	let parallelStreets = L.streets.filter(street => 
		street.orientation == options.orientation
	);

	let compareFunction = (a, b) => {
		if (a.orientation == "v") {
			if (a.x < b.x)
				return -1;
			if (a.x > b.x)
				return 1;
			return 0;
		} else {
			if (a.y < b.y)
				return -1;
			if (a.y > b.y)
				return 1;
			return 0;
		}
	};

	let nearestStreetLeft, nearestStreetRight, nearestStreetAbove, nearestStreetBelow;
	let blockLeft, blockTop, blockRight, blockBottom;

	if (options.orientation == "h") {
		nearestStreetLeft = crossStreets.filter(street => 
			   street.x <= point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).last;
		nearestStreetRight = crossStreets.filter(street => 
			   street.x > point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).first;

		blockLeft = nearestStreetLeft.x + nearestStreetLeft.width;
		blockRight = nearestStreetRight.x;

		nearestStreetAbove = parallelStreets.filter(street => 
			   street.x < blockRight
			&& street.x + street.length >= blockLeft
			&& street.y <= point.y
		).sort(compareFunction).last;
		nearestStreetBelow = parallelStreets.filter(street => 
			   street.x < blockRight
			&& street.x + street.length >= blockLeft
			&& street.y > point.y
		).sort(compareFunction).first;

		let spaceAbove = point.y - (nearestStreetAbove.y + nearestStreetAbove.width);
		let spaceBelow = nearestStreetBelow.y - (point.y + options.streetWidth);

		let minSpaceAbove = Math.max(L.minMonolithFootprint, 
									 Math.min(options.streetWidth, nearestStreetAbove.width || options.streetWidth) * L.streetRepulsionFactor);
		let minSpaceBelow = Math.max(L.minMonolithFootprint,
									 Math.min(options.streetWidth, nearestStreetBelow.width || options.streetWidth) * L.streetRepulsionFactor);

		if (   spaceAbove < minSpaceAbove
			|| spaceBelow < minSpaceBelow) {
			return null;
		} else {
			blockTop = nearestStreetAbove.y + nearestStreetAbove.width;
			blockBottom = nearestStreetBelow.y;
		}
	} else {
		nearestStreetAbove = crossStreets.filter(street =>
			   street.y <= point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).last;
		nearestStreetBelow = crossStreets.filter(street => 
			   street.y > point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).first;

		blockTop = nearestStreetAbove.y + nearestStreetAbove.width;
		blockBottom = nearestStreetBelow.y;

		nearestStreetLeft = parallelStreets.filter(street => 
			   street.y < blockBottom
			&& street.y + street.length >= blockTop
			&& street.x <= point.x
		).sort(compareFunction).last;
		nearestStreetRight = parallelStreets.filter(street => 
			   street.y < blockBottom
			&& street.y + street.length >= blockTop
			&& street.x > point.x
		).sort(compareFunction).first;

		let spaceLeft = point.x - (nearestStreetLeft.x + nearestStreetLeft.width);
		let spaceRight = nearestStreetRight.x - (point.x + options.streetWidth);

		let minSpaceLeft  = Math.max(L.minMonolithFootprint, 
									 Math.min(options.streetWidth, nearestStreetLeft.width  || options.streetWidth) * L.streetRepulsionFactor);
		let minSpaceRight = Math.max(L.minMonolithFootprint,
									 Math.min(options.streetWidth, nearestStreetRight.width || options.streetWidth) * L.streetRepulsionFactor);

		if (   spaceLeft  < minSpaceLeft
			|| spaceRight < minSpaceRight) {
			return null;
		} else {
			blockLeft = nearestStreetLeft.x + nearestStreetLeft.width;
			blockRight = nearestStreetRight.x;
		}
	}

	return {
		x:      blockLeft,
		y:      blockTop,
		width:  blockRight - blockLeft,
		height: blockBottom - blockTop,
		left:   blockLeft,
		right:  blockRight,
		top:    blockTop,
		bottom: blockBottom
	}
}

However, when we run this code, it seems to hang. On closer investigation, it’s actually running just fine—but because of the check that we’ve added, the algorithm will, at a certain point, start failing to find a suitable street placement (i.e., discovering that the randomly chosen point at which to attempt street placement is actually such that a street of the randomly chosen width and orientation cannot be placed there without passing impermissibly close to an edge of one or more lots) almost all the time, and finding a suitable placement more and more rarely. To illustrate this, we add a bit of debugging output:

function placeRandomStreet() {
	// … code omitted …

	if (boundingBlock == null) {
		console.log("Bounding block is null!");
		return true;
	}

	// … code omitted …

	console.log(`Placed a street (${L.streets.length} streets total). ${L.lots.length} lots remain.`);

	return true;
}

And this is what we see after less than a minute of runtime:


Figure 13. (Click to zoom in.)

If we wait for this to complete, we’ll be here all day…

Stopping halfway

Well, what happens if we stop the process after placing, let’s say, 4,000 streets? (The number is obviously chosen in an ad-hoc manner, and that won’t do for the actual implementation, but we’re just experimenting here.) The code still takes uncomfortably long to get even that far (around 15 seconds—much too long for a procgen like this), but:


Figure 14. (Click to zoom in.)

This seems fine. Of course most of the lots are bigger than we need them to be, but that is fixable with the subdivision technique we’ve already used. And so:


Figure 15. (Click to zoom in.)

Nice! This is basically what we’re looking for. We’ve certainly solved our problem with the lack of X-intersections. The street clumping problem remains, but we’ll get to that later. First, though: how do we fix this excessive runtime? And how can we pick a stopping point in a less ad-hoc and arbitrary way than simply picking a street count? (Which would be map-size-dependent, anyhow.)

Roll to give up

We can use probabilistic stopping. That is: whenever we pick a random point (and random street width and orientation) and fail to place a street there (because of impermissible proximity to the edges of one or more of the lots the street would cross, if extended as far as it can be), we will roll a die (a 10,000-sided die, let’s say). If the die comes up anything but 1, we’ll keep trying; if it comes up 1, we’ll give up, and declare that random street placement is concluded.3 (At this point, we subdivide all remaining too-large lots with streets of width 1, as before.)
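Why this terminates promptly: the number of consecutive failures tolerated before giving up is geometrically distributed, so with a die of size n the generator absorbs roughly n failed placement attempts on average before stopping. A quick empirical sketch (a standalone toy, with its own copy of the `rollDie` utility):

```javascript
// With stopping probability 1/n per failed attempt, the number of
// failures before giving up is geometric, with mean n - 1.
// (Standalone toy; rollDie duplicates the generator's 1..n utility.)
function rollDie(n) {
	return Math.floor(Math.random() * n) + 1;
}

function failuresUntilGiveUp(dieSize) {
	let failures = 0;
	while (rollDie(dieSize) != 1)
		failures++;
	return failures;
}

// Average over many runs should be close to dieSize - 1 (here, 99).
let trials = 10000, dieSize = 100, total = 0;
for (let i = 0; i < trials; i++)
	total += failuresUntilGiveUp(dieSize);

console.log(total / trials);
```

This is also why `keepTryingHowHard` is a much less fragile knob than a target street count: it bounds wasted effort directly, independently of map size.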

The modification to the code is trivial:

L = {
	//	… other configuration fields omitted …

	keepTryingHowHard: 10000
};

function placeRandomStreet() {
	// … code omitted …

	if (boundingBlock == null) {
		return (rollDie(L.keepTryingHowHard) != 1);
	}

	// … code omitted …
}

Figure 16. (Click to zoom in.)

Cool. Now the code finishes in a couple of seconds at most—mission accomplished.

Cornstarch added to prevent

We’re very close to perfection now, but let’s see if we can solve that “street clumping” problem. Recall that this was caused by the fact that we are selecting a random lot to place a street in, which means that if the left half of the map has 10 lots and the right half has one big lot, then new streets are 10 times more likely to be placed on the left half of the map than on the right half, which results in the left half having even more lots and thus being even more likely to have new streets placed there… etc.

Well, one obvious solution is to instead select a point at random from a uniform distribution over all grid locations. (We will, of course, have to check that the selected point falls within some lot rather than on a street, but that is easy enough.) Thus (omitted code unchanged, as usual):

function getRandomGridPoint() {
	return {
		x: rollDie(L.gridWidth) - 1,
		y: rollDie(L.gridHeight) - 1
	};
}

function pointIsOnStreet(point, street) {
	if (street.orientation == "h") {
		return (   point.x >= street.x
				&& point.x < street.x + street.length
				&& point.y >= street.y
				&& point.y < street.y + street.width);
	} else {
		return (   point.x >= street.x
				&& point.x < street.x + street.width
				&& point.y >= street.y
				&& point.y < street.y + street.length);
	}
}

function getBoundingBlockAtPoint(point, options) {
	options = Object.assign({
		streetWidth: 1,
		orientation: null
	}, options);

	if (L.streets.findIndex(street => pointIsOnStreet(point, street)) !== -1)
		return null;

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >= options.streetWidth 
		   	)
		&& street.orientation != options.orientation
	);
	let parallelStreets = L.streets.filter(street => 
		street.orientation == options.orientation
	);

	let compareFunction = (a, b) => {
		if (a.orientation == "v") {
			if (a.x < b.x)
				return -1;
			if (a.x > b.x)
				return 1;
			return 0;
		} else {
			if (a.y < b.y)
				return -1;
			if (a.y > b.y)
				return 1;
			return 0;
		}
	};

	let nearestStreetLeft, nearestStreetRight, nearestStreetAbove, nearestStreetBelow;
	let blockLeft, blockTop, blockRight, blockBottom;

	if (options.orientation == "h") {
		nearestStreetLeft = crossStreets.filter(street => 
			   street.x <= point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).last;
		nearestStreetRight = crossStreets.filter(street => 
			   street.x > point.x
			&& street.y <= point.y
			&& street.y + street.length >= point.y
		).sort(compareFunction).first;

		blockLeft = nearestStreetLeft.x + nearestStreetLeft.width;
		blockRight = nearestStreetRight.x;

		nearestStreetAbove = parallelStreets.filter(street => 
			   street.x < blockRight
			&& street.x + street.length >= blockLeft
			&& street.y <= point.y
		).sort(compareFunction).last;
		nearestStreetBelow = parallelStreets.filter(street => 
			   street.x < blockRight
			&& street.x + street.length >= blockLeft
			&& street.y > point.y
		).sort(compareFunction).first;

		let spaceAbove = point.y - (nearestStreetAbove.y + nearestStreetAbove.width);
		let spaceBelow = nearestStreetBelow.y - (point.y + options.streetWidth);

		let minSpaceAbove = Math.max(L.minMonolithFootprint, 
									 Math.min(options.streetWidth, nearestStreetAbove.width || options.streetWidth) * L.streetRepulsionFactor);
		let minSpaceBelow = Math.max(L.minMonolithFootprint,
									 Math.min(options.streetWidth, nearestStreetBelow.width || options.streetWidth) * L.streetRepulsionFactor);

		if (   spaceAbove < minSpaceAbove
			|| spaceBelow < minSpaceBelow) {
			return null;
		} else {
			blockTop = nearestStreetAbove.y + nearestStreetAbove.width;
			blockBottom = nearestStreetBelow.y;
		}
	} else {
		nearestStreetAbove = crossStreets.filter(street =>
			   street.y <= point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).last;
		nearestStreetBelow = crossStreets.filter(street => 
			   street.y > point.y
			&& street.x <= point.x
			&& street.x + street.length >= point.x
		).sort(compareFunction).first;

		blockTop = nearestStreetAbove.y + nearestStreetAbove.width;
		blockBottom = nearestStreetBelow.y;

		nearestStreetLeft = parallelStreets.filter(street => 
			   street.y < blockBottom
			&& street.y + street.length >= blockTop
			&& street.x <= point.x
		).sort(compareFunction).last;
		nearestStreetRight = parallelStreets.filter(street => 
			   street.y < blockBottom
			&& street.y + street.length >= blockTop
			&& street.x > point.x
		).sort(compareFunction).first;

		let spaceLeft = point.x - (nearestStreetLeft.x + nearestStreetLeft.width);
		let spaceRight = nearestStreetRight.x - (point.x + options.streetWidth);

		let minSpaceLeft  = Math.max(L.minMonolithFootprint, 
									 Math.min(options.streetWidth, nearestStreetLeft.width  || options.streetWidth) * L.streetRepulsionFactor);
		let minSpaceRight = Math.max(L.minMonolithFootprint,
									 Math.min(options.streetWidth, nearestStreetRight.width || options.streetWidth) * L.streetRepulsionFactor);

		if (   spaceLeft  < minSpaceLeft
			|| spaceRight < minSpaceRight) {
			return null;
		} else {
			blockLeft = nearestStreetLeft.x + nearestStreetLeft.width;
			blockRight = nearestStreetRight.x;
		}
	}

	return {
		x:      blockLeft,
		y:      blockTop,
		width:  blockRight - blockLeft,
		height: blockBottom - blockTop,
		left:   blockLeft,
		right:  blockRight,
		top:    blockTop,
		bottom: blockBottom
	}
}

Figure 17. (Click to zoom in.)

Excellent. The street clumping problem has been banished. We have the result we want.

Optional Manhattanization

One final thought: in our current implementation, we allow streets to cut across existing streets that are narrower than the street being placed. What if we tweaked this condition? The relevant part of the code is this, in the getBoundingBlockAtPoint() function:

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >= options.streetWidth 
		   	)
		&& street.orientation != options.orientation
	);

Here’s the kind of thing we get if we allow streets to cut across existing streets that are narrower or as narrow:

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >  options.streetWidth 
		   	)
		&& street.orientation != options.orientation
	);

Figure 18. (Click to zoom in.)

And here’s what we get if we allow streets to cut across existing streets that are at most twice as wide:

	let crossStreets = L.streets.filter(street => 
		   (   street.width == 0
		    || street.width >  options.streetWidth * 2
		   	)
		&& street.orientation != options.orientation
	);

Figure 19. (Click to zoom in.)

Probably this isn’t very useful, but the option is there, if we want it.

(If we allow streets to cut across existing streets of any width, we simply get figure 4, of course.)
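The three variants differ only in the width threshold, so the condition can be factored out into a single parameterized filter. This is an illustrative refactoring, not part of the original code (the name crossStreetFilter and the cutAcrossRatio parameter are my own, and the strict-vs.-non-strict comparison at the boundary is glossed over here):

```javascript
// Hypothetical refactoring: all three cross-street filters, parameterized by
// a "cut-across ratio". A ratio of 1 reproduces the default behavior (cut
// across strictly narrower streets); 2 lets a street cut across anything up
// to twice its own width. Width-0 entries (presumably the map-edge
// pseudo-streets) are always treated as uncrossable, as in the original.
function crossStreetFilter(L, options, cutAcrossRatio = 1) {
	return L.streets.filter(street =>
		   (   street.width == 0
		    || street.width >= options.streetWidth * cutAcrossRatio)
		&& street.orientation != options.orientation
	);
}
```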

The algorithm

It may seem otherwise from the lengthy discussion thus far, but this really is a simple approach. (One might even call it “simplistic”. There isn’t anything very clever about this algorithm, and there’s no particular reason to be impressed by it. But it works well.)

  1. Start trying to place streets of random width and orientation at random points on the grid, checking to make sure they don’t run too close to any other streets. Each time you fail to place a street, roll a big die; if you roll a 1, give up and go to the next step.
  2. Find any lots (i.e., blocks of un-divided space) that are bigger than the minimum lot size, and divide them into two lots (at a random split point, down the shorter axis of the lot). Keep doing this until there are no too-big lots remaining.
  3. You’re done! Render to the output format, display, save, whatever.
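The steps above can be sketched in miniature. The following is a simplified toy version, not the actual generator: streets here are zero-width, full-length grid lines, proximity is only checked against parallel streets, and all names and constants are illustrative assumptions:

```javascript
// Toy sketch of the two-phase algorithm. Phase 1: scatter streets at random,
// rejecting any that run too close to an existing parallel street; on each
// failure, roll a "big die", and give up when it comes up 1. Phase 2: carve
// the grid into blocks along the streets, then recursively split any block
// larger than the minimum lot size, cutting the longer dimension at a random
// interior point (so the cut runs parallel to the lot's shorter axis).

const GRID = 1000;        // map is GRID × GRID units
const MIN_LOT = 40;       // maximum lot dimension before it must be split
const MIN_GAP = 20;       // streets may not run closer than this
const GIVE_UP_DIE = 100;  // a roll of 1 on d100 ends street placement

const rand = n => Math.floor(Math.random() * n);

let streets = [];
while (true) {
	let street = {
		orientation: Math.random() < 0.5 ? "h" : "v",
		pos: rand(GRID)   // y for horizontal streets, x for vertical ones
	};
	let tooClose = streets.some(s =>
		   s.orientation == street.orientation
		&& Math.abs(s.pos - street.pos) < MIN_GAP);
	if (!tooClose)
		streets.push(street);
	else if (rand(GIVE_UP_DIE) == 0)   // rolled a 1: stop trying
		break;
}

function subdivide(lot) {
	if (lot.w <= MIN_LOT && lot.h <= MIN_LOT)
		return [lot];
	if (lot.w >= lot.h) {   // cut the longer (horizontal) dimension
		let cut = lot.x + MIN_LOT / 2 + rand(Math.max(1, lot.w - MIN_LOT));
		return [ ...subdivide({ x: lot.x, y: lot.y, w: cut - lot.x,         h: lot.h }),
		         ...subdivide({ x: cut,   y: lot.y, w: lot.x + lot.w - cut, h: lot.h }) ];
	} else {                // cut the longer (vertical) dimension
		let cut = lot.y + MIN_LOT / 2 + rand(Math.max(1, lot.h - MIN_LOT));
		return [ ...subdivide({ x: lot.x, y: lot.y, w: lot.w, h: cut - lot.y }),
		         ...subdivide({ x: lot.x, y: cut,   w: lot.w, h: lot.y + lot.h - cut }) ];
	}
}

let xCuts = [ 0, ...streets.filter(s => s.orientation == "v").map(s => s.pos).sort((a, b) => a - b), GRID ];
let yCuts = [ 0, ...streets.filter(s => s.orientation == "h").map(s => s.pos).sort((a, b) => a - b), GRID ];
let lots = [];
for (let i = 0; i + 1 < xCuts.length; i++)
	for (let j = 0; j + 1 < yCuts.length; j++)
		lots.push(...subdivide({ x: xCuts[i], y: yCuts[j],
		                         w: xCuts[i + 1] - xCuts[i], h: yCuts[j + 1] - yCuts[j] }));
```

(The real generator of course gives streets nonzero widths and finite lengths, and checks proximity against perpendicular streets too; this sketch only captures the two-phase shape of the process.)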

Implementation details

How should we save these maps, by the way? I’ve chosen SVG for the output format, because it’s widely supported, editable, usable for other purposes (see the next part of this series), and convertible into bitmap format, if need be. I see no compelling arguments for doing it in any other way.

There is one small detail of the output format which is a bit less obvious. What exactly should be represented? “Just monoliths” is one possible answer; “just streets” is another; “both” is a third. (Note that this question arises only because we’re using a vector file format; in a bitmap image, these three cases are completely indistinguishable given the constraints we’ve imposed on the generated map.)

I think that monoliths have to be represented (because it is otherwise unnecessarily tedious to derive the set of monolith coordinates from the set of street parameters), but whether streets also need to be represented is an open question, the answer to which depends on the use to which the SVG map file will be put. I have opted not to include streets by default, to reduce SVG file complexity (SVGs with many elements take longer to render in browsers or other viewing programs), but you can override that default by passing true to the saveSVG() function.
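For illustration, serializing such a map could look something like the following sketch. This is not the actual saveSVG() implementation; the function name mapToSVG and the rectangle-per-monolith representation are assumptions:

```javascript
// Hypothetical sketch: emit one <rect> per monolith; optionally also emit
// streets as stroked rectangles. With includeStreets = false (the default
// discussed above), streets remain implicit as negative space.
function mapToSVG(monoliths, streets, includeStreets = false) {
	let elements = monoliths.map(m =>
		`<rect x="${m.x}" y="${m.y}" width="${m.width}" height="${m.height}" fill="black"/>`);
	if (includeStreets)
		elements.push(...streets.map(s =>
			`<rect x="${s.x}" y="${s.y}" width="${s.width}" height="${s.height}" fill="none" stroke="gray"/>`));
	return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000">\n`
	     + elements.join("\n")
	     + `\n</svg>`;
}
```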

The demo

You can try out the generator for yourself:

https://dnd.obormot.net/lacc/

Note that there’s no GUI. To generate a new map, simply reload the page. To save the generated map, open the JavaScript console and run the saveSVG() function. (Pass true to said function if you want streets to be included as distinct elements in the SVG; by default, only monoliths will be included, with streets left implicit as the negative space between monoliths.)

The code

Download the code (zip file). (Everything in the package is MIT licensed.)

1 This is pure speculation on my part, but I can’t help but suspect that the fact that the 5th edition of D&D features so many changes (relative to 3rd and earlier editions) like making fireball have a range of 150 feet, instead of “400 feet + 40 feet per caster level”, is at least partly caused by the increasing popularity of virtual tabletop software—almost all existing examples of which have very poor (or nonexistent) support for large combat maps—for running tactical combats. (Of course, I have no intention of passively accepting that state of affairs. The third part of this series of posts will be about remedying this very deficiency.)

2 We might think of this as simulating a progressive process of urban development, where, when a street is built, it connects to streets of equal or greater width, and cuts across narrower streets. That is, narrower cross streets are not taken into consideration when deciding how far a street should stretch, presumably because the goal is to ensure that traffic flow along the street integrates properly into existing traffic flows in the rest of the city, and a street that does not connect to streets of equal or greater width would cause traffic jams.

3 This problem of identifying a stopping point is akin to looking for a black cat in a dark room—if we fail to find the cat, is that because the cat isn’t there, or because it’s merely very well hidden (the room is dark, after all, and the cat doesn’t exactly stand out)? If we keep searching, will we find the cat eventually? Of course, in our situation, there are many cats (how many? we don’t know), and whenever we’ve found one, we can’t be sure that it’s the last one. We certainly don’t want to find ourselves in the position of having found what, unbeknownst to us, is the final cat, yet continuing to look—in vain, if we but knew it—never knowing when to stop, while the cats we’ve already found behold our doomed efforts with feline indifference…

June 13, 2024

Extreme D&D DIY: adventures in hypergeometry, procedural generation, and software development (part 1)

The scenario:

In the depths of the Plane of Shadow stands Lacc, an ancient, unimaginably vast city inhabited by restless shades and creatures of darkness. This City of Monoliths consumes entire worlds, incorporating each world into itself as the monoliths that give Lacc its nickname grind their way across the landscape, expanding like a growing ink-stain upon a planet’s surface.

The important question, of course, is: how should I model this process mathematically?

The scenario described above comes from an adventure module called “Gates of Oblivion” (by Alec Austin), published in Dungeon magazine #136. The module states that “The area consumed by Lacc expands at a rate of half a mile per hour”. This is clear enough, but: (a) it’s entirely the wrong sort of rate function for the needs of the adventure; (b) it’s much too arbitrary; and (c) it’s much too straightforward and thus boring.

Elaborating on each of these:

The wrong rate function. In my adaptation of this module, Tenaris Xin (ruler of the City of Monoliths) is carrying out a ritual that, if completed, will incorporate the player characters’ world into Lacc. Xin begins the ritual when Lacc makes contact with the PCs’ world (and the PCs should learn of Lacc’s incursion shortly thereafter); the completion date should (from a design standpoint) be far enough into the future that the PCs have time to hear about the strange zone of darkness which has begun to spread across the land, investigate, take some time to explore Lacc, discover the nature of the threat, and finally carry out a successful raid of Xin’s redoubt to stop the ritual and save their world. A reasonable time scale would be somewhere between a week and a month (probably closer to the latter); shorter, and it’s too easy for the PCs to end up feeling like they never had a chance and were set up to fail, while a longer time scale would undermine the sense that doom approaches and it’s up to the PCs to stop it.

Now, the adventure as written lacks the “ritual to be completed in a certain time period” aspect; instead, Tenaris Xin is totally passive (he’s basically just waiting in his tower for the PCs to show up and kill him), and Lacc is an automatic threat—it expands at the aforementioned fixed rate, and will eventually (at an absolutely predictable time) consume everything that the PCs value, if not stopped. When is “eventually”? Well, Lacc’s placement (see below) determines when any particular city or other locale of interest is consumed, but the entire world (assuming an Earth-sized planet) will take almost three years to fully incorporate. That is simultaneously too fast and too slow: too fast, because it means that there’s no need for any active measures on Xin’s part (consistent with the adventure as presented—Xin can just sit back and wait), but too slow, because a time scale of years makes it hard to have any sense of urgency about the problem.
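The “almost three years” figure is easy to verify (assuming an Earth-sized planet; the numbers below are just that arithmetic, not anything from the module):

```javascript
// At a fixed 0.5 miles/hour of radius growth, the whole planet is consumed
// when the zone's arc-radius reaches the antipode of the contact point,
// i.e. half the planet's circumference.
const planetRadius = 3959;                        // miles (Earth-sized)
const antipodalDistance = Math.PI * planetRadius; // ≈ 12,438 miles
const hoursToDoom = antipodalDistance / 0.5;      // ≈ 24,875 hours
const yearsToDoom = hoursToDoom / (24 * 365.25);  // ≈ 2.8 years
```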

The other problem with a fixed half-mile-per-hour rate of radius increase is that it can make Lacc’s placement feel arbitrary.

Too arbitrary. That is: where on the planet’s surface should Lacc’s incursion begin? From an authorial standpoint the answer is obviously “wherever the DM wants it to”, but what motivates the choice, and how will that choice appear both to the PCs (from an in-character perspective) and to the players (from a metagame perspective)? For instance, if the zone of darkness starts 100 miles away from the city of Greyhawk, then Lacc will consume Greyhawk exactly 200 hours after it arrives. What that looks like to the PCs is nothing: it’s an apparently arbitrary location. (If Xin were targeting Greyhawk for early assimilation, surely he could’ve aimed better? What’s a hundred miles away? Nothing in particular…) What it looks like to the players, meanwhile, is that the DM has decided, by fiat, that their characters have 200 hours to complete the adventure, and has placed Lacc’s point of contact accordingly. That’s not great for immersion and roleplaying.

Too straightforward. Every aspect of a well-designed adventure should contribute to the feel of it, the atmosphere; the perceived theme should be consistent. The City of Monoliths should feel like something vast and ancient and powerful, lurking at the edge of comprehension, existing on a scale that daunts even high-level characters. Lacc’s rate of expansion, trivial though this detail may seem, should serve that thematic purpose. It should seem to the players like there is a reason why it is thus and not otherwise, some mystery which they could perhaps unravel (though they don’t necessarily need to do so to achieve their goals), and that reason should play into the sense of Lacc’s otherworldly vastness.

The “half a mile per hour” fixed radius growth rate clearly fails to serve this purpose. It is obviously arbitrary; there’s no reason why it should be that, and not some other number; indeed, there’s no reason for a fixed radius growth rate at all. It’s immediately clear that there’s no deeper sense behind it. It’s not the most severe design sin, but we can do better.

Importantly, simply changing the rate of expansion wouldn’t fix these problems—if it’s a mile of radius growth per hour, or a mile per day, or ten miles per hour, the trouble is much the same. (The details would change: at ten miles per hour, the entire planet is consumed within just under two months, and an entire continent within perhaps 1–2 weeks. This is much too fast; it gives the PCs no chance to truly win, unless the point of origin for Lacc’s incursion is placed on some uninhabited continent. A rate could be found that will give the PCs just enough time… but the problems of arbitrariness and straightforwardness remain unchanged.) The rate function must be different. A linear rate of radius increase simply will not cut it.

What is the geometry of the Plane of Shadow?

Most planes (certainly including the Inner Planes) are said to be infinite in all directions (cf. the Manual of the Planes)—but how many directions is “all”? While considering this, I recalled certain suggestions (e.g. this blog post, Dragon magazine #8, #17, #38, etc.) that the planes exist in four spatial dimensions. This makes sense: if there’s only one Plane of Shadow, say, for all the Material Planes in existence (and such is indeed the canonical depiction of the planes in D&D), there will hardly be enough of it to contain everything that it would need to have (i.e., shadow reflections of Material Plane locations and inhabitants, for multiple—possibly infinitely many—Material Planes), if its dimensionality is no greater than that of a Material Plane.

This gave me the idea of modeling the manifestation of the City of Monoliths upon the Prime Material Plane as an intrusion of a four-dimensional object into a three-dimensional space. In this model, Lacc exists in four spatial dimensions, of which three are “lateral” (“north”, “south”, “east”, “west”, “kata”, “ana”) and one is the usual up–down. An incursion of Lacc onto the planet where our player characters reside will have the shape of an expanding hypersphere in four-dimensional space, the three-dimensional surface of which contains two “lateral” spatial dimensions and the one “vertical” dimension; at the point of contact with the PCs’ three-dimensional space, the contents of this surface will “flow” out into that space, the two “lateral” spatial dimensions of Lacc expanding onto the planet’s surface, and constituting the growing zone of darkness and inexorably advancing stone monoliths that the world’s inhabitants behold.


Consider first the one-dimension-lower analogue of the situation. We have two parallel planes (2D spaces), call them {$M$} and {$L$}, separated by a distance of {$d$} in the three-dimensional space within which they’re both embedded (figure 1); these are our analogues of the Prime Material Plane, and the city of Lacc in the Plane of Shadow, respectively. In {$M$} is a circle {$C_{M}$}; this is our analogue of a three-dimensional planet. (Note that the exterior of a circle is a finite one-dimensional space, just as the exterior of a sphere is a finite two-dimensional space.)


Figure 1.

At a point {$P_{M}$} on {$C_{M}$}, we construct a line, perpendicular to plane {$M$}; designate the point where this line intersects plane {$L$} as {$P_{L}$} (figure 2).


Figure 2. The length of {$\overline{P_{M}P_{L}}$} is equal to {$d$}.

Now we construct a sphere {$S$}, centered at {$P_{L}$} (figure 3). The radius {$r_{S}$} of sphere {$S$} begins at 0 and increases as time passes, the increase going as the square root of time {$t$} (with a constant growth rate coefficient {$k$}). (Why the square root? We’ll come to that. Note that this naturally means that the surface area of {$S$}—which is proportional to the square of the radius—will increase linearly with {$t$}.)


Figure 3. Sphere {$S$} at some time {$0<t<t_{M}$}; {$0<r_{S}<d$}.

At some {$t_{M}$}, the radius {$r_{S}$} of sphere {$S$} will be equal to the distance {$d$} that separates planes {$M$} and {$L$}. At this time, {$S$} will be tangent to {$M$} (figure 4). The point of contact will of course be {$P_{M}$} (which, recall, is a point on the exterior of the circle {$C_{M}$} in plane {$M$}).


Figure 4. Sphere {$S$} at {$t=t_{M}$}; {$r_{S}=d$}.

As time continues to pass and {$r_{S}$} keeps increasing, the sphere {$S$} will intersect plane {$M$}, which will cut off a spherical cap {$\mathit{CAP}_{S}$} (figures 5 and 6).


Figure 5. Sphere {$S$} at {$t_{M}<t<t_{total\ doom}$}; {$r_{S}>d$}; {$A(\mathit{CAP}_{S})>0$}; {$0<r_{C_{S}}<2r_{C_{M}}$}.

The shape of the intersection of {$S$} with {$M$} is of course a circle, but we are not interested in that. As noted before, the contents of the surface of the expanding sphere (or hypersphere, in the actual model) should “flow” into the Material Plane. Thus what we’re actually interested in is the surface area of {$\mathit{CAP}_{S}$}. We will map that area onto plane {$M$} by drawing a circle {$C_{S}$}, centered at point {$P_{M}$}, with area equal to the surface area {$A(\mathit{CAP}_{S})$} (figure 7).


Figure 6. {$\mathit{CAP}_{S}$}.

Figure 7. Orthogonal view onto plane {$M$}. Note that the smaller of the two circles centered at {$P_{M}$} represents the intersection of {$S$} with {$M$} (i.e., the base of {$\mathit{CAP}_{S}$}), and is not what we are interested in. Rather, we are concerned with {$C_{S}$}, the circle with area equal to {$A(\mathit{CAP}_{S})$}.

As {$t$} increases, {$A(\mathit{CAP}_{S})$}, and thus the area {$A(C_{S})$} of circle {$C_{S}$}, will increase approximately linearly with time as well (after {$t_{M}$}, that is; prior to {$t_{M}$}, {$A(\mathit{CAP}_{S})$} is of course zero); the radius {$r_{C_{S}}$} of {$C_{S}$} will increase approximately as the square root of {$t$}. As {$C_{S}$} grows (remember that it is centered on point {$P_{M}$} on the exterior of {$C_{M}$}), it will encompass an increasingly large sector of the circumference of {$C_{M}$} (figure 8), until a time {$t_{total\ doom}$} is reached when {$C_{S}$} fully contains {$C_{M}$}. (As {$C_{M}$} is our two-dimensional analogue of a planet, its circumference is the one-dimensional analogue of the planet’s surface.)


Figure 8. One-half of the arc of {$C_{M}$} which is encompassed by {$C_{S}$} is {$l_{semiarc}$}. This is the arc of {$C_{M}$} which is cut off by a chord with one endpoint at {$P_{M}$} and length equal to {$r_{C_{S}}$}.

We can now derive the formula for the arc-length of the encompassed sector of {$C_{M}$}.

Radius of the expanding sphere {$S$}:

{$$ r_{S}=k\sqrt{t} $$}

Surface area of the spherical cap {$\mathit{CAP}_{S}$}:

{$$ A(\mathit{CAP}_{S})=2πr_{S}(r_{S}-d) $$}

(This value, and all subsequent equations that depend on it, will be zero when {$r_{S}\leq d$}, i.e. when {$S$} has not yet grown large enough to intersect {$M$}.)

Radius of circle with area equal to {$A(\mathit{CAP}_{S})$}:

{$$ r_{C_{S}}=\sqrt{\frac{A(\mathit{CAP}_{S})}{2π}} $$}

Arc-length of one-half of the arc of {$C_{M}$} which is contained within {$C_{S}$}:

{$$ l_{semiarc}=2r_{C_{M}}\cdot asin\left(\frac{r_{C_{S}}}{2r_{C_{M}}}\right) $$}

Substituting and simplifying:

{$$ l_{semiarc}=2r_{C_{M}}\cdot asin\left(\frac{\sqrt{k\sqrt{t}(k\sqrt{t}-d)}}{2r_{C_{M}}}\right) $$}
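In code, the whole 2D chain reads as follows (a sketch for numerical checking; the values of k, d, and r_CM used in the test are arbitrary illustrations, not the adventure’s numbers):

```javascript
// l_semiarc as a function of t, following the derivation step by step.
function semiarc(t, k, d, rCM) {
	const rS = k * Math.sqrt(t);        // radius of the expanding sphere S
	if (rS <= d) return 0;              // S has not yet reached plane M
	const capArea = 2 * Math.PI * rS * (rS - d);     // A(CAP_S)
	const rCS = Math.sqrt(capArea / (2 * Math.PI));  // circle of equal area
	return 2 * rCM * Math.asin(rCS / (2 * rCM));     // valid while rCS ≤ 2·rCM
}
```

With k = 1 and d = 2, contact occurs at {$t_{M}=(d/k)^{2}=4$}, and the semiarc grows from zero thereafter.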

We can see that the value of {$t_{total\ doom}$} (which will occur when {$l_{semiarc}$} equals one-half the circumference of {$C_{M}$}) naturally depends on three variables:

  • the planar separation distance {$d$} between {$M$} and {$L$}
  • the growth rate coefficient {$k$} of {$r_{S}$} (a constant factor applied to the equation which relates {$r_{S}$} to {$t$})
  • the radius {$r_{C_{M}}$} of circle {$C_{M}$} (which is the 2-dimensional analogue of the PCs’ home planet)

So that was the simplified (one-dimension-lower) version. Now we must add one dimension to the above considerations, which, unsurprisingly, makes our task rather more tricky. (There will be fewer diagrams in this part, as I do not have an easy way to depict four-dimensional spaces in a two-dimensional medium.)

We have two parallel three-dimensional spaces, again called {$M$} and {$L$}, separated by a distance of {$d$} in the four-dimensional space within which they’re both embedded; these are the Prime Material Plane, and the city of Lacc in the Plane of Shadow. In {$M$} is a sphere {$S_{M}$} (the world of the player characters). At a point {$P_{M}$} on {$S_{M}$}, we construct a line, perpendicular to space {$M$}; designate the point where this line intersects space {$L$} as {$P_{L}$}. Now we construct a hypersphere (specifically, a 3-sphere) {$H$}, centered at {$P_{L}$}. The radius {$r_{H}$} of hypersphere {$H$} begins at 0 and increases as time passes, the increase going as the cube root of time {$t$} (with a constant growth rate coefficient {$k$}). (This naturally means that the three-dimensional surface area of {$H$} will increase linearly with {$t$}.)

At some point {$t_{M}$}, the radius {$r_{H}$} of hypersphere {$H$} will be equal to the distance {$d$} that separates spaces {$M$} and {$L$}. At this time, {$H$} will be tangent to {$M$}. The point of contact will of course be {$P_{M}$} (which, recall again, is a point on the surface of the sphere {$S_{M}$} in space {$M$}). As time continues to pass and {$r_{H}$} keeps increasing, the hypersphere {$H$} will intersect space {$M$}, which will cut off a hyperspherical cap {$\mathit{CAP}_{H}$}.

The shape of the intersection of {$H$} with {$M$} is of course a sphere, but once again that does not interest us; in our scenario, the contents of the three-dimensional surface of the hypersphere should flow into the Material Plane (i.e., space {$M$}). Thus what interests us is the (three-dimensional) surface area of {$\mathit{CAP}_{H}$}. We will map that area onto space {$M$} by drawing a sphere {$S_{H}$}, centered at point {$P_{M}$}, with a volume {$V(S_{H})$} equal to the (3D) surface area {$A(\mathit{CAP}_{H})$}. As {$t$} increases, {$A(\mathit{CAP}_{H})$}, and thus the volume {$V(S_{H})$} of sphere {$S_{H}$}, will increase approximately linearly with time as well (but, again, only after {$t_{M}$}, prior to which {$A(\mathit{CAP}_{H})$} is zero); the radius {$r_{S_{H}}$} of {$S_{H}$} will increase approximately as the cube root of {$t$}. As {$S_{H}$} (which is centered on point {$P_{M}$} on the surface of {$S_{M}$}) grows, it will encompass an increasingly large (circular) sector of the surface of {$S_{M}$}, until a time {$t_{total\ doom}$} is reached when {$S_{H}$} fully contains {$S_{M}$} (i.e., totally encompasses the planet).

So, in order to determine the rate of Lacc’s expansion across the surface of the PCs’ world, we need to determine {$r_{sector}$}, the arc-radius of the circular sector of {$S_{M}$} which is subsumed by {$S_{H}$} at any given time {$t$}. This depends on the radius of the sphere {$S_{H}$}, which we can trivially determine from the volume {$V(S_{H})$}; and that, in turn, is stipulated to be equal to the (three-dimensional) surface area {$A(\mathit{CAP}_{H})$}.

Determining the (3D) surface area of a hyperspherical cap turns out to be a tricky problem, which seems to have been solved in closed form only surprisingly recently. The most recent (and, indeed, only) source I was able to find is a paper called “Concise Formulas for the Area and Volume of a Hyperspherical Cap”, by Shengqiao Li, published in 2011 in the Asian Journal of Mathematics and Statistics. The formula which Li gives for the surface area of a hyperspherical cap makes use of a function called the regularized incomplete beta function:

{$$ A(\mathit{CAP}_{H})=\frac{1}{2}\cdot A(H)\cdot I_{sin^{2}\phi}\left(\frac{n-1}{2},\frac{1}{2}\right) $$}

(Note that {$\phi$} here is one-half the arc angle of {$\mathit{CAP}_{H}$}; this is also known as the colatitude angle.)

As used here, {$n$} is the number of dimensions of the space in which the sphere is embedded: a circle (a 1-sphere) is embedded in two dimensions; a regular sphere, e.g. a planet or a basketball (a 2-sphere), in three; and a hypersphere, i.e. a 3-sphere, in four, so in our case {$n=4$}. Li helpfully gives equivalent forms for this function, for integer {$n\in [1,4]$}. For {$n=4$}, the formula thus becomes:

{$$ A(\mathit{CAP}_{H})=\frac{1}{2}\cdot A(H)\cdot \frac{2\phi-sin(2\phi)}{π} $$}

We can now derive the formula which gives us {$r_{sector}$} (the arc-radius on the planet’s surface of the expanding zone of Lacc’s incursion into the PCs’ world) as a function of {$t$} (time in days).

Radius of the expanding hypersphere {$H$}:

{$$ r_{H}=k\sqrt[3]{t} $$}

Surface area (3D) of {$H$}:

{$$ A(H)=2π^{2}r_{H}^{3} $$}

Colatitude angle of hyperspherical cap {$\mathit{CAP}_{H}$}:

{$$ \phi=acos\left(\frac{d}{r_{H}}\right) $$}

Surface area (3D) of hyperspherical cap {$\mathit{CAP}_{H}$}:

{$$ A(\mathit{CAP}_{H})=\frac{1}{2}\cdot A(H)\cdot (\frac{2\phi-sin(2\phi)}{π}) $$}

Radius of sphere with volume equal to {$A(\mathit{CAP}_{H})$}:

{$$ r_{S_{H}}=\sqrt[3]{\frac{3}{4π}\cdot A(\mathit{CAP}_{H})} $$}

Arc-radius of the circle on the surface of {$S_{M}$} encompassed by {$S_{H}$}:

{$$ r_{sector}=2r_{S_{M}}\cdot asin\left(\frac{r_{S_{H}}}{2r_{S_{M}}}\right) $$}

Substituting and simplifying:

{$$ r_{sector}=2r_{S_{M}}\cdot asin\left(\frac{\sqrt[3]{\frac{3}{4π}\cdot A(\mathit{CAP}_{H})}}{2r_{S_{M}}}\right) $$}

{$$ =2r_{S_{M}}\cdot asin\left(\frac{\sqrt[3]{\frac{3}{8π}\cdot A(H)\cdot (\frac{2\phi-sin(2\phi)}{π})}}{2r_{S_{M}}}\right) $$}

{$$ =2r_{S_{M}}\cdot asin\left(\frac{\sqrt[3]{\frac{3}{8π^2}\cdot A(H)\cdot \left(2\cdot acos\left(\frac{d}{r_{H}}\right)-sin\left(2\cdot acos\left(\frac{d}{r_{H}}\right)\right)\right)}}{2r_{S_{M}}}\right) $$}

{$$ =2r_{S_{M}}\cdot asin\left(\frac{\sqrt[3]{\frac{3}{4}r_{H}^{3}\cdot \left(2\cdot acos\left(\frac{d}{r_{H}}\right)-sin\left(2\cdot acos\left(\frac{d}{r_{H}}\right)\right)\right)}}{2r_{S_{M}}}\right) $$}

{$$ =2r_{S_{M}}\cdot asin\left(\frac{k}{2r_{S_{M}}}\cdot \sqrt[3]{\frac{3}{4}t\cdot \left(2\cdot acos\left(\frac{d}{k\sqrt[3]{t}}\right)-sin\left(2\cdot acos\left(\frac{d}{k\sqrt[3]{t}}\right)\right)\right)}\right) $$}
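For checking purposes, the chain can be evaluated numerically straight from the step-by-step formulas for {$r_{H}$}, {$A(H)$}, {$\phi$}, {$A(\mathit{CAP}_{H})$}, and {$r_{S_{H}}$} above (the k and d values in the test are placeholders, not the adventure’s final numbers):

```javascript
// r_sector as a function of t, evaluated step by step.
function rSector(t, k, d, rSM) {
	const rH = k * Math.cbrt(t);                  // radius of hypersphere H
	if (rH <= d) return 0;                        // H has not yet reached M
	const AH = 2 * Math.PI ** 2 * rH ** 3;        // 3D surface area of H
	const phi = Math.acos(d / rH);                // colatitude angle of CAP_H
	const ACap = 0.5 * AH * (2 * phi - Math.sin(2 * phi)) / Math.PI;
	const rSH = Math.cbrt((3 / (4 * Math.PI)) * ACap);  // sphere of volume A(CAP_H)
	return 2 * rSM * Math.asin(rSH / (2 * rSM));  // arc-radius on the planet
}
```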

Once again the value of {$t_{total\ doom}$} (and, more generally, the relationship of the time {$t$} to the radius {$r_{sector}$} of the sector of {$S_{M}$} encompassed by the growing sphere {$S_{H}$} of Lacc’s incursion into the Prime Material Plane) depends on three variables:

  • the spatial separation distance {$d$} between {$M$} and {$L$}
  • the growth rate coefficient {$k$} of {$r_{H}$}
  • the radius {$r_{S_{M}}$} of sphere {$S_{M}$} (i.e., of the PCs’ home planet)

It is customary to assume that the worlds of most D&D settings are approximately Earth-like in basic characteristics, such as size, mass, etc.1 We will therefore set {$r_{S_{M}}$} to match that of Earth: 3959 miles.

That leaves the rate of growth {$k$} and the spatial separation distance {$d$}. These are essentially free variables, in that they are not constrained by any aspect of the model we’ve described so far, nor is there (yet) any obvious reason why either of them should take on any particular value rather than any other. This leaves us free to choose the values of both variables on the basis of adventure design considerations.

As noted previously, we would like to give the PCs a reasonable but not excessive amount of time (namely, somewhere from a week to a month, probably closer to the latter; let’s now make a concrete choice, and name three weeks as our critical time frame) to complete the adventure. Note that this period starts not at {$t=0$}, but rather at {$t=t_{M}$}, i.e. the time at which the hypersphere radius {$r_{H}$} is just large enough for the hypersphere {$H$} to make contact with space {$M$} (the Prime Material Plane). Likewise, the end of the critical period (i.e., the time at which, if the PCs have not defeated Tenaris Xin, they fail in their quest and their world is forfeit) occurs not at {$t=t_{total\ doom}$} (the hypothetical—and, as it turns out, irrelevant—time at which the sphere {$S_{H}$} would encompass the entirety of {$S_{M}$}); rather, it’s the moment at which Xin completes his ritual (whereupon the PCs’ world is absorbed into Lacc immediately).

Obviously, we may stipulate Xin’s ritual to take however long we wish it to take. If we say it takes three weeks to complete, then that’s how long it takes. The task is to make the ritual’s casting time (and, ideally, every other detail of the adventure setup) seem to be non-arbitrary.2 There’s also another, related, design goal: to communicate to the players (i.e., to give their characters a way to intuit) that they are facing a concrete deadline (and what the time scale of that deadline is).

Here is how we will do it. In “Gates of Oblivion”, the means by which Lacc consumes a world is linked to three magical gates (the Colorless Gate, the Empty Gate, and the Hollow Gate); these are arranged in an equilateral triangle (figure 9), with Tenaris Xin’s tower at the center, equidistant from all three. Each Gate is exactly 16 miles from Xin’s tower. (This results in a triangle with sides approximately 27.7 miles long.)


Figure 9. The three magical Gates and the tower of Tenaris Xin, in Lacc, the City of Monoliths.
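The quoted side length is just the circumradius relation for an equilateral triangle (side = R√3):

```javascript
// Three Gates equally spaced on a circle of radius 16 miles around the tower.
const gateDistance = 16;                            // miles, Gate to tower
const triangleSide = gateDistance * Math.sqrt(3);   // ≈ 27.71 miles
```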

We are now going to slightly modify the setup described above. Instead of a single point {$P_{M}$} on the planet’s surface, we identify three points, arranged in an equilateral triangle, 27.7 miles to the side. From each of these points, we will construct a line perpendicular to {$M$}, then a hypersphere with center at the point where the line intersects {$L$}, etc., all as described previously. Thus we will have not one but three expanding zones of darkness (figure 10), as Lacc will make contact with the world of the PCs at three points (the vertices of the triangle).


Figure 10. Not one, but three distinct intrusion zones, located somewhere on the surface of the player characters’ home world.

Furthermore, we will now specify the spatial separation distance {$d$} to be 16 miles (matching the distance from each of the three Gates to the tower of Tenaris Xin).

This has several results. First, it naturally creates a temporal Schelling point which the players can identify as a plausible candidate for when something might happen: namely, the moment when the three expanding zones meet (figure 11).3 (It does not take a great leap of intuition to suspect that this will be a bad thing.) This is the hint to the players (and the PCs) that there’s a much closer deadline than merely “when the intrusion zone expands to encompass the whole world” (which will quite clearly be many months or even years away, given the observed expansion rate of the intrusion zones).


Figure 11. The expansion of the three intrusion zones brings them into contact at {$t=t_{zone\ contact}$}, when {$r_{sector}$} is equal to one-half the distance between the centers of the three zones (and also, necessarily, one-half the distance between the three Gates in Lacc).

Second, it allows the players to identify parallels between elements of the setup, which both creates the perception of a logically connected system and offers hints that point the PCs to the key locations which they must visit in order to gain more information about Tenaris Xin’s plans and, thereby, to successfully complete the adventure. That is: the center points of the intrusion zones form an equilateral triangle, each vertex of which lies 16 miles from the triangle’s center point. The spatial separation distance (which the PCs will have to cross in their journey to the heart of Lacc; see below) is likewise 16 miles; noticing this will give the PCs a hint that this value is somehow important. When the PCs arrive in the Plane of Shadow, they can notice the correspondence of the location of one of the Gates to the center point of the intrusion zone in their world, and intuit that the two other Gates exist and that they are important. If they think to travel to the center point of the triangle formed by the three Gates (which again is 16 miles away from each Gate), they will arrive at Xin’s tower.4

(Needless to say, none of these hints or deductions should be the only avenue by which the players can reach any of the aforementioned conclusions or learn any of the information I’ve described. In accordance with the three clue rule, there should be multiple hints or clues pointing the players toward these nodes in the scenario structure. Enumerating the rest of that scenario structure is beyond the scope of this post. Here it is important to notice only that carefully planning out the geometry of Lacc’s incursion allows us to use that very geometry as one source of clues for the players. This rewards players who pay attention to such details, encourages careful thought, and contributes greatly to the sense of the adventure’s events as coherent, logical, predictable, and amenable to understanding.)

Having specified the planetary radius {$r_{S_{M}}$} and the spatial separation distance {$d$}, it now remains for us only to pick the growth rate coefficient {$k$}. We will select a value of {$k$} that results in a period of exactly three weeks between {$t_{M}$} (the time at which the three hyperspheres of Lacc’s intrusion zones first make contact with the Prime Material Plane, and the beginning of the adventure) and {$t_{zone\ contact}$}3 (the time when the three expanding intrusion zones on the surface of the PCs’ world meet at three points of tangency), which we have decided is the moment at which Tenaris Xin completes his ritual, and the PCs’ world is permanently absorbed into the City of Monoliths. This value is 5.36.5
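
The quoted value can be recovered numerically. The arc-radius formula (restated below) implies that the hyperspheres reach the Prime Material Plane when {$kt^{1/3}=d$}, i.e. at {$t_{M}=(d/k)^{3}$}; the sketch below (my own code, not from the module) bisects for the {$k$} that puts exactly three weeks between that moment and the moment the zones touch:

```python
import math

R, d = 3959.0, 16.0            # planetary radius and plane separation, miles
HALF_SIDE = 8 * math.sqrt(3)   # half the side of the Gate triangle: ~13.86 miles
TARGET_GAP = 21.0              # three weeks between t_M and t_zone_contact

def r_sector(t, k):
    """Arc-radius of one intrusion zone, t days into the ritual."""
    ratio = d / (k * t ** (1 / 3))
    if ratio >= 1.0:           # hyperspheres haven't reached the Prime Material Plane yet
        return 0.0
    theta = 2 * math.acos(ratio)
    return 2 * R * math.asin((k / (2 * R)) * ((4 / 3) * t * (theta - math.sin(theta))) ** (1 / 3))

def gap(k):
    """Days between first planar contact and the zones touching one another."""
    t_m = (d / k) ** 3         # k * t^(1/3) = d  =>  first contact with the plane
    lo, hi = t_m, t_m + 1000.0
    while hi - lo > 1e-9:      # bisect: r_sector increases with t
        mid = (lo + hi) / 2
        if r_sector(mid, k) < HALF_SIDE:
            lo = mid
        else:
            hi = mid
    return lo - t_m

# gap(k) shrinks as k grows, so bisect k until the gap is exactly three weeks
k_lo, k_hi = 1.0, 20.0
while k_hi - k_lo > 1e-6:
    k_mid = (k_lo + k_hi) / 2
    if gap(k_mid) > TARGET_GAP:
        k_lo = k_mid
    else:
        k_hi = k_mid
print(round(k_lo, 2))
```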

The formula we previously derived for the arc-radius of each of the three intrusion zones is:

{$$ r_{sector}=2r_{S_{M}}\cdot asin\left(\frac{k}{2r_{S_{M}}}\cdot \sqrt[3]{\frac{4}{3}t\cdot \left(2\cdot acos\left(\frac{d}{k\sqrt[3]{t}}\right)-sin\left(2\cdot acos\left(\frac{d}{k\sqrt[3]{t}}\right)\right)\right)}\right) $$}

And the values we’ve chosen for our three key variables are:

  • {$r_{S_{M}}$} (the radius of sphere {$S_{M}$}, the PCs’ home planet): 3959 miles
  • {$d$} (the spatial separation distance between {$M$} and {$L$}): 16 miles
  • {$k$} (the growth rate coefficient of the hypersphere radii {$r_{H}$}): 5.36

Substituting those values, we get:

{$$ r_{sector}=7918\cdot asin\left(\frac{5.36}{7918}\cdot \sqrt[3]{\frac{4}{3}t\cdot \left(2\cdot acos\left(\frac{16}{5.36\cdot \sqrt[3]{t}}\right)-sin\left(2\cdot acos\left(\frac{16}{5.36\cdot \sqrt[3]{t}}\right)\right)\right)}\right) $$}

And the resulting values of {$r_{sector}$}:

Day    {$r_{sector}$}
0–26   0
27 *   1.9
28     3.6
29     4.7
30     5.6
31     6.4
32     7.1
33     7.7
34     8.3
35     8.8
36     9.3
37     9.8
38     10.3
39     10.7
40     11.1
41     11.5
42     11.9
43     12.3
44     12.6
45     13.0
46     13.3
47     13.7
48 *   14.0
49     14.3
50     14.6
51     14.9
52     15.2
53     15.5
54     15.7
55 *   16.0

Figure 13. Lacc intrusion zone radius in miles as a function of time in days since Tenaris Xin begins his ritual.

Rows marked with an asterisk (*) are key days. Day 27 is {$t_{M}$}, when Lacc first makes contact with the PCs’ home world, and the three intrusion zones appear (27.7 miles apart from one another) somewhere on the world’s surface (growing to a radius of 1.9 miles within that first day). Day 48 is {$t_{zone\ contact}$}, when the three intrusion zones first touch one another, and begin to merge; this is the first of the two possible endpoints for the adventure. Day 55 is {$t_{zone\ merge}$}, at which point no gap remains between the three intrusion zones; this is the second possible endpoint for the adventure. (The DM may select one of the endpoint dates in advance, or hold off the decision until later in the adventure; needless to say, in the latter case, he should be careful not to provide any hints which identify one of the dates as the endpoint.)
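
The values in figure 13 can be reproduced with a few lines of code; a minimal sketch (my own code, simply transcribing the substituted formula above):

```python
import math

def r_sector(t):
    """Intrusion-zone arc-radius in miles, t days after Xin begins his ritual."""
    ratio = 16 / (5.36 * t ** (1 / 3))
    if ratio >= 1.0:   # the hyperspheres haven't yet reached the Prime Material Plane
        return 0.0
    theta = 2 * math.acos(ratio)
    return 7918 * math.asin((5.36 / 7918) * ((4 / 3) * t * (theta - math.sin(theta))) ** (1 / 3))

for day in (26, 27, 48, 55):   # last zero day, t_M, t_zone_contact, t_zone_merge
    print(day, round(r_sector(day), 1))
```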

By the way, what are we to make of the period prior to day 27? (This is the period when the hyperspheres {$H$} are expanding but have not yet reached {$M$}, the Prime Material Plane; so there are as yet no intrusion zones on the surface of the PCs’ world, i.e. {$r_{sector}$} is zero.) We may suppose that after selecting a world for assimilation into Lacc, Tenaris Xin begins an almost-month-long process of magical preparation, during which time the City of Monoliths begins “reaching” across the interplanar distances toward the Prime Material World of the player characters. It is reasonable to assume that there is no feasible way for the PCs (or anyone else on their world) to know about this process prior to {$t_{M}$} (day 27). However, if the DM wishes to give the players a hint of what’s to come (e.g. if using this adventure in a larger campaign), he may have signs and portents manifest during this time period (visions experienced by oracles, for example, or warnings from the gods, or strange observations recorded by scholars of the arcane who study the lore of the planes, or other things of this nature).

(Incidentally, what would be the value of {$t_{total\ doom}$} given the values we’ve selected for our key variables? That is, how long would it take for Lacc to fully engulf the PCs’ home world in the absence of the ritual (i.e. if the hyperdimensional model described above were applied to the adventure as written)? We are looking for the lowest value of {$t$} that yields an {$r_{sector}$} equal to or greater than one-half the circumference of {$S_{M}$} (that being {$πr_{S_{M}}$}, i.e. ~12437.6 miles). This value happens to be 772,797,939 days, or approximately 2,117,254 years.)
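
That figure can be checked by bisection. Since {$r_{sector}=\pi r_{S_{M}}$} exactly when the arcsine’s argument reaches 1, it is numerically safer to search on the argument itself (the arcsine is undefined past that point); a minimal sketch, again with my own variable names:

```python
import math

R, d, k = 3959.0, 16.0, 5.36

def asin_arg(t):
    """Argument of the arcsine in the r_sector formula; r_sector = pi*R when it reaches 1."""
    theta = 2 * math.acos(d / (k * t ** (1 / 3)))
    return (k / (2 * R)) * ((4 / 3) * t * (theta - math.sin(theta))) ** (1 / 3)

lo, hi = 1e6, 1e10   # the argument is below 1 at lo and above 1 at hi
while hi - lo > 0.5:
    mid = (lo + hi) / 2
    if asin_arg(mid) >= 1.0:
        hi = mid
    else:
        lo = mid
print(round(hi))     # ~7.7e8 days, i.e. on the order of two million years
```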

Finally, do we now have a non-arbitrary answer for where on the surface of the PCs’ world Lacc’s intrusion zones should be located? Yes: from Tenaris Xin’s perspective, Lacc should always make contact with a target planet in some location as remote as possible from civilization, to minimize the chance that any powerful inhabitants of that world (i.e., high-level characters such as the PCs, and/or any other entities likely to possess the capability to stop Xin’s plans) will discover the intrusion zones in time to learn their nature, reach Xin’s tower, and interrupt his ritual. (Determining what other measures Xin might take to minimize the chance of being discovered and interrupted in time, as well as the details of how the PCs will, despite Xin’s plans, learn of Lacc’s incursion before it’s too late—as indeed they must, or else the adventure can’t take place—is left as an exercise for the DM.)

1 See, e.g., Concordance of Arcane Space from the Spelljammer boxed set, which places Oerth, Krynn, Toril, etc. in the same celestial body size category, drawing no distinctions between them. Exceptions do exist, such as Mystara (the planet on which the “Known World” campaign setting of D&D Basic is located), which is significantly smaller and has a hollow interior. (Of course, players in the Known World may never even realize the existence of these major structural differences, due to the “world shield”—an internal layer of ultra-dense material; perhaps neutronium?—that, among its other effects, provides sufficient mass to bring Mystara up to Earth-normal gravity.)

2 “Seem” is really the key word here. This is not an idea original to me (I recall first seeing the formulation I give here in a social media post which I have long since lost the link to), but: when creating explanations for whatever the player characters encounter, you only need to go two levels deep. That is, if the players ask why something is the way that it is, they should be able to discover an explanation; and if they ask why that is the way that it is, they should be able to discover an explanation for that, too; and at that point, they will not keep asking, because you will have successfully created the impression that there’s an explanation for everything. Furthermore, if you then, e.g., tie the explanation for the third thing back to the first thing, you have created a complete system which can support explanations of arbitrary depth.

(One can, of course, always ask “but why is this whole system the way that it is”, but given that the same question can be asked just as easily of the real world, it’s unlikely that the lack of an in-world answer will detract from verisimilitude and immersion. If the players do, after all, ask the question, Morgenbesser’s reply should suffice.)

3 Actually, if the players consider the matter carefully, they will realize that there are two uniquely identifiable moments here. One—call it {$t_{zone\ contact}$} (figure 11)—is the moment when the three expanding zones come into contact, i.e. the moment at which the three subsumed circular sectors of the planet’s surface are tangent to each other. This will occur when the arc-radii of the subsumed surface sectors are equal to one-half the length of the sides of the equilateral triangle formed by the points of contact, i.e. ~13.86 miles. The other—call it {$t_{zone\ merge}$} (figure 12)—is the moment when there is no longer any gap in the middle of the three zones; this will occur when the arc-radii of the subsumed surface sectors are equal to the distance of a vertex of the triangle from the triangle’s center point, i.e. 16 miles.


Figure 12. The three intrusion zones expand far enough to fully merge (leaving no gap in the middle between them) at {$t=t_{zone\ merge}$}, when {$r_{sector}$} is equal to the distance from the center of each zone to the center of the triangle (which is, necessarily, also the distance from each of the Gates to Xin’s tower).
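
Both threshold radii follow directly from the geometry of the Gate triangle (an equilateral triangle whose circumradius is 16 miles, and whose side is therefore {$16\sqrt{3}$} miles):

{$$ r_{sector}(t_{zone\ contact})=\frac{16\sqrt{3}}{2}=8\sqrt{3}\approx 13.86 \text{ miles},\qquad r_{sector}(t_{zone\ merge})=16 \text{ miles} $$}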

These two moments will—given the values we’ll choose for our key variables—occur one week apart. From a design standpoint, this works in our favor, as the ambiguity in which of these two key moments marks the completion of Tenaris Xin’s ritual offers the DM a means of either giving the PCs an extra week of time to succeed in their quest (if they’ve made slow progress), or not (if they’ve made good time).

4 And—strictly as a bonus for very clever players—if they manage to deduce the hyperdimensional structure of Lacc’s incursion, the players can work out the way in which the expansion rate of the three intrusion zones depends on the geometric layout of the Gates, the tower, and the spatial separation distance between the two planes of existence. The success of the PCs should in no way depend on making this leap; it exists only as a reward for players who wish to unravel the puzzle, to reinforce the sense of underlying order and logical coherence which is created by the existence of the mystery in the first place.

5 Unlike the spatial separation distance, there is no need to justify the growth rate coefficient in any “in-universe” way. It’s well known that our own universe is full of such essentially arbitrary constants, which have the values that they have for no reason other than “that’s just the way it happens to be”. One may appeal to anthropic reasoning to justify some such values, but philosophical points like this are epiphenomenal with respect to our design considerations here.

Of course, another option is to stipulate that this value is not some universal constant but rather a variable, which may depend on any number of factors, from the mystical to the logistical. Perhaps Lacc’s intrusion zones expand faster on worlds where a greater portion of the mortal population are sinners or evildoers, making {$k$} a sort of measure of a world’s aggregate righteousness. Or perhaps the Gates, and Lacc’s expansion, must be powered by the souls of the damned, which Xin sources from fiendish contacts in the Lower Planes, exposing Lacc’s incursion timeline to supply-chain disruptions caused by current events in the Blood War. One may derive an endless variety of plot hooks from such free variables.

May 15, 2024

The Law of Demeter and the right to bear arms

This essay was originally posted on 2016-01-12.

Contents

I.

Patterns crystallize in strange ways.

Have you ever had a conversation like this?

Person A is leaving the house; person B notices.
B: Hey, where are you going?
A: Hm? Why?
B: What, you can’t tell me?
A: Why do you ask, though?
B: Oh, well, if you were going to <place>, I was going to ask you to do something on the way, but if you’re not going there then never mind…

(Variants include “what are you doing tomorrow” / “oh in that case you’ll have time to do something for me”, “what are your plans this Saturday” / “oh well if you’re free then you can come help me with something”, “are you planning to do <thing X>” / “oh in that case you can do something for me while you’re at it”, etc.)

As person A in this conversation, you might be tempted to reply (as I often am) along these lines:

“Look, if you want me to do something for you, just ask me to do that thing. I’ll decide whether it’s on the way, whether it’s convenient, whether it’ll be ‘no bother’, and so forth. If it’s not, I’ll tell you; I’m an adult, I’m capable of saying ‘no’. Or maybe it’s not on the way, but I’ll decide to do it anyway. In any case, it’s my decision; don’t try to make it for me! Don’t interrogate me about my plans; they’re none of your business. Tell me what it is you want, and I’ll decide what I’ll do about it.”

But maybe I’m just a curmudgeon.

II.

The Law of Demeter (a.k.a. the principle of least knowledge) is a concept in object-oriented software design. It says that an object should have limited knowledge about other objects, and shouldn’t know anything about the inner workings of other objects (as opposed to their behavior). The Law of Demeter is closely related to the “tell, don’t ask” principle—the idea that an object in a program should tell another object what to do, rather than requesting some component of the second object’s implementation or internal state, then doing something directly using that “borrowed” component. (If one object provides another object with access to its inner workings, who knows what the other object might do with that access?)

The Law has been stated and explained in many ways. One oft-encountered analogy involves a paperboy who wants to get paid for delivering the newspaper. Should the paperboy reach into the customer’s pocket, pull out the wallet contained therein, and take some cash right out of said wallet?

Of course not. The paperboy tells the customer to pay him, and the customer pulls out his own wallet, takes out some cash, and hands it to the paperboy. Or maybe he goes and gets some cash from under his mattress instead. Or maybe he asks his wife for cash, because she handles the finances in the family. None of this is the paperboy’s business; he simply does not need to know anything beyond the fact that he asks for payment, and he gets paid. (After all, if the customer just hands the wallet over to the paperboy, who knows what the paperboy might do with it? Take out more money than he ought to get, maybe! We just don’t know; and trusting the matter to the paperboy’s honesty and judgment is foolish in the extreme.)
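
In code, the contrast looks something like this (a minimal sketch; the class and method names are hypothetical, invented for illustration):

```python
class Wallet:
    def __init__(self, cash):
        self.cash = cash

class Customer:
    def __init__(self, cash):
        self._wallet = Wallet(cash)   # internal state: nobody else's business

    def pay(self, amount):
        """'Tell, don't ask': the customer decides how the payment happens."""
        if self._wallet.cash < amount:
            raise ValueError("insufficient funds")
        self._wallet.cash -= amount
        return amount

class Paperboy:
    def collect_payment(self, customer, amount):
        # The Demeter violation would be: customer._wallet.cash -= amount
        # (reaching through the customer into his wallet).
        return customer.pay(amount)   # Instead, just tell him to pay.
```

The paperboy never learns whether the cash came from a wallet, a mattress, or the customer’s wife; he only knows that he asked for payment and got paid.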

What implications the Law of Demeter has in actual software engineering practice is a matter of much discussion. But reading some of that discussion reminded me of that conversation from part I. And then… well, everyone knows that generalizing from one example is bad; but once you’ve got two examples, you’ve got a pattern.

III.

In the comments section of Scott Alexander’s recent post on gun control, commenter onyomi talks about pro-gun-control arguments of the form “do you really need <insert some form of rifle or other “powerful” firearm>?”:

See, I just hate seeing a policy debate framed in terms of what the citizen “needs.” The question should be, “is there a really good reason to restrict a citizen’s presumptive right to buy whatever he wants in this particular case?” rather than “do you need this?” If you want it, are willing to pay for it, and there’s no very pressing reason why you shouldn’t be allowed to have it, then you should be allowed to have it. The question of “need” shouldn’t even arise.

I sympathize with this view. If I encountered an argument like that—“do you really need …?”—I might be tempted to reply along these lines:

“It’s none of your damn business what I do or don’t ‘need’. If you want to ban me from owning something, or doing something, well—make a positive claim that I shouldn’t be allowed to have or do that thing, make a case in support of that claim, and we’ll go from there. But I am certainly not obligated to stand for an interrogation about what I ‘need’. In any free society, and certainly in the United States, the default is that I have a right to do and to own whatever I damn well please, without having to explain or justify myself to anyone. If you want to curtail that right somehow, the responsibility is on you, the advocate for state intervention, to demonstrate to my satisfaction (or, to put it another way, to the public’s satisfaction) that this curtailment is necessary or desirable—not on me, the citizen, to prove to you that it’s not. Make your case, and I might even agree with you! Certainly I don’t think that people should be allowed to do just anything; I’m not an anarchist. But you don’t get to start the conversation by demanding an account of what I ‘need’.”

This isn’t an argument against gun control. Even if you agree with every letter of my hypothetical reply, the question of gun rights is not settled thereby. This is a response to one class of argument: the sort that demands access to my internal state, and then builds further arguments on that revealed internal state. My view is that once I’ve agreed to discuss with you the question of what decisions or conclusions should follow from this or that aspect of my internal state, I’ve lost—regardless of what side of the debate I’m on. Instead, I reject the presumption that my internal state is any of your business.

IV.

Tabletop role-playing games are notoriously rife with quasi-religious debates about the Right Way To Play. One such dispute concerns the concept of the so-called “mother, may I” style of play. The question at issue is, what determines what your character can do? Different games, different game-masters (GMs), and different gaming groups or communities have answered this question in various ways. (Often, different aspects of the same game system answer this question in ways that fall on different places along the spectrum of possible answers.) But roughly speaking, any answer to this question falls into one of two categories.

In one camp are the people who say: here are the rules. These rules fully describe what options are available to your character—what actions you can take, how your character can develop, what exists in the game world, exactly what happens when your character takes this or that action, and so forth. If you want to do something that isn’t covered by the rules—too bad; you can’t. No pleading or argument will let you step outside what the rules allow. But within the rules, you’re free to do as you like; no further restrictions are placed upon you. Even if the GM doesn’t like it, “the rules is the rules”—no special treatment for anyone.

The people in the other camp think that “the rules is the rules” is fundamentally too restrictive; it stifles players’ creativity, and places artificial limitations on characters’ actions that may make no sense within the logic of the game world. Also in this camp are GMs who don’t like being rigid and inflexible, or who pride themselves on being able to improvise and invent ways to handle anything their players can dream up, as well as those GMs who feel that their role is to make the players’ desires and intentions happen, rather than to be computers who faithfully implement the rules as written. Players who don’t like being told that their character can’t do something that they think they should be able to, and those who enjoy coming up with unusual, bizarre, or creative plans and solutions, according to their own model of the game world and the situations in it (even when that model contradicts the game rules), are in this camp as well.

The answer to the “what can a character do” question that’s espoused by those in the latter camp is: describe to the GM what you want your character to accomplish. He’ll tell you if it’s possible or not, and he’ll decide what happens if your character tries to do that thing. The GM decides how the game world reacts to your character’s actions, the details of how your abilities work, and so forth. The rules, whatever they may say, are not a straitjacket; they’re guidelines, perhaps, or suggestions—and you’re not limited by them. Your character can, at least in principle, do whatever you can think of; tell the GM what you want to accomplish or what sort of action you want to take, and if it’s something that makes sense, or if the GM agrees that your character should be able to do that thing, then you can do it. Likewise, the GM will determine, and tell you, how your character can accomplish what you’re trying to do, and what the results will be, and so forth.

The term “mother, may I?” is a derogatory description of the latter approach, encapsulating a criticism of that approach that’s often made by those in the former camp. The criticism is this: yes, the rules no longer limit you. But neither do they shield, empower, or inform you. Now, you’re at the mercy of the GM’s whim. Before, the challenge was: can you figure out how to accomplish what you want to do, with these known rules, which are laid out before you and will not be altered? Now, the challenge is otherwise: can you persuade the GM—no computer he, but a fickle human, whose mind is full of biases, strange notions, and moods which the winds of chance, or last night’s ill-digested pizza, might sway this way or that—of your plan’s reasonableness? Before, you could confidently say: “My character does this-and-such, as the rules clearly say that he can”. Now, you’re the child who beseeches the GM to permit your actions: “Mother, may I…?”

“Hmm, and what is it you’re trying to accomplish? What’s your goal?” asks the GM. You must explain, as the GM listens and nods, and judges whether he wants your plan to succeed or fail, whether your goals shall be accomplished or whether your designs shall be frustrated, according to his own inscrutable design; and on that basis, he decides what options for action he will permit your character to have. Even if he does allow your character to do what you ask him to permit, the consequences of that action are decided not by any mechanism which you may inspect, and whose workings you can master, but by the GM—by the unpredictable goings-on inside his mind, which determine whether your plan “makes sense” to him—a determination which is quite beyond your control. You can attempt to convince him, of course. You can plead your case—not by mere reference to the rules, no! they’re only suggestions now, after all, and may be overruled as the GM pleases—but by vague and dubious appeals to “common sense”, ill-defined principles of dubious applicability (from game design, physics, personal experience, or just about any other source you can pull in), tangentially related “facts” of questionable accuracy, emotional arguments about what you want and what’s “important to the character concept”, and anything else you can think of. If it works—if it gets the GM to adopt your view—it’s fair game.

When “mother, may I?” reigns, you don’t know what your options really are. Those options might change from day to day. You don’t know how the game world really works (because it works in whatever way “makes sense” to the GM, who is no longer bound by those same rigid rules which you discarded in your pursuit of greater freedom). And the game is no longer a cognitive challenge, an exercise in creativity, strategic thinking, understanding and mastery of complex systems, and all the other sorts of challenges that TTRPGs excel at providing. Now, it’s a game of persuasion, of manipulation, of phrasing your pleas in the right way (or just being louder and more insistent). How droll.

That’s the argument, anyway. Of course, in practice, no sensible gamer takes either view to its extreme, nor takes the same position on the spectrum in every single situation. But at the heart of this conflict of two philosophies of game design is the same sort of thing as in the examples given in the previous parts. Instead of simply being told—what someone wants from you, what you must do, what is legal, what is allowed—you’re asked to open up your internal state for inspection and judgment. Your interlocutor refuses to provide well-defined rules, binding upon you and him both. Instead, he demands that you tell him all about your plans, your needs, your intentions, your inner workings; then, he says, he’ll make his decision—on the basis of a decision procedure that he, and not you, will carry out.

In the RPG case, gamers with views closer to the “rules is the rules” end of the spectrum may respond to such demands along these lines: “Look, it’s none of your business what my intentions are, or what I’m trying to accomplish. My plans are my own; my internal state is not available for your viewing. I have no interest in attempting to convince you that you should permit me to do what I want to do; don’t ask me to justify why you should allow me to take this or that action. Describe the world; tell me what the rules are; and I’ll decide on a course of action.”

V.

Of course, devotees of role-playing games aren’t the only people to prefer rigid, known-in-advance rules over the chance to secure an ideal outcome by persuading a human decision-maker that they deserve it. Legal scholars and rationalists alike have recognized the value of stare decisis, or the notion of precedent—that the courts should not (at least, barring extraordinary circumstances) make decisions that contradict previous decisions made by other courts in similar cases.1 Why have such a principle? Surely a more context-sensitive approach would be better? Even “similar” cases differ in details, after all, and may involve different people, who come from different backgrounds, etc.; shouldn’t judges evaluate each case in isolation, and make whatever decision seems most appropriate to the circumstance?

A similar question may be asked about the concept of equality before the law. Why should people be equal before the law? Aren’t different people… different? Shouldn’t a judge treat each person who appears before them as a distinct individual, rendering whatever decisions seem most appropriate to the circumstances—regardless of how other courts (or even the same court) may have treated other people in “similar” cases?

I leave it to others to explain the value of the law’s predictability, and its effect on freedom of self-determination. Links between stare decisis and the psychology of motivation, too, would make fertile ground for blog posts, and the essay on the implications of both said topics for the design of user interfaces is practically written in my head already—but that is a matter for another day. Right now, I’ll just say that contextualism—the view, described above, that says we should judge legal cases on an individual basis, and that opposes the principles of both precedent and legal equality—inevitably requires exposure of the petitioner’s internal state. For one thing, it’s simply a competitive advantage: someone who bares their heart to the (precedent-rejecting, context-embracing) arbiter of their fate will often have a better shot at a favorable outcome than someone who won’t—we humans are suckers for a sob story, we sympathize more when we relate better, we relate better when we feel that we know someone better, and so on… But even more: the arbiter will demand it, will demand explanations, justification, an account of intent, belief, and mental state, a claim of needs and a convincing case for them… because how else can you fully grasp the context of a matter? In short, if we commit to judging every case on its own merits, and taking full view of the unique, individual situation that surrounds it, then we will find ourselves saying: you who stand before us—give us full access to your internal state, that we may inspect and judge it.

VI.

One may wonder what comes of having such a system. Maybe it’s nothing bad? Isn’t being open a good thing, in general? Open source, open borders, open bars… seems legit. Why not throw open the shutters of our hearts? Why shouldn’t other people have access to our internal state? Ok, if not literally everyone, then at least people who have power over us, who make decisions that affect us—shouldn’t they have all the information?

“Those people might be bad people! They might maliciously use this privileged access for bad purposes!” Yeah, that’s true. But let’s not stop at the easy answer. Let’s say we trust that those who stand in judgment over us are good people, who have the best intentions. Perhaps we should tell them everything about ourselves—present an account of our needs, say—and let them decide what we should get, and what we should do. What comes of this?

“Well, there was something that happened at that plant where I worked for twenty years. It was when the old man died and his heirs took over. There were three of them, two sons and a daughter, and they brought a new plan to run the factory. … The plan was that everybody in the factory would work according to his ability, but would be paid according to his need. …

“It took us just one meeting to discover that we had become beggars—rotten, whining, sniveling beggars, all of us, because no man could claim his pay as his rightful earning, he had no rights and no earnings, his work didn’t belong to him, it belonged to ‘the family’, and they owed him nothing in return, and the only claim he had on them was his ‘need’—so he had to beg in public for relief from his needs, like any lousy moocher, listing all his troubles and miseries, down to his patched drawers and his wife’s head colds, hoping that ‘the family’ would throw him the alms. He had to claim miseries, because it’s miseries, not work, that had become the coin of the realm—so it turned into a contest between six thousand panhandlers, each claiming that his need was worse than his brother’s. How else could it be done? Do you care to guess what happened, what sort of men kept quiet, feeling shame, and what sort got away with the jackpot?” …

“Any man who tried to play straight, had to refuse himself everything. He lost his taste for any pleasure, he hated to smoke a nickel’s worth of tobacco or chew a stick of gum, worrying whether somebody had more need for that nickel. He felt ashamed of every mouthful of food he swallowed, wondering whose weary nights of overtime had paid for it, knowing that his food was not his by right, miserably wishing to be cheated rather than to cheat, to be a sucker, but not a blood-sucker. He wouldn’t marry, he wouldn’t help his folks back home, he wouldn’t put an extra burden on ‘the family.’ Besides, if he still had some sort of sense of responsibility, he couldn’t marry or bring children into the world, when he could plan nothing, promise nothing, count on nothing. But the shiftless and irresponsible had a field day of it. They bred babies, they got girls into trouble, they dragged in every worthless relative they had from all over the country, every unmarried pregnant sister, for an extra ‘disability allowance,’ they got more sicknesses than any doctor could disprove, they ruined their clothing, their furniture, their homes—what the hell, ‘the family’ was paying for it! They found more ways of getting in ‘need’ than the rest of us could ever imagine—they developed a special skill for it, which was the only ability they showed. …

“We were a pretty decent bunch of fellows when we started. There weren’t many chiselers among us. We knew our jobs and we were proud of it and we worked for the best factory in the country, where old man Starnes hired nothing but the pick of the country’s labor. Within one year under the new plan, there wasn’t an honest man left among us. That was the evil, the sort of hell-horror evil that preachers used to scare you with, but you never thought to see alive. Not that the plan encouraged a few bastards, but that it turned decent people into bastards, and there was nothing else that it could do—and it was called a moral ideal!”2

(Don’t let the talk about the “shiftless and irresponsible” distract you. Change happens at the margins; you get what you incentivize. And as usual, the greatest enemy is not around us, but within us.)

VII.

So maybe it takes a contrarian and malcontent like me to see this pattern in the sort of conversation described in Part I (a fairly innocent exchange, all things considered), but clearly, many people have noticed it in lots of other places, and they seem to take a rather dim view of it. The interesting question is, why doesn’t everyone object to this sort of thing? Why does “I demand to know all about your internal state” ever work?

Our as-yet-unnamed pattern is a certain sort of illusion—a thing that masquerades as its opposite. The other party offers you a bargain: “Open yourself up to me—give me information about yourself, allow me access to your inner workings, your internal state—and I will control you less.” Less? “Sure—if I know more about you, I can make less onerous demands; I can ask for only what you’re able to give; I can restrict you less, changing the rules so that they don’t prevent you from doing what you want to do; I can make sure your needs are met, as much as possible. In doing these things, I’ll be relieving your hardships, easing your burdens.”

In short, you’re offered greater freedom. But in every case, it’s an illusion. By surrendering the secrecy of your internal state, you make yourself less free, not more; though it may not always be obvious how.

What motivates the conversation described in Part I? Person B wishes to promote the illusion that their request isn’t really an imposition at all, that person A is doing them a very minor favor, at best. After all, if you’ve revealed your internal state to me, I can now judge for myself what is or isn’t an imposition for you, and what costs you bear for doing what I want you to do, and whether those costs are reasonable. You can argue with me, dispute my judgments; but by doing so, you tacitly admit that this is now a matter that we have to settle by debate. You’re no longer the sole authority on what you may reasonably be asked to do, and what you ask in return; now I may weigh in also.

In the RPG case, the game-master who says “Tell me what you want to accomplish, what your intentions and goals are, and I’ll decide whether that’s possible and allowed, and how you can do it”—is he giving his players more freedom, as he might claim? With the flexible “cooperation” of the GM replacing the cold and rigid rules, the players are much less limited! No; rather, the GM gives himself more power—the power to shape the game world according to whim, the power to decide at each step and level whether the players’ plans succeed or fail, where otherwise they might fairly win, surprising him with their cleverness and cunning, their success assured by the rules.

And in our other examples, you might gain the chance to secure a better outcome (if your powers of persuasion are good enough) than you would if the arbiter were systematic and impartial—but now your obligation to make your internal state available for judgment binds you like a set of chains. And what have you really gained? Today the outcome might be favorable, but tomorrow I might alter my accounting of what conclusions or decisions follow from what you’ve revealed, and your fate reverses. Or, with your inner workings laid bare for my inspection, perhaps I might decide that those inner workings are objectionable somehow. Tell me what your needs are, and perhaps I’ll judge that you shouldn’t have such needs—no one ought to have such needs. You’re not even back at square one, then; it’s much worse than that. Rather than chafing under rules that tell you what to do, you’re now given rules about whom to be.3

VIII.

But there’s another sort of cost to abandoning encapsulation.

Once I reveal my internal state to you—once I report my plans and intentions to you, once I present an account of my needs, once I submit the details of my life for your inspection—I lock myself down, on all of the things I’ve revealed. The fact that you are basing decisions—ones that affect my life, and perhaps the lives of others—on what I’ve revealed to you, exerts pressure on me, to be exactly that way, to conform to what I’ve revealed, not to deviate, not to change. If you’ve only based your weekend plans on what I’ve let you see, the pressure comes from the threat of mild social disapproval, and I am constrained only in a minor way—but constrained nonetheless. If my internal state forms the basis for decisions about legal cases, or the distribution of resources, or national policy—then I am hard pressed indeed, and the loss of freedom I suffer thereby is great, much greater than the imposition from all but the most draconian of laws. There are few freedoms as vital as the freedom to change one’s mind.

Of course, programmers have known about this for some time. Thus the Law of Demeter—because my internal state is none of your business.
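To make the programming analogy concrete, here’s a minimal sketch of the Law of Demeter at work. (The classes and method names are hypothetical, invented for illustration—not from any particular codebase.)

```python
# A hypothetical illustration of the Law of Demeter:
# "only talk to your immediate neighbors".

class Wallet:
    def __init__(self, balance):
        self.balance = balance


class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)  # internal state

    # Encapsulated interface: callers say what they need,
    # not how the customer's internals should satisfy it.
    def pay(self, amount):
        if self._wallet.balance < amount:
            raise ValueError("insufficient funds")
        self._wallet.balance -= amount
        return amount


def collect_payment(customer, amount):
    # Law-of-Demeter-compliant: asks the customer to pay;
    # the wallet stays private.
    return customer.pay(amount)


def collect_payment_nosy(customer, amount):
    # Violation: reaches through the customer into the wallet.
    # This caller now depends on the customer's internals, and the
    # customer is no longer free to change how payment works.
    customer._wallet.balance -= amount
    return amount
```

The nosy version works today, but it couples the caller to the `Wallet` representation—precisely the lock-in described above: once others build on my internals, I can no longer change them.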

1 And I am not even the first to notice the parallel with RPGs.

2 Atlas Shrugged, of course. The entire speech makes for riveting reading, despite Ayn Rand’s lack of talent for brevity.

3 An interesting parallel is the point that Slavoj Žižek makes in this video.

November 14, 2021

Different views on the same data

The idea of “different views on the same data” is crucial. It’s ubiquitous in desktop applications—so much so that we forget about it—the proverbial water to the proverbial fish. It’s not nearly as common in modern web apps as it should be. Here are some examples, that we may better grasp just how basic and fundamental this concept is.

Contents

Example one: Finder windows

Note for non-Mac-users: the Finder is the graphical file manager on the Mac OS. (It also does other things, but that's the part of its role that we're concerned with here.)

In all of the examples in this section, the data is the same: “what files are in this folder?” Let’s look at some of the possible views onto this kind of data.

Figures 1–4 show the same folder in each of the four different available view modes: list, icon, column, and cover flow.


Figure 1. Finder window in list view. Miscellaneous files.

Figure 2. Finder window in icon view. Miscellaneous files.

Let's play a game of “spot the difference” between Fig. 1 (list view) and Fig. 2 (icon view). Here we're not concerned with visual differences, but with UX differences. Here's a partial list:

(1) List view shows more metadata. (Here we see modification date, size, type; view options allow us to show more/other columns, like date added, version, tags, etc.)

Does the icon view show no metadata at all? Nope, it shows at least one piece of metadata: the file type—via the file icon. (This is an example of multiplexing; the icon has to be something—to provide a visual representation of a file, and to provide a click target—so why not multiplex file-type data into it?)

Of course, the icon is also visible in list view (but smaller); this means that in list view, file type is conveyed twice (if the “Kind” column is enabled). This is an example of redundancy in UI design, and of good use of sensory (in this case, visual) bandwidth (of which there is quite a lot!). Notice that this redundancy affords the UI a degree of freedom it would not otherwise have: the “Kind” column can be turned off (making room for other data columns, or allowing the window to be made smaller, to make room for other stuff on the screen) with minimal loss of information throughput for the UI.

But wait! What about the file name? There's metadata lurking there, too—the file type again, encoded this time in the file extension. Redundancy again; the file type is therefore displayed in three ways in list view (“Kind” column, icon, file extension) and in two ways in icon view (icon, file extension).

All of this gives the UI several degrees of freedom. How is this freedom spent? In at least two ways:

  1. To allow for one or more of the channels through which file type information is communicated to be disabled or repurposed in certain circumstances, with minimal loss of information. (An example of a disabled channel: the “Kind” column is absent in icon view, but file type information is still visible. For an example of a repurposed channel, see the notes on Figures 5–9, below.)
  2. To compensate for unreliability of some or all of the channels through which file type information is communicated. Sources of unreliability include:
    • The Finder may not recognize some obscure file types (the “Kind” column would then display no useful information); the file extension may be the only source of file type data in this case
    • The file extension may be missing (but Finder attributes may be set, thus allowing an appropriate icon to be shown and an appropriate value to be displayed in the “Kind” column)
    • The file icon channel may be repurposed (Again, see the notes on Figures 5–9, below, for an example)

(2) List view allows sorting. (Click on a column name to sort by that column's value; click again to reverse the sort order.)

… or is this really a difference? Actually, files can be sorted in icon view as well (there is both a “one-time sort” option and a “keep sorted by…” option). This is not obvious, because the UI for sorting in icon view is not discoverable by mere physical inspection, whereas in list view the column headers are visible, the sort order indicator is visible (the triangle, pointing up or down), and “click a column header to sort tabular data by that column’s value” is a well-known idiom. (In icon view, sorting is done via a menu—either from the menubar, or from the context menu, or from a menu in the window toolbar.)

There is, however, a more subtle difference: in icon view it is not possible to sort in reverse order. Why not? The only reason is that Apple was unable (or insufficiently motivated) to design a good UI for reversing the sort order in icon view.

General lessons:

  • The same (or analogous) forms of interaction with the data may be implemented via different UI designs in one view vs. another view.
  • If the UX for a particular interaction in one view is obvious, don't assume that in other views it's impossible to design and implement.
  • However, not all interactions that are possible in one view need to be (or can be) available in all views. (It makes little sense to provide “one-time sort” functionality in list view.)

Design principles:

  1. In each view, provide as many interactions as is reasonable, no more and no less. (Provide more and you clutter and complicate the UI; provide fewer and some or all of the views will be too capability-poor to be useful.)
  2. Strive to have each view provide as complete a set of interactions with the data as possible.
  3. To reconcile the tension between the above two design principles, remember that it's better to provide a capability and hide it away behind a non-obvious or non-trivial-to-discover UI than not to provide it at all. This way, it will be available for power users but will not trouble less experienced users. (Of course, this is not an excuse to hide capabilities behind non-obvious UIs when there's a good reason, and a good way, to provide an easily-discoverable UI for them.)
  4. At the same time, look for ways to exploit the unique properties of each view to provide additional interactions that would be impossible or nonsensical in other views.
  5. The more ways the user can interact with the data, the better.

(3) Icon view allows arbitrary grouping and arrangement; I can position the files in the window wherever I like (example 1, example 2, example 3, example 4, example 5).

(Unless a “keep sorted by…” option is enabled.)

Some file managers don't have this feature; the Finder does. The lesson:

Do not carry over UX/interaction limitations necessitated by one view, to another view where they are not necessary.

Arbitrary grouping and arrangement makes little sense in list view. In icon view, there's no reason not to permit it—except, of course, that allowing the user to set, and then tracking, arbitrary icon positions, takes work! Does it offer a benefit? Find out! Ask users, survey other implementations, etc. In general, users resent limitations on their freedom, and appreciate the lack of them.

(4) What aspect(s) of the data may be easily gleaned via visual inspection differs from one view to another.

Different views (usually) look different. It's easy to forget this, but it's crucial. Here (in the “Finder list view vs. Finder icon view” example) this manifests in a couple of ways:

  1. In list view, it's easier to pick out files which differ from the others in any displayed metadata value (modification date, file name, etc.). This is true not only due to the sort feature, but also because humans find it easy to scan down a list of text items (which are left-aligned with one another) and notice ones which stand out.
  2. In icon view, the "file icon" data channel is wider (because the icon is displayed at a larger size); more data is coming through this channel. This makes it easier to distinguish icons, but also allows this channel to be used for other purposes (see notes on Figures 5–9, below).

General lessons:

For humans, the visual channel is a high-bandwidth one. Use it. Some ways to optimize UI visual bandwidth:

  • Multiplex meaning.
  • Allow the repurposing of high-bandwidth components.
  • Remove obstacles to visual apprehension of patterns (minimize "non-data ink", etc.).
  • Assist the brain's pattern-recognition abilities by using alignment, contrast, repetition, and proximity cues.

The same folder as in Fig. 1 and Fig. 2, but now in column view (Fig. 3) and cover flow view (Fig. 4):


Figure 3. Finder window in column view. Miscellaneous files.

Figure 4. Finder window in cover flow view. Miscellaneous files.


Figure 5. Finder window in icon view. Folder containing low-resolution icons.

Figure 6. Finder window in list view. Folder containing low-resolution icons.

Figure 7. Finder window in list view. Folder containing high-resolution icons.

Figure 8. Finder window in icon view. Folder containing high-resolution icons.

Figure 9. Finder window in icon view, zoomed to cover most of desktop. Folder containing high-resolution icons.

Figure 10. Finder window in list view. Miscellaneous files. No toolbar.

Figure 11. Finder window in list view. Miscellaneous files. One folder expanded to depth 1.

Figure 12. Finder window in list view. Miscellaneous files. One folder fully expanded.

Example two: Microsoft Word document windows


Figure 13. Word document window in draft view.

Figure 14. Word document window in print layout view.

Example three: structured data

  1. A .csv file, displayed as plain text
  2. The same .csv file, opened in Excel
  3. An HTML file, containing the same data, plus markup such that the data will be displayed in tabular form, displayed as plain text
  4. The same HTML file, rendered in a browser
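The items above can be sketched in a few lines. (This is an illustrative sketch, not any particular tool: the same rows are serialized once as CSV and once as an HTML table—two textual encodings of identical data, each in turn viewable either as plain text or through a renderer such as Excel or a browser.)

```python
import csv
import io

# The same tabular data, to be encoded two different ways.
rows = [["name", "size"], ["report.pdf", "12K"], ["notes.txt", "3K"]]

def to_csv(rows):
    # Plain-text CSV encoding: raw characters in a text editor,
    # a grid of cells when opened in a spreadsheet.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def to_html(rows):
    # The same data wrapped in table markup: plain text in an
    # editor, a rendered table in a browser.
    body = "".join(
        "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
        for row in rows
    )
    return f"<table>{body}</table>"
```

Neither function changes the data; each only changes its presentation—which is the whole point of this post.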

(Analysis of examples two and three left as exercise for the reader.)

January 07, 2020

Three levels of mastery

I’ve never seen this concept named, or concisely articulated, anywhere else. The idea itself is not original to me, of course.


Of any skill, or any domain where expertise of execution may be gained, there are three levels of mastery.

At the zeroth level, you break the rules, because you do not know the rules. Success is accidental; failure is likely; excellence, all but impossible.

At the first level, you know the rules, and follow them. You do well enough, though excellence is unlikely.

At the second level, you know, not just the rules, but the motivations behind them; you understand why the rules must be as they are. You follow the rules or break them, as the task demands; your actions are governed by deep principles. Success is near-effortless; excellence becomes possible, and even likely.


To achieve greater mastery, you cannot skip levels. At the zeroth level, you may look at one who has achieved the second level of mastery, and note that he routinely breaks the very rules he has instructed you to follow. Are there no rules, then? But there are; and they exist for good reasons. You will not achieve the second level of mastery before the first.

Likewise, the one who has achieved the second level of mastery says to him who has yet to achieve the first: “Do as I say, not as I do”. This is not hypocrisy. One who does not understand the three levels may think: “He is allowed to break the rules, as I am not, because of some privilege of rank”. But it is only that to think outside the box, you must know the shape of the box, its contours; if you cannot see the box, you will never escape it.

And once more: you cannot explore the space of possibilities, if you do not know its dimensions. The axes of that space are not the bars of a cage, but signposts; not seeing them, you are not infinitely free—but only doomed to wander forever in a Flatland of amorphous mediocrity.

March 27, 2019

“Screen serif” fonts

There’s a small-ish cluster of serif fonts—all of recent design, not digitizations of classic typefaces, nor even designed for (professional) print1—that people always have trouble fitting into one of the traditional categories of serif typefaces.

In appearance, it looks something like if Baskerville, a 225-year-old typeface that has been shown to shape our perception of truth, and Caecilia made a baby.

The Kindle Finally Gets Typography That Doesn’t Suck

These fonts are sort of like transitional serifs, but they’re also sort of like slab serifs, and sometimes they’re called “transitional serif but with features of a slab serif”2 etc. etc.

… a crisp, modern serif typeface … avoids the stuffiness of historical text faces and doesn’t overreach when it comes to contemporary detailing … a balanced, low-contrast typeface with economic proportions…

Elena font description

Fonts in this category share these properties:

  • fairly thick strokes in the normal weight
  • low stroke weight variation
  • serifs that are not sharply tapering nor thin and dainty, but thick (yet not geometric or square, as slab serifs)
  • relatively open counters
  • relatively large x-heights

… simply a contemporary body text font.

Tuna font description

Fonts in this category include:

… and quite a few more—see the full list (that I’ve found so far) on my wiki (and feel free to suggest additions on the Talk page!).

Some samples:

The category does not seem to have any accepted name3—yet unquestionably this is a real cluster in font-space. This blog post is meant to call attention to the cluster’s existence.

The characteristics listed above mean that fonts like this will render well across a variety of environments, software and hardware. And empirically, these fonts make for pleasing and readable body text on the web. So, at least for now (unless and until someone tells me that there’s already an accepted name), I’m calling these fonts “screen serif” fonts.

If you want your pages to be readable and attractive, try setting your body text in one of these fonts! (Again, check out my wiki for the full list—I’ll be adding more “screen serif” fonts to it as I come across them.)

1 Some of them—notably including the oldest font I’ve found that belongs to this category, Charter—are designed for consumer printing situations, i.e. laser or even inkjet printers.

3 It’s not the same as Clarendon-type fonts—though there is a good bit of similarity. In fact, Fonts.com includes Charter in the Clarendon category, but that seems to be a minority view.

June 09, 2018

A UX design puzzle for fans of SimTower

SimTower was an elevator simulation game.

OK, it actually had other things in it, not just elevators. But the elevators were the heart of it—they were the most engaging part of the gameplay, with the most complex game mechanics—and, more than anything else, it was mastery of the elevator design that would bring a player success in SimTower.

I played SimTower a lot when I was younger. Read more...

May 28, 2018

Shared interests vs. collective interests

Suppose that I, a college student, found a student organization—a chapter of Students Against a Democratic Society, perhaps. At the first meeting of SADS, we get to talking, and discover, to everyone’s delight, that all ten of us are fans of Star Trek.

This is a shared interest. Read more...

May 06, 2018

Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards

This is the second in a series of posts about lessons from my experiences in World of Warcraft. I’ve been talking about this stuff for a long time—in forum comments, in IRC conversations, etc.—and this series is my attempt to make it all a bit more legible. I’ve added footnotes to explain some of the jargon, but if anything remains incomprehensible, let me know in the comments.

Previous post in series: Goodhart’s law.


“How do we split the loot?”

That was one of the biggest challenges of raiding in World of Warcraft. Read more...

May 03, 2018

Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law

This is the first in a series of posts about lessons from my experiences in World of Warcraft. I’ve been talking about this stuff for a long time—in forum comments, in IRC conversations, etc.—and this series is my attempt to make it all a bit more legible. I’ve added footnotes to explain some of the jargon, but if anything remains incomprehensible, let me know in the comments.


World of Warcraft, especially WoW raiding1, is very much a game of numbers and details.

At first, in the very early days of WoW, people didn’t necessarily appreciate this very well, nor did they have any good way to use that fact even if they did appreciate it. (And—this bit is a tangent, but an interesting one—a lot of superstitions arose about how game mechanics worked, which abilities had which effects, what caused bosses2 to do this or that, etc.—all the usual human responses to complex phenomena where discerning causation is hard.)

And, more importantly and on-topic, there was no really good way to sift the good players from the bad; nor to improve one’s own performance. Read more...