Reply: “The Simulation Dream” by Tynan Sylvester

I was recently recommended a Gamasutra article, “The Simulation Dream,” by Tynan Sylvester.  I wanted to write down a quick response, and I’d love for it to start a discussion.

Overall, I agree with Sylvester’s article, which essentially argues that what matters in a game is what players perceive, as opposed to what the game offers.  In essence, a game may hold great potential (through extensive narrative content, intricate game mechanics, etc.), but if that potential is never acted upon, then it is, for lack of a better phrase, wasting space.  This also implies, as Sylvester suggests, that you can play with the player’s perception to evoke game content that isn’t actually there, something my colleagues and I have studied in our paper on The Illusion of Agency.  While we didn’t use the term in our paper, Sylvester references “apophenia,” the cognitive bias of perceiving relationships that aren’t actually there, as the reason for the primacy of player perception.  Apophenia is not the only cognitive bias relevant to the study of games, but it certainly merits a great deal of attention.

His comment on creating story-richness also reminds me of problems the field of Artificial Intelligence has faced in Knowledge Representation.  Sylvester writes that we should “choose the minimum representation that supports the kinds of stories you want to generate,” a problem that has beleaguered AI researchers for quite some time (although not necessarily in the context of games).

However, there are some things I disagree with, of which I will name two.

First, the statement that “anything in the Game Model that doesn’t copy into the Player Model is worthless” sounds to me like too broad a catch-all.  Perhaps this complaint is due to my desire to be precise, but the statement implies that everything in the Game Model ought to be in the Player Model (because if not, it is not worth putting in).  I think what he means is that things intended to be perceived ought to have feedback to advertise them; otherwise they risk not making it into the player’s mental model of the game, which I think is fair to say.  However, I’m taking a risk in further refining his original thoughts, since he doesn’t explicitly define the Game Model.  Does he mean the mechanics of the game?  Or all the supporting code?  The code that helps the game run smoothly is certainly not worthless, but it may never be perceived in the player’s mind.  Small tangent: part of the Game Model (I presume) includes the game’s AI, which we as developers actively try to shield from the player, in the sense that we do not want them to know how the AI “is smart,” because we want to maintain the willing suspension of disbelief.

Second, the statement that a real complex system will “constantly break the Player Model Principle” (succinctly defined as: “The whole value of a game is in the mental model of itself it projects into the player’s mind.”) strikes me as a gross over-generalization of complex systems.  While it is true that interactions in a complex system make it difficult to tease apart causal relations (and, by extension, to parse stories), I don’t think the principle will constantly break as long as the interactions with the story are actively mediated by the game itself.  I don’t think I necessarily disagree with Sylvester’s point, but I would rather have the problem of “from all the content I have, let me pick and choose what I think you would like best” than the problem of “let me come up with content that I think you would like best.”  Thus, I say I disagree with him, but what I really want is to avoid discarding complex systems, because I think there is great potential in them for cool stories that we have not even come up with yet.

A computer is excellent at bookkeeping, and it has the capacity to store relationship information between artificial story agents living in a fictional world.  A human author, by contrast, must focus on content with a limited set of characters, because the author herself is a bottleneck in terms of keeping up with intricate social relations and happenings.  Complexity, for lack of a better word, is good.
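
To make the bookkeeping point concrete, here is a minimal sketch of the kind of relationship ledger a game could maintain for its story agents.  This is purely my own illustration, not anything from Sylvester’s article or any particular engine; the class, agent names, and scoring scheme are invented for the example.

```python
# Purely illustrative sketch: a machine can bookkeep far more pairwise social
# state than a human author can comfortably hold in their head.
from collections import defaultdict
from itertools import combinations


class SocialLedger:
    """Tracks a sentiment score for every directed pair of story agents."""

    def __init__(self, agents):
        self.agents = list(agents)
        # feelings[a][b] = how agent a currently feels about agent b
        self.feelings = defaultdict(lambda: defaultdict(float))

    def record_event(self, actor, target, delta, reason):
        """Adjust how `target` feels about `actor` after something `actor` did."""
        self.feelings[target][actor] += delta
        return f"{target} now feels {self.feelings[target][actor]:+.1f} toward {actor} ({reason})"

    def tensest_pairs(self, n=3):
        """Surface the most strained relationships, i.e. likely story beats."""
        pairs = [(a, b, self.feelings[a][b]) for a, b in combinations(self.agents, 2)]
        return sorted(pairs, key=lambda p: p[2])[:n]


if __name__ == "__main__":
    ledger = SocialLedger(["Ava", "Brix", "Cole", "Dara"])
    print(ledger.record_event("Brix", "Ava", -2.0, "stole her rations"))
    print(ledger.record_event("Cole", "Dara", +1.5, "shared the watch"))
    print(ledger.tensest_pairs(1))
```

Nothing here is clever, and that is the point: scaling this ledger from four agents to four hundred costs the computer essentially nothing, while keeping that many intertwined relationships straight by hand would overwhelm any human author.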

The Simulation Dream is alive and kicking in me, and I think generous amounts of complexity are the way to achieve it.  The task is knowing where, when, and how much.

What does your gut tell you?

Lately, I’ve been encountering a problem when I mentor others.  During mentoring, I try to follow a very Rogerian approach, letting others talk through the problems they are facing rather than providing direct advice.  Like the fallen Jedi Kreia, I feel that you may inadvertently rob someone of a chance to grow if you provide too much help, and so sometimes you need to let others trip, fall, and get back up.

The problem I have found, however, is that fear of the unknown may keep others from even thinking about the consequences of their actions.  And so, when I ask people, “What do you think you should do?”, I’m often met with a blank stare, followed by the phrase “I don’t know.”

What I’ve resorted to asking (with odd success) is: “What does your gut tell you?”  On countless occasions, this has helped my mentees at least utter what they’re thinking of doing.  It’s almost as if relinquishing agency to an external entity (even one without a consciousness) makes it easier to detach and evaluate yourself objectively.  Admittedly, relinquishing agency could be thought of as what people are doing whenever they solicit advice on what to do next, but what’s interesting is that in this case it’s completely imagined; maybe people think that their gut is actually telling them something?  Or maybe it is simply easier to abdicate the choice, so as not to take direct responsibility for the consequences of the decision.

What does your gut tell *you*?