Chris’ Feedback on Campaign Puppets

I emailed Chris Crawford to get his feedback on my Campaign Puppets game idea. He replied that the design had several problems:

  1. Previous games of this type have failed.
  2. Vague or unknown voter preference algorithms make scoring and victory determination difficult, if not impossible, and/or reliant on chance.
  3. Large icon vocabulary increases player confusion.
  4. How do you show the relationship between words, i.e., grammar?
  5. Large vocabulary leads to problems with language comprehension algorithms

Chris raises some very good points which I’ll try to start addressing below.

Prior Failures (#1)

Previous games in this genre have appealed to a niche audience, those interested in political elections, and focused on resource management–opening campaign offices, running ads, raising funds, giving speeches–and moving the candidate between the states to raise their poll numbers.

Campaign Puppets focuses on “People not Things,” specifically on the primary candidates and the one area where their direct actions can affect their campaigns–the candidate debates. The game will still appeal to a niche audience (though the debate audience might be larger than the campaign manager audience) and the focus is on conversation which, in today’s political climate, includes trash talk.

There are many more things that have to be done before it can be determined whether this design is a success or a failure (some of which are listed below). Right now Campaign Puppets looks like an interesting experiment. Several things need to be hashed out before it can become a reality.

Voter Preference Algorithms (#2)

I hadn’t even thought about this until Chris mentioned it. My previous attempts to create polling algorithms for Camelot were incomplete, but reviewing them gave me an appreciation of how difficult this work might be.

For Campaign Puppets I want a real-time display of candidate rankings during the debate. At the end of the debate the highest-ranked candidate is the winner, which means the electorate believes that they should be the party’s candidate in the general election.

[Mockup of the ranking display: five candidate portraits at 43%, 22%, 17%, 13%, and 5%]

In order to do this I’ll need to know how candidate reactions are affecting the electorate, which means I’ll need an electorate population with positions on certain issues.
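To make the problem concrete, here’s a minimal sketch of one way it could work, not the actual design: each simulated voter and each candidate has a position on a few issues, and a candidate’s ranking is the share of voters currently closest to them. The issue names, population size, and distance measure are all placeholders.

```python
import random

ISSUES = ["economy", "healthcare", "immigration"]  # placeholder issues

def random_positions():
    # Positions range from -1.0 (strongly against) to +1.0 (strongly for).
    return {issue: random.uniform(-1.0, 1.0) for issue in ISSUES}

# A simulated electorate and a handful of candidates, all with random positions.
electorate = [random_positions() for _ in range(1000)]
candidates = {name: random_positions() for name in ["A", "B", "C", "D", "E"]}

def distance(voter, candidate):
    # How far apart a voter and a candidate are across all issues.
    return sum(abs(voter[i] - candidate[i]) for i in ISSUES)

def rankings(electorate, candidates):
    # Each voter "supports" whichever candidate is currently closest to them;
    # the on-screen ranking is each candidate's share of supporters.
    counts = {name: 0 for name in candidates}
    for voter in electorate:
        closest = min(candidates, key=lambda name: distance(voter, candidates[name]))
        counts[closest] += 1
    total = len(electorate)
    return {name: round(100 * n / total) for name, n in counts.items()}

print(rankings(electorate, candidates))  # e.g. {'A': 23, 'B': 18, ...}
```

During the debate a candidate’s replies would nudge how the electorate perceives their positions, and the rankings would be recomputed for the real-time display.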

I’ve got some ideas building on my previous work (see the link at the beginning of this section). I’ll write up another blog post specifically addressing this issue.

Large Icon Vocabulary (#3)

The icons displayed in the inverse parser are not the final icons but merely placeholders. They’re there to give the player an idea of how and where they’ll create a candidate’s responses. At this point I don’t know what the final “words” will be or how many there will be.

Whatever the final word count turns out to be, I believe that an inverse parser will reduce confusion since it puts the player in the driver’s seat, allowing them to visually construct sentences from only the appropriate words for that particular part of the sentence.
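The core idea can be sketched in a few lines: at each slot in the sentence the player is shown only the words that remain valid given what has already been chosen. The tiny vocabulary and grammar below are made-up placeholders, not the game’s real words.

```python
# A minimal sketch of an inverse parser: each sentence slot only offers
# words that are valid given the choices made so far.

GRAMMAR = {
    "Verb": ["attack", "praise", "deflect"],
    # Valid direct objects depend on which verb was chosen.
    "DirectObject": {
        "attack": ["opponent A", "opponent B"],
        "praise": ["my record", "the voters"],
        "deflect": ["the question", "the moderator"],
    },
}

def valid_choices(slot, sentence_so_far):
    """Return only the words the player is allowed to pick for this slot."""
    if slot == "Verb":
        return GRAMMAR["Verb"]
    if slot == "DirectObject":
        return GRAMMAR["DirectObject"][sentence_so_far["Verb"]]
    return []

# Build a sentence slot by slot; the player never sees an invalid option.
sentence = {}
sentence["Verb"] = valid_choices("Verb", sentence)[0]                   # player picks "attack"
sentence["DirectObject"] = valid_choices("DirectObject", sentence)[0]   # only opponents are offered
print(sentence)  # {'Verb': 'attack', 'DirectObject': 'opponent A'}
```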

Here’s an example of an inverse parser in action creating a sentence from the Storytron version of Siboot.

[Image: storytron-siboot-inverse-parser-1]

  1. You’re given a list of valid choices for the sentence’s Verb.
  2. You’re given a list of valid choices for the sentence’s Direct Object (in this case it’s characters who aren’t at the current location).
  3. Piece by piece you construct a complete sentence from only valid options. Here we’re about to select the final word of the sentence before having the character “say” it.

The video below shows the Storytron Siboot inverse parser in action. You’ll see how easy it is to undo previous selections if you change your mind, and that at no point is the user unclear about which options they can select.

This video of Chris’ work on the latest incarnation of Siboot shows how the same type of inverse parser would work with icons.

[INSERT VIDEO HERE]

Grammar (#4)

Grammar is the syntax and structure of a language. In Storytron the language was sentence-based, with each sentence being composed of a Subject, a Verb, and one or more storyteller-configurable WordSockets. Here’s an example, the “offer to reveal” verb illustrated in the inverse parser above.

[Image: storytron-siboot-verb-properties-1]

The focus in Storytron was conversations between characters. Campaign Puppets is more about swaying an audience, about rhetoric and debate. But it still boils down to what the candidate says in reply.

I’ll have to give this some additional thought and see what the verbs might be, which will help me determine whether candidate replies can fit into the Storytron sentence model (which is quite flexible) or whether I have to invent something new.
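For my own notes, here’s a rough sketch of what a candidate reply might look like if it does fit the Subject/Verb/WordSocket shape. The socket names and values (“Topic”, “Target”, “Tone”) are guesses for illustration, not a worked-out grammar.

```python
from dataclasses import dataclass, field

@dataclass
class Sentence:
    # The Storytron-style shape: a Subject, a Verb, and a set of
    # storyteller-configurable WordSockets hanging off the verb.
    subject: str
    verb: str
    word_sockets: dict = field(default_factory=dict)

# A guessed-at debate reply; the socket names are placeholders until the
# real verb list is worked out.
reply = Sentence(
    subject="Candidate A",
    verb="rebut",
    word_sockets={"Topic": "healthcare", "Target": "Candidate B", "Tone": "sarcastic"},
)
print(f"{reply.subject} {reply.verb}s {reply.word_sockets['Target']} on {reply.word_sockets['Topic']}")
```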

Language Comprehension Algorithms (#5)

Teen Talk had a very simple language–“X says that s/he likes Y by amount Z”–and the code behind that language reflected its simplicity. The code to parse a Campaign Puppets statement might be several orders of magnitude larger, but I won’t know how much until I’ve done some additional work.
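As a point of comparison, comprehension for a Teen Talk-style statement could be little more than unpacking three fields. This is a hedged reconstruction of that scale, not the actual Teen Talk code.

```python
# When the language is just "X says that s/he likes Y by amount Z",
# the whole statement is three fields and comprehension is trivial.

def comprehend(statement):
    speaker, target, amount = statement          # unpack the three parts
    return f"{speaker} likes {target} by {amount:+.1f}"

print(comprehend(("Alice", "Bob", 0.6)))   # Alice likes Bob by +0.6

# A Campaign Puppets reply, by contrast, would carry a verb plus several
# WordSockets (topic, target, tone, ...), so the comprehension code has to
# handle far more combinations.
```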
