Monday, December 3, 2012

Starting AI

Took the starting steps to implement the storytelling AI. There's a lot left to do, but I've been seeing tremendous results even in these early stages.

Right now I've done two things:

  • Removed the hardcoded "turn" termination of the program. Before now, the program ran until each character had been granted a fixed number of turns. Now I've adopted a "storytelling" approach based on screenwriting structure - a story is built on 3 major events, commonly referred to as "the inciting incident," "plot point 1," and "plot point 2." As a result, the simulation now runs until three actions of AIV level 5 (the max) have been performed; after the turn of the third one, the simulation ends. This is rudimentary - I still need to figure out a way to allow for the story's "conclusion" - but so far it's proved incredibly effective at keeping the focus of the "story" fairly narrow.
  • Made it so any actions below a certain AIV don't print. It's amazing how much of a difference this makes... At some point I'll try a test where I output the same "story" with and without this suppression, to make clear just how important the omission of irrelevant/boring details is to telling a good story. This is also far from complete... In the finished version, we need to select a "main character" (in a game, this would always be the player), then construct a tree of degrees of separation (Kevin Bacon style) in which the lower a character's degree, the lower the AIV threshold for printing their actions. That is, any AIV prints for the main character, only AIV above 2 prints for a character of degree 1, only AIV above 4 for a character of degree 2, and for degrees 3 and higher we never see what they do. Right now we're just equally suppressing everyone's "uninteresting" actions, which goes a long way towards telling a story but doesn't factor in concepts of protagonists and story focus. That comes next.
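To make these two changes concrete, here's a rough sketch in Python (the project itself is in C#; the event records, field names, and exact thresholds below are all hypothetical stand-ins for illustration):

```python
# Sketch of the two changes above: AIV-based suppression by degree of
# separation, and termination after three AIV-5 "plot point" actions.

MAX_AIV = 5

def print_threshold(degree):
    """Degree-of-separation print rule: the main character (degree 0)
    always prints, degree 1 prints AIV above 2, degree 2 prints AIV
    above 4, and degree 3+ never prints."""
    return {0: 0, 1: 2, 2: 4}.get(degree, MAX_AIV)

def run_story(events):
    """Run until three AIV-5 actions have occurred, collecting only
    the events important enough to tell. Each event is a tuple of
    (description, aiv, degree-from-main-character)."""
    plot_points = 0
    told = []
    for desc, aiv, degree in events:
        if aiv > print_threshold(degree):
            told.append(desc)
        if aiv == MAX_AIV:
            plot_points += 1
            if plot_points == 3:
                break   # third major event: the story is over
    return told
```

The real version would generate events turn by turn rather than consuming a prebuilt list, but the filtering and termination logic would look much the same.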

The Unity Problem

I've discussed in the past my fears concerning Unity and how difficult it would be to use. Those fears were mostly quelled thanks to a one on one meeting with Aline a couple of weeks back, though one or two were also substantiated.

Aline sat down with me and helped me build a basic framework for a character walking around a scene and a movable camera that I would theoretically be able to expand to incorporate my simulation. Unfortunately, we hit trouble when Aline tried to help me port the program back to my computer. Luckily, she was able to get it up and running on my PC, and while I'm still not entirely sure I understood how she set up some of the more complex animations, I believe she taught me enough about how the program works that I could start moving the code over without any major catastrophes. If you are reading this, Aline, thank you so much for the foothold into Unity. It's given me a lot of confidence about the graphics side of things, and I'm sure will be invaluable going forward.

I've started shaping up my code to work the way the Unity simulation works, in preparation for porting it over. Characters no longer look for other free characters to start interactions with, but wander from (x,y) location to (x,y) location, randomly generated on a predefined grid (room, playing field, whatever you want to call it - this will be defined in the world .txt file starting soon). When a character takes a turn to look for a free interaction partner, they check only the character closest to them. If that character is busy, they continue to wander; if not, they stop and talk. Once they reach a selected point, a new one is generated for them to wander to. I chose this wandering implementation to coincide with the Unity file I set up with Aline, where a character needs a destination point passed in.
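Here's a rough Python mock-up of that wandering loop (the real implementation is in C# for Unity; every class and method name here is a hypothetical stand-in):

```python
import math
import random

class Character:
    """Wanders between random grid points; on its turn it checks only
    the single nearest character for a free interaction."""

    def __init__(self, name, grid_w, grid_h):
        self.name = name
        self.grid_w, self.grid_h = grid_w, grid_h
        self.speed = 0.5                       # placeholder step size
        self.pos = self._random_point()
        self.target = self._random_point()     # destination passed to Unity
        self.busy = False

    def _random_point(self):
        return (random.uniform(0, self.grid_w), random.uniform(0, self.grid_h))

    def nearest(self, others):
        return min(others, key=lambda o: math.dist(self.pos, o.pos))

    def take_turn(self, others):
        partner = self.nearest(others)
        if not self.busy and not partner.busy:
            self.busy = partner.busy = True    # stop and talk
            return partner
        # otherwise keep wandering: step toward the target point
        dist = math.dist(self.pos, self.target)
        if dist < self.speed:
            self.pos = self.target
            self.target = self._random_point() # arrived: pick a new point
        else:
            dx = (self.target[0] - self.pos[0]) / dist
            dy = (self.target[1] - self.pos[1]) / dist
            self.pos = (self.pos[0] + dx * self.speed,
                        self.pos[1] + dy * self.speed)
        return None
```

In the Unity version, `self.target` is exactly the "set point to walk to" that gets passed into the walking animation Aline helped me set up.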

At some point I'm going to have to address user functionality. It's looking like, if I work over my vacation, I might be able to enter next semester having a very basic "story generator" implemented using Unity. If that's the case, based on a conversation I had with Norm and Aline at the "Beta" review, I might shift gears and try to actually apply this generator to something by implementing a Unity based mini-game, something like a detective story where the murder/murderer/circumstances are different every time.

Back Where We Were

So last week I presented my "Beta" (only not really, because the majority of my work will be done next semester). There's a lot that's happened since the last time, I suddenly realize. I think I forgot to do a blog the week of the beta, but we'll rectify that now.

I'm back where I was in terms of the code, which is to say I have a functioning text based simulation that generates randomized interactions for characters for a given number of turns. It's radically different in that the construction of the world is done entirely outside of the code in .txt files of a format I created myself (and outlined in past blog posts). Here's an example of a simple simulation world structure:

emotions
3
love
hate
boredom

actions
3
kill.txt
converse.txt
search.txt

characters
3
Bob
m
random

end


A world file contains the names of emotions, the names of the text files containing actions, and then the characters, each section preceded by a count. Characters can be explicitly defined with a name and gender. If, after a certain number of defined characters, the person building the world would rather just randomize the rest, they can put in "random" to randomly generate characters until the requested number has been reached.
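As an illustration, a minimal parser for this format might look like the following (a hypothetical Python sketch; the real version lives in the C# code):

```python
import random

def parse_world(lines):
    """Parse the world format above: sections named "emotions",
    "actions", and "characters", each followed by a count and that
    many entries; "random" pads out the character list."""
    it = iter(l.strip() for l in lines if l.strip())
    world = {}
    for section in it:
        if section == "end":
            break
        count = int(next(it))
        entries = []
        while len(entries) < count:
            entry = next(it)
            if section == "characters" and entry == "random":
                # pad with randomly generated characters
                while len(entries) < count:
                    entries.append(("NPC%d" % len(entries),
                                    random.choice("mf")))
                break
            if section == "characters":
                entries.append((entry, next(it)))  # name, then gender line
            else:
                entries.append(entry)
        world[section] = entries
    return world
```

Run on the sample file above, this yields three emotions, three action files, and three characters (Bob plus two randomized ones).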

Here's an example of a simple action file:

converse
1
talks to
1
boredom
30
.05
.05
partner
end

The action file has the name of the action, the "Action Importance Value" I discussed in an earlier post (important for storytelling), and a text phrase for the implementation of the action (to be replaced with a file linking to a specific animation in a fully implemented solution - since I won't be generating these animations, I'll probably just leave them as text expressions). Next it lists all emotions relevant to the action: first the name, then the ideal value for performing this action (the closer the character's emotional state is to the ideal value, the more likely they are to select that action), then how much that value should be increased for the character and for the interaction partner. Finally, there is a list of specific commands (partner/solo, samesex/oppositesex, kill, suicide, stop) that specify when the action can be used and which built-in behaviors (like killing) it triggers when implemented.
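The same kind of parsing sketch works for action files (again hypothetical Python; the field order follows the description above):

```python
def parse_action(lines):
    """Parse an action file: name, AIV, text phrase, emotion count,
    then per emotion (name, ideal value, self delta, partner delta),
    then command flags until "end"."""
    it = iter(l.strip() for l in lines if l.strip())
    action = {
        "name": next(it),
        "aiv": int(next(it)),
        "phrase": next(it),
        "emotions": [],
        "commands": [],
    }
    for _ in range(int(next(it))):
        action["emotions"].append({
            "name": next(it),
            "ideal": float(next(it)),
            "self_delta": float(next(it)),
            "partner_delta": float(next(it)),
        })
    for cmd in it:
        if cmd == "end":
            break
        action["commands"].append(cmd)
    return action
```

Feeding it the converse example above would produce an AIV-1 action with one relevant emotion (boredom, ideal 30) and the "partner" flag.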


Below is a "story" generated using just three or four actions and emotions. Whereas adding new emotions and actions in the previous implementation could take hours, adding a new one of either now takes a matter of minutes, and doesn't require the code to be recompiled. Ignore the "0,0" printed between every turn cycle... That was a test of something that will come up in a later post.




Wednesday, November 14, 2012

Starting Again

Oops! Realized I forgot to update my blog on Monday, so here it goes for this week:

I've been given two options for "goals to achieve by the beta review". One is having a graphical component up and running in ADAPT, and the other seems to be "get to where I was in the Alpha, but with extendable emotions and actions."

I'm leaning towards the second option, though I'm not sure if it's for the right reasons. The first reason is that I've already started implementing the new version of the code based on my notes from last week, so I'm well on my way. Another reason I want to pursue this implementation in place of graphics is that implementing last week's notes is a huge step towards writing the storytelling A.I., since it will incorporate action importance into the simulation and allow for a way to distinguish between plot points, subplots, and extraneous data.

On the other hand, I worry that a big reason why I'd prefer the second option is just that I'm avoiding doing graphics. I've never used Unity before, and I think fear of the unknown might be causing me to put the ADAPT implementation on the backburner. If that's the case, maybe I should familiarize myself with ADAPT now, just to take myself out of my comfort zone and figure out the one part of this project that I'm still unsure of.

I've e-mailed Aline, and am waiting to hear her thoughts on what the next best step is. In the meantime, I'll continue working on my implementation of extendable interactions, as I've been doing.

Tuesday, November 6, 2012

Restructuring

I received an e-mail from Aline after my last blog post saying that extendable actions and emotions should be a much higher priority than I give them credit for. Unfortunately, I can't implement them with the code that I've written thus far, so I'm going to have to start writing a new implementation from scratch (using most of the concepts I've already come up with in this blog).

I've been giving a lot of thought to how one might make actions and emotions extendable. After all, "fighting" couldn't possibly have the same results as "flirting". Actions that affect emotions, like yelling and kissing, are fine, but what about actions like killing, that result in someone's death?

I decided there need to be a certain number of actions - like "dying" - that are hard coded into the program, but that the emotions and actions will be able to access in certain ways. The program, as I need to implement it now, will accept a text file that lists the number of emotions in the simulation and the number of actions in the simulation. Following the number of emotions will be the names of the emotions, which will automatically be added to each character. Emotions will now be a class of their own, with a name and a value. This will be a good time to specify emotions between individuals... I can make another class for the full emotional spectrum, filled with emotions determined by the text file. Each character can have just one at first, simulating the code I have working right now, but then it will be relatively easy to create a hashmap linking character keys to emotional spectrums, allowing every character to feel differently about every other character. I'll probably have to update the algorithms slightly to include baseline emotions, since when Person 1 makes Person 2 mad, Person 2 will be angry at Person 1 and not Person 3, but will still be in a bad mood around Person 3.
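A quick Python mock-up of what those classes might look like (the names are hypothetical; the real classes will be C#):

```python
class Spectrum:
    """A full emotional spectrum, with emotions determined by the
    world text file."""

    def __init__(self, emotion_names):
        self.values = {name: 0.0 for name in emotion_names}

    def adjust(self, emotion, delta):
        self.values[emotion] += delta

class Character:
    """A character with a baseline mood plus a hashmap from other
    characters to a targeted emotional spectrum."""

    def __init__(self, name, emotion_names):
        self.name = name
        self.emotion_names = emotion_names
        self.baseline = Spectrum(emotion_names)  # general mood
        self.toward = {}                         # per-character feelings

    def feeling_for(self, other):
        # lazily create a spectrum for each new acquaintance
        if other not in self.toward:
            self.toward[other] = Spectrum(self.emotion_names)
        return self.toward[other]

    def mood_around(self, other):
        # baseline + targeted feelings: Person 2 is angry AT Person 1
        # specifically, but still in a bad mood around Person 3
        spec = self.feeling_for(other)
        return {e: self.baseline.values[e] + spec.values[e]
                for e in self.emotion_names}
```

The `mood_around` sum is one simple way to get the "bad mood around everyone, but angriest at the person who caused it" behavior described above.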

Actions are trickier. After the number of actions, I'll have a list of file names of text files for actions that need to be implemented in the game. Actions will now be created as their own classes as well. I think the way I'll need to implement them is to cycle through all of the actions that exist in the game, plug in the emotions of the characters, and have each action spit out a separate (slightly randomized) probability. Then the action with the highest probability takes effect for that turn.
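Sketched in Python (hypothetical names and scoring; the real thing will be C#), that selection loop might look like:

```python
import random

def score(action, emotions, jitter=0.1):
    """The closer a character's emotional state is to the action's
    ideal values, the higher the score; a little noise keeps the
    choice slightly randomized."""
    distance = sum(abs(emotions.get(name, 0.0) - ideal)
                   for name, ideal in action["ideals"].items())
    return 1.0 / (1.0 + distance) + random.uniform(0, jitter)

def choose_action(actions, emotions, jitter=0.1):
    """Cycle through every action and take the highest scorer."""
    return max(actions, key=lambda a: score(a, emotions, jitter))
```

The inverse-distance scoring here is just one placeholder; any function that rises as the emotional state approaches the ideal distribution would do.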

The action text file (and, by extension, the action class) will need a name for the action, a list of all the emotions that affect it, whether it is an action that requires an interaction partner, and what the ideal emotional distribution would be for said action to start off. (For instance, Fighting would probably be 100% Anger, but maybe playful insults are 25% anger, 75% love.) The closer the character is to this distribution, the more likely it will be to perform said action. Next, we need the effect that the action has on others. This will be another list of emotions: "-.25 Love; 2 Hate" would decrease love 25% and double hate values. Then there would be a list of built-in-software actions that the character performs on themselves as part of the action if it successfully completes, and a list of ones they can perform on their interaction partner. These are actions that really need to be hard coded into the program, but can be included in these extendable actions - things like "seek interaction partner," "stop interaction," "kill," and others that I still need to think up (next week I'll attempt a list of these built-in fundamental actions, in addition to starting this implementation). Continuing last week's thoughts on the storytelling A.I., this is where we'd include an Action Importance Value (AIV) to weigh how important the performance of this action is to a narrative structure. Finally, we'll need an indication of how this action will look. In a graphics-based system, this line would be a reference to an animation that would play along with the action, but since ADAPT won't have any animations that match my actions, for now I'll just keep them as Strings that contain language like "flirted with" or "attacked," so the end result is floating text over the characters' heads that reads "Person 1 attacked Person 2".

I'm pretty sure that this implementation will work. And what's more, I think it'll be genuinely easy to build on and extend out. I'm fairly excited about this new direction, even if it means more or less starting over.

Monday, October 29, 2012

Thoughts on Storytelling AI Structure

Second blog update of the day.
I've been thinking about how to best implement my storytelling AI, and I've thought of two separate possibilities:

A) Social Network Structure
Based on the "six degrees of separation", this version of the storytelling A.I. selects a main character and focuses on their "social network". The main character selection process could be random, but most likely it will be based somehow on character emotional levels. This will require some level of trial and error... I'm not sure yet whether a good protagonist is someone who starts off emotional, or someone who starts off calm (because it leaves them with more room for emotional change). My experience as a writer leads me to believe it is the latter. Ideally, a complex storytelling A.I. would allow the system to run for a while before the story starts, with a number of "calm" potential protagonists. Then, the first time a calm character experiences a substantial emotional change, they become the main character of the story. If we do that, though, we might need to implement a "live delay," remembering a few steps back for each character until the main character is selected. Otherwise, we might lose some buildup, starting with our main character getting mad and missing why. For now, I think the best way to do this is to pick the main character based on being "calm initially," so that they have the most room for growth and we don't miss out on story buildup. Many, many stories start with a character wandering around aimlessly... I'd hate to cut that out by jumping right to interesting interactions.

Then, we show all story elements of the main character, and of all characters that have interacted with the main character, and of all characters that have interacted with those characters, up to a certain degree n. That way, in a world of thousands of characters, we only see up to a handful of them (up to maybe only the third degree). The story continues until the main character resumes equilibrium (comes full circle), having learned their lesson... Or dies, and ceases to exist.
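A sketch of that degree computation (hypothetical Python; a breadth-first walk over who-has-interacted-with-whom, capped at degree n):

```python
from collections import deque

def degrees_from(main, interactions, max_degree=3):
    """interactions: dict mapping each character to the characters
    they have interacted with. Returns {character: degree}; anyone
    beyond max_degree is simply left out, so their story elements
    are never shown."""
    degree = {main: 0}
    queue = deque([main])
    while queue:
        current = queue.popleft()
        if degree[current] == max_degree:
            continue   # don't expand past the visibility cutoff
        for other in interactions.get(current, ()):
            if other not in degree:
                degree[other] = degree[current] + 1
                queue.append(other)
    return degree
```

In a world of thousands of characters, only the handful inside this map would ever appear in the told story.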

B) Weighting Important Story Actions
This time we select what to focus on based not on characters but on their actions. If a character talks with someone else, we don't need to hear about it. If they flirt, we might want to hear about it. If they kill someone, we definitely want to know. For this to work, we need to assign certain actions "importance levels" that correspond to beats in story structure. On a 1-10 scale, a level 10 action is always an important story beat. Any time it appears, it will be a plot point, and it is strong enough to open or close a story. A level 5 beat is important enough to hear about, but doesn't drive the story forward (the AI won't advance the story on its account). If a level 1 beat occurs, it is suppressed; no one needs to know about such low level occurrences. The intermediate importance levels allow for nuances in storytelling. If a level 7 element happens at the same time as a level 10, the level 7 element might be suppressed or undone so that the level 10 element becomes the plot point and drives the story into the next act. If both drove the story forward, the "act" would have lasted less than one turn.
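One possible version of that per-turn rule, sketched in Python (the exact suppression policy here is a hypothetical simplification - it silences every competing beat on a plot-point turn, where the final system might only demote the closest rivals):

```python
PLOT_POINT = 10
SUPPRESS_BELOW = 2   # level 1 beats never print

def filter_turn(events):
    """events: list of (description, importance level) for one turn.
    Level 1 beats are always dropped, and if a level 10 plot point
    lands this turn, it drives the act alone."""
    has_plot_point = any(level == PLOT_POINT for _, level in events)
    kept = []
    for desc, level in events:
        if level < SUPPRESS_BELOW:
            continue
        if has_plot_point and level != PLOT_POINT:
            continue   # suppressed so the act isn't driven twice
        kept.append(desc)
    return kept
```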

C) Combination Weighting-Social Structure System
Thinking about it, I realize what we probably need is a system that combines these two trains of thought. Weighting important actions is probably the only way to have the storytelling AI know when to create the beginning, middle, and end, but those terms mean nothing unless those actions involve a singular character (or group of characters) that we care about. I think the final system needs to weigh important story actions, and use those to determine beginning, middle, and end points for the "A" plot (anything involving the main character directly) and any "B" plots (stories featuring second degree characters with beginnings, middles, and ends, all of which are second priority to the A plot).

Now that I have a plan mapped out to implement the Storytelling A.I., I think I might want to actually implement User Interactivity first. It sounds like more fun, and it will let the user be the main character when storytelling A.I. gets implemented.

Checkpoint and Goal Updates

I met with Norm and Aline last week to give an update on what I have so far. They seemed pleased with my text-based character simulations so far, and gave me a number of ways I could expand my work outward from here. Here are some of the things they said I could (or should) try to implement over the coming weeks:

  • User interaction. This is where it being a "game" comes in. Conceptually, this is pretty simple. Add in a character stand-in for the player (for added fun, let them customize their own name and gender), and each turn ask them what they want to do instead of generating an action for them probabilistically. Emotions for a player character would have no effect on their actions (since those are decided), but actions they perform affect the emotions of other characters.
  • Smart Camera. Potentially based on Dan Markowitz's paper about in world camera motions, this would use "camera as narrative guide," or storytelling AI as cinematographer. The idea being, much like in the text based version of my program the AI chooses what lines of text to show and which to omit, in the graphics based version the AI chooses what scenes to train the camera on and which ones to let slip. For potential added merit, I could try intercuts - cutting back and forth between two "important" scenes - but this already sounds incredibly difficult. This requires me to port my code to Unity and graphics first.
  • Extendable Actions/Emotions. This one actually sounds the most challenging to me, and I'm not sure if I'll be able to do it the way my code is currently structured. The idea is really interesting... Let anyone create a text file that defines actions and ramifications, and then that action and those ramifications get folded into the interactions and, by extension, the storytelling program. The problem is that actions and emotions don't all function the same way. "Fighting" has a completely different effect on the world than any "romance" style interaction, since "fighting" has a winner and a loser, and can result in a character's death. I'm also not sure how to implement this with the story AI, since the AI needs to recognize story beats.
On top of this, I've still got my own expansions to my system that I need/want to implement:
  • The storytelling AI. This is the entire basis of the project, and is therefore my first priority.
  • Finding a way to show off the software through Unity and simplified ADAPT.
  • Fleshing out the interaction simulation (in particular, making fighting and character emotions more robust).
This is the order in which I'm choosing to prioritize moving forward:
  1. Storytelling AI. I still need to figure out an algorithm/technique for getting a story "recognized" and told optimally. I still haven't found any papers that attempt this, which is why this is so exciting to me. I have to develop this technique on my own.
  2. User Interaction. All things considered this should be simple enough, and will make showing off the code much more interesting.
  3. Showing off my code in Unity/ADAPT.
  4. Fleshing out the interaction simulation
  5. "Smart Camera"
  6. Extendable Actions/Emotions
More later!

Tuesday, October 23, 2012

Early Results!

Some early results! I've implemented anger, lust, talking, flirting, loving and hateful actions. I haven't done "fighting" quite yet, as it involves further complications in terms of who defeats who and how. Included below are early results as I was coding them. Which is to say, the top ones are the oldest results, and the bottom ones are the most recent.

Funniest bug: "Romney compliments Romney. Romney ends the conversation."


Pretty boring interactions. Like begets like, so flirting begets more flirting, but this required more levels of nuance to the behaviors.





Finally, I made it so that characters can start conversations for themselves based on boredom. As they walk around, they calm down slowly (lose anger and lust) but their boredom increases until they are forced to start a conversation with someone new. Conversations can only be held between two people at a time. Also, any number of characters can now be in a scene at once.

I definitely want to make the world and character interactions more robust (fighting, killing, multiple rooms, items, specific relationships, etc.) but right now I theoretically have enough to start my storytelling A.I. if I needed to. I could create 50 randomly generated characters in one room, have infinite steps (the ones above have 10 each), and design my AI to find a main character, a beginning, a middle, and an end.

Monday, October 15, 2012

The Shift to C#, The Start of Programming

I met with Alex late last week to talk about ADAPT and possible graphical applications for my project. What resulted was a really difficult discussion concerning my project and potential graphical representations of it. We both agreed that graphics were not the primary concern of my program, but that the presentation would benefit greatly from some sort of graphics-based representation. After a discussion of ADAPT, its benefits and its limitations, he admitted that he thought ADAPT might be overkill for something like my story based project. The ADAPT framework allows characters to go to a directed point - useful, given my project, though I'd prefer a way to get them to wander randomly - and can reach or turn their heads. I can't customize the graphics, though, and they don't have any animations that correspond to all the things I would need them to do ("reaching" and "looking" don't really drive story).

Alex recommended I play around with Unity and a more basic, older version of ADAPT, as well as some 2D graphical representation techniques, to see what I'm most comfortable with. Ultimately, he recommended the older version of ADAPT that only includes people walking, and taught me briefly how to display text over people's heads so that I can have their names and the actions they are performing display (i.e. Beth compliments Jack, Jack flirts with Beth). While this isn't the most elegant way to show how the storytelling system works, it might be the best way to show off its possible applications.

Since ultimately Alex thought I should use Unity and ADAPT for the graphical component of my project, I asked again whether starting in Python was really the best idea (particularly because my Python skills are pretty rusty). Thinking it over, Alex decided I should ultimately just start in C#, since Unity uses C# and it would make my code much easier to eventually integrate into the system.

Unfortunately, I've never programmed in C# before. I spent the weekend going over the language and discovered it's not all that different from Java (I'd say it was closer to that than C++, which is actually my preferred coding language). I've managed to create characters that begin with randomly generated names, genders, and emotions, though these emotions can be hard coded to allow for a "writer" (Pam and Jim love each other, but Dwight hates Jim... what happens?). I'm starting to implement interactions in the world class, starting with conversing. Conversing will be hard, since it's an action that keeps going until it is broken out of, and since it is affected and affects the moods of the participants. Also, because conversations need to be so nuanced... Flirtation and complimenting need to be different actions with different intentions but perhaps similar results. Different levels of anger might cause someone to storm off, but it might cause them to punch the other person... what causes those differences? I've actually written the base code for people conversing, but I'm tweaking random conversation variables to make the emotional reactions more "realistic". Next I'll implement just a handful of more interactions - one for anger (fighting?) and one for love (kissing?) - and then I'll work on an engine that steps through "turns" of character interactions.

Tuesday, October 9, 2012

Getting Started

I've put into motion the beginnings of my project, but I'm still mainly in the design phase.

I've been hashing out pages and pages of design documentation for potential character interactions and behavior controls. As a reminder, my project creates a "primordial stew" of simulated character behavior and actions, and then the storytelling AI recognizes story elements and suppresses any interactions that don't further the story (without actively controlling where the story goes). After talking with Norm and Alex, I'm going to start off by creating the basic, ungoverned character interactions in Python and then experiment with my storytelling AI in text based format before finally attempting to apply this system to graphics.

I've downloaded Python tools and I'm getting reacquainted with the language (it's been several years since I last used it), and so far things are going well. I'm going to start programming "The Sandbox" (what I'm calling the semi-randomized character interactions) later this week based on the designs I've laid out. A brief description follows:

For now my characters will be driven by three emotional levels - boredom, anger, and lust. These states will be values between 0 and 1, and will affect the probabilities of certain actions occurring (though there will always be an element of chance). Boredom will control when a character initiates or leaves a conversation or a room (I'd like the sandbox to have a number of rooms, though this might have to wait until after the demonstration next week). Boredom rises as time goes on and is reduced by actions being performed and the change of the character's emotional state. Anger makes it more likely for a character to be mean to another (more on "verbal" interactions later) or to attack another (the intensity of the attack/the type of attack change based on levels of anger). Anger is increased by unpleasant verbal interactions or having romantic advances spurned, and abates slowly over time. Lust controls a character's romantic interactions with another, and - like anger - these interactions can be different based on the intensity of the emotion (from flirting to sleeping together). For now, lust will be increased by having pleasant verbal interactions with a member of the opposite sex. Eventually I would like to replace "anger" and "lust" with "hate" and "love", more specific emotions that refer specifically to how characters feel about one another (stored in arrays). I hope to implement more specific emotions by next week, but I need to get the basics set in stone first. I have notes on how to implement relationships, affairs, and jealousy once the transition has been completed to "hate" and "love". I believe this will be hugely important to do before I move on to the storytelling AI... relationships tend to be at the heart of most stories.
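A minimal Python sketch of that per-turn drift, with values clamped to [0, 1] (the rates here are placeholders I'd tune later):

```python
def clamp(x):
    """Keep an emotional level inside the 0-to-1 range."""
    return max(0.0, min(1.0, x))

def idle_tick(emotions, boredom_rate=0.05, decay_rate=0.02):
    """One idle turn: boredom rises as time goes on, while anger
    and lust abate slowly."""
    return {
        "boredom": clamp(emotions["boredom"] + boredom_rate),
        "anger": clamp(emotions["anger"] - decay_rate),
        "lust": clamp(emotions["lust"] - decay_rate),
    }
```

Actions would then apply their own deltas on top of this baseline drift, so a character left alone drifts toward bored-but-calm until boredom forces them into a conversation.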

It's important that when characters engage in a conversation, the conversation isn't just "talk." They must engage one another in a way that reflects their emotional states and therefore affects the emotional state of the other. This way emotions change, get heated, and drive the characters into action. These interactions, like actions, will be affected by emotions. Characters can flirt, yell at each other, insult each other, compliment each other, or just make idle conversation. When a character begins a conversation, the interaction will keep going (again, driven by emotion with a randomized component) until the conversation is broken by boredom or an action (storming off, an attack, a kiss, etc).

Finally, characters must be able to die. If one character kills another, the dead character can no longer interact with the others, but the effects of its death must be felt. I might at some point add a sadness or fear emotion that would be triggered in the killer, but with just "love" and "hate" implemented, the way characters treat each other could already be greatly affected by the death of one (anger or love grows towards the killer based on how a character feels about the victim).

By next week, I want to be able to drop a bunch of characters into an environment, give them a few turns in which they run free, and have them interact with each other in partially randomized but emotionally driven ways. At that point, I will be able to start writing my storytelling AI. Eventually, I want to implement more emotions and story elements... Items would be greatly helpful, though not necessarily important at this early stage. Existing relationships (either randomized or pre-set by a "writer") would also be interesting, as it would specify the response to certain actions (like affairs or deaths). I'd also like to give some intelligence to the characters, so that they can "know" certain things (someone killed someone else), share and gain knowledge verbally, and even figure things out for themselves. All these elements would greatly improve the story that could be told, and while their implementation might be for a point in the future, I think it is important to plan for them now.

Wednesday, October 3, 2012

Talking with Alex

Continuing my conversations with Alex, who is doing similar (but not identical) procedural story based research.

Today I heard back saying he had read my proposal and that it looked solid. He mentioned that if I'm looking to stress story over game (which I am), I might want to "downplay the idea of win/loss states." I think this is valid... A good story has no loss state. If the main character dies, it's still in service to the story. As Alex said, "this is about creating drama, not beating the system."

I'd asked Alex about how to proceed with scripting my A.I. systems, and he recommended Python for speed's sake, at least in the prototyping stage. I have some Python experience, but it is limited; I'll have to get back into practice if it's going to save me any time at all. I'm more comfortable in C++, but was warned that might be overkill in this situation. Finally, Alex said if I ever try graphics I might want to do it in Unity, and therefore in C#. This concerns me, as I have no C# experience, but I assume it's a problem for a later day. While graphics don't strike me as the most essential part of this project, I do feel it's crucial that I at least try to get them - a visual example of the story generator in action would be far more impressive than a text based one, and Alex agreed I should make the attempt even if it doesn't work out. I haven't asked him many questions about ADAPT just yet, or about the compatibility of our projects in general. I need to follow up on that.

It looks like step one of my project (at least according to Alex) is to start work on constructing a virtual environment populated by characters as early as possible so that I can start designing my story A.I. (really the centerpiece of the project). I'm going to find time this week to reacquaint myself with Python, at which point I'll decide if it's worth coding my environment in Python or C++ first (based on ease and comfort). Next week I'll start programming what Alex called "the sandbox," the world and characters that will interact with each other in order to create the random events the story A.I. will have to select from.

The Beginning

Apologies for the delay in starting this blog. There was some confusion about when I would be doing my senior project (the attempted implementation of an A.I.-based storytelling system), but it has been resolved. I will be implementing my senior project over the course of the year, hopefully under the guidance of Alexander Shoulson, who is currently doing research on character-based story interaction.

A quick summary of what I have done thus far:
  • Drafted a project proposal and submitted it to Norman Badler and Aline Normoyle
  • Met with Alex Shoulson to discuss his research and how my project might align with his work
  • Met with Amy Calhoun to discuss senior design time management skills
My proposal is very long, and I haven't found a way to post it to this blog in PDF format, but here is a basic summary of what I want to do and why:

While working at Microsoft Studios (formerly Microsoft Games) over the summer in their Narrative Department, I noticed that a lot of people seemed interested in dynamically generated game stories... the idea that we could have someone replay a game ten times and in that time never get the same story once. The way that was being dealt with (in my time at Microsoft) was through story trees... Every time the player makes a decision, the story branches. It's the "Choose Your Own Adventure Book" concept, and it's one that is prohibitively costly. I saw many "multiple storyline" game ideas shot down because the production cost doubles whenever you reach a split in the story tree. I became fascinated with the idea of procedurally generating a story in much the same way that one can procedurally generate levels, rooms, enemy encounters, etc... I was convinced that there was a way to leverage minuscule permutations so that, with a small number of story elements, you could have huge numbers of combinations that made up an actual story.

On Alex's suggestion, I abandoned my initial idea to create permutations of story elements, and instead chose to focus on a character-driven approach. My new idea - the one outlined in the proposal - was to create simulated characters, capable of walking around and interacting with the player and one another in slightly randomized ways that allow for simple conflicts and resolutions. The core idea was that characters would have two basic emotion meters, love and hate, that could be filled by various interactions (both direct and indirect). When a character becomes highly emotional, they act based on the level of that emotion: they might attack another character and injure or kill them, or flirt with or sleep with another character. These events would create beats with ripple effects on the story, potentially shaped by prewritten, constant relationships (friendships, familial or romantic bonds) that affect how characters react to story beats involving their loved ones.
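The two-meter emotion model could be sketched in a few lines of Python. Everything here (the `Character` class, the 0.0–1.0 meters, the 0.8 action threshold, the interaction names) is my own hypothetical illustration of the idea, not the project's actual design:

```python
# Hedged sketch of the love/hate emotion model described above.
import random

ACT_THRESHOLD = 0.8  # assumed: emotions above this trigger a story action

class Character:
    def __init__(self, name):
        self.name = name
        self.love = 0.0  # 0.0-1.0 meter, filled by friendly interactions
        self.hate = 0.0  # 0.0-1.0 meter, filled by hostile interactions

    def receive(self, interaction, amount):
        """Direct or indirect interactions fill one of the two meters."""
        if interaction == "kindness":
            self.love = min(1.0, self.love + amount)
        elif interaction == "insult":
            self.hate = min(1.0, self.hate + amount)

    def choose_action(self, target):
        """When an emotion runs high, act on it (slightly randomized)."""
        if self.hate >= ACT_THRESHOLD:
            return random.choice(["attack", "kill"]) + " " + target.name
        if self.love >= ACT_THRESHOLD:
            return random.choice(["flirt with", "sleep with"]) + " " + target.name
        return None  # not emotional enough to produce a story beat

alice, bob = Character("Alice"), Character("Bob")
alice.receive("insult", 0.9)
print(alice.choose_action(bob))  # e.g. "attack Bob" or "kill Bob"
```

The returned action strings would be the raw "events" that the story A.I. later scans for beats; calm characters return `None` and generate nothing noteworthy.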

What I believe I have to offer is the unique perspective from which I am approaching this project, which is to say as a screenwriter and storyteller first and a programmer second. My interest is not in event generation but in telling good stories, and so I devised the most important (and probably most complex) element of my proposed code... an artificial intelligence capable of recognizing story. My proposed A.I. would run through the occurring character interactions and look for anything it recognizes along standard storytelling guidelines. Which is to say, it will recognize and remember when something occurs that could be:
  • The Inciting Incident
  • Plot Point 1
  • Plot Point 2
  • Plot Point 3
  • The Conclusion
The storytelling A.I. would then suppress any new information that is irrelevant to, or contradicts, the main story (if another random murder, unrelated to the one driving the story, were about to happen, the AI would prevent it to keep the story from veering off topic). Most importantly, the AI would know what elements advance the story based on what has already happened, and would know what a good place for a conclusion would be. It's this A.I. that would distinguish my project from anything that's come before... Because I'm not interested in devising a complex, sensible story with a large number of variations. I'm interested in devising a variable story that is still good.
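As a rough illustration of how such a recognition pass might work, here is a hedged Python sketch. The event tuples, the 1-5 importance scale, and the in-order filling of beat slots are all my own simplifying assumptions; in particular, real relevance checking (deciding whether a new major event belongs to the established storyline) is the genuinely hard part, and is reduced here to dropping extra major events once the beat slots are full:

```python
# Simplified sketch of the proposed story-recognition A.I.
BEATS = ["inciting incident", "plot point 1", "plot point 2",
         "plot point 3", "conclusion"]
MAJOR = 5  # assumed maximum importance value for an event

def tell_story(events):
    """Walk the event stream, labelling major events as story beats
    and omitting everything too unimportant to narrate."""
    beats, story = [], []
    for who, what, importance in events:
        if importance >= MAJOR and len(beats) < len(BEATS):
            label = BEATS[len(beats)]
            beats.append((label, who, what))
            story.append(f"{label.upper()}: {who} {what}")
        elif importance >= MAJOR:
            continue  # unrelated major event: suppress to stay on topic
        elif importance >= 3:
            story.append(f"{who} {what}")  # relevant connective tissue
        # low-importance events are omitted entirely
    return story

events = [
    ("Alice", "insults Bob", 2),          # too minor: never printed
    ("Bob", "kills Carol", 5),            # becomes the inciting incident
    ("Alice", "discovers the body", 4),   # relevant detail, narrated
    ("Dave", "kills Erin", 5),            # fills the next beat slot
]
for line in tell_story(events):
    print(line)
```

Running this prints three lines: the labelled inciting incident, the discovery, and Dave's murder labelled as plot point 1. A real version would need the relevance model described above rather than blind slot-filling, but the shape (scan, label beats, suppress the rest) is the same.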

When I spoke to Alex he pointed me towards many articles in the field on procedural character interaction and event generation. Character interaction as a way of generating story was his suggestion, but he was intrigued by my story A.I. idea: he said he'd never really seen anything like it in the papers he's read, and that it sounded worth pursuing.

My current big question is where graphics play a role in all of this. In the response to my proposal, the following was requested of me:

1. ADAPT framework setup
2. A simple scene with agents walking around doing abstract actions (perhaps shown in text over the character's head). This scene should implement some simple character variables (sim-like: hungry, angry, patient) and some simple interactions with the world (such as "use-phone", "talk", "fight", etc). This scene need not be sophisticated.


While I would love to have a graphics-based demonstration of my intended world, most of the papers I read stuck with text-based examples of their procedural story elements in action. I worry that translating my intended idea into ADAPT might mean making concessions on the "A.I. in search of story structure" idea that really drives my interest in the project. It's something I must discuss further with Alex.