Second blog update of the day.
I've been thinking about how to best implement my storytelling AI, and I've thought of two separate possibilities:
A) Social Network Structure
Based on the "six degrees of separation", this version of the storytelling A.I. selects a main character and focuses on their "social network". The main character selection process could be random, but most likely it will be based somehow on character emotional levels. This will require some level of trial and error... I'm not sure yet whether a good protagonist is someone who starts off emotional, or someone who starts off calm (because it leaves them with more room for emotional change). My experience as a writer leads me to believe it is the latter. Preferably, actually, a complex storytelling A.I. would allow the system to run for a while before the story starts, with a number of "calm" potential protagonists. Then, the first time a calm character experiences a substantial emotional change, they become the main character of the story. If we do that, though, we might need to implement a "live delay," so that we remember a few steps back for each character until the main character is selected. Otherwise, we might lose some build up, starting with our main character getting mad and missing why. For now, I think the best way to do this would be to pick a main character based on "calm initially," so that they have the most room for growth, and so we don't miss out on story buildup. Many, many stories start with a character wandering around aimless... I'd hate to cut that out by jumping right to interesting interactions.
Then, we show all story elements involving the main character, all characters that have interacted with the main character, and all characters that have interacted with those characters, up to a certain degree n. That way, in a world of thousands of characters, we only see a handful of them (maybe only out to the third degree). The story continues until the main character returns to equilibrium (comes full circle), having learned their lesson... or until they die and cease to exist.
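A minimal sketch of that degree-n filter, assuming we keep a record of who has interacted with whom (names here are placeholders): a breadth-first walk out from the main character collects everyone within n degrees, and only events involving that set get shown.

using System.Collections.Generic;

// Sketch: given a record of who has interacted with whom, collect every character
// within maxDegree degrees of the main character. Only events involving this set
// would be narrated.
static class SocialNetwork
{
    public static HashSet<string> WithinDegrees(
        string mainCharacter,
        Dictionary<string, HashSet<string>> interactions, // character -> everyone they've interacted with
        int maxDegree)
    {
        var visible = new HashSet<string> { mainCharacter };
        var frontier = new Queue<(string name, int degree)>();
        frontier.Enqueue((mainCharacter, 0));

        while (frontier.Count > 0)
        {
            var (name, degree) = frontier.Dequeue();
            if (degree == maxDegree) continue;
            if (!interactions.TryGetValue(name, out var partners)) continue;
            foreach (var partner in partners)
                if (visible.Add(partner))                 // newly discovered character
                    frontier.Enqueue((partner, degree + 1));
        }
        return visible;
    }
}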
B) Weighting Important Story Actions
This time we select what to focus on based not on characters but on their actions. If a character talks with someone else, we don't need to hear about it. If they flirt, we might want to hear about it. If they kill someone, we definitely want to know. For this to work, we need to assign actions "importance levels" that correspond to beats in story structure. On a 1-10 scale, a level 10 action is always an important story beat: any time it appears, it becomes a plot point, strong enough to open or close a story. A level 5 beat is important enough to hear about, but doesn't itself drive the story forward (the AI doesn't treat it as moving the story into its next act). If a level 1 beat occurs, it is suppressed; no one needs to know about such low-level occurrences. Different importance levels allow for nuance in the storytelling. If a level 7 element happens at the same time as a level 10, the level 7 element might be suppressed or undone so that the level 10 element becomes the plot point and drives the story into the next act. If both drove the story forward, the "act" would have lasted less than one turn.
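Here's a rough sketch of what that per-turn filtering could look like in C#. The threshold values are invented placeholders, and the class names are just for illustration:

using System.Collections.Generic;
using System.Linq;

// Sketch with made-up numbers: each simulated action carries an importance level
// from 1 to 10. Per turn, anything below a "tell the reader" threshold is
// suppressed, and if a 10 occurs it wins the turn as the plot point, crowding
// out any competing mid-level beats.
record StoryEvent(string Description, int Importance);

static class BeatFilter
{
    const int TellThreshold = 5;   // assumed: 5+ is worth narrating
    const int PlotPointLevel = 10; // assumed: 10 always opens/closes an act

    public static IEnumerable<StoryEvent> Narrate(List<StoryEvent> turn)
    {
        var plotPoint = turn.FirstOrDefault(e => e.Importance >= PlotPointLevel);
        if (plotPoint != null)
            return new[] { plotPoint }; // a level-10 beat is the only thing told this turn
        return turn.Where(e => e.Importance >= TellThreshold);
    }
}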
C) Combination Weighting-Social Structure System
Thinking about it, I realize that what we probably need is a system combining these two trains of thought. Weighting important actions is probably the only way for the storytelling AI to know when to create the beginning, middle, and end, but those terms mean nothing unless the actions involve a singular character (or group of characters) that we care about. I think the final system needs to weight important story actions, and use those to determine beginning, middle, and end points for the "A" plot (anything involving the main character directly) and any "B" plots (stories featuring second-degree characters with beginnings, middles, and ends, all of which are second priority to the A plot).
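Roughly, the combined filter might look something like this (the threshold and all names are placeholders; importance is just an integer as in the previous sketch):

using System.Collections.Generic;
using System.Linq;

// Sketch of the combined idea: an event only becomes a candidate beat if it clears
// the importance threshold, and it is filed under the A plot when the main character
// is directly involved, or under a B plot when it only involves someone in the
// visible social circle. Everything else is ignored.
enum Plot { A, B, Ignored }

static class PlotRouter
{
    public static Plot Route(int importance, IReadOnlyCollection<string> participants,
                             string mainCharacter, ISet<string> socialCircle, int threshold = 5)
    {
        if (importance < threshold) return Plot.Ignored;
        if (participants.Contains(mainCharacter)) return Plot.A;
        if (participants.Any(socialCircle.Contains)) return Plot.B;
        return Plot.Ignored;
    }
}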
Now that I have a plan mapped out to implement the Storytelling A.I., I think I might want to actually implement User Interactivity first. It sounds like more fun, and it will let the user be the main character when the storytelling A.I. gets implemented.
Monday, October 29, 2012
Checkpoint and Goal Updates
I met with Norm and Aline last week to give an update on what I have so far. They seemed pleased with my text-based character simulations so far, and gave me a number of ways I could expand my work outward from here. Here are some of the things they said I could (or should) try to implement over the coming weeks:
- User interaction. This is where it being a "game" comes in. Conceptually, this is pretty simple: add a character stand-in for the player (for added fun, let them customize their own name and gender), and each turn ask them what they want to do instead of generating an action for them probabilistically. Emotions for a player character would have no effect on their actions (since the player decides those), but the actions they perform would affect the emotions of other characters.
- Smart Camera. Potentially based on Dan Markowitz's paper about in world camera motions, this would use "camera as narrative guide," or storytelling AI as cinematographer. The idea being, much like in the text based version of my program the AI chooses what lines of text to show and which to omit, in the graphics based version the AI chooses what scenes to train the camera on and which ones to let slip. For potential added merit, I could try intercuts - cutting back and forth between two "important" scenes - but this already sounds incredibly difficult. This requires me to port my code to Unity and graphics first.
- Extendable Actions/Emotions. This one actually sounds the most challenging to me, and I'm not sure if I'll be able to do it the way my code is currently structured. The idea is really interesting... Let anyone create a text file that defines actions and their ramifications, and then that action and those ramifications get folded into the interactions and, by extension, the storytelling program. The problem with this is that actions and emotions don't all function the same way. "Fighting" has a completely different effect on the world than any "romance" style interaction, since "fighting" has a winner and a loser, and can result in a character's death. I'm also not sure how to implement this with the story AI, since the AI needs to recognize story beats. (A rough sketch of what such a definition file might look like follows this list.)
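To show what I mean by a text-file definition, here's one possible (entirely made-up) format and loader in C#. The format, class names, and emotion keys are all placeholders, and special-case actions like fighting would still need hand-written logic:

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical file format: each line is "name | emotion:delta, emotion:delta", e.g.
//   compliment | anger:-0.1, lust:+0.1
//   insult     | anger:+0.3
// The loader turns each line into an action whose ramifications are emotion deltas
// applied to the target character.
class CustomAction
{
    public string Name = "";
    public Dictionary<string, double> EmotionDeltas = new();
}

static class ActionLoader
{
    public static List<CustomAction> Load(string path)
    {
        var actions = new List<CustomAction>();
        foreach (var line in File.ReadAllLines(path))
        {
            if (string.IsNullOrWhiteSpace(line)) continue;
            var parts = line.Split('|');
            var action = new CustomAction { Name = parts[0].Trim() };
            foreach (var pair in parts[1].Split(','))
            {
                var kv = pair.Split(':');
                action.EmotionDeltas[kv[0].Trim()] = double.Parse(kv[1]); // e.g. "+0.1"
            }
            actions.Add(action);
        }
        return actions;
    }
}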
On top of this, I've still got my own expansions to my system that I need/want to implement:
- The storytelling AI. This is the entire basis of the project, and is therefore my first priority.
- Finding a way to show off the software through Unity and simplified ADAPT.
- Fleshing out the interaction simulation (in particular, making fighting and character emotions more robust).
This is the order in which I'm choosing to prioritize moving forward:
- Storytelling AI. I still need to figure out an algorithm/technique for getting a story "recognized" and told optimally. I still haven't found any papers that attempt this, which is why this is so exciting to me. I have to develop this technique on my own.
- User Interaction. All things considered this should be simple enough, and will make showing off the code much more interesting.
- Showing off my code in Unity/ADAPT.
- Fleshing out the interaction simulation
- "Smart Camera"
- Extendable Actions/Emotions
More later!
Tuesday, October 23, 2012
Early Results!
Some early results! I've implemented anger, lust, talking, flirting, loving and hateful actions. I haven't done "fighting" quite yet, as it involves further complications in terms of who defeats who and how. Included below are early results as I was coding them. Which is to say, the top ones are the oldest results, and the bottom ones are the most recent.
Funniest bug: "Romney compliments Romney. Romney ends the conversation."
Pretty boring interactions at first. Like begets like, so flirting just begets more flirting; that told me the behaviors needed more levels of nuance.
Next, I made it so that characters can start conversations for themselves based on boredom. As they walk around, they calm down slowly (losing anger and lust) while their boredom increases, until they are forced to start a conversation with someone new. Conversations can only be held between two people at a time. Lastly, any number of characters can now be in a scene at once.
I definitely want to make the world and character interactions more robust (fighting, killing, multiple rooms, items, specific relationships, etc.), but right now I theoretically have enough to start my storytelling A.I. if I needed to. I could create 50 randomly generated characters in one room, run for an unlimited number of steps (the runs above have 10 each), and design my AI to find a main character, a beginning, a middle, and an end.
Monday, October 15, 2012
The Shift to C#, The Start of Programming
I met with Alex late last week to talk about ADAPT and possible graphical applications for my project. What resulted was a really difficult discussion concerning my project and potential graphical representations of it. We both agreed that graphics were not the primary concern of my program, but that the presentation would benefit greatly from some sort of graphics-based representation. After a discussion of ADAPT, its benefits and its limitations, he admitted that he thought ADAPT might be overkill for something like my story-based project. The ADAPT framework lets characters walk to a directed point - useful for my project, though I'd prefer a way to have them wander randomly - and reach for things or turn their heads. I can't customize the graphics, though, and the characters don't have animations corresponding to all the things I would need them to do ("reaching" and "looking" don't really drive story).
Alex recommended I play around with Unity and a more basic, older version of ADAPT, as well as some 2D graphical representation techniques, to see what I'm most comfortable with. Ultimately, he recommended the older version of ADAPT that only includes people walking, and briefly taught me how to display text over people's heads so that I can show their names and the actions they are performing (e.g. Beth compliments Jack, Jack flirts with Beth). While this isn't the most elegant way to show how the storytelling system works, it might be the best way to show off its possible applications.
Since ultimately Alex thought I should use Unity and ADAPT for the graphical component of my project, I asked again whether starting in Python was really the best idea (particularly because my Python skills are pretty rusty). Thinking it over, Alex decided I should ultimately just start in C#, since Unity uses C# and it would make my code much easier to eventually integrate into the system.
Unfortunately, I've never programmed in C# before. I spent the weekend going over the language and discovered it's not all that different from Java (I'd say it's closer to Java than to C++, which is actually my preferred language). I've managed to create characters that begin with randomly generated names, genders, and emotions, though these emotions can be hard-coded to allow for a "writer" (Pam and Jim love each other, but Dwight hates Jim... what happens?). I'm starting to implement interactions in the world class, starting with conversing. Conversing will be hard, since it's an action that keeps going until it is broken out of, and since it both affects and is affected by the moods of the participants. Also, conversations need to be nuanced... Flirtation and complimenting need to be different actions with different intentions but perhaps similar results. One level of anger might cause someone to storm off, while another might cause them to punch the other person... what causes those differences? I've actually written the base code for people conversing, but I'm tweaking random conversation variables to make the emotional reactions more "realistic". Next I'll implement just a handful of additional interactions - one for anger (fighting?) and one for love (kissing?) - and then I'll work on an engine that steps through "turns" of character interactions.
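For the curious, this is roughly the shape of the character class I'm describing. The field and class names here are illustrative stand-ins rather than my actual code: random name, gender, and emotion levels, with optional hard-coded values so a "writer" can set up a scenario.

using System;

// Sketch of a character with randomized name, gender, and emotions.
// Pass explicit values to "write" a scenario instead of randomizing.
class Character
{
    static readonly Random Rng = new Random();
    static readonly string[] Names = { "Pam", "Jim", "Dwight", "Beth", "Jack" };

    public string Name;
    public bool IsFemale;
    public double Anger;   // 0..1
    public double Lust;    // 0..1
    public double Boredom; // 0..1

    public Character(string name = null, double? anger = null, double? lust = null)
    {
        Name = name ?? Names[Rng.Next(Names.Length)];
        IsFemale = Rng.NextDouble() < 0.5;
        Anger = anger ?? Rng.NextDouble(); // hard-code these to set up "Pam loves Jim" situations
        Lust = lust ?? Rng.NextDouble();
        Boredom = 0.0;
    }
}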
Tuesday, October 9, 2012
Getting Started
I've put into motion the beginnings of my project, but I'm still mainly in the design phase.
I've been hashing out pages and pages of design documentation for potential character interactions and behavior controls. As a reminder, my project creates a "primordial stew" of simulated character behavior and actions, and then the storytelling AI recognizes story elements and suppresses any interactions that don't further the story (without actively controlling where the story goes). After talking with Norm and Alex, I'm going to start off by creating the basic, ungoverned character interactions in Python and then experiment with my storytelling AI in text based format before finally attempting to apply this system to graphics.
I've downloaded Python tools and I'm getting reacquainted with the language (it's been several years since I last used it), and so far things are going well. I'm going to start programming "The Sandbox" (what I'm calling the semi-randomized character interactions) later this week based on the designs I've laid out. A brief description follows:
For now my characters will be driven by three emotional levels - boredom, anger, and lust. These states will be values between 0 and 1, and will affect the probabilities of certain actions occurring (though there will always be an element of chance).
- Boredom controls when a character initiates or leaves a conversation or a room (I'd like the sandbox to have a number of rooms, though this might have to wait until after the demonstration next week). Boredom rises as time goes on and is reduced by performing actions and by changes in the character's emotional state.
- Anger makes it more likely for a character to be mean to another (more on "verbal" interactions later) or to attack another (the intensity and type of the attack change based on the level of anger). Anger is increased by unpleasant verbal interactions or having romantic advances spurned, and abates slowly over time.
- Lust controls a character's romantic interactions with another, and - like anger - these interactions can differ based on the intensity of the emotion (from flirting to sleeping together). For now, lust will be increased by having pleasant verbal interactions with a member of the opposite sex.
Eventually I would like to replace "anger" and "lust" with "hate" and "love", emotions that refer specifically to how characters feel about one another (stored in arrays). I hope to implement these more specific emotions by next week, but I need to get the basics set in stone first. I have notes on how to implement relationships, affairs, and jealousy once the transition to "hate" and "love" has been completed. I believe this will be hugely important to do before I move on to the storytelling AI... relationships tend to be at the heart of most stories.
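Here's a very rough sketch of what "emotions as probabilities" could look like in code. Every action name and weight below is an invented placeholder, just to show the weighted-roll idea:

using System;

// Each turn a character rolls against weights shaped by their 0-1 emotion levels,
// so an angry character is more likely to insult or attack, a lustful one to flirt,
// and a bored one to wander off. The small constants keep every option possible.
static class ActionPicker
{
    static readonly Random Rng = new Random();

    public static string Choose(double anger, double lust, double boredom)
    {
        var options = new (string action, double weight)[]
        {
            ("insult",    anger + 0.05),
            ("attack",    Math.Max(0, anger - 0.6)), // only likely at high anger
            ("flirt",     lust + 0.05),
            ("leave",     boredom + 0.05),
            ("idle chat", 0.2),
        };

        // Weighted random selection over the options above.
        double total = 0;
        foreach (var o in options) total += o.weight;
        double roll = Rng.NextDouble() * total;
        foreach (var o in options)
        {
            roll -= o.weight;
            if (roll <= 0) return o.action;
        }
        return "idle chat";
    }
}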
It's important that when characters engage in a conversation, the conversation isn't just "talk." They must engage one another in a way that reflects their emotional states and therefore affects the emotional state of the other. This way emotions change, get heated, and drive the characters into action. These verbal interactions, like actions, will be affected by emotions. Characters can flirt, yell at each other, insult each other, compliment each other, or just make idle conversation. When a character begins a conversation, the interaction will keep going (again, driven by emotion with a randomized component) until the conversation is broken by boredom or by an action (storming off, an attack, a kiss, etc.).
Finally, characters must be able to die. If one character kills another, the dead character can no longer interact with the others, but the effects of its death must be felt. I might at some point add a sadness or fear emotion that would be triggered in the killer, but with just "love" and "hate" implemented, the way characters treat each other could already be greatly affected by the death of one (anger or love grows towards the killer based on how a character feels about the victim).
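As a sketch of that ripple effect - assuming the eventual per-character "love" and "hate" values, stored here in dictionaries for readability rather than arrays - each survivor's feelings toward the killer could shift in proportion to how they felt about the victim. The multipliers are invented:

using System;
using System.Collections.Generic;

// Assumes love[a][b] and hate[a][b] hold how much character a loves/hates character b,
// with the same characters as keys in both outer dictionaries.
static class DeathEffects
{
    public static void ApplyDeath(
        string killer, string victim,
        Dictionary<string, Dictionary<string, double>> love,
        Dictionary<string, Dictionary<string, double>> hate)
    {
        foreach (var witness in love.Keys)
        {
            if (witness == victim) continue;
            double lovedVictim = love[witness].GetValueOrDefault(victim);
            double hatedVictim = hate[witness].GetValueOrDefault(victim);

            // Loving the victim breeds hate for the killer; hating the victim breeds (grim) warmth.
            hate[witness][killer] = Math.Min(1.0, hate[witness].GetValueOrDefault(killer) + 0.5 * lovedVictim);
            love[witness][killer] = Math.Min(1.0, love[witness].GetValueOrDefault(killer) + 0.3 * hatedVictim);
        }
    }
}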
By next week, I want to be able to drop a bunch of characters into an environment, give them a few turns in which they run free, and have them interact with each other in partially randomized but emotionally driven ways. At that point, I will be able to start writing my storytelling AI. Eventually, I want to implement more emotions and story elements... Items would be greatly helpful, though not necessarily important at this early stage. Existing relationships (either randomized or pre-set by a "writer") would also be interesting, as they would shape the responses to certain actions (like affairs or deaths). I'd also like to give some intelligence to the characters, so that they can "know" certain things (that someone killed someone else), share and gain knowledge verbally, and even figure things out for themselves. All these elements would greatly improve the stories that could be told, and while their implementation might come later, I think it is important to plan for them now.
Wednesday, October 3, 2012
Talking with Alex
I'm continuing my conversations with Alex, who is doing similar (but not identical) procedural, story-based research.
Today I heard back saying he had read my proposal and that it looked solid. He mentioned that if I'm looking to stress story over game (which I am), I might want to "downplay the idea of win/loss states." I think this is valid... A good story has no loss state. If the main character dies, it's still in service to the story. As Alex said, "this is about creating drama, not beating the system."
I'd asked Alex about how to proceed with scripting my A.I. systems, and he recommended Python for speed's sake, at least in the prototyping stage. I have some Python experience, but it is limited. I'll have to get back into practice if it's going to save me any time at all. I'm more comfortable in C++, but was warned that might be overkill in this situation. Finally, Alex said that if I ever try graphics I might want to do it in Unity, and therefore in C#. This concerns me as I have no C# experience, but I assume it's a problem for a later day. While graphics don't strike me as the most essential part of this project, I do feel like it's crucial that I at least try to get them: a visual example of the story generator in action would be far more impressive than a text-based one. Alex also said I should at least try for graphics, even if it doesn't work out. I haven't asked him many questions about ADAPT just yet, or about the compatibility of our projects in general. I need to follow up on that.
It looks like step one of my project (at least according to Alex) is to start work on constructing a virtual environment populated by characters as early as possible so that I can start designing my story A.I. (really the centerpiece of the project). I'm going to find time this week to reacquaint myself with Python, at which point I'll decide if it's worth coding my environment in Python or C++ first (based on ease and comfort). Next week I'll start programming what Alex called "the sandbox," the world and characters that will interact with each other in order to create the random events the story A.I. will have to select from.
The Beginning
Apologies for the delay in starting this blog. There was some confusion about when I would be doing my senior project (the attempted implementation of an A.I.-based storytelling system), but it has been resolved. I will be implementing my senior project over the course of the year, hopefully under the guidance of Alexander Shoulson, who is currently doing character-based story interaction research.
A quick summary of what I have done thus far:
- Drafted a project proposal and submitted it to Norman Badler and Aline Normoyle
- Met with Alex Shoulson to discuss his research and how my project might align with his work
- Met with Amy Calhoun to discuss senior design time management skills
While working at Microsoft Studios (formerly Microsoft Games) over the summer in their Narrative Department, I noticed that a lot of people seemed interested in dynamically generated game stories... the idea that someone could replay a game ten times and never get the same story twice. The way that was being dealt with (during my time at Microsoft) was through story trees... every time the player makes a decision, the story branches. It's the "Choose Your Own Adventure" book concept, and it's one that is prohibitively costly. I saw many "multiple storyline" game ideas shot down because production doubles whenever you reach a split in the story tree. I became fascinated with the idea of procedurally generating a story in much the same way that one can procedurally generate levels, rooms, enemy encounters, etc... I was convinced that there was a way to maximize minuscule permutations so that, with a small number of story elements, you could have a huge number of combinations that made up an actual story.
On Alex's suggestion, I abandoned my initial idea of creating permutations of story elements, and instead chose to focus on a character-driven approach. My new idea - the one outlined in the proposal - was to create simulated characters capable of walking around and interacting with the player and one another in slightly randomized ways that allow for simple conflicts and resolutions. The core idea was that characters have two basic emotions, love and hate, which can be filled by various interactions (both direct and indirect). When a character gets really emotional, they act based on the level of that emotion: they might attack another character and injure or kill them, or flirt with or sleep with another character. These events would create beats that would have ripple effects on the story, potentially shaped by prewritten, constant relationships (friendships, familial or romantic relationships that affect how characters react to story beats involving their loved ones).
What I believe I have to offer is the unique perspective from which I am approaching this project, which is to say as a screenwriter and storyteller first and a programmer second. My interest is not in event generation but in telling good stories, and so I devised the most important (and probably most complex) element of my proposed code... an artificial intelligence capable of recognizing story. My proposed A.I. would run through the character interactions as they occur and look for anything it recognizes from standard storytelling guidelines. Which is to say, it will recognize and remember when something occurs that could be:
- The Inciting Incident
- Plot Point 1
- Plot Point 2
- Plot Point 3
- The Conclusion
When I spoke to Alex he pointed me towards many articles in the field on procedural character interaction and event generation. Character interaction as a way of generating story was his suggestion, but he was intrigued by my story A.I. idea; he said he'd never really seen anything like it in any of the papers he'd read, and that it sounded like it would be worth pursuing.
My current big question is where graphics play a role in all of this. In the response to my proposal, the following was requested of me:
1. ADAPT framework setup
2. A simple scene with agents walking around doing abstract actions (perhaps shown in text over the character's head). This scene should implement some simple character variables (sim-like: hungry, angry, patient) and some simple interactions with the world (such as "use-phone", "talk", "fight", etc). This scene need not be sophisticated.
While I would love to have a graphics based demonstration of my intended world, most of the papers I read stuck with text based examples of their procedural story elements in action. I worry that translating my intended idea into ADAPT might mean I have to make concessions on the "A.I. in search of story structure" idea that really drives my interest in the project. It's something I must discuss further with Alex.