Untitled Stream of Consciousness Project: STREAM


March 20, 2007:

Created detailed mockups of the interface in Photoshop and started building a new project web site. I am pleased with the commercial appeal that it seems to be developing.


March 18, 2007:

Made some sketches of the widget interface and in the process discovered that I wanted to liken it to a TV with a remote control attached. The avatar's face is centered on the screen and is framed on the top and bottom by two bands - one that displays the current URL and the other that serves as a ticker of emotion words extracted from the web page. Each word is color-coded red or blue depending on its valence - negative or positive. Users can create and manage their own list of URL "channels", deciding which ones get processed automatically and which ones don't. The user can skip to the next or previous URL channel in the playlist by clicking on one of the two arrow buttons directly beneath the screen. I decided on a new working title: "Universal Emote", a take-off on a universal remote control.
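The red/blue coding of the ticker words could be expressed as a simple valence-to-color mapping. A hypothetical sketch (the widget was only ever sketched on paper; the normalization range, threshold, and hex values are my assumptions):

```python
def valence_color(valence):
    """Map a word's valence score to a ticker color.

    Assumes valence is normalized to [-1.0, 1.0], negative = unpleasant.
    Returns red for negative-valence words, blue for positive, per the
    color coding described in the journal entry.
    """
    if valence < 0:
        return "#CC0000"  # red: negative valence
    return "#0033CC"      # blue: positive valence

# Hypothetical ticker contents: (word, valence) pairs.
ticker = [("grief", -0.8), ("joy", 0.9), ("dread", -0.6)]
colored = [(word, valence_color(v)) for word, v in ticker]
```

The threshold at zero is the simplest possible rule; intensity could later modulate brightness or opacity instead of being discarded.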


March 15, 2007:

I've decided on an actor who has the look I am going for - a universal and untainted face, but still interesting, not a "character" type. He requested that I forward him a detailed shot list so that he can adequately prepare to perform the expressions. We will shoot in the AV room on Tuesday after break. Everything has been reserved. I briefly tested the Canon XL2 MiniDV Pro camera, and it is pretty easy to operate for my purposes. I do want to devise some kind of apparatus connected to a chair to keep his head still, though; the facial components need to stay consistently aligned when I edit the footage.


March 14, 2007:

I like the idea of a downloadable desktop widget that Stephanie proposed to me. Users can manage and customize their own list of URLs whose emotional content they want the avatar's face to display. A closeable side panel of some sort would show the emotion words from the web page currently being analyzed. I will work on developing this as my final prototype.


March 13, 2007:

With some help from Igor, I successfully parsed the emotion data (words and corresponding emotions) from LIWC into my database. Next, I will be adding arousal and valence (pleasantness) values from ANEW.
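The parsing step described above could be sketched roughly as follows. This is a hypothetical reconstruction, not the actual ColdFusion/database work done with Igor: the input line formats are invented stand-ins (the real LIWC and ANEW distributions are licensed and formatted differently), and SQLite is used here only to keep the sketch self-contained.

```python
import sqlite3

# Assumed stand-in formats:
#   LIWC-style lines:  "hate,anger"        (word, emotion category)
#   ANEW-style lines:  "hate,2.12,6.01"    (word, valence 1-9, arousal 1-9)

def build_db(liwc_lines, anew_lines, path=":memory:"):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE words (
        word TEXT PRIMARY KEY, emotion TEXT,
        valence REAL, arousal REAL)""")
    # First pass: word -> emotion designations from the LIWC-style list.
    for line in liwc_lines:
        word, emotion = line.strip().split(",")
        con.execute("INSERT OR IGNORE INTO words (word, emotion) VALUES (?, ?)",
                    (word, emotion))
    # Second pass: layer ANEW-style valence/arousal onto the matching words.
    for line in anew_lines:
        word, valence, arousal = line.strip().split(",")
        con.execute("UPDATE words SET valence = ?, arousal = ? WHERE word = ?",
                    (float(valence), float(arousal), word))
    con.commit()
    return con

con = build_db(["hate,anger", "joy,happiness"], ["hate,2.12,6.01"])
row = con.execute(
    "SELECT emotion, valence FROM words WHERE word='hate'").fetchone()
```

Words present in LIWC but absent from ANEW (like "joy" above) simply keep NULL valence/arousal, which is also how the real data gap would surface.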


March 7, 2007:

I am gradually simplifying this whole project, moving toward doing away with thought-formation altogether and focusing exclusively on mapping the raw emotional words extracted from URLs via psycholinguistic analysis directly onto the avatar's facial expressions. There are definitely practical and conceptual benefits to not dealing with thought-by-thought processing of emotions and instead having the reaction to the words be singular and cumulative. One is that the element of impact and surprise is stronger; another is that emotional assessments of text are most valid when they take large amounts of text into account at once, as opposed to a single sentence at a time. My only regret right now is that the project has lost some of its deeper philosophical meanings in the process of being distilled.


March 5, 2007:

The prototype is working, but making the grammar rules better could be an endless process. The idea of the words forming sentences creates some interesting dynamics -- more words increase the likelihood that thoughts will form, and hence the avatar thinks more rapidly -- but in terms of their effect on the face they don't serve a unique purpose.


March 3, 2007:

Continued working on prototype, enhancing the smoothness of the randomized movements of words and starting to code some grammar rules.

I now see the project standing up as an "emotional barometer" for a society of people who tend to cognitively dissociate from their emotions. The avatar processes the core emotional content mined from various blogs and externalizes it on his face 24/7. The application will check the current list of user-entered blogs or other sites for updates on a regular basis. In addition to the emotional words, I would like to have the system pull out words related to universal areas of an everyday human's life, such as work, family, and friends, and add those to the mix so that emotional thoughts can be formed around them.

I have been thinking about new possibilities for the project and am close to 100 percent sure that I will not be creating a Life View. Instead, I am entertaining the idea of a sort of "body meter" in which users see a persistent view of the avatar's body along a stretch of ground or clouds leading up to a cliff or precipice. The avatar paces closer and closer to this cliff as he gets more sad and depressed, and moves away from it as he gets happier. His distance from the edge at any time is a reflection of his level of contentment and his gait changes accordingly as he gets closer to it. Eventually he will jump off, and the system gets wiped clean of thoughts (and URLs). During a single lifetime, the system will only allow up to maybe 10 URLs at once in order to keep the number of words manageable. The oldest URL gets overwritten with the newest one added.
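The per-lifetime URL cap described above (at most 10 channels, oldest overwritten by newest, everything wiped when the avatar jumps) maps neatly onto a fixed-capacity queue. A minimal sketch, assuming Python rather than whatever the actual implementation would use, with placeholder URLs:

```python
from collections import deque

# At most 10 URL channels per "lifetime"; appending an 11th silently
# evicts the oldest. deque(maxlen=...) gives exactly this overwrite behavior.
urls = deque(maxlen=10)
for i in range(12):
    urls.append(f"http://blog{i}.example.com")
# Only the 10 newest remain; blog0 and blog1 have been overwritten.

def wipe(channels):
    """When the avatar finally jumps off the cliff, the system is
    wiped clean of thoughts and URLs."""
    channels.clear()
```

The eviction policy here is strict FIFO, matching the journal's "oldest URL gets overwritten with the newest one added."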


March 2, 2007:

The class and Stephanie like the direction that the project is going in right now. I am leaning away from including the Life View at this time. I plan to concentrate on the avatar's mind and emotions.


March 1, 2007:

Received responses from two more actors from the Drama School. I will be meeting each of the three candidates next week to determine which is best for the role.

Started a new prototype in which the words are contained in their own individual clouds, or pieces of fog, and move around more chaotically and randomly rather than in a steady horizontal stream. This allows words to collide more often and sentences to form more frequently. The sky starts out clear, and as more words get added via user-entered URLs, the fog builds and builds, making for a more congested mind. I am thinking that the opacity of each cloud could be directly related to the intensity value of its emotional word. The avatar's head is placed off to the side so that the user can focus more easily on the words colliding and forming thoughts.

I decided that I will need to incorporate some simple grammar rules after all to determine what thoughts will form rather than concoct 5000 different preset thoughts. I believe that the grammar will be less annoying, and potentially more fun and awe-inspiring, in an automated scenario as opposed to a game in which users are clicking words.


February 28, 2007:

Gained access to LIWC data through Sven Travis' contact with a former Parsons student who used it for her own thesis. The word-emotion designations should be fairly simple to parse into my database. However, I need to obtain intensity/valence ratings for each of these words as well. The ANEW study has some of these, and I am looking into DAL, Dictionary of Affect in Language, as another supplement. DAL was brought to my attention by another contact of Sven's, at Cornell.

I put out an email ad calling for an actor to play the role of my "universal man" and have already received one response from an interested student in the New School Drama department. He sent me a resume and headshot as I requested, and he looks like a good candidate for the role.


February 27, 2007:

What I discussed with Igor today:

Facial expressions need some more enhancements and modifications; avatar's mouth opening and closing repeatedly seems too unnatural. I will probably need to incorporate some branching of video clips when I reshoot.

Igor threw out a random idea that he found amusing: having users physically grab words out of a bucket and throw them at the avatar's head on screen or literally insert them into his head. I thought this was a funny idea, but I don't feel inclined to take the project in this direction, which would involve physical computing.

I need to think about exactly what statement I want to make with the clouds as an interface element. It carries certain connotations that I may or may not want. What other kinds of visual enhancers for the words might I use? Something more chaotic perhaps? How can I add more poetry to the movement of words in the stream?

Igor liked my idea of users contributing URLs (of blogs, or perhaps other web sites too) as their main or only mode of interaction. The web pages would be run through a psycholinguistic checker to extract emotional words, and these emotional words would populate the stream along with miscellaneous words with which they would combine to form sentences.

We discussed how to connect to a web page URL with cfhttp and analyze the text, which could theoretically be quite simple in my case since there aren't any particular "sections" of a page that need to be parsed out.
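The fetch-and-analyze step we discussed could be sketched as follows. This is a hypothetical Python stand-in for the cfhttp (ColdFusion) approach, and the tiny emotion lexicon here is an invented placeholder for the real LIWC word list:

```python
import re

# Placeholder lexicon standing in for the LIWC emotion dictionary.
EMOTION_WORDS = {"hate": "anger", "joy": "happiness", "fear": "anxiety"}

def strip_tags(html):
    """Crude tag stripper - adequate here because, as noted above, no
    particular sections of the page need to be parsed out."""
    return re.sub(r"<[^>]+>", " ", html)

def extract_emotion_words(html):
    """Tokenize the page text and keep only words found in the lexicon,
    paired with their emotion designations."""
    tokens = re.findall(r"[a-z']+", strip_tags(html).lower())
    return [(t, EMOTION_WORDS[t]) for t in tokens if t in EMOTION_WORDS]

page = "<html><body><p>I feel joy, then fear, then joy again.</p></body></html>"
found = extract_emotion_words(page)
# -> [('joy', 'happiness'), ('fear', 'anxiety'), ('joy', 'happiness')]
```

Fetching the page itself (cfhttp's job) would be a separate call, e.g. `urllib.request.urlopen` in Python; it is omitted so the sketch stays offline and self-contained.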

The idea of the Life View is being put on hold while I concentrate on getting a solid prototype of the Mind View by midterm. I see some form of Life View as an important aspect of the project, but it is possible that I will end up doing without it.


February 25, 2007:

Worked on enhancing facial expressions, adding dynamic color tinting and default eye and mouth movements. Changed dragging to clicking for easier testing. Began building and testing automated thought generation from words colliding with one another. Many variables need to be changed in order to make this process happen more quickly and fluidly.


February 24, 2007:

I would like to have all of the emotional words in the stream come from a set of blogs, particularly "stream-of-consciousness" blogs. It should not be too difficult to run a LIWC dictionary keyword-check on the content of a web page once I connect to one. My goal is to make explicit to users that the words are in fact coming dynamically out of blogs. This could be done in a preloader that displays the URL of each blog as it is being read. I am also thinking it would be nice to give users the opportunity to add their own URLs to a growing list that the system would incorporate.


February 22, 2007:

Igor, my advisor, gave me some very helpful advice on the project, recognizing its potential and offering suggestions for bringing that potential out. To make the face appear more alive, I can give it some random eye and mouth movements while he is not processing any thoughts. Maybe once in a while it can display a surprising look or twitch to heighten the entertainment value. We both agreed that it would be very annoying and distracting if the avatar were constantly moving around. We also talked about tweaking the variables so that changes in the avatar's expression don't seem to just go on and off like he is being poked and so that users can experience the enjoyable aspects sooner. Though he thought I did a pretty decent job performing the facial expressions, he said that I could try to find an actor with a more interesting face who could really captivate people's attention.

In terms of the words in the stream, we agreed there should be some discernible emotional reaction for every word, including words like "food". Igor also suggested perhaps giving the text itself visual properties to make it more attractive to those who don't care much for text. We discussed the design of the interface, and how the clouds and words around the head could have some simulated three-dimensional depth. I like the idea of the avatar's head being literally immersed in the clouds and sticking out above them.

Most importantly, I addressed the fact that I wanted this to be more of an automated spectacle rather than a very user-centric experience. I believe a major source of my struggles and frustration has come from my trying to force it to be a game in which users are actively engaged. Igor liked my idea of the words randomly colliding and chaining together on their own based on a set of prescribed thoughts in the database. We speculated on some possibilities for having the system be connected to some real world data sources that influence the avatar's mental predispositions. Igor suggested that fluctuations in the stock market and weather might cause certain words, positive or negative, to be present in the stream and determine the likelihood of certain thoughts forming over others. News headlines would be more difficult to analyze programmatically but are another possibility. I could offer users a link to "contribute" their own thoughts using the given set of words and phrases as in magnetic poetry, but user interaction wouldn't be essential.

After thinking some more about it on my own, I would like to use the LIWC dictionary to scan blog content and check for emotional words, and those words would be automatically fed into the stream. Perhaps I might add the stock market as an additional variable that can enter the stream at a certain time, such as when the avatar has a job. As a new title, I am thinking of something movie-like such as "Mind Hack" or "Mind Filler", with the subtitle or slogan: "Your Thoughts Are Not Your Own." Another possibility is "Netstream of Consciousness", but that might sound too odd or confusing to people. "Life Sentence" is also still a consideration.


February 21, 2007:

After shooting the video components five times and in different ways, I finally brought this prototype to a state of presentability and completion. One problem is that the actual complexity and subtlety of the programmed facial animations are not shining forth. One has to use it for a while and drag words at a rather furious pace in order to see some of the more pronounced and somewhat unpredictable expressions that can emerge.

There is also some confusion in the class about my goals aesthetically. People are under the impression that I want the avatar to appear very naturalistic and not look like a puppet. That is not really the case. I certainly want it to appear more realistic and less robotic than Tim Hawkinson's robotic experiments with facial expressions, and with a bit of tweaking and some enhancements, I believe that will definitely be realized. However, I have always intended the avatar to be a bit surreal. Hence the head's detachment from a body that moves about its life like a "suburban drone" on automatic. Users are collectively playing the role of the avatar's mind rather than controlling his body, and that is the fundamental difference between this avatar and others. Instead of the head JUST being an empty vessel that obeys the user's direct commands, users are filling the empty vessel with thoughts. And there is a certain haunting and self-reflexive quality that goes along with this. From my final paper last semester: "...there is a slightly spooky feeling in putting one's own thoughts into the husk of an individual that is known to be existent in the real world but whom has given up his identity in a virtual world." This is a major reason why I chose to use video to portray the avatar. It strengthens this effect more than a cartoon would.

It is not that this avatar has a personality and mindset all its own like the avatars on Oddcast, which acknowledge the user as an other. Personality is an individual phenomenon, not a universal one. Emotions, however, are universal, and this is a transpersonal avatar.


February 17, 2007:

Worked on new prototype that layers the avatar's head over the cloudstream. The user drags words directly into the avatar's head as opposed to forming complete thoughts in a mind bubble. The dragging will probably be a temporary mechanic for testing purposes, but seeing the words go directly into the head is a great mode of emphasis that I plan to keep. A good recommendation from Stephanie.


February 13, 2007:

Sven Travis directed me to an online linguistic resource/dictionary builder called LIWC that, among other things, detects emotional affect in text. The data is somewhat incomplete for my purposes, but it is a viable option. Perhaps I may be able to expand its vocabulary somehow.


February 12, 2007:

Continued coding facial expressions. I'm happy with the smooth motion and realism, but wish there was a way to have a little less linearity to each facial component.


February 11, 2007:

Working on creating avatar's facial expressions from three dynamic components: eyebrows/forehead, eyes/eyelids, and mouth/nose/lower face. With feathering and proper positioning, the eyes and eyebrows blend together quite seamlessly, though some surreal anatomical impossibilities can result. This may or may not be a detriment to the project.

Wayne Chase emailed and said that his reference tools will not be available for another couple years or more.


February 10, 2007:

Thought about an alternate design in the event that a connotationary proves inaccessible and the ANEW data (positive/negative valence/intensity, pleasure, arousal, and dominance) cannot be viably translated into actual emotions. I may have to catalogue a core set of words on my own and have users click on them to activate them in the cloudstream and in the avatar's mind bubble. I would do away with whole-sentence formation and have them trigger emotionally inflected words one at a time. In addition, I could implement a user contribution system outside of the actual game itself, in which users can add their own words and phrases and assign them corresponding emotions. Of course, this would not be ideal. I also thought about using images instead, or maybe mixing images in with the words.

Shot video sequences of my own face for the avatar's closeup. Began working with the footage in After Effects.


February 9, 2007:

Discovered connotative.com, home of the developers of the world's first emotional/connotative language references, including a connotative dictionary. The suite of tools and references they are developing is being marketed as the biggest breakthrough in this area since Roget's Thesaurus in 1852. Wayne Chase, a pioneering psychologist, leads this R&D group in Vancouver, British Columbia. He spent 30 years overcoming the technical challenges involved in statistically linking emotions to words and phrases in the English language. In the late 1990s, he patented what he called the Emotional Meaning and Impact Analyst (EMMI), a software system designed to evaluate a piece of writing or a speech according to its emotional effect on the average reader or listener.

Chase's revolutionary "connotationary", which provides positive and negative emotional ratings and other connotative information on just about every English word, would be EXTREMELY useful to my project. I would like to parse the entire dictionary into my database. Unfortunately, it seems that they may not release these reference tools commercially for quite some time. I emailed him to inquire further. Perhaps I can obtain an advance copy somehow.