asli sevinc

telecommunicating


THE YAWN: Final version for ICM, with many things to improve in the future

December 23, 2008 By: Asli Category: Computational Media

Now that we’re done with our ICM class, I should wrap up what I’ve achieved with THE YAWN project before I make more progress on it. I say before I make more progress, because I definitely want to keep working on the project until the program is in nearly excellent condition, even if it doesn’t end up written in Processing. Our ICM research assistant Jeremy likes my project and says he’ll be happy to help should I choose to work on it more next semester, and that’s encouraging!

I showed the final version of the project during our last class, but I didn’t have much time to introduce it and talk about it in detail. That’s mostly my fault: partly because I was a little nervous, and partly because I didn’t take the initiative to pace my presentation. I wish I’d had more time, and/or been more organized with the time I had.

So, the final version of my program can do the following (a rough sketch of the detect-and-record loop follows the list):

1. Show a prerecorded yawn video in a loop, in a separate Processing sketch.

2. Detect a yawn (not so accurately, though).

3. When it detects a yawn, start recording.

4. Record for 8 seconds and save the result as a new movie.

5. Save the movies into the data folder of the first Processing sketch, the one playing videos in a loop.

6. One caveat: the viewer needs to wear a ridiculous apparatus made of whiteboard to blank out the background.
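
Since I didn’t get to walk through the code in class, here is a minimal sketch of how those pieces fit together. The yawn detection itself is stubbed out; yawnDetected() and the file-naming scheme are placeholders for illustration, not my actual code:

import processing.video.*;

Capture cam;
MovieMaker mm;
boolean recording = false;
int recordStart;    // millis() when recording began
int clipCount = 0;  // used to name each saved movie

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height, 30);
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);

    // The moment a yawn is seen, open a new QuickTime file
    if (!recording && yawnDetected()) {
      mm = new MovieMaker(this, width, height,
                          "yawn" + clipCount + ".mov",
                          30, MovieMaker.ANIMATION, MovieMaker.HIGH);
      recording = true;
      recordStart = millis();
    }

    if (recording) {
      mm.addFrame();  // grabs the current display window
      if (millis() - recordStart > 8000) {  // 8 seconds are up
        mm.finish();
        recording = false;
        clipCount++;
      }
    }
  }
}

// Placeholder: the real test uses blob detection on the dark
// open-mouth region (see the blob detection post further down)
boolean yawnDetected() {
  return false;
}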

After I presented my project in class, I saw some problems that I wanted to fix right away. Why didn’t I think of using background removal?! Why wasn’t I able to incorporate face detection into the code, even though I had tried? Why was it so hard to pull random videos out of the first Processing sketch’s data folder?

Immediately, I tried the background removal code that Shiffman has in his Learning Processing book, but it wasn’t very accurate and it didn’t look clean. Maybe there are more sophisticated approaches to background removal out there? I have to research more.
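
For reference, the idea in Learning Processing is roughly: memorize one frame as the background, then keep only the live pixels that differ enough from it. A minimal sketch of that approach, with a made-up threshold of 50 that would need tuning:

import processing.video.*;

Capture cam;
PImage bg;             // the memorized empty scene
float threshold = 50;  // how different a pixel must be to count as foreground

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height, 30);
}

// Press any key to memorize the current frame as the background
void keyPressed() {
  bg = cam.get();
}

void draw() {
  if (cam.available()) {
    cam.read();
    if (bg == null) {
      image(cam, 0, 0);  // no background saved yet
      return;
    }
    cam.loadPixels();
    bg.loadPixels();
    loadPixels();
    for (int i = 0; i < pixels.length; i++) {
      color fg = cam.pixels[i];
      color bgPix = bg.pixels[i];
      // Color distance between the live pixel and the saved background
      float d = dist(red(fg), green(fg), blue(fg),
                     red(bgPix), green(bgPix), blue(bgPix));
      // Keep pixels that changed; paint the rest white
      pixels[i] = (d > threshold) ? fg : color(255);
    }
    updatePixels();
  }
}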

Then I searched a whole lot of forums on how to play QuickTime videos picked at random from the data folder. To my surprise, I found out that Processing’s video library isn’t all that great at playing and recording videos! Many contributors say the memory gets used up very fast, and I experienced the same problem. Maybe there are better tools for THE YAWN project; I should look into them as well.
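
For anyone searching the same forums, here is a sketch of the direction I was trying: list the data folder with plain Java, filter for .mov files, and pick one at random. (It doesn’t solve the memory problem.)

import processing.video.*;
import java.io.File;

Movie current;

void setup() {
  size(320, 240);
  playRandomYawn();
}

// List every .mov in the data folder and start one at random
void playRandomYawn() {
  File dataFolder = new File(dataPath(""));
  String[] names = dataFolder.list();
  String[] movs = new String[0];
  for (int i = 0; i < names.length; i++) {
    if (names[i].toLowerCase().endsWith(".mov")) {
      movs = (String[]) append(movs, names[i]);
    }
  }
  if (movs.length > 0) {
    current = new Movie(this, movs[int(random(movs.length))]);
    current.play();
  }
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (current != null) {
    image(current, 0, 0);
    // When the clip ends, pick another at random
    if (current.time() >= current.duration()) {
      current.stop();
      playRandomYawn();
    }
  }
}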

This was by far my favorite project of my first semester at ITP, and I learned so much in the process. I hope I’ll get to make a much better version of it next semester, share it with more people, and collect many more yawns! Before I go, here is a video of Michelle yawning during my final presentation.

THE YAWN: Michelle yawning from Asli Sevinc on Vimeo.

THE YAWN: Checklist

December 10, 2008 By: Asli Category: Computational Media

  1. Play video in a loop. check!
  2. Start recording video when a blob is detected. check!
  3. Record for 10 seconds. check!
  4. Save videos in an ArrayList. check! [Thanks to Lee Jay!]
  5. Pull videos from the ArrayList to play in the loop. (a small sketch of items 4 and 5 follows the list)
  6. Yawning sounds.
  7. White box with hole.
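
A minimal sketch of what items 4 and 5 might look like together, assuming the recording code hands each new filename to an addClip() helper. The filenames and helpers here are made up for illustration:

ArrayList yawnClips;  // filenames of every saved yawn movie, in order

void setup() {
  yawnClips = new ArrayList();
  // The recording sketch would call addClip() each time it finishes
  // saving a movie; these two names are made up for the example.
  addClip("yawn0.mov");
  addClip("yawn1.mov");
  println(nextClip());
}

// Remember a newly saved clip
void addClip(String filename) {
  yawnClips.add(filename);
}

// Pull a random saved clip to play next in the loop
String nextClip() {
  int i = int(random(yawnClips.size()));
  return (String) yawnClips.get(i);
}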

THE YAWN: Detecting and recording!

December 04, 2008 By: Asli Category: Computational Media

YES! IT WORKS!

Ok, not the whole thing, but my code detects the yawn and records it as a movie file!

I’m so excited I don’t know what to do with myself!

Oh I know, I’ll go get coffee from Think and keep working…

THE YAWN: Saving the video

November 30, 2008 By: Asli Category: Computational Media

We’ve been working on the Applications presentation all weekend, so I didn’t have any time to look at my final projects until now, Sunday evening.

It’s not grand progress, but I did get the code together for saving video as a QuickTime file. Once you start the program, the camera records until you press the space bar. The movie is then saved as “theYawn.mov” in the sketch folder. One problem: there are usually dropped frames in the saved file, possibly because I’m flipping the video to get a mirror image. I will ask Jeremy about this tomorrow.

Here is the code:

Code for saving video
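
Since the code itself is only pictured above, here is a minimal sketch of the same idea as I described it, using the Capture and MovieMaker classes from Processing’s video library. The scale(-1, 1) trick is the mirror flip I suspect is behind the dropped frames:

import processing.video.*;

Capture cam;
MovieMaker mm;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height, 30);
  // Record the display window straight into a QuickTime file
  mm = new MovieMaker(this, width, height, "theYawn.mov",
                      30, MovieMaker.ANIMATION, MovieMaker.HIGH);
}

void draw() {
  if (cam.available()) {
    cam.read();
    // Flip horizontally for a mirror image: my suspect
    // for the dropped frames
    pushMatrix();
    scale(-1, 1);
    image(cam, -width, 0);
    popMatrix();
    mm.addFrame();  // grabs the current display window
  }
}

// Space bar stops the recording and closes the file
void keyPressed() {
  if (key == ' ') {
    mm.finish();
    exit();
  }
}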

THE YAWN: Blob detection

November 29, 2008 By: Asli Category: Computational Media

I downloaded the BlobDetection library, following Dan Shiffman’s recommendation, and started trying it out. I took the example code from BlobDetection’s website, made a few minor changes, and ran a few tests. The most suitable threshold value for me was 0.13f. BlobDetection analyzes images for blobs relative to a given threshold value, a float between 0.0f and 1.0f: to find dark blobs, set the threshold low (close to 0.0f); to find bright blobs, set it high (close to 1.0f).
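
In code, that configuration looks roughly like this (setPosDiscrimination(false) is, as far as I can tell, how the library is told to look for dark rather than bright blobs):

import blobDetection.*;

BlobDetection theBlobDetection;

void setup() {
  size(320, 240);
  theBlobDetection = new BlobDetection(width, height);
  theBlobDetection.setPosDiscrimination(false);  // look for dark blobs
  theBlobDetection.setThreshold(0.13f);          // the value that worked for me
}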

Here is another picture of me yawning (it’s 1:40am!) with blob detection:

Yawn with blobDetection

So what BlobDetection does is frame each and every dark blob it detects in the picture. But I only want it to detect the largest blob: the open mouth. Can I constrain the size of the blobs? Can I make Processing pay attention only to blobs that are, say, at least 50 × 50 pixels?

I’ll go back to my code and see if I can figure it out.
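
One idea that might work: walk every blob the library reports and keep only the biggest one, ignoring anything under the 50 × 50 cutoff. A sketch of that, remembering that this library reports blob coordinates normalized between 0.0 and 1.0, so they have to be scaled back to pixels:

import processing.video.*;
import blobDetection.*;

Capture cam;
BlobDetection theBlobDetection;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height, 30);
  theBlobDetection = new BlobDetection(width, height);
  theBlobDetection.setPosDiscrimination(false);  // dark blobs
  theBlobDetection.setThreshold(0.13f);
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    cam.loadPixels();
    theBlobDetection.computeBlobs(cam.pixels);

    // Walk every detected blob and remember the biggest one
    Blob biggest = null;
    float biggestArea = 0;
    for (int i = 0; i < theBlobDetection.getBlobNb(); i++) {
      Blob b = theBlobDetection.getBlob(i);
      if (b == null) continue;
      // Blob sizes are normalized 0..1, so scale to pixels
      float area = (b.w * width) * (b.h * height);
      if (area > biggestArea) {
        biggestArea = area;
        biggest = b;
      }
    }

    // Only frame it if it is at least ~50 x 50 pixels: the open mouth
    if (biggest != null && biggestArea >= 50 * 50) {
      noFill();
      stroke(255, 0, 0);
      rect(biggest.xMin * width, biggest.yMin * height,
           biggest.w * width, biggest.h * height);
    }
  }
}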

THE YAWN: Progress

November 19, 2008 By: Asli Category: Computational Media

This week, I made an outline of the ideal process for my final project and worked out a timeline based on each step of that process.

1. IDEA: Making the viewer interact with the computer through yawns.

2. PARTS:

a. Algorithm Pseudocode: Write a program that…

  • triggers the camera and captures video.
  • records video and saves it.
  • tracks color using the blob detection library.
  • starts recording when a certain color blob is detected.
  • uses the recorded video as the next video shown.
  • uses the sound library to play pre-recorded yawning sounds.

b. Algorithm Code:

c. Objects

3. INTEGRATION

4. BUILDING THE SPACE

Based on the above process outline, the timeline will roughly be:

NOV 19:

  • Start writing the code for video capture, color detection, and saving video.
  • Test the yawn video in class.

NOV 26: (No class)

  • Use the blob detection library and integrate it into the color detection program.
  • Record and add sound to the code.
  • Start building the space.

DEC 3:

  • Integrate all the pieces into one program.
  • Test the program.
  • Finish building the space.

DEC 10:

  • Set it up and show the work!

I shot a video of myself yawning and I want to test its effects on my classmates this morning in class! I’m hoping everyone will still be sleepy and my yawn will trigger many many yawns!

Post-Final Proposal Presentation

November 13, 2008 By: Asli Category: Computational Media

After my final proposal presentation in ICM yesterday, everyone gave me a lot of ideas on how to execute the face detection part of my project. The common concern was that it would be very difficult to use the camera as a sensor and make it detect the mouth opening. Many suggested that I use something else to make the mouth stand out more, such as:

- Make the viewer put on lipstick

- Make the viewer smile before the program starts, so the whiteness of the teeth can be used as a point of reference

- Make a wearable object to be put around the viewer’s mouth

- Make the viewer put sensors on his/her lips to be used as a switch

- etc.

But my concern was that asking the viewer to do something unnatural would take away the human interaction that I want to create between the viewer and the computer.

To my surprise (and I was angry at myself for not having tried it a lot sooner), the pictures of me yawning that I took yesterday show that the blackness of the mouth during a yawn is quite predominant!

That makes me think I may be able to use the camera as a sensor after all: have it detect the color black, emphasize it with a white background, and hopefully make it work!

Shawn suggested that I talk to Dan O’Sullivan about face detection, so I’m going to try to get a spot in his office hours now.

And I have to sketch out a timeline for the project.

YES. TIMELINE.

THE YAWN

November 12, 2008 By: Asli Category: Computational Media

Since the beginning of the semester, I’ve wanted to do something humorous with programming. For some reason, I think there is something funny about the quest of programming: we, humans, trying to talk to / control / instruct / dictate computers that we ourselves have created… Or maybe it’s simply Shiffman’s style of writing in his Learning Processing that makes me associate programming with humor.

Maybe it’s because I think it’s just fun. (Not everything is fun yet, though. The advanced programming that’s way over my head isn’t fun; it’s scary!)

Going back to humor: I found myself thinking of humor as something very human. Obviously, computers can’t be humorous (or can they?), but what I mean is that humor is more distinctly human than, say, walking or a lot of the other actions and activities we engage in, and that’s something I want to tackle.

The contrast between a computer and a human intrigues me a lot, and I want to explore it.

I can explore it either by emphasizing the contrast or by making it vaguer.

With THE YAWN PROJECT, I want to try to make that contrast vaguer. (The name is subject to change.)

The idea is simple: using the video and sound libraries of Processing, I want to make people yawn. And hopefully record people yawning, and use those recordings to make more people yawn as well.

There are two ways to execute this (I think).

The first one is:

1. THE PROGRAM STARTS WITH A VIDEO OF ME YAWNING, SOUND INCLUDED.

2. THE VIEWER STARTS YAWNING.

3. THE CAMERA IS USED AS A SENSOR; IT DETECTS THE MOVEMENT OF THE MOUTH.

4. THE CAMERA STARTS RECORDING THE VIEWER YAWNING.

5. WHEN THE VIEWER STOPS YAWNING, THE CAMERA STOPS RECORDING.

6. THE PROGRAM SAVES THE VIDEO.

7. THE SAVED VIDEO BECOMES THE NEXT VIDEO SHOWN TO THE NEXT VIEWER.

The second option is:

1. THE PROGRAM STARTS WITH A VIDEO OF ME YAWNING, SOUND INCLUDED.

2. a) A VOICE-OVER SAYS: “IF YOU FEEL LIKE YAWNING, PRESS THE ‘…’ KEY NOW.”

   b) OR, TEXT APPEARS ON THE SCREEN, SAYING: “IF YOU FEEL LIKE YAWNING, PRESS THE ‘…’ KEY NOW.”

3. IF THE VIEWER FEELS LIKE YAWNING, SHE/HE PRESSES THE DESIGNATED KEY AND THE CAMERA TURNS ON.

4. THE CAMERA STARTS RECORDING THE VIEWER YAWNING.

5. AFTER 5 SECONDS, THE CAMERA STOPS RECORDING.

6. THE PROGRAM SAVES THE VIDEO.

7. THE SAVED VIDEO BECOMES THE NEXT VIDEO SHOWN TO THE NEXT VIEWER.

With this project, I’m aiming to use the computer to detect and trigger a human action.

I want to take a simple idea and play with it.

I wish to create an interaction between computer and human that is humanly. I want to trigger something very human (a yawn) (a laugh) (a smile), using programming.

Something funny.

Something fun.

What is the point? That’s something I’m asking myself. Why don’t I do something more applicable, more practical? I think this semester, and in the semesters left before I graduate, I essentially want to play. Have fun. And learn along the way. I hope…

the beginning

the climax