Final Project – Blend

Taking a look at my original mind map, I realize that I achieved everything I intended. 

For my final project, I used a TV screen, two cameras, and a Mac Mini. The Mac Mini proved to be better than my laptop because its higher number of cores allowed it to work faster without overheating. Though displaying on the TV was slower than on my laptop, Vince helped me solve this problem by lowering the TV's resolution. I really enjoyed getting to play around with the Mac Mini; I didn't even know Mac Minis existed until this point.

As expected, I used the video library and the image processing algorithms I described in my mind map. These were the backbone of the blending in both the single-player and two-player modes. I also used the blending library to blend the background colors according to the position of my mouse.
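As a rough sketch of how that mouse-driven background blend works (in plain Java, with invented names rather than the project's actual code): the mouse's x position becomes a weight between 0 and 1, and each color channel is linearly interpolated with that weight, much like Processing's lerpColor().

```java
// Illustrative sketch of the menu background blend: the mouse's x
// position picks a weight, and two solid colors are interpolated
// channel by channel. Class and method names are hypothetical.
public class BackgroundBlend {
    // Returns a blend weight in [0, 1] from the mouse's x position.
    public static float weight(int mouseX, int width) {
        return (float) mouseX / width;
    }

    // Per-channel linear interpolation, like Processing's lerpColor().
    public static int lerpChannel(int a, int b, float t) {
        return Math.round(a + (b - a) * t);
    }

    public static void main(String[] args) {
        float t = weight(400, 800);      // mouse at the center of the screen
        int r = lerpChannel(255, 0, t);  // red channel blending A toward B
        System.out.println(t + " " + r); // prints "0.5 128"
    }
}
```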

The code is divided into numerous modes, each with a different background. The menu and score modes have a dynamic background that updates according to the position of the mouse by blending two images of different, solid colors. The single-player mode calculates the user's movement by comparing the RGB values of the previous frame and the current one. If the player is moving fast enough, their image appears stronger than the preloaded image. The same concept applies in the two-player mode. I am also using a HashMap to store the high scores, because writing to the CSV file was giving me an error. Finally, there are several classes for the circles on the main menu, the squares on the single-player menu, and the menu box at the bottom right of every mode; these classes make it easier to tell when the mouse is hovering over a figure.
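The movement measurement described above can be sketched like this (a simplified stand-in for the sketch's actual function, with hypothetical names): compare each pixel's RGB values between the previous and current frame and sum the absolute differences, so a bigger total means more motion.

```java
// Simplified version of the frame-difference movement check:
// sum the per-channel absolute differences between two frames.
// Pixel arrays use packed 0xRRGGBB ints, as Processing's pixels[] does.
public class Movement {
    public static long calculateMovement(int[] prev, int[] curr) {
        long total = 0;
        for (int i = 0; i < curr.length; i++) {
            int r1 = (prev[i] >> 16) & 0xFF, r2 = (curr[i] >> 16) & 0xFF;
            int g1 = (prev[i] >> 8) & 0xFF,  g2 = (curr[i] >> 8) & 0xFF;
            int b1 = prev[i] & 0xFF,         b2 = curr[i] & 0xFF;
            total += Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
        }
        return total;
    }

    public static void main(String[] args) {
        // One pixel flipping from black to white is the maximum change.
        System.out.println(calculateMovement(
            new int[]{0x000000}, new int[]{0xFFFFFF})); // prints 765
    }
}
```

In the game, a total above some threshold would count as "moving fast enough" and strengthen the player's image in the blend.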

About 90% of the project took two weeks to complete, but the last 10% proved harder than expected. During the last week, I realized that my project gave surprising outcomes when two people played the game. The results seemed completely unpredictable, and it took hours of debugging and numerous print statements to find the problem: different types of cameras yielded different values in the calculateMovement function. I first tried setting both cameras to the same size, hoping that equal resolutions would keep the calculations consistent. However, though the list of available cameras let me choose between different sizes, no two of them were the same, which made it really difficult to isolate the variables and come up with a solution. After a lot of testing, I noticed patterns in the numbers each camera was yielding, and I decided to simply adjust the values in the map() function to match what the cameras were giving. This proved to be a short-term solution: the cameras I used in the presentation were different from the one in my laptop, so I had to do further testing to find the pattern for the new cameras. Still, I was able to modify the numbers to adapt to the circumstances and make the game easy to play.
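The recalibration above boils down to how map() works: Processing's map() linearly remaps a value from one range to another, so adapting to a new camera just means swapping in that camera's measured input range. A plain-Java reimplementation (the range numbers below are illustrative, not the project's actual measurements):

```java
// Processing-style map(): linearly remap value from [inLow, inHigh]
// to [outLow, outHigh]. Recalibrating for a new camera only means
// changing the input range to that camera's measured values.
public class Remap {
    public static float map(float value, float inLow, float inHigh,
                            float outLow, float outHigh) {
        return outLow + (outHigh - outLow) * (value - inLow) / (inHigh - inLow);
    }

    public static void main(String[] args) {
        // Hypothetical ranges: one camera reports motion in [0, 20000],
        // another in [0, 80000]; both map the same relative motion to 127.5.
        System.out.println(map(10000, 0, 20000, 0, 255)); // prints 127.5
        System.out.println(map(40000, 0, 80000, 0, 255)); // prints 127.5
    }
}
```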

During the presentation I realized that one of the biggest flaws in my project was that it was not self-explanatory. If I stepped away from the project, very few people could actually figure out its purpose. Since the name was “Blend,” people did not know what they were supposed to do. I included some short instructions, but I guess they were not enough; sometimes it took people two tries to understand the game. However, it was very pleasing to see people enjoy the game once they understood the purpose. I achieved the desired effect when I watched them dance around while competing.

In order to make the project work, please download the images in the following folder and save them in the same folder as the Processing Sketch. 

https://drive.google.com/drive/folders/1RTfId1fJe7zRuBnEnii0OcYMtr4yJrO_?usp=sharing

Finally, here is the actual code. Note that for the code to run, there have to be two distinct cameras available.

 

The videos from my user testing are in the following link. 

https://drive.google.com/drive/folders/1cjkkLBbgYy6i8WpZC3mc6T3NjlvHY1ca?usp=sharing

I changed several things after this point because I realized the blending was not behaving the way I wanted; the updated project was the one presented during the showcase. I didn’t take pictures during the showcase, but I spoke to Michael and we agreed it was alright.

User Testing

In order to test my project, I asked Vic, Lucas, and Deena to try it without any previous instructions. I immediately noticed that even though I included the word “Move” at the top of the screen, people would not react quickly. Once they understood that they had to move, they would move very slowly, not giving the computer a significant change to detect. To achieve the desired effect, I need people to move around quickly. Thus, I could either find a way of making the word “Move” convey urgency, or simply provide more detailed instructions.

When Vic and Lucas tested my code, I realized I had a bug. The bug is seen in the video when the code does not react properly to their movements. Thus, Deena’s video is more accurate because she is testing the final code.

Deena mentioned that I should have a more descriptive title, as the word “BLEND” didn’t spark any interest for her. Vic mentioned that I should choose images related to blends, like coffee blends or ice cream blends. She added that I could include a surprise feature in which the image that appears is not actually the blend you chose, but instead a scary image. Finally, Lucas didn’t provide meaningful feedback; he simply mentioned that the way people reacted to the game was “scary.”

Please find all the videos and the feedback in the link below.

https://drive.google.com/drive/folders/1cjkkLBbgYy6i8WpZC3mc6T3NjlvHY1ca?usp=sharing

Prototype: Update

My goals for Wednesday’s prototype were basically to add a feature that would measure motion and increase the intensity of the image with the most motion. I used a motion measurer we saw in class a few weeks ago, but when I added it to my code, I began getting errors. It’s very strange because I didn’t change anything; the error just started appearing. I am still in the process of debugging. The following is my code:

 

Final Project Prototype

Recall that my list of challenges for my final project was the following:

Risky features:

  • The blending happens in real time, and I am not sure if the image processing algorithms will work with live video. For now, I am assuming I can capture each frame and blend it in real time.
  • Along the same lines, I’m not sure how the single-user aspect of my project will work. I will allow the user to choose from a number of images to be blended with. The idea is to have the live camera feed blend with the static images. I’m not sure if this will be possible, and if it is possible, I’m not sure if it will be interesting.
  • As dumb and trivial as it sounds, I am worried about the development process and how I will manage to plug two cameras into my laptop. Maybe I can use the built-in one and an additional one through a USB-C adapter. However, for the presentation I would want to use two separate cameras, and most likely both will need an adapter. My laptop has only two USB-C connectors, which means it cannot be charged while two cameras are connected.
  • I am scared of overcomplicating my project and adding too many features out of fear that it is too simple. I tend to overcomplicate things unnecessarily because I feel like the original idea is not good enough.
  • I am wondering what role light will play in my project. The camera feed will only look good if the light is good, yet I don’t want a very harsh light because that takes away from the mystery of my project.

I was not able to test some of these challenges because I don’t have a second camera yet, but I was able to figure out how to blend a static image with a real-time feed. The idea is to add features to this prototype and eventually integrate two cameras.

I also added a “Menu” feature to my code in order to begin visualizing how the user experience will be. Right now it looks kind of ugly, but I will work on the aesthetics later. For now, I want to test it with a second camera to figure out if I can add more features to my project.

 

In order to make my project work, you have to download the attached files and save them in the same folder as the project.

 

 

 

Final Project Idea

Brainstorm ideas for your final project.

  • Your blog post should include the concept, technical requirements, equipment needs, and block diagrams of the physical construction, electronics, and program in as much detail as possible. Hand drawn sketches are fine. You may incorporate concepts that we have not yet covered in class if you are quite confident that you can accomplish them; otherwise, stick to what we’ve covered already. Your design should be fairly detailed.
  • Identify what additional components and parts you require
  • Identify the 5 most risky, frightening, or complicated parts of your project. In your blog post explain why you chose these aspects.

 

For my final project, I have decided I want to use two cameras in order to display a blended image on a projector. I found this example on GitHub: https://github.com/milchreis/processing-imageprocessing. It explains how to blend two saved images, and my goal is to apply the same concept using the cameras. Ideally, the images would be blended in real time, meaning that as the users move, the screen would continuously display a blended version of the video feeds. To make my project work, I would use a Processing library called “Image Processing Algorithms.”
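Conceptually, blending two frames is just a weighted average of each RGB channel, pixel by pixel. The sketch below shows the idea in plain Java; it is my own simplification, not the library’s actual implementation.

```java
// The core idea of blending two frames: a weighted per-channel
// average of corresponding pixels (packed 0xRRGGBB ints, like
// Processing's pixels[]). alpha = 0 gives frame a, alpha = 1 gives b.
public class Blend {
    public static int[] blend(int[] a, int[] b, float alpha) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            int r = Math.round(((a[i] >> 16) & 0xFF) * (1 - alpha)
                             + ((b[i] >> 16) & 0xFF) * alpha);
            int g = Math.round(((a[i] >> 8) & 0xFF) * (1 - alpha)
                             + ((b[i] >> 8) & 0xFF) * alpha);
            int bl = Math.round((a[i] & 0xFF) * (1 - alpha)
                              + (b[i] & 0xFF) * alpha);
            out[i] = (r << 16) | (g << 8) | bl;
        }
        return out;
    }

    public static void main(String[] args) {
        // A 50/50 blend of black and white yields mid-gray.
        int[] mixed = blend(new int[]{0x000000}, new int[]{0xFFFFFF}, 0.5f);
        System.out.println(Integer.toHexString(mixed[0])); // prints "808080"
    }
}
```

Run per frame on the two camera feeds, this produces the live blended image; the library presumably does something equivalent (plus fancier blend modes).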

The following diagram explains my project in more detail.

 

Risky features:

  • The blending happens in real time, and I am not sure if the image processing algorithms will work with live video. For now, I am assuming I can capture each frame and blend it in real time.
  • Along the same lines, I’m not sure how the single-user aspect of my project will work. I will allow the user to choose from a number of images to be blended with. The idea is to have the live camera feed blend with the static images. I’m not sure if this will be possible, and if it is possible, I’m not sure if it will be interesting.
  • As dumb and trivial as it sounds, I am worried about the development process and how I will manage to plug two cameras into my laptop. Maybe I can use the built-in one and an additional one through a USB-C adapter. However, for the presentation I would want to use two separate cameras, and most likely both will need an adapter. My laptop has only two USB-C connectors, which means it cannot be charged while two cameras are connected.
  • I am scared of overcomplicating my project and adding too many features out of fear that it is too simple. I tend to overcomplicate things unnecessarily because I feel like the original idea is not good enough.
  • I am wondering what role light will play in my project. The camera feed will only look good if the light is good, yet I don’t want a very harsh light because that takes away from the mystery of my project.

 

Digitize Everything

I really enjoyed reading this article because I love learning about the different ways that technology develops, as well as learning about what happens in the background of an application that can run pretty quickly. What caught my attention the most about this article were the examples that highlighted the use of data in order to improve applications.

A lot of people are very skeptical about websites asking to collect data from our actions. They believe that this is a violation of our privacy, but they don’t realize that the main purpose of this data is to improve the applications we want to use. Data is the reason that apps have become so effective at fulfilling their purposes. Waze, Google Translate, and Facebook are all examples of apps that use our data and other people’s data in order to predict our actions and facilitate our lives.

Robot with Arduino

In order to connect the moving robot to the Arduino, I used the SerialCallResponse example, just like I did with my game. This time, however, rather than using the Arduino to modify xpos and ypos, I used it to modify leftChange and rightChange. Now each arm’s movement depends on its respective change variable, which in turn depends on the user’s input through the Arduino.
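In outline, the call-and-response exchange delivers one reading per potentiometer, and the difference from the previous reading becomes that arm’s movement. A minimal sketch of that bookkeeping, with my own names (the real sketch would call this from serialEvent()):

```java
// Hypothetical sketch of turning serial readings into arm movement:
// each call-and-response exchange yields one value per potentiometer,
// and the delta from the previous reading drives the matching arm.
public class ArmControl {
    private int lastLeft = -1, lastRight = -1; // -1 = no reading yet
    public int leftChange = 0, rightChange = 0;

    // Called with the two values from one call-and-response exchange.
    public void onSerialEvent(int left, int right) {
        if (lastLeft >= 0) { // skip the very first reading
            leftChange = left - lastLeft;
            rightChange = right - lastRight;
        }
        lastLeft = left;
        lastRight = right;
    }

    public static void main(String[] args) {
        ArmControl arms = new ArmControl();
        arms.onSerialEvent(100, 200); // first exchange: no deltas yet
        arms.onSerialEvent(110, 190); // left knob up, right knob down
        System.out.println(arms.leftChange + " " + arms.rightChange); // prints "10 -10"
    }
}
```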

 

Design meets disability

This article made me think about the way we perceive disabilities and the role design could play in that perception. The reason eyesight problems have become so acceptable is that people hide them less and less every day. People increasingly prefer glasses over contact lenses because the fashion industry is creating amazing eyewear that makes people want to wear glasses; even people who don’t need glasses want to start wearing them because of the effort that has been put into designing frames for each face shape. It is interesting that hearing aids have not taken this turn. Though some glasses aim to be as obvious as possible, hearing aids try to be as discreet as possible. I find it curious that headphones, especially AirPods, have become fashionable, yet a hearing aid, which could resemble a headphone, is not. Perhaps what makes these items fashionable is how common the underlying “disabilities” are: most people need some sort of seeing aid, whereas hearing aids are not as common, so they remain somewhat of a taboo. Legwear and armwear, being even more uncommon, have not developed the way eyewear has.

Game (Explanation + Code)

 

 

My game replicates the life cycle of a caterpillar. When you begin the game, a caterpillar appears, along with numerous leaves scattered across the screen. Your goal is to move the caterpillar using the Arduino potentiometers in order to catch all the leaves. When you catch them all, the caterpillar becomes a cocoon, and numerous clocks appear scattered across the screen. Once again, your goal is to collect all the clocks in order to become a butterfly and win the game! To make the code work, make sure to download all the images above and save them in the same folder as the Processing file.
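The “catch” check behind collecting leaves and clocks can be sketched as a simple distance test (illustrative names and radii, not taken from my actual code): the caterpillar collects an item when their centers are closer than the sum of their radii.

```java
// Hypothetical collision check for catching a leaf or clock:
// two circles overlap when the distance between their centers
// is smaller than the sum of their radii.
public class Catch {
    public static boolean caught(float cx, float cy, float ix, float iy,
                                 float playerRadius, float itemRadius) {
        float dx = cx - ix, dy = cy - iy;
        return Math.sqrt(dx * dx + dy * dy) < playerRadius + itemRadius;
    }

    public static void main(String[] args) {
        // Centers 10 apart, combined radii 13: the leaf is caught.
        System.out.println(caught(0, 0, 10, 0, 8, 5)); // prints "true"
    }
}
```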

Processing Code:

 

Arduino Code:

 

Computer Graphics and Art

My code focuses on random patterns forming triangular shapes across the screen. I used counters to change the triangle sizes as they move along the screen, and numerous while loops to create the different lines. To make the borders, I also used different strokeWeight() values to replicate the borders from the original images. The different colors are meant to overwhelm the eye yet create a desire to keep looking.