MyX – Augmented Reality Project

Introduction

I think everybody knows that Windows 10 came with a new feature called Paint 3D, which brings Augmented Reality into the hands of every user. I have been thinking about the process behind it ever since the first time I tried it. After familiarizing myself with Processing, I wondered whether there were libraries that could handle Augmented Reality (AR); I knew that my final project would somehow be linked to this idea. After two days of research on the web I found a library that could do it, called NyARToolkit. Despite the fact that it hasn’t been updated in a very long time, I managed to make it work in Processing. I told myself that if the prototype worked, I would continue with the project.

The shapes that are displayed have a mathematical concept behind them. In Math class, I have always been amazed by the shapes and rhythms born out of equations. I had been browsing through Daniel Shiffman’s Processing tutorials and was lucky to come across his videos on two kinds of shapes made with equations: SuperShapes and the Lorenz attractor. I will attach links to the videos and references for them at the end of the post.

How it works

The program searches for markers in the video stream from the camera. Each frame is first converted to a binary image; the black square frame of a sticker is then identified by the contrast between the dark frame and the white surface around it. The position and orientation of each marker are calculated relative to the camera, and the symbol inside the marker is matched against the templates in memory. The 3D shapes are then aligned with the sticker and rendered on top of the video. (There is a helpful diagram of this pipeline on the library’s website.)
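As a rough illustration of that pipeline in code, here is a minimal Processing sketch using the NyAR4psg wrapper around NyARToolkit. The class and method names (MultiMarker, addARMarker, detect, beginTransform) follow the wrapper’s typical usage and may differ slightly between versions, and camera_para.dat and patt.hiro are the calibration and pattern files shipped in the library’s data folder; this is a sketch of the idea, not my full project code:

import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker ar;

void setup() {
  size(640, 480, P3D);
  cam = new Capture(this, 640, 480);
  cam.start();
  // Camera calibration file and marker pattern, both kept in the data folder
  ar = new MultiMarker(this, width, height, "camera_para.dat", NyAR4PsgConfig.CONFIG_PSG);
  ar.addARMarker("patt.hiro", 80);  // 80 mm physical marker size
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  ar.detect(cam);          // binarize the frame and look for marker squares
  ar.drawBackground(cam);  // draw the video frame behind the 3D content
  if (ar.isExist(0)) {     // marker 0 (Hiro) was matched against its template
    ar.beginTransform(0);  // align the coordinate system with the sticker
    fill(255, 120, 0);
    box(40);               // any 3D shape drawn here sits on the marker
    ar.endTransform();
  }
}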

The code shows different objects on the four markers:

  • On the “Hiro” marker, it displays a SuperShape that fluctuates, constantly changing its form.
  • On the “Kanji” marker, it displays a cube that constantly changes color, using random() in the code (see the minimal sketch below).
  • On the NyIdMarker 0 and NyIdMarker 1 markers, it displays Lorenz attractors that keep drawing themselves indefinitely.
From left to right: the Kanji marker, the NyID 0 and 1 markers, and the Hiro marker
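The color-changing cube is the simplest of the three: every few frames the fill gets new random RGB values. A stripped-down sketch of just that idea, drawn in the middle of the window rather than on the marker, could look like this:

// Standalone sketch of the constantly color-changing cube idea
// (here drawn in the middle of the window instead of on the marker).
float r, g, b;

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  // pick a new random color every few frames so the change is visible
  if (frameCount % 10 == 0) {
    r = random(255);
    g = random(255);
    b = random(255);
  }
  translate(width / 2, height / 2);
  rotateY(frameCount * 0.02);
  fill(r, g, b);
  box(120);
}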

How I built it

I designed the entire project in Adobe Illustrator. Concerning the hardware part, I wanted to make an interface that would be clear and intuitive for users and would include a brief description of the project along with instructions. I already had the screen and the table I was going to mount my project on, and I made a paper surface with marked places for the stickers and an introduction, which you can see below.

  • This is how it looked when I tried it for the first time, on the big TV screen and with a web camera.

The stickers

I chose acrylic as the medium for the stickers because it is durable and works very well with the laser cutter in the lab. I made a template for the acrylic plates with the same dimensions as the stickers. If you want to print your own stickers and play with this library, I made an .ai file with all four of them, which can be downloaded from here.

Thoughts on the User Experience

I can say that the project exceeded my expectations in terms of user engagement. This may be due to the fact that the interface is very simple and clear, which gives the user an almost immediate effect. You just pick up a sticker, hold it facing the camera, and the objects pop up. Here is a video showing the experience and how it was received by the audience:

https://vimeo.com/306631991

Issues and Challenges

I wanted to display .OBJ files as well, but there are not many 3D models of this particular type available online, and I discovered they aren’t very stable in the program.

Seeing the world in black and white is tricky! At least for the camera. When someone wore a black-and-white T-shirt, or when the camera detected the black TV screen against the white wall in the background, the shapes would show up on those ‘patterns’. At the end of the day, it is just an algorithm, and it does what it is taught to do.

The Code

The code only runs together with its data folder. You can find both here.

 

SuperShapes

https://www.youtube.com/watch?v=akM4wMZIBWg
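For reference, here is a small 2D sketch of the superformula the video builds on. The parameters a, b, m, n1, n2 and n3 control the outline, and animating n1 makes the shape “fluctuate” the way it does on the Hiro marker; this is a simplified 2D illustration, not the 3D version from my project:

// 2D supershape: r(theta) = (|cos(m*theta/4)/a|^n2 + |sin(m*theta/4)/b|^n3)^(-1/n1)
float a = 1, b = 1;
float m = 6, n1 = 0.3, n2 = 0.3, n3 = 0.3;

float supershape(float theta) {
  float t1 = pow(abs(cos(m * theta / 4) / a), n2);
  float t2 = pow(abs(sin(m * theta / 4) / b), n3);
  return pow(t1 + t2, -1 / n1);
}

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  // animate n1 slightly so the outline keeps changing
  n1 = 0.3 + 0.2 * sin(frameCount * 0.02);
  noFill();
  stroke(255);
  beginShape();
  for (float theta = 0; theta < TWO_PI; theta += 0.01) {
    float r = 100 * supershape(theta);
    vertex(r * cos(theta), r * sin(theta));
  }
  endShape(CLOSE);
}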

 

The Lorenz Attractor in Processing

https://www.youtube.com/watch?v=f0lkz2gSsIk&t=1125s

 More about the Lorenz Attractor can be found here and here.
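For context, the Lorenz shapes come from integrating the Lorenz system x' = σ(y − x), y' = x(ρ − z) − y, z' = xy − βz step by step, with the classic parameters σ = 10, ρ = 28, β = 8/3. A minimal 3D Processing sketch of that integration (simplified compared to my project code) looks like this:

// Lorenz attractor: dx = sigma*(y - x), dy = x*(rho - z) - y, dz = x*y - beta*z
float x = 0.01, y = 0, z = 0;
float sigma = 10, rho = 28, beta = 8.0 / 3.0;
ArrayList<PVector> points = new ArrayList<PVector>();

void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(0);
  // take a small Euler step and remember the new point
  float dt = 0.01;
  float dx = sigma * (y - x) * dt;
  float dy = (x * (rho - z) - y) * dt;
  float dz = (x * y - beta * z) * dt;
  x += dx;
  y += dy;
  z += dz;
  points.add(new PVector(x, y, z));

  // draw the whole trail so the curve keeps "drawing itself"
  translate(width / 2, height / 2);
  scale(6);
  rotateY(frameCount * 0.005);
  stroke(255);
  noFill();
  beginShape();
  for (PVector p : points) {
    vertex(p.x, p.y, p.z - 25);  // shift z so the attractor is roughly centred
  }
  endShape();
}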

On ‘Computer Vision for Artists’

As this class comes to an end, I look back at everything I have learned, mesmerized by how much my way of thinking about interactive media and computing has changed. I think that this last reading, Computer Vision for Artists, sums up the main aspects and goals of the field and offers a glimpse into its context. But if I had read it at the beginning of the class, it would have felt completely different.

Levin includes many references to different software projects in his article, and I think that’s useful because it gives novices a direction in which they can go. I will definitely have a look at the source code when I get home for winter break.

I can link some of the concepts mentioned here with Bret Victor’s A Brief Rant on the Future of Interaction Design, in terms of how we interact with computers. Projects like Myron Krueger’s Videoplace took the interaction between the entire human body and computers as a starting point. The author mentions that it was developed between 1969 and 1975, before the mouse and keyboard took over and interaction was reduced to the hands alone. Also, the project in which two vocalists are visualized and augmented in real time by synthetic graphics reminds me of Kaki King, an artist who was a guest speaker in one of my classes this semester and who will perform at NYU Abu Dhabi next year. She and her team work with video projection and MIDI to create an exhilarating experience for the audience.

It is also interesting how the article addresses issues of surveillance, noting that the world grows more concerned every day with collecting data and quantifying everything around us. Was the Golden Gate Bridge project ethical? Will all this data turn against us in the future? These are questions with no concrete answer.


Response | Digitize Everything

A class I am taking this semester, Data and Human Space, is closely related to the main ideas of the ‘Digitize Everything’ article: it studies how our relationship to human space has changed in our data-rich world.

The main idea I took from the reading is that applications that make good use of the economic properties of digital information – like the zero cost of reproduction – are the ones that will flourish and stay around for a long time. The article gives Waze as an example, but today our most-used apps work the same way (TripAdvisor, Zomato, Google Maps). The huge body of information was free to generate, since it is contributed by each user, and once the data is digitized it can be shared widely and repeatedly, also for free.

One aspect that struck me, related to the digitization of things, is the power that geospatial data holds nowadays, and whether it should be thought of as new knowledge or as a new form of surveillance. Where does Google Maps pull its data from when it shows you how many people are in a place at a particular time? Does it pull data from phones even when geolocation is turned off?

Design Meets Disability

I can say that this reading was one of my favorites of the class. It was refreshing to read about a topic so widespread and yet so ‘undeveloped’. I hadn’t noticed before how little choice people with disabilities have when picking a hearing aid or a prosthetic in terms of design.

It’s true that eyeglasses have fared well, being worn nowadays as a fashion statement in their own right. In the reading, their evolution is described as a transition from a ‘medical model’ to a ‘social model’. In the case of hearing aids, the same transition needs to occur for them to be seen as ordinary gadgets rather than signs of disability. Apple’s AirPods are a great example of design going in that direction, because the same approach could easily be applied to hearing aids. Nowadays you see people wearing them even when they’re not actually playing any music; who, then, can tell the difference between a deaf person and a hearing one? However, design should not fall to the other extreme, where the gadget needs to be replaced not because it stopped working but because it went out of fashion – ‘changing a hearing aid because it’s no longer fashionable’.

When it comes to prosthetics, design is far more complicated. Even though the book shows an athlete wearing leg prosthetics, she mentions having a range of splints to choose from, almost like choosing which shoes to wear. Most people could not afford this, so a more universal, high-quality, timeless design is needed in this field. This is why I think designers should be involved in medical engineering as well, something the book states briefly. Good design requires valuing simplicity while not making the product boring; a minimalist radio, the iPod, and the iPod Shuffle were given as examples of this concept.

Probably the essence of these chapters is that how a product looks is as important as, if not more important than, how it works. And when talking about disability, products need good, thoughtful design behind them, because they’re not just another shirt that one can change daily but a “part” of a bigger picture, a part of the people themselves.

Robot x Potentiometer

In this exercise, besides controlling the arms of the robot with potentiometers, I wanted to add a little bit of ‘life’ to it, so I made the eyes move as well (also with a potentiometer).
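The basic pattern behind it is that the Arduino prints each potentiometer reading over serial, and Processing maps that value to an arm angle and an eye offset. The sketch below is only a minimal illustration of that pattern, assuming one value per line at 9600 baud; it is not the actual project code, and the shapes are simple placeholders:

import processing.serial.*;

Serial port;
float potValue = 0;  // last potentiometer reading, 0..1023

void setup() {
  size(400, 400);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');  // fire serialEvent once per line
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    line = trim(line);
    if (line.length() > 0) potValue = float(line);
  }
}

void draw() {
  background(255);
  // map the pot to an arm angle and to the eye position
  float armAngle = map(potValue, 0, 1023, -HALF_PI, HALF_PI);
  float eyeX = map(potValue, 0, 1023, -10, 10);

  // arm: a rotated rectangle
  pushMatrix();
  translate(width / 2, height / 2);
  rotate(armAngle);
  rect(0, -5, 100, 10);
  popMatrix();

  // eye: a pupil that slides left and right inside the eye
  ellipse(width / 2, 100, 60, 60);
  fill(0);
  ellipse(width / 2 + eyeX, 100, 20, 20);
  fill(255);
}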

 

This is a short video of it:

This is the Arduino Code:

 

 

This is the Processing Code:

 

 

Tiger Game – Arduino x Processing

Concept

This game uses serial communication between Arduino and Processing to control a simple game with a potentiometer: turning the knob makes the tiger move up and down so it can avoid the enemies coming to attack it.

I encountered issues during the process, so the final working version sadly does not include the donuts visible in the intro image.
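The game logic itself boils down to mapping the incoming value to the tiger’s vertical position, scrolling enemies in from the right, and checking for overlap. Here is a simplified sketch of that loop, with mouseY standing in for the mapped potentiometer value and plain shapes standing in for the tiger and enemy images; it is an illustration of the idea, not my actual game code:

// Simplified version of the game loop: the vertical input (here mouseY,
// in the real game the mapped potentiometer value) moves the tiger,
// enemies scroll in from the right, and overlap ends the round.
float tigerX = 80, tigerY;
float enemyX, enemyY;
boolean gameOver = false;

void setup() {
  size(600, 400);
  resetEnemy();
}

void resetEnemy() {
  enemyX = width + 20;
  enemyY = random(40, height - 40);
}

void draw() {
  background(200, 230, 255);
  if (gameOver) {
    textAlign(CENTER);
    fill(0);
    text("Game over - click to restart", width / 2, height / 2);
    return;
  }

  tigerY = constrain(mouseY, 20, height - 20);  // input controls height
  enemyX -= 4;                                   // enemy moves toward the tiger
  if (enemyX < -20) resetEnemy();

  // collision check: distance between tiger and enemy centres
  if (dist(tigerX, tigerY, enemyX, enemyY) < 35) gameOver = true;

  fill(255, 150, 0);
  ellipse(tigerX, tigerY, 40, 40);   // tiger placeholder
  fill(120, 0, 0);
  ellipse(enemyX, enemyY, 30, 30);   // enemy placeholder
}

void mousePressed() {
  if (gameOver) {
    gameOver = false;
    resetEnemy();
  }
}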

This is how it works:

This is the Arduino Code:

This is the code for Processing:

 

Casey Reas’ Concept of Order and Chaos

 

Casey Reas’ Eyeo talk explores the relevance of conceptual art to the idea of software as art, in relation to the concepts of randomness and order.

When speaking of any type of creation, I think it all boils down to concept, medium, and process. While artworks up until the 18th century originated from a visual and conceptual point of view, during the 20th century this shifted to a more conceptual and process-oriented approach. The Dada movement introduced the concept of chaos into art and, from then on, visual artists have focused on both order and chaos. Some of their work closely resembles the concept of chaos that Reas talks about. Adding to that, I found a work by the Dadaist Man Ray from 1922 (that’s a long way back, right?) that perfectly resembles some of the graphic artworks being made today.

Man Ray, c. 1921–22, Rencontre dans la porte tournante, published on the cover of Der Sturm, Volume 13, Number 3, 5 March 1922

 

In the same vein, Pollock’s famous “drip” works are composed of fractals (geometric patterns) that can be identified by a computer. You wouldn’t believe that there is actual ‘order’ in that ‘chaos’. Building on that, when I saw Reas’ process described in the video, I was amazed to see how it intertwines with concepts from the art world. As far as I am concerned, the process is more interesting than the finished work exhibited in the gallery. You certainly need randomness as well as order to achieve an overall harmonious and captivating piece.

 

The period from the 1960s until now will be remembered in much the same way as the beginnings of photography. Software art uses a different medium to produce the artwork (the computer), just as photography uses the camera. People may say that the author is not the person because the work was generated by a computer. And now the question is: who is the artist, the person or the computer?

Recreation of Computer Graphics

I recreated an old computer art design from an issue of “Computer Graphics and Art”.

The original:

Here is how it looks:

 

Here is the code:

 

Color De-stress – OOP Example

The Concept

 

Starting from the class example, I created a basic game that acts as a de-stressing tool. The multicolored arc shapes trace lines that can be covered with greyish transparent circles on a mouse click. The transparency, size, position, and color of all the objects vary; it’s quite satisfying to watch all the shapes and colors unfold. I tested different versions of it in order to find the right balance of elements and to keep it captivating, because a single-click game can get boring pretty fast.

In terms of programming, I created a new class for an arc shape that changes its color and position, and another one for the circles. The arc objects are created at the start of the program and drawn continuously in the draw() loop, whereas the twenty circle objects are drawn all at once when the mouse is pressed (a rough sketch of this structure is shown below).
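As an illustration of that structure (with placeholder class names ArcShape and Spot, not the names from my actual code), the skeleton looks something like this:

// Rough structure: ArcShape objects keep drawing and drifting every frame,
// and a mouse click stamps twenty translucent grey Spot objects on top.
ArrayList<ArcShape> arcs = new ArrayList<ArcShape>();
ArrayList<Spot> spots = new ArrayList<Spot>();

void setup() {
  size(600, 600);
  background(255);
  for (int i = 0; i < 10; i++) arcs.add(new ArcShape());
}

void draw() {
  // no background() call, so the moving arcs leave traces behind them
  for (ArcShape a : arcs) a.update();
  for (Spot s : spots) s.display();
  spots.clear();  // spots only need to be stamped once
}

void mousePressed() {
  for (int i = 0; i < 20; i++) spots.add(new Spot());
}

class ArcShape {
  float x = random(width), y = random(height);
  color c = color(random(255), random(255), random(255), 60);

  void update() {
    x += random(-3, 3);
    y += random(-3, 3);
    noFill();
    stroke(c);
    float start = random(TWO_PI);
    arc(x, y, 50, 50, start, start + random(QUARTER_PI, PI));
  }
}

class Spot {
  float x = random(width), y = random(height);
  float d = random(20, 120);

  void display() {
    noStroke();
    fill(180, random(40, 120));  // greyish, transparent
    ellipse(x, y, d, d);
  }
}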

Here is a video of how it works:

 

Here is the code:

 

Self Portrait

The Concept

This self-portrait was born out of trial and error. I had a hard time with Processing at first, but after completing the project, I feel like I’ve learned a lot. The portrait is made up of two parts: the first is an animation that draws a pattern based on the example from the readings (objects and classes), along with eyes that move and change color according to the mouse position (a stripped-down sketch of the eye behavior is shown below); the second is a brush that lets you finish the portrait, giving you endless possibilities for drawing.
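The eye movement boils down to mapping the mouse position to a small offset for the pupils and to a fill color. This is only a minimal sketch of that piece, not the full portrait:

// Two eyes whose pupils follow the mouse and whose iris color
// shifts with the mouse position.
void setup() {
  size(400, 300);
}

void draw() {
  background(245);
  drawEye(140, 150);
  drawEye(260, 150);
}

void drawEye(float cx, float cy) {
  // iris color depends on where the mouse is
  fill(map(mouseX, 0, width, 50, 255), map(mouseY, 0, height, 50, 255), 180);
  ellipse(cx, cy, 70, 70);

  // pupil slides a little toward the mouse, staying inside the eye
  float dx = map(mouseX, 0, width, -15, 15);
  float dy = map(mouseY, 0, height, -15, 15);
  fill(0);
  ellipse(cx + dx, cy + dy, 24, 24);
}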

Here is the full code for it:

Here is a video showing how it works: