Musical Sand – Full Documentation


My project was inspired by the AR sandbox created by researchers at UC Davis. The AR sandbox is a lot more complicated than the one I built, but it still gave me some insight into a similar project.

The actual inspiration behind my project is the sandbox in the Health and Wellness Center. I find it extremely relaxing and soothing. I wanted to evoke a similar feeling with my project and help users relax. At the same time, I also wanted to give users some agency to manipulate the experience and cater it to their taste.

So basically, my project is a sandbox where you manipulate sand to hear a soothing sound of your choice. Depending on which part of the sandbox holds the heap of sand, you hear a different soundtrack. The sandbox is divided into four quadrants, and each quadrant has a different soundtrack attached to it. Users can move the heap of sand into any one of the quadrants, and doing so triggers a soothing soundtrack. The soundtracks are natural sounds of waves, a waterfall, and rainfall. I chose these sounds because I feel they go well with the sand. For instance, sand is often found on the beach, so the waves are meant to create a beach-like experience.
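Since Processing sketches compile down to Java, the quadrant logic described above can be sketched in plain Java. This is an illustrative sketch, not the project's actual code; the class name, track names, and coordinate convention (origin at the top-left, as in Processing) are all assumptions.

```java
// Hypothetical sketch of the quadrant-to-soundtrack mapping.
// (0, 0) is the top-left corner, following Processing's convention.
public class QuadrantMap {
    // The write-up names waves, waterfall, and rainfall; the fourth
    // entry here is a placeholder, since the fourth sound is not stated.
    static final String[] TRACKS = {"waves", "waterfall", "rainfall", "waves"};

    // Returns 0..3 for top-left, top-right, bottom-left, bottom-right.
    static int quadrantFor(int x, int y, int width, int height) {
        int col = (x < width / 2) ? 0 : 1;
        int row = (y < height / 2) ? 0 : 1;
        return row * 2 + col;
    }

    static String trackFor(int x, int y, int width, int height) {
        return TRACKS[quadrantFor(x, y, width, height)];
    }
}
```

In the real sketch, the (x, y) fed into a lookup like this would be the position of the highest point of the sand as seen by the Kinect.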

Initial sketch for sandbox
I don’t think it is possible for me to draw a schematic of my project, because there are no circuits or Arduino involved, but if I had to draw a basic sketch, this would be it.

List of parts used

Wooden Sandbox with frame

Xbox Kinect




One of the biggest components of my project was the entire hardware setup. I wanted the setup to be sophisticated: since the experience is supposed to be soothing, the hardware has to be non-distracting and minimalistic. I requested Andrew’s help in the wood scene shop to create the box with measurements of 50 × 50 inches. He warned me that the box might get too heavy with the sand, but I decided to go with it anyway, because a really small box might not allow enough room to play with the sand. Ultimately, the box turned out fine and was not very heavy with the sand in it.

Once the box was created, the next step was to build a frame where the Kinect could be mounted. In the early stages, when I wasn’t sure where the Kinect should go, I used a basketball hoop to test it. The wooden frame was the last step, because I had to be sure about where the Kinect had to be placed, which in turn depended on the program running properly. Once I had decided where the Kinect had to go, Michael helped me with the wooden frame, which looked a lot better.

The gallery below has images of the primitive sandbox and of me trying to adjust the Kinect to get the sandbox in frame in Processing.


The software was the most challenging part of the entire project. I had never used a Kinect before and had no idea how to go about it. I just knew that I wanted to learn to work with a new input method, and the Kinect was something I had in mind. Moreover, my idea of detecting the heap of sand required a device that could detect z-values, and the Kinect was the only one in the IM lab inventory that fit that description.

Once I checked out the Kinect, the next thing I had to do was figure out how it actually worked. I went through Daniel Shiffman’s Kinect and Processing tutorial, which helped me understand the basic concepts of the Kinect, how it works, and how to use the Open Kinect library. The tutorial was very helpful, but it still did not give me a precise answer about how to work with my data.

I eventually came across another library for Processing, known as OpenNI, that can also be used with the Kinect. After going through both libraries and trying out code with each, I decided that OpenNI was better, not just because it suited my needs but also because more resources were available online for me to use.

This is my code. There are many comments explaining the different parts of the code; some of them run long, so the code needs to be scrolled right at points to read them in full:


As I had mentioned earlier, these are some of the main challenges of my project:

Avoiding hand detection was one of the biggest challenges. I solved it to the best of my ability using the color of the sand and a depth threshold for the sandbox.
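A minimal sketch, in plain Java, of the depth-threshold idea: the Kinect reports a distance for each pixel, and samples outside the band that brackets the sand surface (for example, a hand hovering above the box, closer to the camera) can be discarded. The class name, the sentinel value, and the millimetre range are assumptions for illustration, not the project's actual code.

```java
// Hypothetical depth-threshold filter for rejecting hands above the sand.
// Depth values are distances from the camera; minDepth/maxDepth bracket
// the sand surface, so anything closer (a hovering hand) is rejected.
public class DepthFilter {
    static boolean isSand(int depthMm, int minDepth, int maxDepth) {
        return depthMm >= minDepth && depthMm <= maxDepth;
    }

    // Keeps only samples inside the sand band; others become -1.
    static int[] mask(int[] depth, int minDepth, int maxDepth) {
        int[] out = new int[depth.length];
        for (int i = 0; i < depth.length; i++) {
            out[i] = isSand(depth[i], minDepth, maxDepth) ? depth[i] : -1;
        }
        return out;
    }
}
```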

Creating the sandbox and painting it was time-consuming, if not challenging.

Figuring out how to get a smoother difference between the closest-pixel values in two different frames was a difficult task. For some reason, the Kinect would often detect two really distant points as the closest points in two consecutive frames, even if nothing had moved at all. This was beyond my control, because it had to do with the Kinect itself. However, I was able to minimize its impact by averaging the values over five seconds.
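The averaging trick above amounts to a moving average over a window of recent readings. Here is a minimal plain-Java sketch of that idea; the class name and window size are hypothetical, and in the actual sketch the window would be sized to cover roughly five seconds of frames.

```java
import java.util.ArrayDeque;

// Hypothetical moving-average smoother for the jittery "closest point"
// readings: instead of reacting to each raw frame, return the mean of
// the last `capacity` values.
public class Smoother {
    private final ArrayDeque<Float> window = new ArrayDeque<>();
    private final int capacity;

    Smoother(int capacity) { this.capacity = capacity; }

    // Add a new reading and return the current windowed average.
    float add(float value) {
        if (window.size() == capacity) window.removeFirst();
        window.addLast(value);
        float sum = 0;
        for (float v : window) sum += v;
        return sum / window.size();
    }
}
```

At 30 frames per second, a five-second window would mean a capacity of about 150 readings.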


Hardware + Software = Final Project


My final project looked something like this.  

Here are some pictures and video from people using my sandbox in the showcase. 

The response I got to my project was the one I wanted in the first place. People told me that they felt calm and relaxed after using my sandbox.

I am also including some videos.
Using sandbox in the showcase
Showcase user


I also asked some users what they would recommend to make the experience better. One person suggested creating some sort of enclosure around the sandbox that the user can ‘enter’, so as not to be distracted by the other activities of the showcase, or life in general. A curtained environment around the sandbox would make for a better and more soothing experience.

Arantza (who was sitting next to me while I made the entire project) amazed by my sandbox.

Final project

I am making an interactive sandbox. Basically, the user will have a sandbox in front of them where playing with the sand results in different outputs. For instance, as users play with the sand, they will experience different sounds that complement the experience of playing with the sand. Users will notice patterns in how certain manipulations of the sand result in particular sounds, and based on that they can create their own “sandmusic”.

I do not require any additional parts so far. I have already checked out the Kinect. I need some help from the woodwork department in mounting the Kinect on top of my sandbox. The only other part of my project is the laptop, which I have. Most of my other work is software-based.

Complicated parts:

  • The most complicated part of my project is figuring out how to work with the Kinect. I am going through YouTube video after YouTube video to understand the Open Kinect library.
  • I also want to find a way to keep hand movement out of the experience. The aim is for the Kinect to ignore hand movement so that it does not influence the sound. Only sand manipulation should affect it.
  • I have to figure out how to manipulate the output in Processing, and how to play the sound files from Processing.
  • Most importantly, I need to connect the input and the output: how will I connect my Processing file, with its data from the Kinect, to the sound output that the user will experience?

In this sketch, you can see the dimensions of the box I need. The intersection in the middle is where I would like the Kinect to hang. It could be done either with four pieces of wood coming out of the sides or with one piece hanging it from the center.

Water painting (Final Project)


I changed my idea from what I had presented in class last time. The idea now is to make a painting or a piece of art using water. My idea is inspired by how much I enjoy playing with water, and by the sandboxes we often see in waiting rooms. I find it really satisfying to drag my fingers around the sandbox and make patterns in it. I wanted to bring this idea to water.

Imagine a small container of water where everything you draw by running your fingers through the water gets recorded and tracked. The thing with water is that, unlike sand, it will not retain the patterns you create with your movements. Therefore, the best way to bring this idea to life is to record it on a computer.

User Interaction

The ideal user interaction in my case would look like this:

The user comes and stands in front of my setup, which includes a broad, shallow container filled with water. Most probably I will not provide any explanation of what needs to be done. Hopefully, the user will see the container of water and instinctively want to dip his/her hand into it, just like I would. Once they do that, they will be rewarded by the output, whatever that may be. For instance, if I decide on a painting, then movements along the water will create strokes of paint on the screen. Different movements will lead to different strokes and, ultimately, a work of art.

Another sort of output could be music. I can link particular movements to specific sounds, most probably soothing sounds of wind chimes and other relaxing instruments. Moving a hand through the water will create music that the user can listen to. So the user is not just playing in the water but also creating his/her own music, which can make this project feel very personal.

Requirements (unsure)

In terms of requirements, I am not completely sure what I would need to make this project work properly, since I was not able to discuss it in class. As for detecting the motion of fingers in water, I haven’t been able to find a suitable sensor, even on Google. Some suggested using an accelerometer to detect the movement of the water itself and trying to make it waterproof. However, that would not accomplish what I want to do.

I would need a container to hold the water. Ideally, it should be 30 × 30 cm with a depth of 10 cm. Apart from this, I don’t think I would require any complicated tech.


Basic Sketch

3 complicated parts:

  • Figuring out how to detect the movement of a hand in water. I can’t even find out whether a sensor exists that can detect such motion.
  • I have a feeling that there is going to be a lot of noise in the detection, if I manage to detect anything at all. The accuracy with which I will be able to detect the movement seems like an issue, and a lot of the experience will depend on this accuracy, so this part is crucial.
  • I want to create a good user experience in general, with a good introduction and conclusion to the experience. If I can build a narrative around it and make it into a story-like experience, that would be great too. So I need to think about this conceptually.

Digitize everything

One of the lines that resonated with me was about how expensive it is to create something in today’s world, but how extremely cheap it is to replicate it. This is so true now that I think about it. It takes so much time and effort to create a song, but it only takes a Ctrl+C and Ctrl+V to replicate it. The same goes for a lot of other digital assets that exist right now.

I am fascinated by the information about the amount of data being created. The fact that we are going to run out of metric prefixes to measure the massive amount of data being generated is truly mind-blowing. The article highlighted the importance of this data in terms of how we can use it to test theories.

However, one thing that comes to mind in terms of big data is the recent Facebook scandal. It brought to everyone’s attention the extent to which big companies like Google and Facebook have access to data about individual people. A harmless app like Google Maps, which we all find extremely useful, gives away a ton of information about where we travel. Google stores this information and could use it in many different ways, from marketing to distributing it to others. I am not trying to say that we should not utilize big data. However, I definitely feel that adequate safeguards need to be in place to protect the data from cybercrime and other threats.

Design meets disability response

This reading was the first one that made me think about weak eyesight as a disability. I have weak eyes and wear glasses, but I never thought of my weak eyesight as a disability. Now that I think about it, this has to do with the fact that glasses have addressed the issue so well that, even though they are visible, there is no stigma attached to them. Considering that glasses are now commonly known as eyewear, it becomes clear how the perception of glasses has changed over time. In fact, these days I have noticed that it is very common for people to wear glasses with no prescription just because they consider it a fashion statement. This huge shift in perspective is very interesting.

I guess the most important debate is whether or not disability design should aim for discretion. On one hand we have fashion and wearable tech; on the other, the fact that concealing assistive technology might perpetuate the stigma attached to the disability.

Modified Robot

Arduino Code


Processing Code

Game Code

Arduino Code


Processing Code


Processing + Arduino Game

My game is basically a modification of what we did in class. I built on the serial call-and-response example and made a simple game out of it. It has all the features of a game: there is an objective, you earn a score based on your performance, and you can win.

I am using two potentiometers to control a ball on screen. The objective is to make that ball reach another ball and sit perfectly on top of it. As soon as the user does that, the target ball changes position randomly. The user has around 15 seconds to do this each time; reaching the ball grants 15 more seconds to reach the next one. People can keep playing as long as they manage. The score keeps increasing each time they reach the ball, and it is displayed once they run out of time.
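The core loop described above (hit detection, scoring, and the 15-second timer reset) can be sketched in plain Java, since Processing is Java-based. The class name, tolerance, and field values are hypothetical, not the game's actual code.

```java
// Hypothetical core of the game loop: the player's ball (driven by two
// potentiometers) must land on the target ball within a tolerance; a
// hit scores a point, moves the target randomly, and resets the timer.
public class BallGame {
    int score = 0;
    float timeLeft = 15f;                 // seconds remaining
    float targetX = 200, targetY = 150;   // current target position

    // True when (x, y) lies within `tol` pixels of the target center.
    static boolean hit(float x, float y, float tx, float ty, float tol) {
        float dx = x - tx, dy = y - ty;
        return dx * dx + dy * dy <= tol * tol;
    }

    // Called once per frame; returns false once time has run out.
    boolean update(float x, float y, float dt) {
        timeLeft -= dt;
        if (timeLeft <= 0) return false;            // game over: show score
        if (hit(x, y, targetX, targetY, 10f)) {
            score++;
            timeLeft = 15f;                         // fresh 15 s for the next target
            targetX = (float) (Math.random() * 400); // respawn target randomly
            targetY = (float) (Math.random() * 300);
        }
        return true;
    }
}
```

In the actual sketch, x and y would come from the two potentiometer readings sent over serial and mapped to screen coordinates.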

Casey Reas Response

I found the talk a bit difficult to keep up with; at points it became too abstract for me. I did enjoy learning about the different projects he shared, from the early part of the 20th century to more recent ones. I liked the contrast between order and chaos and the projects he showed that played on this concept.

The example of machine art from 1916 was very interesting. It was cool to see how the artist dropped random pieces of paper on the ground and how that was considered a radical gesture at that point in time. The main purpose of this piece was to challenge the notions of an ordered society and its focus on logic. Through this art, the artist succeeded in showing the beauty of the illogical and the unordered.

Code Art


Code Art (Video)