the greatest piece of art never seen

"The Greatest Piece Of Art Never Seen" project is an art installation that only reveals itself when your eyes are closed. With this piece I wanted to investigate unusual human/computer interfaces, become more familiar with integrating machine learning models in interactive art projects and make something that is a but more humorors than what I usally address.


working principle

The system basically consists of a web camera, an image processing unit, a microcontroller, position sensors and a few different types of actuators. When a person stands in front of the installation the image processing unit starts detecting the face. Once the system is positive that there is in fact a person standing in front of the installation, it feeds the image data into a machine learning face analysis library named Dlib. Dlib maps the face to a set of landmark points, from which the system calculates the distance between the upper and lower eyelids on the person's face. Comparing that distance against a threshold then tells you whether the eyes are open or closed, and just like that you have yourself a reliable eyes-open/eyes-closed detection algorithm.
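The actual eyeDetection.py script is not reproduced here, but a minimal sketch of the eyes-open/eyes-closed check could look something like the following. It assumes dlib's standard 68-point landmark model and the eye aspect ratio (EAR) measure from Adrian Rosebrock's tutorial; the model filename and the threshold value are placeholders you would tune for your own setup.

# Minimal sketch: detect a face, locate the eye landmarks and compare the
# eye aspect ratio (EAR) against a threshold to decide open vs. closed.
# Assumes dlib's 68-point landmark model; the threshold is a tuning parameter.
from scipy.spatial import distance as dist
import cv2
import dlib

EAR_THRESHOLD = 0.2  # below this value the eyes are treated as closed

def eye_aspect_ratio(eye):
    a = dist.euclidean(eye[1], eye[5])  # vertical eyelid distance
    b = dist.euclidean(eye[2], eye[4])  # vertical eyelid distance
    c = dist.euclidean(eye[0], eye[3])  # horizontal eye width
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        right_eye, left_eye = pts[36:42], pts[42:48]
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        eyes_closed = ear < EAR_THRESHOLD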


dlib face landmark example

Next, the image processing unit translates the person's eye state (open or closed) into a 1 or a 0 and sends it over serial to an ESP32 microcontroller. The ESP32 takes that input and uses it to drive a geared DC motor in one direction or the other. To convert the motor's rotation into linear motion, a timing belt is attached to the drive shaft of the motor. The cool thing about this solution is that I can attach the curtains to two opposite points on the belt, because those points always move back and forth relative to each other regardless of which way the motor is spinning. Sim Sala Bim, you just made actuated curtains that you can control simply by opening or closing your eyes.
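The serial hand-off itself only needs a single byte per frame. As a rough sketch (the port name and baud rate below are assumptions for illustration, not the values from the actual installation), the computer side of that hand-off could look like this using pyserial:

# Hypothetical host-side hand-off: one byte over USB serial to the ESP32.
# "/dev/ttyUSB0" and 115200 baud are illustrative assumptions.
import serial

esp32 = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def send_eye_state(eyes_closed):
    # '1' = eyes closed (pull the curtains open), '0' = eyes open (close them)
    esp32.write(b"1" if eyes_closed else b"0")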


Data & control transfer diagram

creation

This whole project started when I checked out one of Adrian Rosebrock's blog posts about drowsiness detection systems in cars. I thought it was a really interesting concept to utilize computer vision to make sure that people are not falling asleep behind the wheel. However, since I don't regard myself as a car manufacturer but rather an installation artist, I thought it would be interesting to apply this technology in an artistic context. Thus, this project was conceived.

That being said, I realised that making this installation would be a great opportunity for me to get more comfortable with implementing machine learning models in an artistic process, as well as to get to know the latest ESP microcontroller, the ESP32 (which, I must say, is awesome!).


Finding & setting up hardware

First things first - I got the code from Adrian's blog running on my MacBook Pro. I wanted a dedicated computer for this project, so using my MacBook permanently was out of the picture. I thought of using my trusty Raspberry Pi 3 as an alternative and went through all the hassle of installing OpenCV and Dlib on it, only to find out that it didn't have the computational muscle to run this relatively demanding script. I then turned my attention to a more powerful single board computer, the Asus Tinker Board, which should be able to perform adequately. Although I managed to install OpenCV, all its dependencies and Dlib on the Tinker Board, it kept telling me that it couldn't find the camera on any of the USB ports - a problem that I unfortunately wasn't able to resolve, since it seems to live somewhere within the operating system. If any of you knows how to get the Tinker Board to stream video data from one of its USB ports to OpenCV, I'd be very interested in hearing how.

Raspberry Pi 3 & Thinkerboard single board computers

I then remembered that I had an old laptop lying around running Windows 7. I had put it out of commission because it kept freezing and was too slow for my daily needs. However, for this it was perfect! I took it apart and installed Ubuntu 16.04 as its new OS, which also turned out to solve some of the old freezing problems. I will not bore you with the details of installing all the necessary software, but man, I wouldn't wish that on my worst enemy - of whom I believe I have none :).



At this point I was able to run the Dlib model on the hardware. However, I still wanted the script to launch automatically when the system was turned on, so I wrote a small bash script that executes a few commands in a terminal once the OS has fully booted. It took quite some time to find out how to run programs automatically on startup, but apparently you can create a directory (~/.config/autostart) and any .desktop entry placed there will be launched once the desktop session has started. This, I think, will come in handy in future projects as well.

gnome-terminal commands & auto run Bash Script


[Desktop Entry]
Type=Application
Exec=gnome-terminal --command "path/to/your/.sh/script"
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_NG]=Terminal
Name=Terminal
Comment[en_NG]=Start Terminal On Startup
Comment=Start Terminal On Startup


  echo "running python 2.7 script at startup"
  cd /home/user/Desktop/eye_detection
  python eyeDetection.py --shape-predictor data.dat # python script importing dlib face model


The frame

The actuators needed for this system consist of five 750 mm LED strips and a geared DC motor (servo). The DC motor drives a timing belt that pulls the curtains in or out, while the LED strips are used to light up some cinematic-looking letters from behind. Both the LEDs and the motor are controlled by an ESP32 dual core microcontroller.


First I had to design a frame in which all the individual components could be mounted without disturbing each other. The frame should be wide enough to fit the aluminium, plexiglass and two-way mirror sheets, while also leaving a bit of space for the light to disperse, but not so wide that it would make the final piece look bulky. This took a few iterations before I was ready to mount the actuators.


I could now test the motor and timing belt setup and quickly realised that the motor had enough force to pull the curtains, but would probably destroy the limit switches because of too much momentum. To address this problem I decided to add a rotary encoder to the back of the motor and use its sensor data to make the microcontroller slow the motor down as the curtains approach the limit. With a bit of tweaking I got this method working rather well, and now the curtains actually stop at the limits - what a success!
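The actual firmware runs on the ESP32, but the deceleration idea itself fits in a few lines; here is an illustrative Python version of the logic (the speed and zone values are made up for the example, not taken from the piece):

# Illustrative ramp: scale the motor speed down as the encoder count
# approaches a limit, so the curtains coast gently into the endstop.
def motor_speed(encoder_pos, limit_pos, max_speed=255, min_speed=60, slow_zone=200):
    remaining = abs(limit_pos - encoder_pos)  # encoder counts left to travel
    if remaining >= slow_zone:
        return max_speed                      # full speed while far from the limit
    # linear ramp from max_speed down to min_speed inside the slow zone
    return int(min_speed + (max_speed - min_speed) * remaining / slow_zone)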


Now, I wanted the installation to display a "secret" message that would only show when a person closes their eyes. Since the curtains wouldn't move fast enough to keep the person in front of the installation from reading the message when opening their eyes, I got the idea to use a so-called two-way mirror in between the letters and the curtains. I then glued the letters to the frosted acrylic sheet behind the two-way mirror. What is cool about a two-way mirror is that it is about 99% reflective when no light comes from behind, but becomes almost totally transparent when light is shone from the backside. This way a person interacting with the system will not be able to read the message unless light (the LEDs) is shone from behind the acrylic sheet. It allowed me to simply turn the lights off when the person opens their eyes, making the two-way mirror go from totally transparent to 99% reflective in a fraction of a second.

In the picture below you see me working on the acrylic sheet that I "frosted" myself. This, I must say, I cannot recommend doing yourself. I spent hours sanding and polishing and managed to get a decent result. But if you are not (in relative terms) poor like me, or if you regard your time as somewhat valuable, then save yourself the hassle and just buy the damn thing.



Now that the frame was almost complete, I was ready to mount the computer vision unit, LED transistor modules, power supplies etc. on the back of the installation. I also decided to add the old laptop's monitor to the backside, because I thought people might find it interesting to see what the computer sees, and it makes it easier for me to do quick calibrations when exhibiting the piece at new locations. I could now go ahead and paint the parts of the installation that would be visible to people and then start testing the system in its entirety.


When testing the system I found that it was really inconvenient to check whether the actuators were working according to the program while having to keep my eyes closed for the system to activate. Since I still had one core on the ESP32 that I was not using, I thought: why not have it host a server on a wireless access point that I could use to see what was going on inside the system? So I quickly put together some code that makes the ESP32 host a GUI-based server showing which way the curtains are moving, the encoder value and error messages, and most importantly lets me pull the curtains in or out without having to close my eyes. If you find this interesting, you may find the HTML for the access point WiFi server here.
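The real firmware and its HTML are in the repo; purely to illustrate the idea, a stripped-down version of such an access point server could look like the MicroPython sketch below (the SSID, page contents and command hooks are placeholders, and the actual installation may well use the Arduino framework instead):

# Illustrative MicroPython sketch: the ESP32 opens its own WiFi access point
# and serves a tiny status/control page over a raw socket.
import network
import socket

ap = network.WLAN(network.AP_IF)
ap.active(True)
ap.config(essid="curtain-debug")  # open access point, placeholder name

page = """<html><body><h1>Curtain debug</h1>
<p>Encoder: {enc}</p>
<a href="/open">pull curtains open</a> | <a href="/close">close curtains</a>
</body></html>"""

server = socket.socket()
server.bind(("", 80))
server.listen(1)

while True:
    conn, addr = server.accept()
    request = conn.recv(512).decode()
    if "GET /open" in request:
        pass  # hook: command the motor to open the curtains
    elif "GET /close" in request:
        pass  # hook: command the motor to close the curtains
    conn.send("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n".encode())
    conn.send(page.format(enc=0).encode())  # placeholder encoder value
    conn.close()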


conclusion

A lot of credit for this project goes to Adrian Rosebrock for his tutorial on importing Dlib into Python as well as the cool team behind Dlib itself. I really have been standing on the shoulders of giants.

As usual I started a project that at first was way more than I could chew, but miraculously ended up with something that actually works! It has been a true pleasure getting to know more about all these relatively new subjects while implementing them in an artistic context. When it comes to technology, persistence really tends to pay off.

In the future I might reuse this code to make a piece that revolves around Heisenberg's uncertainty principle. This principle touches on the quantum physical phenomenon that you cannot precisely know both the momentum and the position of a particle at the same time. Thus, a particle potentially exists everywhere, and it is only when it is being observed that it obtains an actual "location". An interesting phenomenon that might be better understood by utilizing software like what was used for this project - I don't know...

If you'd like to take a closer look at the code for this project, you may find most of it on this GitHub repo.