FOCUS FOCALS.png

FOCUS FOCALS.mp4

We move through a world full of distractions, which can make it hard to get things done. Activities that need focus and commitment are especially difficult in an environment where you are surrounded by other people doing their own activities. In response, we came up with an idea we call Focus Focals.

Using RunwayML and some creative editing, we take the viewpoint of a student in an environment full of both visual and auditory distractions. With the help of Focus Focals, the student can concentrate because those stimuli are toned down: the glasses blur out the distractions around them and keep the focus solely on the task at hand.

Using a feature within RunwayML called Sequel, we were able to select our focal point; in our first example, it was a book.

Our process was as follows:

  1. We recorded a video from the user's viewpoint.
  2. We inputted that video into Sequel.
  3. We selected the book as our object of interest using the green screen feature.
  4. We combined the selected object with a blurred version of the original video.
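Runway handles the compositing internally, but the final step can be sketched in a few lines. The NumPy snippet below is a minimal illustration of step 4, assuming the green screen step produced a per-pixel binary mask of the object of interest; the `box_blur` helper and its kernel size are our own simplifications (Runway's blur is likely a nicer Gaussian), not Runway's actual implementation.

```python
import numpy as np

def box_blur(frame, k=5):
    """Naive box blur: average each pixel over a k x k window.
    A stand-in for whatever blur the real tool applies."""
    h, w = frame.shape[:2]
    pad = k // 2
    padded = np.pad(frame, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(frame.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(frame.dtype)

def focus_composite(frame, mask, k=5):
    """Keep the frame sharp where mask == 1 (the object of interest)
    and blend in a blurred copy everywhere else."""
    blurred = box_blur(frame, k)
    m = mask[..., None].astype(np.float64)  # broadcast over colour channels
    return (m * frame + (1.0 - m) * blurred).astype(frame.dtype)
```

Run per frame of the recorded video, this keeps the book crisp while everything outside the mask is softened, which is the whole effect.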

I think the coolest thing about this is that, even though we masked out our object of interest from a pre-recorded video using Runway's green screen feature, the result is not very far off from what the final vision could be. An experience like this could move closer to reality in three possible futures.

  1. Doing this with real-time video, where at some point the model would understand exactly what we are trying to focus on. This is already somewhat of a reality using object detection.
  2. Doing this with multiple objects from just text/audio input. The team behind Runway teased a feature where you could highlight objects just by typing in what they are. This would fit very nicely with real-time detection.
  3. Finally, running this on a real device. The HoloLens seems to fit well into this world with its augmented reality capabilities, and it would be able to take in the video data and render out the result.

Past Ideas

This project was quite difficult for us. We tried to think of multiple ideas that fit the brief of creating synthetic media. We initially came up with three technologies we wanted to use, but the hard part was coming up with meaningful commentary for the project.

Another thought was around the idea of rivalry.

It was an interesting idea to take a piece of media by one artist and have a rival artist perform that same piece. One example was Tupac singing a song performed by Biggie. We used an online utility called UberDuck.ai to synthesize the song in the style of the rival artist. The tool allowed changes to speech rate and pitch, which made the result sound more accurate. In practice, though, the tool was still very buggy and didn't produce the result we wanted.