
algorithmic rotoscope

Experimentation in applying texture to video using machine learning via APIs.

Unique Algorithmic Filters Is Where It Is At

I am having a blast with the image texture filters that Algorithmia includes as part of their DeepLearning service. I've been playing with how each filter behaves when applied to different images. Some work better for desert images, while others work best for water, so I apply those to my waterfalls, lakes, and rivers. The process has been a welcome distraction, but where I really get lost thinking about the possibilities is in creating your own filters--an area I am just getting started with.
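As a rough sketch of what a single filter application looks like, here is how I'd shape the call with the Algorithmia Python client. The payload shape (parallel `images` and `savePaths` lists plus a `filterName`) follows the DeepFilter docs, but treat the specific paths, version number, and filter name here as illustrative assumptions, not my exact setup:

```python
# Sketch of applying one of Algorithmia's DeepFilter textures to an image.
# Paths, the "far_away" filter name, and the algorithm version are
# illustrative assumptions.
import os

def build_deepfilter_request(image_urls, filter_name):
    """Build the JSON payload DeepFilter expects: parallel lists of
    source images and save locations, plus the filter to apply."""
    return {
        "images": list(image_urls),
        "savePaths": ["data://.my/filtered/" + u.rsplit("/", 1)[-1]
                      for u in image_urls],
        "filterName": filter_name,
    }

request = build_deepfilter_request(
    ["data://.my/rotoscope/desert_001.jpg"], "far_away")

api_key = os.environ.get("ALGORITHMIA_API_KEY")
if api_key:
    # Only attempt the remote call when a key is configured.
    import Algorithmia
    client = Algorithmia.client(api_key)
    result = client.algo("deeplearning/DeepFilter/0.6.0").pipe(request).result
```

Swapping the `filterName` is all it takes to compare how different textures handle the same source image, which is how I've been doing my side-by-side experiments.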

Algorithmia provides a pretty straightforward guide to creating your own filters. Once you fire up an EC2 GPU instance and follow their setup process, creating filters is pretty easy, but it is also pretty addictive, depending on what kind of habit you can afford. ;-) I have created six filters so far, but plan on creating more as soon as I have more money and time. Just like applying the filters, training them takes practice, and experience training against a variety of images, colors, textures, etc.

Experience applying filters to a variety of images is important and valuable, but experience training and creating filters, I think, is where it is at. Being able to find just the right filter to apply to an image or the images used in a video is valuable, but being able to identify, create, and train exactly the right set of textures, colors, and filters could provide some really unique experiences. I'm not a big fan of the concept of intellectual property, but I could see the knowledge gained from training your texture and filter algorithms against specific pieces of art, photographs, and elements from our physical world being a potentially unique offering--something you'd want to keep secret.

Some of the Algorithmia filters are loud and intense, which I like for some applications, but I'm finding their lighter, more artistic, and subtle filters have a wider range of uses. I'm applying these findings to the filters I'm training, but I need more experience applying existing filters, as well as training new ones. All of this takes a significant amount of compute and storage power--which costs money. I have made my algorithmic rotoscope framework API-centric so that I can scale it and increase the number of videos I am able to process, as well as the number of filters I am able to create and add to the process.
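The rotoscope flow itself is simple to sketch: split a video into stills, run each still through a filter, then stitch the results back into a video. This is a minimal sketch of that loop, assuming ffmpeg for the splitting and stitching; the directory layout, 24 fps rate, and the pluggable `apply_filter` step are my assumptions, not a fixed part of the process:

```python
# Sketch of the frame-level rotoscope flow: extract frames with ffmpeg,
# filter each one, then reassemble the filtered frames into a video.
import subprocess
from pathlib import Path

def extract_cmd(video, frames_dir, fps=24):
    """ffmpeg command that dumps numbered stills from the source video."""
    return ["ffmpeg", "-i", str(video), "-vf", f"fps={fps}",
            str(Path(frames_dir) / "frame_%05d.jpg")]

def assemble_cmd(frames_dir, output, fps=24):
    """ffmpeg command that rebuilds a video from the filtered stills."""
    return ["ffmpeg", "-framerate", str(fps),
            "-i", str(Path(frames_dir) / "frame_%05d.jpg"),
            "-pix_fmt", "yuv420p", str(output)]

def rotoscope(video, workdir, apply_filter, fps=24):
    """Split, filter every frame, reassemble. `apply_filter` is whatever
    per-frame filter step you plug in -- e.g. a DeepFilter API call."""
    raw = Path(workdir) / "raw"
    done = Path(workdir) / "filtered"
    raw.mkdir(parents=True, exist_ok=True)
    done.mkdir(parents=True, exist_ok=True)
    subprocess.run(extract_cmd(video, raw, fps), check=True)
    for frame in sorted(raw.glob("frame_*.jpg")):
        apply_filter(frame, done / frame.name)
    subprocess.run(assemble_cmd(done, Path(workdir) / "filtered.mp4", fps),
                   check=True)
```

Keeping the filter step pluggable is what makes the framework API-centric: the same loop works whether the frame goes to Algorithmia's DeepFilter or to one of my own trained filters.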

I am going to create around 10-15 more filters, then spend time just applying them to see what I can produce. Then I'm hoping to have enough experience applying and training to know what works best, what I like, and what complements my approach to drone video capture. Eventually, I am hoping to establish my own unique set of filters and my own unique style of applying them to video using my algorithmic rotoscope process.