
[P] Realtime Stereo Vision (toy project)


Sup. I happened across one of the projects I was working on a few years ago, and I thought I'd share it here in case you guys want to play with it.

Sample video of script in action

tl;dr: the code is a quick-and-dirty implementation of a few LSTM layers that process a live video feed from one or two webcams and try to predict a configurable number of frames into the future, conditioned on the model's own action space. Since I happened to have two identical webcams lying around, I also adapted the code to handle concurrent stereoscopic video feeds for the fun of it. It wasn't really meant to be a rigorous implementation (i.e., I used LSTM instead of CNN layers), but I still think it had some mildly interesting outcomes.
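The repo has its own details, but the basic idea (run an LSTM over past frame+action pairs, then roll it forward feeding its own predictions back in) can be sketched in plain NumPy. Everything here is hypothetical, not the repo's actual code: `TinyLSTM`, the flattened 64-pixel "frames", and the shapes are all stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM cell (hypothetical, not the repo's code)."""
    def __init__(self, in_dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One fused weight matrix for the input/forget/output/cell gates.
        self.W = rng.normal(0, 0.1, (4 * hidden, in_dim + hidden))
        self.b = np.zeros(4 * hidden)
        self.hidden = hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2 * H])     # forget gate
        o = sigmoid(z[2 * H:3 * H]) # output gate
        g = np.tanh(z[3 * H:])      # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def predict_future(frames, actions, lstm, W_out, n_future):
    """Condition on past (frame, action) pairs, then predict n_future
    frames by feeding each prediction back in as the next input."""
    h = np.zeros(lstm.hidden)
    c = np.zeros(lstm.hidden)
    for frame, act in zip(frames, actions):
        h, c = lstm.step(np.concatenate([frame, act]), h, c)
    preds = []
    last_act = actions[-1]  # assume the last action is held constant
    for _ in range(n_future):
        pred = sigmoid(W_out @ h)  # pixel values squashed into (0, 1)
        preds.append(pred)
        h, c = lstm.step(np.concatenate([pred, last_act]), h, c)
    return np.stack(preds)
```

With 64-pixel frames and a 4-dim action vector, `predict_future(frames, actions, lstm, W_out, n_future=3)` returns a `(3, 64)` array of predicted frames. The real script obviously works on full webcam resolution and trains the weights online rather than using random ones.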

Also, imho the video feed of the loss makes for some fun visual effects.
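If you want to reproduce that effect, one simple way (again, a sketch, not necessarily how the repo does it) is to turn the per-pixel squared error into an 8-bit grayscale image each frame and display it alongside the live feed:

```python
import numpy as np

def loss_frame(pred, target):
    """Per-pixel squared error scaled to an 8-bit image for display.
    pred/target: float arrays in [0, 1] with the same shape."""
    err = (pred - target) ** 2
    # Normalize so the worst-predicted pixel is white; guard against all-zero error.
    scaled = err / max(err.max(), 1e-8)
    return (scaled * 255).astype(np.uint8)
```

Pass the result to whatever display loop you're already using (e.g. `cv2.imshow`) and the regions the model predicts badly light up, which is where the trippy visuals come from.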

Anyways, here’s a link to the eye-bleedingly bad (but mostly functional) code: https://github.com/Miej/online-deep-learning

enjoy!

submitted by /u/Miejuib