u/Luxonis-Brandon put together a video demonstrating the real-time performance of DepthAI.
The device is something we’ve been working on that combines disparity depth and AI via Intel’s Myriad X VPU. We’ve developed a SoM that’s not much bigger than a US quarter, which takes direct image inputs from three cameras (2x OV9282 for stereo, 1x IMX378 for color), processes them on-device, and spits the results back to the host over USB 3.1.
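For context on the disparity-depth part: disparity from the two OV9282 stereo cameras maps to metric depth via the standard pinhole stereo relation, depth = focal_length × baseline / disparity. A minimal sketch of that conversion (the baseline and focal length below are illustrative values, not DepthAI's actual calibration):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to depth (meters).

    depth = focal_length * baseline / disparity; zero disparity -> infinity.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    return np.where(
        disparity_px > 0,
        focal_px * baseline_m / np.maximum(disparity_px, 1e-9),
        np.inf,
    )

# Illustrative numbers only: 7.5 cm baseline, 800 px focal length.
depth = disparity_to_depth([40.0, 20.0, 0.0], focal_px=800.0, baseline_m=0.075)
# 40 px of disparity -> 1.5 m, 20 px -> 3.0 m, 0 px -> infinity
```

Note that depth resolution degrades quadratically with distance, which is why disparity works best at the short-to-medium ranges relevant to a rear-facing cyclist alert.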
Our ultimate goal is to develop a rear-facing AI vision system that will alert cyclists to potential danger from distracted drivers, so we needed disparity + AI to get object localization outputs – an understanding of where and what objects are. This needs to happen fast and with as little latency as possible… and at the edge… and at low power!
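The "where and what" fusion described above can be sketched roughly as: take a detector's bounding box, sample the depth map inside it, and back-project the box center through the pinhole camera model to get a 3D position. This is a hypothetical illustration (the helper name and all parameter values are made up, not DepthAI's API):

```python
import numpy as np

def localize_object(depth_m, bbox, fx, fy, cx, cy):
    """Estimate an (X, Y, Z) position in meters for a detected object.

    depth_m: HxW depth map in meters; bbox: (x0, y0, x1, y1) in pixels.
    Uses the median depth inside the box to reject background pixels,
    then back-projects the box center with the pinhole camera model.
    """
    x0, y0, x1, y1 = bbox
    z = float(np.median(depth_m[y0:y1, x0:x1]))
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Toy example: 4x4 depth map with an object at ~2 m filling the frame.
depth = np.full((4, 4), 2.0)
X, Y, Z = localize_object(depth, (0, 0, 4, 4), fx=800.0, fy=800.0, cx=2.0, cy=2.0)
# Box center sits on the principal point, so X = Y = 0.0 and Z = 2.0
```

Pairing each detection with a depth estimate like this is what turns "there is a car in the frame" into "there is a car 3 m behind you."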
There are some Myriad X solutions on the market already, but most use PCIe, so the data pipeline isn’t as direct as Sensor–>Myriad–>Host, and the existing solutions also don’t offer a three-camera setup for RGB-D. So, we built it!
If anyone has any questions or comments, we’d love to hear them!
Shameless plugs for our hackaday and crowdsupply 🙂
submitted by /u/Luxonis-Brian