[P] Update: using an Orbbec Astra Pro, improved tracking, and again the Dynamixel pan/tilt turret, ROS, and YOLOv3 for real-time robotic object tracking

https://youtu.be/QoP2Hu_RQcU

Above is an update to an ongoing “Applied-ML” project of mine.

This is a pan/tilt turret equipped with an infrared depth camera, guided by YOLOv3 in ROS to track “human heads”. I trained YOLO on Google Open Images V4, and combined pirobot’s code from “ROS By Example, Volume 2”, leggedrobotics’ darknet_ros, and my own headtracker node, which takes the 2D bounding-box data from YOLO and retrieves the 3D data associated with the corresponding depth-registered RGB pixel coordinates for tracking.
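For anyone wanting to try something similar, below is a minimal sketch of what that kind of node can look like (not my actual code; the topic names and the “Human head” class string are assumptions that depend on your camera launch files and trained label set, while the darknet_ros_msgs fields are the real ones):

```python
#!/usr/bin/env python
# Minimal sketch of a headtracker-style ROS node (illustrative, not the
# project's actual code). Topic names and the 'Human head' class string
# are assumptions.
import numpy as np
import rospy
from cv_bridge import CvBridge
from darknet_ros_msgs.msg import BoundingBoxes
from geometry_msgs.msg import PointStamped
from sensor_msgs.msg import CameraInfo, Image


class HeadTracker(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.depth = None            # latest depth-registered frame, in meters
        self.frame_id = ''
        self.fx = self.fy = self.cx = self.cy = None
        rospy.Subscriber('/camera/rgb/camera_info', CameraInfo, self.info_cb)
        rospy.Subscriber('/camera/depth_registered/image_raw', Image, self.depth_cb)
        rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, self.boxes_cb)
        self.target_pub = rospy.Publisher('/head_tracker/target', PointStamped,
                                          queue_size=1)

    def info_cb(self, msg):
        # Pinhole intrinsics of the RGB camera the depth is registered to.
        self.fx, self.fy = msg.K[0], msg.K[4]
        self.cx, self.cy = msg.K[2], msg.K[5]

    def depth_cb(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg)
        # Drivers publish either 16UC1 (millimeters) or 32FC1 (meters).
        self.depth = img.astype(np.float32) / 1000.0 if msg.encoding == '16UC1' else img
        self.frame_id = msg.header.frame_id

    def boxes_cb(self, msg):
        if self.depth is None or self.fx is None:
            return
        heads = [b for b in msg.bounding_boxes if b.Class == 'Human head']
        if not heads:
            return
        box = max(heads, key=lambda b: b.probability)
        u = (box.xmin + box.xmax) // 2   # bounding-box center pixel
        v = (box.ymin + box.ymax) // 2
        z = float(self.depth[v, u])
        if z <= 0.0 or np.isnan(z):
            return                       # depth hole at that pixel; skip frame
        # Back-project the pixel through the pinhole model into the camera frame.
        target = PointStamped()
        target.header.stamp = rospy.Time.now()
        target.header.frame_id = self.frame_id
        target.point.x = (u - self.cx) * z / self.fx
        target.point.y = (v - self.cy) * z / self.fy
        target.point.z = z
        self.target_pub.publish(target)


if __name__ == '__main__':
    rospy.init_node('head_tracker')
    HeadTracker()
    rospy.spin()
```

The published PointStamped can then be fed to whatever drives the pan/tilt joints, e.g. a Dynamixel joint controller.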

The detection is much smoother in this release, although at about 0:12 in the video it jolts hard to the right in error, likely a bug in the lead and/or joint-speed update that should be easy to resolve.
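If the culprit is indeed the speed update, one common fix is to rate-limit and smooth the commanded angles; a minimal sketch (the constants are illustrative, not tuned values from this project):

```python
# One way to tame that kind of jolt: low-pass filter the target angle and
# clamp how far the joint may move per control update. Illustrative values.
ALPHA = 0.3        # smoothing factor: lower = smoother but laggier
MAX_STEP = 0.05    # max radians allowed per update

def next_command(current, raw_target, smoothed):
    """Return (new joint command, new smoothed target)."""
    smoothed = ALPHA * raw_target + (1.0 - ALPHA) * smoothed
    step = max(-MAX_STEP, min(MAX_STEP, smoothed - current))
    return current + step, smoothed
```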

YOLO also performs significantly better on the NVIDIA Tesla K40 I’m using here, upgraded from the GTX 1060 in my previous post, and I’ve swapped the Kinect 360 for a calibrated Orbbec Astra Pro. Both the depth registration of the RGB and the stability of the detection have noticeably improved.

I plan to begin the challenge of designing a rudimentary implementation of “visual dialogue” in an eventual upcoming upgrade. Ideally, I want this to be able not just to hold somewhat of a conversation, but to look around a room at objects it’s capable of detecting, use SLAM to store their locations, and interact with people and the world around it verbally and in context. For example, asked “what is that cat behind you doing?”, it should look for said cat, track and map its location, and generate a verbal response.
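Purely as a toy sketch of that intended flow (none of this exists in the project yet; every class and method below is a hypothetical placeholder):

```python
# Toy outline of the planned "visual dialogue" loop. The detector and turret
# interfaces are hypothetical placeholders; slam_map is just a dict mapping
# object names to their last SLAM-stored poses.
class VisualDialogue(object):
    def __init__(self, detector, turret, slam_map):
        self.detector = detector    # hypothetical: describes a tracked object
        self.turret = turret        # hypothetical: points the camera at a pose
        self.slam_map = slam_map    # {object_name: last_known_pose}

    def answer(self, object_name):
        pose = self.slam_map.get(object_name)
        if pose is None:
            return "I haven't seen a %s yet." % object_name
        self.turret.look_at(pose)                    # re-acquire the object
        activity = self.detector.describe(object_name)
        return "The %s is %s." % (object_name, activity)
```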

Stay tuned for more updates; the next will be a bit more exciting!

A link to the first release of this bot, with a description of the underlying technology, is below:

https://www.reddit.com/r/MachineLearning/comments/dik1lr/p_my_implementation_of_object_tracking_using_an/

submitted by /u/Oswald_Hydrabot