First off, I am not an ML engineer. I am an embedded software engineer working mostly in safety-critical systems, so if there are some dumb assumptions here, don’t crucify me. One of the biggest things that strikes me about ML is its black-box nature. We can’t ask the machine how it made a decision; in fact, I’ve heard claims that we shouldn’t, because it would inject human bias into the system. For things like data scraping and image recognition that seems fine, but I can’t imagine a conversation at my work going like this:
“X failed, people died. Go figure out how and fix it.”
“Sorry boss, I can retrain the model with this new outcome, but I can’t tell you why it broke or guarantee to any degree of certainty that it won’t happen again.”
That just wouldn’t fly. Is there something I’m missing?
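One partial answer to the question above is that "black box" doesn't have to mean "unprobeable": model-agnostic techniques like permutation importance let you measure which inputs a model actually relies on, even when you can only call its prediction function. The sketch below is illustrative, not from the original post; the `predict` function and the dataset are made up stand-ins for a real black-box model, and it uses only the Python standard library.

```python
# Hedged sketch: permutation importance, a model-agnostic probe.
# All names and data here are hypothetical; `predict` stands in for
# a real black-box model we can query but not inspect.
import random

random.seed(0)

def predict(x):
    # Internally a simple linear rule, but we pretend we can't see this.
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

# Tiny synthetic dataset: feature 0 drives the label, feature 1 is mostly noise.
data = [[random.random(), random.random()] for _ in range(200)]
labels = [predict(x) for x in data]

def accuracy(model, xs, ys):
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(ys)

def permutation_importance(model, xs, ys, feature):
    """Shuffle one feature column and measure how much accuracy drops.

    A large drop means the model depends heavily on that feature.
    """
    base = accuracy(model, xs, ys)
    col = [x[feature] for x in xs]
    random.shuffle(col)
    shuffled = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(xs, col)]
    return base - accuracy(model, shuffled, ys)

for f in range(2):
    print(f"feature {f}: importance drop = {permutation_importance(predict, data, labels, f):.3f}")
```

Running this shows a large accuracy drop when feature 0 is scrambled and a near-zero drop for feature 1, i.e. you can attribute the model's behavior to specific inputs without opening the box. It doesn't give a causal "why" in the safety-case sense, which is arguably the real gap the post is pointing at.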
submitted by /u/nocomment_95