
[D] how do you expect ML to transition over to safety critical systems?

First off, I am not an ML engineer. I am an embedded software engineer working mostly in safety-critical systems, so if there are some dumb assumptions here, don't crucify me. One of the biggest things that strikes me about ML is its black-box nature. We can't ask the machine how it made a decision; in fact, I've heard claims that we shouldn't, because it would inject human bias into the system. For things like data scraping and image recognition that seems fine, but I can't imagine having a conversation at my work go like this:

"X failed, people died. Go figure out how and fix it."

"Sorry boss, I can retrain the model with this new outcome, but I can't tell you why it broke or guarantee to any degree of certainty that it won't happen again."

That just wouldn’t fly. Is there something I’m missing?
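The black-box worry above can be made concrete by contrast with a model class that *is* explainable. A minimal sketch (not from the post; the fault-detector features and weights are hypothetical): a linear classifier's score decomposes exactly into per-feature contributions, so you can answer "why did it decide that?" term by term. Deep networks have no such clean additive decomposition, which is the gap the poster is pointing at.

```python
# Sketch: a linear decision rule is fully auditable because its score is an
# exact sum of per-feature contributions. The feature names and weights below
# are invented for illustration (a hypothetical embedded fault detector).

def explain_linear(weights, bias, x, names):
    """Return the decision score and each feature's additive contribution."""
    contribs = {n: w * v for n, w, v in zip(names, weights, x)}
    score = bias + sum(contribs.values())
    return score, contribs

names = ["temp_c", "vibration", "voltage"]   # hypothetical sensor features
weights = [0.8, 2.5, -1.2]                   # learned weights (made up)
bias = -1.0
x = [0.5, 1.0, 0.25]                         # one observed input

score, contribs = explain_linear(weights, bias, x, names)
# score = -1.0 + 0.4 + 2.5 - 0.3 = 1.6; "vibration" dominates, so an
# engineer can point to exactly which input drove the decision.
```

Post-hoc tools (e.g. permutation importance or SHAP-style attributions) try to recover something like this decomposition for black-box models, but they are approximations rather than the exact accounting a linear model gives — which is part of why certifying ML for safety-critical use is hard.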

submitted by /u/nocomment_95

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.