A Google whistleblower claimed that much of the demonetization/censorship activity on YouTube is driven by Google's speech-to-text transcription of uploaded videos. If so, altering a video's audio into an adversarial example before uploading it could serve to slow what's happening.
Is it possible to reliably generate adversarial examples against an AI you do not have direct access to? (Google's Cloud Speech-to-Text is behind a paywall.) I've heard Lex Fridman mention that adversarial examples are often effective against multiple networks, even when their architectures differ.
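The transfer property Fridman describes is the usual answer to the black-box problem: craft the perturbation against a local *surrogate* model you can inspect, then reuse it against the unseen target. A minimal sketch of that idea, using the Fast Gradient Sign Method (FGSM) on toy linear "classifiers" over an audio feature vector (the model names, weights, and epsilon here are illustrative assumptions, not Google's actual system; real speech-to-text attacks target full ASR pipelines):

```python
# Sketch of a transfer (black-box) adversarial attack: attack a local
# surrogate with FGSM, then check whether the same perturbation also
# fools a different, inaccessible "target" model.
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy audio feature length (hypothetical)

# Two independently "trained" networks: similar, but not identical weights.
surrogate_w = rng.normal(size=dim)
target_w = surrogate_w + 0.3 * rng.normal(size=dim)

def logit(w, x):
    # Toy decision score; sign(logit) is the model's "classification".
    return float(w @ x)

def fgsm(x, w, eps):
    # Fast Gradient Sign Method: step against the sign of the gradient
    # of the score we want to flip. For a linear model the gradient is w.
    return x - eps * np.sign(w)

x = rng.normal(size=dim)
x = x * np.sign(logit(surrogate_w, x))  # make the surrogate score positive

# eps is exaggerated here so the toy demo flips reliably; real audio
# attacks keep the perturbation near-imperceptible.
x_adv = fgsm(x, surrogate_w, eps=1.0)

print("surrogate before/after:", logit(surrogate_w, x), logit(surrogate_w, x_adv))
print("target    before/after:", logit(target_w, x), logit(target_w, x_adv))
```

Because the target's weights correlate with the surrogate's, the perturbation that flips the surrogate's score usually flips the target's as well; that correlation between independently trained models is what makes black-box transfer attacks plausible at all.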
submitted by /u/ShameSpirit