[D] Apple’s new iOS 13 Music lyrics system is highly accurate and intelligible. How does it work?
When iOS 13 was released, the Music app got a redesigned lyrics player that shows the lyrics like a slide show as the music plays, with incredible timing accuracy. Plus, all the phrases are revealed in perfectly grouped lines of text. The lyrics player even detects when the song is in an instrumental section without any singing. It is even able to pick up the quiet mumbles and unintelligible words that are buried in the background music.
So the main question is… can this be done without ML, and if ML is required (most likely), how can you get or train a model that reproduces this intelligent system?
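For context, the standard technique in the research literature for this kind of problem is lyrics-to-audio forced alignment: an acoustic model scores how well each lyric token (word or phoneme) matches each short audio frame, and a dynamic programming step (DTW or Viterbi) finds the monotonic alignment with the lowest total cost, which gives you a timestamp per word. I have no idea whether this is what Apple actually does; below is just a toy, pure-Python sketch of the DTW core of such an aligner. The cost matrix here is made up by hand for illustration; in a real system it would come from an acoustic model's per-frame phoneme posteriors.

```python
def dtw_align(cost):
    """Align rows (lyric tokens) to columns (audio frames) of a cost
    matrix via classic dynamic time warping. Returns the lowest-cost
    monotonic path as a list of (token_index, frame_index) pairs."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    acc[0][0] = cost[0][0]
    # Fill the accumulated-cost table: each cell extends the cheapest
    # of its diagonal, upper, or left neighbor.
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = INF
            if i > 0 and j > 0:
                best = min(best, acc[i - 1][j - 1])
            if i > 0:
                best = min(best, acc[i - 1][j])
            if j > 0:
                best = min(best, acc[i][j - 1])
            acc[i][j] = cost[i][j] + best
    # Backtrack from the end to recover the alignment path.
    i, j = n - 1, m - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0 and j > 0:
            candidates.append((acc[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            candidates.append((acc[i - 1][j], i - 1, j))
        if j > 0:
            candidates.append((acc[i][j - 1], i, j - 1))
        _, i, j = min(candidates)
        path.append((i, j))
    path.reverse()
    return path


# Toy example: 3 lyric tokens vs. 5 audio frames. Low cost = the
# token "sounds like" that frame (hypothetical numbers).
cost = [
    [0.1, 0.2, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.1, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.9, 0.1],
]
path = dtw_align(cost)
# Each token's onset = the first frame it is aligned to; multiplying
# by the frame hop (e.g. 10 ms) would give the display timestamp.
starts = {}
for tok, frame in path:
    starts.setdefault(tok, frame)
print(starts)  # → {0: 0, 1: 2, 2: 4}
```

The same dynamic-programming idea would explain the instrumental detection too: if you add a "no vocals" filler token between lyric lines, stretches of audio that match nothing in the lyrics get absorbed by the filler, which is exactly when the player could hide the text.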
I’d love to hear your thoughts!