This is a somewhat complex question, so I hope I can formulate it clearly.

I have a human activity detection task that binary-classifies whether a user performs a specific action or not. It is enough for my purposes if the system detects the action within 3 seconds of it happening.
I am using smartphone sensor data sampled at 50Hz, which I process with a windowing approach using windows of 1sec length and 0.5sec overlap (i.e. I calculate statistics such as `mean` or `std` over each sensor channel for 1 sec of data, store these as “windows”, and overlap consecutive “windows” by 0.5sec).
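For concreteness, the windowing step could be sketched like this (dummy random data and an arbitrary channel count stand in for the real sensor stream; only `mean` and `std` are computed here, so the feature count differs from my actual 21):

```python
import numpy as np

fs = 50                      # sampling frequency (Hz)
win = fs                     # 1-second window -> 50 samples
step = fs // 2               # 0.5-second overlap -> 25-sample stride

# Dummy stand-in for raw sensor data: 13 s of 7 channels at 50 Hz.
raw = np.random.randn(650, 7)

windows = []
for start in range(0, len(raw) - win + 1, step):
    chunk = raw[start:start + win]            # (50, 7) slice of raw samples
    # Per-channel statistics: mean + std over 7 channels -> 14 features.
    feats = np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)])
    windows.append(feats)

windows = np.stack(windows)   # shape: (n_windows, n_features)
print(windows.shape)          # (25, 14) for this dummy input
```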
For the LSTM to learn longer-term dependencies, I use 5 such windows as timesteps (which together represent 3sec of data) and shift each instance by one window. So the shape of the data fed to the model is:
[13000 instances, 5 timesteps, 21 features]
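Building those overlapping 5-window sequences from the per-window feature matrix could look roughly like this (the instance count is a dummy value; the real data has 13000 instances):

```python
import numpy as np

# Dummy stand-in for the (n_windows, 21) feature matrix produced by the
# 1 s / 0.5 s windowing step above.
windows = np.random.randn(104, 21)

timesteps = 5  # 5 windows at 0.5 s stride span 3 s of raw signal

# Instance i = windows[i : i + 5], i.e. each instance shifts by one window.
X = np.stack([windows[i:i + timesteps]
              for i in range(len(windows) - timesteps + 1)])
print(X.shape)  # (100, 5, 21) -> [instances, timesteps, features]
```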
Now let’s consider the following case of a finished classification of such a model where all of the large squares in the image are labeled as an event, but only some are classified as such:
As I understand it, an LSTM using the `binary_crossentropy` loss function and `accuracy` as a metric in Keras will evaluate the results such that the above accuracy is 2 out of 5 correctly classified instances. However, the accuracy in this case should be 100%, because my goal is to detect the event within 3 sec: as long as at least one of these 5 timesteps is labeled as the event, I should get 100% accuracy.
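The discrepancy I mean can be illustrated with a toy example (labels made up to match the picture above: all 5 instances belong to the event, but only 2 are predicted positive):

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 1])  # all 5 instances cover the event
y_pred = np.array([0, 1, 0, 1, 0])  # only 2 of 5 classified as the event

# Instance-level accuracy, as Keras' `accuracy` metric computes it:
instance_acc = (y_true == y_pred).mean()          # 0.4, i.e. 2 out of 5

# Event-level criterion: detected if ANY overlapping instance is positive.
event_detected = bool(y_pred[y_true == 1].any())  # True -> 100% by my goal
print(instance_acc, event_detected)
```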
So my questions are: