Category: Reddit MachineLearning

[D] Using ML for a Graduate Project

Hello,

As the title states, I will be using ML for a project of mine, and was hoping I could get some direction on where to start. I have some experience with ML in the past, but not using it in the way that I would like to for this project.

I always thought it was pretty cool watching people train models to play games, and decided that I would like to do something like that for my project. However, I am having some trouble getting started.

The game I decided on using for this project is an open-source game called SuperTuxKart. Basically, it's a racing game like Mario Kart. I want to train a model to run the time trials, beat the built-in AI, use the abilities, etc.

The game is built using C++, but I was thinking of maybe using Python to hook into the game process and send commands to it like that. But again, this is not something I have done before, so if that is not the correct approach here, please let me know.
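One common pattern for this kind of setup is to wrap the game in a gym-style environment and drive it from Python. The sketch below is purely illustrative: `SuperTuxKartEnv` and its methods are hypothetical stand-ins for whatever process-control or screen-capture layer you end up building (existing Python bindings for SuperTuxKart, if available for your version, could save you from hooking the process manually).

```python
import random

class SuperTuxKartEnv:
    """Hypothetical wrapper around the game process.

    A real version would launch SuperTuxKart, read frames/telemetry,
    and inject steering/acceleration commands; here everything is faked
    so the control-loop structure is runnable on its own."""

    def reset(self):
        self.t = 0
        return {"speed": 0.0, "position": 0.0}  # stand-in observation

    def step(self, action):
        # A real implementation would send `action` to the game and
        # read back the new state; here we just fake lap progress.
        self.t += 1
        obs = {"speed": 10.0, "position": float(self.t)}
        reward = obs["speed"]          # e.g. reward forward progress
        done = self.t >= 100           # episode ends after 100 steps
        return obs, reward, done

def run_episode(env, policy):
    obs, total = env.reset(), 0.0
    done = False
    while not done:
        action = policy(obs)
        obs, reward, done = env.step(action)
        total += reward
    return total

# Random policy as a placeholder for a learned model.
ret = run_episode(SuperTuxKartEnv(),
                  lambda obs: random.choice(["left", "right", "accel"]))
```

Once a loop like this works against the real game, the `policy` is the only piece an RL library needs to replace.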

Any and all advice would be greatly appreciated.

Thanks!

submitted by /u/pilpod

[D] Interesting papers for RNN problem / Time-Series prediction with largely _known_ underlying actuators?

A bit of background: For my master's thesis, I will predict the inverse dynamics of a robotic arm by matching the measured position, speed, and acceleration of each joint to the measured torque applied to that joint. For the purpose of this machine learning task, the measured torque can be seen as the ground truth; it is currently determined through feedback controllers that measure how far the actual trajectory deviates from the intended trajectory (though it would be advantageous to know this beforehand, hence the ML application). This basically makes the problem a supervised learning problem.

For a quick visualization, see the following graphs: I have to match the [position, speed, and acceleration] of a 7-joint robot to the [applied torque] (the blue line is the actual measurement of torque applied to the first joint; the others are quick drafts of ML prediction systems). If you would like to read more about this problem, I can recommend this recent paper.
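Framed this way, the task is a multi-output regression from joint states to torques. As a minimal sketch of the shapes involved (with synthetic data standing in for the real measurements, and a linear least-squares fit standing in for whatever model is ultimately used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1000 samples of 7 joint positions, speeds, accelerations.
n, joints = 1000, 7
X = rng.normal(size=(n, 3 * joints))        # [q, q_dot, q_ddot] concatenated

# Fake "true" inverse dynamics: a fixed linear map plus measurement noise.
W_true = rng.normal(size=(3 * joints, joints))
tau = X @ W_true + 0.01 * rng.normal(size=(n, joints))

# Linear least-squares baseline: predict all 7 torques jointly.
W, *_ = np.linalg.lstsq(X, tau, rcond=None)
pred = X @ W
rmse = np.sqrt(np.mean((pred - tau) ** 2))
```

A linear baseline like this is also a useful sanity check before moving to RNNs: any sequence model should at least beat it on the same data.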

Now to my issue: As I understand it, this problem seems like a typical time-series problem well suited for RNNs. However, all obvious underlying actuators (position, speed, acceleration) that influence the predicted torque are known, making it unnecessary for an RNN to detect some underlying time-dependent pattern, and therefore turning my problem into a relatively simple nondiscrete classification problem (i.e. regression, if that is the correct term) – correct? However, the non-obvious and hardly or non-measurable underlying actuators (e.g. friction, inertia, deflection, etc.) that make inverse dynamics prediction a machine learning problem in the first place may (or may not…) be time dependent. Considering this, is the problem still a viable RNN problem as I understand it, even though the underlying actuators are largely known?

My question: Aside from checking my logic (general feedback is also very welcome), I would appreciate any links/keywords to research on this kind of RNN / time-series prediction problem with the twist that the underlying actuators are mostly known. I would also appreciate links/keywords to promising recent research on nondiscrete classification, as I will approach the inverse dynamics problem both as a nondiscrete classification problem and as a time-series prediction problem, and compare the viability of the two over the course of my thesis. I have more or less read most research that specifically addresses the inverse dynamics prediction problem, but I am hoping to find research that treats this problem in a more general way, or that looks at a similar problem in another application domain, so that I might employ it for the inverse dynamics prediction task.

Thank you for taking the time to read my question, I very much appreciate it!

submitted by /u/OnePaulToRuleThemAll

[R] Is this NAS method beating EfficientNet in accuracy vs latency/FLOPs tradeoff? Once for All: Train One Network and Specialize it for Efficient Deployment

this is the paper:

https://openreview.net/forum?id=HylxE1HKwS

They compare with MobileNet V3 and get 76.4% top-1 accuracy with 238 MAdds (the paper says FLOPs, but I think that is wrong), while EfficientNet-B0 ( https://arxiv.org/pdf/1905.11946.pdf ) gets 77.3% for 390 MAdds (again labelled FLOPs, but that's just wrong?).

So B0 gets a 0.9% accuracy advantage at 1.6x the multiply-adds… there is no direct comparison between them, but according to Figure 5 it seems the 76.4% result keeps going up with more MAdds.
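The tradeoff in the numbers quoted above works out as follows:

```python
# Accuracy / compute comparison using the figures quoted above.
ofa_acc, ofa_madds = 76.4, 238   # Once-for-All (millions of multiply-adds)
b0_acc, b0_madds = 77.3, 390     # EfficientNet-B0

acc_gap = round(b0_acc - ofa_acc, 1)            # 0.9 points of top-1
compute_ratio = round(b0_madds / ofa_madds, 2)  # ~1.64x the MAdds
```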

Any thoughts on this? It seems like it should get more attention.

submitted by /u/skariel

[D] How ICLR review scores changed during the discussion period

Hi,

I have gathered some statistics about the publicly available ICLR review data. In particular, I was interested in how the scores were affected by the discussions.

In summary, 11.79% of all reviews changed their score (mostly improvements).
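A statistic like the 11.79% above boils down to comparing each review's initial and final score. A minimal sketch with made-up scores (the real analysis would of course parse the OpenReview data):

```python
# Each tuple is (initial_score, final_score) for one review; toy data.
reviews = [(3, 3), (3, 6), (6, 6), (1, 3), (8, 8), (6, 3), (3, 3), (6, 8)]

changed = [(a, b) for a, b in reviews if a != b]
improved = sum(1 for a, b in changed if b > a)

frac_changed = len(changed) / len(reviews)   # fraction of reviews that moved
frac_improved = improved / len(changed)      # of those, fraction that went up
```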

Surprisingly, reviewers who stated "I do not know much about this area" or "I made a quick assessment of this paper" were less likely to change their score.

Detailed results and code can be found here

submitted by /u/mlechlll

[Research] STFT within neural network pipeline

I have been thinking about this and looking for an answer for a while, but I couldn't really find a satisfactory solution on Google (or maybe I'm not searching for the right thing), hence my post here.

Assume I have a GAN that generates raw audio waveforms. The generator is a convolutional neural network that produces raw audio waveforms, which are passed to a discriminator that evaluates them, and backprop is performed. This is pretty straightforward.

But I found that my discriminator is pretty bad at distinguishing real from fake waveforms, therefore I would find it beneficial if I could convert the generated waveform to a spectrogram with an STFT and discriminate real from fake spectrograms.

I understand how the forward pass is performed, but my problem is with backprop. I understand that we compute an error based on the discriminator predictions and backpropagate it through the discriminator, which is a standard CNN classifier. But now what happens between the discriminator and the generator? Do we perform an ISTFT on the backpropagated error? And how is this done in Keras or PyTorch? Would it be some special kind of intermediary layer? I would like to implement this, but I have no idea where to even start.
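For what it's worth, in PyTorch the STFT is itself a differentiable op, so one answer is to simply insert it between the generator output and the discriminator input and let autograd produce the backward pass; no explicit ISTFT or special layer should be needed. A minimal sketch (with a random tensor standing in for the generator output and a mean standing in for the discriminator):

```python
import torch

# Stand-in for a generated waveform; requires_grad so we can check
# that gradients flow back through the STFT to the "generator".
wave = torch.randn(4, 16384, requires_grad=True)   # (batch, samples)

# Differentiable STFT: complex spectrogram -> magnitude.
spec = torch.stft(wave, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
mag = spec.abs()                                    # (batch, freq, frames)

# Stand-in for a spectrogram discriminator; any differentiable net works.
loss = mag.mean()
loss.backward()

# Gradients arrive at the waveform without any explicit ISTFT:
# autograd differentiates through the STFT automatically.
grad_ok = wave.grad is not None
```

In a real setup `mag` (or its log) would feed the discriminator CNN, and the generator's optimizer step proceeds exactly as in a plain waveform GAN.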

In general, how is a domain conversion handled within a neural network pipeline?

It would be really helpful if you could share your thoughts on this, or point me towards some work that has already been done on this. Thanks in advance and cheers!

submitted by /u/khawarizmy

[D] Node Embedding & GNN for Graphs

I am reading about node/graph embeddings. It seems that neural networks, and Graph Neural Networks (GNNs) in particular, have been applied to a wide range of node-based applications to generate embeddings from graph data. However, when generating node embeddings learned with GNNs, I don't understand how edge information is captured. How do you incorporate edge information (if you have a lot of edge features) when generating graph/node embeddings? Most of the techniques that I came across [1] [2] don't consider edge information.
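One family of answers is message passing in which each message is conditioned on the edge's features, so edge information flows into the updated node embeddings. A minimal numpy sketch of that idea, with a random weight matrix standing in for a learned edge-conditioned message function (a real model would use an MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, d_node, d_edge = 5, 8, 3
h = rng.normal(size=(n_nodes, d_node))          # node features
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]        # directed edge list
e = rng.normal(size=(len(edges), d_edge))       # per-edge feature vectors

# "Learned" message function: maps [h_source, edge_features] -> message.
W = rng.normal(size=(d_node + d_edge, d_node))

msgs = np.zeros_like(h)
for k, (src, dst) in enumerate(edges):
    # The message to dst depends on the sender AND the edge features,
    # so edge information enters the aggregated neighborhood signal.
    msgs[dst] += np.concatenate([h[src], e[k]]) @ W

h_new = np.tanh(h + msgs)                       # simple node update step
```

Stacking a few such rounds, with learned message and update functions, gives node embeddings that reflect both node and edge features.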

Do you have any recommendations for a paper/reference on a method that incorporates rich edge information to generate embeddings?

submitted by /u/__Julia