
Category: Reddit MachineLearning

DCGANS – Adding more convolutions [Project]

I’m training a DCGAN on a dataset of 10k images. I want to experiment with adding and removing convolutions in an attempt to balance out the generator and discriminator. What’s the rule for doing this in practice?

Here is my generator:

self.main = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False),
    nn.BatchNorm2d(512),  # We normalize all the features along the dimension of the batch.
    nn.ReLU(True),        # We apply a ReLU rectification to break the linearity.
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256),
    nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
    nn.Tanh()
)

What input and output channel counts should I use for a 6th layer?
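Not a definitive answer, but a sketch of the channel-chaining rule under the standard 64×64 DCGAN assumptions: each layer's out_channels must equal the next layer's in_channels, and every stride-2 ConvTranspose2d doubles the spatial size. An extra layer that should not change the final resolution can use kernel 3, stride 1, padding 1; the 64→32 channel split below is one arbitrary choice, not a rule.

```python
import torch
import torch.nn as nn

# Sketch: a 6-layer generator that still outputs 3 x 64 x 64.
# Spatial size per layer is noted in the comments.
main = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),  # 4x4 -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 8x8 -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 16x16 -> 32x32
    nn.BatchNorm2d(64), nn.ReLU(True),
    # Extra 6th layer: kernel 3, stride 1, padding 1 keeps 32x32,
    # so the final stride-2 layer still lands exactly on 64x64.
    nn.ConvTranspose2d(64, 32, 3, 1, 1, bias=False),
    nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),     # 32x32 -> 64x64
    nn.Tanh(),
)

z = torch.randn(1, 100, 1, 1)
print(main(z).shape)  # torch.Size([1, 3, 64, 64])
```

The output size of each layer follows out = (in − 1) × stride − 2 × padding + kernel, which is why the stride-1/kernel-3/padding-1 combination is size-preserving.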

submitted by /u/CasualTrip
[link] [comments]

[D] Efficient workflow with colab/jupyter?

I’m struggling to use Google Colab efficiently.

My normal workflow is:

  1. Fiddle around in JupyterLab until I have some result.
  2. Move code into standalone .py libraries, then clean up the JupyterLab notebook to call the library functions.
  3. Create testcases that test the standalone .py libraries, refactor code more.

This tandem between Jupyter and a traditional .py IDE helps me get code that is clean and testable. My notebooks tend to be messy: they aren’t unit-tested and might stop working in the future.

This doesn’t work as well with Google Colab – it’s not that easy to move code into libraries and have it available in Colab. But I would like to use the computing power that comes with it.
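One common workaround (my own sketch, not the OP's setup) is to keep the .py libraries in a Git repository and pull them into the Colab runtime at the start of a session; the repo name and package name below are hypothetical placeholders:

```python
# In a Colab cell you would first clone the repo with a shell command:
#   !git clone https://github.com/youruser/mylibs.git
# (youruser/mylibs is a placeholder for your own repository.)

# Then make the cloned directory importable from the notebook:
import sys
sys.path.append("mylibs")  # path of the cloned repo inside the runtime

# After this, the standalone libraries import as usual, e.g.:
# from mylibs import preprocessing   # hypothetical module
```

Since the Colab VM is ephemeral, the clone/append cell has to be re-run each session, which at least keeps the notebook itself thin.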

What is your workflow with colab?

submitted by /u/zuuuhkrit
[link] [comments]

[N] NeurIPS 2019 videos

Greetings,
Just FYI (because it can also be found on the conference page: https://nips.cc/) and in case you missed it, this year the videos are not on the Facebook page but here:
https://slideslive.com/neurips/
In addition, brief paper overview videos are now linked on the website:
https://nips.cc/Conferences/2019/Videos
The conference started on the 8th of December with the Expo (Industry) Day; the talks started on the 9th, and the tutorials, among other things, are already online.

submitted by /u/blissfox-red
[link] [comments]

[D] Advice on hourly rate as a student freelancer?

Hello! I need some advice about hourly rates for freelance work from machine learners. I’ve looked online a bit and the answers are all over the place or are generally for data scientists with degrees.

Right now I am still studying for my master’s degree (eventually a PhD) and reading papers/practicing ML. Recently one of my projects got really popular on GitHub, and as a consequence I’ve been contacted to help with implementing/designing models.

What should I charge for my skills?

Edit: I was previously asked, for the same reasons, to join a startup for 90k USD after passing many interviews, but that feels out of proportion to me.

submitted by /u/greencent
[link] [comments]

[D] Neural Computers for Text Generation

Has anyone ever applied neural-computer-based models, e.g. the Neural Turing Machine or the Differentiable Neural Computer, to text generation? In a brief search I couldn’t find any such work.

In general these models should do a good job, as they are essentially RNNs with a (theoretically) unlimited memory to read from and write to, and they have indeed been shown to generalize better with respect to sequence length.

One reason such models haven’t been applied to text generation yet could be that they are simply too complex and a burden to work with. It’s not string-theory-level difficulty, of course, but the barriers to understanding and actually working with them are much higher than for the usual CNN or RNN. Combined with the fact that the NLP community is much smaller than the CV or RL communities, there is probably just a low chance that someone with the necessary skills and interests comes across these ideas.

submitted by /u/trashcoder
[link] [comments]

[D] Sonnet implementation of DeepMind’s VQ-VAE 2 image generation?

Paper. I am aware of the vqvae example in the Sonnet repository; however, it only covers reconstruction of images rather than image generation.

After digging a bit I found this PyTorch implementation, which does have the generation part, but I am wondering if there is a Sonnet implementation that I just haven’t come across? Perhaps one by the DeepMind team?

submitted by /u/unrulyspeed
[link] [comments]

[P] Predicting soccer matches outcomes with machine learning as time series

Hello everyone,

I made a project that tries to predict outcomes of Premier League matches.

In short, prediction is modeled as time-series classification in an unconventional way: a neural network model is created for every team, and all models are trained simultaneously on matches in the order they were played.
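The one-model-per-team idea could be sketched roughly like this (my own toy illustration with made-up team names, feature size, and update rule, not the project's actual code):

```python
import torch
import torch.nn as nn

# One small classifier per team; 3 classes: home win / draw / away win.
teams = ["Arsenal", "Chelsea"]
models = {t: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
          for t in teams}
opts = {t: torch.optim.SGD(models[t].parameters(), lr=1e-2) for t in teams}
loss_fn = nn.CrossEntropyLoss()

# Matches are processed in chronological order, as a time series;
# each match updates both participating teams' models.
matches = [("Arsenal", "Chelsea", torch.randn(1, 8), torch.tensor([0]))]
for home, away, features, outcome in matches:
    for team in (home, away):
        opts[team].zero_grad()
        loss = loss_fn(models[team](features), outcome)
        loss.backward()
        opts[team].step()
```

The key property is that each model only ever sees matches involving its team, in playing order, so its state reflects that team's form over time.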

I did not publish the dataset I used, but the full source code is on my GitHub. It is quite restrictive due to the missing dataset, so please take it mostly as inspiration, or have a look at how I implemented various things.

Anyway, if you are interested in a slightly crazy approach, check it out!

Link to blog post: Predicting soccer matches outcomes with machine learning as time series

submitted by /u/xequin0x00
[link] [comments]

[Discussion] What algorithm should I use to remove the noise (in blue left corner)?


The dataset needs to be cleaned of noise collected from a device. The device reports whether the user was complying with the rules (red points are the user complying with the standards).

Visually, the best method from the plot (it’s 2D data – duration and magnitude) would be to draw a region around the values where the user is complying. This picture is just one user; different users have different plots.

I am looking for an algorithm that is not plot dependent and helps in removing this noise signal.
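One plot-independent candidate (a suggestion of mine, not something from the post) is an unsupervised outlier detector such as scikit-learn's IsolationForest, fit per user on the (duration, magnitude) pairs; the synthetic data and the contamination value below are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for one user's 2D (duration, magnitude) data:
# a dense compliant cluster plus a few scattered noise points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)),    # compliant cluster
               rng.uniform(-6, 6, (10, 2))])  # scattered noise

# contamination = expected fraction of noise; tune per application.
clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
mask = clf.predict(X) == 1  # 1 = inlier, -1 = flagged as noise
X_clean = X[mask]
```

Because the forest is refit per user, it adapts to each user's own compliant region without any hand-drawn plot boundary.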

https://preview.redd.it/9ht9ihfdyn341.png?width=432&format=png&auto=webp&s=49521a31e30bfaa9205aebf8c6e6f3d39d6bdd20

submitted by /u/kUl_BuOy
[link] [comments]

[R] Increase model performance by removing certain subsets of data.

In industry and research workflows today, we greedily acquire, label, and train on as much data as possible. While more data usually correlates with better model performance, this is not always the case. New research in data valuation lets us identify the subsets of our data that would train the best model.

In this article we explore cases where less data is better, and how to identify which data is irrelevant to the machine learning task at hand.
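To make the idea concrete, a toy leave-one-out valuation (my own illustration, not the article's method) scores each training point by how validation accuracy changes when it is removed; points with negative value are actively hurting the model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset with a few deliberately mislabeled ("bad") points.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[:5] = 1 - y[:5]  # corrupt a few labels
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def acc(Xs, ys):
    return LogisticRegression().fit(Xs, ys).score(X_val, y_val)

# Value of point i = drop in validation accuracy when i is removed.
base = acc(X_tr, y_tr)
values = [base - acc(np.delete(X_tr, i, 0), np.delete(y_tr, i))
          for i in range(len(X_tr))]
keep = [i for i, v in enumerate(values) if v >= 0]  # drop harmful points
```

Leave-one-out is the crudest valuation scheme and scales poorly; the point is only that "less data" can mean "drop the negatively valued points."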

Would love feedback on the article!

submitted by /u/princealiiiii
[link] [comments]


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.