
Category: Reddit MachineLearning

[P] SparkTorch: Distributed training of PyTorch networks on Apache Spark with ML Pipeline support

SparkTorch is a project that I have wanted to do for a while, and after PyTorch released a variety of great updates to the distributed package, I decided to build a package that could easily orchestrate training on Apache Spark. The goal was to make it easy to integrate the training of PyTorch models into the Spark ML Pipeline. This was done by creating a custom estimator that can be saved and loaded for inference (or even additional training). Right now, there are two modes of training: distributed synchronous and Hogwild!. I will be continuing work on the project and would definitely enjoy collaboration.
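For readers unfamiliar with the second mode: Hogwild! lets multiple workers apply SGD updates to shared parameters without any locking. A toy, stdlib-only sketch of that idea (this is not the SparkTorch or PyTorch API; the one-parameter linear model and the data shards are made up for illustration):

```python
import multiprocessing as mp

def worker(w, shard):
    """Run plain SGD on one data shard, writing straight into shared
    memory with no locks -- the core Hogwild! idea."""
    lr = 0.1
    for x, y in shard:
        grad = 2 * (w[0] * x - y) * x   # d/dw of (w*x - y)^2
        w[0] -= lr * grad               # lock-free update

def run_hogwild():
    # A single shared parameter for the model y = w * x; lock=False means
    # workers may race on writes, which Hogwild! tolerates by design.
    w = mp.Array("d", [0.0], lock=False)
    shards = [[(1.0, 2.0)] * 50, [(2.0, 4.0)] * 50]  # both consistent with w = 2
    procs = [mp.Process(target=worker, args=(w, s)) for s in shards]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return w[0]

if __name__ == "__main__":
    print(run_hogwild())  # converges close to 2.0
```

Despite the unsynchronized writes, both shards pull the parameter toward the same optimum, so the races wash out — that tolerance of sparse, conflicting updates is what makes Hogwild! attractive on Spark executors.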

submitted by /u/lodev12
[link] [comments]

[R][P] StarGAN v2: Diverse Image Synthesis for Multiple Domains

[Figure: Diverse image synthesis results on the CelebA-HQ dataset and the newly collected animal faces (AFHQ) dataset. The first column shows input images; the remaining columns are images synthesized by StarGAN v2.]

[Figure: StarGAN v2 can transform a source image into an output image reflecting the style (e.g., hairstyle and makeup) of a given reference image. Additional high-quality videos can be found at the link below.]

arXiv: https://arxiv.org/abs/1912.01865

github: https://github.com/clovaai/stargan-v2

video: shorturl.at/eACS9

Abstract

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain variations. The code, pretrained models, and dataset will be released for reproducibility.

submitted by /u/yunjey

[D] Preferred Networks (creators of Chainer) migrating its research platform from Chainer to PyTorch

Press Release: https://preferred.jp/en/news/pr20191205/

Preferred Networks Migrates its Deep Learning Research Platform to PyTorch

PFN to work with PyTorch and the open-source community to develop the framework and advance MN-Core processor support.

Preferred Networks, Inc. (PFN, Head Office: Tokyo, President & CEO: Toru Nishikawa) today announced plans to incrementally transition its deep learning framework (a fundamental technology in research and development) from PFN’s Chainer™ to PyTorch. Concurrently, PFN will collaborate with Facebook and the other contributors of the PyTorch community to actively participate in the development of PyTorch. With the latest major upgrade v7 released today, Chainer will move into a maintenance phase. PFN will provide documentation and a library to facilitate the migration to PyTorch for Chainer users.

PFN President and CEO Toru Nishikawa made the following comments on this business decision.

“Since the start of deep learning frameworks, Chainer has been PFN’s fundamental technology to support our joint research with Toyota, FANUC, and many other partners. Chainer provided PFN with opportunities to collaborate with major global companies, such as NVIDIA and Microsoft. Migrating to PyTorch from Chainer, which was developed with tremendous support from our partners, the community, and users, is an important decision for PFN. However, we firmly believe that by participating in the development of one of the most actively developed frameworks, PFN can further accelerate the implementation of deep learning technologies, while leveraging the technologies developed in Chainer and searching for new areas that can become a source of competitive advantage.”

Rest of article…

submitted by /u/hardmaru

[D] Popular practical AI course changes to TensorFlow 2.0 + Keras

Saw this post trending on twitter and thought it might be a good resource for beginners / intermediates.

tweet: https://twitter.com/GokuMohandas/status/1202411040295645184

course: https://practicalai.me

The tweet has details on why the author changed from PyTorch to TensorFlow, and it's one of the few times I've seen concrete reasons for this specific use case. I'm an avid PyTorch dev (for research and production), but I'm willing to give TF a shot…but GOD I remember hating GradientTape.
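For context, TF's `tf.GradientTape` records the forward operations so gradients can be computed by replaying the tape in reverse. A toy, pure-Python sketch of that recording idea (not TensorFlow's actual API; the `Var` class and its ops are invented purely for illustration):

```python
class Var:
    """Tiny scalar autodiff variable that records ops on a shared tape."""
    def __init__(self, value, tape):
        self.value, self.grad, self.tape = value, 0.0, tape

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        def backward():  # push out.grad back through the multiply
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
        self.tape.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        def backward():  # addition passes the gradient through unchanged
            self.grad += out.grad
            other.grad += out.grad
        self.tape.append(backward)
        return out

def demo():
    tape = []
    x, y = Var(3.0, tape), Var(2.0, tape)
    z = x * y + x              # z = 3*2 + 3 = 9
    z.grad = 1.0               # seed dz/dz = 1
    for op in reversed(tape):  # replay the tape backwards
        op()
    return x.grad, y.grad      # dz/dx = y + 1 = 3, dz/dy = x = 3

if __name__ == "__main__":
    print(demo())  # (3.0, 3.0)
```

The explicit tape is the part people tend to either love (fine-grained control over what gets differentiated) or hate (boilerplate compared to PyTorch's implicit graph), which seems to be exactly the trade-off the thread is about.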

Has anyone tried TF2.0 yet and liked it?

submitted by /u/mlcoursesreview

[D] start a Machine Learning Theory subreddit?

In reading the r/machinelearning posts I often feel that there are two communities here:

  1. People who are interested in getting the last 2% of performance on their big data sets, using methods they can use *today*,
  2. Researchers who are interested in discussing new ideas and algorithms, which may only be demonstrated on MNIST or may not be practical for years to come.

I waste time skipping past comments from the other community (the one I’m not interested in), and sometimes find that I cannot tell even from the title of a thread which community it appeals to. Open the thread, scan, close, a minute wasted.

I suspect that explicitly recognizing these two communities would not only save everyone a bit of time but might also encourage some deeper conversations (at least on the research side – discussions won’t be buried under all the “doesn’t work as well as existing methods on my real-world big dataset” comments).

There are already several ML related subreddits (LearnML, Deeplearning, ?). But this one appears to be the only one that attracts real researchers (I know that some of you are here), and if so is the only one that has this particular dichotomy.

What do you think?

submitted by /u/pointy-eigenvector