
Author: torontoai

[D] Which Machine Learning algorithm is better to use for Google Ads data?

Hi, I’m trying to adjust the bids (Max. CPC) of products advertised on Google Ads based on their past performance, such as Clicks, Impressions, Conversions, Product Price, etc. Google provides that data in CSV format. I want to use it to bid intelligently instead of relying on the Smart Bidding feature Google itself provides. Is there a good machine learning algorithm for such a case? Or is there no need for any ML algorithm here? Or is there a better idea that could be applied to this use case? Thanks in advance.
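As a rough illustration (not a recommendation over Smart Bidding): one simple baseline is to regress the historical conversion rate on features like clicks, impressions, and price, then set Max. CPC to the predicted conversion rate times a target cost-per-conversion. The column names and the synthetic data below are assumptions, not the actual Ads export schema.

```python
import numpy as np

# Synthetic stand-in for the Google Ads CSV export (column names assumed).
rng = np.random.default_rng(0)
n = 200
clicks = rng.integers(1, 500, n).astype(float)
impressions = clicks * rng.integers(5, 50, n)
price = rng.uniform(10, 200, n)
true_rate = np.clip(0.05 + 1e-4 * clicks - 2e-4 * price, 0.0, 1.0)
conversions = rng.binomial(clicks.astype(int), true_rate)

# Fit conversions-per-click with ordinary least squares (bias column first).
X = np.column_stack([np.ones(n), clicks, impressions, price])
y = conversions / clicks
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def suggest_bid(clicks, impressions, price, target_cpa):
    """Max. CPC = predicted conversion rate * target cost-per-conversion."""
    rate = float(np.array([1.0, clicks, impressions, price]) @ w)
    return round(max(rate, 0.0) * target_cpa, 2)
```

A real pipeline would read the CSV with pandas, validate on held-out weeks, and cap bid changes per update; a plain linear fit is only a starting point before trying tree ensembles or bandit-style approaches.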

submitted by /u/Vipool

[P] Implementation of AAGAN image dehazing network

Hi all, I am currently trying to implement AAGAN paper

I found some inconsistency in the paper.

In the Generator we have 5 encoding blocks. As far as I understand, they change the number of channels like this: 32 → 64, 64 → 128, 128 → 256, 256 → 512, 512 → 1024.

But the residual layers that follow the encoder blocks are listed with c=512. What should I do with that? I made the residual layers with c=1024, but I don’t know if that is correct, because my network does not seem to work properly (it gets stuck at some point and the G loss decreases VERY slowly, while D converges quickly). The dehazed images look better than the hazed ones, but they are noticeably darker than the originals and the haze is not removed completely.
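A quick sketch (not the official AAGAN code; the block structure and normalization choices here are guesses) of the two ways to reconcile the mismatch: build the residual stack at the encoder’s true output width (c=1024, as you did), or insert a 1×1 convolution that reduces 1024 back to 512 so the paper’s c=512 figure is taken literally.

```python
import torch
import torch.nn as nn

def enc_block(c_in, c_out):
    # Stride-2 downsampling block; norm/activation choices are assumptions.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1),
            nn.InstanceNorm2d(c),
            nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1),
            nn.InstanceNorm2d(c),
        )

    def forward(self, x):
        return x + self.body(x)

widths = [32, 64, 128, 256, 512, 1024]
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),  # stem: RGB -> 32 channels
    *[enc_block(a, b) for a, b in zip(widths[:-1], widths[1:])],
)

# Option A: residual stack at the encoder's actual output width (c=1024).
res_a = nn.Sequential(*[ResBlock(1024) for _ in range(3)])
# Option B: 1x1 conv down to 512, then a residual stack at the paper's c=512.
to_512 = nn.Conv2d(1024, 512, 1)
res_b = nn.Sequential(*[ResBlock(512) for _ in range(3)])

h = encoder(torch.randn(1, 3, 64, 64))  # -> (1, 1024, 2, 2)
```

Either option keeps the channel counts consistent end to end; what the decoder expects as its input width then decides which one matches the rest of the paper.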

I will share my code later today after fixing some issues.

submitted by /u/denix56

[D] Is it allowed to republish your ‘workshop’ paper at another conference?

Hi, I’m considering submitting a paper to the upcoming ICML 2020 with my theoretical work and preliminary experiments (e.g., MNIST-scale things). At the moment my results are quite close to, or outperform, some other SOTA models, but I don’t think there is time to finish the other two or three experiments needed for a main-conference paper. If I were accepted at a workshop track of a conference, is it allowed to republish the same paper later with supplementary experiments and some additional theoretical claims? If so, are there any possible penalties for future submissions (e.g., lack of novelty compared to my previous workshop paper)? Thank you in advance, and happy new year!

submitted by /u/pky3436

[D] Lex Fridman’s AI podcast, is it really contributing to the AI field?

Hello,

I have been following Lex’s podcast recently; however, I have noticed that some researchers, like Anima, have called him a clown and a mouthpiece of Elon Musk. Also, he has blocked many people on Twitter, which is maybe questionable? I am still a newbie in the field, so I would like to ask the community here how they see the podcast.

Thanks,

submitted by /u/meldiwin

[D] Why aren’t there more research papers on active learning for deep computer vision problems?

So perhaps this is a misguided question, but to my (limited) understanding, the biggest hurdle in applying deep computer vision models from research (e.g., classification or object detection) to a new problem is data collection. If you have access to a video stream, the problem is: which images are the best to annotate? To me this sounds like it falls under the Active Learning umbrella; however, I’ve seen only a very limited set of papers [1, 2, 3, 4] applied to this. The experiments in these papers also aren’t that great, because they don’t reflect reality (images in a video are not i.i.d.).

Am I missing something? Is there perhaps a better way of selecting which images to annotate that’s not related to active learning?

[1] Deep Active Learning for Object Detection

[2] Cost-Effective Active Learning for Deep Image Classification

[3] The power of ensembles for active learning in image classification

[4] Reducing Class Imbalance during Active Learning for Named Entity Annotation
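For what it’s worth, the basic pool-based recipe these papers build on can be sketched in a few lines: score each unlabeled frame by predictive entropy and send the top-k for annotation. This is a generic sketch, not taken from [1–4]; the dummy softmax outputs below stand in for a real model’s predictions.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    # Predictive entropy per frame; higher means more uncertain.
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def select_frames(probs, k):
    """Indices of the k frames the model is least sure about."""
    return np.argsort(entropy(probs))[::-1][:k]

# Dummy softmax outputs for 100 frames over 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
to_label = select_frames(probs, k=5)
```

On real video this naive version tends to pick near-duplicate consecutive frames (exactly the non-i.i.d. problem mentioned above), which is why the cited papers add diversity or batch-aware terms on top of plain uncertainty.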

submitted by /u/CartPole