
Category: Reddit MachineLearning

[Discussion] GANs For Test Case Generation For Source Code

For the past few months, I have been really immersed in software security and found source code analysis to be a really interesting domain in which to pursue further research. I have the following research idea and need validation from fellow DL-enthusiast Redditors on whether the idea is even feasible.

Idea :

Create a GAN that can help generate test cases that break/crash/exploit the given source code. The idea revolves around the fact that the generator will generate the test cases while the discriminator evaluates each test case and scores the generator on the basis of the number of branches of computation that the generated input visits, i.e. scores the output of the generator by the code coverage it attains.
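To make the reward signal concrete, here is a minimal sketch of the coverage-as-score idea. Everything here is illustrative: `target` is a toy branching function standing in for instrumented source code, and a real setup would collect branch coverage from a fuzzing harness or coverage tool rather than returning it directly.

```python
def target(x):
    """Toy program under test: records which branches an input visits.
    Stands in for real instrumented source code."""
    branches = set()
    if x > 0:
        branches.add("pos")
        if x % 2 == 0:
            branches.add("even")
    else:
        branches.add("nonpos")
    return branches

def coverage_reward(inputs):
    """Discriminator-style score: fraction of the program's 3 known
    branches covered by a batch of generated test cases."""
    covered = set()
    for x in inputs:
        covered |= target(x)
    return len(covered) / 3
```

A batch that exercises all three branches (e.g. a positive odd, a positive even, and a non-positive input) would receive the maximum score, which is the signal the generator would be trained to maximize.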

While thinking about this idea, I also realized that string input generation can become a problem, since the space of inputs to evaluate can explode. Currently, I don't know how I can circumvent such a problem.
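One standard mitigation for the string-explosion problem (not from the post; this is the grammar-based generation idea from the fuzzing literature) is to sample strings from a grammar instead of from raw characters, so the generator only explores structurally valid inputs. A toy sketch, with an illustrative arithmetic-expression grammar:

```python
import random

def gen(symbol, grammar, rng, depth=0, max_depth=6):
    """Expand a grammar symbol into a string. Terminals (symbols not
    in the grammar) are emitted as-is; past max_depth we always take
    the first production, assumed to be the shortest, to terminate."""
    if symbol not in grammar:
        return symbol
    if depth >= max_depth:
        production = grammar[symbol][0]
    else:
        production = rng.choice(grammar[symbol])
    return "".join(gen(s, grammar, rng, depth + 1, max_depth)
                   for s in production)

# Toy grammar: expressions like "1", "2+1", "1+2+2", ...
GRAMMAR = {
    "<expr>": [["<num>"], ["<num>", "+", "<expr>"]],
    "<num>": [["1"], ["2"]],
}

rng = random.Random(0)
sample = gen("<expr>", GRAMMAR, rng)
```

Every sampled string is then guaranteed to parse, so the search space collapses from all byte strings to the (much smaller) language of the grammar.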

I wanted to validate this research idea before I dive further into how it can be done.

submitted by /u/thunder_jaxx

[P] Access to Cloud GPU

Hi everyone!

I’ve got some ML projects planned for the winter break, and I wonder if any of you know of websites that offer GPU access for the compute part.

I saw some of them years ago and was wondering if they still exist. They offered free access for a limited time.

Thank you and merry Xmas !

submitted by /u/Krokodeale

[D] Separate app server and deep learning server?

I am in charge of integrations and deployments for a small analytics firm. We have many apps that require various machine learning models (mostly deep learning) for image-based classification, context detection, and sequence predictions.

I want there to be a separate app server for each app, and a single heavy-duty deep learning server that hosts and runs all these models. We are currently using ONNX models to have a uniform runtime for all the models because our various DS teams prefer different frameworks.

I wanted to know the best practices in such a scenario. Would it be better to have the inference module in each app server, or on a separate central server specialized for ML? In the latter case, would it be better to preprocess the data on the app server, or to pass raw data to the ML server and perform preprocessing there before inference?
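One common answer to the second question is to preprocess on the app server so only model-ready tensors cross the network and the ML server stays stateless. A minimal sketch of that split; the function names, normalization constants, and JSON payload shape are all illustrative assumptions, not a prescribed API (a real system might use gRPC or an HTTP POST to an ONNX Runtime service):

```python
import json

def preprocess(pixels, mean=0.5, std=0.25):
    """App-server side: turn raw 0-255 pixel values into the
    normalized floats the model expects, so the central ML server
    only has to run inference."""
    return [((p / 255.0) - mean) / std for p in pixels]

def build_request(model_name, tensor):
    """Serialize a payload for a hypothetical central inference
    endpoint that routes by model name."""
    return json.dumps({"model": model_name, "inputs": tensor})

payload = build_request("image_classifier", preprocess([0, 255]))
```

The trade-off: preprocessing on the app server duplicates that logic across apps but keeps raw (possibly large) data off the wire; centralizing it keeps one source of truth but couples the ML server to every app's input format.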

Thanks. 🙂

submitted by /u/mischief_23

[D] 16x Tesla V100 Server, Benchmarks + Architecture

https://lambdalabs.com/blog/announcing-hyperplane-16/

Tesla V100s have I/O pins for at most 6x 25 GB/s NVLink traces. So, systems with more than 6x GPUs cannot fully connect GPUs over NVLink. This causes I/O bottlenecks that significantly diminish returns of scaling beyond six GPUs.
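To see why link bandwidth caps scaling, here is a back-of-the-envelope model (my own sketch, not from the article) using the standard ideal ring all-reduce cost, where each GPU moves 2·(N−1)/N of the gradient payload over its slowest inter-GPU link:

```python
def ring_allreduce_time(payload_gb, n_gpus, link_gbps):
    """Ideal ring all-reduce transfer time in seconds: each GPU
    sends and receives 2*(N-1)/N of the payload over one link,
    so the slowest link bounds the whole reduction."""
    return 2 * (n_gpus - 1) / n_gpus * payload_gb / link_gbps

# With a fixed 1 GB gradient payload over a 25 GB/s link, the
# communication time grows toward a 2x asymptote as GPUs are added,
# while per-GPU compute shrinks -- so slow links dominate at scale.
t2 = ring_allreduce_time(1.0, 2, 25.0)    # 2 GPUs
t16 = ring_allreduce_time(1.0, 16, 25.0)  # 16 GPUs
```

The 25 GB/s figure matches the per-link NVLink bandwidth quoted above; the point is that once GPUs must hop over slower links (e.g. PCIe) instead, `link_gbps` drops and this term starts dominating the step time, which is the bottleneck the Hyperplane architecture targets.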

This article provides an overview of their architecture, which bypasses this limitation using additional high-bandwidth links. Looking at the benchmarks, multi-GPU performance scales almost perfectly linearly from 1x to 16x GPUs.

I’m one of the engineers who worked on this project. Happy to answer any questions!

submitted by /u/mippie_moe

[R] David Duvenaud: Bullshit I and others have said about Neural ODE’s

https://youtu.be/YZ-_E7A3V2w

Excellent talk from the NeurIPS Retrospectives workshop – one of the most interesting ones I’ve seen.

One question I had related to the last question: When I originally heard about the adjoint sensitivity method used for the backwards pass of Neural ODEs, I was curious about the fact that the backwards pass is essentially untethered from the forwards pass. Would it make sense to write an ODE solver specifically for the backwards pass that is able to make use of the forwards-pass checkpoints? For example, you could try to enforce that your backwards pass doesn’t stray too far from your forwards pass. I don’t know enough about ODE solvers to know if this makes sense.
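The "untethered backwards pass" concern can be demonstrated with a toy experiment (my own sketch, not from the talk): solve a simple ODE forward with Euler steps while storing checkpoints, then re-integrate backwards from the final state and measure how far the reconstructed trajectory drifts from the stored one. The drift is exactly what checkpoint-aware backwards solvers would try to bound.

```python
def euler_forward(f, y0, t0, t1, steps):
    """Forward Euler solve of y' = f(t, y), storing every state
    as a checkpoint."""
    h = (t1 - t0) / steps
    ys, y, t = [y0], y0, t0
    for _ in range(steps):
        y = y + h * f(t, y)
        t += h
        ys.append(y)
    return ys

def backward_drift(f, ys, t0, t1, steps):
    """Re-integrate backwards from the final state (as the adjoint
    method does, rather than replaying stored states) and report the
    worst deviation from the forward checkpoints."""
    h = (t1 - t0) / steps
    y, t, worst = ys[-1], t1, 0.0
    for i in range(steps, 0, -1):
        y = y - h * f(t, y)  # reversed Euler step
        t -= h
        worst = max(worst, abs(y - ys[i - 1]))
    return worst

f = lambda t, y: -y  # simple exponential-decay ODE
checkpoints = euler_forward(f, 1.0, 0.0, 1.0, 10)
drift = backward_drift(f, checkpoints, 0.0, 1.0, 10)
```

Even on this well-behaved ODE the reversed integration does not land back on the forward trajectory, because a forward Euler step is not exactly invertible by a backward step; on stiff or chaotic dynamics the drift compounds much faster, which is the failure mode discussed in the talk.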

submitted by /u/programmerChilli

[P] pytorch-fuzzdom: Write browser tests without DOM specifics

The goal of the project is to let developers write automated browser (i.e. Selenium) tests that have no specific DOM knowledge of the target application. The idea is to reduce the need to rewrite these tests when the underlying DOM implementation changes. To accomplish this, an RL agent is trained on the MiniWoB++ dataset to map user-specified actions to a series of DOM events. The agent is exposed through a familiar `ActionChains` interface that queues up actions to be performed in a browser.

One notable difference from prior approaches is the utilization of the DOM as graph data. This uses pytorch-geometric to represent both the state space and the available action space. This allows the agent to work with a variable number of nodes and actions.
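For intuition, representing the DOM as graph data means flattening the element tree into a node list plus parent→child edge pairs, the COO layout that pytorch-geometric's `edge_index` expects. The sketch below is my own illustration, not code from the repo; the real project builds torch tensors and much richer per-node features than a bare tag name.

```python
def dom_to_graph(node, nodes=None, edges=None, parent=None):
    """Flatten a nested DOM-like dict {'tag': ..., 'children': [...]}
    into (node_features, edge_pairs). The edge pairs, transposed,
    give the [2, num_edges] edge_index used by pytorch-geometric."""
    if nodes is None:
        nodes, edges = [], []
    idx = len(nodes)
    nodes.append(node["tag"])
    if parent is not None:
        edges.append((parent, idx))
    for child in node.get("children", []):
        dom_to_graph(child, nodes, edges, idx)
    return nodes, edges

tree = {"tag": "body", "children": [
    {"tag": "form", "children": [{"tag": "input"}, {"tag": "button"}]},
]}
nodes, edges = dom_to_graph(tree)
```

Because the node and edge lists grow with the page, the same agent can consume pages of any size, which is what makes the variable-sized state and action spaces possible.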

Github: https://github.com/zbyte64/pytorch-fuzzdom
Current status: after a day of training, the model should converge on ~10 of the 16 tasks. I am about to have less free time to work on this, so I am making it public now.

submitted by /u/zbyte64