
Category: Reddit MachineLearning

[P] Tensorflow deployment

Hi, I've gathered different methods for deploying a Tensorflow model to suit different processing needs: https://github.com/huseinzol05/Gather-Deployment

1 . Object Detection. Flask SocketIO + WebRTC

  • Stream from webcam using WebRTC -> Flask SocketIO to detect objects -> WebRTC -> Website.

2 . Object Detection. Flask SocketIO + OpenCV

  • Stream from OpenCV -> Flask SocketIO to detect objects -> OpenCV.

3 . Speech streaming. Flask SocketIO

  • Stream speech from microphone -> Flask SocketIO to do realtime speech recognition.

4 . Text classification. Flask + Gunicorn

  • Serve Tensorflow text model using Flask multiworker + Gunicorn.
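A minimal sketch of the Flask serving pattern in item 4 (the `classify` function here is a dummy stand-in for the Tensorflow text model, and the route name is an assumption, not code from the repo):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify(text):
    # Placeholder for the Tensorflow text model's prediction.
    return "positive" if "good" in text.lower() else "negative"

@app.route("/classify", methods=["POST"])
def classify_route():
    text = request.get_json(force=True)["text"]
    return jsonify({"label": classify(text)})

if __name__ == "__main__":
    # In production this is launched by Gunicorn with multiple workers,
    # e.g.:  gunicorn -w 4 -b 0.0.0.0:8000 app:app
    app.run(port=5000)
```

Gunicorn's `-w` flag forks several worker processes, each holding its own copy of the model, so requests are handled in parallel.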

5 . Image classification. TF Serving

  • Serve image classification model using TF Serving.
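For context, TF Serving exposes a REST predict endpoint at `/v1/models/<name>:predict` that expects a JSON body with an `instances` list. A hedged sketch of building and sending such a request (the model name and host are placeholders, not taken from the repo):

```python
import json

def build_predict_request(image_pixels):
    """Build the JSON body for a TF Serving REST predict call."""
    return json.dumps({"instances": [image_pixels]})

# Sending it requires a running TF Serving container, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8501/v1/models/my_model:predict",
#     data=build_predict_request(pixels).encode(),
#     headers={"Content-Type": "application/json"},
# )
# scores = json.loads(urllib.request.urlopen(req).read())["predictions"]
```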

6 . Image Classification using Inception. Flask SocketIO

  • Stream image using SocketIO -> Flask SocketIO to classify.

7 . Object Detection. Flask + OpenCV

  • Webcam -> OpenCV -> Flask -> web dashboard.

8 . Face-detection using MTCNN. Flask SocketIO + OpenCV

  • Stream from OpenCV -> Flask SocketIO to detect faces -> OpenCV.

9 . Face-detection using MTCNN. OpenCV

  • Webcam -> OpenCV.

10 . Image classification using Inception. Flask + Docker

  • Serve Tensorflow image model using Flask multiworker + Gunicorn on Docker container.
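A hypothetical Dockerfile for the pattern in item 10 (file names such as app.py and requirements.txt are assumptions, not taken from the repo):

```dockerfile
# Sketch: containerize the Flask + Gunicorn image model.
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
```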

11 . Image classification using Inception. Flask + EC2 Docker Swarm + Nginx load balancer

  • Serve inception on multiple AWS EC2, scale using Docker Swarm, balancing using Nginx.
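The Nginx side of item 11 amounts to an upstream block that round-robins across the Swarm nodes. A sketch with placeholder hostnames (not the repo's actual config):

```nginx
upstream inception_workers {
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://inception_workers;
    }
}
```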

12 . Text classification. Hadoop streaming MapReduce

  • Batch processing to classify texts using Tensorflow text model on Hadoop MapReduce.
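Hadoop streaming pipes input lines to the mapper on stdin and sorted "key\tvalue" lines to the reducer. A minimal sketch of that protocol (the `classify` function is a dummy stand-in for the Tensorflow text model; the script layout is an assumption, not code from the repo):

```python
import sys

def classify(text):
    # Placeholder for the Tensorflow text model's prediction.
    return "positive" if "good" in text.lower() else "negative"

def mapper(lines):
    """Emit 'label\t1' for each input text line."""
    for line in lines:
        yield f"{classify(line.strip())}\t1"

def reducer(lines):
    """Sum counts per label (Hadoop delivers input sorted by key)."""
    counts = {}
    for line in lines:
        label, n = line.split("\t")
        counts[label] = counts.get(label, 0) + int(n)
    for label, n in sorted(counts.items()):
        yield f"{label}\t{n}"

if __name__ == "__main__":
    # Invoked by Hadoop streaming, e.g.:
    #   hadoop jar hadoop-streaming.jar \
    #     -mapper "python job.py map" -reducer "python job.py reduce" ...
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    step = mapper if stage == "map" else reducer
    for out in step(sys.stdin):
        print(out)
```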

13 . Text classification. Kafka

  • Stream text to Kafka producer and classify using Kafka consumer.

14 . Text classification. Distributed TF using Flask + Gunicorn + Eventlet

  • Serve a text model on multiple machines using Distributed TF + Flask + Gunicorn + Eventlet. That is, Distributed TF splits a single neural network model across multiple machines for the feed-forward pass.

15 . Text classification. Tornado + Gunicorn

  • Serve Tensorflow text model using Tornado + Gunicorn.

16 . Text classification. Flask + Celery + Hadoop

  • Submit large texts using Flask, queue a Celery job that processes them with Hadoop (delayed Hadoop MapReduce).

17 . Text classification. Luigi scheduler + Hadoop

  • Submit large texts to the Luigi scheduler, run Hadoop inside Luigi, event-based Hadoop MapReduce.

18 . Text classification. Luigi scheduler + Distributed Celery

  • Submit large texts to the Luigi scheduler, run Hadoop inside Luigi, delayed processing.

19 . Text classification. Airflow scheduler + elasticsearch + Flask

  • Schedule-based processing using Airflow, store results in Elasticsearch, serve them using Flask.

20 . Text classification. Apache Kafka + Apache Storm

  • Stream from Twitter -> Kafka Producer -> Apache Storm, for distributed minibatch realtime processing.

21 . Text classification. Dask

  • Batch processing to classify texts using Tensorflow text model on Dask.

22 . Text classification. Pyspark

  • Batch processing to classify texts using Tensorflow text model on Pyspark.

23 . Text classification. Pyspark streaming + Kafka

  • Stream texts to Kafka Producer -> Pyspark Streaming, to do minibatch realtime processing.

24 . Text classification. PyFlink

  • Batch processing to classify texts using Tensorflow text model on Flink batch processing.

25 . Text classification. Streamz + Dask + Kafka

  • Stream texts to Kafka Producer -> Streamz -> Dask, to do minibatch realtime processing.

Discussion

  1. I am waiting for the official binary release of Flink 1.10, which supports custom Python functions; the current stable release, 1.9, does not.
  2. Just realized I missed FastAPI; I will add it later.
  3. Kubeflow requires at least minikube; maybe I will add it later.
  4. I will add imagezmq -> Nginx load balancer -> processing multiple video sources on multiple machines, displayed on a single machine.

Feel free to comment!

submitted by /u/huseinzol05

[D] Techniques to encourage feature diversity?

In my (convolutional) neural network, there are some activation maps that behave/look almost exactly like each other (i.e. high activations in response to the same visual pattern), which is undesirable behavior for my current project. What are some techniques one can use to encourage the diversity of features extracted by a neural network?
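One common technique is a decorrelation penalty added to the training loss, which discourages pairs of activation maps from being linearly correlated. A minimal numpy sketch (my own illustration, not from the post; the array shapes and names are assumptions):

```python
import numpy as np

def decorrelation_penalty(maps):
    """maps: (num_maps, H, W) activation maps for one input.

    Returns the sum of squared off-diagonal entries of the correlation
    matrix between flattened maps; identical maps are penalized, while
    uncorrelated maps contribute little.
    """
    flat = maps.reshape(maps.shape[0], -1)          # (num_maps, H*W)
    flat = flat - flat.mean(axis=1, keepdims=True)  # center each map
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    unit = flat / norms
    corr = unit @ unit.T                            # correlation matrix
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).sum())
```

In training, this scalar (computed on the framework's tensors instead of numpy, so gradients flow) would be added to the task loss with a small weight.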

submitted by /u/seann999

[P] MusicGAN

Hey guys!

I just submitted MusicGAN to the #TFWorld hackathon and thought I’d post my results! A link to a video demo can be found in the repo. MusicGAN is just the catchy name I gave the project. I essentially implemented the architecture described in WaveGAN, with WGAN-GP, using Tensorflow 2. I had actually tried to create an LSTM with miniature WaveGANs to generate audio of any duration; however, I was unable to get it working well enough before the submission deadline. I am currently looking at creating sounds with different instruments, and will then return to my attempt at the recurrent version.
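For background, the WGAN-GP critic objective used by WaveGAN (summarized from the papers, not code from the submission) is:

```latex
L_D = \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})]
    - \mathbb{E}_{x \sim P_r}[D(x)]
    + \lambda \, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\!\left[
        \left( \lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1 \right)^2
      \right]
```

where P_r is the real-audio distribution, P_g the generator's, and each \hat{x} is sampled uniformly along a line between a real and a generated sample; the λ-weighted gradient penalty (typically λ = 10) softly enforces the critic's 1-Lipschitz constraint.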

Harry

submitted by /u/HStuart18

[D] Why do disentangling methods not result in independent dimensions of the learned representation?

By disentangling methods I mean methods under the VAE framework, such as FactorVAE and β-TCVAE, which explicitly regularize the total correlation of the aggregate posterior q(z) ≈ (1/N) Σ_n q(z | x_n).

Locatello et al., in their large-scale study of disentanglement methods (1), show empirical evidence that the dimensions of the mean representation of q(z|x) (usually used as the representation) are correlated. But it seems that the dimensions of the mean representation are independent by definition if we use a factorized distribution for the posterior, such as a diagonal-covariance Gaussian. Also, averaging this representation over the data distribution should be factorial too if we assume the aggregate posterior q(z) is factorial (proof in 2), so I think the claim in 1 is wrong.

submitted by /u/Tonic_Section

[D] A bird’s-eye view of modern AI from NeurIPS 2019

Hi folks, I had a chance to attend NeurIPS this year and wrote a blog post outlining my impressions. Sharing it here in the hope that it's useful and sparks a conversation!

https://alexkolchinski.com/2019/12/30/neurips-2019/

Comments on what you agree/disagree with, other things you noticed, links to different perspectives, etc. would be much appreciated.

submitted by /u/kolchinski