Category: Reddit MachineLearning

[D] Observations from OpenAI’s Five (Dota 2)

I’ve been working on writing an article about AlphaStar for a while (cough, very late), but after last week’s events I decided to sit down and write about OpenAI Five’s success instead.

There are a few areas I wish I had more knowledge to expand on:

  • I wish I knew more about OpenAI’s Rapid training infrastructure.
  • Pros and cons of PPO for Dota 2. I’d also like to know what didn’t work.
  • Decision tree of StarCraft 2 vs. Dota 2. Relative to each game, OpenAI has solved more of Dota 2’s action space. However, StarCraft 2 seems to have a larger decision tree?
  • OpenAI mentioned “surgery” in the context of transfer learning, but there isn’t much information out there about it.
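On the PPO bullet: the part of PPO that is easy to pin down concretely is its clipped surrogate objective. As a minimal numpy sketch (the epsilon value and the toy numbers below are illustrative, not OpenAI Five’s settings):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from the PPO paper:
    L = mean(min(r * A, clip(r, 1 - eps, 1 + eps) * A))."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

# When the new policy overshoots (ratio 1.5) on a positive advantage,
# the clip caps the incentive at 1.2 * A:
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # 1.2
```

The clipping is what lets PPO take many gradient steps on the same batch without the policy drifting too far, which matters at Five’s scale of data throughput.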

Very open to feedback and suggestions, thanks!

submitted by /u/jshek
[link] [comments]

[D] GAN Immediate Mode Collapse

I’m not even sure if mode collapse is the correct term; neither the generator nor the discriminator is learning anything when I pass both true and false samples to the discriminator. If instead I show the discriminator only true or only false samples, the loss drops. I’ve seen mode collapse after a few epochs of training other GANs, but never complete stagnation right out of the gate. What might be going wrong here?

import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import LSTM, Dense, Conv1D, LeakyReLU, BatchNormalization
from keras.optimizers import SGD

def generator():
    neurons = 121
    model = Sequential()
    # Input shape: [batch_size, timesteps, input_dim]
    model.add(LSTM(neurons, activation='tanh', recurrent_activation='hard_sigmoid',
                   kernel_initializer='RandomUniform', return_sequences=True))
    model.add(LSTM(neurons, activation='tanh', recurrent_activation='hard_sigmoid',
                   kernel_initializer='RandomUniform', return_sequences=True))
    model.add(Dense(1, activation=None))
    return model

def discriminator():
    model = Sequential()
    # Input shape: [batch_size, steps, channels]
    model.add(Conv1D(32, 4, strides=2, activation=None, padding='same', input_shape=(None, 1)))
    model.add(LeakyReLU())
    model.add(Conv1D(64, 4, strides=2, activation=None, padding='same'))
    model.add(LeakyReLU())
    model.add(BatchNormalization())
    model.add(Conv1D(128, 4, strides=2, activation=None, padding='same'))
    model.add(LeakyReLU())
    model.add(BatchNormalization())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    return model

def generator_containing_discriminator(g, d):
    model = Sequential()
    model.add(g)
    d.trainable = False
    model.add(d)
    return model

def g_loss_function(y_true, y_pred):
    l_bce = keras.losses.binary_crossentropy(y_true, y_pred)
    # Note: this goes NaN whenever |y_pred| > |y_true| (negative under the sqrt)
    l_norm = K.sqrt(K.square(y_true) - K.square(y_pred))
    return l_bce + l_norm

def train(X, Y, BATCH_SIZE):
    d_optim = SGD(lr=0.002)
    g_optim = SGD(lr=0.00004)
    g = generator()
    d = discriminator()
    gan = generator_containing_discriminator(g, d)
    g.compile(loss=g_loss_function, optimizer=g_optim)
    gan.compile(loss='binary_crossentropy', optimizer="SGD")
    d.trainable = True
    d.compile(loss='binary_crossentropy', optimizer=d_optim)
    num_batches = int(X.shape[0] / float(BATCH_SIZE))
    for epoch in range(1000):
        for index in range(1, num_batches):
            # Prepare data
            startIdx = (index - 1) * BATCH_SIZE
            endIdx = index * BATCH_SIZE
            inputs = X[startIdx:endIdx, :]
            targets = Y[startIdx:endIdx]
            # Generate predictions
            Y_pred = g.predict(inputs)
            # Build input and truth arrays for the discriminator
            targets = targets.reshape(BATCH_SIZE, 1, 1)
            truth = np.vstack((np.ones((BATCH_SIZE, 1, 1)), np.zeros((BATCH_SIZE, 1, 1))))
            d_loss = d.train_on_batch(np.vstack((targets, Y_pred)), truth)
            d.trainable = False
            # Train the generator through the combined model
            g_truth = np.ones((BATCH_SIZE, 1, 1))
            g_loss = gan.train_on_batch(inputs, g_truth)
            d.trainable = True
            print('Epoch {} | d_loss: {} | g_loss: {}'.format(epoch, d_loss, g_loss))
    g.save_weights('generator', True)
    d.save_weights('discriminator', True)
    return d, g, gan
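One possible culprit in the code above (a guess, not a diagnosis): the discriminator uses BatchNormalization, and stacking real and fake samples into one batch means the batch statistics are computed over a mixed distribution, which is a known way for DCGAN-style training to stall. A common workaround is to train on real and fake samples in separate `train_on_batch` calls. A minimal sketch of the label setup (`d.train_on_batch` calls are commented out since they need the compiled model from the post):

```python
import numpy as np

BATCH_SIZE = 8
real = np.random.randn(BATCH_SIZE, 1, 1)   # stand-in for targets
fake = np.random.randn(BATCH_SIZE, 1, 1)   # stand-in for Y_pred

# Instead of one stacked call on np.vstack((real, fake)), train on
# real and fake separately so BatchNormalization sees homogeneous
# batch statistics:
real_labels = np.ones((BATCH_SIZE, 1, 1))
fake_labels = np.zeros((BATCH_SIZE, 1, 1))
# d_loss_real = d.train_on_batch(real, real_labels)
# d_loss_fake = d.train_on_batch(fake, fake_labels)
# d_loss = 0.5 * (d_loss_real + d_loss_fake)
```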

submitted by /u/Cranial_Vault
[link] [comments]

[Project] Deploy trained model to AWS Lambda with the Serverless framework

Hi guys,

We have continued updating our open source project for packaging and deploying ML models to production, and we have created an easy way to deploy an ML model as a serverless project that you can deploy to AWS Lambda and Google Cloud Functions. We want to share it with you and hear your feedback.


A little background on BentoML for those who aren’t familiar with it. BentoML is a Python library for packaging and deploying machine learning models. It provides high-level APIs for defining an ML service and packaging its artifacts, source code, dependencies, and configurations into a production-system-friendly format that is ready for deployment.

Feature highlights:

  • Multiple Distribution Formats – Easily package your machine learning models into the format that works best with your inference scenario:
      – Docker Image – deploy as containers running a REST API server
      – PyPI Package – integrate into your Python applications seamlessly
      – CLI tool – put your model into an Airflow DAG or CI/CD pipeline
      – Spark UDF – run batch serving on large datasets with Spark
      – Serverless Function – host your model on serverless cloud platforms

  • Multiple Framework Support – BentoML supports a wide range of ML frameworks out of the box, including TensorFlow, PyTorch, scikit-learn, and XGBoost, and can be easily extended to work with new or custom frameworks.

  • Deploy Anywhere – A BentoML-bundled ML service can be easily deployed with platforms such as Docker, Kubernetes, Serverless, Airflow, and Clipper, on cloud platforms including AWS Lambda/ECS/SageMaker, Google Cloud Functions, and Azure ML.

  • Custom Runtime Backend – Easily integrate your Python preprocessing code with a high-performance deep learning model runtime backend (such as tensorflow-serving) to deploy a low-latency serving endpoint.


How to package a machine learning model as a serverless project with BentoML

It’s surprisingly easy: it takes a single CLI command. After you’ve finished training your model and saved it to the file system with BentoML, all you need to do is run the bentoml build-serverless-archive command, for example:

 $bentoml build-serverless-archive /path_to_bentoml_archive /path_to_generated_serverless_project --platform=[aws-python, aws-python3, google-python] 

This will generate a serverless project at the specified directory. Let’s take a look at the files that are generated.

 /path_to_generated_serverless_project
 - serverless.yml
 - requirements.txt
 - copy_of_bentoml_archive/
 - (if platform is google-python, it will generate

serverless.yml is the configuration file for the Serverless framework. It contains the configuration for the cloud provider you are deploying to and maps out which events will trigger which function. BentoML automatically modifies this file to add your model prediction as a function event and updates other info for you.
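As a rough illustration (the service name, handler path, and route below are hypothetical, not what BentoML emits verbatim), a serverless.yml for AWS Lambda typically looks something like:

```yaml
service: my-bentoml-service        # hypothetical service name

provider:
  name: aws
  runtime: python3.7

functions:
  predict:
    handler: handler.predict       # module.function wrapping the model
    events:
      - http:                      # expose the function as a REST endpoint
          path: predict
          method: post
```

The `functions` section is the event-to-function mapping described above: here an HTTP POST to /predict triggers the prediction function.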

requirements.txt is a copy from your model archive; it includes all of the dependencies needed to run your model. The handler file contains your function code: BentoML fills in its function with your model archive class, so you can make predictions with it right away without any modifications.

copy_of_bentoml_archive: a copy of your model archive. It will be bundled with the other files for serverless deployment.


What’s next

After you generate this serverless project, if you have the default configuration for AWS or Google, you can deploy it right away. Otherwise, update serverless.yml based on your own configuration.

Love to hear feedback from you guys on this.





Edit: Styling

submitted by /u/yubozhao
[link] [comments]

[Project] Computer Vision with ONNX Models

Hey everyone! I just created a new runtime for Open Neural Network Exchange (ONNX) models called ONNXCV.

Basically, you can run inference on ONNX models for realtime computer vision applications (i.e. image classification and object detection) without having to write boilerplate code. It is useful in the sense that one can focus on building deep learning models with any deep learning library (that converts models into the ONNX file format), without having to sacrifice time building the actual inferencing program.
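For context on what such a runtime wraps: the onnxruntime session API plus a bit of numpy postprocessing is the boilerplate in question. A minimal sketch for image classification (the model path and input layout are placeholders; the session calls are commented out so the numpy parts stand alone):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def top_k(probs, k=3):
    """Indices of the k highest-probability classes, best first."""
    return np.argsort(probs)[::-1][:k]

# With onnxruntime ("model.onnx" is a placeholder path):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx")
#   input_name = sess.get_inputs()[0].name
#   logits = sess.run(None, {input_name: frame})[0]
#   print(top_k(softmax(logits)[0]))

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(top_k(probs, k=2))  # [0 1]
```

Hiding this glue behind a single call while the user only supplies the .onnx file is the value proposition being described.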

Let me know what you think and if I should continue to build upon this (or not).

Here is the code.

submitted by /u/Chromobacterium
[link] [comments]

[P] I2S OCR – Image 2 Speech App

Hey folks,

We are pleased to introduce the I2S OCR scanner app. I2S is a state-of-the-art OCR scanner that turns almost any image with human-readable characters into text content, which is in turn transformed into human voice in your native language & accent.

Once the image data (book page, magazine, journal, scientific paper, etc.) is recognized & transformed into text content, you’ll be able to play back that text in your local accent in over 45 languages of your choice!

Text output not understood? No problem: use the built-in translation service & get your text translated into over 75 languages of your choice. Generate PDFs on the fly, copy to the device clipboard, & share your text output with friends.

Feature Set Includes:

  • State of the art OCR processing algorithm powered by PixLab.
  • Ability to recognize the input language automatically.
  • Speaks over 45 languages & their accents.
  • Translate output to over 70 languages of your choice.
  • Generate PDF, Share your output & Give your feedback.


We hope you enjoy using I2S, and we look forward to your feedback!

submitted by /u/histoire_guy
[link] [comments]

[D] Confused with axis and space

If I apply the Dirichlet process, or any other clustering method for that matter, I am able to create components/clusters out of my data, and these are plotted on a 2D plane. What are the axes that define this space? What are the plots and clusters defined as? And why is it safe to assume that the distance between values determines whether or not the data share a similar structure/topic?
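For concreteness on where such a 2D plane usually comes from: the plane is typically not intrinsic to the clustering but a projection of the feature space, e.g. onto the first two principal components, so the axes are whatever directions the projection picks and distances are only as meaningful as the projection preserves them. A minimal PCA sketch in numpy (the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # synthetic 5-D feature vectors

# Project to 2-D with PCA: center the data, then keep the top-2
# eigenvectors of the covariance matrix as the plot's axes.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions
X_2d = Xc @ components                   # coordinates on the 2-D plane

print(X_2d.shape)  # (100, 2)
```

Under this kind of projection, "the axes" are the principal components, and closeness on the plot only suggests similarity to the extent that the top components capture the variation that matters.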

submitted by /u/Unlistedd
[link] [comments]
