I started working on the Toxicity Detection Kaggle challenge. I'd be curious to know how people familiar with NLP handle this kind of problem.
My problem is that I want to try things out, but running anything on the full dataset takes hours, and running on a subset of the data doesn't always tell me how good the result would be on the full set.
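One thing that can make subset runs more representative (a hedged sketch, not from the original post): toxicity data is heavily imbalanced, so a random subset can end up with a misleading label ratio. A stratified subsample keeps the label proportions of the full data. The function name `stratified_subset` is hypothetical.

```python
import random
from collections import defaultdict

def stratified_subset(examples, labels, fraction, seed=0):
    """Sample `fraction` of the data from each label group, preserving
    the overall label ratio (useful for imbalanced tasks like toxicity)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in zip(examples, labels):
        by_label[y].append(x)
    sample_x, sample_y = [], []
    for y, xs in by_label.items():
        # Keep at least one example per class so rare labels survive.
        k = max(1, round(len(xs) * fraction))
        for x in rng.sample(xs, k):
            sample_x.append(x)
            sample_y.append(y)
    return sample_x, sample_y
```

With a 90/10 label split and `fraction=0.1`, the subset keeps roughly 9 negatives for every positive, so metrics like AUC on the subset track the full-data numbers more closely than a purely random slice would.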
I have multiple computers and GPUs at home, so I juggle between them, but my workflow is generally a mess. It's difficult to keep track of what I'm doing, which experiments to prioritize, how to optimize hyperparameters, etc.
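Before reaching for an all-in-one platform, even a tiny append-only log can tame the bookkeeping across machines. Here is a minimal sketch (all names hypothetical, stdlib only): each run appends one JSON line with its hyperparameters and metrics, and the files from different machines can simply be concatenated and compared later.

```python
import hashlib
import json
import time

def log_run(logfile, params, metrics):
    """Append one run record (params + metrics) as a JSON line.

    The id is a hash of the params, so identical configs get the same
    id even when logged from different machines.
    """
    record = {
        "id": hashlib.md5(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:8],
        "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,
        "metrics": metrics,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def best_run(logfile, metric="val_auc"):
    """Return the logged run with the highest value of `metric`."""
    with open(logfile) as f:
        runs = [json.loads(line) for line in f]
    return max(runs, key=lambda r: r["metrics"][metric])
```

This obviously doesn't schedule jobs or sync GPUs, but it answers the "what did I already try, and which run was best?" question that tends to get lost when juggling machines.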
Is there an all-in-one solution out there to manage my experiments? I'm starting to experiment with Kubernetes, but it's a lot of overhead to configure and run jobs.
I guess I could use a beefier tool like Kubeflow, but I'm wondering whether that wouldn't be too big a tool for the job.
How are professionals handling things?
submitted by /u/MasterScrat