Hi guys, I have what might be a stupid question, but I run into this issue regularly and would like to hear your opinion. Yesterday I was tweaking my ML model to improve its accuracy and found that it was performing worse. Why? I checked the previous model architecture (saved with Keras plot_model) and saw what I had done differently last week. No problem, I'll just revert to that architecture and test again. Now the model overfits in half the epochs. Damn, I also changed the dataset augmentation pipeline, so I can't recreate those specific scores anymore.
Basically this is my issue: I develop a model for n days, test it, save it, etc., then after a couple of weeks I try to revert to "that good model setup I had" and I can no longer get the same results because I changed too much in between. I partially worked around it by saving the model architecture as a PNG with Keras so I can do a quick visual comparison. It's not the end of the world, but I don't have a clean way to deal with this issue. How do you guys avoid such problems?
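For reference, my current workaround looks roughly like this (the toy model and filenames are just placeholders); the JSON dump at the end is only an idea I'm toying with, since unlike the PNG it can actually be reloaded:

```python
# Sketch of the PNG-snapshot workaround described above, plus an optional
# JSON config dump. Toy model and filenames are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder model standing in for the real one
model = keras.Sequential([
    keras.Input(shape=(32,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# PNG snapshot for quick visual comparison (needs pydot + graphviz installed)
keras.utils.plot_model(model, to_file="model_v1.png", show_shapes=True)

# JSON snapshot of the architecture: unlike the PNG, this can be reloaded
with open("model_v1.json", "w") as f:
    f.write(model.to_json())

# Later: rebuild the exact same architecture from the saved config
with open("model_v1.json") as f:
    restored = keras.models.model_from_json(f.read())
```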
Thank you!
submitted by /u/HitLuca