
[D] Debugging model performance discrepancy between offline eval and online exp

I recently had an interview with an online ads company. The interviewer asked me the following question:

“If we expect a newly trained model to perform well in an online experiment, but the experiment result is quite negative, how would you debug it?”

My answer was: “It may be caused by overfitting. If so, we can change the model, e.g. if we’re using a decision tree, we can switch to a random forest.”

The interviewer didn’t seem very satisfied with that answer, saying that switching models is heavyweight. I then suggested it could be a feature or data distribution discrepancy between offline and online, and he asked how I would debug those two cases. I got a little stuck.
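For the distribution-discrepancy case, here is a minimal sketch of one way to debug it (my own illustration, assuming you can log serving-time feature values): run a per-feature two-sample KS test between the offline training data and a sample of online serving logs, and look at the features that drift most. The column handling and the 0.1 threshold below are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp


def feature_skew_report(offline: pd.DataFrame, online: pd.DataFrame) -> pd.DataFrame:
    """Compare each shared numeric feature's offline vs. online distribution."""
    rows = []
    shared = offline.columns.intersection(online.columns)
    for col in shared:
        if not np.issubdtype(offline[col].dtype, np.number):
            continue  # skip non-numeric features in this simple sketch
        stat, pvalue = ks_2samp(offline[col].dropna(), online[col].dropna())
        rows.append({"feature": col, "ks_stat": stat, "p_value": pvalue})
    # Features with the largest KS statistic are the most-drifted candidates
    return pd.DataFrame(rows).sort_values("ks_stat", ascending=False)


# Hypothetical usage: features with ks_stat above some threshold (e.g. 0.1)
# are the first places to look for logging bugs, train/serve skew, or
# missing values that only occur online.
# report = feature_skew_report(train_df, serving_log_df)
# print(report[report["ks_stat"] > 0.1])
```

This kind of check is lightweight compared to retraining or swapping models, which I suspect is the direction the interviewer was pushing toward.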

I’d like to hear some of your opinions.

submitted by /u/marksteve4