[D] Using the same model in both implementation and evaluation

Let’s say that for my research project I use a model in part of my implementation (e.g., to calculate semantic similarities between sentences) because it is the best suited for the particular task at hand. When evaluating my implementation against my competitors’, can I use the same model (e.g., to calculate semantic similarities between output and input), or would that be inappropriate since the model is already part of my own implementation? It wouldn’t make much sense to use a different model for the evaluation, since this one is the “best suited” for both implementation and evaluation. What should the approach be here?
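For concreteness, here is a minimal sketch of the setup being asked about, assuming a pretrained sentence-embedding model; the `sentence-transformers` library and the `all-MiniLM-L6-v2` checkpoint are illustrative choices, not anything named in the post. The point of the sketch is that the very same similarity function appears twice: once inside the system and once as the evaluation metric.

```python
# Illustrative sketch only: the same sentence-embedding model is used both
# inside the implementation and as the evaluation metric, which is exactly
# the situation the question raises. Library and checkpoint are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice


def semantic_similarity(a: str, b: str) -> float:
    """Cosine similarity between the embeddings of two sentences."""
    emb = model.encode([a, b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()


# 1) Inside the implementation: e.g., pick the candidate sentence that is
#    most semantically similar to a query sentence.
def pick_best_candidate(query: str, candidates: list[str]) -> str:
    return max(candidates, key=lambda c: semantic_similarity(query, c))


# 2) In the evaluation: score every system's output against the input using
#    the *same* model. The concern is that one system was built around the
#    very metric used to judge all of them.
def evaluate(system_outputs: dict[str, str], source: str) -> dict[str, float]:
    return {name: semantic_similarity(source, out)
            for name, out in system_outputs.items()}
```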

submitted by /u/AnonMLstudent