Let’s say that for my research project I use a model in part of my implementation (e.g., to calculate semantic similarities between sentences) because it is the best suited for the particular task at hand. When evaluating my implementation against my competitors’, can I use the same model (e.g., to calculate semantic similarities between output and input), or would this be inappropriate since it was already used in my own implementation? It wouldn’t make much sense to use a different model for the evaluation, since the same model is the “best suited” for both the implementation and the evaluation. What should the approach be here?
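To make the setup concrete: the concern is that the same similarity function appears both inside the system and in its evaluation. A minimal sketch of such a scoring function is below, using cosine similarity over embedding vectors; the embeddings here are hypothetical placeholders, since the post does not name the actual model that would produce them.

```python
from math import sqrt

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors:
    # dot(u, v) / (|u| * |v|), ranging from -1 to 1.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = sqrt(sum(x * x for x in u))
    norm_v = sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

# Hypothetical sentence embeddings; in practice these would come
# from whatever model the implementation uses.
emb_input = [0.2, 0.1, 0.9]
emb_output = [0.25, 0.05, 0.85]

# If this same score is used both to pick outputs inside the system
# and to grade them afterwards, the evaluation may favor that system.
score = cosine_similarity(emb_input, emb_output)
```

The methodological worry, then, is that an evaluation metric derived from the same model the system optimizes against can inflate that system’s measured performance relative to competitors that did not optimize against it.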
submitted by /u/AnonMLstudent