
[D] How to detect/prevent fatal implementation bugs?

When you implement an idea in a program and don't get the high accuracy you hoped for, the cause could be either that the idea itself doesn't work or that the implementation has a bug. If the low accuracy is due to a bug but you conclude that the idea doesn't work, you could be tragically missing an important research result.

One way to solve the problem is to implement the idea twice, preferably with two people working independently, and compare the results. If the outputs (predictions and accuracy metrics) are identical for the same input, you conclude that both implementations behave as intended. If not, you have to make them agree, examining differences and fixing bugs along the way. However, independent implementations are bound to make different assumptions about various details and can produce different outputs even when neither has a bug; even different ways of using a PRNG lead to different results. Generally, it's a labor-intensive, painstaking process that can take even more time than the initial implementation itself.
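One way to mechanize this kind of cross-check is to drive both implementations from a single shared PRNG stream and compare their outputs against a numerical tolerance rather than exact equality, so benign floating-point differences don't get flagged as bugs. Here is a minimal sketch in Python/NumPy; the softmax example, the function names, and the tolerance are illustrative assumptions, not anything from the original post:

```python
import numpy as np

def softmax_v1(x):
    # Implementation A: straightforward definition.
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_v2(x):
    # Implementation B: numerically stabilized variant,
    # written independently of A.
    shifted = x - x.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def cross_check(f, g, n_trials=100, seed=0, atol=1e-6):
    """Feed both implementations identical random inputs and
    report the first input on which their outputs diverge."""
    rng = np.random.default_rng(seed)  # one shared PRNG stream for both
    for trial in range(n_trials):
        x = rng.normal(size=(4, 10))
        out_f, out_g = f(x), g(x)
        if not np.allclose(out_f, out_g, atol=atol):
            print(f"Mismatch on trial {trial}: max diff "
                  f"{np.abs(out_f - out_g).max():.2e}")
            return False
    print(f"All {n_trials} trials agree within atol={atol}")
    return True

if __name__ == "__main__":
    cross_check(softmax_v1, softmax_v2)
```

Sharing one seeded generator and allowing a small tolerance sidesteps, at least for individual components, the PRNG and floating-point mismatches described above; it doesn't remove the cost of reconciling end-to-end training runs, where stochastic ordering effects still make exact agreement unrealistic.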

Are there other, more efficient ways to ward off implementation errors that could doom one's research?

submitted by /u/Syncopat3d