
[D] An Interesting (in my opinion) Observation While Messing With the Full GPT-2

While playing around with an online implementation of the full model (talktotransformer.com), I noticed it can do something that is (to me) really cool: it can complete analogies! If you use an open quote and start an analogy, leaving out the last word, it can often get it right! This is interesting because, to my knowledge, the model was not trained to do this, so it must be an emergent result of “understanding” the English language. I’ve so far tried a number of different analogies in varying orders, for example ‘”Angry is to Anger as Afraid is to’ and ‘”Big is to Bigger as Small is to’, and while it doesn’t ALWAYS get it right, it does more often than not.
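For anyone who wants to try this locally rather than through the website, here’s a minimal sketch using the Hugging Face transformers library (an assumption on my part; talktotransformer.com’s own backend isn’t public). On the Hugging Face hub, “gpt2-xl” is the full 1.5B-parameter model:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer
    import torch

    model_name = "gpt2-xl"  # the full 1.5B-parameter GPT-2
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()

    # An open quote plus an unfinished analogy, as described above.
    prompt = '"Big is to Bigger as Small is to'
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=3,                     # the answer is usually one word
            do_sample=False,                      # greedy decoding: most likely continuation
            pad_token_id=tokenizer.eos_token_id,  # silences a padding warning
        )

    print(tokenizer.decode(output[0], skip_special_tokens=True))

Greedy decoding (do_sample=False) is deliberate here: since we only care whether the model’s single most likely continuation is the right word, sampling would just add noise to the probe.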

I tried this on the earliest, smallest model they released and it failed, so the ability seems to be unique to the full model (although I never tried any of the intermediate-size models they put out under their staged release plan, so I can’t confirm at which point it appeared).
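If anyone wants to pin down where the ability emerges, a quick sweep over the four public checkpoints on the Hugging Face hub (again just a sketch, untested by me) might look like this:

    from transformers import pipeline

    # The four staged-release checkpoints: 124M, 355M, 774M, 1.5B parameters.
    for name in ["gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl"]:
        generator = pipeline("text-generation", model=name)
        out = generator('"Angry is to Anger as Afraid is to',
                        max_new_tokens=3, do_sample=False)
        print(name, "->", out[0]["generated_text"])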

Has anyone else noticed this? And am I alone in thinking it’s cool? It may not be as flashy as writing an article, but it shows a level of “understanding” that things like Markov chains generally can’t match, in my experience.

submitted by /u/Argenteus_CG