[D] Can neural program synthesis be improved with 100x scaling of samples/compute/labels/curriculum?
Looking at some recent papers on program synthesis:

- Neural (Meta) Program Synthesis, Singh (GB)
- AlphaNPI, whose authors tweeted its acceptance to NeurIPS 2019 as a spotlight
I am wondering whether the field is still working out good architectures, representations, etc.,

OR

whether existing SOTA techniques could already succeed given 100x more compute, a massive dataset of input-output pairs, or a long, carefully curated curriculum of specs and solutions.
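To make the "dataset of input-output pairs" framing concrete, here is a minimal sketch (hypothetical, not from any of the cited papers) of the classical baseline that neural methods try to scale past: enumerative search over a tiny DSL, supervised only by input-output examples. The DSL primitives and names are made up for illustration.

```python
# Hypothetical toy DSL and enumerative synthesizer, for illustration only.
from itertools import product

# DSL: unary integer functions built from a few primitives.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Apply a sequence of primitive names left to right."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_len=4):
    """Breadth-first search for the shortest program consistent with all pairs."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

# Target f(x) = 2x + 1, specified only by three input-output pairs.
examples = [(0, 1), (1, 3), (2, 5)]
print(synthesize(examples))  # -> ('dbl', 'inc')
```

The search space here grows exponentially in program length, which is exactly why the scaling question matters: more compute buys deeper search, while more labeled pairs or a curriculum mainly helps a learned model prune or guide it.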
submitted by /u/so_tiredso_tired