
Recursive Models Write Programs

Post · Apr 25, 2017 17:04

recursion program-semantics cs-lg cs-ne cs-pl

A program that writes programs needs recursion: that is the conclusion of this new paper. The authors take several standard programming tasks and propose a model architecture whose perfect generalization can be proven, even with small amounts of training data.

Highlights From the Paper

  • Recursion enables provably perfect generalization (see the sketch after this list).
  • Existing models share a common limitation: generalization degrades beyond a threshold level of input complexity.
  • After training on a very small number of examples, the learned recursive programs solve all valid inputs with 100% accuracy, outperforming previous generalization results.
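To make that concrete: grade-school addition, one of the paper's four tasks, has a natural recursive formulation in which each call handles a single digit column plus a carry and then recurses on a strictly shorter input. The sketch below is our own illustration of that decomposition (the function name `add_digits` is ours), not the authors' NPI traces:

```python
def add_digits(a, b, carry=0):
    """Grade-school addition, recursively: handle the last digit
    column plus the carry, then recurse on the shorter prefixes.

    a, b: lists of decimal digits, most-significant first.
    Returns the digits of a + b (+ carry), most-significant first.
    """
    if not a and not b:
        return [carry] if carry else []
    # Peel off the least-significant digits (0 if that number is exhausted).
    da = a[-1] if a else 0
    db = b[-1] if b else 0
    total = da + db + carry
    # The recursive call sees a strictly smaller problem, so the
    # per-step logic only ever deals with one digit column.
    return add_digits(a[:-1], b[:-1], total // 10) + [total % 10]

assert add_digits([9, 9], [1]) == [1, 0, 0]  # 99 + 1 = 100
```

Because each step only decides one column, the set of situations any single step must handle is small and fixed, no matter how long the input numbers get.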

Arxiv Abstract

  • Jonathon Cai
  • Richard Shin
  • Dawn Song

Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.
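For a sense of why recursion "drastically reduces the domain of each component", consider quicksort, another of the paper's four tasks. A minimal sketch of the recursive structure, again ours rather than the paper's NPI implementation:

```python
def quicksort(xs):
    """Recursive quicksort: each call partitions only its own slice,
    then delegates the strictly smaller halves to recursive calls.
    Every step makes a local, bounded decision, which is what makes
    the overall behavior tractable to reason about.
    """
    if len(xs) <= 1:  # base case: nothing left to decide
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

assert quicksort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]
```

A non-recursive learned sorter has to cope with arbitrarily long inputs in a single pass of its controller; the recursive version only ever reasons about one partition step at a time.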

Read the paper (pdf) »