Sparse signal recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and large training datasets, they are able to run orders of magnitude faster than existing methods. Unfortunately, these methods are difficult to train, often specific to a single measurement matrix, and largely unprincipled black boxes. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be unrolled to form interpretable deep neural networks. Taking inspiration from this work, we develop novel neural network architectures that mimic the behavior of the denoising-based approximate message passing (D-AMP) and denoising-based vector approximate message passing (D-VAMP) algorithms. We call these new networks Learned D-AMP (LDAMP) and Learned D-VAMP (LDVAMP). The LDAMP/LDVAMP networks are easy to train, can be applied to a variety of different measurement matrices, and come with a state-evolution heuristic that accurately predicts their performance. Most importantly, our networks outperform the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and runtime. At high resolutions, and when used with matrices that have fast matrix multiply implementations, LDAMP runs over $50\times$ faster than BM3D-AMP and hundreds of times faster than NLR-CS.
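To illustrate the kind of iteration that gets unrolled into a network, the following is a minimal sketch of a denoising-based AMP loop using a simple soft-threshold denoiser in place of a learned one. The problem sizes, the $2\sigma$ threshold rule, and the variable names are illustrative assumptions, not the paper's configuration; an LDAMP layer would replace the denoiser with a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 128, 10  # signal length, measurements, sparsity (assumed sizes)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # column-normalized Gaussian matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true  # noiseless measurements

def soft_threshold(v, tau):
    # Soft-threshold denoiser eta(v; tau); LDAMP swaps this for a trained CNN.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
z = y.copy()
for t in range(30):
    r = x + A.T @ z                         # pseudo-data (noisy version of x_true)
    sigma = np.linalg.norm(z) / np.sqrt(m)  # effective noise level estimate
    x = soft_threshold(r, 2.0 * sigma)      # denoising step
    onsager = (z / m) * np.count_nonzero(x)  # Onsager correction term
    z = y - A @ x + onsager                  # corrected residual

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3e}")
```

The Onsager correction term is what keeps the effective noise in `r` approximately Gaussian across iterations, which is the property that makes the state-evolution prediction (and the layer-by-layer training of LDAMP) possible.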