In this work, we propose a method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. Inferring scene illumination from a single photograph is a challenging problem: the pixel intensities observed in a photograph are a complex function of scene geometry, reflectance properties, and illumination. We introduce an end-to-end solution to this problem, proposing a deep neural network that takes the limited field-of-view photo as input and produces as output an environment map represented as a panorama, together with a light mask prediction over that panorama. Our technique requires neither special image capture nor user input. We preprocess standard low dynamic range panoramas with novel light source detection and warping methods, and use the results, paired with corresponding limited field-of-view crops, as training data. Our method makes no assumptions about scene appearance, geometry, material properties, or lighting. This allows us to automatically recover high-quality illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications such as 3D object insertion leads to photo-realistic results, which we demonstrate on a large set of examples and via a user study.