Google has recently introduced the Cloud Vision API for image analysis. According to the demonstration website, the API “quickly classifies images into thousands of categories, detects individual objects and faces within images, and finds and reads printed words contained within images.” It can also be used to “detect different types of inappropriate content from adult to violent content.” In this paper, we evaluate the robustness of Google’s Cloud Vision API to input perturbation. In particular, we show that by adding sufficient noise to an image, the API generates completely different outputs for the noisy image, while a human observer would still perceive its original content. We show that the attack is consistently successful by performing extensive experiments on different image types, including natural images, images containing faces, and images containing text. Our findings indicate the vulnerability of the API in adversarial environments. For example, an adversary could bypass an image filtering system by adding noise to an image with inappropriate content.
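As a concrete illustration of the kind of perturbation described above, the sketch below adds impulse (salt-and-pepper) noise to an image array using NumPy. The function name, noise density, and the choice of impulse noise here are illustrative assumptions, not the paper's exact attack parameters.

```python
import numpy as np

def add_impulse_noise(image, density=0.2, seed=0):
    """Replace roughly a `density` fraction of pixels with pure black or white.

    `image` is an (H, W, C) uint8 array; untouched pixels keep their values,
    so a human viewer still perceives the original content at low densities.
    """
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    # Pick which pixel locations to corrupt, then split them into salt/pepper.
    mask = rng.random(image.shape[:2]) < density
    salt = rng.random(image.shape[:2]) < 0.5
    noisy[mask & salt] = 255   # "salt": white pixels
    noisy[mask & ~salt] = 0    # "pepper": black pixels
    return noisy

# Example: perturb a uniform gray 64x64 RGB image.
img = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = add_impulse_noise(img, density=0.2)
```

In practice, the noisy image would then be submitted to the API to compare its labels against those returned for the clean image.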