Utilizing Transfer Learning to Augment Small Medical Datasets
Post · Mar 23, 2017 18:47
Medical datasets are notoriously small, making it difficult to apply modern deep learning techniques, which rely on large amounts of data. This paper shows that a deep network pre-trained on ImageNet and then re-trained on a much smaller dataset yields strong results for melanoma screening.
Knowledge transfer impacts the performance of deep learning -- the state of the art for image classification tasks, including automated melanoma screening. Deep learning's greed for large amounts of training data poses a challenge for medical tasks, which we can alleviate by recycling knowledge from models trained on different tasks, in a scheme called transfer learning. Although much of the best art on automated melanoma screening employs some form of transfer learning, a systematic evaluation was missing. Here we investigate the presence of transfer, the task from which the transfer is sourced, and the application of fine-tuning (i.e., retraining the deep learning model after transfer). We also test the impact of picking deeper (and more expensive) models. Our results favor deeper models, pre-trained on ImageNet, with fine-tuning, reaching AUCs of 80.7% and 84.5% on the two skin-lesion datasets evaluated.