A team of Stanford scientists is developing an artificial intelligence (AI) algorithm for diagnosing skin cancer that could change the way medicine operates.
The algorithm has so far been “trained” to visually diagnose potential cancers using a database of nearly 130,000 images of skin disease, including melanoma, and it has performed with striking accuracy.
“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory, said in a press release. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.’”
The end product was the subject of a paper published this month in Nature, titled “Dermatologist-level classification of skin cancer with deep neural networks.”
So far, the researchers have tested the algorithm against 21 board-certified dermatologists on images of skin lesions, and it matched the dermatologists’ diagnostic performance.
Dermatologists begin skin diagnoses with visual examinations, usually through a dermatoscope, a portable microscope that allows the dermatologist to magnify the patient’s skin. If they identify something abnormal, the natural next step is to ask for a biopsy of that tissue.
Adding the algorithm to the diagnostic process could pair the visual examination with deep learning, a type of AI loosely modeled on the brain’s neural networks, in which a computer is “trained” to solve a problem from examples rather than relying on answers programmed into it in advance.
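To make that “trained from examples” idea concrete, here is a minimal sketch in PyTorch (the framework is our choice for illustration; the article does not specify one). The tiny network and the random stand-in images are purely illustrative, not the model or data described in the paper.

```python
# Minimal sketch: a network learns to map labeled example images to classes,
# instead of following hand-coded diagnostic rules.
import torch
import torch.nn as nn

# Toy stand-ins for labeled training data: 32 random RGB images (64x64),
# each tagged 0 ("benign") or 1 ("malignant"). Real training would use
# photographs of skin lesions with biopsy-confirmed labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 2, (32,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                 # two output classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Training" just means repeatedly nudging the weights so the network's
# predictions agree with the example labels.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```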
To build the algorithm, the researchers started from a model that Google researchers had already trained on 1.28 million images spanning 1,000 object categories.
Starting from that pretrained network, the Stanford team trained the algorithm to differentiate, for example, between a malignant carcinoma and a benign seborrheic keratosis by feeding it nearly 130,000 images of skin lesions representing about 2,000 different diseases.
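This reuse of an already-trained network is commonly called transfer learning. A rough sketch of that setup, assuming torchvision’s ImageNet-pretrained Inception v3 as the starting point (the article names the Google model only indirectly), might look like the following; the class count is a placeholder taken from the “about 2,000 diseases” figure above, not the study’s actual training partition.

```python
# Rough transfer-learning sketch: reuse a network trained on 1.28M ImageNet
# photos, then retrain it to recognize skin-disease categories instead.
import torch.nn as nn
from torchvision import models

NUM_SKIN_CLASSES = 2000  # placeholder; the study's real class count may differ

# Load a network whose weights were already learned on ImageNet.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)

# Swap its 1,000-way ImageNet classifier (and auxiliary head) for new
# skin-disease heads, then fine-tune on the lesion images.
model.fc = nn.Linear(model.fc.in_features, NUM_SKIN_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_SKIN_CLASSES)
```

The appeal of this approach is that the pretrained layers already encode general visual features (edges, textures, shapes), so far fewer skin-specific images are needed than if training started from scratch.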
The algorithm was then tested against 21 board-certified dermatologists. For each image, the dermatologists were asked whether they would order a biopsy, recommend treatment, or reassure the patient; the test covered 370 images. The algorithm matched the dermatologists’ accuracy in all three diagnostic tasks on which it was evaluated.
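Comparisons like this are typically scored in terms of sensitivity (malignant lesions correctly flagged for biopsy or treatment) and specificity (benign lesions correctly reassured). The short sketch below shows that bookkeeping on made-up stand-in data; it is not data from the study.

```python
# Illustrative scoring of a "biopsy/treat vs. reassure" comparison.
# truth: 1 = lesion is actually malignant, 0 = benign.
# predictions: 1 = decided to biopsy/treat, 0 = reassured the patient.
truth       = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(truth, predictions) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(truth, predictions) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(truth, predictions) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(truth, predictions) if t == 0 and p == 1)

sensitivity = tp / (tp + fn)   # fraction of malignant lesions caught
specificity = tn / (tn + fp)   # fraction of benign lesions correctly reassured
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```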
The researchers now want to make the algorithm available as a smartphone app, putting skin cancer screening within reach of anyone with a phone. The team expects the algorithm will be relatively easy to adapt to smartphone use once it has been assessed in real-world clinical settings.
“Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients,” said Susan Swetter, co-author of the paper and director of the melanoma program at Stanford. “However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike.”