
Add one more item to the list of things machines can do better than humans: examining and diagnosing breast cancer. Stanford researchers have developed new software that can automatically evaluate microscopic images of breast cancer tissue and make determinations about the cancer's aggressiveness and type, offering patients an accurate prognosis. As it turns out, it's more accurate than a human doctor.

The system brings cancer pathology, which has remained largely unchanged since the Great Depression, firmly into the 21st century.

The new system is called C-Path, for Computational Pathologist. It can classify the types of cancer cells present, and it even identified a new set of features associated with a poor chance of survival, according to its developers. Yale University pathologist Dr. David Rimm, who was not involved in the research, said it could transform the use of computers in pathology and medicine. “C-Path appears to be a Watson-like precursor to computer-aided medicine,” he wrote in a commentary accompanying a paper about the work.

Stanford researchers led by Dr. Andrew Beck developed a machine-learning algorithm and trained it on existing tissue samples taken from patients whose outcomes were already known. The computer analyzed a suite of images from the Netherlands Cancer Institute and Vancouver General Hospital, making thousands of measurements of the cells' morphology and other characteristics. Human pathologists then hand-trained the computer to distinguish between two types of cells, stromal and epithelial (connective tissue vs. glandular tissue), an important distinction in projecting breast cancer severity and spread. Ultimately, the C-Path system used 6,642 individual features to create a new scoring system that can predict a breast cancer patient's outcome.
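The paper doesn't spell out its code here, so purely as an illustration of the general recipe — scoring thousands of image-derived measurements against known outcomes with a sparse model — here is a minimal Python sketch. The synthetic data, the scikit-learn pipeline, and the choice of an L1-penalized logistic regression are all assumptions for illustration, not C-Path's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 patients, 6,642 image-derived measurements each
# (the feature count matches the paper; the values here are random noise).
n_patients, n_features = 500, 6642
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)  # 1 = alive at 5 years, 0 = deceased

# An L1 penalty keeps only a small subset of the thousands of features,
# one plausible way to distill them into a compact prognostic score.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

# The fitted probability of death by 5 years can serve as a risk score.
risk_scores = model.predict_proba(X)[:, 0]  # column 0 = class "deceased"
print("first five risk scores:", np.round(risk_scores, 3)[:5])
```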

Once it was trained, Beck et al. used C-Path to evaluate tissues of cancer patients it had not examined before. Again, the researchers already knew the outcome, so they were able to check C-Path’s success rate. Its results were a statistically significant improvement over human-based examination, the authors say.
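To make that held-out validation step concrete, here is a small hedged sketch of the idea: train on cases with known outcomes, then score predictions on cases the model has never seen. The data, the train/test split, and the AUC metric below are illustrative assumptions, not the study's actual protocol or its comparison against human scoring.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic cohort: 400 patients, 200 image-derived features, with a
# survival label loosely tied to the first feature so there is signal.
X = rng.normal(size=(400, 200))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

# Hold out cases the model never sees during training, mirroring the idea
# of validating on tissue samples the system had not examined before.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Because the held-out patients' true outcomes are known, the predictions
# can be scored directly (AUC here; the study reports its own statistics).
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```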

C-Path even figured out something pathologists hadn't: that the characteristics of both the cancer cells and the surrounding cells are important in determining a patient's outcome. “Through machine learning, we are coming to think of cancer more holistically, as a complex system rather than as a bunch of bad cells in a tumor,” said Dr. Matt van de Rijn, a professor of pathology and co-author of the study, in a statement. “The computers are pointing us to what is significant, not the other way around.”

This is an impressive result, as Rimm explains, because existing pathological analysis is highly subjective. Experts make judgments about tissue metastasis and a patient's overall chances of survival simply by looking at tissue, and their diagnoses can vary widely, as a 2008 study showed. But a computer model using thousands of times more criteria could be far more consistent.

Despite this success, C-Path is still a long way from clinical use, the authors say. But it’s a strong proof of concept, and as Rimm says, human pathologists ought to take notice. The work is published online today in the journal Science Translational Medicine.

Processed images from patients who were alive 5 years after surgery and from patients who had died within 5 years of surgery were used to construct an image-based prognostic model. The model was then applied to a test set of breast cancer images (not used in model building) to classify patients as being at high or low risk of death within 5 years.
