Precision Oncology 3.0

Rapid learning by using panomics and statistical reverse engineering methods to hypothesise the putative tumour driver networks
20 Mar 2014

The emerging paradigm of Precision Oncology 3.0 uses panomics (genomic, transcriptomic, proteomic, metabolomic, and other data) and sophisticated statistical reverse-engineering methods to hypothesise the putative networks that drive a given patient's tumour, and to attack those drivers with combinations of targeted therapies. In a review article published in the February 2014 issue of Nature Reviews Clinical Oncology, Jeff Shrager of CommerceNet and Jay Tenenbaum of Cancer Commons discuss a paradigm termed Rapid Learning Precision Oncology, wherein every treatment event is considered a probe that simultaneously treats the patient and provides an opportunity to validate and refine the models on which treatment decisions are based.

Implementation of Rapid Learning Precision Oncology requires overcoming a host of challenges that include developing analytical tools, capturing the information from each patient encounter and rapidly extrapolating it to other patients, coordinating many patient encounters to efficiently search for effective treatments, and overcoming economic, social and structural impediments, such as obtaining access to, and reimbursement for, investigational drugs.

Borrowing from Web nomenclature, the authors roughly distinguish three generations of Precision Oncology.

Precision Oncology 1.0

This is the prevailing standard, which involves testing for a small number of molecular abnormalities that are correlated with drug response in particular tumour types. Precision Oncology 1.0 is almost always constrained by tissue of origin and by other non-molecular characteristics, such as microscopic histology.

Precision Oncology 2.0

This generation involves examining dozens, or potentially hundreds, of possible mutational hotspots simultaneously, or sequencing the exons of several hundred cancer-associated genes; the approach might sometimes disregard non-molecular characteristics.

Precision Oncology 2.0 requires that laboratories have specialised equipment, perhaps including next-generation sequencing, and imposes a much greater interpretive load on the physician, who may be expected to develop a therapeutic regimen to match a wide range of possible molecular subtypes.

Because of these requirements, few patients have so far had the opportunity to benefit from Precision Oncology 2.0, but with the broad availability of next-generation sequencing, and of molecular diagnostic service providers to aid in interpretation, it is rapidly becoming the standard of care at leading cancer centres worldwide.

Precision Oncology 3.0

The emerging new generation of precision oncology, which the authors call Precision Oncology 3.0, uses broad-spectrum panomics and sophisticated network-based statistical reverse-engineering methods to hypothesise the putative driver networks for a given patient's tumour. Once these are computed, they are combined with important contextual features (such as the patient's treatment history, status, and preferences, as well as knowledge of available drugs and drug interactions) to hypothesise a treatment plan that attacks the tumour drivers with a combination of narrowly targeted therapies.

At the heart of Precision Oncology 3.0 are driver network analysis, and clinical targeting and treatment planning. Driver network analysis identifies key genes that modulate established cancer hallmarks. By charting the trajectory of a tumour's molecular profile over time, it might be possible to anticipate how a cancer is likely to evolve, and to take proactive steps to block it from doing so.
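The review does not prescribe a particular reverse-engineering algorithm. As a purely illustrative sketch, one of the simplest network-inference approaches connects genes whose expression co-varies across samples and then looks for highly connected nodes as driver candidates; the gene names and expression values below are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 6 hypothetical genes x 40 tumour samples.
# GENE_A drives GENE_B and GENE_C in this synthetic data; D-F are noise.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E", "GENE_F"]
driver = rng.normal(size=40)
expr = np.vstack([
    driver,
    0.9 * driver + 0.2 * rng.normal(size=40),
    -0.8 * driver + 0.3 * rng.normal(size=40),
    rng.normal(size=40),
    rng.normal(size=40),
    rng.normal(size=40),
])

# Reverse-engineer an undirected network: link gene pairs whose
# expression is strongly correlated (positively or negatively).
corr = np.corrcoef(expr)
THRESHOLD = 0.7
edges = [(genes[i], genes[j])
         for i in range(len(genes))
         for j in range(i + 1, len(genes))
         if abs(corr[i, j]) > THRESHOLD]

# Rank candidate drivers by degree (number of strong connections).
degree = {g: sum(g in e for e in edges) for g in genes}
print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

Real driver-network methods are far more elaborate (they integrate multiple omics layers and distinguish causation from correlation), but the sketch conveys the basic idea of inferring network structure statistically rather than measuring it directly.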

In Precision Oncology 3.0, every treatment event is a probe, simultaneously treating the patient and providing an opportunity to test and improve molecular understanding of the disease. Whereas classical clinical trials provide strong evidence for the efficacy and/or effectiveness of a small set of treatments in a large set of patients, Precision Oncology 3.0 works in the opposite way, evaluating a wide range of possible treatments in a small cohort of patients, and then aggregating the results over all such experiments to achieve strong evidence. Moreover, by capturing what is learned about each pathway and each drug at each such encounter, the resulting knowledge can be generalised to other drugs or drug combinations, patients, and cancers, enabling learning to proceed rapidly, one patient at a time instead of one trial at a time. Evidence for the ability of Precision Oncology 3.0 to learn from single patients comes from the analysis of exceptional responders in large-scale clinical trials.

Precision Oncology 3.0 in cancer clinics

Investigators at some major cancer clinics in the USA are beginning to apply Precision Oncology 3.0 in a clinical setting. Medical institutions pursuing Precision Oncology 3.0 include, but are not limited to, the Duke Center for Personalized and Precision Medicine, the Institute for Precision Medicine at Weill Cornell Medical College and New York–Presbyterian Hospital, the MD Anderson Cancer Center Institute for Personalized Cancer Therapy, the Center for Translational Pathology at the University of Michigan, and the Personalized Cancer Medicine Program at the Icahn Institute for Genomics and Multiscale Biology at Mount Sinai Hospital.

Unfortunately, the high variability of cancer, the enormous amount of complex data delivered by panomic technologies, the large number of targeted therapies under development, and the need for combination regimens, all distributed over a considerable but finite number of patients, render cancer, in effect, a large number of rare diseases.

To efficiently search a space of this nature, one needs to capture the lessons learned from as many patients and treatment experiments as possible in a continuously updated knowledge base, and to use that knowledge to guide each treatment decision, across all patients, in a coordinated manner that optimises the trade-off between patient outcomes and knowledge acquisition.
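The trade-off between treating each patient optimally and acquiring knowledge for future patients is the classic explore/exploit dilemma. The review leaves the coordination mechanism at a high level, but one standard way to formalise it is Thompson sampling, sketched below with three hypothetical regimens and simulated (not real) response rates.

```python
import random

random.seed(42)

# Hypothetical response rates for three candidate regimens; unknown to
# the algorithm, used only to simulate patient outcomes.
TRUE_RESPONSE = {"regimen_A": 0.10, "regimen_B": 0.60, "regimen_C": 0.30}

# Beta(1, 1) prior over each regimen's response rate,
# stored as [successes + 1, failures + 1].
posterior = {r: [1, 1] for r in TRUE_RESPONSE}

chosen = []
for patient in range(500):
    # Thompson sampling: draw a plausible response rate from each
    # posterior and treat this patient with the best-looking regimen.
    draws = {r: random.betavariate(a, b) for r, (a, b) in posterior.items()}
    regimen = max(draws, key=draws.get)
    chosen.append(regimen)

    # The treatment event doubles as a probe: the observed outcome
    # updates the shared knowledge base for all future patients.
    responded = random.random() < TRUE_RESPONSE[regimen]
    posterior[regimen][0 if responded else 1] += 1

print("allocations:", {r: chosen.count(r) for r in TRUE_RESPONSE})
```

Early on, the algorithm spreads patients across regimens (exploration); as evidence accumulates, it increasingly allocates patients to the regimen most likely to help them (exploitation), which is exactly the outcomes-versus-knowledge balance described above.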

The promise of Rapid Learning Precision Oncology

By tightly integrating research and clinical care around and across individual patients, the rapid learning paradigm has the potential to dramatically accelerate knowledge acquisition and reduce delays in getting promising treatments into the clinic.

Developers can get early validation of new drugs by testing them on patients with the right mutations who are otherwise out of options. Physicians can share and learn from the thousands of clinical experiments that take place daily, but which are rarely deeply analysed, and even more rarely reported in the literature. Scientists can use preclinical experiments on a patient's cell line or xenograft to inform that patient's treatment. Most importantly, patients can receive the best available treatments, informed by the world's collective knowledge of how and when to use them.

In theory, applying the principles of rapid learning to Precision Oncology 3.0 would provide each patient with the best possible treatment based on the latest knowledge, while efficiently gathering evidence to advance understanding of cancer mechanisms, molecular subtypes, and therapies.

Several technical approaches can address Precision Oncology 3.0 challenges, for example seeking correlations in existing data (data mining or big data analysis), detailed analysis of particular cases (small data analysis), rapid scientific communications, and coordination through a process termed Global Cumulative Treatment Analysis.


Jeff Shrager, Jay M. Tenenbaum. Rapid learning for precision oncology. Nat Rev Clin Oncol 2014; 11(2): 109–118.

