
Reshaping the Cancer Clinic

Big data in cancer is in its early stages, but ongoing initiatives promise to advance the field of oncology
03 Feb 2016
Targeted Therapy

In the recent Nature Outlook supplement on big data in biomedicine, an article by Charlie Schmidt discusses front-line initiatives that promise to advance oncology. In particular, it examines how four organisations are putting big (genomic and clinical) data to work: a rapidly growing start-up company, a professional association's initiative, a computer giant's cognitive computing system and a network of academic cancer centers.

The Cancer Genome Atlas, which catalogues cancer mutations, contains around 2.5 million gigabytes of data. This giant project, run by the US National Institutes of Health, has vastly improved understanding of various types of cancer. However, it holds relatively little information on the clinical experience and outcomes in patients who supplied the samples.

Electronic health records contain a wealth of case-specific information that could be used to improve cancer care. But more often than not, such records are isolated in individual hospitals and medical practices and lost to research. In an effort to improve cancer treatment, many oncologists are now collaborating on efforts to pool these big data and make sense of them. Opportunities for big data extend across most areas of medicine, but cancer is at the forefront.

Lynn Etheredge, a health-care consultant who in 2007 wrote an influential article for Health Affairs calling for “rapid learning systems” to handle big data, believes that the oncology community has entered a historic period for cancer research and treatment.

Hoping to build on early successes with personalised cancer drugs, oncologists and computer specialists are working together to harness digitized information and apply it in the clinic. These emerging ventures are competing for business and are grappling with difficult questions about privacy, data ownership and sustainable business models, according to Schmidt’s interview with Etheredge.

The article features some of the organisations and approaches that are bringing big data to the cancer clinic in the US.

A rapidly growing start-up company

Launched in 2009 by scientists at the Broad Institute in Cambridge, Massachusetts, Foundation Medicine bills insurance companies for its analytical services. Academic and community oncologists submit patients' tissue samples, and Foundation Medicine sequences them and screens for genomic cancer drivers against its own growing database of molecular profiles (generated from more than 50,000 cancer patients so far), as well as data from other public repositories.

Foundation Medicine analyses the tissues and reports back available therapeutic interventions, either in the form of a drug approved by the US Food and Drug Administration or a clinical trial. Oncologists can also query Foundation Medicine's client network for advice on difficult cases; within 72 hours, responses are aggregated and sent to the doctor. The company aims to make its client data more broadly available for use in clinical decision-making.
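Conceptually, the reporting step can be pictured as a lookup of the genomic drivers detected in a tumour against a curated knowledge base of therapy options. The short Python sketch below illustrates that idea only; the knowledge-base entries and trial identifier are invented for the example, and it is not Foundation Medicine's actual pipeline.

```python
# Hypothetical sketch: match genomic drivers detected in a tumour against a
# curated knowledge base of therapy options (an approved drug or an open
# clinical trial). Illustrative only; not Foundation Medicine's pipeline.

from dataclasses import dataclass

@dataclass
class TherapyOption:
    gene: str     # driver gene the option targets
    therapy: str  # drug name or trial identifier
    kind: str     # "FDA-approved drug" or "clinical trial"

# Invented knowledge-base entries; a real system curates thousands.
KNOWLEDGE_BASE = [
    TherapyOption("EGFR", "erlotinib", "FDA-approved drug"),
    TherapyOption("BRAF", "vemurafenib", "FDA-approved drug"),
    TherapyOption("FGFR2", "hypothetical trial NCT-0000000", "clinical trial"),
]

def match_therapies(detected_drivers: set[str]) -> list[TherapyOption]:
    """Return therapy options whose target gene was detected in the sample."""
    return [opt for opt in KNOWLEDGE_BASE if opt.gene in detected_drivers]

# Example: drivers flagged while screening one patient's sequenced sample.
for option in match_therapies({"EGFR", "FGFR2"}):
    print(f"{option.gene}: {option.therapy} ({option.kind})")
```

In reality, of course, the matching draws on the company's molecular-profile database and public repositories rather than a hand-written table.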

In January 2015, Roche spent 1 billion USD on a 56% stake in Foundation Medicine, which expected revenue of more than 85 million USD that year.

Professional association's initiative

The American Society of Clinical Oncology’s (ASCO) CancerLinQ is a platform designed to extract patient data from electronic health records across oncology practices, then anonymize and aggregate them. The expectation is that oncologists will be able to interrogate CancerLinQ to see the effects of specific interventions, to review how their own treatment approaches stack up against established care standards, and to develop hypotheses for further study. Aggregating such large amounts of data and integrating them with other types of information, including doctors' notes and biomarker repositories, should help to reveal the effectiveness of particular drugs or approaches.
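To make the extract, anonymize and aggregate pattern concrete, here is a minimal Python sketch. It is purely illustrative: the field names, records and hashing-based pseudonym are assumptions for the example, not CancerLinQ's actual design.

```python
# Minimal sketch of the extract -> anonymize -> aggregate pattern described
# above. Field names, records and the hashing-based pseudonym are invented
# for illustration; this is not CancerLinQ's actual design.

import hashlib
from collections import defaultdict

def anonymize(record: dict) -> dict:
    """Drop direct identifiers, keeping a one-way pseudonym for linkage."""
    pseudonym = hashlib.sha256(record["patient_id"].encode()).hexdigest()[:12]
    return {
        "pseudonym": pseudonym,
        "diagnosis": record["diagnosis"],
        "treatment": record["treatment"],
        "outcome": record["outcome"],  # e.g. "responded" or "progressed"
    }

def aggregate(records: list[dict]) -> dict:
    """Count outcomes for each (diagnosis, treatment) pair across practices."""
    counts: dict = defaultdict(lambda: defaultdict(int))
    for r in records:
        counts[(r["diagnosis"], r["treatment"])][r["outcome"]] += 1
    return counts

# Toy extracts pooled from two oncology practices.
ehr_extracts = [
    {"patient_id": "A-001", "diagnosis": "NSCLC", "treatment": "erlotinib", "outcome": "responded"},
    {"patient_id": "B-017", "diagnosis": "NSCLC", "treatment": "erlotinib", "outcome": "progressed"},
]

for key, outcomes in aggregate([anonymize(r) for r in ehr_extracts]).items():
    print(key, dict(outcomes))
```

In practice, true de-identification requires far more than hashing an identifier, which is one reason the privacy questions mentioned above loom so large over these ventures.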

“Much of what we know about treating cancer comes from clinical trials that enroll just 3% of the patients diagnosed with cancer every year,” according to Dr Clifford Hudis, who serves on CancerLinQ's board of governors. “With CancerLinQ, we're trying to learn from the remaining 97% who don't participate in these studies.”

An initial group of 15 practices of varying sizes are participating in the system, which ASCO expects to contain 500,000 patient records by 2016.

Artificial intelligence

Big data needs big computing, and in 2013 IBM formed a separate business unit (IBM Watson Health) to focus on commercial opportunities in cancer for its Watson cognitive computing system, which combines natural language and learning capabilities. Watson's store of biomedical knowledge includes every abstract in PubMed, the US National Cancer Institute's Drug Dictionary, the entire catalogue of somatic cancer mutations in the COSMIC (Catalogue of Somatic Mutations in Cancer) database, which is curated by the Wellcome Trust Sanger Institute in Cambridge, UK, and data from many other sources.

Physicians at the Memorial Sloan Kettering Cancer Center and at the MD Anderson Cancer Center are training Watson to become a clinical support tool, which entails presenting the computer with anonymized and hypothetical cases. But oncologists caution that Watson isn't ready for prime time yet. In particular, Watson's capacity for natural language processing remains a work in progress.

Academic networking

One of the major challenges facing cancer research is how to match patients with targeted drugs that act on rare mutations, because enrolling enough of these patients in clinical trials is not easy. But one group of hospitals has found a way to get round the problem.

Launched in 2014 by the Moffitt Cancer Center, in Tampa, Florida, the Oncology Research Information Exchange Network (ORIEN) comprises nine academic cancer centers. Patients provide clinical data and tissue samples for analysis and, importantly, agree to life-long follow-up, which allows them to be recruited into new trials geared to their own genetic profiles.

Moffitt developed the protocol in 2003 and created a company, M2Gen, to handle the analyses and tissue storage in 2006. The development of ORIEN gives this protocol a national reach in the US, with about 130,000 people enrolled so far. Member centers share clinical and molecular data, so they can collaborate on research questions.

Cost

Extracting clinical insights from big data, and using them to guide treatments, does not come cheaply. For example, Foundation Medicine charges nearly 6,000 USD to sequence and interpret the data from a single solid tumour, and more than 7,000 USD for a haematological malignancy. However, such technologies tend to become cheaper year on year.

New anticancer drugs often carry price tags of more than 100,000 USD per patient per year. Other countries may bargain far more aggressively with drug companies to bring down prices, or reject the drugs altogether on cost grounds, through agencies such as the UK National Institute for Health and Care Excellence.

Ideally, big money will buy big gains in personalised treatments and cures.

Reference

Schmidt C. Reshaping the cancer clinic. Nature 2015; 527(7576):S10–S11.

