
Role of Big Data for Evaluation and Supervision of Medicines in the EU

Summary report of the Heads of Medicines Agencies – EMA Joint Big Data task force
18 Feb 2019

Recommendations for a path towards understanding the acceptability of evidence derived from 'big data' in support of the evaluation and supervision of medicines by regulators were published on 15 February 2019 as part of a summary report of the Heads of Medicines Agencies (HMA) – European Medicines Agency (EMA) Joint Big Data task force. The recommendations and associated actions set out what needs to be addressed, but the mechanisms by which this may be achieved require further focused work over the coming year. Stakeholders are invited to submit feedback and observations on the recommendations to inform the upcoming work of the group.

Massive amounts of data are generated on a daily basis through wearable devices, electronic health records, social media, clinical trials and spontaneous adverse reaction reports. There is no doubt that insights derived from these data will increasingly be used by regulators to assess the benefit-risk of medicines across their whole lifecycle. However, in order to benefit from and make prudent use of the data collected, regulators need a deeper understanding of the data landscape.

The HMA – EMA Joint Big Data task force is composed of experienced medicines regulators from 14 national competent authorities and EMA. In preparing the report, it assessed the generation of ‘big data’, their relevant sources and main formats, the methods for processing and analysing big data and the current state of expertise across the European medicines regulatory network.

A crucial step was defining ‘big data’ itself, a widely-used term that is lacking a commonly-accepted definition. The definition adopted by the task force reads as follows: “extremely large datasets which may be complex, multi-dimensional, unstructured and heterogeneous, which are accumulating rapidly and which may be analysed computationally to reveal patterns, trends and associations. In general, big data sets require advanced or specialised methods to provide an answer within reliable constraints.”

Six subgroups of data sources relevant to regulatory decision-making were considered by the task force: genomics, bioanalytical 'omics (proteomics, etc.), clinical trials, observational data, spontaneous adverse drug reaction data, and social media and mobile health data.

Stakeholders and members of the public are invited to submit their comments on the core recommendations in the summary report (not to exceed 1,000 words) to bigdatasec@dkma.dk until 15 April 2019. In particular, views on prioritisation of future actions would be welcomed. The feedback received will be taken into account in the next phase of the work of the task force. A newly-refined mandate for the group is in place for the next year to define next steps and prioritisation of actions.

