2025 CDISC + TMF Europe Interchange Program
Session 1: Opening Plenary
CDISC recognizes the importance of amplifying patient voices and provides a vital platform where experiences, challenges, and insights are not only shared but valued. It is inspiring to see brilliant minds gathered at this event, committed to advancing clinical research, optimizing the use of data, and improving lives.
Collaboration is the cornerstone of meaningful progress in research. Patients are more than subjects of research; we are partners. Our insights must guide study design, data interpretation, and regulatory discussions.
Living with a rare disease is challenging for the patient and their family. Still, rare diseases bring patient advocacy to a new level, with communities investing in progress, cooperation, and co-creation. This drives innovation and inclusivity not only in care, clinical research, and medicines, but also in data standardization and data use.
Standardization and interoperability are essential—not just for efficiency but for equity, ensuring diverse patient voices are represented and data is accessible and accurate.
CDISC’s commitment to streamlined standards accelerates therapies, enhances research reliability, and promotes inclusivity.
I invite you to join me in this ongoing conversation, where together, we can bridge the gap between research and real-life experiences to drive impactful, compassionate innovation.
Morning Break
Session 2: Track A, B, C - The European Landscape of Clinical Research and Health Care
- Nick Halsey, EMA
- Jesper Kjaer, Novo Nordisk
- Eftychia-Eirini Psarelli, EMA
Session 2D+E: The Future of TMF (TMF Track)
Lunch & Poster Session
Session 3A: Digital Data Flow
Session 3B: Artificial Intelligence
Data Standards teams or Subject Matter Experts (SMEs) often face the same recurring challenge: addressing numerous questions about the implementation of SDTM and company-specific standards. These queries consume significant time and resources. Locating the rationale for past decisions or clarifying an approach often requires consulting multiple resources.
Enter ‘SANDY’ (Standards ANswers Do it Yourself): an AI-powered chatbot designed to alleviate this burden. While it may not replace the Data Standards team, SANDY enhances efficiency by quickly retrieving relevant information from a vast array of documents and providing direct references to the sources.
In this presentation we will share the story of SANDY’s development into a Minimum Viable Product (MVP): from selecting the ideal large language model (LLM) and setting up a vector database to creating a functional, user-friendly chatbot. We’ll discuss the challenges and limitations we encountered and the innovative solutions we implemented, together with an external partner, to overcome these hurdles. We’ll also explore how we trained the model and iteratively improved response quality and accuracy. The session will conclude with a live demonstration, including pre-designed questions.
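For orientation, the sketch below shows the retrieval-augmented generation pattern this kind of chatbot relies on: embed document chunks into a vector store, retrieve the chunks most similar to a question, and pass them to an LLM with instructions to cite the sources. The embedding model, the tiny in-memory "document store", and the llm() stub are illustrative assumptions, not SANDY's actual components.

```python
# Minimal retrieval-augmented generation (RAG) sketch; all names illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each chunk keeps a pointer to its source so answers can cite it directly.
chunks = [
    {"text": "EPOCH must be populated in all subject-level domains.",
     "source": "Company SDTM Guide v4, sec. 2.3"},
    {"text": "Baseline flags are derived from the last non-missing value "
             "before first dose.", "source": "Standards Decision Log #112"},
]
chunk_vecs = model.encode([c["text"] for c in chunks], normalize_embeddings=True)

def llm(prompt: str) -> str:
    return prompt  # placeholder; swap in any chat-completion API call

def answer(question: str, k: int = 2) -> str:
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec             # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]      # indices of the k best chunks
    context = "\n".join(f"[{chunks[i]['source']}] {chunks[i]['text']}" for i in top)
    return llm("Answer using ONLY this context and cite the bracketed sources.\n"
               f"{context}\n\nQuestion: {question}")

print(answer("When should EPOCH be populated?"))
```

Because each retrieved chunk carries its source label into the prompt, the chatbot can return direct references alongside the answer, which is the key difference from a plain LLM chat.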
Artificial Intelligence (AI) is transforming clinical trial research, and this presentation explores its integration into the CDISC Open Rules project. I will demonstrate a custom-trained GPT-powered chatbot embedded in the CDISC Open Rules editor, designed to assist users in creating and validating CDISC Open Rules efficiently—without programming expertise.
The session will cover the chatbot’s architecture, highlighting prompt engineering’s role in generating high-quality outputs. I’ll guide attendees through document preparation for accurate, context-aware results and showcase real-world examples of rule creation and validation. The AI-driven chatbot automates rule drafting, test data generation, and interactive support, reducing manual effort while enhancing efficiency.
This presentation will inspire attendees—whether data managers, statisticians, or other clinical trial professionals—to embrace AI-driven solutions in their work. By demonstrating a practical application of a GPT-powered chatbot, I’ll provide actionable insights to enhance workflows, improve efficiency, and foster innovation.
Session 3C: Innovation Showcase

Compliance with CDISC data standards is mandatory for clinical trial submissions to the FDA, PMDA and MHRA. However, adopting CDISC standards isn’t just a necessity; it’s an important investment that enables more meaningful research and deeper data insights. Complying with CDISC data standards is an ongoing challenge for many organizations.
This presentation explores best-practice processes to support and achieve successful implementation and compliance with regulatory requirements. We examine the concept of ‘designing studies with the end in mind’ through early standards adoption: implementing industry standards from the start of a study and designing a compliant trial upfront, rather than leaving compliance to the end, is the ultimate blueprint for best practice. We also examine the essential role of software in creating and maintaining clinical metadata standards.
In conclusion, we demonstrate through a case study how end-to-end standards implementation can be leveraged to not only achieve compliance, but also facilitate greater quality and consistency, as well as faster delivery of submission deliverables.
Session 3D: Technology in TMF Management (TMF Track)
In the digital age of Trial Master File (TMF) management, Veeva eTMF is a leading platform for ensuring compliance, accuracy, and efficiency. Despite its widespread use, variability exists in how TMF content is filed, processed, and maintained. Analyzing trends and processes from sponsors and CROs using Veeva eTMF provides insights to identify best practices, inefficiencies, and opportunities for improvement. This session will explore TMF data trends, focusing on document creation, collaboration, centralized vs. decentralized filing, AI’s impact on TMF quality, and performance metrics for different models (FSP vs. FSO) and across Reference Model zones. By analyzing TMF data, companies can gain new insights to improve decision-making, drive operational efficiency, and enhance document quality across clinical trials.
TMF metrics and KPIs have long been a core part of ensuring inspection readiness. But as clinical trials evolve, the value of TMF data goes far beyond just compliance.
In this session, we'll explore how these metrics can play a much bigger role in driving trial innovation and optimization.
We'll start by taking a look at the metrics and KPIs most commonly used to achieve inspection readiness. Things like quality, completeness, and timeliness are essential benchmarks, but they're not without their challenges. We'll highlight where organizations typically run into roadblocks and what can be done to overcome them.
From there, we'll shift gears to explore more advanced ways of using TMF data. The session will show how analyzing TMF data differently can help companies embrace a risk-based approach, reduce QC cycle times, and decrease overall TMF management efforts.
Finally, we'll push the boundaries of what's traditionally expected from TMF metrics. We'll look at emerging use cases, such as how TMF data can support predictive analytics to improve trial conduct, unlocking hidden value beyond inspection readiness.
This session will challenge attendees to think differently about TMF metrics and KPIs. By shifting the focus from compliance to innovation, organizations can improve trial oversight, enhance decision-making, and ultimately optimize how trials are conducted.
At the end of this session, participants will:
- Understand the role of TMF metrics and KPIs beyond inspection readiness and how they can be leveraged for trial innovation and operational improvements.
- Learn practical ways to use TMF data insights to drive efficiencies, improve collaboration, reduce cycle times, and proactively manage risks.
- Explore emerging use cases for TMF data in areas such as predictive analytics, risk-based monitoring, and vendor management to optimize clinical trial conduct.
The TMF comprises a variety of clinical systems, each of them considered a TMF repository. TMF repositories are to be validated per the published EMA guideline titled “Guideline on the content, management and archiving of the clinical trial master file (paper and/or electronic)”, released in December 2018. The TMF Reference Model added the Computer System Validation tab in version 3.0 in 2015. The tab supports the organization of documentation associated with the validation of clinical systems utilized in a clinical trial. Since the EMA GCP Inspectors Working Group published the “Guideline on computerised systems and electronic data in clinical trials” in March 2023, the TMF RM Computer System Validation tab has become that much more visible, as it highlights the expectations of the guideline. This presentation is important for all TMF management professionals and clinical systems professionals who perform validation or assurance-related activities, so they can ensure that their companies comply with the EMA guidelines.
Session 3E: TMF Culture and Engagement (TMF Track)
The unusual combination of having numerous pharmaceutical companies within impossibly close geographical proximity has sparked something very interesting in Denmark, namely intercompany networks. One of these is the Danish TMF Network.
The Danish TMF Network has grown from a handful of people asking each other about the newfangled “electronic TMF” at a TMF conference to more than 30 members from over 15 Danish pharmaceutical companies, discussing topics such as recent inspection trends, process improvements and TMF engagement amongst non-TMF’ers.
The network provides a safe space to share ideas without fear of judgement and a place to have any questions answered, as there are more than 300 years of combined TMF experience among the members.
From the unlikely origin story to the process improvements the group has fostered within the individual companies, this presentation hopes to inspire other TMF’ers to unite across companies.
Panelists:
- Melissa De Swaef, argenx
- Georgiana Brahy, Parexel
- Liz Farrell, Agios
All clinical trial parties understand the importance of making new treatments available to patients globally. However, many overlook the significance of the TMF and its evolution from a paper archive into a valuable information resource. Maintaining an inspection-ready TMF at all times minimizes workload during inspections and provides a single source of truth for all study-related information.
TMF success goes beyond technology and process optimization. It is fundamentally shaped by the organization’s culture. A cohesive culture strategy that engages both end users and functional area leads is key to long-term TMF management success. A strong TMF culture creates awareness of its value and supports continuous inspection readiness.
This panel will explore the benefits and essential elements of a strong TMF culture, as well as how to establish it within an organization, including cascading it to external partners involved in a study.
Building an internal inspection preparation program within your TMF team can significantly enhance inspection success. This involves collaborating with various functions to understand their TMF processes and supporting an "Always Inspection Ready" (AIR) mantra with your study teams.
This presentation will explore how to partner effectively with your study manager to ensure adherence to TMF Plans, address overdue documentation issues, and identify areas for improvement. The program will also account for study-associated risks, providing a comprehensive approach to readiness.
We will discuss strategies to enable AIR, promote a positive TMF culture, perform test-drives with various functions and ensure the TMF accurately reflects the study. This proactive approach, starting well before a potential inspection, ensures your team is prepared, allowing you to pace yourself and peak at the finish line. Aim to elevate your readiness with a robust inspection preparation program.
- Vittoria Sparacio, Novartis
- Torsten Stemmler, BfArM
- Hobson Lopes, Regeneron
Afternoon Break
Session 4A: CDISC 360i
CDISC 360i is driving the shift to a fully digital standards ecosystem, eliminating traditional silos and enhancing data interoperability. This session will provide an overview of the 360i technical roadmap, focusing on the implementation of the Unified Study Definitions Model (USDM) and advancements in biomedical concepts, rules, and data exchange standards. Learn how linked, machine-readable standards are enabling seamless data flow from study design through regulatory submission. Discover how collaboration with industry stakeholders and the adoption of automation and AI are accelerating clinical research. Join us to explore how 360i is driving greater efficiency across the clinical development lifecycle.
This presentation provides a deep dive into the design and implementation of the activity concept within OpenStudyBuilder, showcasing its role in the broader context of clinical study automation as envisioned by CDISC 360i. We will compare OpenStudyBuilder's graph-based model with CDISC Biomedical Concepts, focusing on the rationale behind design choices, including structural adaptations and enhancements. Additionally, we will discuss challenges encountered during development, highlighting lessons learned and future opportunities to refine the approach.
GSK is pioneering a CDISC-native, end-to-end (E2E) knowledge graph approach to accelerate clinical trials, aligning with CDISC 360i and open-source initiatives like Pharmaverse. Their strategy involves: 1) unifying models by linking an implementation ontology based on ODMv2 to Analyses and USDM study definition for seamless protocol design and implementation; 2) developing a reusable analysis framework separating definitions from outputs for faster study delivery; and 3) committing to E2E data capture and automation from study design to results. GSK invites industry collaboration to shape and refine CDISC 360i, fostering a more interoperable clinical data ecosystem for faster drug development.
Session 4B: CDISC Foundational
The Study Data Tabulation Model (SDTM) provides methods to define the complex interconnection of data as dataset/record relationships through the Related Records (RELREC) special-purpose dataset. At AstraZeneca, in the CVRM therapeutic area, we have initiated work with the RELREC dataset to standardize, simplify, guide, and possibly automate the handling of these relationships. Key questions include:
- What values are appropriate to ensure unique links?
- What are the shortcomings of designing linking variables based on each RAW dataset as a standalone entity?
- How can a sponsor define and maintain relationships that function across clinical studies?
This presentation will describe our approach to addressing these questions and the strategy we have adopted to maintain useful data relationships, even when one is not in full control of all sponsor standards or study needs. Furthermore, we explore possible automation for managing related records and stress-test our RELREC implementation.
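For readers less familiar with RELREC, a minimal sketch of a record-level link follows; all values are invented for illustration. The uniqueness of the IDVAR/IDVARVAL pair per record is exactly what the first question above concerns.

```python
# Minimal RELREC sketch: linking one adverse event (AE) record to the
# concomitant medication (CM) given to treat it. Values are invented.
import pandas as pd

relrec = pd.DataFrame([
    # The shared RELID ("AECM01") ties the two records together;
    # IDVAR/IDVARVAL point at the exact record via its sequence number.
    {"STUDYID": "ABC123", "RDOMAIN": "AE", "USUBJID": "ABC123-0001",
     "IDVAR": "AESEQ", "IDVARVAL": "2", "RELTYPE": "", "RELID": "AECM01"},
    {"STUDYID": "ABC123", "RDOMAIN": "CM", "USUBJID": "ABC123-0001",
     "IDVAR": "CMSEQ", "IDVARVAL": "5", "RELTYPE": "", "RELID": "AECM01"},
])
# RELTYPE is left empty for record-level links; it is populated (ONE/MANY)
# only for dataset-to-dataset relationships.
print(relrec)
```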
In the current regulatory landscape, harmonizing clinical trial data according to industry standards is essential at every stage from data collection to analysis, including non-CRF data, which can account for up to 70% of trial data. Using CDISC-based controlled terminologies is key for achieving consistent data across studies. However, one-dimensional codelists do not ensure sufficient data harmonization, especially when converting unstructured scientific documentation into standardized formats.
This presentation outlines a Value Level Metadata (VLM)-based solution for non-CRF data collection at Roche. Initially, manually created VLM reports accessed through a metadata repository are expanded both scientifically and technically, integrating into machine-readable metadata workflows. Using a generic VLM ontology and meta-programming approach, Roche aims to automate authoring tools and data transfer specifications, enhancing automation and machine readability. Challenges of maintaining extensive sets of VLM, proposed technical solutions for corner cases, and benefits for the generation of Biomedical Concepts are explored.
Session 4C: Academia
Many pharmaceutical companies and Contract Research Organizations have established internal systems and processes to comply with CDISC standards in Japan. However, CDISC standards have yet to be widely adopted in academia. One reason is that academia does not have enough opportunities to be directly involved in the regulatory submission process, yet academic staff would still like to acquire knowledge and skills in CDISC standards. In December 2022, we established a specific team within the CDISC Japan User Group (CJUG) SDTM team, consisting mainly of clinical research support staff affiliated with academia. This team's primary purpose is to develop professionals capable of implementing CDISC standards in academia by enabling beginners without prior experience to acquire CDISC knowledge and skills through a mock clinical trial protocol and the creation of SDTM datasets. This presentation outlines specific activities and achievements in Japan, illustrating how to implement CDISC standards in academia.
Current and emerging CDISC standards have an important role in the successful long-term retention, preservation and access to clinical data. Use of CDISC standards during the retention period can support both internationally recognised digital preservation good practice and regulatory requirements such as ALCOA+ data integrity. Yet this seems to be little recognised in the community and is rarely cited to justify their adoption. This presentation will: (a) review retention and archiving requirements, for example as described in ICH E6 (R3); (b) present the long-term retention and preservation benefits of CDISC open standards for data and metadata; (c) show how CDISC standards align with long-term digital preservation good practice, for example from NARA and the DPC; and (d) discuss how planning for long-term retention and data management from the outset of a trial can reduce both costs and risks over the long term when trial data and records are archived and preserved.
Research Electronic Data Capture (REDCap) is a free, user-friendly, web-based interface that requires no technical background to use, designed specifically for academic, public health, and non-profit institutions. The REDCap consortium comprises thousands of institutions and millions of users and studies, representing a huge pool of data that could be tapped to support clinical trials. Yet academia faces many challenges in adopting CDISC standards for research. In recognition of this, CDISC has partnered with REDCap to help bridge the gap. This review of a healthy volunteer research study delves more deeply into the REDCap and CDISC connection, as well as outlining the methods, surprises, and challenges of mapping the resulting data into SDTM. In conclusion, with patience, REDCap data can be successfully mapped, but additional outreach to academia on the existence and use of standards may be beneficial.
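To make the mapping task concrete, here is a minimal sketch of converting a REDCap export into a few SDTM DM variables. The REDCap field names and codes are hypothetical; every project defines its own, which is precisely why the mapping effort described above is needed.

```python
# Illustrative REDCap-to-SDTM DM mapping; field names and codes are invented.
import pandas as pd

redcap = pd.DataFrame([
    {"record_id": "001", "sex": "1", "enroll_date": "2024-03-05"},
    {"record_id": "002", "sex": "2", "enroll_date": "2024-03-09"},
])

SEX_MAP = {"1": "M", "2": "F"}  # project codes -> CDISC controlled terminology

dm = pd.DataFrame({
    "STUDYID": "HV-001",
    "DOMAIN": "DM",
    "USUBJID": "HV-001-" + redcap["record_id"],
    "SEX": redcap["sex"].map(SEX_MAP),
    "RFSTDTC": redcap["enroll_date"],  # already ISO 8601 here; else reformat
})
print(dm)
```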
Session 4D: Risk Based Approaches (TMF Track)
- Karen Roy, CDISC / Epista
- Torsten Stemmler, BfArM
- Joanne Malia, Regeneron
- Paul Carter, Montrium
Session 4E: Fundamentals of TMF (TMF Track)
The TMF is more than just a regulatory requirement—it’s a key tool for running efficient and compliant clinical trials. This presentation highlights how treating the TMF as a strategic asset, rather than just a checklist, can improve trial operations and outcomes.
Key topics include best practices for using eTMF data performance metrics to drive continuous improvement. Attendees will learn how a well-managed TMF helps meet regulatory requirements, speeds up trial timelines, and strengthens compliance—ultimately setting organizations up for long-term success.
The TMF Reference Model is known for providing standardized nomenclature and structure for our TMFs. As a well-adopted tool in the industry, it is used by many sponsors as the foundation for standardization. But what else is it, or what else can it be? Imagining the full potential of the model, it can fulfil several roles, referred to as “identities”: a source of information for multiple stakeholder groups, a source of truth in managing processes, a foundation for automation and record exchange, and a facilitator of necessary cross-functional communication. Each of these roles/identities and its potential will be described, and prerequisites for each will be recommended. The aim is to provide ideas for how a complex Excel structure can be used to “bring TMF topics to life”.
TMF completeness continues to be at the forefront of an inspection-ready TMF. The methodology used to assess TMF completeness needs to be carefully designed to ensure we don’t miss aspects that an eTMF application cannot consider on its own, and then supplemented with processes.
Understanding the eTMF application’s capabilities and limitations is the first step in designing the methodology. Once it is designed, setting up efficient processes using the CIMPD framework to identify and plug the TMF completeness gaps is extremely important.
Identifying the right resources and the right time to identify the TMF completeness gaps, along with optimal utilisation of the eTMF application features, significantly impacts the profitability of a study along with its quality.
Interchange Evening Networking Event (MUST be Registered for the Evening Event to Attend)
Session 5A: CDISC Open Rules
The integration of CDISC CORE in a statistical compute environment, such as SAS, allows analysts to take advantage of applying Conformance Rules to clinical submission domains. The process involves expressing current CDISC Conformance Rules in a common specification format, which is then loaded into the CDISC Library. Each Conformance Rule requires the development of an executable component to facilitate its application. Combining the functionalities of the Python-based CDISC CORE engine with the capabilities of SAS enables analysts to work with an analytics language of their choice to create validation reports.
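One simple integration pattern is to shell out to the CORE engine's command-line interface and read back the report it produces. The flags below are assumptions based on the open-source cdisc-rules-engine project and should be verified against the installed version.

```python
# Sketch of driving the Python-based CORE engine from a compute environment.
# CLI flags are assumptions; check the engine's documentation for your version.
import subprocess

cmd = [
    "python", "core.py", "validate",
    "-s", "sdtmig",        # standard to validate against (assumed flag)
    "-v", "3-4",           # standard version (assumed flag/format)
    "-d", "./submission",  # folder containing the datasets to validate
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)  # the engine writes a conformance report to disk
```

From SAS, the same command line can be issued with the X statement or SYSTASK, after which the generated report is imported back into SAS for review.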
To deliver high-quality datasets fast in this evolving industry, it is crucial to continue to ensure compliance with relevant data conformance rules and regulatory requirements.
SGS invests in the CDISC Open Rules project, which aims to deliver executable data conformance rules for each foundational standard. We’ve integrated the Rule Editor and created custom rules; however, fully implementing CDISC Open Rules in-house requires significant effort. Collaboration between end users and developers is crucial for successful implementation, despite challenges like misaligned expectations and communication gaps.
We will share our experiences, detailing tools, processes, and validation procedures for implementing CDISC Open Rules. This will emphasize the importance of teamwork and open dialogue with CDISC. In addition, we will address common challenges and share our strategy to foster cross-departmental collaboration and streamline implementation.
Overall, we want to provide insights into overcoming implementation challenges and highlight the benefits of adopting CDISC Open Rules industry wide.
In November 2023, FDA and CDISC started a three-year research collaboration agreement (RCA). The purpose of the RCA is the development and maintenance of FDA business rules as part of the CORE open-source project. CORE volunteers create specifications that become machine-executable by writing the rules in YAML and storing them in the CDISC Library. For the community, this means a single version of the rules that cannot be interpreted in different ways; for FDA, it means that all stakeholders will use the same rules, independent of the application used to run them.
In this co-presentation we want to show the process and development of the rules. Furthermore, we will show the community how to implement these rules in existing software and how companies can develop their own set of (especially quality assurance) rules, and add these to an existing CORE implementation.
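To give a feel for what "writing the code in YAML" means, the mock-up below sketches the general shape such a rule might take: an identifier, the authority that owns it, the scope it applies to, an executable check, and an outcome message. This is an illustrative mock-up, not a published FDA rule, and key names may differ from the current CORE schema.

```yaml
# Illustrative mock-up of a CORE rule's shape; not a published rule.
Core:
  Id: CORE-XYZ001          # placeholder identifier
  Status: Draft
Authorities:
  - Organization: FDA      # the authority that owns the business rule
Scope:
  Domains:
    Include: [AE]
Check:
  all:
    - name: AESTDTC
      operator: empty      # flag AE records with a missing start date
Outcome:
  Message: AESTDTC is missing
```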
Session 5B: Analysis Results Standard
Session 5C: Real World Data
Session 5D: TMF Interoperability (TMF Track)
Panelists:
- Anne-Nöelle Charles, GSK
- Jay Smith, TransPerfect
- Jamie Toth, BeiGene
- Jaime Chang, Biogen
A panel discussion with SMEs (pharma, CROs, Vendors) that have (partially) successfully implemented TMF Interoperability:
- Seamless Data Exchange: The different systems involved in clinical trials (e.g., eTMF, regulatory, safety, supply, EDC; not only eTMF and CTMS, which is the common pairing) can share data effortlessly without the need for extensive manual re-entry or reconciliation. Data flows smoothly between systems, ensuring that updates in one system are reflected in the others.
- Standardization: Data formats and structures are consistent across systems. This standardization facilitates easier data mapping, integration, and reporting.
- Real-time Access: Stakeholders, including sponsors, CROs, and regulatory authorities, can access the most current and complete TMF data in real time, facilitating better decision-making and faster responses to issues, whichever TMF system is used (not only the eTMF).
- Enhanced Collaboration: Multiple stakeholders, including internal teams and external partners, can collaborate more effectively. Document sharing, review, and approval processes are streamlined, reducing delays and improving communication.
- Comprehensive Reporting: Integrated systems provide comprehensive reporting and analytics capabilities, enabling better oversight, monitoring, and management of the TMF as a whole. This includes dashboards, key performance indicators (KPIs), and other tools to track progress and identify issues. As a result, the TMF metrics (completeness, timeliness, quality) reflect the entire TMF, not only the primary eTMF.
- User-friendly Interface: The interoperable system should have an intuitive and user-friendly interface that allows users to easily navigate and manage documents, workflows, and data without extensive training.
Session 5E: TMF Management (TMF Track)
The transition of Trial Master Files (TMFs) in ongoing studies, often termed "rescue" studies, presents a complex process crucial for ensuring study continuity.
This process may occur due to changes in sponsor ownership or shifts between Clinical Research Organizations (CROs). Effective TMF transitions require meticulous planning and execution, encompassing several key areas. These include technical review, focusing on transfer methods and audit trail management; timeline considerations, emphasizing coordination and resource planning; comprehensive risk assessment and mitigation strategies; alignment and mapping of TMF content with current operational frameworks; and development of robust review strategies for transferred content.
Additionally, learning from past transitions and ensuring smooth completion of the transfer process are vital. By addressing these critical aspects, stakeholders can effectively navigate the complexities of rescue studies, maintaining the integrity and continuity of clinical trials during TMF transitions. This approach equips professionals with the knowledge necessary to manage these challenging scenarios successfully.
During mergers and acquisitions, the Trial Master File (TMF) is a critical deliverable for a successful transition. This presentation provides essential tools for a seamless TMF transfer.
Key stages include:
1. Blueprint: Understand TMF components' importance in acquisitions
2. Framework Construction: Learn to organize and validate TMF documents for transfer readiness
3. Connecting Pieces: Explore communication and collaboration strategies for smooth TMF handovers, ensuring a unified transition.
4. Avoiding Failures: Identify common challenges in TMF acquisition and strategies to overcome them, maintaining TMF integrity.
5. Grand Finale: Celebrate TMF integration and explore leveraging it for future success.
Whether experienced or new to acquisitions, gain actionable insights and confidence for successful TMF transfers.
Ensuring the necessary documentation is in place and up to date is not usually a programmer’s favorite task, especially when what is required in the guidelines is a bit vague. In this presentation, a programmer will present key considerations for Biometrics CROs to ensure that their parts of the TMF are managed proactively and in a timely manner to maintain inspection readiness.
A cross-functional team in Cytel was brought together to create the essential records process, ensuring it was as easy as possible to implement for the teams working on the projects. We used the TMF Reference Model as the basis for our Project Specific Essential Records Filing Plan, to provide details of exactly what documents were needed for each function.
We will cover what was straightforward, what was more challenging, and what we have learned on the journey so far.
Morning Break
Session 6A: Regulatory Submissions
In a significant collaboration, seven leading vaccine companies — AstraZeneca, GlaxoSmithKline, Johnson & Johnson, Merck, Moderna, Pfizer, and Sanofi — have formed the Vaccines Industry Standards Group (VISG). Over the past two years, this initiative has focused on harmonizing interpretations of regulatory submission guidance and recurrent feedback, as well as CDISC data standards. The group recognizes that aligning the understanding of requirements — such as participant diary data collection and the submission of reactogenicity and efficacy data — accelerates time to market and benefits global health.
This unified approach could facilitate future collaboration with Health Authorities and CDISC, aiming to update the CDISC Vaccines TAUG to meet current Health Authorities' expectations, thereby ensuring clarity and consistency in submission standards.
Our collaborative model can serve as a blueprint for other therapeutic areas within the pharmaceutical industry, demonstrating how organizations can work together to streamline regulatory processes while maintaining a competitive edge in product innovation.
The integrated summary of safety (ISS) is a critical component of a submission to the FDA. For the ISS, data from different studies are pooled and harmonised to conduct the integrated analyses. Different strategies can be used to create the integrated datasets.
The ISS may be accompanied by integrated SDTM/ADaM Define-XML files and integrated Reviewer's Guides (icSDRG and iADRG) to provide additional context and information about the integrated SDTM and ADaM datasets.
Based on a use case, we’ll explain in this presentation the approach we took to create CDISC compliant integrated datasets. Furthermore, we’ll share our experiences regarding the creation of an SDTM and ADaM Define-XML for integrated datasets as well as the icSDRG and the iADRG.
The FDA's Real-Time Oncology Review (RTOR) program accelerates the review of oncology clinical trials by allowing for the early submission of top-line results and datasets. In return, RTOR requires the submission of Analysis Data Model (ADaM) datasets which closely follow data specifications provided by the FDA Oncology Center of Excellence (OCE) and Office of Oncologic Diseases (OOD) Safety Team.
This presentation explores challenges related to the implementation of these ADaM specifications, with special focus on the non-standard Adverse Events Analysis Dataset for Cytokine Release Syndrome (CRS) and Neurotoxicity (NT) – ADCRSNT. We will describe the specifications the FDA provides for ADCRSNT; the additional data that sponsors need to prepare to support the analysis of CRS and NT events; and the innovative solutions employed at AstraZeneca to develop robust standards for this uniquely challenging dataset. Our discussion will highlight the importance of a well-documented approach to ensure seamless compliance with RTOR guidance.
Session 6B: Standards in Action
Session 6C: ADaM
Estimands, a concept established in ICH E9 (R1), are increasingly used in clinical trials and required by regulatory authorities. To provide guidance on the implementation of estimands and intercurrent events (ICEs) in ADaM programming, we developed an example trial. This trial included multiple ICEs per subject and different estimands. We also included different imputation methods to analyze these estimands.
We will present the definition of estimands, including ICEs, used in this example trial and then explain the programmatic implementation in more detail, i.e., how to structure ICE datasets and what variables are needed. Furthermore, we present how to set up the estimand-related efficacy ADaM dataset with additional variables and records, allowing for thorough analysis of estimands.
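As a rough illustration of what "structuring an ICE dataset" can mean, the sketch below shows one plausible subject-level shape. USUBJID and ASTDT are standard ADaM variables; ICETYPE and ICESTRAT are invented placeholders, not the variables from the talk.

```python
# One plausible shape for a subject-level intercurrent event (ICE) dataset.
# ICETYPE and ICESTRAT are hypothetical variable names used for illustration.
import pandas as pd

adice = pd.DataFrame([
    {"USUBJID": "XYZ-001", "ICETYPE": "RESCUE MEDICATION",
     "ASTDT": "2024-04-02", "ICESTRAT": "TREATMENT POLICY"},
    {"USUBJID": "XYZ-001", "ICETYPE": "TREATMENT DISCONTINUATION",
     "ASTDT": "2024-05-10", "ICESTRAT": "HYPOTHETICAL"},
])

# The efficacy dataset can then reference the first ICE per subject, e.g. to
# censor or impute values after it, depending on the estimand's strategy.
first_ice = adice.groupby("USUBJID")["ASTDT"].min().rename("ICE1DT")
print(first_ice)
```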
ADaM datasets are essential for clinical study analyses. Their structure and derivation algorithms are often documented in ADaM specifications before programming begins. However, at this stage, key documents may still be evolving, and clinical data unavailable, potentially leading to incomplete or inaccurate specifications.
An alternative approach without predefined specifications involves conducting a test-run analysis using actual study data, guided by a central ADaM model aligned with CDISC standards. Statistical programmers develop the datasets based on a stable SAP version while an independent validator performs parallel derivations, ensuring unbiased results. Differences are addressed through discussion and refinement.
Submission-ready metadata are generated in define.xml format alongside this first test run. Dataset structure and codelists are extracted directly from the ADaM datasets, while the developer adds study-specific derivations.
By integrating dataset development, validation, and metadata generation into a single workflow, this approach supports the creation of high-quality ADaM datasets while adapting to evolving study requirements.
The variability in data representation across companies presents significant challenges in reviewing and analyzing ADaM datasets. Standardizing pharmacokinetic (PK) data for analysis with software like Phoenix WinNonlin is crucial. The ADaM Implementation Guide (IG) for Non-compartmental Analysis (NCA) Input Data addresses this by using the Basic Data Structure (BDS) with a subclass for NCA.
This guide introduces new dosage-based flags (e.g., NCAXFL, NCAwXRS, PKSUMXF, METABFL) that enhance existing BDS variables, allowing for more precise PK analysis. By detailing necessary variables and standardizing naming conventions, the guide streamlines the analysis process.
While we are still exploring the complexities of the ADNCA IG, the work so far has given us valuable knowledge and practical experience, which we believe can benefit others navigating similar challenges. We are eager to contribute to the broader discussion and help advance the collective understanding of this IG.
Session 6D: AI in TMF Management (TMF Track)

- Yen Phan, elderbrook solutions
- Martin Rother, Daquma
- Martin Hausten, Boehringer Ingelheim
Session 6E: Partnerships in TMF Management (TMF Track)
Emerging clinical trial sponsors often rely on Clinical Research Organizations (CROs) or other vendors for TMF management but remain responsible for oversight and compliance. Biotech organizations tend to underestimate the importance of early TMF oversight, leading to costly remediation at trial closeout. This session highlights the distinction between TMF management and oversight while offering practical ways to align Sponsor-CRO expectations. From Request for Proposal (RFP) to Archive, attendees will learn the value of integrating TMF requirements into study risk assessments, contracts, budgets, procedures, and governance reviews to prevent compliance issues, budget renegotiations, and relationship conflicts. By proactively assessing TMF needs, sponsors and CROs can reduce risks, ensure audit readiness, and avoid last-minute resource burdens.
The electronic Trial Master File is a cornerstone of clinical trial management, serving as a centralized repository for all essential documents required to ensure regulatory compliance and good clinical practice. User security is crucial, given the diverse stakeholders involved. The presentation highlights how security profiles and permissions are tailored to user roles, enhancing operational control and compliance. Customizable training ensures users are proficient, and access management procedures minimize risks, such as automating the classification of blinded documents and restricting unauthorized sharing. Security also extends to a robust document quality control process, ensuring that integrity and accessibility of trial documents are consistently maintained. A dedicated helpdesk is available to assist users, and continuous improvement efforts help keep the eTMF inspection-ready, ensuring trial documentation remains secure and compliant.
At the end of a study, CRO-to-sponsor eTMF transfers consumed significant time and effort from internal IT teams, clinical teams, and validation and QA departments.
This case study outlines how argenx introduced a robust eTMF Migration Factory solution that:
- reduced internal IT effort, by introducing automated QC checks and verifications
- reduced clinical team effort, by using already agreed patterns and automating eTMF transfer verifications
- reduced validation and quality effort, by introducing technology which could re-use and re-execute agreed and validated migration business logic.
Overall, the results enabled argenx clinical teams (and others) to save significant valuable time to focus on core business activities.
Lunch
Session 7, Track A & B: AC/BC - Highway to Automation
CDISC Biomedical Concepts (BCs) are structured, standardized units of knowledge that can be used to enhance data consistency and facilitate automation in clinical research. They are designed to fill gaps in existing standards by adding semantics, variable relationships, and detailed metadata needed to support the development of digital workflows in clinical research. This presentation will provide an update on the progress of BC development as well as the role that BCs play in CDISC’s new 360i initiative, a project aimed at transforming the way we develop and use standards within clinical research, creating connected and interoperable information that enables automation, enhances data integrity, and accelerates innovation.
CDISC Biomedical Concepts (BCs) provide standardized templates for clinical observations, but the current BC library's limited coverage hinders widespread adoption. Creating new BCs manually requires significant expertise and effort. We present an AI-powered solution combining Large Language Models (LLMs) with the NCI Thesaurus to accelerate BC creation. Our three-stage pipeline automatically extracts candidate BCs from Therapeutic Area User Guides (TAUGs), matches them against the NCI Thesaurus using vector similarity, and transforms them into draft BC definitions using the Data Element Concept template. Developed in collaboration between Lindus Health and CDISC, this system was tested using the Breast Cancer TAUG. The pipeline is therapeutic area agnostic and requires minimal effort to process other TAUGs, potentially enabling rapid expansion of the BC library while maintaining quality through expert validation. This approach advances CDISC's vision of creating more connected standards.
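The sketch below illustrates the matching stage of such a pipeline: candidate concepts extracted from a TAUG are compared against thesaurus terms by embedding similarity. The model choice and the tiny in-memory "thesaurus" are illustrative only, not the Lindus Health/CDISC implementation.

```python
# Sketch of matching candidate concepts to NCI Thesaurus terms by vector
# similarity. Model and term list are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

ncit_terms = ["Systolic Blood Pressure", "Tumor Size", "Body Weight"]
ncit_vecs = model.encode(ncit_terms, normalize_embeddings=True)

candidates = ["systolic BP measurement", "largest lesion diameter"]
for cand in candidates:
    vec = model.encode([cand], normalize_embeddings=True)[0]
    scores = ncit_vecs @ vec              # cosine similarity on unit vectors
    best = int(np.argmax(scores))
    print(f"{cand!r} -> {ncit_terms[best]!r} (score {scores[best]:.2f})")
```

Low-scoring matches would be routed to expert validation rather than accepted automatically, consistent with the quality controls the abstract describes.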
In January 2025, a CDISC working group was established to define and model Analysis Concepts, an important step toward enabling end-to-end automation within the CDISC 360i framework.
The current Unified Study Definitions Model (USDM) defines objectives, endpoints, and the schedule of activities (SoA), including Biomedical Concepts (BCs). While these Biomedical Concepts facilitate downstream automation of study setup and data collection, a significant gap remains in the metadata required for derived and analyzed data and their relationship to USDM-defined endpoints. As USDM currently supports eProtocol creation, a natural extension toward supporting the electronic Statistical Analysis Plan (eSAP) is required.
This presentation will provide a status update on the working group's progress in defining and modeling Analysis Concepts, highlighting key developments and future directions in this standardization effort.
The potential benefits of biomedical concepts have been touted for many years, and as an industry we are finally on the cusp of implementing them more widely. At GSK, we have been experimenting with how a proto-concept, typically in the form of CDISC terminology, can be used as a bridge between entities in our protocols, collection, and SDTM value-level definitions. More recently, we are actively pursuing an automation agenda which includes protocol digitisation and wide-scale deployment of fully metadata-driven analysis results and analysis output creation, which further presses the need for biomedical concepts and additionally highlights the need for the industry to align on concept models for study design and analysis. This presentation will share examples of work so far, and some of the upcoming challenges which we hope to address through industry collaboration around CDISC 360i.
- Bess LeRoy, CDISC
- Amiel Kollek, Lindus Health
- Kirsten Langendorf, data4knowledge ApS
- Warwick Benger, GSK
- Igor Klaver, GSK
Session 7C: Applied Standards Governance
Session 7D+E: The Future of TMF (TMF Track)
This session will provide TMF community members with a beginner’s guide to the goals and current progress with digital data flow (DDF) and the ICH M11 digital protocol standard. We will also show how the TMF standards will intersect with DDF as we design for the digital TMF.
Panelists:
- Nick Hargaden, Moderna
- Heather Childs, PPD
- Jim Horstmann, Veeva
Afternoon Break
Session 8: Closing Plenary
WHO published pivotal new guidance in September 2024. Based on a global World Health Assembly resolution on Strengthening Clinical Trials, WHO provides concrete recommendations on how to reform clinical trials to better address patient needs, develop safe and effective interventions for under-represented populations, and improve efficiency in clinical trial design and approval processes. The speaker will outline the new framework and the potential role of the CDISC community.