2024 Japan Interchange Program
Please click on sessions listed below to view all presenters and topics within each session.
Session 1: Opening Plenary and Keynote Presentation
Session 2: Second Plenary - Updates from CDISC
Session 3: CDISC Implementation and Use Cases
[Background] Pharmaceutical companies and CROs have achieved CDISC compliance because it is required for regulatory submissions. However, academia, which does not submit applications for marketing approval, has been slow to adopt CDISC standards. Academic institutions in Japan lack both manpower and research funds, making it difficult for them to prioritize CDISC compliance. Staff at Academic Research Organizations (AROs) are interested in CDISC, yet this interest has not translated into implementation of the standards. AROs in Japan are scattered across the country, making it challenging for staff to attend on-site seminars or meetings. Therefore, to promote CDISC among ARO staff throughout Japan who are interested but find it difficult to progress further, we began uploading comical CDISC videos to YouTube in February 2023. To date, 33 videos have been created.
[Methods] We analyzed viewership of the 33 videos described above. In addition, a survey was conducted among members of the CDISC Japan User Group (CJUG).
[Results] As of February 16, 2024, the most-viewed video is "01. What is CDISC?", followed by "02. CDISC Even in Academia." Among the recently uploaded videos, "30. Little Tips About SDTMIG" received a notable number of views. The survey results were generally positive. The preferred video length was 3 to 5 minutes. While respondents personally enjoyed being able to watch the videos like comics, some expressed reluctance to watch them during work hours.
Background: A Japanese research group has pooled data from 17 cohort studies for the Evidence for Cardiovascular Prevention from Observational Cohorts in Japan (EPOCH-JAPAN) study, which focuses on cardiovascular epidemiology.
Objective: This study aimed to develop integrated databases incorporating repeated measurements from eight selected cohort studies in a global data standard format.
Methods: We investigated documents relating to pooled cohort data and consulted CDISC experts. Subsequently, we obtained data from each cohort, analyzed their data structures, and created integrated databases.
Results: Although there is draft guidance on using SDTM for observational studies that recommends pre-conversion to the SDTM format, we found little benefit in this approach for our project. Hence, we directly generated ADaM-like datasets for each cohort and integrated them by vertical merging.
Discussion: Multiple cohort datasets were integrated to generate key evidence for health promotion. The CDISC standards, including ADaM and controlled terminology, can be partially applied to cohort study data.
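For illustration only, a minimal SAS sketch of the vertical-merging step described above might look as follows (dataset and variable names are hypothetical; this is not the study's actual program):

/* Stack ADaM-like datasets from each cohort and keep a cohort identifier. */
data adcv_pooled;
   length cohort $20;
   set adcv_c1 (in=in1)
       adcv_c2 (in=in2)
       adcv_c8 (in=in8);   /* remaining cohorts follow the same pattern */
   if in1 then cohort = "COHORT_01";
   else if in2 then cohort = "COHORT_02";
   else if in8 then cohort = "COHORT_08";
run;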
The presentation will primarily focus on the functionality of our SAS macro, detailing its input requirements and the resulting output. We will explore how the macro streamlines our daily operations, enhancing efficiency and minimizing manual tasks. We will also highlight how bookmarking becomes effortless, even with numerous pages and multiple visits, all achievable within a few minutes.
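As general background (a minimal sketch only, not the macro being presented), ODS PDF in SAS can generate bookmarks automatically; the report data and labels below are illustrative:

ods pdf file="listing.pdf" bookmarkgen=yes bookmarklist=show;

ods proclabel "Visit 1";                       /* first-level bookmark */
proc report data=sashelp.class contents="Demographics";
   column name sex age;
run;

ods proclabel "Visit 2";
proc report data=sashelp.class contents="Demographics";
   column name sex age;
run;

ods pdf close;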
Since the formal implementation of the requirement in 2020, pharmaceutical companies in Japan have been submitting e-Data (patient-level study data) to PMDA at the time of NDA submission.
The initial notifications and the practical and technical guidance documents contained several requirements specific to PMDA, which forced sponsor companies to establish additional domestic processes to comply with them, including PMDA meetings and validation/documentation.
Lilly has developed internal processes, both locally and globally, to meet these requirements accurately and efficiently. However, these regional differences posed significant challenges to filing NDAs simultaneously across multiple regions, including Japan.
Recently, after several years of successfully receiving e-Data from many sponsor companies, PMDA has been revising its guidance documents to be more “harmonized” with those of its global counterparts.
In this presentation, I trace the history of e-Data regulations in Japan and explain how Lilly responded to these changes to accelerate drug development.
Session 4: Regulatory & Healthcare Interoperability
HL7 Vulcan is an HL7 FHIR Accelerator with the vision of connecting clinical care and clinical research through data interoperability with FHIR. It comprises 45+ member organizations internationally, spanning academic researchers, implementers, biopharmaceutical industry sponsors, standards development organizations including CDISC, consortia, and others. Vulcan runs FHIR Connectathons and creates FHIR Implementation Guides to accelerate the use of FHIR for research purposes. Much of the work involves leveraging EHR data; however, there is also work on structuring formerly unstructured information that is often consumed in documents. In this session, an overall introduction will be provided, followed by details of one or two projects, explaining how each group is managed and how Vulcan volunteer members and external partners work together. The audience will also learn how CDISC and HL7 Vulcan collaborate.
Session 5: Regulatory Updates, Part II
Session 6: Data Science
JSON (JavaScript Object Notation) is widely used to send data from a server to a web page. Dataset-JSON is a cross-collaboration project between CDISC and PHUSE and will become a new standard for storing and transporting clinical data. The current transport format (SAS XPT) is outdated and imposes limitations on submission data because it lacks modern features. Dataset-JSON will not only overcome these limitations but also bring better efficiency, consistency, and re-usability as new technology is adopted over the longer term. This presentation shares hands-on experience from the Dataset-JSON submission pilot workshop led by CDISC, PHUSE, and the FDA, and describes the possibilities for adopting Dataset-JSON in future submissions in Japan, along with the technical and pragmatic challenges.
This presentation explores practical SAS applications for handling Dataset-JSON in clinical trials. It emphasizes PROC JSON and the JSON libname engine for efficient data exchange, surpassing the limitations of SAS v5 XPT files. A proposed method, which leverages SAS Extended Attributes (available since version 9.4), enables easy creation and reading of Dataset-JSON structures and addresses issues such as storing originator information and variable types, allowing richer data exchange. Overall, this approach facilitates seamless interaction between SAS and other programming languages, fostering active information exchange among system engineers and programmers through Dataset-JSON.
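To illustrate the SAS features referred to above (a sketch only, not the presenter's proposed method; the WORK.DM dataset and the attribute names are assumptions):

/* Attach dataset- and variable-level metadata as extended attributes (SAS 9.4+). */
proc datasets lib=work nolist;
   modify dm;
   xattr add ds  originator="Example Sponsor";
   xattr add var usubjid (datatype="string");
quit;

/* Write the dataset out as JSON with PROC JSON. */
proc json out="dm.json" pretty;
   export work.dm;
run;

/* Read the JSON back in through the JSON libname engine and inspect the members it creates. */
libname injson json "dm.json";
proc contents data=injson._all_; run;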
End-to-End (E2E) Data Standards Governance provides a framework aligned with industry standards and regulatory requirements. To govern clinical trial standards efficiently, align with the F.A.I.R. principles, and enable end-to-end data transformation, a metadata registry (MDR) is expected to play a crucial role.
E2E Data Standards Governance drives the following core expectations for clinical trials:
- Focusing on activities essential to the study and eliminating nonessential data collection
- Aligning with CDISC’s Standards Development Guiding Principles
- Generalizing the standards content and expanding the coverage of the standards
- Reducing the time spent discussing how data are captured or displayed versus what the data tell us
- Aggregating/integrating data with minimum effort
- Considering non-standard/study-specific deliverables in the context of scientific and regulatory benefit and using them to influence future standards
- Proactively addressing the impact of changes or new standards
We will share an overview of E2E standards governance, its challenges, and the key business requirements for an MDR.
Session 7: TMF Topics
Inspection readiness is a driving factor in TMF management. Each sponsor/CRO defines their TMF processes to ensure inspection readiness and meet global & local regulatory requirements from agencies such as EMA, MHRA, FDA and PMDA.
Huge efforts have been made by the agencies and the pharma industry to meet TMF requirements globally and to simplify and improve inspection readiness processes; the TMF Reference Model is the best example and has significantly contributed to the harmonization of global TMF standards. However, many sponsors/CROs working across regions still struggle to strike the right balance between meeting diverse regulatory requirements and optimizing TMF management.
In this session, we’ll look at Chugai’s approach to TMF management, with examples of how they have addressed these challenges, focusing on staying compliant while simultaneously driving process efficiency and operational excellence.
This session provides a deep dive into data enablement as a foundational principle of effective eTMF management. Specifically tailored to tackle the challenges of eTMF data and content management, the session will address the explosive data landscape of the life science industry and shed light on how implementing a data enablement strategy can unlock significant business benefits for organizations looking toward long-term value realization of their eTMF systems.
Using real-world cases, the session will provide examples of the use of data enablers, including the TMF Reference Model, to enhance standardization, automation, and data quality in eTMF transfers, migrations, and M&A activities.
The session will open with an overview of the evolution of data generation and data use in life sciences and discuss the foreseen challenges of efficiently managing the exponential pace of change within our industry. It will then provide an overview of the key components of a data enablement strategy and how it can be practically applied to the eTMF, transforming it from a mere document repository into a strategic tool that enhances operational efficiency, ensures compliance, reduces risks, and accelerates regulatory timelines.
With a particular focus on eTMF data governance through the TMF Reference Model and on data quality as a key efficiency driver, the session will conclude with a walkthrough of three real-world cases in which the application of data enablement principles significantly increased value realization.
At the end of the session, participants will:
• Understand the challenges of eTMF value realization within the current life science data landscape
• Know the principles and key components of a data enablement strategy
• Be able to identify practical use cases for data enablement within eTMF management
• Know the benefits and challenges of implementing a data enablement framework in eTMF management
Why are people important in TMF? Isn’t everything done by the TMF management team? The answer is ‘No’. It is the study team, not the TMF management team, that plays the major role in TMF activities. Thus, communication is essential for successful TMF management. Active listening, providing clear guidance, connecting the dots, clarifying roles and responsibilities, and identifying a TMF Subject Matter Expert are critical elements of TMF people management. Acknowledging improvements in TMF health, sharing success stories with all stakeholders to motivate them, and empowering stakeholders with greater awareness of inspection readiness build confidence in TMF integrity. Highly engaged stakeholders equal an inspection-ready TMF. In conclusion, collaboration among TMF stakeholders is indispensable. Let stakeholders know how important they are to the TMF, raise the engagement level, and achieve success in TMF. TMF is Too Much Fun.
Session 8: Novelty in Clinical Trials and CDISC Standards
Digital Data Flow (DDF) is becoming a real-life solution as the DDF-RA project has made significant progress over the past year. The model now contains eligibility criteria, relationships to document contents, and more, making it more practical for building solutions for automated EDC setup, automatic generation of documents, and SDTM automation. The past year has also seen a breakthrough in information technology, Generative AI, which points to a future landscape of innovative automation, especially through natural language processing tasks such as automated document generation.
The presenter and co-author, as members of the clinical trial solution development team at Fujitsu, have evaluated current technology and the prospective trends of Generative AI, and concluded that although Generative AI will be part of our future, the DDF model remains the critical component of clinical trial automation. This presentation explains our implementation of DDF leveraging Generative AI and why this hybrid approach makes more sense.