2024 CDISC + TMF US Interchange Program
Click Here for Our Program PDF
PLEASE NOTE:
Panel presentations may not have included any slides.
Any presentations missing slides will be updated as soon as we receive the final versions from presenters.
A few presenters who were unable to join us in person submitted recorded presentations. NOT ALL presentations were recorded.
Session 1 - Opening Plenary
Session 2: Second Opening Plenary - Regulatory Presentations & Roundtable
Session 2D+E: TMF Vision for the Future (TMF Track)
Lunch and Poster Session
This poster examines common validation errors in SEND datasets and introduces strategies to address these inaccuracies. Analyzing SEND datasets validated over the past year at the Korea Institute of Toxicology, we identify recurring validation errors and their impact on submissions. Errors are classified by incidence and impact, highlighting issues such as flawed dataset structures, terminology misalignment, and inaccuracies in animal data. Root causes include misinterpretations of the SEND Implementation Guide, indicating a need for better training. We propose procedural improvements, enhanced training, and better validation software. Recommendations include an initial review checklist to catch structural errors early, regular training updates on SEND directives, and improved software to detect terminology issues. Our methodology for error categorization and corrective strategies aims to improve SEND dataset accuracy and efficiency. This framework can help other organizations refine their SEND validation, ensuring compliance and minimizing submission delays.
The establishment of the Korean SEND User Group is crucial for bridging the standardization gap in nonclinical data management prior to regulatory enforcement in South Korea. This initiative will proactively address the current low adoption and awareness of the Standard for Exchange of Nonclinical Data (SEND) among local biopharmaceutical stakeholders. By fostering education and voluntary adoption through workshops and seminars, the group aims to prepare the industry for upcoming regulatory demands. Additionally, it will facilitate communication between Korean professionals and the global CDISC community, ensuring that local practices align with international standards. This effort is pivotal in enhancing Korea's readiness for regulatory updates and promoting global compliance and collaboration in pharmaceutical data management.
Splitting SDTM (Study Data Tabulation Model) domains into multiple datasets can serve as a valuable tool, streamlining the process of creating and reviewing large, complex data domains.
In SDTM, a general observation class domain may be split into separate datasets. In practice, we most commonly split the Findings About (FA) and Questionnaires (QS) domains.
Common reasons for dataset splitting include better representation of complex data, improved traceability from Case Report Forms (CRF) to SDTM specifications to datasets, and simplified programming and validation.
Domains can be split in several ways, including by purpose, category, or time point.
This poster will cover practical tips and FAQs such as ensuring consistent variable naming across split datasets, using unique identifiers (e.g., STUDYID, USUBJID, --SEQ) to link split datasets, updating the define.xml file, CDISC compliance queries, and CRF annotation.
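To make the linkage concrete, here is a minimal sketch in R (dataset names and QSCAT values are illustrative, and the QS-prefix naming of the split datasets is assumed):

```r
# Split a combined QS domain into per-questionnaire datasets, assuming
# QSCAT distinguishes the instruments; names and values are illustrative.
library(dplyr)

qsad <- qs %>% filter(QSCAT == "ADAS-COG")  # -> qsad.xpt
qscs <- qs %>% filter(QSCAT == "C-SSRS")    # -> qscs.xpt

# QSSEQ must remain unique within each subject ACROSS the split datasets,
# so records can still be linked back via STUDYID/USUBJID/QSSEQ.
stopifnot(!anyDuplicated(bind_rows(qsad, qscs)[, c("USUBJID", "QSSEQ")]))
```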
Generating SDTM requires extracting raw data, running SDTM code, and validating the resulting datasets. Automating this process makes SDTM creation more efficient and less prone to human error.
Using application programming interfaces (APIs), the SDTM generation process can be automated end to end. In this poster, we briefly walk through the automated sequence: downloading eCRF data using the Medidata Rave Web Services (RWS) API, running SDTM code, and validating the generated SDTM using the Pinnacle 21® Community (P21C) or Pinnacle 21 Enterprise (P21E) API, all based on the SAS® programming language.
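The poster's implementation is SAS-based; purely as a hedged illustration of the same three-step sequence, an R sketch might look like the following (the RWS URL path, program name, and validation endpoint below are placeholders, not documented API routes):

```r
# Hypothetical end-to-end sequence: extract, transform, validate.
library(httr)

# 1. Download eCRF data from Medidata RWS (URL path is a placeholder)
resp <- GET(
  "https://example.mdsol.com/RaveWebServices/studies/MYSTUDY/datasets/regular",
  authenticate(Sys.getenv("RWS_USER"), Sys.getenv("RWS_PASS"))
)
writeBin(content(resp, "raw"), "raw_odm.xml")

# 2. Run the SDTM mapping program in batch (per the abstract, SAS-based)
system("sas sdtm_mapping.sas -batch -noterminal")

# 3. Send the generated datasets to a validation service; the endpoint
#    shown is a stand-in for the P21 Enterprise REST API
POST("https://p21e.example.com/api/validations",
     body = list(file = upload_file("sdtm/dm.xpt")))
```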
Collaborating with players armed with unique abilities is key to crafting an unbeatable strategy. When it comes to standardized clinical trial data, cross-departmental collaboration can help break down departmental silos while enhancing the quality of study data, leading to earlier availability of effective treatments.
Many organizations have tasked SDTM programmers with unraveling SDTM validation issues. However, some issues require tracing SDTM data back to its roots to optimize decision-making. Integrating data managers into this process can help slash time to resolution and boost overall data quality.
This poster will provide important considerations on weaving data managers into the SDTM validation process. Suggested workflows will illustrate how to level up your approach to resolving validation issues. Curating appropriate FDA validation rules along with detailed examples will showcase how these are best served by data managers. Lastly, suggested training and ideas to power-up your data managers will equip you to battle issues at the source.
In this use case, the authors discuss how they approached representing both the dosage of the investigational compound and the radioactivity level of a [¹⁴C] tracer to support the analysis needs of the protocol. Administration of a single dose of [¹⁴C]-labeled drug is the acknowledged standard for estimating the disposition of an investigational drug. A radiolabeled investigational compound was used to characterize the clearance routes and the exposure of the parent drug and related metabolites. The data collected to support these pharmacokinetic endpoints included both the actual dose of drug administered and the actual radioactivity administered. The SDTM modeling of these data presented a challenge: a single dose of a molecule with two different types of dose levels (drug and radioactivity). This poster describes the modeling options and the rationale for the option most suitable for SDTM conformance and the needs of the protocol analysis.
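To make the challenge concrete, here is a minimal sketch of the data shape (values and treatment name are invented; this shows the problem, not necessarily the modeling option the authors selected):

```r
# One administration event, two dose quantities: candidate representations
# include two EX records as below, or one EX record with the radioactivity
# carried as a supplemental qualifier (SUPPEX).
library(tibble)

ex <- tribble(
  ~USUBJID, ~EXTRT,         ~EXDOSE, ~EXDOSU,
  "001",    "[14C]-DRUG X", 100,     "mg",   # mass dose of the compound
  "001",    "[14C]-DRUG X", 3.7,     "MBq"   # radioactivity administered
)
```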
Per FDA guidance, the Office of Scientific Investigations (OSI) package has three parts: a) clinical study-level information, b) subject-level data line listings by clinical site, and c) the summary-level clinical site dataset, also known as CLINSITE. The first part, clinical study-level information, requires no special statistical programming, so the clinical team typically completes it. The other two parts require programming, which makes validation or quality control (QC) essential. The second part, subject-level data line listings by clinical site, is a large package with nine separate listings per site. Although it spans multiple line listings across all sites in the clinical study, the regulatory agency most thoroughly scrutinizes the third part, the CLINSITE dataset, and how its variables are derived. It is therefore imperative to check that a) the study team follows the latest CLINSITE specification, as given in the FDA Bioresearch Monitoring Technical Conformance Guide, and b) the study-specific or program-specific variables listed in the specification are clearly defined and described. This paper proposes efficient validation methods that clinical study programmers can practice during development.
ISS/ISE (Integrated Summary of Safety/Effectiveness) provides a consolidated overview of the safety and efficacy information for a drug or biologic. It combines data from individual studies to identify patterns, trends, potential safety concerns, and treatment effects. The ISS/ISE is typically included in the regulatory submission package to support marketing authorization applications (e.g., a New Drug Application (NDA) or Biologics License Application (BLA)). It gives regulatory agencies a complete understanding of the product's safety and efficacy profile and supports informed decision-making regarding approval and labeling. This topic focuses on some of the roadblocks faced from a programming perspective and the approaches taken to resolve them. A few key components are highlighted here.
"Risk-based" is the hottest term in our industry at the moment, and something we are all trying to work out how to use to manage the increasing complexity of the TMF as well as the ever-present pressure of resource and cost constraints. The thing to remember about risk, however, is that it can go both ways: although it can help decrease some of our work, it can also add more if things start going the other way. This poster shows some of the things to watch out for, KPIs to manage, and thresholds to think about.
Session 3A: Digital Protocol, Part I
The CDISC Unified Study Definitions Model (USDM), Biomedical Concepts, and the implementation of end-to-end study data automation are hard to envision from model diagrams and conceptual drawings. To make the USDM vision tangible, we will present a technology demonstrator showing how the model can be used as the foundation for:
1. The creation of a digital study protocol including the study design and the schedule of assessments (SoA).
2. Driving data capture artifacts from the SoA.
3. Loading data, including bulk loads, human-entered data, and data from EHR sources (FHIR).
4. The automated generation of SDTM.
5. The creation of submission-ready artifacts such as aCRFs and define.xml.
The presentation will focus on what is possible and show how the USDM, BCs, and SDTM can be brought together in a seamless manner, allowing a move away from a siloed, process-focused way of working to a data-centric world of seamlessly integrated standards.
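As a rough illustration of the kind of structure involved, a simplified, USDM-flavored nesting might look like the following (class and attribute names here are approximations of the model, not the exact USDM schema):

```r
# Simplified sketch: a study design whose schedule timeline ties
# activities to encounters -- effectively the SoA grid.
study <- list(
  name = "STUDY-001",
  studyDesigns = list(list(
    encounters = list(list(name = "Screening"), list(name = "Week 4")),
    activities = list(list(name = "Vital Signs"), list(name = "ECG")),
    scheduleTimelines = list(list(
      instances = list(
        list(activity = "Vital Signs", encounter = "Screening"),
        list(activity = "Vital Signs", encounter = "Week 4")
      )
    ))
  ))
)
```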
Session 3B: Real World Data Sources to CDISC
Observational RWE studies pose unique challenges for using the Study Data Tabulation Model (SDTM). CDISC has released "CDISC SDTM Implementation in Observational Studies and Real World Data," which covers many of the most common issues with using SDTM for observational studies; but due to the complexity of these types of studies, there are a number of situations and use cases where additional examples would be helpful. This presentation will provide an additional use case in the implementation of the new CDISC considerations document in a post-marketing observational RWE study by exploring the challenges faced, the options that were considered, and the eventual solutions.
Developing a data transfer for eCOA data appears straightforward due to established CDASH and SDTM standards. The capture of questions and corresponding answers in clinical trials allows for a mix of site-visit questionnaires and daily diaries across tablets and handheld devices. Implementing best practices while building the eCOA platform can streamline data analysis during transfer. This presentation focuses on four key considerations for SDTM creation:
- Accounting for missing data and enabling future SDTM population with --REASND.
- Incorporating functionality for unscheduled visits, regardless of protocol specifications.
- Planning the strategy for VISITNUM and VISIT in diary and unscheduled data.
- Managing date, time, and datetime variables efficiently for SDTM QS compliance, aligning with eCOA data's multiple time-related entries.
Addressing these aspects early with all stakeholders can prevent unexpected challenges during critical phases.
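As a minimal sketch of the third and fourth considerations (all variable and input names below are illustrative, and the fractional-VISITNUM convention is just one common approach):

```r
# Derive VISIT/VISITNUM and an ISO 8601 datetime for diary/unscheduled data.
library(dplyr)

qs <- ecoa_raw %>%
  mutate(
    VISIT = if_else(is_unscheduled, "UNSCHEDULED", visit_name),
    # Slot unscheduled visits between the surrounding scheduled visits
    VISITNUM = if_else(is_unscheduled, prev_visitnum + 0.01, visitnum),
    # Diaries carry a time component, so keep the full datetime in QSDTC
    QSDTC = format(collection_dttm, "%Y-%m-%dT%H:%M")
  )
```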
Enabling an end-to-end flow of real-world data from source to regulator involves a number of different data standards. Because FHIR's main use is the exchange of electronic health record information through JSON-formatted files, it falls short for data selection and fit-for-purpose evaluation. On the other hand, direct transfer to CDISC SDTM would entail extensive mapping, which might not be necessary for the majority of patients. Moreover, the SDTM format is not optimal for fit-for-purpose assessments and imputation.
We propose aligning and standardizing the essential intermediary stage of data transformation. The OMOP standard serves this purpose very well, as it includes a number of source traceability variables and standardized mapping of concepts. In this presentation, we will demonstrate how we add custom variables to the OMOP standard to allow for full end-to-end traceability and seamless mapping to SDTM.
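As an illustration of the intermediary stage, here is a hedged sketch of mapping one OMOP CDM measurement row toward SDTM LB (the concept lookup and the custom SRCID traceability variable are our assumptions):

```r
# Map OMOP `measurement` rows to SDTM LB-style variables while carrying a
# custom traceability variable end to end.
library(dplyr)

# Hypothetical lookup from OMOP concept IDs to LBTESTCD values
concept_map <- c("3013682" = "BUN")

lb <- measurement %>%
  transmute(
    USUBJID  = as.character(person_id),
    LBTESTCD = concept_map[as.character(measurement_concept_id)],
    LBORRES  = as.character(value_as_number),
    LBORRESU = unit_source_value,
    LBDTC    = format(measurement_date, "%Y-%m-%d"),
    SRCID    = measurement_id  # custom variable for full traceability
  )
```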
Session 3C: CDISC Open Rules Workshop
Interested in the CDISC Open Rules Project? Whether you're new or experienced, our CDISC Open Rules workshop is for everyone! This 90-minute workshop is divided into two parts.
Part 1 will give you updates on the CDISC Open Rules project's status, covering progress, new features, and future plans. Essential information to ensure smooth adoption of CDISC Open Rules. Don't miss out!
Part 2 is hands-on practice using the CDISC Open Rules Engine for data package validation. You'll learn about the Engine's features and how to use it effectively. Available on GitHub, this Command Line Interface (CLI) lets you run all published CDISC Open Rules on your datasets.
We'll demonstrate how to:
- Locate the CLI
- Download it locally on your laptop
- Understand the different commands and use them for data validation
- Interpret the issue report
Access workshop materials here: 2024 US Interchange - CDISC Open Rules Workshop Materials
Session 3D: TMF Essentials (TMF Track)
Session 3E: The Impact of Regulations (TMF Track)
Afternoon Break
Session 4A: Digital Protocol, Part II
The TransCelerate Digital Data Flow (DDF) initiative has led to a major shift across the biopharmaceutical industry. Clinical development executives are increasingly considering the adoption of digitalized study definition and protocol information that is reusable and actionable.
In this presentation, speakers will...
1. Explain the benefits and value proposition of digitalized clinical development, in particular how it relates to structured study definitions, true digitalized protocols, and clinical documents in general.
2. Describe a standards-based digital protocol architecture that can be used by sponsors today.
3. Highlight how components of the study definition are created as connected and reusable components that ensure standards are embedded from the beginning of the process and can be applied across the full range of clinical documents.
4. Demonstrate how insights can be used to optimize study design.
- Peter Van Reusel, CDISC
- Bill Illis, TransCelerate Biopharma
- Dave Iberson-Hurst, data4knowledge
- Bob Brindle & Frederik Malfait, Nurocor
- Viral Vyas, Bristol Myers Squibb
Session 4B: Standards Governance & MDRs
A metadata repository (MDR) is a key component in achieving successful standards-based automation; however, implementing an MDR as a traditional relational database can take significant time and effort.
This presentation describes an agile approach to metadata repository implementation, using open-source tools to support automation, version control, and traceability.
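One way to picture such an agile MDR is sketched below, under the assumption that standards metadata lives as versioned CSV files in a git repository (file and column names are illustrative):

```r
# Read variable-level metadata from a git-versioned file and use it to
# drive a simple automation step: applying standard labels.
library(readr)
library(dplyr)

var_meta <- read_csv("mdr/sdtm_v3_4/variables.csv")  # VARNAME, LABEL, ...
var_meta <- filter(var_meta, VARNAME %in% names(dm))

for (i in seq_len(nrow(var_meta))) {
  attr(dm[[var_meta$VARNAME[i]]], "label") <- var_meta$LABEL[i]
}
# Git tags/commits on the metadata files provide version control and an
# audit trail -- i.e., traceability -- without a relational database.
```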
Data collection of adjudicated events and findings has been challenging due to limited guidance from standards organizations and regulatory agencies, leading to differences in data collection and reporting approaches over the years.
The presentation will cover key points in collecting findings and events adjudication data and the inputs used to create the mapping. The company examined its historical process for mapping findings adjudication data in the lead-up to the new solution. Using the structure and process in the PHUSE paper Best Practices for Submission of Event Adjudication Data (version dated 18-Oct-2019), the team reviewed the adjudication findings to improve data collection and reporting. Findings About data collected for adjudication map to the FACE domain, and reporting required the creation of a custom XC (Adjudication Findings) domain, involving cross-functional collaboration between internal and external parties to ensure full end-to-end traceability.
Session 4C: CDISC Open Rules Update
In November 2023, FDA and CDISC started a three-year research collaboration agreement (RCA). The purpose of the RCA is the development and maintenance of FDA business rules as part of the CDISC Open Rules open-source project. CDISC Open Rules volunteers create specifications that become machine-executable by writing the code in YAML and storing the rules in the CDISC Library. For the community, this means a single version of the rules that is not open to interpretation; for FDA, it means that all stakeholders use the same rules, independent of the application used to run them.
In this presentation we want to show the process, development, and status of the rules. Furthermore, we will show how the community can contribute to this ongoing effort.
This presentation delves into the motivations behind Bioforum's creation of a user-friendly application for executing CORE rules, the rationale behind Submit24™'s deviation from the CORE engine, and the imperative for third-party CORE software to undergo CDISC certification. Primarily, Bioforum embarked on the development of Submit24™ propelled by the belief that data conformity should be universally accessible, unencumbered by financial constraints. Secondly, Bioforum conducted trials integrating the CDISC CORE engine into its software solutions; however, it ultimately had to build its own engine for various reasons. Lastly, Bioforum underscores the necessity for third-party software to undergo CDISC certification to ensure consistency with the results generated by the CORE engine. The industry stands poised to evaluate data conformity with the freedom to select compliant technologies tailored to its requisites. CDISC CORE, complemented by open-source technology and the CDISC CORE certification program, facilitates this paradigm shift.
The CORE project encourages the use of its open-source software to test study data for conformance to various standards using the same set of unambiguous rules governed by CDISC. Beyond these rules, establishing an industry-standard method for rule creation offers a promising prospect. This presentation will explore creating custom rules using the Conformance Rule Authoring tool. We will share our journey and the challenges of implementing the authoring tool in our cloud environment and running custom rules with the CORE engine. We'll discuss several use cases, including rules not included in the CDISC-governed set, data-cleaning rules, and validation of non-SDTM clinical data, e.g., external vendor data. Keeping the spirit of open source in mind, it is crucial that we share these custom rules within the community. This presentation will highlight the potential of custom rule creation and its impact on improving data integrity and consistency in clinical research.
Session 4D: Risk Based Approaches (TMF Track)
Session 4E: TMF Management through Metrics (TMF Track)
Evening Event (Must have registered for the Evening Event to attend)
Session 5A: Concepts in Practice
- Jon Neville, CDISC
- Mikkel Traun, Novo Nordisk
- Ryan Dempsey & Edwin Van Stein, GSK
- Chris Price, Roche
Session 5B: Regulatory
The European Union Clinical Trials Regulation (EUCTR) aims to harmonize clinical trial regulations across the EU, enhancing patient safety and research efficiency. Adopted in 2014 and applicable since 2022, it introduces centralized approval procedures, increased transparency, and stricter safety reporting. However, compliance is complex due to varying national regulations among member states, posing challenges for sponsors and investigators. These stakeholders must navigate diverse regulatory landscapes, ensuring both EUCTR and national compliance. The regulation's transparency requirements demand accurate and timely trial result reporting, complicating logistics. Stringent safety reporting and pharmacovigilance obligations require robust systems, careful coordination, and collaboration. Additionally, changes in the classification of investigational medicinal products (IMPs) add complexity. Effective compliance requires proactive strategies, clear communication, comprehensive training, and leveraging technology to enhance efficiency and transparency. By understanding these challenges and adopting collaborative approaches, stakeholders can maintain high standards of patient safety and data integrity in EU clinical trials.
The FDA recently updated its Study Data Technical Conformance Guide to request that sponsors provide two separate domains for lab results. For some organizations, this may be a new approach to providing results in both SI and US conventional units. AbbVie has used this solution to the SI versus US conventional units question in its SDTM standards for many years. This presentation shares the perspective of why AbbVie chose to address the regulatory request in this manner more than five years ago and the lessons learned over that time. For those organizations that will need to change their way of working to accommodate this new FDA request, we hope to provide our perspective on what has worked well for us and how it has fit into our standard data process flow.
Session 5C: Special Topics / Implementing CDISC
In Japan, the Pharmaceuticals and Medical Devices Agency (PMDA) mandated CDISC standard submissions for drug approval from early 2020. While pharmaceutical companies and CROs have adapted to CDISC, its adoption in Japanese academia is still in early phases. Prominent organizations promoting CDISC include the Japan CDISC Coordinating Committee (J3C), CDISC Japan User Group (CJUG), and AMED. The "Study of CDISC standards implementation in academia" project by AMED started in 2019, aiming to survey and enhance CDISC adoption in academia. Surveys revealed that less than half of Academic Research Organizations (AROs) used CDISC standards, with major barriers being lack of resources and knowledgeable personnel. The project team operates acrf.jp, providing annotated CRFs, define.xml, analysis programs, and educational links to aid CDISC implementation. This effort, supported by various entities, highlights the ongoing promotion of CDISC standards in Japanese academia.
This is a brief update on the ADaM Oncology Usage Guidance, with an explanation of what is coming and the expectations for usage. I'll also cover future updates to that document and how to plan for them.
Session 5D: End of Study Challenges (TMF Track)
- Princess Barcelona-Martin, Beacon Therapeutics
- Colleen Butler, Syneos Health
- Soraya Halligan, Regeneron
- John Saviski, GSK
Session 5E: Partnerships in TMF Management (TMF Track)
Session 6A: 360i - Moving from Proof of Concept to Implementation
- Johnathan Chainey, Roche
- Brooke Hinkson, Merck
- Rhona O'Donnell, Novo Nordisk
Session 6B: CDISC for Beginners
Data standards requirements for regulatory submissions are constantly evolving. The US Food and Drug Administration (FDA) and Pharmaceuticals and Medical Devices Agency (PMDA) delineate these requirements through binding guidance documents, technical specifications and other reference documents collaboratively developed with industry consortiums. Periodic updates introduce nuanced modifications to these standards and add to the challenge of navigating this dynamic regulatory landscape.
This presentation provides a comprehensive overview of the diverse requirements and guidance on interpreting them. We share best practices and resources for attendees to discover further information and templates to facilitate the preparation of submission data packages that align with regulatory expectations. We also conduct a practical analysis comparing FDA requirements with those of the PMDA in the context of data standardization requirements within a submission package, to provide attendees with useful insights.
This presentation will give a brief overview of the standard ADaM dataset classes, followed by an interactive segment where a commonly-used table shell will be displayed, and the audience will be encouraged to name the standard ADaM dataset class and variables that could be used to most easily generate that table. Alternative approaches will be discussed where appropriate, including the use of the ADaM Other class and when it is acceptable to produce a single table from multiple ADaM datasets. The goal is to get the audience thinking about designing ADaM datasets based on table requirements, instead of on the structure of the SDTM domains feeding into those datasets.
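For instance, a "change from baseline by visit" shell points almost directly at a BDS-shaped dataset (the rows below are invented for illustration):

```r
# Illustrative BDS slice: one row per subject/parameter/visit, with the
# table shell's statistics reading straight off AVAL and CHG.
library(tibble)

advs <- tribble(
  ~USUBJID, ~PARAMCD, ~PARAM,               ~AVISIT,    ~AVAL, ~CHG,
  "001",    "SYSBP",  "Systolic BP (mmHg)", "Baseline", 120,   NA,
  "001",    "SYSBP",  "Systolic BP (mmHg)", "Week 4",   112,   -8
)
# The table program then only needs to group by PARAMCD and AVISIT and
# summarize AVAL/CHG -- no re-derivation from SDTM at reporting time.
```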
Session 6C: Analysis Results Standards - eTFL
Lilly is utilizing metadata to enable automation, efficiency, and consistency in our clinical trial deliverables.
- Implemented a metadata repository for all standards, including data collection, SDTM, ADaM, and Analysis Results Datasets (ARDS), plus study-level specifications
- SDTM/ADaM automation:
  - Robust programming process
  - Granular transformation metadata
  - Utilize metadata to dynamically create code (see the sketch after this list)
  - Enable study teams to insert code for study-specific variables
  - Automated 95-97% of SDTM variables
  - Anticipated to automate ~60% of ADaM variables and drive creation of therapeutic area standards
  - Consistency across studies
- ARDS automation (future):
  - Developed a robust ARDS model that parallels work by the CDISC ARD group and includes all analyses
  - Similar automation approach to SDTM/ADaM
  - Process change: ARDS becomes the input to TFLs
  - TFLs become a cosmetic display step (no analysis), enabling GUI tools to be used
  - Increase automated generation of ARDS/TFLs from 70% to >90%
  - Full transparency and traceability from data collection through ARDS
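A hypothetical sketch of the "metadata to code" step (the spec format and derivations below are invented to show the mechanism, not Lilly's actual metadata model):

```r
# Turn rows of transformation metadata into executable derivations.
library(dplyr)
library(rlang)

spec <- tibble::tribble(
  ~target,  ~expression,
  "AGEGR1", 'if_else(AGE < 65, "<65", ">=65")',
  "BMI",    "WEIGHT / (HEIGHT / 100)^2"
)

# Parse each expression, name it after its target variable, and splice
# the whole set into a single mutate() call.
derivations <- set_names(parse_exprs(spec$expression), spec$target)
adsl <- mutate(dm, !!!derivations)
```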
Session 6D: Audits and Inspections (TMF Track)
- Yen Phan, CodLad
- David Glasgow, FDA, BiMo
- Deb Wells, Novartis
- Pam Dellea-Giltner, PDG Clinical Consulting
Session 6E: TMF Reference Model Becoming a Standard (TMF Track)
Panelists:
- Rob Jones, Phlexglobal
- Lou Pasquale, IQVIA
- Jim Horstmann, Veeva
Session 7A: Data Science
Session 7B: Enable & Automate
In November of 2023, the U.S. Food & Drug Administration (FDA) released a 48-page document titled "Submitting Patient-Reported Outcome Data in Cancer Clinical Trials". This document provides technical specifications for submitting patient-reported outcome (PRO) data collected in cancer clinical trials to support oncology studies. Given the breadth and specificity of the guidance, the increasing prevalence of Clinical Outcome Assessments (COAs) in the work we do, and the direct link of the content to the FDA Study Data Technical Conformance Guide, adoption of the standards contained in "Submitting Patient-Reported Outcome Data in Cancer Clinical Trials" is highly recommended regardless of the COA therapeutic area. This presentation will provide practical guidance on how to successfully incorporate these FDA recommendations into study data collection, tabulation, analysis, and submission.
Pharmaverse (pharmaverse.org) is a connected network of companies and individuals working to promote collaborative development of curated open source R packages for clinical reporting usage in pharma. From SDTM to ADaM to TFLs to interactive visualizations to submission artifacts, pharmaverse aims to support data handling and analysis from end to end. The codebase is open sourced, permissively licensed, and collaboratively developed from contributors all around the world. This presentation will not only provide visibility on pharmaverse packages, but also help attendees understand how they can contribute to this community effort which continues to reduce gaps and increase the availability of a common R-based toolset for all to freely use.
Session 7C: Analysis Results Standards & eTFL Hands-On Workshop
Session 7D+E: Technology and Innovation in TMF Management
Session 8A: AI & ML
Efficient metadata governance is pivotal for seamless digital data flow, ensuring streamlined data collection, analysis, and standardized transformations. Despite the evolving importance of metadata, complex trial designs often complicate governance, leading to siloed standards and intricate processes. Protocol-specific nuances exacerbate the challenges further and are often used to justify deviations from established norms.
This presentation explores the transformative role of Artificial Intelligence (AI) and Machine Learning (ML) in enriching metadata management. AI and ML present a paradigm shift by swiftly creating metadata standards, facilitating rapid transitions between standards, and maintaining the lineage between standards.
The discussion encompasses the ease of maintenance of standards, generation of traceability and transformation specifications, identifying redundancies, and accommodating protocol-specific changes within the defined organizational hierarchy. It underscores the invaluable benefits of AI/ML to elevate metadata quality, minimize complexities, and boost reusability, traceability, and automation.
Session 8B: Foundational Standards
Choosing which ADaM dataset structure is best for collected data is usually straightforward. Lab or vital signs data typically go in a Basic Data Structure (BDS) ADaM dataset, while adverse events, concomitant medications, and medical history use the Occurrence Data Structure (OCCDS). However, there are times when both true occurrence and parameter-result-level information are captured on one CRF. Determining which ADaM data structure is best, based on the analysis needs, can be difficult. This paper outlines an instance where both BDS and OCCDS ADaM datasets could be appropriate, but neither works perfectly. The pros and cons of each data structure will be scrutinized, using one in-depth example, to illustrate that the choice of analysis structure is not always straightforward.
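To make the tension concrete, here is a hedged sketch of the same hypothetical CRF data shaped both ways (domain, parameter, and variable choices are invented for illustration):

```r
# One CRF captures both an occurrence and a measured result for it.
library(tibble)

# OCCDS-style: one row per occurrence of the event
adce <- tribble(
  ~USUBJID, ~CETERM,        ~ASTDT,
  "001",    "INFUSION RXN", as.Date("2024-03-01")
)

# BDS-style: the measured result as a parameter/result row, analysis-ready
# for summary statistics but awkward for occurrence counting
adrx <- tribble(
  ~USUBJID, ~PARAMCD, ~PARAM,                             ~AVAL,
  "001",    "RXNSEV", "Infusion Reaction Severity Score", 2
)
```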
This presentation introduces the unique challenges with missing data that rare disease studies face and explains three types of missing data. It compares three methodologies for imputing missing data in longitudinal studies: Last Observation Carried Forward (LOCF), Mixed-Effects Model for Repeated Measures (MMRM), and Multiple Imputation (MI). It will also showcase how SAS data steps and CDISC-compliant ADaM datasets can facilitate this type of analysis and improve traceability.
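The presentation's examples use SAS data steps; as a language-neutral illustration of the simplest of the three methods, an LOCF sketch in R might look like this (dataset and variable names assume a standard BDS layout):

```r
# Carry the last non-missing AVAL forward within subject and parameter.
library(dplyr)
library(tidyr)

adlb_locf <- adlb %>%
  arrange(USUBJID, PARAMCD, AVISITN) %>%
  group_by(USUBJID, PARAMCD) %>%
  fill(AVAL, .direction = "down") %>%  # LOCF
  ungroup()
# MMRM and MI, by contrast, model the missingness rather than filling it,
# and generally require dedicated procedures or packages.
```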
Session 8C: COSA
Admiral is an R package designed to streamline the creation of Analysis Data Model (ADaM) datasets in clinical trials. It provides a modular toolbox of individual R functions that enable statistical programmers to build ADaM datasets efficiently. This open-source package fosters collaboration and industry-wide contributions, encouraging a consistent and reusable approach to ADaM development.
Admiral emerged to address diverse data analysis challenges across companies and therapeutic areas. By advocating an open-source framework, Admiral promotes shared solutions and a standardized ADaM approach. R was chosen for its open-source nature, popularity among statisticians, and strong data science community support. Regulatory acceptance of R-based submissions further underscores its reliability.
Admiral provides example scripts and templates for specific ADaM structures like ADSL, BDS, and OCCDS, along with thorough documentation and unit tests. The package reached version 1.0.0 by early 2024, reflecting its stability. Ongoing maintenance and documentation improvements aim to enhance user experience and package compatibility.
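A minimal sketch of admiral's modular style (input datasets and variable choices are illustrative, not a complete ADSL specification):

```r
# Build a fragment of ADSL by chaining small, single-purpose functions.
library(admiral)
library(dplyr)

adsl <- dm %>%
  # Convert the ISO 8601 reference start date (RFSTDTC) to numeric RFSTDT
  derive_vars_dt(new_vars_prefix = "RFST", dtc = RFSTDTC) %>%
  # Take the first exposure start from EX as the treatment start
  derive_vars_merged(
    dataset_add = ex,
    by_vars     = exprs(STUDYID, USUBJID),
    order       = exprs(EXSTDTC),
    mode        = "first",
    new_vars    = exprs(TRTSDTC = EXSTDTC)
  ) %>%
  derive_vars_dt(new_vars_prefix = "TRTS", dtc = TRTSDTC)
```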
Session 8D+E: Investigators and Inspectors (TMF Track)
- David Glasgow, FDA
- Anna Fehr & Nikki Jundt, One of a Kind Clinical Research Center
- Leila Canlas, Pfizer
- Sarah Dean, Precision for Medicine
- Dawn Niccum, InSeption Group