As resource forecasting for 2021 commences, it is a good time to revisit existing resourcing models to ensure they can satisfy the demands of your development pipeline. One model worthy of consideration is the Functional Service Provider (FSP) model. While the FSP model is traditionally thought of as applicable to companies with large programs and pipelines, it is increasingly being used by smaller companies that want the same benefits, which include: risk mitigation through tighter control over operational processes; consistency, scalability, and flexibility in resource management; and cost controls on clinical trial conduct.
Performing an ideal clinical study would result in zero non-compliances throughout its entire life cycle. What an achievement that would be! Perhaps this is a utopian, unrealizable outcome, but wouldn't it be great to avoid non-compliance issues resurfacing in the same study, or in a follow-on study?

How does one incorporate lessons learned from compliance issues to prevent recurrence? One clear approach is to thoroughly understand what caused the non-compliance, to get to that 'a-ha' moment. What is the documentation process for recording the non-compliance? Is there a repository of non-compliances maintained to log these findings? Are there owners assigned to investigate each non-compliance and close it within a reasonable time frame? As these questions are answered, a convergence of thought can be achieved. If discipline is practiced in capturing the specific issue as it surfaces, it will be easier to get to root cause.

Some non-compliances are likely simple human error, for which an effective Corrective and Preventive Action (CAPA) could be additional training of a site or study team member. When a critical non-compliance occurs that warrants broader stakeholder participation, because its impact could jeopardize the study, the discipline for finding root cause should not change. Tools such as Fishbone Diagrams should be used to thoroughly evaluate all candidate root causes. Only after root cause is understood should the available corrective and/or preventive actions be carefully evaluated. To assess which corrective action is most appropriate, a Trade Space Evaluation methodology, often used in other industries, can be applied.
With this approach, candidate corrective actions are compared against a set of common criteria using numerical scores or color-coded ratings (Red, Yellow, or Green) to narrow down the optimal CAPA. Future blogs will illustrate how this methodology can be effective in down-selecting the optimal CAPA. The key is not to dilute the root cause process.

Risk assessments performed at the start of a study are key to predicting issues and pre-planning their mitigation. Incorporating a technique like the Trade Space Methodology into the CAPA process at this initial assessment, and sustaining it through the study by maintaining the risk log with this additional information, can result in a comprehensive closed-loop CAPA process. Holding regular meetings with investigators and other study team members where non-compliances are reviewed and reported further ensures a robust process. Depending on the size of the organization, having an independent QA resource focused on the overall non-compliance process and retaining the knowledge base can be an effective discipline. The return on investment for this rigor should be a downward trend in the number of non-compliances across subsequent studies, progressing toward the highly desirable zero non-compliance study!
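To make the trade-space comparison concrete, here is a minimal sketch of scoring candidate corrective actions against weighted criteria. The criteria, weights, candidate actions, and scores below are hypothetical illustrations, not part of any real CAPA evaluation.

```python
# Trade-space style comparison: score each candidate corrective
# action against common criteria, weight the criteria, and rank.
# All names, weights, and scores here are hypothetical examples.

criteria_weights = {"cost": 0.3, "time_to_implement": 0.3, "effectiveness": 0.4}

# Higher score = better on that criterion (e.g., lower cost scores higher).
candidates = {
    "retrain site staff":        {"cost": 3, "time_to_implement": 3, "effectiveness": 2},
    "revise protocol":           {"cost": 1, "time_to_implement": 1, "effectiveness": 3},
    "add automated edit check":  {"cost": 2, "time_to_implement": 2, "effectiveness": 3},
}

def weighted_score(scores, weights):
    """Sum of criterion scores multiplied by their weights."""
    return sum(weights[c] * s for c, s in scores.items())

def color_rating(score, green=2.5, yellow=2.0):
    """Map a weighted score onto the Red/Yellow/Green scale."""
    if score >= green:
        return "Green"
    return "Yellow" if score >= yellow else "Red"

# Rank candidates from best to worst weighted score.
ranked = sorted(candidates.items(),
                key=lambda kv: weighted_score(kv[1], criteria_weights),
                reverse=True)

for name, scores in ranked:
    total = weighted_score(scores, criteria_weights)
    print(f"{name}: {total:.2f} ({color_rating(total)})")
```

The weights force an explicit statement of what matters most (here, effectiveness), so the down-selection is documented and repeatable rather than a judgment made in the room.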
As sponsors continue to restart paused studies, there will be a flurry of medical writing activity to support IND submissions, investigator brochures, protocols, and other documents. With that in mind, below are tips to help improve the quality and efficiency of creating these documents.
Edit check programming and testing is time consuming and labor intensive. As such, it is efficient to minimize the time spent on programming by getting it right the first time. Historically, data management teams have measured edit check success as the percentage of edit checks that pass User Acceptance Testing (UAT). However, that may not be the best measure. What if an alternative metric from Lean Six Sigma methodology were used? Rolled Throughput Yield (RTY) is a metric often employed in manufacturing operations to estimate the probability that a process with more than one step will produce a defect-free unit. If we track this metric for the edit check process, we can better gauge the probability of success of the edit check program.
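RTY is simply the product of the first-pass yields of each step in the process. The sketch below illustrates the arithmetic; the step names and pass rates are hypothetical examples, not measurements from any real edit check program.

```python
from math import prod

def rolled_throughput_yield(step_yields):
    """RTY = product of per-step first-pass yields: the probability
    that a unit passes every step of the process defect-free."""
    return prod(step_yields)

# Hypothetical edit-check pipeline with illustrative pass rates:
step_yields = {
    "specification review": 0.95,
    "programming":          0.90,
    "UAT":                  0.92,
}

rty = rolled_throughput_yield(step_yields.values())
print(f"RTY = {rty:.3f}")  # 0.95 * 0.90 * 0.92, roughly 0.787
```

Note how three individually respectable yields compound into a noticeably lower overall probability of a defect-free edit check, which is exactly the insight a UAT-only pass rate hides.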