Drug Approval Strategies in the Age of Fast Track, Breakthrough Therapy and Accelerated Approval

Process Validation and Regulatory Review

To meaningfully discuss the process validation and regulatory approval strategies required for drugs designated as Fast Track, Breakthrough Therapy or Accelerated Approval, we must first clarify these designations and briefly remind ourselves what the Process Validation guidance looks like. We will then be able to clearly identify the challenges, and approaches to those barriers, involved in bringing a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.

Fast Track designation – Fast Track drugs treat serious conditions where there is an unmet medical need. Concluding that a condition is serious and that there is an unmet medical need leaves room for judgement, but generally speaking, the conditions these drugs treat are life-threatening, and the drug in question is expected to improve survival or daily functioning, or to reduce the likelihood that the condition will progress to a more serious state. Fast Track drugs receive the benefit of more frequent meetings and communication with the FDA, and the drug may be eligible for Accelerated Approval and rolling review of the Biologic License Application (BLA) or New Drug Application (NDA).

Breakthrough Therapy – Breakthrough Therapy status can be assigned to drugs that treat a serious condition when preliminary clinical data show significantly improved outcomes compared to treatments currently on the market. Breakthrough Therapies are eligible for Fast Track designation benefits, extensive FDA guidance on effective drug development early in the development process, and organizational commitment from the FDA, including access to senior managers.

Accelerated Approval – The FDA established accelerated approval regulations in 1992. Accelerated Approval could be given to drugs that met a serious unmet medical need, and approval was based on a surrogate endpoint. Fast forward to 2012, when Congress passed the Food and Drug Administration Safety and Innovation Act (FDASIA). This amendment to the Federal Food, Drug, and Cosmetic Act (FD&C Act) allowed approval to be based on either a surrogate endpoint per the 1992 regulations or an intermediate clinical endpoint. For example, as a result of the 2012 legislation, a cancer drug could be approved based on a surrogate endpoint such as an increased probability of the cancer going into remission, or on an intermediate clinical endpoint such as tumor shrinkage, an outcome strongly correlated with the ability to treat the cancer successfully and induce remission.

These FDA designations are clearly designed to increase the availability and speed to market of drugs treating serious conditions where unmet medical needs exist. Nimbleness and speed have historically not been the strong suit of either the pharmaceutical industry or the FDA: commercialization of a drug has historically taken 12 years on average and cost roughly $2.6 billion (including expenditure outlays and opportunity costs). The ability of these designations to save both time and money is therefore very attractive. However, given the slow-moving nature of the industry, changes in both mindset and approach are needed from drug innovators and regulators alike to validate processes and ensure drug quality within these faster-moving constructs.

Let’s now turn to the most recent Process Validation guidance so that we may juxtapose that system with the nimble needs of Fast Track, Breakthrough Therapy and Accelerated Approval drugs, and ultimately make some observations about the Process Validation and overall regulatory approval approaches needed as the industry moves toward accelerated development for an increasing number of drugs.

WHAT IS PROCESS VALIDATION?
According to the FDA’s 2011 Process Validation (PV) guidance, “For purposes of this guidance, process validation is defined as the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product. Process validation involves a series of activities taking place over the lifecycle of the product and process.”

The Three Stages of Process Validation:
Stage 1: Process Design – the manufacturing process is defined during this stage, based on knowledge acquired through development and scale-up activities.

Stage 2: Process Qualification – the process design is evaluated to determine whether the process is capable of reproducible commercial manufacturing.

Stage 3: Continued Process Verification – ongoing assurance is gained during manufacturing that the process remains controlled and the outcome predictable.

Keys for Successful Validation Include:
• Gaining knowledge from the product and process development
• Understanding sources of variation in the production process
• Determining the presence of and degree of variation
• Understanding the impact of variation on the process and end product
• Controlling variation in a manner aligned with Critical Quality Attributes (CQA) and the risk a given attribute introduces to the process

Process Qualification, a key component of Process Validation, should be based on the overall level of product and process understanding, the level of demonstrable control, data from laboratory, pilot and commercial batches, the effect of scale, and previous experience with similar products and processes. Process Qualification is generally recommended to be based on higher levels of sampling, additional testing and greater scrutiny of process performance than would be typical of routine commercial production.

As we will now explore, some of the demands of Process Qualification and overall Process Validation are severely challenged by the approaches required when bringing a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.

NOVEL APPROACHES NEEDED FOR ACCELERATED APPROVALS
Historically, bringing a prescription drug to market has taken an average of 12 years and, according to a Tufts Center for the Study of Drug Development (CSDD) report, an average of roughly $2.6 billion (including expenditures and opportunity costs). This article will refrain from editorializing about that fact; the undeniable reality, however, is that the speed required at every point in the development of Fast Track, Accelerated Approval or Breakthrough Therapy drugs is having a profound impact on the industry.

Approval of a Breakthrough Therapy drug, which typically also qualifies for Accelerated Approval, means manufacturers need to develop Chemistry, Manufacturing and Controls (CMC) data in about half the time of the traditional process. Breakthrough designation does not mean the innovator company can do less. To meet these accelerated timelines, innovators need to start analytical method creation and product and process characterization sooner, and handle the process differently. Validation of a process has traditionally called for sufficient data and an adequate number of runs to convince the manufacturer (and regulators) that the process works. As we will explore below, Breakthrough Therapies are often on the market before the process is fully validated.

However, the guiding force behind these new approaches is that despite sharply reduced timeframes, manufacturers cannot compromise patient safety or product supply. Therefore, characterization of critical product and process attributes is typically required much earlier in the process.

Challenges and Realities of Process Validation and Regulatory Approval within the Accelerated Drug Paradigm:
• The collaboration and communication required between the FDA and innovator companies is extensive. Given limited FDA resources and extensive resources required by the organizations of innovator companies, is the growth of the Fast Track/Breakthrough Therapy/Accelerated Approval programs sustainable?
• New Drug Applications (NDA) for Breakthrough Therapies include less manufacturing information and data, requiring alternative risk-mitigation approaches and often nontraditional statistical models.
• Both patient safety and product supply are at the forefront, without the data and historical knowledge traditionally used to address these concerns.
• The primary concerns for CMC reviewers include incomplete characterization of the drug, underdeveloped analytical methods and a lack of full understanding of a product’s Critical Quality Attributes (CQA) and associated risks.
• Process Validation will, in many cases, be incomplete at product launch.

THE CHANGED PARADIGM RESTORED TO ORDER (SORT OF)
The “restored order” for the approval of, and ultimate Process Validation for, Breakthrough and Accelerated Approval drugs will not look like anything we normally see. Again, all Breakthrough and Accelerated Approval drugs address very serious conditions and either offer treatment where none currently exists or offer benefits well above and beyond drug products currently on the market. Therefore, flexibility has been applied to segments of the traditional product review and approval process to speed the availability of treatments for these critical conditions.

Despite the flexibility in, and often changes to, the product review and approval process, patient safety remains at the forefront, along with the assurance of consistent product supply.

Approaches for Successfully Handling the Approval and Validation of Accelerated Approval Drugs:
• Open and transparent communication with the FDA is essential throughout the entire approval and post-market process. The pharmaceutical company mindset of not wanting to learn certain information for fear of needing to revalidate based on those discoveries has no place in this new reality. New information will be learned pre- and post-launch, and plenty of amendments will need to be filed.
• Given the compressed development timeframes, less stability data will be available at submission. Additional data will be submitted via amendments during the review cycle, and in some cases, post-market.
• Launch the commercial process with limited experience and optimize post-approval; the classic three validation runs are not the guiding force within this construct. The level of flexibility regulators will extend is determined for each specific product. Factors taken into consideration include: riskiness of product characteristics, seriousness of the condition and medical need, complexity of manufacturing processes, state of the innovator’s quality system, and merits of the innovator’s risk-based quality assessment, including Critical Quality Attributes (CQA).
• Novel statistical models and approaches will need to be applied in many cases. Representative samples and assays for these models will likely need to be drawn from sources such as prior knowledge and the use of comparability protocols. Determination of the appropriate use of stability data from representative pilot-scale lots will also be required.
• Manufacturers should freely acknowledge where data is limited, demonstrate that the missing data pose no risk to patient safety or product supply and outline post-market strategy for acquiring the missing data. Conversations with the FDA are clearly required for successful outcomes.
• Focus on patient safety and reliable supply of quality product at launch, not process optimization. In addition, begin critical product attribute and process characterization work much earlier than in a typical pharmaceutical development process. In many cases, consider broader product quality ranges for non-Critical Quality Attributes until further manufacturing experience is acquired post-approval.
• Enhance analytical methods and understanding to offset more limited process understanding and to support future comparability work. It is extremely important to involve commercial Quality Control representatives in development assay design.
• Again, CMC activities that may be incomplete at launch include: Process Validation, stability studies on commercial product, manufacturing scale/tech transfer data and complete control system data.
• A post-approval product lifecycle management plan is a must, and it needs to be included in the filing to support deferred CMC activities.

Fast Track, Breakthrough Therapy and Accelerated Approval drugs have profoundly changed the thinking and approach to Process Validation and other CMC activities.

Sources:
Joseph A. DiMasi, Henry G. Grabowski and Ronald W. Hansen, “Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs,” Tufts Center for the Study of Drug Development, Tufts University.

J. Wechsler, “Breakthrough Drugs Raise Development and Production Challenges,” Pharmaceutical Technology 39 (7), 2015.

Earl S. Dye, PhD, “CMC/GMP Considerations for Accelerated Development and Launch of Breakthrough Therapy Products,” Roche.

“Guidance for Industry: Expedited Programs for Serious Conditions – Drugs and Biologics,” U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), May 2014.

Anthony Mire-Sluis, Michelle Frazier, Kimberly May, Emanuela Lacana, Nancy Green, Earl Dye, Stephan Krause, Emily Shacter, Ilona Reischl, Rohini Deshpande and Joe Kutza, “Accelerated Product Development: Leveraging Industry and Regulator Knowledge to Bring Products to Patients Quickly,” BioProcess International, December 2014.

Daniel Alsmeyer and Ajay Pazhayattil, Apotex Inc., “A Case for Stage 3 Continued Process Verification,” Pharmaceutical Manufacturing, May 2014

What Your ICH Q8 Design Space Needs: A Multivariate Predictive Distribution

Multivariate predictive distribution quantifies the level of QA in a design space. “Parametric bootstrapping” can help simplify early analysis and complement Bayesian methods.

The ICH Q8 core definition of design space is by now somewhat familiar: “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality” [1]. This definition is ripe for interpretation. The phrase “multidimensional combination and interaction” underscores the need to utilize multivariate analysis and factorial design of experiments (DoE), while the words “input variables (e.g., material attributes) and process parameters” remind us of the importance of measuring the right variables.

However, in presentations and articles discussing design space, not much focus has been given to the key phrase, “assurance of quality”. This does not seem justified, given that guidance documents such as ICH Q8, Q9, Q10, PAT, etc. are inundated with the words “risk” and “risk-based.” For any ICH Q8 design space constructed, surely the core definition of design space begs the question, “How much assurance?” [2]. How do we know if we have a “good” design space if we do not have a method for quantifying “How much assurance?” in a scientifically coherent manner?

 

The Flaws of Classical MVA

Classical multivariate analysis and DoE methodology fall short of providing convenient tools to answer the question “How much assurance?” There are two reasons for this. The first is that multivariate analysis and DoE have historically focused on making inferences primarily about response means. But simply knowing that the response means of a process meet quality specifications is not sufficient to conclude that the next batch of drug product will meet specifications, because there will always be batch-to-batch variation about those means. Building a design space based upon overlapping mean responses will therefore result in one that is too large, harboring operating conditions with a low probability of meeting all of the quality specifications.

However, if we can quantify the entire (multivariate) predictive distribution of the process quality responses as a function of “input variables (e.g., material attributes) and process parameters”, then we can compute the probability of a future batch meeting the quality specifications. The multivariate predictive distribution of the process quality responses incorporates all of the information about the means, variation, and correlation structure among the response types.

Another, more technically subtle, reason that classical multivariate analysis and DoE methodology do not provide straightforward tools to construct a proper design space is that, beyond inference about response means, they are oriented towards the construction of prediction intervals or regions. For a process with multiple critical quality responses, it is awkward to try to use a classical 95% prediction region (which is not rectangular in shape) to compare against a (rectangular) set of specifications for multiple quality responses. On the other hand, using individual prediction intervals does not take into account the correlation structure among the quality responses. What is needed instead is a “multivariate predictive distribution” for the quality responses. The proportion of this predictive distribution that sits inside of the rectangular set of quality specifications is then simply a quantification of “how much assurance”.
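As a concrete, purely illustrative sketch of this quantification, the snippet below assumes a bivariate-normal predictive distribution for two quality responses and estimates, by Monte Carlo, the proportion of that distribution falling inside a rectangular specification region. The distribution parameters and specification limits are invented for illustration and are not taken from any real process.

```python
# Hypothetical sketch: estimate "how much assurance" as the proportion of a
# multivariate predictive distribution that falls inside a rectangular
# specification region. The bivariate-normal predictive distribution and the
# specification limits are illustrative assumptions, not real process data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed predictive distribution for two quality responses
# (e.g., "percent dissolved" and "friability"), including their correlation.
mean = np.array([82.0, 0.9])                 # predictive means
cov = np.array([[9.0, -0.6],
                [-0.6, 0.04]])               # covariance (captures correlation)

# Rectangular specification region: lower and upper limits per response.
spec_lo = np.array([75.0, 0.0])
spec_hi = np.array([np.inf, 1.2])

# Monte Carlo: draw from the predictive distribution and count the fraction
# of simulated batches meeting ALL specifications simultaneously.
draws = rng.multivariate_normal(mean, cov, size=100_000)
inside = np.all((draws >= spec_lo) & (draws <= spec_hi), axis=1)
print(f"Estimated probability of meeting all specifications: {inside.mean():.3f}")
```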

 

The Stochastic Nature of Our Processes

 

Complex manufacturing processes are inherently “stochastic processes”. This means that while such processes may have underlying mechanistic model relationships between the input factors and process responses, those relationships are nonetheless embedded with random variation. Conceptually, for many complex manufacturing processes, even if “infinitely” accurate measurement devices could be used, such processes would still produce quality responses that vary from batch to batch and possibly within batch. This is why classical multivariate analysis and DoE methodology, which focus on inference for response means, present an insufficient tool set. This is not to say that such methods are not necessary; indeed they are. However, we need to better understand the multivariate stochastic nature of complex manufacturing processes in order to quantify “how much assurance” relative to an ICH Q8 design space.

The concept of a multivariate distribution is useful for understanding complex manufacturing processes, with regard to both input variables and response variables.  Figure 1 shows a hypothetical illustration of the relationships among various input variables and response variables. Notice that some of the input variables and the response variables are described by multivariate distributions. In Figure 1 we have a model which describes the relationships between the input variables, the common cause process variability and the quality responses. This relationship is captured by the multivariate mathematical function f=(f1,…,fr), which maps various multivariate input variables, x, z, e, θ  to the multivariate quality response variable, Y=(Y1,…,Yr). Here, x=(x1,…,xk) lists the controllable process factors (e.g. pressure, temperature, etc.), while z=(z1,…,zh) lists process variables which are noisy, such as input raw materials, process set-point deviations, and possible ambient environmental conditions (e.g. humidity).

The variable e=(e1,…,er) represents the common-cause random variability that is inherent to the process. It is natural that z and e be represented by (multivariate) distributions that capture the mean, variation about the mean, and correlation structure of these random variables. The parameter θ = (θ1,…θp) represents the list of unknown model parameters. While such a list can be thought of as composed of fixed unknown values, for purposes of constructing a predictive distribution, it may be easier to describe the uncertainty associated with these unknown model parameters by a multivariate distribution. The function f then transmits the uncertainty in the variables z, e, and θ to the response variables, Y. In other words, the “input distributions” for z, e, and θ combine to produce the predictive distribution for Y through the process model function f.
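The following minimal sketch illustrates the Y = f(x, z, e; θ) idea in the simplest possible terms: assumed distributions for the noisy inputs z, the common-cause variation e, and the uncertain model parameters θ are propagated through a hypothetical (linear) process model to produce a predictive distribution for a single quality response Y. The model form and every numeric value are assumptions made only for illustration.

```python
# Minimal sketch of the Y = f(x, z, e; theta) idea from Figure 1: propagate
# distributions for noise variables z, common-cause variation e, and uncertain
# model parameters theta through a process model to obtain a predictive
# distribution for the quality response Y. The linear model and all numbers
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_sim = 50_000

x = np.array([1.0, 0.5])                     # fixed controllable factors

# Uncertain model parameters theta: a posterior/bootstrap-style sample is
# mimicked here by drawing coefficients around assumed point estimates.
theta = rng.normal(loc=[80.0, 3.0, -2.0], scale=[0.5, 0.2, 0.2], size=(n_sim, 3))

# Noisy input variable z (e.g., a raw-material attribute) and common-cause error e.
z = rng.normal(loc=0.0, scale=0.3, size=n_sim)
e = rng.normal(loc=0.0, scale=1.0, size=n_sim)

# Illustrative process model f: a simple linear relationship in x and z.
Y = theta[:, 0] + theta[:, 1] * x[0] + theta[:, 2] * x[1] + 1.5 * z + e

print("Predictive mean:", Y.mean(), " predictive sd:", Y.std())
```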

Figure 1. Multivariate distributions and the role they play in the design space reliability. Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk) ) results in an unreliable process.

The controllable process factors x can then be used to move the distribution of response variables Y so that it is better situated within the region bounded by the quality specifications. In Figure 1, the rectangular gray region in the lower left is the region bounded by the quality specifications for “percent dissolved” and “friability” for a tablet production process. Here, one can see that the process operating at set point x is not reliable, because only about 65% of the predictive distribution is within the region carved out by the quality specifications. (The concentric elliptical contours are labelled by the proportion of the predictive distribution inside of each contour.)

In Figure 2, however, one can see that the operating set point x is associated with a much more reliable process, since about 90% of the predictive distribution is within the region carved out by the quality specifications. Situations like that in Figure 2 can then be used to create a design space in the following way: a design space (for a process with multiple critical quality responses) can be thought of as the collection of all controllable process factors (x-points) such that an acceptably high percentage of the (multivariate) predictive distribution of critical quality responses falls within the region outlined by the (multiple) quality specifications.

Figure 2. Multivariate distributions and the role they play in the design space reliability.  Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk) ) results in a process with a higher reliability.

Slight modifications of the above definition may be needed to accommodate situations involving feedback/feedforward control, etc. But a predictive distribution of the quality responses is required nonetheless in order to address the question of “How much assurance?”

 

From Bayesian to Bootstrapping

Viewing Figures 1 and 2 shows that one can get a predictive distribution (for the quality responses in Y) if one has access to the various input distributions, in particular the multivariate distribution representing the uncertainty due to the unknown model parameters. But how can one compute the distribution representing the uncertainty due to the unknown model parameters? One possible approach is to use Bayesian statistics.

The Bayesian statistical paradigm provides a conceptually straightforward way to construct a multivariate statistical distribution for the unknown model parameters associated with a manufacturing process. This statistical distribution for the unknown model parameters is known as the “posterior” distribution by Bayesian statisticians. The resulting distribution for Y (in Figures 1 and 2) is called the posterior predictive distribution. This distribution accounts for uncertainty due to common-cause process variation as well as the uncertainty due to unknown model parameters. It also takes into account the correlation structure among multiple quality response types. In addition, the Bayesian approach also allows for the inclusion of prior information (e.g., from previous pilot plant and laboratory experiments). In fact, use of the posterior predictive distribution is an effective way to do process optimization with multiple responses [3]. An excellent overview of these Bayesian methods is given in the recent book Process Optimization: A Statistical Approach [4]. An application of the Bayesian approach to Design space construction is given by this author in [5].

An alternative approach for obtaining a distribution to represent the uncertainty due to the unknown model parameters involves a concept called “parametric bootstrapping”. This concept is somewhat more transparent than the Bayesian approach. A simple form of the parametric bootstrap approach can be described as follows. First, fit your process model to experimental data to estimate the unknown model parameters. Using computer simulation, simulate a new set of (artificial) responses so that you have new responses similar to the ones obtained from your real experiment. Using the artificial responses, estimate a new set of model parameters. Do this many more times (say, 10,000) to obtain many more sets of model parameter estimates. Each of these sets of model parameter estimates (obtained from each artificial data set) will be somewhat different, due to the variability in the data. The collection of all these sets of model parameter estimates forms a bootstrap distribution of the model parameters that expresses their uncertainty. An experiment with a large number of runs (i.e., a lot of information) will tend to yield a tighter distribution, expressing the fact that we have only a little uncertainty about the model parameters. But an experiment with only a little information will tend to yield a bootstrap distribution that is more spread out. This is particularly the case if the common cause variation in the process is also large.
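A minimal sketch of this simple parametric bootstrap, assuming a single-response linear model and a small invented data set, is given below; the design matrix, the “true” coefficients used to simulate the data, and the number of bootstrap repetitions are all illustrative assumptions.

```python
# Hypothetical sketch of the simple parametric bootstrap described above:
# fit a model, simulate artificial responses from the fitted model, refit to
# each artificial data set, and collect the refitted parameters as a
# distribution expressing parameter uncertainty. The single-response linear
# model and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Assumed experimental data: one controllable factor, one quality response.
X = np.column_stack([np.ones(12), np.linspace(-1, 1, 12)])   # design matrix
beta_true = np.array([50.0, 4.0])
y = X @ beta_true + rng.normal(scale=1.5, size=12)

# Step 1: fit the model and estimate the residual (common-cause) variation.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma_hat = resid.std(ddof=X.shape[1])

# Steps 2-4: repeatedly simulate artificial responses from the fitted model
# and refit, storing each set of parameter estimates.
n_boot = 10_000
boot_betas = np.empty((n_boot, X.shape[1]))
for b in range(n_boot):
    y_star = X @ beta_hat + rng.normal(scale=sigma_hat, size=len(y))
    boot_betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

# The spread of boot_betas approximates the uncertainty in the model parameters.
print("Bootstrap mean:", boot_betas.mean(axis=0))
print("Bootstrap sd:  ", boot_betas.std(axis=0))
```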

The parametric bootstrap distribution can then be used in place of the Bayesian posterior distribution to enable one to obtain a predictive distribution for the quality responses. This simple parametric bootstrap approach will approximate the Bayesian posterior distribution of model parameters, but it will tend to produce a parameter distribution that is smaller (i.e., a bit too small) than the one obtained using the Bayesian approach. More sophisticated parametric bootstrap approaches are possible [6] which may result in more accurate approximations to the Bayesian posterior distribution of model parameters.

Software Selection

 

Unfortunately, the DoE software available in many point-and-click statistics packages does not produce multivariate predictive distributions. It therefore does not provide a way to quantify “How much assurance” one can assign to the Design space. So clearly, easy-to-use software is a current bottleneck to allowing experimenters to produce Design spaces based upon multivariate predictive distributions.

However, there are some software packages that consulting statisticians can use to produce approximate predictive distributions. The SAS statistical package has a procedure called PROC MODEL. (It is part of SAS’s econometric suite of procedures.) This procedure will analyze multivariate response regression models. It has a Monte Carlo option which will produce simulations that allow one to sample from a predictive distribution. This is useful if the process you have can be modeled using PROC MODEL.

The R statistical programming language has functions which allow the user to do a Bayesian analysis for certain multivariate response regression models. In addition, R has a boot function which allows the user (in conjunction with other R statistical functions) to perform a parametric bootstrap analysis, which can eventually be incorporated into an R program for creating an approximate multivariate predictive distribution. In any case, the SAS and R software does require some statistical sophistication in order to create the appropriate programming steps. Easier-to-use point-and-click software is still very much needed to lower the computational hurdles for generating the multivariate predictive distributions needed for ICH Q8 design space development.

 

Putting It All Together

 

Viewing the multivariate world from our limited three-dimensional perspective is often a challenge. This is no different for a design space involving multiple controllable factors.  For the design space definition above, it is convenient to use a probability function, P(x), which assigns a probability (i.e., reliability measure) to each set of process operating conditions, denoted by x = (x1,…, xk). Here, P(x) is the probability that all of the critical quality responses simultaneously meet their specification limits. As stated previously, this probability can be thought of as the proportion of the predictive distribution contained within the region bounded by the quality specifications. The design space is then the set of all x such that P(x) is at least some acceptable reliability value. One can then plot P(x) vs. x to view the design space, provided that the dimension of x (i.e., k) is not too large. A simplified example discussed in [5] and [7] is briefly described below.
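Before turning to the example, here is an illustrative sketch of the P(x) calculation: for each candidate operating condition x on a grid, the probability of simultaneously meeting all specifications is estimated by Monte Carlo from an assumed bivariate predictive distribution, and the x-points exceeding an acceptable reliability level form a (one-dimensional, toy) design space. The predictive model, specifications, and the 0.90 reliability threshold are hypothetical.

```python
# Illustrative sketch of the P(x) idea: for each candidate operating condition x,
# estimate the probability that all quality responses simultaneously meet their
# specifications, then keep the x-points whose P(x) exceeds an acceptable
# reliability level. The two-response model, specs, and grid are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def predictive_draws(x, n=20_000):
    """Draw from an assumed bivariate predictive distribution at condition x."""
    mean = np.array([70.0 + 10.0 * x, 2.0 - 1.0 * x])       # response means vs. x
    cov = np.array([[4.0, -0.3], [-0.3, 0.09]])             # assumed covariance
    return rng.multivariate_normal(mean, cov, size=n)

spec_lo = np.array([75.0, -np.inf])
spec_hi = np.array([np.inf, 1.5])

grid = np.linspace(0.0, 1.0, 11)                            # candidate x settings
p_of_x = []
for x in grid:
    draws = predictive_draws(x)
    p = np.all((draws >= spec_lo) & (draws <= spec_hi), axis=1).mean()
    p_of_x.append(p)

design_space = [x for x, p in zip(grid, p_of_x) if p >= 0.90]
print("P(x) over grid:", np.round(p_of_x, 3))
print("x-points in the design space (P(x) >= 0.90):", design_space)
```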

In an early phase synthetic chemistry study, a prototype design space using a predictive distribution was created for an active pharmaceutical ingredient (API). The (four) quality responses considered were: “% starting material isomer” (<0.15%), “% product isomer” (<2%), “% of impurity #1” (<3.5%), and “% API purity” (>95%). Four controllable experimental factors (x = (x1,…, x4)) were used in a DoE to model how they (simultaneously) influenced the four quality responses. These factors were x1 = temperature (°C), x2 = pressure (psi), x3 = catalyst loading (eq.), and x4 = reaction time (hrs).

A Bayesian statistical analysis was used to construct the multivariate predictive distribution for the four quality responses. For the above quality specifications (in parentheses), and for various x-points, the proportion of the multivariate predictive distribution inside the specification region was computed using many Monte Carlo simulations. In other words, for each x-point the corresponding P(x) value was computed. A multipanel contour plot of P(x) vs. the four experimental factors is shown in Figure 3 below. The white regions indicate the regions of larger reliability for simultaneously meeting all of the process specifications. Although these absolute reliability levels are less than ideal, this example is useful for illustration purposes.

Figure 3. Multipanel contour plots of the DS for the early phase synthetic chemistry example. (Used with permission from the Association for Quantitative Management.)

As it turned out, the “optimal” factor-level configurations were such that the associated proportions of the predictive distributions were only about 60% to 70% inside of the quality specification region. Part of the reason for this is that the “input” distributions corresponding to the “unknown model parameters” and the “common cause variation” were too large. A more detailed analysis of this data, conducted in [7], indicates that if a larger experimental design had been used and the process variation reduced by 30%, the more reliable factor conditions would be associated with reliability measures at the 95% level and beyond. This shows the importance of reducing the variation of as many of the process “input” distributions (of the kind shown in Figures 1 and 2) as is possible.

In the case where the dimension of x is not small, a (read-only) sortable spreadsheet could be used to make informed movements within such a design space [5]. While the spreadsheet approach does not provide a high-level (i.e., bird’s eye) view of the design space (as in Figure 3), it does show (locally) how P(x) changes as various controllable factors are changed. In theory, it may also be possible to utilize mathematical optimization routines applied to P(x) to move optimally within a design space whose dimension is not small.

Developing Design Spaces Based Upon Predictive Distributions: A Summary

 

Emphasize efficient experimental designs, particularly for multiple unit operations. For design space construction, clearly it is desirable to have good data that can be efficiently collected. As such, good experimental design and planning is critical. It is particularly useful to have good data that can be efficiently collected across various unit operations in a multi-step process. The performance of one unit operation may depend upon various multiple aspects of previous unit operations. When viewed as a whole, a process may have a variety of interacting variables that span its unit operations. This is an aspect of process optimization and design space construction that requires more research by statisticians and their clients (chemical engineers, chemometricians, pharmaceutical scientists, etc.). Some work has been done [8] but more is needed.  Poor experimental designs can be costly and possibly lead to biases and/or too much uncertainty about unknown model parameters. This is not helpful for developing a good multivariate predictive distribution with which to construct a design space.

Depend upon easy-to-use statistical software for multivariate predictive distributions. As mentioned previously in this article, the availability of easy-to-use statistical software for creating multivariate predictive distributions is a key bottleneck for the development of design spaces that quantify “how much assurance” can be associated with meeting quality specifications. More broadly, it has been shown recently that multivariate predictive distributions can be very useful for process optimization involving multiple responses [3], [4]. Monte Carlo simulation software applications such as @Risk and Crystal Ball provide point-and-click software for process simulation. However, a key issue that still remains involves producing a multivariate distribution that reflects the uncertainty of unknown model parameters associated with a process. Producing such a distribution is not always statistically simple.

The Bayesian approach provides a unifying paradigm, but the computational issues do require careful assessment of model and distributional assumptions, and in many cases, careful judgement involving convergence of algorithms. The parametric bootstrap approach is more transparent but may produce design spaces that are a bit too large, unless certain technical refinements are brought to bear. Nonetheless, the statistical methods exist to produce good multivariate distributions that represent the uncertainty of unknown model parameters. As always, modeling and inference need to be done with care, using data from appropriately designed experiments.

Use computer simulation to better understand reliability and risk assessments for complex processes. If one can state that they understand a process quantitatively, even a stochastic process, then one should expect that they can simulate such a process reasonably well. Given that many pharmaceutical processes may involve a series of complex unit operations, computer simulation may prove to be helpful for understanding how various process factors and conditions combine to influence multivariate quality responses. With proper care, computer simulation may help process engineers and scientists to better understand the multivariate nature of their multi-step manufacturing procedures. Of course, experimental validation of computer simulation findings with real data may still be needed. In addition, process risk assessments based largely upon judgement (e.g., through tools such as FMEA) can be enhanced through the use of better quantitative methods such as probability measures [9] (e.g., obtained through computer simulations). The issue of ICH Q8 design space provides a further incentive to utilize and enhance such computer simulation tools.

Have a clear understanding of the processes involved and lurking risks, not just the blind use of mathematical models. We must always keep in mind that our statistical predictions, such as multivariate predictive distributions, are based upon mathematical models that are approximations to the real-world environment. How much assurance we can accurately attribute to a design space will be influenced by the predictive distribution, and by all of the modeling and input information (e.g., input distributions) that went into developing it. Generally speaking, of course, all decisions and risk measurements based upon statistical methods are influenced to varying degrees by the mathematical modeling assumptions used; quantifications of design space assurance are fundamentally no different.

However, seemingly good predictive modeling (along with accurate statistical computation) may not be sufficient to provide a design space with long term, high-level assurance of meeting quality specifications. The issue of process robustness is also important, not only for design space construction but in other areas of QbD as well.  This issue has many facets and I believe it needs to be given more emphasis in pharmaceutical product development and manufacturing. So as not to go too far on a tangent, I will only briefly summarize some of the issues involved.

In quality engineering, making a process robust to noisy input variables is known as “robust parameter design” (RPD). The Japanese industrial engineer Genichi Taguchi pioneered this approach to quality improvement. The basic idea is to configure the controllable (i.e., non-noisy) process factors in such a way as to dampen the influence of the noisy process variables (e.g., raw material attributes or temperature/pressure fluctuations in the process). A convenient aspect of the predictive distribution approach to design space, and to process optimization in general, is that it provides a technically and computationally straightforward way to do RPD. We simply simulate the noise variable distribution (shown in Figures 1 and 2) and then configure the controllable factors (x-points) to increase the probability of meeting the quality specifications. If the controllable factors are able to noticeably dampen the influence of the noise variables, the probability that the quality responses will meet specifications will typically increase (assuming that the mean responses meet their specifications). See [4], [5], and [10] for RPD examples involving multivariate predictive distributions.
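The short sketch below illustrates the RPD idea under an assumed model with a control-by-noise interaction: the noise variable distribution is simulated, and the probability of meeting a hypothetical specification is compared at two controllable settings, one of which dampens the noise effect. All coefficients and limits are invented for illustration.

```python
# Minimal sketch of the robust parameter design (RPD) idea: simulate the noise
# variable distribution and compare the probability of meeting specifications at
# two controllable settings, one of which dampens the noise effect through an
# interaction. The model and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
z = rng.normal(0.0, 1.0, size=n)            # noise variable (e.g., raw-material attribute)
e = rng.normal(0.0, 0.5, size=n)            # common-cause variation

def prob_in_spec(x):
    # Control-by-noise interaction: a larger x shrinks the noise coefficient.
    y = 80.0 + 2.0 * x + (1.5 - 1.2 * x) * z + e
    return np.mean((y >= 78.0) & (y <= 85.0))

print("P(meet spec) at x = 0:", prob_in_spec(0.0))
print("P(meet spec) at x = 1:", prob_in_spec(1.0))   # noise dampened -> higher probability
```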

Less well known, however, are two more subtle issues that can cause problems with predictive distributions. These are lurking variables and heavy-tailed distributions.  Process engineers and scientists need to brainstorm and test various possibilities for a change in the process or its inputs that could increase the risk that the predictive distribution is overly optimistic or is not stable over time.

Some predictive distributions may have what are called “heavy tails”. (The degree of “heavy tailedness” is called kurtosis by statisticians.) We need to be careful with such distributions as they are more likely to suddenly produce values far from the center of the distribution, than for a normal (i.e., Gaussian) distribution.

If the process can be simulated on a computer, sensitivity analyses can be done to assess the effect of various shocks to the system or changes to the input or predictive distributions, such as heavier tails. An interesting overview of these two issues and of how quality and risk combine can be found in [11].
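As a quick illustrative sensitivity check of the heavy-tail issue, the snippet below re-computes a hypothetical in-specification probability after swapping normally distributed common-cause variation for Student-t variation with the same scale parameter; the drop in probability shows how heavier tails can make an assurance figure overly optimistic. All numbers are assumed.

```python
# Quick illustrative sensitivity check for heavy tails: re-run the assurance
# calculation with Student-t (heavy-tailed) variation in place of the normal
# assumption and compare the probabilities. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
mean, scale = 80.0, 1.0
spec_lo, spec_hi = 77.0, 83.0

y_normal = mean + scale * rng.standard_normal(n)
y_heavy = mean + scale * rng.standard_t(df=3, size=n)   # heavier tails, same scale parameter

p_normal = np.mean((y_normal >= spec_lo) & (y_normal <= spec_hi))
p_heavy = np.mean((y_heavy >= spec_lo) & (y_heavy <= spec_hi))
print(f"P(in spec) with normal tails: {p_normal:.3f}   with heavy tails: {p_heavy:.3f}")
```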

In conclusion, the ability to understand randomness and think stochastically is important, as multivariate random variation pervades all complex production processes. Given that we are forced to deal with randomness (in multivariate form, no less), Monte Carlo simulation has become a useful way to gain insight into the combined effects of controllable and random effects present in a complex production process. (Interested readers may want to visit The American Society for Quality’s web site on Probabilistic Technology, available at http://www.asq.org/communities/probabilistic-technology/index.html.) Computer simulation can help build our intuition for understanding stochastic processes; such intuition in humans is not always on the mark, and we can all be fooled by randomness. See, for example, the book by Taleb [12].


References
1. ICH (2005). “ICH Harmonized Tripartite Guideline: Pharmaceutical Development, Q8.”

2. Peterson, J. J., Snee, R. D., McAllister, P. R., Schofield, T. L., and Carella, A. J. (2009), “Statistics in Pharmaceutical Development and Manufacturing” (with discussion), Journal of Quality Technology, 41, 111-147.

3. Peterson, J. J. (2004), “A Posterior Predictive Approach to Multiple Response Surface Optimization”, Journal of Quality Technology, 36, 139-153.

4. Del Castillo, E. (2007), Process Optimization: A Statistical Approach, Springer, N.Y.

5. Peterson, J. J. (2008), “A Bayesian Approach to the ICH Q8 Definition of Design Space”, Journal of Biopharmaceutical Statistics, 18, 959-975.

6. Davison, C. and Hinkley, D.V. (1997), Bootstrap Methods and Their Application, Cambridge University Press, Cambridge, UK.

7. Stockdale, G. W. and Cheng, A. (2009), “Finding Design Space and a Reliable Operating Region using a Multivariate Bayesian Approach with Experimental Design”, Quality Technology and Quantitative Management, (to appear).

8. Perry, L. A., Montgomery, D. C., and Fowler, J. W. (2002), “Partition Experimental Designs for Sequential Processes: Part II – Second Order Models”, Quality and Reliability Engineering International, 18, 373-382.

9. Claycamp, H. G. (2008). “Room for Probability in ICH Q9: Quality Risk Management”, Institute of Validation Technology conference: Pharmaceutical Statistics 2008 Confronting Controversy, March 18-19, Arlington, VA

10. Miro-Quesada, G., del Castillo, E., and Peterson, J.J., (2004) “A Bayesian Approach for Multiple Response Surface Optimization in the Presence of Noise Variables”, Journal of Applied Statistics, 31, 251-270

11. Kenett, Ron S. and Tapiero, Charles S. (2009), “Quality, Risk and the Taleb Quadrants”, presented at the IBM Quality & Productivity Research Conference, June 3, 2009. Available at SSRN: http://ssrn.com/abstract=1433490

12. Taleb, Nassim (2008) Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Random House, New York.
