New ICH Guidelines: ICH Q13 on Continuous Manufacturing and ICH Q14 on AQbD (Analytical Target Profile and Quality by Design)



In a press release dated 22 June 2018, the International Council for Harmonisation (ICH) announced that its Assembly has agreed to begin working on two new topics for ICH harmonisation:

Analytical Procedure Development and Revision of Q2(R1) Analytical Validation (Q2(R2)/Q14)
Continuous Manufacturing (Q13)

The long-anticipated revision of ICH Q2(R1), “Guideline on Validation of Analytical Procedures: Text and Methodology”, has been approved, and the work plan is scheduled to commence in Q3 2018. The new guidelines are intended to be consistent with ICH Q8(R2), Q9, Q10, Q11 and Q12.

The AQbD approach is important for collecting the information needed to understand and control the sources of variability of an analytical procedure by defining a control strategy.

The Analytical Target Profile (ATP) defines the objective of the test and its quality parameters. Performing validation (qualification) within the QbD framework provides sufficient confidence that the procedure will consistently generate analytical results meeting the ATP requirements.

So far, an analytical development guideline has been lacking; the new ICH development guideline is intended to fill this gap. Currently, analytical procedures are mainly validated against the classical validation parameters, and these procedures focus mainly on HPLC methods. This ICH topic therefore has top priority for the pharmaceutical industry. It is expected that the revision of the Q2(R1) guideline will help to implement new and innovative analytical methods.

For more details please read the complete ICH Press Release (Kobe, Japan, June 2018).

Report from the EMA-FDA QbD pilot program


In March 2011, the European Medicines Agency (EMA) and the United States Food and Drug Administration (US FDA) launched, under US-EU Confidentiality Arrangements, a joint pilot program for the parallel assessment of applications containing Quality by Design (QbD) elements.

The aim of this program was to facilitate the consistent implementation of QbD concepts introduced through International Council for Harmonisation (ICH) Q8, Q9 and Q10 documents and harmonize regulatory decisions to the greatest extent possible across the two regions.

To facilitate this, assessors/reviewers from the US and EU exchanged their views on the implementation of ICH concepts and relevant regulatory requirements, using actual applications whose sponsors requested participation in the program. The program was initially launched for three years. Following its first phase, both agencies agreed to extend it for two more years to facilitate further harmonization of pertinent QbD-related topics.

The program officially concluded in April 2016. During this period, the agencies received 16 requests to participate. One submission was rejected because the approach presented was not limited to QbD applications, and another application was not reviewed because it was never filed by the applicant.

In total, two Marketing Authorisation Applications (MAA)/New Drug Applications (NDA), three variation/supplements and nine scientific advice applications were evaluated under this program. One MAA/NDA was assessed under the parallel assessment pathway, with the rest following the consultative advice route. Based on the learnings during the pilot, FDA and EMA jointly developed and published three sets of Question and Answer (Q&A) documents.

These documents also addressed comments from the Japanese Pharmaceuticals and Medical Devices Agency (PMDA), which participated as an observer, offering input to further facilitate harmonization. The objective of these Q&A documents was to generate review guides for the assessors/reviewers and to communicate pilot outcomes to academia and industry.

Additionally, these documents captured any differences in regulatory expectations due to regional requirements, e.g. the inclusion of process validation information in the dossier. The following topics were covered in the three Q&A documents:

Q&A (1), published on Aug 20, 2013, included the following topics: (a) Quality target product profile (QTPP) and critical quality attributes (CQA), (b) Criticality, (c) Level of detail in manufacturing process descriptions, and (d) QbD for analytical methods [1].

Q&A (2), published on Nov 1, 2013, covered Design Space Verification, including the definition, presentation, justification (including potential scale-up effects) and verification of design spaces for both active substances and finished products [2].

Q&A (3), published on Dec 19, 2014, included the following topics: (a) Level of detail in the dossier regarding Risk Assessment (RA), and (b) Level of detail in the dossier regarding Design of Experiments (DOE) and Design Space [3].


Additionally, the FDA-EMA pilot provided the agencies an opportunity to harmonize regulatory expectations for the following precedent-setting applications, which were reviewed under the consultative advice pathway:

– The first continuous manufacturing (CM) based application submitted to both agencies. Based on the learnings from this application, the following areas related to CM were harmonized: batch definition; control of excipients; material traceability; strategy for segregation of nonconforming material; real-time release testing (RTRT) methods and prediction models; and good manufacturing practice (GMP) considerations for RTRT, validation strategy, models, and control strategy.

– A post-approval supplement that included a broad-based post-approval change management plan/comparability protocol.

The agencies reached a harmonized position on the expected level of detail in the protocol and on considerations for implementing a risk-based approach to evaluate the changes proposed in the protocol. In line with the scope of the QbD pilot program, joint presentations of key findings were publicly presented and discussed with stakeholders at different conferences.

These included the joint EMA/Parenteral Drug Association QbD workshop [4] organized in 2014, which also included participation from FDA and PMDA.

Overall, it is concluded that, on the basis of the applications submitted for the pilot, there is solid alignment between both Agencies regarding the implementation of multiple ICH Q8, Q9 and Q10 concepts. The FDA/EMA QbD pilot program opened up a platform for continuous dialogue which may lead to further communication on areas of mutual interest to continue the Agencies’ support for innovation and global development of medicines of high quality for the benefit of patients.

Both agencies are currently exploring potential joint activities with specific focus on continuous manufacturing, additional emerging technologies, and expedited/accelerated assessments (e.g. PRIME, Breakthrough). Additionally, EMA and FDA are hosting experts from each other’s organisations to facilitate dialog and explore further opportunities.

References:

1. EMA-FDA pilot program for parallel assessment of Quality-by-Design applications: lessons learnt and Q&A resulting from the first parallel assessment

2. FDA-EMA Questions and Answers on Design Space Verification

3. FDA-EMA Questions and Answers on level of detail in the regulatory submissions

4. Joint European Medicines Agency/Parenteral Drug Association quality-by-design workshop




The benefits of quality by design (QbD) concepts related to both product (ICH Q8) [1] and drug substance (ICH Q11) [2] are well established, particularly with regard to the potential to use knowledge to effect process changes without major regulatory hurdles (e.g., revalidation, regulatory filing). Less well established, but potentially of significant value, is the application of the same concepts to analytical methods.

Analytical methods play an obvious key role in establishing the quality of the final product: they establish conformance with product acceptance criteria (i.e., specifications) and indicate product integrity through stability testing. Analytical methods are validated, like manufacturing processes, but what if operational ranges could be established during method validation, when fitness for purpose is demonstrated?

Would it be possible to drive method improvement, especially post-validation, in the same way that continuous improvement is a key driver for manufacturing processes? Despite this attractive “value proposition”, there is to date little evidence that, as an industry, this is being realized in practice.

The result is that many methods used in a QC environment lag well behind technical developments in the analytical field, often leading to the use of suboptimal procedures that impact adversely on the efficiency within the laboratory. The challenge is to create an environment whereby such changes can be made efficiently and effectively.

One approach is to apply the principles of ICH Q8-Q10, delivering a science- and risk-based approach to the development and validation of analytical methods and establishing a method operable design region (MODR) within which changes can be made. Such a framework is illustrated in Figure 1.



This starts with a definition of the effective requirements of the method, an analytical target profile (ATP), which takes the specific form of acceptance criteria for method performance. Such a process can be used not only to establish effective analytical methods but also to support continual improvement, specifically within the MODR. However, such a concept is potentially limited in that changes are expected to remain within the MODR.

Such restrictions may inhibit continuous improvement. A prime example is a change of stationary phase, or a change from HPLC to UPLC; both fall outside the original MODR. Historically such changes have been notoriously difficult and have therefore often been avoided unless imperative. A recent publication [13] examined this, presenting a method-enhancement concept that would allow minor changes outside of the MODR. This is based on the realization that the performance of any analytical method is verified through a system suitability test (SST); such tests ensure the method’s fitness for purpose.

Karlsson et al. stated that changes outside of the initial MODR may be possible provided that the method principle is unchanged, the failure modes are the same, and the SST is capable of detecting them, both for the original method and for any method changes that fall outside of the original MODR. Put simply, changes can be made provided the SST criteria are passed. A change from HPLC to UPLC was used to illustrate this. Revalidation of the method is still required, but critically such changes do not require regulatory interaction and can be managed through internal quality systems.

(1) ICH Q8: Pharmaceutical Development.
(2) ICH Q11: Development and Manufacture of Drug Substances (Chemical Entities and Biotechnological/Biological Entities) (Aug 2009).




What Your ICH Q8 Design Space Needs: A Multivariate Predictive Distribution



A multivariate predictive distribution quantifies the level of quality assurance in a design space. “Parametric bootstrapping” can help simplify early analysis and complement Bayesian methods.

The ICH Q8 core definition of design space is by now somewhat familiar: “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality” [1]. This definition is ripe for interpretation. The phrase “multidimensional combination and interaction” underscores the need to utilize multivariate analysis and factorial design of experiments (DoE), while the words “input variables (e.g., material attributes) and process parameters” remind us of the importance of measuring the right variables.

However, in presentations and articles discussing design space, not much focus has been given to the key phrase “assurance of quality”. This does not seem justified, given that guidance documents such as ICH Q8, Q9, Q10, PAT, etc. are inundated with the words “risk” and “risk-based.” For any ICH Q8 design space constructed, the core definition surely raises the question, “How much assurance?” [2]. How do we know if we have a “good” design space if we do not have a method for quantifying assurance in a scientifically coherent manner?


The Flaws of Classical MVA

Classical multivariate analysis and DoE methodology fall short of providing convenient tools to answer the question “How much assurance?”, for two reasons. The first is that multivariate analysis and DoE have historically focused on making inferences primarily about response means. But simply knowing that the response means of a process meet quality specifications is not sufficient to conclude that the next batch of drug product will meet them: there will always be batch-to-batch variation about these means. Building a design space based upon overlapping mean responses will therefore result in one that is too large, harboring operating conditions with a low probability of meeting all of the quality specifications.

However, if we can quantify the entire (multivariate) predictive distribution of the process quality responses as a function of “input variables (e.g., material attributes) and process parameters”, then we can compute the probability of a future batch meeting the quality specifications. The multivariate predictive distribution of the process quality responses incorporates all of the information about the means, variation, and correlation structure among the response types.

The second, more technically subtle, reason that classical multivariate analysis and DoE methodology do not provide straightforward tools to construct a proper design space is that, beyond inference about response means, they are oriented towards the construction of prediction intervals or regions. For a process with multiple critical quality responses, it is awkward to try to use a classical 95% prediction region (which is not rectangular in shape) to compare against a (rectangular) set of specifications for multiple quality responses. On the other hand, using individual prediction intervals does not take into account the correlation structure among the quality responses. What is needed instead is a “multivariate predictive distribution” for the quality responses. The proportion of this predictive distribution that sits inside the rectangular set of quality specifications is then simply a quantification of “how much assurance”.
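For instance, with a made-up bivariate normal predictive distribution and made-up specification limits (none of these numbers come from the article), the proportion inside the rectangular specification region can be estimated by simple Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multivariate predictive distribution for two correlated
# quality responses (names, means, and limits are illustrative only).
mean = np.array([99.0, 82.0])          # e.g. % assay, % dissolved
cov = np.array([[1.0, 0.6],
                [0.6, 4.0]])           # captures response correlation

# Rectangular specification region: lower/upper limit per response.
lower = np.array([97.0, 78.0])
upper = np.array([102.0, 90.0])

draws = rng.multivariate_normal(mean, cov, size=100_000)
in_spec = np.all((draws >= lower) & (draws <= upper), axis=1)
assurance = in_spec.mean()             # "how much assurance"
print(f"Estimated probability of meeting all specifications: {assurance:.3f}")
```

Note that the joint probability accounts for the correlation between the two responses, which separate univariate prediction intervals would ignore.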


The Stochastic Nature of Our Processes


Complex manufacturing processes are inherently stochastic. This means that while such processes may have underlying mechanistic model relationships between the input factors and process responses, those relationships are nonetheless embedded with random variation. Conceptually, for many complex manufacturing processes, even if “infinitely” accurate measurement devices could be used, such processes would still produce quality responses that vary from batch to batch, and possibly within a batch. This is why classical multivariate analysis and DoE methodology, which focus on inference for response means, present an insufficient tool set. This is not to say that such methods are not necessary; indeed they are. However, we need to better understand the multivariate stochastic nature of complex manufacturing processes in order to quantify “how much assurance” relative to an ICH Q8 design space.

The concept of a multivariate distribution is useful for understanding complex manufacturing processes, with regard to both input variables and response variables.  Figure 1 shows a hypothetical illustration of the relationships among various input variables and response variables. Notice that some of the input variables and the response variables are described by multivariate distributions. In Figure 1 we have a model which describes the relationships between the input variables, the common cause process variability and the quality responses. This relationship is captured by the multivariate mathematical function f=(f1,…,fr), which maps various multivariate input variables, x, z, e, θ  to the multivariate quality response variable, Y=(Y1,…,Yr). Here, x=(x1,…,xk) lists the controllable process factors (e.g. pressure, temperature, etc.), while z=(z1,…,zh) lists process variables which are noisy, such as input raw materials, process set-point deviations, and possible ambient environmental conditions (e.g. humidity).

The variable e=(e1,…,er) represents the common-cause random variability that is inherent to the process. It is natural that z and e be represented by (multivariate) distributions that capture the mean, variation about the mean, and correlation structure of these random variables. The parameter θ = (θ1,…θp) represents the list of unknown model parameters. While such a list can be thought of as composed of fixed unknown values, for purposes of constructing a predictive distribution, it may be easier to describe the uncertainty associated with these unknown model parameters by a multivariate distribution. The function f then transmits the uncertainty in the variables z, e, and θ to the response variables, Y. In other words, the “input distributions” for z, e, and θ combine to produce the predictive distribution for Y through the process model function f.
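As a minimal sketch of this propagation, the input distributions for z and e can be sampled and pushed through an invented process model f (the model form, factor values, and distributions below are illustrative assumptions; the model parameters θ are treated as known here for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Controllable process factors x, held at a fixed set point.
x1, x2 = 1.0, 0.5

# Noisy input z (e.g. a raw-material attribute) and common-cause
# variability e, each represented by a distribution as in Figure 1.
z = rng.normal(0.0, 0.2, size=n)
e = rng.multivariate_normal([0.0, 0.0],
                            [[4.0, 0.1],
                             [0.1, 0.01]], size=n)

# An invented process model f mapping (x, z, e) to two quality responses.
Y1 = 85.0 + 5.0 * x1 - 2.0 * x2 + 3.0 * z + e[:, 0]  # e.g. percent dissolved
Y2 = 1.0 - 0.3 * x1 + 0.2 * x2 + 0.5 * z + e[:, 1]   # e.g. friability

# The (Y1, Y2) pairs are draws from the predictive distribution of Y at
# this set point; their spread reflects the combined input distributions.
```

In a fuller treatment, uncertainty in θ would also be sampled (from a posterior or bootstrap distribution, as discussed below), widening the resulting predictive distribution.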

Figure 1. Multivariate distributions and the role they play in the design space reliability. Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk)) results in an unreliable process.

The controllable process factors x can then be used to move the distribution of the response variables Y so that it is better situated within the region bounded by the quality specifications. In Figure 1, the rectangular gray region in the lower left is the region bounded by the quality specifications for “percent dissolved” and “friability” for a tablet production process. Here, one can see that the process operating at set point x is not reliable, because only about 65% of the predictive distribution is within the region carved out by the quality specifications. (The concentric elliptical contours are labelled by the proportion of the predictive distribution inside each contour.)

In Figure 2, however, one can see that the operating set point x is associated with a much more reliable process, since about 90% of the predictive distribution is within the region carved out by the quality specifications. Situations like that in Figure 2 can then be used to create a design space in the following way: a design space (for a process with multiple critical quality responses) can be thought of as the collection of all controllable process factors (x-points) such that an acceptably high percentage of the (multivariate) predictive distribution of critical quality responses falls within the region outlined by the (multiple) quality specifications.

Figure 2. Multivariate distributions and the role they play in the design space reliability. Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk)) results in a process with a higher reliability.

Slight modifications of the above definition may be needed to accommodate situations involving feedback/feedforward control, etc. But a predictive distribution of the quality responses is required nonetheless in order to address the question of “How much assurance?”


From Bayesian to Bootstrapping

Figures 1 and 2 show that one can obtain a predictive distribution (for the quality responses in Y) if one has access to the various input distributions, in particular the multivariate distribution representing the uncertainty due to the unknown model parameters. But how can one compute that distribution? One possible approach is to use Bayesian statistics.

The Bayesian statistical paradigm provides a conceptually straightforward way to construct a multivariate statistical distribution for the unknown model parameters associated with a manufacturing process. This statistical distribution for the unknown model parameters is known as the “posterior” distribution by Bayesian statisticians. The resulting distribution for Y (in Figures 1 and 2) is called the posterior predictive distribution. This distribution accounts for uncertainty due to common-cause process variation as well as the uncertainty due to unknown model parameters. It also takes into account the correlation structure among multiple quality response types. In addition, the Bayesian approach allows for the inclusion of prior information (e.g., from previous pilot plant and laboratory experiments). In fact, use of the posterior predictive distribution is an effective way to do process optimization with multiple responses [3]. An excellent overview of these Bayesian methods is given in the recent book Process Optimization: A Statistical Approach [4]. An application of the Bayesian approach to design space construction is given by this author in [5].

An alternative approach for obtaining a distribution to represent the uncertainty due to the unknown model parameters involves a concept called “parametric bootstrapping”. This concept is somewhat more transparent than the Bayesian approach. A simple form of the parametric bootstrap can be described as follows. First, fit your process model to experimental data to estimate the unknown model parameters. Next, using computer simulation, simulate a new set of (artificial) responses similar to the ones obtained from your real experiment. Using the artificial responses, estimate a new set of model parameters. Repeat this many times (say 10,000) to obtain many sets of model parameter estimates. Each set of parameter estimates (obtained from each artificial data set) will be somewhat different, due to the variability in the data. The collection of all these sets of model parameter estimates forms a bootstrap distribution of the model parameters that expresses their uncertainty. An experiment with a large number of runs (i.e. a lot of information) will tend to yield a tighter distribution, expressing the fact that there is only a little uncertainty about the model parameters. An experiment with only a little information will tend to yield a bootstrap distribution that is more spread out, particularly if the common-cause variation in the process is also large.
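The steps above can be sketched in a few lines; the simple linear model, data, and constants below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "experiment": a simple linear process model y = b0 + b1*x.
x = np.linspace(0.0, 1.0, 12)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, size=x.size)

# Step 1: fit the model to the real data to estimate the parameters.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (x.size - 2))

# Steps 2-4: repeatedly simulate artificial responses from the fitted
# model, refit, and collect the resulting parameter estimates.
B = 10_000
boot = np.empty((B, 2))
for b in range(B):
    y_star = X @ beta_hat + rng.normal(0.0, sigma_hat, size=x.size)
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

# The spread of `boot` is the bootstrap distribution expressing the
# uncertainty in (b0, b1); more runs or less noise would tighten it.
```

The same recipe applies to multivariate response models; only the fitting and simulation steps become more elaborate.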

The parametric bootstrap distribution can then be used in place of the Bayesian posterior distribution to enable one to obtain a predictive distribution for the quality responses. This simple parametric bootstrap approach will approximate the Bayesian posterior distribution of model parameters, but it will tend to produce a parameter distribution that is smaller (i.e., a bit too small) than the one obtained using the Bayesian approach. More sophisticated parametric bootstrap approaches are possible [6] which may result in more accurate approximations to the Bayesian posterior distribution of model parameters.

Software Selection


Unfortunately, the DoE software available in many point-and-click statistics packages does not produce multivariate predictive distributions, and therefore does not provide a way to quantify how much assurance one can assign to a design space. Easy-to-use software is thus a current bottleneck preventing experimenters from producing design spaces based upon multivariate predictive distributions.

However, there are some software packages that consulting statisticians can use to produce approximate predictive distributions. The SAS statistical package has a procedure called PROC MODEL (part of SAS’s econometric suite of procedures). This procedure will analyze multivariate response regression models, and it has a Monte Carlo option that produces simulations allowing one to sample from a predictive distribution. This is useful if your process can be modeled using PROC MODEL.

The R statistical programming language has functions that allow the user to perform a Bayesian analysis for certain multivariate response regression models. In addition, R has a boot function that allows the user (in conjunction with other R statistical functions) to perform a parametric bootstrap analysis, which can be incorporated into an R program for creating an approximate multivariate predictive distribution. In any case, both SAS and R require some statistical sophistication to create the appropriate programming steps. Easier-to-use, point-and-click software is still very much needed to lower the computational hurdles for generating the multivariate predictive distributions needed for ICH Q8 design space development.


Putting It All Together


Viewing the multivariate world from our limited three-dimensional perspective is often a challenge. This is no different for a design space involving multiple controllable factors.  For the design space definition above, it is convenient to use a probability function, P(x), which assigns a probability (i.e., reliability measure) to each set of process operating conditions, denoted by x = (x1,…, xk). Here, P(x) is the probability that all of the critical quality responses simultaneously meet their specification limits. As stated previously, this probability can be thought of as the proportion of the predictive distribution contained within the region bounded by the quality specifications. The design space is then the set of all x such that P(x) is at least some acceptable reliability value. One can then plot P(x) vs. x to view the design space, provided that the dimension of x (i.e., k) is not too large. A simplified example discussed in [5] and [7] is briefly described below.
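As an illustration of this definition with a single controllable factor (the x-dependent predictive distribution and specification limits below are invented for the sketch), P(x) can be estimated by Monte Carlo over a grid, and the design space read off as the x-points where P(x) clears a reliability threshold:

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_in_spec(x, n=20_000):
    # Invented predictive distribution: normal with an x-dependent mean
    # and a fixed covariance, for two quality responses.
    mean = np.array([90.0 + 8.0 * x - 4.0 * x**2,  # response 1 (spec >= 90)
                     2.5 - 1.5 * x])               # response 2 (spec <= 2.0)
    cov = np.array([[4.0, 0.2],
                    [0.2, 0.25]])
    draws = rng.multivariate_normal(mean, cov, size=n)
    ok = (draws[:, 0] >= 90.0) & (draws[:, 1] <= 2.0)
    return ok.mean()  # P(x): probability that all specs are met at x

# Design space: all x-points whose reliability P(x) is acceptably high.
grid = np.linspace(0.0, 2.0, 21)
p = np.array([prob_in_spec(xi) for xi in grid])
design_space = grid[p >= 0.8]
```

Plotting p against the grid gives a one-factor analogue of the contour plots discussed in this article; with several factors, the same P(x) is evaluated over a multidimensional grid.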

In an early-phase synthetic chemistry study, a prototype design space using a predictive distribution was created for an active pharmaceutical ingredient (API). The four quality responses considered were: “% starting material isomer” (<0.15%), “% product isomer” (<2%), “% of impurity #1” (<3.5%), and “% API purity” (>95%). Four controllable experimental factors (x = (x1,…, x4)) were used in a DoE to model how they (simultaneously) influenced the four quality responses. These factors were x1 = temperature (°C), x2 = pressure (psi), x3 = catalyst loading (eq.), and x4 = reaction time (hrs).

A Bayesian statistical analysis was used to construct the multivariate predictive distribution for the four quality responses. For the above quality specifications (in parentheses), and for various x-points, the proportion of the multivariate predictive distribution inside the specification region was computed using many Monte Carlo simulations. In other words, for each x-point the corresponding P(x) value was computed. A multipanel contour plot of P(x) vs. the four experimental factors is shown in Figure 3 below. The white regions indicate the regions of larger reliability for simultaneously meeting all of the process specifications. Although these absolute reliability levels are less than ideal, this example is useful for illustration purposes.

Figure 3. Multipanel contour plots of the DS for the early phase synthetic chemistry example. (Used with permission from the Association for Quantitative Management.)

As it turned out, the “optimal” factor-level configurations were such that the associated proportions of the predictive distributions were only about 60% to 70% inside of the quality specification region. Part of the reason for this is that the “input” distributions corresponding to the unknown model parameters and the common-cause variation were too large. A more detailed analysis of these data, conducted in [7], indicates that if a larger experimental design had been used and the process variation reduced by 30%, the more reliable factor conditions would have been associated with reliability measures at the 95% level and beyond. This shows the importance of reducing the variation of as many of the process “input” distributions (of the kind shown in Figures 1 and 2) as possible.

In the case where the dimension of x is not small, a (read-only) sortable spreadsheet could be used to make informed movements within such a design space [5]. While the spreadsheet approach does not provide a high-level (i.e., bird’s eye) view of the design space (as in Figure 3), it does show (locally) how P(x) changes as various controllable factors are changed. In theory, it may also be possible to utilize mathematical optimization routines applied to P(x) to move optimally within a design space whose dimension is not small.

Developing Design Spaces Based Upon Predictive Distributions: A Summary


Emphasize efficient experimental designs, particularly for multiple unit operations. For design space construction, it is clearly desirable to have good data that can be efficiently collected, so good experimental design and planning are critical. This is particularly true across the various unit operations of a multi-step process: the performance of one unit operation may depend upon multiple aspects of previous unit operations, and, when viewed as a whole, a process may have a variety of interacting variables that span its unit operations. This is an aspect of process optimization and design space construction that requires more research by statisticians and their clients (chemical engineers, chemometricians, pharmaceutical scientists, etc.). Some work has been done [8], but more is needed. Poor experimental designs can be costly and can lead to biases and/or too much uncertainty about the unknown model parameters, which is not helpful for developing a good multivariate predictive distribution with which to construct a design space.

Depend upon easy-to-use statistical software for multivariate predictive distributions. As mentioned previously in this article, the availability of easy-to-use statistical software for creating multivariate predictive distributions is a key bottleneck for the development of design spaces that quantify “how much assurance” can be associated with meeting quality specifications. More broadly, it has been shown recently that multivariate predictive distributions can be very useful for process optimization involving multiple responses [3], [4]. Monte Carlo simulation software applications such as @Risk and Crystal Ball provide point-and-click software for process simulation. However, a key issue that still remains involves producing a multivariate distribution that reflects the uncertainty of unknown model parameters associated with a process. Producing such a distribution is not always statistically simple.

The Bayesian approach provides a unifying paradigm, but the computational issues do require careful assessment of model and distributional assumptions, and in many cases, careful judgement involving convergence of algorithms. The parametric bootstrap approach is more transparent but may produce design spaces that are a bit too large, unless certain technical refinements are brought to bear. Nonetheless, the statistical methods exist to produce good multivariate distributions that represent the uncertainty of unknown model parameters. As always, modeling and inference need to be done with care, using data from appropriately designed experiments.

Use computer simulation to better understand reliability and risk assessments for complex processes. If one claims to understand a process quantitatively, even a stochastic process, then one should expect to be able to simulate it reasonably well. Given that many pharmaceutical processes involve a series of complex unit operations, computer simulation may prove helpful for understanding how various process factors and conditions combine to influence multivariate quality responses. With proper care, computer simulation may help process engineers and scientists better understand the multivariate nature of their multi-step manufacturing procedures. Of course, experimental validation of computer simulation findings with real data may still be needed. In addition, process risk assessments based largely upon judgement (e.g., through tools such as FMEA) can be enhanced through the use of better quantitative methods such as probability measures [9] (e.g., obtained through computer simulations). The issue of ICH Q8 design space provides a further incentive to utilize and enhance such computer simulation tools.
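As a hedged illustration in Python (the process model, parameter values, and attribute names are invented for this sketch, not taken from any real process), a simple two-step simulation shows how variability propagates from one unit operation to the next:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-step process: the intermediate attribute from unit
# operation 1 feeds unit operation 2, so raw-material noise propagates
# through to the final quality response.
n = 50_000
moisture_in = rng.normal(5.0, 0.4, n)  # raw-material attribute (noise), % w/w

# Unit operation 1 (e.g., granulation): intermediate particle size, microns
granule_size = 120.0 + 8.0 * (moisture_in - 5.0) + rng.normal(0.0, 3.0, n)

# Unit operation 2 (e.g., compression): final dissolution response, %
dissolution = 85.0 - 0.1 * (granule_size - 120.0) + rng.normal(0.0, 1.0, n)

# End-to-end probability that the final response meets its specification
p_ok = float(np.mean(dissolution >= 82.0))
print(round(p_ok, 3))
```

Even this toy chain makes the point of the recommendation above: the distribution of the final response cannot be judged from unit operation 2 alone.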

Have a clear understanding of the processes involved and the lurking risks, not just the blind use of mathematical models. We must always keep in mind that our statistical predictions, such as multivariate predictive distributions, are based upon mathematical models that are approximations to the real-world environment. How much assurance we can accurately attribute to a design space will be influenced by the predictive distribution and all of the modeling and input information (e.g., input distributions) that went into developing it. Generally speaking, all decisions and risk measurements based upon statistical methods are influenced to varying degrees by the mathematical modeling assumptions used; quantifications of design space assurance are fundamentally no different.

However, seemingly good predictive modeling (along with accurate statistical computation) may not be sufficient to provide a design space with long term, high-level assurance of meeting quality specifications. The issue of process robustness is also important, not only for design space construction but in other areas of QbD as well.  This issue has many facets and I believe it needs to be given more emphasis in pharmaceutical product development and manufacturing. So as not to go too far on a tangent, I will only briefly summarize some of the issues involved.

In quality engineering, making a process robust to noisy input variables is known as “robust parameter design” (RPD). The Japanese industrial engineer Genichi Taguchi pioneered this approach to quality improvement. The basic idea is to configure the controllable (i.e., non-noisy) process factors in such a way as to dampen the influence of the noisy process variables (e.g., raw material attributes or temperature/pressure fluctuations in the process). A convenient aspect of the predictive distribution approach to design space, and to process optimization in general, is that it provides a technically and computationally straightforward way to do RPD. We simply simulate the noise variable distribution (shown in Figures 1 and 2) and then configure the controllable factors (x-points) to increase the probability of meeting the quality specifications. If the controllable factors are able to noticeably dampen the influence of the noise variables, the probability that the quality responses will meet specifications will typically increase (assuming that the mean responses meet their specifications). See [4], [5], and [10] for RPD examples involving multivariate predictive distributions.
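A small Python sketch of the RPD idea, using a deliberately simple and entirely hypothetical model in which the controllable factor x interacts with the noise variable z, so that one setting of x cancels the transmitted noise:

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_meet_spec(x, n=100_000):
    """Probability that y stays within 100 +/- 1.5 at controllable setting x.

    Hypothetical model: y = 100 + (2 - x) * z + e, where z is a noise
    variable; at x = 2 the noise term vanishes entirely.
    """
    z = rng.normal(0.0, 1.0, n)  # noise variable draws
    y = 100.0 + (2.0 - x) * z + rng.normal(0.0, 0.5, n)
    return float(np.mean(np.abs(y - 100.0) <= 1.5))

for x in (0.0, 1.0, 2.0):
    print(x, round(prob_meet_spec(x), 3))  # probability rises as noise is dampened
```

Configuring x to maximize this probability is exactly the "simulate the noise, then tune the controllables" recipe described above.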

Less well known, however, are two more subtle issues that can cause problems with predictive distributions. These are lurking variables and heavy-tailed distributions.  Process engineers and scientists need to brainstorm and test various possibilities for a change in the process or its inputs that could increase the risk that the predictive distribution is overly optimistic or is not stable over time.

Some predictive distributions may have what are called “heavy tails”. (The degree of heavy-tailedness is measured by what statisticians call kurtosis.) We need to be careful with such distributions, as they are more likely than a normal (i.e., Gaussian) distribution to suddenly produce values far from the center of the distribution.

If the process can be simulated on a computer, sensitivity analyses can be done to assess the effect of various shocks to the system or changes to the input or predictive distributions, such as heavier tails. An interesting overview of these two issues and of how quality and risk combine can be found in [11].
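The effect of heavier tails is easy to see by simulation. The Python sketch below (illustrative only) compares how often a normal distribution and a heavy-tailed Student-t distribution with 3 degrees of freedom produce draws beyond four units from center:

```python
import numpy as np

rng = np.random.default_rng(3)

# Compare tail behaviour: normal vs heavy-tailed Student-t (3 df).
n = 200_000
normal_draws = rng.normal(0.0, 1.0, n)
t_draws = rng.standard_t(df=3, size=n)

p_normal = float(np.mean(np.abs(normal_draws) > 4.0))
p_t = float(np.mean(np.abs(t_draws) > 4.0))
print(p_normal, p_t)  # the t draws exceed the threshold far more often
```

Swapping a t-distribution for the normal in a predictive simulation is one simple way to stress-test how sensitive a design space's assurance level is to tail assumptions.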

In conclusion, the ability to understand randomness and to think stochastically is important, as multivariate random variation pervades all complex production processes. Given that we are forced to deal with randomness (in multivariate form, no less), Monte Carlo simulation has become a useful way to gain insight into the combined effects of the controllable and random factors present in a complex production process. (Interested readers may want to visit the American Society for Quality’s web site on Probabilistic Technology.) Computer simulation can help build our intuition for understanding stochastic processes; such intuition in humans is not always on the mark, and we can all be fooled by randomness. See, for example, the book by Taleb [12].

1. ICH (2005). “ICH Harmonized Tripartite Guideline: Pharmaceutical Development, Q8.”

2. Peterson, J. J., Snee, R. D., McAllister, P. R., Schofield, T. L., and Carella, A. J. (2009), “Statistics in Pharmaceutical Development and Manufacturing” (with discussion), Journal of Quality Technology, 41, 111-147.

3. Peterson, J. J. (2004), “A Posterior Predictive Approach to Multiple Response Surface Optimization”, Journal of Quality Technology, 36, 139-153.

4. Del Castillo, E. (2007), Process Optimization: A Statistical Approach, Springer, N.Y.

5. Peterson, J. J. (2008), “A Bayesian Approach to the ICH Q8 Definition of Design Space”, Journal of Biopharmaceutical Statistics, 18, 959-975.

6. Davison, A. C. and Hinkley, D. V. (1997), Bootstrap Methods and Their Application, Cambridge University Press, Cambridge, UK.

7. Stockdale, G. W. and Cheng, A. (2009), “Finding Design Space and a Reliable Operating Region using a Multivariate Bayesian Approach with Experimental Design”, Quality Technology and Quantitative Management (to appear).

8. Perry, L. A., Montgomery, D. C., and Fowler, J. W. (2002), “Partition Experimental Designs for Sequential Processes: Part II – Second Order Models”, Quality and Reliability Engineering International, 18, 373-382.

9. Claycamp, H. G. (2008), “Room for Probability in ICH Q9: Quality Risk Management”, Institute of Validation Technology conference: Pharmaceutical Statistics 2008 Confronting Controversy, March 18-19, Arlington, VA.

10. Miro-Quesada, G., del Castillo, E., and Peterson, J. J. (2004), “A Bayesian Approach for Multiple Response Surface Optimization in the Presence of Noise Variables”, Journal of Applied Statistics, 31, 251-270.

11. Kenett, R. S. and Tapiero, C. S. (2009), “Quality, Risk and the Taleb Quadrants”, presented at the IBM Quality & Productivity Research Conference, June 3, 2009. Available at SSRN.

12. Taleb, N. N. (2008), Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Random House, New York.

////// ICH Q8, design space, multivariate predictive distribution, QA, parametric bootstrap, Bayesian methods.

QbD Sitagliptin



Application of On-Line NIR for Process Control during the Manufacture of Sitagliptin

Global Science, Technology and Commercialization, Merck Sharp & Dohme Corporation P.O. Box 2000, Rahway, New Jersey 07065, United States
Org. Process Res. Dev., 2016, 20 (3), pp 653–660
DOI: 10.1021/acs.oprd.5b00409
Publication Date (Web): February 12, 2016
Copyright © 2016 American Chemical Society



The transamination-chemistry-based process for sitagliptin is a through-process, which challenges the crystallization of the active pharmaceutical ingredient (API) in a batch stream composed of multiple components. Risk-assessment-based design of experiment (DoE) studies of particle size distribution (PSD) and crystallization showed that the final API PSD strongly depends on the seeding-point temperature, which in turn relies on the solution composition. To determine the solution composition, near-infrared (NIR) methods had been developed with partial least squares (PLS) regression on spectra of simulated process samples whose compositions were made by spiking each pure component, either sitagliptin free base (FB), water, isopropyl alcohol (IPA), dimethyl sulfoxide (DMSO), or isopropyl acetate (IPAc), into the process stream according to a DoE. An additional update to the PLS models was made by incorporating the matrix difference between simulated samples in lab and factory batches. Overall, at temperatures of 20–35 °C, the NIR models provided a standard error of prediction (SEP) of less than 0.23 wt % for FB in 10.56–32.91 wt %, 0.22 wt % for DMSO in 3.77–19.18 wt %, 0.32 wt % for IPAc in 0.00–5.70 wt %, and 0.23 wt % for water in 11.20–28.58 wt %. After passing the performance qualification, these on-line NIR methods were successfully established and applied for the on-line analysis of production batches for compositions prior to the seeding point of sitagliptin crystallization.
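As a side note on the figures reported above, the standard error of prediction is commonly computed as the root-mean-square deviation between reference and NIR-predicted values over validation samples (some authors bias-correct first). A minimal Python sketch with made-up numbers, not the paper's data:

```python
import numpy as np

# Hypothetical validation data: wt % by the reference assay vs the PLS model.
reference = np.array([15.2, 18.9, 22.4, 27.8, 31.0])
predicted = np.array([15.4, 18.7, 22.6, 27.9, 30.7])

# SEP computed here as the root-mean-square error of prediction (RMSEP).
sep = float(np.sqrt(np.mean((reference - predicted) ** 2)))
print(round(sep, 2))
```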


A biocatalytic manufacturing route for Januvia – Society of Chemical …

Nov 2, 2011 – 9 steps, 52% overall yield, >100 kg of sitagliptin prepared … FDA filings require “Quality by Design”: a way to allow process changes within …



Example of QbD Application in Japan

Aug 11, 2016 – QbD assessment experience in Japan … Number of Approved Products with QbD … Active Ingredient: Sitagliptin Phosphate Hydrate.



Glossary of Terms

Active Pharmaceutical Ingredient (API): A substance used in a finished pharmaceutical product, intended to furnish pharmacological activity or to otherwise have a direct effect in the diagnosis, cure, mitigation, treatment or prevention of disease, or to have a direct effect in restoring, correcting or modifying physiological functions in human beings.

Annual Product Reviews (APR): The Annual Product Review includes all data necessary for evaluation of the quality standards of each drug product to determine the need for changes in drug product specifications or manufacturing or control procedures. The APR is required by the U.S. Code of Federal Regulations.
ANVISA: The Brazilian Health Surveillance Agency (in Portuguese, Agência Nacional de Vigilância Sanitária) is a governmental regulatory body in Brazil. Similar to the FDA in the United States, it oversees the approval of drugs and other health products and regulates cosmetics, food products, and other health-related industries.
Biologics License Application (BLA): A request for permission to introduce, or deliver for introduction, a biologic product into commerce in the U.S.
CFDA: The China Food and Drug Administration is similar to the FDA in the United States and is responsible for regulating food and drug safety.
cGMP: Current Good Manufacturing Practices govern the design, monitoring, and control of manufacturing facilities and processes and are enforced by the US FDA. Compliance with these regulations helps safeguard a drug’s identity, strength, quality, and purity.
COFEPRIS: The Federal Commission for Protection against Sanitary Risks (in Spanish, Comisión Federal para la Protección contra Riesgos Sanitarios) is a government agency in Mexico. It regulates food safety, drugs, medical devices, organ transplants, and environmental protection.
Common Technical Document (CTD): The mandatory common format for new drug applications in the EU, Japan, and the U.S. The CTD assembles all the Quality, Safety and Efficacy information necessary for a drug application.
European Medicines Agency (EMA): A decentralised agency of the European Union (EU), located in London. It began operating in 1995. The Agency is responsible for the scientific evaluation, supervision and safety monitoring of medicines developed by pharmaceutical companies for use in the EU.
Food and Drug Administration (FDA): An agency within the U.S. Department of Health and Human Services. The FDA is responsible for the approval of new pharmaceutical products for sale in the U.S. and audits the companies participating in the manufacture of pharmaceuticals to ensure that they comply with regulations.
Human growth hormone: A growth hormone (GH or HGH) is a peptide hormone produced by the pituitary gland that stimulates growth in children and adolescents. It is involved in several body processes, including cell reproduction and regeneration, regulation of body fluids, and metabolism. It can be produced by the body (i.e., somatotropin) or genetically engineered (i.e., somatropin).
In-Process Control (IPC): Checks performed during production in order to monitor and, if necessary, adjust the process to ensure that the product conforms to its specification.
Interferons (IFNs): Proteins produced by the body as part of the immune response. They are classified as cytokines, proteins that signal other cells to trigger action. For example, a cell infected by a virus will release interferons to stimulate the defenses of nearby cells.
Interleukins: Proteins produced by cells as an inflammatory response. Most interleukins help leukocytes communicate with and direct the division and differentiation of other cells.
Investigational Medicinal Product Dossier (IMPD): The basis for approval of clinical trials by the competent authorities in the EU. The IMPD includes summaries of information related to the quality, manufacture and control of the Investigational Medicinal Product, data from non-clinical studies and from its clinical use.
Investigational New Drug (IND): An application provided to the FDA to obtain permission to test a new drug in humans in Phase I–III clinical studies. The IND is reviewed by the FDA to ensure that study participants will not be placed at unreasonable risk.
Marketing Authorization Application (MAA): A common document used as the basis for a marketing application across all European markets, plus Australia, New Zealand, South Africa, and Israel. This application is based on a full review of all quality, safety, and efficacy data, including clinical study reports.
Master batch records: These general manufacturing instructions, which are required by cGMP, are the basis for a precise, detailed description of a pharmaceutical manufacturing process. They ensure that all proper ingredients are included, each process step is completed, and the process is controlled.
Medicines and Healthcare products Regulatory Agency (MHRA): Regulates medicines, medical devices and blood components for transfusion in the UK. MHRA is an executive agency sponsored by the Department of Health.
MFDS: The Ministry of Food and Drug Safety (formerly the Korea Food & Drug Administration) is a government agency that oversees the safety and efficacy of drugs and medical devices in South Korea.
Monoclonal antibodies: Antibodies made in a laboratory from identical immune cells that are clones of a single cell. They are distinct from polyclonal antibodies, which are made from different immune cells.
New Drug Application (NDA): The vehicle through which drug sponsors formally propose that the FDA approve a new chemical pharmaceutical for sale and marketing in the U.S. Safety and efficacy data, proposed package labeling, and the drug’s manufacturing methods are typically included in an NDA.

Oligonucleotides: Short nucleic acid chains (made up of DNA or RNA molecules) used in genetic testing, research, and forensics.
Parenteral: Parenteral medicine is taken or administered in a manner other than through the digestive tract. Intravenous and intramuscular injections are two examples.
Peptide hormones: Proteins secreted by organs such as the pituitary gland, thyroid, and adrenal glands. Examples include follicle-stimulating hormone (FSH) and luteinizing hormone. Similar to other proteins, peptide hormones are synthesized in cells from amino acids.
PMDA: The Pharmaceuticals and Medical Devices Agency is an independent administrative agency that works with the Ministry of Health, Labour and Welfare to oversee the safety and quality of drugs and medical devices in Japan.
Process Analytical Technology (PAT): Analytical tools that help monitor and control the manufacturing process, including accommodating variability in material and equipment, in order to ensure consistent quality.
Product Quality Reviews (PQR): The Product Quality Review of all authorized medicinal products is conducted with the objective of verifying the consistency of the existing process and the appropriateness of current specifications for both starting materials and finished product, to highlight any trends, and to identify product and process improvements. The PQR is required by the EU GMP Guideline.
Quality by Design (QbD): A holistic, proactive, science- and risk-based approach to the development and manufacturing of drugs. At the heart of QbD is the idea that quality is achieved through in-depth understanding of the product and the process by which it is developed and manufactured.
Restricted Access Barrier System (RABS): An advanced aseptic processing system that provides an enclosed environment, reducing the risk of contamination to the product, containers, closures, and product contact surfaces. As a result, it can be used in many applications in a fill-finish area.
Scale-up: Taking a small-scale manufacturing system developed in the laboratory to a commercially viable, robust production process.
Six Sigma: A set of quality management methods, techniques, and tools used to improve manufacturing, transactional, and other business processes. The goal is to enhance quality (as well as employee morale and profits) by identifying and eliminating the causes of errors and process variations.
Target Product Profile (TPP): A key strategic document summarizing the features of an intended drug product. Characteristics may include the dosage form, route of administration, dosage strength, pharmacokinetics, and drug product quality criteria.
TFDA: The Taiwan Food & Drug Administration is a governmental body devoted to enhancing food safety and drug quality in Taiwan.

QbD: Controlling CQA of an API

The importance of Quality by Design (QbD) is being realized gradually as it gains popularity among generic companies. However, the major hurdle these companies face is the lack of common guidelines or a common format for performing a risk-based assessment of the manufacturing process. This article highlights a possible sequential pathway for performing QbD with the help of a case study. The main focus is on the use of failure mode and effects analysis (FMEA) as a tool for risk assessment, which helps identify critical process parameters (CPPs) and critical material attributes (CMAs) and later becomes the unbiased input for the design of experiments (DoE). In this case study, the DoE was helpful in establishing a risk-based relationship between critical quality attributes (CQAs) and CMAs/CPPs. Finally, a control strategy was established for all of the CPPs and CMAs, which in turn gave rise to a robust process during commercialization. It is noteworthy that FMEA was used twice during the QbD exercise: initially to identify the CPPs and CMAs, and subsequently, after DoE completion, to ascertain whether the risk due to the CPPs and CMAs had decreased.
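The FMEA scoring step can be sketched in a few lines of Python; the parameter names and ratings below are hypothetical, not the case study's actual values:

```python
# FMEA sketch: risk priority number (RPN) = severity x occurrence x
# detection, each typically rated on a 1-10 scale.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# Initial assessment: high-RPN parameters become candidate CPPs/CMAs.
initial = {"reaction_temperature": (7, 6, 4), "seed_loading": (8, 5, 3)}
for name, (s, o, d) in initial.items():
    print(name, "initial RPN:", rpn(s, o, d))

# After DoE and the control strategy, occurrence and detection ratings
# drop, and the recomputed RPN confirms the residual risk is reduced.
after = {"reaction_temperature": (7, 2, 2), "seed_loading": (8, 2, 2)}
for name, (s, o, d) in after.items():
    print(name, "post-DoE RPN:", rpn(s, o, d))
```

Running FMEA twice, before and after the DoE, mirrors the workflow described in the article.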


Quality by Design in Action 1: Controlling Critical Quality Attributes of an Active Pharmaceutical Ingredient

CTO-III, Dr. Reddy’s Laboratories Ltd, Plot 116, 126C and Survey number 157, S.V. Co-operative Industrial Estate, IDA Bollaram, Jinnaram Mandal, Medak District, Telangana 502325, India
Department of Chemistry, Osmania University, Hyderabad, Telangana 500007, India
Org. Process Res. Dev., 2015, 19 (11), pp 1634–1644



////// QbD, DoE, FMEA, ANOVA, Design space.

New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach


The current work outlines the application of an up-to-date, regulation-based pharmaceutical quality management method as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol-based co-spray-dried formula was produced as a model dosage form with meloxicam as the model active agent.

The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied) are described in the current paper.

Findings of the QbD based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product were in correlation with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation, and prove its good usability in the early development process of DPIs. This innovative formulation technology and product appear to have a great potential in pulmonary drug delivery.

Fig. 1. Steps and elements of the QbD methodology completed by the authors and applied in the early stage of pharmaceutical development.

“By identifying the critical process parameters, the practical development was more effective, with reduced development time and efforts.”

Edina Pallagi, our QbD evangelist from Hungary shares her team’s experience applying QbD to Dry Powder Inhalation Formulation.

The paper covers:

  • QbD methodology the researchers applied
  • Formulation of dry powder inhalation – API and excipients
  • QTPP, CQAs and CPPs identified for pulmonary use, along with target, justification and explanation
  • Characterization test methods
  • Knowledge Space development
  • QbD software used

New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach

  • a Institute of Drug Regulatory Affairs, University of Szeged, Faculty of Pharmacy, Szeged, Hungary
  • b Department of Pharmaceutical Technology, University of Szeged, Faculty of Pharmacy, Szeged, Hungary