Generics: FDA's New Guidance on Prior Approval Supplements


Generics: The US Food and Drug Administration (FDA) recently published a new Guidance regarding Prior Approval Supplements (PAS). Read more about FDA's Guidance for Industry “ANDA Submissions – Prior Approval Supplements Under GDUFA”.

http://www.gmp-compliance.org/enews_05634_Generics-FDA%B4s-New-Guidance-on-Prior-Approval-Supplements_15721,Z-RAM_n.html

On October 14, 2016, the US Food and Drug Administration (FDA) published a new Guidance regarding Prior Approval Supplements (PAS).
FDA says that “this guidance is intended to assist applicants preparing to submit to FDA prior approval supplements (PASs) and amendments to PASs for abbreviated new drug applications (ANDAs)”.

Specifically, the guidance describes how the Generic Drug User Fee Amendments of 2012 (GDUFA) performance metric goals apply to:

  • A PAS subject to the refuse-to-receive (RTR) standards;
  • A PAS that requires an inspection;
  • A PAS for which an inspection is not required;
  • An amendment to a PAS;
  • Other PAS-related matters.

GDUFA is designed to speed the delivery of safe and effective generic drugs to the public and reduce costs to industry. That requires that FDA and human generic drug manufacturers meet certain requirements and commitments. “FDA committed to review and act on a certain percentage of PASs within a specified period from the date of submission for receipts in fiscal year (FY) 2015 through FY 2017. The percentage of PASs that FDA has committed to review and act on increases with each fiscal year; the deadlines for review also depend on whether consideration of a PAS requires an inspection.”

Changes to an approved application:
The criteria laid down in FDA regulations for submitting information as a PAS (major change), as a Changes Being Effected-Supplement (CBE-supplement, moderate change), or in an annual report (minor change) were not changed by GDUFA.

Timelines depending on inspections for PAS submissions:
The GDUFA goal date for a PAS depends on whether the PAS requires an inspection. If a PAS does not require an inspection, the goal date is 6 months from the date of submission; but if a PAS requires an inspection, the goal date is 10 months from the date of submission. An initial goal date of 6 months occasionally may change to a 10-month goal date if, during the review, FDA determines an inspection is necessary. If an amendment is made to a PAS, the GDUFA goal date associated with that PAS may be revised. FDA strongly recommends that, at the time of submission, a supplement should be complete and ready for a comprehensive review.
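The timeline rule above is simple enough to sketch as a decision function. The following is a hypothetical illustration only; the function names and the month arithmetic are assumptions, not part of the guidance:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` later, clamping to the end of the month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days))

def gdufa_goal_date(submission: date, inspection_required: bool) -> date:
    """6 months from submission without an inspection, 10 months with one."""
    return add_months(submission, 10 if inspection_required else 6)

# A PAS submitted on the guidance's publication date, no inspection needed:
print(gdufa_goal_date(date(2016, 10, 14), False))  # 2017-04-14
```

Note that, as the text says, an initial 6-month goal date can still become a 10-month goal date if FDA later determines an inspection is necessary.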

Submission of Supplements:
The following information should be provided on the first page of the PAS:

  • A statement indicating whether the PAS is for a new-strength product;
  • A statement indicating whether the submission is an amendment to a PAS, and if so the corresponding tier classification;
  • A statement indicating whether the PAS contains any manufacturing or facilities changes;
  • A list of the specific review disciplines to review the PAS (Chemistry, Labeling, DMF, Bioequivalence, Microbiology, or Clinical);
  • If expedited review is requested, the label Expedited Review Request should be placed prominently at the top of the submission. The submission should include a basis for the expedited review request.

It is possible to submit multiple PASs for the same change as “grouped supplements”. These are PASs submitted to ANDAs by a single applicant for the same chemistry, manufacturing, and controls (CMC) change to each application. Because the grouped supplements are reviewed together, they will generally have the same GDUFA goal date. Although the submissions are considered a group, each supplement in the group is considered an individual submission and therefore requires a GDUFA PAS fee for each ANDA identified in the group.

Alternative Submissions:

  • Identify a lead ANDA for a group of PASs (only one fee is paid, or fewer than all the fees for the group are paid);
  • For some changes (e.g., widening of an approved specification or introduction of a new API supplier) once a PAS is submitted and approved, subsequent supplements for the same change to other ANDAs may be classified as CBE-30s;
  • A comparability protocol submitted in a PAS to an ANDA for a specific drug product, once approved, may justify a reduced reporting category for the same change in subsequent supplements to that ANDA.

If FDA finds that a supplement submitted as a CBE supplement should have been submitted as a PAS, it will notify the applicant. The applicant is not required to withdraw the CBE supplement because when FDA sends a letter explaining that the applicant’s submission is not accepted as a CBE supplement, FDA administratively closes the CBE supplement, and it is considered withdrawn. The applicant may resubmit the supplement as a PAS for FDA approval before distribution of the drug product, along with the required GDUFA user fee. The GDUFA performance metric goals and applicable user fees will apply to that PAS and the GDUFA review clock will start from the date of submission of that PAS.

For more information please see the FDA Guidance for Industry “ANDA Submissions – Prior Approval Supplements Under GDUFA”.


EMA/ FDA Mutual Recognition Agreement on drug facility inspections moving forward


EMA/ FDA Mutual Recognition Agreement moving forward
A possible agreement between the EMA and the US FDA on mutual recognition of drug facility inspections could be signed as early as January 2017.

http://www.gmp-compliance.org/enews_05650_EMA–FDA-Mutual-Recognition-Agreement-moving-forward_15642,15660,15656,Z-QAMPP_n.html

A possible agreement between the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) on mutual recognition of drug facility inspections could be signed as early as January 2017. This is noted in a report of the EU Commission: “The state-of-play and the organisation of the evaluation of the US and the EU GMP inspectorates were discussed. In light of the progress achieved, the conclusion of a mutual recognition agreement of Good Manufacturing Practices (GMPs) inspections by January 2017 is under consideration.”

However, according to the Commission, some issues remain unresolved, such as the exchange of confidential information and the inclusion of veterinary products in the scope of the text.

The “Report of the 15th Round of Negotiations for the Transatlantic Trade and Investment Partnership” summarizes the 15th round of negotiations for the Transatlantic Trade and Investment Partnership (TTIP), held from 3 to 7 October 2016 in New York.


Opportunities for Reducing Sampling and Testing of Starting Materials


Chapter 5 of the EC GMP Guide for the area of production was updated last year. This chapter contains concrete information about the conditions when testing and sampling of APIs and excipients can be reduced. Read more here about the sections 5.35 and 5.36 of the EU GMP Guide.

http://www.gmp-compliance.org/enews_05655_Opportunities-for-Reducing-Sampling-and-Testing-of-Starting-Materials_15461,15911,15462,Z-QCM_n.html

Chapter 5 of the EC GMP Guide, which covers production, was updated last year. However, not everyone knows that it contains concrete information about the conditions under which testing and sampling of APIs and excipients can be reduced. Sections 5.35 and 5.36 in particular set out the requirements and thus show possibilities for a reduction.

Basically, the manufacturers of finished products are responsible for all testing of starting materials as described in the marketing authorisation dossier. Partial or complete test results from the approved starting material manufacturer can nevertheless be used, but at least the identity has to be tested, as described in the marketing authorisation dossier.

If one chooses to outsource testing to the supplier, this has to be justified and documented. Moreover, a few additional measures have to be fulfilled, such as:

  • Particular attention should be paid to the distribution controls (transport, wholesaling, storage, delivery) to ensure that ultimately the test results are still applicable to the delivered material.
  • Performance of risk-based audits at the sites executing the testing and sampling of starting materials to verify the GMP compliance and to ensure that the specifications and testing methods are used as described in the marketing authorisation dossier.
  • The certificate of analysis of the manufacturer/supplier of the starting material should be signed by a designated person with appropriate qualifications and experience. The signature confirms the compliance with the agreed product specification.
  • The medicinal product manufacturer should have adequate experience in dealing with the starting material manufacturer – including assessment of batches previously received and the history of compliance before reducing own, internal testing.
  • At appropriate intervals, the medicinal product manufacturer or another approved contract laboratory has to carry out a full analysis to compare the test results with the results of the certificate of analysis of the material manufacturer or supplier, and thus to check their reliability. In case of discrepancy, an investigation has to be performed and appropriate measures taken. The certificates of analysis cannot be accepted until those measures are completed.
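The periodic full analysis in the last point amounts to comparing in-house results against the supplier's certificate of analysis, attribute by attribute. A minimal sketch, assuming numeric test results and an illustrative 2% relative tolerance (the Guide itself prescribes no numeric threshold, and all names here are hypothetical):

```python
def coa_reliable(own_results: dict, coa_results: dict, rel_tol: float = 0.02) -> bool:
    """Compare the in-house full analysis with the supplier's CoA values.
    Any discrepancy beyond rel_tol means the CoA cannot be relied upon
    until an investigation is completed and measures are taken."""
    return all(
        abs(own_results[attr] - value) <= rel_tol * abs(value)
        for attr, value in coa_results.items()
    )
```

For example, `coa_reliable({"assay": 99.1}, {"assay": 99.5})` would pass, while a result of 95.0 against a CoA value of 99.5 would trigger an investigation.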

You can access the complete Chapter 5 “Production” of the EU GMP Guide here.

EDQM's New Public Document informs about the Details required in a New CEP Application for already Referenced Substances


A Policy Document recently published by the EDQM describes regulations for referencing already existing CEPs in an application for a new CEP. Read more about how the certificates of an intermediate or starting material have to be used in new applications for a CEP.

http://www.gmp-compliance.org/enews_05624_New-EDQM-s-Public-Document-informs-about-the-Details-required-in-a-New-CEP-Application-for-already-Referenced-Substances_15429,15332,15982,15721,S-WKS_n.html


When applying for a Certificate of Suitability (CEP) for an API, detailed information has to be provided regarding the synthesis stages, the starting material and the intermediates. For cases in which the starting materials or the intermediates are already covered by a CEP, the EDQM recently published a “Public Document” entitled “Use of a CEP to describe a material used in an application for another CEP”. The document contains rules on how to reference the “CEP X” of a starting material or an intermediate in the application for the “CEP Y” of an API. The requirements are described as follows:

  • CEP X belongs to an intermediate or a starting material within the synthesis route of a substance Y for which a CEP is submitted.
    1. The submission must make clear that X is really an intermediate or a starting material and is covered by a valid CEP (“CEP X”). A copy of this CEP X has to be attached.
    2. The complete specification described in CEP X must be the basis for the release of the intermediate or starting material X for use in the synthesis of Y.
    3. The lifecycle of CEP X is directly coupled with the lifecycle of CEP Y. For example, a revision of CEP X also triggers a revision of CEP Y, so that the revised CEP X has to be included in the revision application for CEP Y.
    4. If CEP X loses its validity (e.g. due to expiry or withdrawal), the application for CEP Y has to be updated; for example, the CEP of a substance from an alternative source has to be submitted.
    5. The application for CEP Y has to include complete details about the supply chain and/or about all the manufacturing sites involved in the process described in CEP X.

Details about all manufacturing sites involved in the process described in CEP X will also be mentioned in Annex 1 of the new CEP Y when X is an intermediate for the synthesis of Y. However, this does not apply when X is the starting material for the synthesis of Y.

Please see the Public Document “Use of a CEP to describe a material used in an application for another CEP” for further details.



Now online – Stimuli article on the proposed USP General Chapter “The Analytical Procedure Lifecycle <1220>”


Now online – Stimuli article on the proposed USP General Chapter “The Analytical Procedure Lifecycle <1220>”
A Stimuli article on the proposed new USP General Chapter “The Analytical Procedure Lifecycle <1220>” has been published as part of the revision process. Read more about the new concept for the lifecycle management of analytical methods.

http://www.gmp-compliance.org/enews_05629_Now-online—Stimuli-article-on-the-proposed-USP-General-Chapter-%22The-Analytical-Procedure-Lifecycle–1220-%22_15438,Z-PDM_n.html


The General Chapters—Chemical Analysis Expert Committee is currently developing a new general chapter <1220> The Analytical Procedure Lifecycle. The purpose of this new chapter will be to more fully address the entire procedure lifecycle and define concepts that may be useful.

A Stimuli article on the proposed General Chapter <1220> has been approved for publication in Pharmacopeial Forum 43(1) [Jan.-Feb. 2017]. USP is providing this Stimuli article in advance of its publication to provide additional time for comments.

In addition to offering a preview of the proposed general chapter, the General Chapters—Chemical Analysis Expert Committee and the Validation and Verification Expert Panel are seeking specific input from users in the pharmaceutical industry regarding the following questions:

  • Would a general chapter on the lifecycle approach be valuable?
  • Is the information presented herein sufficient for implementation of an analytical procedure under the quality by design (QbD) approach?
  • Would incorporation of references to statistical tools, either in this chapter or in another chapter, be valuable?
  • Can you provide input or approaches that would improve this proposed general chapter?

The content and scope of the proposed general chapter will be refined on the basis of responses to this Stimuli article. Because stakeholders may have differing views, the objective of this Stimuli article is to identify and build areas of consensus that may be included in <1220>.

The approach is consistent with the concept of quality by design (QbD) as described in International Council for Harmonisation (ICH) Q8-R2, Q9, Q10, and Q11.

In order to provide a holistic approach to controlling an analytical procedure throughout its lifecycle, one can use a three-stage concept that is aligned with current process validation terminology:

  • Stage 1: Procedure Design and Development (Knowledge Gathering, Risk Assessment, Analytical Control Strategy, Knowledge Management, Preparing for Qualification)
  • Stage 2: Procedure Performance Qualification
  • Stage 3: Continued Procedure Performance Verification (Routine Monitoring, Changes to an Analytical Procedure)

A fundamental component of the lifecycle approach to analytical procedures is having a predefined objective that stipulates the performance requirements for the analytical procedure. These requirements are described in the analytical target profile (ATP) which can be considered as analogous to the quality target product profile (QTPP).
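As a rough sketch of the ATP idea, the predefined performance requirements can be captured as data against which qualification results are checked. The criteria, names and values here are hypothetical, not drawn from the <1220> proposal:

```python
from dataclasses import dataclass

@dataclass
class AnalyticalTargetProfile:
    """Predefined performance requirements for an analytical procedure."""
    max_bias_pct: float   # accuracy: allowed absolute bias, in percent
    max_rsd_pct: float    # precision: allowed relative standard deviation

    def met_by(self, observed_bias_pct: float, observed_rsd_pct: float) -> bool:
        """Do observed qualification results satisfy the profile?"""
        return (abs(observed_bias_pct) <= self.max_bias_pct
                and observed_rsd_pct <= self.max_rsd_pct)

# An illustrative assay ATP: bias within 2.0%, RSD no more than 1.0%.
atp = AnalyticalTargetProfile(max_bias_pct=2.0, max_rsd_pct=1.0)
```

The point of the ATP is that it is stated before the procedure is designed, so any procedure (old or new) can be judged against the same fixed requirements.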

The Stimuli article has been available for download on the USP website since October 14, 2016: Proposed New USP General Chapter: The Analytical Procedure Lifecycle <1220>.

Comments will be accepted until March 31, 2017, the end of the comment period for Pharmacopeial Forum 43(1). This Stimuli article provides the framework for the proposed general chapter “The Analytical Procedure Lifecycle <1220>” and describes the current thinking of the USP Validation and Verification Expert Panel which advises the General Chapters—Chemical Analysis Expert Committee with regard to future trends in analytical procedures development, qualification, and continued monitoring.



Sigma-concepts.com: a website for excellent reading with figures & diagrams

New Drug Approvals


READ  http://6sigma-concepts.com/

Dr. Amrendra Kumar Roy

QbD Head, Jubilant Generics, NOIDA

sigma-concepts.com WEBSITE

https://www.facebook.com/6sigmaconcepts/

Resistive Technosource Private Limited

206 II floor Devika Chamber, RDC, Hapur Rd, Block 1, P & T Colony, Raj Nagar, Ghaziabad, Uttar Pradesh 201002
QUOTE:

OUR VISION


We would like to share our ~13 yrs of practical experience in the field of product development using statistical tools. But first, what compelled us to pursue six-sigma. Most of us started our career as a process chemist after completing PhD and it was during those initial days we realized the importance of “first time right” during commercialization. This enabled not only first mover advantage but also ensured timely and un-interrupted supply of our products into the market. Another aspect of the process development is its robustness, which ensures sustainable margins in whatever products we manufacture. Above achievement was possible only because of the…



ENHANCED ANALYTICAL METHOD CONTROL STRATEGY CONCEPT


ENHANCED ANALYTICAL METHOD CONTROL STRATEGY CONCEPT

The benefits of quality by design (QbD) concepts related to both product (ICH Q8 [1]) and drug substance (ICH Q11 [2]) are well established, particularly with regard to the potential to use knowledge to effect process changes without major regulatory hurdles (i.e., revalidation, regulatory filing, etc.). Less well established, but potentially of significant value, is the application of the same concepts to analytical methods.

Analytical methods play an obvious key role in establishing the quality of final product as they establish conformance with product acceptance criteria (i.e., specifications) and indicate the integrity of the product through indication of product stability. Analytical methods are validated, like manufacturing processes, but what if the operational ranges could be established during method validation when demonstrating fitness for purpose?

Would it be possible to drive method improvement, especially post-validation, in the same way that the concept of continuous improvement is a key driver for manufacturing processes? Despite this attractive “value proposition”, there is to date little evidence that the industry is realizing this in practice.

The result is that many methods used in a QC environment lag well behind technical developments in the analytical field, often leading to the use of suboptimal procedures that impact adversely on the efficiency within the laboratory. The challenge is to create an environment whereby such changes can be made efficiently and effectively.

One approach is to apply the principles of ICH Q8-Q10, delivering a science- and risk-based approach to the development and validation of analytical methods and establishing a method operable design region (MODR) within which changes can be made. Such a framework is illustrated in Figure 1.

 


This starts with a definition of the effective requirements of the method, an analytical target profile (ATP), which takes the specific form of acceptance criteria for method performance. Such a process can be used not only to establish effective analytical methods but also to support continual improvement, specifically within the MODR. However, the concept is potentially limited in that changes are expected to remain within the MODR.

Such restrictions may inhibit continuous improvement. A prime example is a change of stationary phase or a change from HPLC to UPLC; both fall outside of the original MODR. Historically, such changes have been notoriously difficult and are therefore often avoided unless imperative. A recent publication examined this, presenting a method enhancement concept that would allow minor changes outside of the MODR. This is based on the realization that the performance of any analytical method is verified through a system suitability test (SST); such tests ensure the method's fitness for purpose.

Karlsson et al. stated that changes outside of the initial MODR may be possible provided that the method principle is unchanged, the failure modes are the same, and the SST is capable of detecting them, both for the original method and for any method changes that fall outside of the original MODR. Put simply, changes can be made provided the SST criteria are passed. A change from HPLC to UPLC was used to illustrate this. Revalidation of the method is still required, but critically such changes do not require regulatory interaction and can be managed through internal quality systems.
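The decision logic summarized from Karlsson et al. can be sketched as a simple gate. This is an illustrative reading of the criteria above, not a published algorithm, and the parameter names are assumptions:

```python
def change_manageable_internally(method_principle_unchanged: bool,
                                 failure_modes_unchanged: bool,
                                 sst_detects_failure_modes: bool,
                                 sst_criteria_passed: bool) -> bool:
    """A change outside the original MODR can be handled through the
    internal quality system (revalidation still required, but no
    regulatory interaction) only if every criterion holds."""
    return all((method_principle_unchanged, failure_modes_unchanged,
                sst_detects_failure_modes, sst_criteria_passed))

# e.g. an HPLC-to-UPLC switch where the SST still detects the same failure modes:
print(change_manageable_internally(True, True, True, True))  # True
```

If any single criterion fails (for instance, the change introduces a new failure mode the SST cannot detect), the change falls back to the traditional regulatory route.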

[1] ICH Q8 (R2) Pharmaceutical Development. http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q8_R1/Step4/Q8_R2_Guideline.pdf
[2] ICH Q11 Development and Manufacture of Drug Substances (Chemical Entities and Biotechnological/Biological Entities). http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q11/Q11_Step_4.pdf

 


 

Drug Approval Strategies in the Age of Fast Track, Breakthrough Therapy and Accelerated Approval


Process Validation and Regulatory Review

Drug Approval Strategies in the Age of Fast Track, Breakthrough Therapy and Accelerated Approval

To meaningfully discuss the process validation and regulatory approval strategies required for drugs that have been designated Fast Track, Breakthrough Therapy or Accelerated Approval drugs, we must first clarify these designations and briefly remind ourselves what the Process Validation guidance looks like. Then we will be able to clearly identify challenges and approaches to these barriers when working to bring a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.

Fast Track designation – Fast Track drugs treat serious conditions where there is an unmet medical need. Concluding that a condition is serious and that there is an unmet medical need most definitely leaves room for judgement, but generally speaking, the conditions these drugs treat are life-threatening, and the drug in question is expected to contribute to survival, daily functioning or the likelihood that a condition will advance to a very serious state. Fast Track drugs receive the benefit of more frequent meetings and communication with the FDA, and the drug qualifies for Accelerated Approval and rolling review of the Biologic License Application (BLA) or New Drug Application (NDA).

Breakthrough Therapy – Breakthrough Therapy status can be assigned to drugs that treat a serious condition when preliminary clinical data show significantly improved outcomes compared to treatments currently on the market. Breakthrough Therapies are eligible for: Fast Track designation benefits, extensive FDA guidance on effective drug development early in the development process and organizational commitment, including access to FDA senior managers.

Accelerated Approval – The FDA established accelerated approval regulations in 1992. Accelerated Approval could be given to drugs that met a serious unmet medical need, and approval was based on a surrogate endpoint. Fast forward to 2012, when Congress passed the Food and Drug Administration Safety and Innovation Act (FDASIA). This amendment to the Federal Food, Drug, and Cosmetic Act (FD&C Act) allowed approval to be based on either a surrogate endpoint per the 1992 regulations or an intermediate clinical endpoint. For example, as a result of the 2012 legislation, a cancer drug could be approved based on the surrogate endpoint of increasing the probability of the cancer going into remission, or on the intermediate clinical endpoint of shrinking tumor size, an outcome that is strongly correlated with the ability to treat the cancer much more successfully and induce remission.

These FDA designations are clearly designed to increase the availability and speed to market of drugs treating serious conditions where unmet medical needs exist. Nimbleness and speed have historically not been the strong suit of the pharmaceutical industry or of FDA: commercialization of a drug has historically taken on average 12 years and cost up to $2.5B (including expenditure outlays and opportunity costs). The ability of these designations to save both time and money is therefore very attractive. However, given the slow-moving nature of the industry, changes in both mindset and approach are needed from both drug innovators and regulators to validate processes and ensure drug quality within the faster-moving constructs.

Let’s now turn to the most recent Process Validation guidance so that we may juxtapose that system with the nimble needs of Fast Track, Breakthrough Therapy and Accelerated Approval drugs, ultimately making some observations regarding the Process Validation and overall regulatory approval approaches needed as the industry moves toward accelerated development processes for an increasing number of drugs.


WHAT IS PROCESS VALIDATION?
According to the FDA’s 2011 Process Validation (PV) guidance, “For purposes of this guidance, process validation is defined as the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product. Process validation involves a series of activities taking place over the lifecycle of the product and process.”

The Three Stages of Process Validation:
Stage 1: Process Design – the manufacturing process is defined during this stage, based on knowledge acquired through development and scale-up activities.

Stage 2: Process Qualification – the process design is evaluated to determine whether the process is capable of reproducible commercial manufacturing.

Stage 3: Continued Process Verification – ongoing assurance during manufacturing that the process is controlled and the outcome predictable.
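Stage 3 in particular lends itself to a small illustration: ongoing monitoring of batch results against specification, with a rough capability estimate. This is a hypothetical sketch; the attribute, limits, and the Ppk-style statistic are assumptions, not taken from the guidance:

```python
import statistics

def cpv_check(batch_results: list[float], spec_lo: float, spec_hi: float) -> dict:
    """Minimal Continued Process Verification pass: flag out-of-specification
    batches and estimate process capability from overall variation."""
    mean = statistics.mean(batch_results)
    stdev = statistics.stdev(batch_results)
    # Ppk-style estimate: distance from the mean to the nearest spec limit,
    # in units of three standard deviations (normality assumed).
    ppk = min(spec_hi - mean, mean - spec_lo) / (3 * stdev)
    return {
        "mean": round(mean, 2),
        "ppk": round(ppk, 2),
        "out_of_spec": [x for x in batch_results if not spec_lo <= x <= spec_hi],
    }
```

In practice, a run of well-centered results (high Ppk, nothing out of specification) is the "ongoing assurance" Stage 3 asks for; a drifting mean or an out-of-specification batch would trigger investigation.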


Keys for Successful Validation Include:
• Gaining knowledge from the product and process development
• Understanding sources of variation in the production process
• Determining the presence of and degree of variation
• Understanding the impact of variation on the process and end product
• Controlling variation in a manner aligned with Critical Quality Attributes (CQA) and the risk a given attribute introduces to the process

Process Qualification, a key component of Process Validation, should be based on overall level of product and process understanding, level of demonstrable control, data from lab, pilot and commercial batches, effect of scale and previous experience with similar products and processes. Process Qualification is generally recommended to be based on higher levels of sampling, additional testing and greater scrutiny of process performance than would be typical of routine commercial production.

As we will now explore, some of the demands of Process Qualification and of overall Process Validation are severely challenged by the approaches required when bringing a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.


NOVEL APPROACHES NEEDED FOR ACCELERATED APPROVALS
Historically, it has taken an average of 12 years and, according to a Tufts Center for the Study of Drug Development (CSDD) report, including expenditures and opportunity costs, an average of ~$2.6 billion to bring a prescription drug to market. This paper will refrain from making editorial comments about this pharmaceutical industry fact; however, the undeniable reality is that the speed required at every point in the industry to develop Fast Track, Accelerated Approval or Breakthrough drugs is having a profound impact.


Approval of a Breakthrough drug, which of course qualifies for Accelerated Approval, means manufacturers need to develop Chemistry, Manufacturing and Controls (CMC) data in about half the time of the traditional process. In addition, Breakthrough designation does not mean the innovator company can do less. In order to meet the accelerated timelines, it needs to start analytical method development and product and process characterization sooner, and handle the process differently. Validation of a process has traditionally called for sufficient data and an adequate number of runs to convince the manufacturer (and regulators) that the process works. As we will explore below, Breakthrough therapies are often on the market before the process is fully validated.

However, the guiding force behind these new approaches is that despite sharply reduced timeframes, manufacturers cannot compromise patient safety or product supply. Therefore, characterization of critical product and process attributes is typically required much earlier in the process.

Challenges and Realities of Process Validation and Regulatory Approval within the Accelerated Drug Paradigm:
• The collaboration and communication required between the FDA and innovator companies is extensive. Given the FDA’s limited resources and the extensive resources required of innovator-company organizations, is the growth of the Fast Track/Breakthrough Therapy/Accelerated Approval programs sustainable?
• New Drug Applications (NDAs) for Breakthrough Therapies include less manufacturing information and data, requiring alternative risk-mitigation approaches and often nontraditional statistical models.
• Both patient safety and product supply are at the forefront, without the data and historical knowledge traditionally used to address these concerns.
• The primary concerns for CMC reviewers include incomplete characterization of the drug, underdeveloped analytical methods and a lack of full understanding of a product’s Critical Quality Attributes (CQA) and associated risks.
• Process Validation will, in many cases, be incomplete at product launch.

THE CHANGED PARADIGM RESTORED TO ORDER (SORT OF)
The “restored order” for the approval of, and ultimate Process Validation for, Breakthrough/Accelerated Approval drugs will not look like anything we normally see. Again, all Breakthrough and Accelerated Approval drugs address very serious conditions and offer treatment where none currently exists, or offer benefits well above and beyond drug products currently on the market. Therefore, flexibility has been applied to segments of the traditional product review and approval process to speed the availability of treatments for these critical conditions.

Despite the flexibility in, and often changes to, the product review and approval process, patient safety remains at the forefront, as does the guarantee of consistent product supply.

Approaches for Successfully Handling the Approval and Validation of Accelerated Approval Drugs:
• Open and transparent communication with the FDA is essential throughout the entire approval and post-market process. The pharmaceutical company mindset of not wanting to learn certain information for fear of needing to revalidate based on those discoveries has no place in this new reality. New information will be learned pre- and post-launch, and plenty of amendments will need to be filed.
• Given the compressed development timeframes, less stability data will be available at submission. Additional data will be submitted via amendments during the review cycle, and in some cases, post-market.
• Launch the commercial process with limited experience and optimize post-approval; the classic three validation runs are not the guiding force within this construct. The level of flexibility regulators will extend is determined for each specific product. Factors taken into consideration include: riskiness of product characteristics, seriousness of the condition and medical need, complexity of manufacturing processes, state of the innovator’s quality system and merits of the innovator’s risk-based quality assessment, including Critical Quality Attributes (CQA).
• Novel statistical models and approaches will need to be applied in many cases. Representative samples and assays for these models will likely need to be drawn from sources such as prior knowledge and the use of comparability protocols. Determination of the appropriate use of stability data from representative pilot-scale lots will also be required.
• Manufacturers should freely acknowledge where data is limited, demonstrate that the missing data pose no risk to patient safety or product supply and outline post-market strategy for acquiring the missing data. Conversations with the FDA are clearly required for successful outcomes.
• Focus on patient safety and reliable supply of quality product at launch, not process optimization. In addition, begin critical product attributes and process characterization work much earlier than a typical pharmaceutical development process. In many cases, consider broader product quality ranges for non-Critical Quality Attributes until further manufacturing experience is acquired post-approval.

• Enhance analytical methods and understanding to offset more limited process understanding and to support future comparability work. It is extremely important to involve commercial Quality Control representatives in development assay design.
• Again, CMC activities that may be incomplete at launch include: Process Validation, stability studies on commercial product, manufacturing scale/tech transfer data and complete control system data.
• A post-approval product lifecycle management plan is a must, and it needs to be included in the filing to support deferred CMC activities.

Fast Track, Breakthrough Therapy and Accelerated Approval drugs have profoundly changed the thinking and approach to Process Validation and other CMC activities.

Sources:
Joseph A. DiMasi, Henry G. Grabowski, and Ronald W. Hansen, “Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs,” Tufts Center for the Study of Drug Development, Tufts University

J. Wechsler, “Breakthrough Drugs Raise Development and Production Challenges,” Pharmaceutical Technology 39 (7) 2015

Earl S. Dye, PhD, “CMC/GMP Considerations for Accelerated Development and Launch of Breakthrough Therapy Products,” Roche

“Guidance for Industry: Expedited Programs for Serious Conditions – Drugs and Biologics,” U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), May 2014

Anthony Mire-Sluis, Michelle Frazier, Kimberly May, Emanuela Lacana, Nancy Green, Earl Dye, Stephan Krause, Emily Shacter, Ilona Reischl, Rohini Deshpande, and Joe Kutza, “Accelerated Product Development: Leveraging Industry and Regulator Knowledge to Bring Products to Patients Quickly,” BioProcess International, December 2014

Daniel Alsmeyer and Ajay Pazhayattil, Apotex Inc., “A Case for Stage 3 Continued Process Verification,” Pharmaceutical Manufacturing, May 2014

What Your ICH Q8 Design Space Needs: A Multivariate Predictive Distribution

 


A multivariate predictive distribution quantifies the level of quality assurance in a design space. “Parametric bootstrapping” can help simplify early analysis and complement Bayesian methods.

The ICH Q8 core definition of design space is by now somewhat familiar: “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality” [1]. This definition is ripe for interpretation. The phrase “multidimensional combination and interaction” underscores the need to utilize multivariate analysis and factorial design of experiments (DoE), while the words “input variables (e.g., material attributes) and process parameters” remind us of the importance of measuring the right variables.

However, in presentations and articles discussing design space, not much focus has been given to the key phrase, “assurance of quality”. This does not seem justified, given that guidance documents such as ICH Q8, Q9, Q10, PAT, etc. are inundated with the words “risk” and “risk-based.” For any ICH Q8 design space constructed, surely the core definition of design space begs the question, “How much assurance?” [2]. How do we know if we have a “good” design space if we do not have a method for quantifying “How much assurance?” in a scientifically coherent manner?

 

The Flaws of Classical MVA

Classical multivariate analysis and DoE methodology fall short of providing convenient tools to answer the question “How much assurance?”, for two reasons. The first is that multivariate analysis and DoE have historically focused on making inferences primarily about response means. But simply knowing that the mean responses of a process meet quality specifications is not sufficient to conclude that the next batch of drug product will meet them, because there is always batch-to-batch variation about these means. Building a design space based upon overlapping mean responses will therefore result in one that is too large, harboring operating conditions with a low probability of meeting all of the quality specifications.

However, if we can quantify the entire (multivariate) predictive distribution of the process quality responses as a function of “input variables (e.g., material attributes) and process parameters”, then we can compute the probability of a future batch meeting the quality specifications. The multivariate predictive distribution of the process quality responses incorporates all of the information about the means, variation, and correlation structure among the response types.

Another, more technically subtle, reason that classical multivariate analysis and DoE methodology do not provide straightforward tools to construct a proper design space is that, beyond inference about response means, they are oriented toward the construction of prediction intervals or regions. For a process with multiple critical quality responses, it is awkward to try to use a classical 95% prediction region (which is not rectangular in shape) to compare against a (rectangular) set of specifications for multiple quality responses. On the other hand, using individual prediction intervals does not take into account the correlation structure among the quality responses. What is needed instead is a “multivariate predictive distribution” for the quality responses. The proportion of this predictive distribution that sits inside the rectangular set of quality specifications is then simply a quantification of “how much assurance”.
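As an illustration only, the following Python sketch estimates “how much assurance” for an entirely invented bivariate predictive distribution of two tablet responses (percent dissolved and friability), by counting the proportion of simulated batches that land inside a rectangular specification region; none of these numbers come from a real process:

```python
import random

random.seed(1)

def draw_response():
    # A shared latent term induces correlation between the two responses,
    # a toy stand-in for a real multivariate correlation structure.
    shared = random.gauss(0.0, 1.0)
    dissolution = 82.0 + 3.0 * shared + random.gauss(0.0, 2.0)   # % dissolved
    friability = 0.6 - 0.05 * shared + random.gauss(0.0, 0.08)   # % friability
    return dissolution, friability

# Rectangular specification region: dissolution >= 80%, friability <= 0.8%.
n = 100_000
inside = 0
for _ in range(n):
    d, f = draw_response()
    if d >= 80.0 and f <= 0.8:
        inside += 1

assurance = inside / n
print(f"Estimated assurance: {assurance:.3f}")
```

The correlation induced by the shared term is exactly the kind of structure that separate univariate prediction intervals would miss.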

 

The Stochastic Nature of Our Processes

 

Complex manufacturing processes are inherently “stochastic processes”. This means that while such a process may have underlying mechanistic model relationships between the input factors and process responses, such relationships are nonetheless embedded with random variation. Conceptually, for many complex manufacturing processes, even if “infinitely” accurate measurement devices could be used, such processes would still produce quality responses that vary from batch to batch and possibly within batch. This is why classical multivariate analysis and DoE methodology, which focus on inference for response means, present an insufficient tool set. This is not to say that such methods are not necessary; indeed they are. However, we need to better understand the multivariate stochastic nature of complex manufacturing processes in order to quantify “how much assurance” relative to an ICH Q8 design space.

The concept of a multivariate distribution is useful for understanding complex manufacturing processes, with regard to both input variables and response variables.  Figure 1 shows a hypothetical illustration of the relationships among various input variables and response variables. Notice that some of the input variables and the response variables are described by multivariate distributions. In Figure 1 we have a model which describes the relationships between the input variables, the common cause process variability and the quality responses. This relationship is captured by the multivariate mathematical function f=(f1,…,fr), which maps various multivariate input variables, x, z, e, θ  to the multivariate quality response variable, Y=(Y1,…,Yr). Here, x=(x1,…,xk) lists the controllable process factors (e.g. pressure, temperature, etc.), while z=(z1,…,zh) lists process variables which are noisy, such as input raw materials, process set-point deviations, and possible ambient environmental conditions (e.g. humidity).

The variable e=(e1,…,er) represents the common-cause random variability that is inherent to the process. It is natural that z and e be represented by (multivariate) distributions that capture the mean, variation about the mean, and correlation structure of these random variables. The parameter θ = (θ1,…θp) represents the list of unknown model parameters. While such a list can be thought of as composed of fixed unknown values, for purposes of constructing a predictive distribution, it may be easier to describe the uncertainty associated with these unknown model parameters by a multivariate distribution. The function f then transmits the uncertainty in the variables z, e, and θ to the response variables, Y. In other words, the “input distributions” for z, e, and θ combine to produce the predictive distribution for Y through the process model function f.

Figure 1. Multivariate distributions and the role they play in the design space reliability. Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk) ) results in an unreliable process.

The controllable process factors x can then be used to move the distribution of response variables Y to be better situated within the region bounded by the quality specifications. In Figure 1, the rectangular gray region in the lower left is the region bounded by the quality specifications for “percent dissolved” and “friability” for a tablet production process. Here, one can see that the process operating at set point x is not reliable because only about 65% of the predictive distribution is within the region carved out by the quality specifications. (The concentric elliptical contours are labelled by the proportion of the predictive distribution inside each contour.)

In Figure 2, however, one can see that the operating set point x is associated with a much more reliable process, since about 90% of the predictive distribution is within the region carved out by the quality specifications. Situations like that in Figure 2 can then be used to create a design space in the following way: a design space (for a process with multiple critical quality responses) can be thought of as the collection of all controllable process factors (x-points) such that an acceptably high percentage of the (multivariate) predictive distribution of critical quality responses falls within the region outlined by the (multiple) quality specifications.

Figure 2. Multivariate distributions and the role they play in the design space reliability.  Here, it is clear that the multivariate predictive distribution associated with the process control values (x = (x1,…, xk) ) results in a process with a higher reliability.

Slight modifications of the above definition may be needed to accommodate situations involving feedback/feedforward control, etc. But a predictive distribution of the quality responses is required nonetheless in order to address the question of “How much assurance?”

 

From Bayesian to Bootstrapping

Viewing Figures 1 and 2 shows that one can get a predictive distribution (for the quality responses in Y) if one has access to the various input distributions, in particular the multivariate distribution representing the uncertainty due to the unknown model parameters. But how can one compute the distribution representing the uncertainty due to the unknown model parameters? One possible approach is to use Bayesian statistics.

The Bayesian statistical paradigm provides a conceptually straightforward way to construct a multivariate statistical distribution for the unknown model parameters associated with a manufacturing process. This statistical distribution for the unknown model parameters is known as the “posterior” distribution by Bayesian statisticians. The resulting distribution for Y (in Figures 1 and 2) is called the posterior predictive distribution. This distribution accounts for uncertainty due to common-cause process variation as well as the uncertainty due to unknown model parameters. It also takes into account the correlation structure among multiple quality response types. In addition, the Bayesian approach also allows for the inclusion of prior information (e.g., from previous pilot plant and laboratory experiments). In fact, use of the posterior predictive distribution is an effective way to do process optimization with multiple responses [3]. An excellent overview of these Bayesian methods is given in the recent book Process Optimization: A Statistical Approach [4]. An application of the Bayesian approach to Design space construction is given by this author in [5].
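As a toy illustration of the idea (not the analysis used in the references above), the following sketch uses the simplest conjugate case, normal data with known variance and a normal prior on the mean, to generate posterior predictive draws that blend parameter uncertainty with common-cause variation; all numbers are invented:

```python
import random
import statistics

random.seed(6)

data = [random.gauss(5.0, 1.0) for _ in range(20)]  # observed "batch" results
sigma2 = 1.0                    # common-cause process variance, assumed known
mu0, tau2 = 0.0, 100.0          # vague normal prior on the process mean

n = len(data)
# Conjugate update: the posterior for the mean is again normal.
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (mu0 / tau2 + sum(data) / sigma2)

# Posterior predictive draws: first draw a plausible mean from the posterior,
# then draw a future response around it.
pred = [random.gauss(random.gauss(post_mean, post_var ** 0.5), sigma2 ** 0.5)
        for _ in range(50_000)]
print(f"predictive mean = {statistics.fmean(pred):.2f}, "
      f"predictive SD = {statistics.stdev(pred):.2f}")
```

Note that the predictive spread is wider than the common-cause spread alone, because it also carries the uncertainty about the unknown mean.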

An alternative approach for obtaining a distribution to represent the uncertainty due to the unknown model parameters involves a concept called “parametric bootstrapping”. This concept is somewhat more transparent than the Bayesian approach. A simple form of the parametric bootstrap can be described as follows. First, fit your process model to experimental data to estimate the unknown model parameters. Then, using computer simulation, simulate a new set of (artificial) responses similar to the ones obtained from your real experiment. Using the artificial responses, estimate a new set of model parameters. Do this many more times (say, 10,000) to obtain many more sets of model parameter estimates. Each of these parameter sets (obtained from each artificial data set) will be somewhat different, due to the variability in the data. The collection of all these sets of model parameter estimates forms a bootstrap distribution of the model parameters that expresses their uncertainty. An experiment with a large number of runs (i.e., a lot of information) will tend to yield a tighter distribution, expressing the fact that we have only a little uncertainty about the model parameters. But an experiment with only a little information will tend to yield a bootstrap distribution that is more spread out. This is particularly the case if the common-cause variation in the process is also large.
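The steps above can be sketched in a few lines for a one-factor linear model; the data, model and run count here are invented for illustration:

```python
import random
import statistics

random.seed(2)

# "Experimental" data: pretend these 20 runs came from a real process
# following y = b0 + b1*x + e with unknown b0, b1 and noise SD.
x_data = [0.0, 0.25, 0.5, 0.75, 1.0] * 4
y_data = [1.0 + 2.0 * x + random.gauss(0.0, 0.3) for x in x_data]

def fit_line(xs, ys):
    # Ordinary least squares for intercept/slope, plus a simple residual SD.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    resid_sd = statistics.stdev(y - (b0 + b1 * x) for x, y in zip(xs, ys))
    return b0, b1, resid_sd

b0_hat, b1_hat, sd_hat = fit_line(x_data, y_data)

# Parametric bootstrap: simulate artificial data sets from the fitted model,
# refit each one, and collect the resulting parameter estimates.
boot = []
for _ in range(2000):
    y_star = [b0_hat + b1_hat * x + random.gauss(0.0, sd_hat) for x in x_data]
    boot.append(fit_line(x_data, y_star))

slopes = [b1 for _, b1, _ in boot]
print(f"slope estimate {b1_hat:.2f}, bootstrap SD {statistics.stdev(slopes):.2f}")
```

The spread of `slopes` is the bootstrap expression of parameter uncertainty; with fewer runs or noisier data it widens, exactly as the paragraph above describes.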

The parametric bootstrap distribution can then be used in place of the Bayesian posterior distribution to enable one to obtain a predictive distribution for the quality responses. This simple parametric bootstrap approach will approximate the Bayesian posterior distribution of model parameters, but it will tend to produce a parameter distribution that is smaller (i.e., a bit too small) than the one obtained using the Bayesian approach. More sophisticated parametric bootstrap approaches are possible [6] which may result in more accurate approximations to the Bayesian posterior distribution of model parameters.

Software Selection

 

Unfortunately, the DoE software available in many point-and-click statistics packages does not produce multivariate predictive distributions. It therefore does not provide a way to quantify “How much assurance” one can assign to the Design space. So clearly, easy-to-use software is a current bottleneck to allowing experimenters to produce Design spaces based upon multivariate predictive distributions.

However, there are some software packages that consulting statisticians can use to produce approximate predictive distributions. The SAS statistical package has a procedure called PROC MODEL. (It is part of SAS’s econometric suite of procedures.) This procedure will analyze multivariate response regression models, and it has a Monte Carlo option that will produce simulations allowing one to sample from a predictive distribution. This is useful if your process can be modeled using PROC MODEL.

The R statistical programming language has functions that allow the user to do a Bayesian analysis for certain multivariate response regression models. In addition, R has a boot function which allows the user (in conjunction with other R statistical functions) to perform a parametric bootstrap analysis, which can eventually be incorporated into an R program for creating an approximate multivariate predictive distribution. In any case, both SAS and R require some statistical sophistication in order to create the appropriate programming steps. Easier-to-use point-and-click software is still very much needed to lower the computational hurdles for generating the multivariate predictive distributions needed for ICH Q8 design space development.

 

Putting It All Together

 

Viewing the multivariate world from our limited three-dimensional perspective is often a challenge. This is no different for a design space involving multiple controllable factors.  For the design space definition above, it is convenient to use a probability function, P(x), which assigns a probability (i.e., reliability measure) to each set of process operating conditions, denoted by x = (x1,…, xk). Here, P(x) is the probability that all of the critical quality responses simultaneously meet their specification limits. As stated previously, this probability can be thought of as the proportion of the predictive distribution contained within the region bounded by the quality specifications. The design space is then the set of all x such that P(x) is at least some acceptable reliability value. One can then plot P(x) vs. x to view the design space, provided that the dimension of x (i.e., k) is not too large. A simplified example discussed in [5] and [7] is briefly described below.
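Before turning to that example, the computation of P(x) can be sketched directly: for each candidate operating condition x, simulate the predictive distribution of the quality responses and record the probability that all specs are met simultaneously. The response models, specifications and 0.70 reliability cutoff below are all invented for illustration (and the two responses are simulated independently for simplicity):

```python
import random

random.seed(3)

def p_of_x(x, n=20_000):
    """Monte Carlo estimate of P(x) for a single controllable factor x in [0, 1],
    using two invented quality-response models."""
    hits = 0
    for _ in range(n):
        purity = 93.0 + 6.0 * x - 3.0 * x * x + random.gauss(0.0, 0.8)
        impurity = 1.4 - 1.2 * x + 0.6 * x * x + random.gauss(0.0, 0.15)
        if purity >= 95.0 and impurity <= 1.0:   # both specs met
            hits += 1
    return hits / n

# Scan a grid of operating conditions; the design space is the set of
# x-points whose reliability P(x) clears the chosen acceptability level.
design_space = {round(x / 10, 1): p_of_x(x / 10) for x in range(11)}
acceptable = sorted(x for x, p in design_space.items() if p >= 0.70)
print("P(x) by setting:", design_space)
print("design space (P >= 0.70):", acceptable)
```

With more controllable factors the grid simply gains dimensions, which is why plotting P(x) becomes harder as k grows.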

In an early phase synthetic chemistry study, a prototype design space using a predictive distribution was created for an active pharmaceutical ingredient (API). The four quality responses considered were: “% starting material isomer” (<0.15%), “% product isomer” (<2%), “% of impurity #1” (<3.5%), and “% API purity” (>95%). Four controllable experimental factors (x = (x1,…, x4)) were used in a DoE to model how they (simultaneously) influenced the four quality responses. These factors were x1 = temperature (°C), x2 = pressure (psi), x3 = catalyst loading (eq.), and x4 = reaction time (hrs).

A Bayesian statistical analysis was used to construct the multivariate predictive distribution for the four quality responses. For the above quality specifications (in parentheses), and for various x-points, the proportion of the multivariate predictive distribution inside the specification region was computed using many Monte Carlo simulations. In other words, for each x-point the corresponding P(x) value was computed. A multipanel contour plot of P(x) vs. the four experimental factors is shown in Figure 3 below. The white regions indicate the regions of larger reliability for simultaneously meeting all of the process specifications. Although these absolute reliability levels are less than ideal, this example is useful for illustration purposes.

Figure 3. Multipanel contour plots of the DS for the early phase synthetic chemistry example. (Used with permission from the Association for Quantitative Management.)

As it turned out, the “optimal” factor-level configurations were such that the associated proportions of the predictive distributions were only about 60% to 70% inside of the quality specification region. Part of the reason for this is that the “input” distributions corresponding to the unknown model parameters and the common-cause variation were too wide. A more detailed analysis of this data, conducted in [7], indicates that if a larger experimental design had been used and the process variation reduced by 30%, the more reliable factor conditions would have been associated with reliability measures at the 95% level and beyond. This shows the importance of reducing the variation for as many of the process “input” distributions (of the kind shown in Figures 1 and 2) as possible.

In the case where the dimension of x is not small, a (read-only) sortable spreadsheet could be used to make informed movements within such a design space [5]. While the spreadsheet approach does not provide a high-level (i.e., bird’s eye) view of the design space (as in Figure 3), it does show (locally) how P(x) changes as various controllable factors are changed. In theory, it may also be possible to utilize mathematical optimization routines applied to P(x) to move optimally within a design space whose dimension is not small.

Developing Design Spaces Based Upon Predictive Distributions: A Summary

 

Emphasize efficient experimental designs, particularly for multiple unit operations. For design space construction, it is clearly desirable to have good data that can be efficiently collected, so good experimental design and planning are critical. It is particularly useful to have such data across the various unit operations of a multi-step process. The performance of one unit operation may depend upon multiple aspects of previous unit operations, and when viewed as a whole, a process may have a variety of interacting variables that span its unit operations. This is an aspect of process optimization and design space construction that requires more research by statisticians and their clients (chemical engineers, chemometricians, pharmaceutical scientists, etc.). Some work has been done [8] but more is needed. Poor experimental designs can be costly and can lead to biases and/or too much uncertainty about the unknown model parameters, which is not helpful for developing a good multivariate predictive distribution with which to construct a design space.

Depend upon easy-to-use statistical software for multivariate predictive distributions. As mentioned previously in this article, the availability of easy-to-use statistical software for creating multivariate predictive distributions is a key bottleneck for the development of design spaces that quantify “how much assurance” can be associated with meeting quality specifications. More broadly, it has been shown recently that multivariate predictive distributions can be very useful for process optimization involving multiple responses [3], [4]. Monte Carlo simulation software applications such as @Risk and Crystal Ball provide point-and-click software for process simulation. However, a key issue that still remains involves producing a multivariate distribution that reflects the uncertainty of unknown model parameters associated with a process. Producing such a distribution is not always statistically simple.

The Bayesian approach provides a unifying paradigm, but the computational issues do require careful assessment of model and distributional assumptions, and in many cases, careful judgement involving convergence of algorithms. The parametric bootstrap approach is more transparent but may produce design spaces that are a bit too large, unless certain technical refinements are brought to bear. Nonetheless, the statistical methods exist to produce good multivariate distributions that represent the uncertainty of unknown model parameters. As always, modeling and inference need to be done with care, using data from appropriately designed experiments.

Use computer simulation to better understand reliability and risk assessments for complex processes. If one can state that they understand a process quantitatively, even a stochastic process, then one should expect that they can simulate such a process reasonably well. Given that many pharmaceutical processes may involve a series of complex unit operations, computer simulation may prove to be helpful for understanding how various process factors and conditions combine to influence multivariate quality responses. With proper care, computer simulation may help process engineers and scientists to better understand the multivariate nature of their multi-step manufacturing procedures. Of course, experimental validation of computer simulation findings with real data may still be needed. In addition, process risk assessments based largely upon judgement (e.g., through tools such as FMEA) can be enhanced through the use of better quantitative methods such as probability measures [9] (e.g., obtained through computer simulations). The issue of ICH Q8 design space provides a further incentive to utilize and enhance such computer simulation tools.

Have a clear understanding of the processes involved and of lurking risks; do not rely on the blind use of mathematical models. We must always keep in mind that our statistical predictions, such as multivariate predictive distributions, are based upon mathematical models that are approximations to the real-world environment. How much assurance we can accurately attribute to a design space will be influenced by the predictive distribution and by all of the modeling and input information (e.g., input distributions) that went into developing it. Generally speaking, all decisions and risk measurements based upon statistical methods are influenced to varying degrees by the mathematical modeling assumptions used; quantifications of design space assurance are fundamentally no different.

However, seemingly good predictive modeling (along with accurate statistical computation) may not be sufficient to provide a design space with long term, high-level assurance of meeting quality specifications. The issue of process robustness is also important, not only for design space construction but in other areas of QbD as well.  This issue has many facets and I believe it needs to be given more emphasis in pharmaceutical product development and manufacturing. So as not to go too far on a tangent, I will only briefly summarize some of the issues involved.

In quality engineering, making a process robust to noisy input variables is known as “robust parameter design” (RPD). The Japanese industrial engineer Genichi Taguchi pioneered this approach to quality improvement. The basic idea is to configure the controllable (i.e., non-noisy) process factors in such a way as to dampen the influence of the noisy process variables (e.g., raw material attributes or temperature/pressure fluctuations in the process). A convenient aspect of the predictive distribution approach to design space, and to process optimization in general, is that it provides a technically and computationally straightforward way to do RPD. We simply simulate the noise variable distribution (shown in Figures 1 and 2) and then configure the controllable factors (x-points) to increase the probability of meeting the quality specifications. If the controllable factors are able to noticeably dampen the influence of the noise variables, the probability that the quality responses will meet specifications will typically increase (assuming that the mean responses meet their specifications). See [4], [5], and [10] for RPD examples involving multivariate predictive distributions.
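A minimal numerical sketch of the RPD idea, using an invented model in which a controllable factor x interacts with a noisy input z, so that choosing x well cancels the noise:

```python
import random
import statistics

random.seed(4)

def simulate(x, n=50_000):
    # Invented response model y = 10 + 2*z*(1 - x) + e: the control factor x
    # scales how strongly the noise variable z is transmitted to y.
    ys = []
    for _ in range(n):
        z = random.gauss(0.0, 1.0)    # noise variable (e.g. raw material lot)
        e = random.gauss(0.0, 0.2)    # common-cause variation
        ys.append(10.0 + 2.0 * z * (1.0 - x) + e)
    return ys

sd_nominal = statistics.stdev(simulate(x=0.0))  # noise passes straight through
sd_robust = statistics.stdev(simulate(x=1.0))   # interaction cancels the noise
print(f"SD at x=0: {sd_nominal:.2f}, SD at x=1: {sd_robust:.2f}")
```

The robust setting leaves only the common-cause variation, which is precisely what raises the probability of meeting specifications at that operating point.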

Less well known, however, are two more subtle issues that can cause problems with predictive distributions. These are lurking variables and heavy-tailed distributions.  Process engineers and scientists need to brainstorm and test various possibilities for a change in the process or its inputs that could increase the risk that the predictive distribution is overly optimistic or is not stable over time.

Some predictive distributions may have what are called “heavy tails”. (The degree of “heavy-tailedness” is called kurtosis by statisticians.) We need to be careful with such distributions, as they are more likely than a normal (i.e., Gaussian) distribution to suddenly produce values far from the center of the distribution.
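As a quick, self-contained illustration of why tail weight matters, compare the chance of a value more than four "standard units" from center under a normal model versus a heavier-tailed Student-t model with 3 degrees of freedom (the threshold and degrees of freedom are arbitrary choices for this sketch):

```python
import math
import random

random.seed(5)

def student_t3():
    # A t-variate with 3 df, built as normal / sqrt(chi-squared_3 / 3).
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3.0)

n = 200_000
tail_normal = sum(abs(random.gauss(0.0, 1.0)) > 4 for _ in range(n)) / n
tail_t = sum(abs(student_t3()) > 4 for _ in range(n)) / n
print(f"P(|X| > 4): normal about {tail_normal:.5f}, t(3) about {tail_t:.5f}")
```

The heavy-tailed model produces extreme excursions orders of magnitude more often, which is exactly the risk a normal-theory predictive distribution can understate.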

If the process can be simulated on a computer, sensitivity analyses can be done to assess the effect of various shocks to the system or changes to the input or predictive distributions, such as heavier tails. An interesting overview of these two issues and of how quality and risk combine can be found in [11].
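One simple form such a sensitivity analysis can take (a hypothetical sketch; the response model, specification limits, and choice of a variance-matched Student-t shock are my assumptions, not from [11]) is to recompute the probability of meeting specifications after swapping the assumed normal input distribution for a heavier-tailed one with the same scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_in_spec(noise, lower=95.0, upper=105.0):
    # Hypothetical response model: y = 100 + noise, two-sided specification
    y = 100.0 + noise
    return float(np.mean((y >= lower) & (y <= upper)))

n = 500_000
sigma = 2.0
df = 3

# Baseline assessment: normally distributed input noise
p_base = prob_in_spec(rng.normal(0.0, sigma, n))

# Shock: heavy-tailed noise with the same variance (variance-matched Student-t)
t_noise = rng.standard_t(df, n) * sigma * np.sqrt((df - 2) / df)
p_shock = prob_in_spec(t_noise)

print(f"P(in spec), normal noise:       {p_base:.4f}")
print(f"P(in spec), heavy-tailed noise: {p_shock:.4f}")
```

If the probability of meeting specifications drops noticeably under the shocked distribution, the baseline assessment may be overly optimistic, which is exactly the kind of instability the brainstorming exercise above is meant to surface.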

In conclusion, the ability to understand randomness and to think stochastically is important, as multivariate random variation pervades all complex production processes. Given that we are forced to deal with randomness (in multivariate form, no less), Monte Carlo simulation has become a useful way to gain insight into the combined effects of the controllable and random factors present in a complex production process. (Interested readers may want to visit the American Society for Quality’s web site on Probabilistic Technology, available at http://www.asq.org/communities/probabilistic-technology/index.html.) Computer simulation can help build our intuition for stochastic processes; such intuition in humans is not always on the mark. We can all be fooled by randomness. See, for example, the book by Taleb [12].


References
1. ICH (2005). “ICH Harmonized Tripartite Guideline: Pharmaceutical Development, Q8.”

2. Peterson, J. J., Snee, R. D., McAllister, P. R., Schofield, T. L., and Carella, A. J. (2009), “Statistics in Pharmaceutical Development and Manufacturing” (with discussion), Journal of Quality Technology, 41, 111-147.

3. Peterson, J. J. (2004), “A Posterior Predictive Approach to Multiple Response Surface Optimization”, Journal of Quality Technology, 36, 139-153.

4. Del Castillo, E. (2007), Process Optimization: A Statistical Approach, Springer, N.Y.

5. Peterson, J. J. (2008), “A Bayesian Approach to the ICH Q8 Definition of Design Space”, Journal of Biopharmaceutical Statistics, 18, 959-975.

6. Davison, A. C. and Hinkley, D. V. (1997), Bootstrap Methods and Their Application, Cambridge University Press, Cambridge, UK.

7. Stockdale, G. W. and Cheng, A. (2009), “Finding Design Space and a Reliable Operating Region using a Multivariate Bayesian Approach with Experimental Design”, Quality Technology and Quantitative Management, (to appear).

8. Perry, L. A., Montgomery, D. C., and Fowler, J. W. (2002), “Partition Experimental Designs for Sequential Processes: Part II – Second Order Models”, Quality and Reliability Engineering International, 18, 373-382.

9. Claycamp, H. G. (2008), “Room for Probability in ICH Q9: Quality Risk Management”, Institute of Validation Technology conference: Pharmaceutical Statistics 2008 – Confronting Controversy, March 18-19, Arlington, VA.

10. Miro-Quesada, G., del Castillo, E., and Peterson, J. J. (2004), “A Bayesian Approach for Multiple Response Surface Optimization in the Presence of Noise Variables”, Journal of Applied Statistics, 31, 251-270.

11. Kenett, R. S. and Tapiero, C. S. (2009), “Quality, Risk and the Taleb Quadrants”, presented at the IBM Quality & Productivity Research Conference, June 3, 2009. Available at SSRN: http://ssrn.com/abstract=1433490

12. Taleb, N. N. (2008), Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Random House, New York.
