Process Validation and Regulatory Review
To meaningfully discuss the process validation and regulatory approval strategies required for drugs designated Fast Track, Breakthrough Therapy or Accelerated Approval, we must first clarify these designations and briefly review the Process Validation guidance. We will then be able to clearly identify the challenges, and the approaches to those challenges, involved in bringing a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.
Fast Track designation – Fast Track drugs treat serious conditions where there is an unmet medical need. Concluding that a condition is serious and that there is an unmet medical need most definitely leaves room for judgement, but generally speaking, the conditions these drugs treat are life-threatening, and the drug in question is expected to contribute to survival, daily functioning or the likelihood that a condition will advance to a very serious state. Fast Track drugs receive the benefit of more frequent meetings and communication with the FDA, and the drug qualifies for Accelerated Approval and rolling review of the Biologic License Application (BLA) or New Drug Application (NDA).
Breakthrough Therapy – Breakthrough Therapy status can be assigned to drugs that treat a serious condition when preliminary clinical data show significantly improved outcomes compared to treatments currently on the market. Breakthrough Therapies are eligible for: Fast Track designation benefits, extensive FDA guidance on effective drug development early in the development process and organizational commitment, including access to FDA senior managers.
Accelerated Approval – The FDA established its accelerated approval regulations in 1992. Accelerated Approval could be granted to drugs addressing a serious unmet medical need, with approval based on a surrogate endpoint. Fast forward to 2012, when Congress passed the Food and Drug Administration Safety and Innovation Act (FDASIA). This amendment to the Federal Food, Drug, and Cosmetic Act (FD&C Act) allowed approval to be based on either a surrogate endpoint, per the 1992 regulations, or an intermediate clinical endpoint. For example, as a result of the 2012 legislation, a cancer drug could be approved based on a surrogate endpoint such as tumor shrinkage, an outcome strongly correlated with the ability to treat the cancer successfully and induce remission, rather than waiting on long-term clinical outcomes.
These FDA designations are clearly designed to increase the availability, and speed to market, of drugs treating serious conditions where unmet medical needs exist. Nimbleness and speed have historically not been the pharmaceutical industry’s, nor the FDA’s, strong suit: commercialization of a drug has historically taken on average 12 years and cost up to $2.6B (including expenditure outlays and opportunity costs). The ability of these designations to save both time and money is therefore very attractive. However, given the slow-moving nature of the industry, changes in both mindset and approach are needed from drug innovators and regulators alike to validate processes and ensure drug quality within these faster-moving constructs.
Let’s now turn to the most recent Process Validation guidance so that we may juxtapose that system with the nimble needs of Fast Track Designation, Breakthrough Therapy and Accelerated Approval drugs—ultimately, making some observations regarding needed Process Validation and overall regulatory approval approaches as the industry moves towards accelerated development processes for an increasing number of drugs.
WHAT IS PROCESS VALIDATION?
According to the FDA’s 2011 Process Validation (PV) guidance, “For purposes of this guidance, process validation is defined as the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product. Process validation involves a series of activities taking place over the lifecycle of the product and process.”
The Three Stages of Process Validation:
Stage 1: Process Design – the manufacturing process is defined during this stage, based on knowledge acquired through development and scale-up activities.
Stage 2: Process Qualification – the process design is evaluated to determine whether the process is capable of reproducible commercial manufacturing.
Stage 3: Continued Process Verification – ongoing assurance during manufacturing that the process remains controlled and the outcome predictable.
Keys for Successful Validation Include:
• Gaining knowledge from the product and process development
• Understanding sources of variation in the production process
• Determining the presence of and degree of variation
• Understanding the impact of variation on the process and end product
• Controlling variation in a manner aligned with Critical Quality Attributes (CQA) and the risk a given attribute introduces to the process
Process Qualification, a key component of Process Validation, should be based on overall level of product and process understanding, level of demonstrable control, data from lab, pilot and commercial batches, effect of scale and previous experience with similar products and processes. Process Qualification is generally recommended to be based on higher levels of sampling, additional testing and greater scrutiny of process performance than would be typical of routine commercial production.
As we will now explore, some of the demands of Process Qualification, and of overall Process Validation, are severely challenged by the approaches required when bringing a Fast Track, Accelerated Approval or Breakthrough Therapy drug to market.
NOVEL APPROACHES NEEDED FOR ACCELERATED APPROVALS
Historically, it has taken an average of 12 years and, according to a Tufts Center for the Study of Drug Development (CSDD) report, including expenditures and opportunity costs, an average of ~$2.6 billion to bring a prescription drug to market. This paper will refrain from making editorial comments about this pharmaceutical industry fact; however, the undeniable reality is that the speed required at every point in the industry to develop Fast Track, Accelerated Approval or Breakthrough drugs is having a profound impact.
Approval of a Breakthrough drug, which qualifies for Accelerated Approval, means manufacturers need to develop Chemistry, Manufacturing and Controls (CMC) data in about half the time of the traditional process. In addition, Breakthrough designation does not mean the innovator company can do less: to meet these accelerated timelines, it must start analytical methods creation and product and process characterization sooner, and handle the process differently. Validation of a process has traditionally called for sufficient data and an adequate number of runs to convince the manufacturer (and regulators) that the process works. As we will explore below, Breakthrough therapies are often on the market before the process is fully validated.
However, the guiding force behind these new approaches is that despite sharply reduced timeframes, manufacturers cannot compromise patient safety or product supply. Therefore, characterization of critical product and process attributes is typically required much earlier in the process.
Challenges and Realities of Process Validation and Regulatory Approval within the Accelerated Drug Paradigm:
• The collaboration and communication required between the FDA and innovator companies is extensive. Given limited FDA resources and extensive resources required by the organizations of innovator companies, is the growth of the Fast Track/Breakthrough Therapy/Accelerated Approval programs sustainable?
• New Drug Applications (NDA) for Breakthrough Therapies include less manufacturing information and data, requiring alternative risk-mitigation approaches and often nontraditional statistical models.
• Both patient safety and product supply are at the forefront, without the data and historical knowledge traditionally used to address these concerns.
• The primary concerns for CMC reviewers include incomplete characterization of the drug, underdeveloped analytical methods and a lack of full understanding of a product’s Critical Quality Attributes (CQA) and associated risks.
• Process Validation will, in many cases, be incomplete at product launch.
THE CHANGED PARADIGM RESTORED TO ORDER (SORT OF)
The “restored order” for the approval of, and ultimate Process Validation for, Breakthrough/Accelerated Approval drugs will not look like anything we normally see. Again, all Breakthrough and Accelerated Approval drugs address very serious conditions and either offer treatment where none currently exists or offer benefits well above and beyond drug products currently on the market. Therefore, flexibility has been applied to segments of the traditional product review and approval process to speed the availability of treatments for these critical conditions.
Despite the flexibility in, and often changes to the product review and approval process, patient safety remains at the forefront, as well as the guarantee of consistent product supply.
Approaches for Successfully Handling the Approval and Validation of Accelerated Approval Drugs:
• Open and transparent communication with the FDA is essential throughout the entire approval and post-market process. The pharmaceutical company mindset of not wanting to learn certain information for fear of needing to revalidate based on those discoveries has no place in this new reality. New information will be learned pre- and post-launch, and plenty of amendments will need to be filed.
• Given the compressed development timeframes, less stability data will be available at submission. Additional data will be submitted via amendments during the review cycle, and in some cases, post-market.
• Launch commercial process with limited experience and optimize post-approval–the classic three runs is not the guiding force within this construct. The level of flexibility regulators will extend is determined for each specific product. Factors taken into consideration include: riskiness of product characteristics, seriousness of the condition and medical need, complexity of manufacturing processes, state of the innovator’s quality system and merits of the innovator’s risk-based quality assessment including Critical Quality Attributes (CQA).
• Novel statistical models and approaches will need to be applied in many cases. Representative samples and assays for these models will likely need to be drawn from sources such as prior knowledge and comparability protocols. The appropriate use of stability data from representative pilot-scale lots will also need to be determined.
• Manufacturers should freely acknowledge where data are limited, demonstrate that the missing data pose no risk to patient safety or product supply, and outline a post-market strategy for acquiring the missing data. Conversations with the FDA are clearly required for successful outcomes.
• Focus on patient safety and reliable supply of quality product at launch, not process optimization. In addition, begin critical product attribute and process characterization work much earlier than in a typical pharmaceutical development process. In many cases, consider broader product quality ranges for non-Critical Quality Attributes until further manufacturing experience is acquired post-approval.
• Enhance analytical methods and understanding to offset more limited process understanding and to support future comparability work. It is extremely important to involve commercial Quality Control representatives in development assay design.
• Again, CMC activities that may be incomplete at launch include: Process Validation, stability studies on commercial product, manufacturing scale/tech transfer data and complete control system data.
• A post-approval product lifecycle management plan is a must, and it needs to be included in the filing to support deferred CMC activities.
Fast Track, Breakthrough Therapy and Accelerated Approval drugs have profoundly changed the thinking and approach to Process Validation and other CMC activities.
Joseph A. DiMasi, Henry G. Grabowski, Ronald W. Hansen, “Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs,” Tufts Center for the Study of Drug Development, Tufts University.
J. Wechsler, “Breakthrough Drugs Raise Development and Production Challenges,” Pharmaceutical Technology 39 (7) 2015.
Earl S. Dye, PhD, “CMC/GMP Considerations for Accelerated Development and Launch of Breakthrough Therapy Products,” Roche.
“Guidance for Industry: Expedited Programs for Serious Conditions – Drugs and Biologics,” U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), May 2014.
Anthony Mire-Sluis, Michelle Frazier, Kimberly May, Emanuela Lacana, Nancy Green, Earl Dye, Stephan Krause, Emily Shacter, Ilona Reischl, Rohini Deshpande and Joe Kutza, “Accelerated Product Development: Leveraging Industry and Regulator Knowledge to Bring Products to Patients Quickly,” BioProcess International, December 2014.
Daniel Alsmeyer and Ajay Pazhayattil, Apotex Inc., “A Case for Stage 3 Continued Process Verification,” Pharmaceutical Manufacturing, May 2014
Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Part One of Three
The FDA defined Process Validation in 1987 as follows: “Process Validation is establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specification and quality attributes.”1

The purpose of this article is to discuss how to validate a process by introducing some basic statistical concepts to use when analyzing historical data from Batch Records and Quality Control Release documents to establish specifications and quality attributes for an existing process. In an ideal world, the qualification of processing equipment, utilities, facilities, and controls would commence at the start-up of a new plant or the implementation of a new system. This would be followed by validation of the process based on developmental data, which would be used to establish the product ranges for in-process and final release testing. However, the ideal case may not exist: there are instances where commissioning of facilities or new systems occurs concurrently with qualification and process validation, or where the facility and equipment are “existing” and no such documentation is available. Some facilities, equipment, or processes pre-date the above definition by many years and therefore have never been validated on qualified equipment, utilities, facilities, and controls. Additionally, in some cases, no developmental data exist to establish the product ranges for in-process and release testing.
Basics of Process Validation
Before examining the existing processes, it is important to first understand the basic concepts of Process Validation. Figure 1 is a flow chart defining Process Validation from the developmental stage to the plant floor. As a simplistic example, a process begins with the raw materials being released, then the raw materials are mixed, pH is adjusted, purification occurs by gel chromatography, excipients are added for final formulation, and the product is filled and terminally sterilized. Each of these steps has defined functions and therefore would have a designed goal. For example, purification would not begin until the desired pH is reached in the previous step. Therefore, the desired pH is an in process attribute of the pH adjustment stage and the amount of buffer used to adjust the pH is a processing parameter. Each of these steps has attributes that one would want to monitor to determine that the product is being produced acceptably at that step such that the next process step can start. The ranges for these attributes are generally determined by process development data so that if the process attributes are met, then there is a high degree of confidence that the final container is filled with a product of acceptable attributes as determined by the developmental data.
During Process Validation, approved Standard Operating Procedures (SOPs) need to be in place, and Plant Operators must have been trained on them. Analytical testing should be performed per SOPs, and the Quality Control (QC) analysts should be trained in these SOPs as well. The analytical tests should have been previously validated by normal analytical methods validation and documented as such. Also, the equipment used to prepare product must be documented as qualified for its installation, operation, and performance, commonly referred to as IQ, OQ, and PQ. There are methods for performing such qualification retrospectively, but for the purpose of this discussion it is only important to note that the equipment must be qualified.
Relating Process Validation to Existing Processes
Existing processes may lack developmental data for in-process ranges and release testing. If a retrospective analysis of existing data is used to establish process ranges, including input, output, and in-process testing parameters, then the process can be treated like a new process by following the basics of Prospective Process Validation. The difference between traditional Prospective Process Validation and Prospective Process Validation based on retrospective analysis is that, in place of developmental data to establish ranges, the retrospective analysis reviews data from past Batch Production Records, QC test reports, product specifications, etc. Thus the items that need to be in place are: approved SOPs and Batch Records, with personnel trained on them; equipment qualification (IQ, OQ, PQ); and validated QC methods, with approved SOPs and training for QC personnel. With these in place, all that is missing is a Process Validation Protocol with defined ranges. The best sources of this information are approved completed batch records, process deviation reports, QC release data, and small-scale studies. From these, the following items must be completed:
- Critical parameters and input and output parameters must be defined.
- A statistically valid time frame or number of batches must be determined.
- The data used to establish the parameters must be extracted from controlled documents.
- The data extracted from the controlled documents will be analyzed to establish ranges.
Each one of these steps will be examined in the following sections to describe them in further detail.
Critical parameters and input and output parameters defined.
The Guidelines on General Principles of Process Validation (15 May 1987) state that:
The validity of acceptance specifications should be verified through testing and challenge of the product on a sound scientific basis during the initial development and production phase.1
It is important to determine which parameters in your process are critical to the final product. When determining these parameters and attributes, a variety of personnel with different expertise should be utilized. Assembling a team of professionals is the starting point, and this committee should be multi-disciplinary, including Quality, Validation, Systems Engineering, Facility Engineering, Pharmaceutical Sciences (or R&D), and Manufacturing. A parameter or attribute is critical if failing to control or achieve it would have an adverse effect on the product. A risk assessment should be performed to analyze what the risk is and what the consequences are if a specific parameter or attribute is not controlled or achieved (e.g. the resulting product would be flawed). Risk assessment is defined by the Ontario Ministry of Agriculture, Food and Rural Affairs as:
- the probability of the negative event occurring because of the identified hazard,
- the magnitude of the impact of the negative event, and
- consideration of the uncertainty of the data used to assess the probability and the impact of the components. 2
Figure 2 is a list of general questions to consider when assessing risk, while Figure 3 is an example of Fault Tree Analysis – a formal approach to evaluating risk in which a Top Level Event is observed and, through questions and observations, the cause of the event is determined.
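The probability/impact/uncertainty components above can be combined into a simple ranking for the committee to review. The sketch below is a hypothetical illustration only; the 1-5 scales, hazard names, and scoring scheme are assumptions, not taken from the cited guidance:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    probability: int       # 1 (rare) .. 5 (frequent) - assumed scale
    impact: int            # 1 (negligible) .. 5 (severe) - assumed scale
    data_uncertainty: str  # qualitative note on confidence in the inputs

    @property
    def score(self) -> int:
        # Simple probability-times-impact score, FMEA-style.
        return self.probability * self.impact

hazards = [
    Hazard("pH drifts outside target before purification", 2, 4,
           "high - limited batch history"),
    Hazard("Buffer mischarge during pH adjustment", 1, 5,
           "low - well-documented step"),
]

# Rank hazards so the highest-risk items are reviewed first,
# carrying the uncertainty note alongside each score.
for h in sorted(hazards, key=lambda h: h.score, reverse=True):
    print(f"{h.name}: score {h.score} (uncertainty: {h.data_uncertainty})")
```

In practice the scales and cutoffs would be defined by the cross-disciplinary committee, not hard-coded.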
1. National Center for Drugs and Biologics and National Center for Devices and Radiological Health, “Guidelines on General Principles of Process Validation,” Rockville, MD, 15 May 1987.
2. Ontario Ministry of Agriculture, Food and Rural Affairs (2000), Queen’s Printer for Ontario, last updated March 22, 2000; WEB: http://www.gov.on.ca/omafra/english/research/risk/assum1b.html.
Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Part Two of Three
A Statistically Valid Time Frame or Number of Batches
How large a sample set of previously recorded data is needed to determine ranges that are truly representative of the process, so that the ranges will be useful in the validation effort and not set one up for failure? This is a difficult question to answer. It is important to note that the batches selected should have no changes between them, i.e., they should be produced under the same processing conditions. The draft FDA Guidance for Industry, Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients from March 1998 suggests that 10-30 consecutive batches be examined to assess process consistency.1
This is a reasonable target statistically, because when selecting a sample size or population the concept of Normality, or Degree of Normality, becomes important. As a general rule of thumb, the population should be large enough that the distribution of the samples approaches a Normal or Gaussian distribution (one defined by a bell-shaped curve). If the samples are normally distributed, then approximately 99.7% of them will fall within the +/- 3 Standard Deviation (S.D.) range.2 Put differently, a given measurement has roughly a 0.3% chance of falling outside the +/- 3 S.D. range, so the range captures nearly all of the expected process variation. Finally, the central limit theorem tells us that as the sample size grows, the distribution of the sample mean approaches normality, so a sample size of 10-30, as the FDA suggests, gives reasonable grounds for applying normal-theory statistics.
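The +/- 3 S.D. coverage figure is easy to check numerically. A minimal sketch in Python, assuming normally distributed data (the mean and S.D. values are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate a large number of normally distributed "batch" measurements.
samples = rng.normal(loc=6.23, scale=0.15, size=100_000)

mean = samples.mean()
sd = samples.std(ddof=1)  # sample standard deviation

# Fraction of measurements that land inside the +/- 3 S.D. range.
inside = np.mean((samples > mean - 3 * sd) & (samples < mean + 3 * sd))
print(f"Fraction within +/- 3 S.D.: {inside:.4f}")  # close to 0.997
```

With real batch data the same computation applies, but with 10-30 points the observed fraction will be coarser.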
The data used to establish the parameters must be extracted from controlled documents.
When the number of batches to review is selected, the next step is to determine from what documents the processing data will be extracted. Typically the range establishing data must be taken from approved and controlled documents (see the examples below).
Examples of Controlled Documents:
- Batch Production Records (BPRs)
- Quality Control (QC) Lab Reports
- Limits established by licensure
- Product License Application (PLA)
– Biologics License Application (BLA)
– New Drug Application (NDA) or Abbreviated New Drug Application (ANDA)
- Product Release Specifications
- Small scale bench studies simulating plant environment.
The data extracted from the controlled documents will be analyzed to establish ranges.
Having established where the data will be drawn from, the data must then be analyzed for specific trends in order to define ranges for the Process Validation Protocol’s acceptance criteria. These acceptance criteria are what the “actual” process data collected during execution of the Protocol will be compared against in order to verify acceptance. This is where much thought needs to be applied, so that the acceptance criteria are neither so tight that failure is imminent nor so broad that achieving the criteria proves nothing. Listed below are general steps that can be followed in the analysis.
- Draw “X Charts” and analyze for outliers
- Apply a model to determine Normality
- Determine the +/- 3 S.D. range and plot on Trend Chart
- Determine the Confidence Interval as compared to +/- 3 S.D.
- Process Capability Indices
- Recording the Maximum and Minimum
- Assign Acceptance Criteria Range and justify
Drawing Trend Charts
Trend Charts, also referred to as X-Charts, are a good way of plotting data points from a set of data where the target is the same metric (for example, pH as measured at a specific point in the process). It is a matter of defining the X-axis by the number of samples and the Y-axis by the metric being used. As an example, the X-axis could be a list of the batches by batch number and the Y-axis could be pH. Figure 4 is an example of this type of trend chart. Presented graphically this way, the data can be appreciated with respect to setting a range.
With the data plotted, one can quickly assess any visible trends. Additionally, one can now begin the task of applying statistics to the data. It is important to determine whether there are outliers in the data. Outliers may exist and can usually be rationalized by adverse events in processing, as long as those events are reported appropriately. One method for determining outliers is to use a box plot, where a box is drawn from a lower hinge (typically the 25th percentile) to an upper hinge (typically the 75th percentile). The “H-spread” is defined as the distance between the upper and lower hinges.3 Outliers are then any data that fall outside a predetermined multiple of the H-spread beyond the hinges. For example, if the lower and upper outlier limits are defined as 4 times the H-spread, anything above or below those limits is treated as an outlier and excluded from range-setting, with appropriate justification.
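The box-plot screen described above can be sketched in a few lines. The pH readings below are hypothetical, and the conventional multiplier of 1.5 is used in place of the 4x example from the text:

```python
import numpy as np

# Hypothetical pH readings from 12 batches, with one aberrant value.
data = np.array([6.2, 6.3, 6.25, 6.21, 6.28, 6.24,
                 6.27, 6.22, 6.26, 6.23, 6.29, 7.9])

q1, q3 = np.percentile(data, [25, 75])  # lower and upper hinges
h_spread = q3 - q1                      # interquartile range ("H-spread")

# Fences at a chosen multiple of the H-spread beyond the hinges
# (1.5 is the conventional box-plot choice; the article's example uses 4).
k = 1.5
lower, upper = q1 - k * h_spread, q3 + k * h_spread

outliers = data[(data < lower) | (data > upper)]
print(outliers)  # the 7.9 reading is flagged
```

Any flagged value would then be traced back to its batch record before being excluded.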
Apply a model to determine Normality
With the accumulated data plotted, the Degree of Normality should be investigated so that the data can be analyzed by the appropriate method. There are several models for determining the Degree of Normality; common ones are the Kolmogorov-Smirnov test, the Chi-Square goodness-of-fit test, and the Shapiro-Wilk W test.4 Once the Degree of Normality is determined, an appropriate statistical method can be applied for setting ranges. If the data are determined to be non-Normal, there are two approaches to evaluating them. The first is to transform the data toward normality (e.g. via the Box-Cox transformation5) or to apply a nonparametric statistical model; however, nonparametric tests are considered less powerful and less flexible in terms of the conclusions they provide, so the preferred alternative is to increase the sample size such that a normal distribution is approached. If the data are determined to be Normal, or the sample size is increased such that the data are distributed more normally, then the data can be analyzed for their range characteristics.
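Two of the normality tests named above are available in SciPy; a minimal sketch on simulated batch data (the values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
data = rng.normal(loc=6.23, scale=0.15, size=30)  # 30 simulated batch values

# Shapiro-Wilk test: the null hypothesis is that the data are normal,
# so a large p-value means normality is not rejected.
w_stat, p_shapiro = stats.shapiro(data)

# Kolmogorov-Smirnov test against a normal distribution fitted to the data.
# Note: estimating the parameters from the same data makes this test
# conservative; the Lilliefors correction addresses that.
ks_stat, p_ks = stats.kstest(data, "norm",
                             args=(data.mean(), data.std(ddof=1)))

print(f"Shapiro-Wilk p = {p_shapiro:.3f}, K-S p = {p_ks:.3f}")
```

With only 10-30 batches these tests have limited power, so a visual check of the trend chart should accompany them.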
Determine the +/– 3 SD range and plot on Trend Chart
The data, having now been displayed graphically, should be analyzed mathematically. This can be done with simple statistics: determine the mean and the standard deviation. The mean is the average of the samples in the population; the standard deviation is the measure of variation in the population about the mean. If the distribution proves to be normal, from the normality tests above or by selecting a large enough population, then approximately 99.7% of the data will fall within the +/- 3 S.D. range. Using our example from Figure 4, the data are analyzed for mean and standard deviation using the formulas displayed in Figure 5. Once determined, the +/- 3 S.D. limits can be applied to the trend charts by drawing them at their values. This graphically displays the data per batch and how they fit within the statistical limit of +/- 3 S.D. (see Figure 6).
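The mean, standard deviation, and +/- 3 S.D. limits can be computed directly; a sketch using simulated batch pH data (the values are illustrative, standing in for data extracted from controlled documents):

```python
import numpy as np

# Hypothetical pH values from 30 batches.
rng = np.random.default_rng(seed=2)
ph = rng.normal(loc=6.23, scale=0.15, size=30)

mean = ph.mean()
sd = ph.std(ddof=1)  # sample standard deviation
lower, upper = mean - 3 * sd, mean + 3 * sd

print(f"mean = {mean:.3f}, S.D. = {sd:.3f}")
print(f"+/- 3 S.D. range: {lower:.3f} to {upper:.3f}")

# Flag any batches outside the limits (candidates for investigation).
outside = ph[(ph < lower) | (ph > upper)]
print(f"batches outside range: {outside.size}")
```

The same `lower`/`upper` values are what would be drawn as limit lines on the trend chart.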
- U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Guidance for Industry, Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients, March 1998, 36
- StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB: http://www.statsoft.com/textbook/stathome.html
- Rice Virtual Lab in Statistics (1993-2000), David M. Lane, HyperStat Online, Houston TX, WEB:http://www.ruf.rice.edu/~lane/rvls.html
- StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB: http://www.statsoft.com/textbook/stathome.html
- Box, G.E.P., and Cox, D.R. (1964), “An Analysis of Transformations,” J. Roy. Stat. Soc., Ser. B.
Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Part Three of Three
Determine the Confidence Interval as compared to +/- 3 S.D.
With the +/- 3 S.D. range determined, it is important to evaluate what confidence there is in the estimate of the process mean. Similar in spirit to the +/- 3 S.D. range, the confidence interval is a range within which the true process mean is expected to lie; the confidence level is typically 99% or greater. Thus a 99% confidence interval means there is 99% confidence that the true mean lies within the range. (Strictly speaking, an interval for where the next individual measurement will fall is a prediction interval, which is wider; the formula below describes the mean.) To calculate a 99% confidence interval, one needs the critical value from the standard normal curve, the mean, the standard deviation, and the sample size. This can be done using the formula in Figure 7. The confidence interval has been added to our previous example and is displayed in Figure 8.
Figure 7: Definition of Confidence Interval Formulas.
Figure 8: Trend Chart with +/- 3 S.D. and Confidence Interval.
Following our example (mean = 6.23, S.D. = 0.151202, n = 30, critical value z ≈ 3.29 for 99.9%):
Confidence Interval (99.9%) = 6.23 ± 3.29 × (0.151202 / √30)
Confidence Interval (99.9%) = 6.14 to 6.32
The confidence interval at 99.9% is considerably narrower than the +/- 3 S.D. range, as expected: the interval for the mean shrinks with sample size (by the factor 1/√n), while the +/- 3 S.D. range describes the spread of individual values.
As can be seen in the trend charts, the data fit well within the +/- 3 S.D. range, so there is high confidence that the next data point collected will also fall within it. This range may therefore be appropriate to use as acceptance criteria based on the statistics. If the confidence interval came out wider than the +/- 3 S.D. range, the analysis would have to be revisited to investigate errors in calculating the degree of normality, the +/- 3 S.D. range, the confidence interval, or the outliers, or errors in the sampling technique, to show that it was not computational error.
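The worked confidence-interval calculation can be reproduced in a few lines. This sketch uses the example's mean, S.D., and n with the large-sample normal critical value (a t critical value would give a slightly wider interval at n = 30):

```python
import math

# Values from the worked example: sample mean, sample S.D., n = 30.
mean, sd, n = 6.23, 0.151202, 30

# Standard normal critical value for a two-sided 99.9% interval.
z = 3.2905

half_width = z * sd / math.sqrt(n)
lower, upper = mean - half_width, mean + half_width

print(f"99.9% CI for the mean: {lower:.2f} to {upper:.2f}")
```

Swapping in 2.576 for `z` gives the 99% interval, which is narrower still.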
Process Capability Indices
Another method of setting ranges to be used as acceptance criteria is the process capability indices, defined by:
Cpu = (USL – µ) / 3s (in our example, 3s is the +/- 3 S.D. value)
Cpl = (µ – LSL) / 3s
USL = Upper Specification Limit
LSL = Lower Specification Limit
µ = process mean, s = standard deviation
An industry-accepted standard Cp is 1.33.¹ This would mean that only 0.003% of the test results would be out of specification, or that 99.997% would be within specification. This is a similar concept to the confidence level.
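The capability indices above can be sketched as follows; the specification limits here are hypothetical values chosen around the example mean of 6.23, not limits from the text.

```python
def capability_indices(mean, sd, usl, lsl):
    """Process capability indices with the usual 3-standard-deviation
    denominator: Cpu (upper), Cpl (lower), and Cpk (the smaller of the two)."""
    cpu = (usl - mean) / (3 * sd)
    cpl = (mean - lsl) / (3 * sd)
    return cpu, cpl, min(cpu, cpl)

# Hypothetical specification limits for illustration only
cpu, cpl, cpk = capability_indices(6.23, 0.151202, usl=6.9, lsl=5.6)
print(f"Cpu={cpu:.2f} Cpl={cpl:.2f} Cpk={cpk:.2f}")
```

With these assumed limits both indices exceed 1.33, so the process would be judged capable against the industry-accepted standard mentioned above.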
Recording the Maximum and Minimum
Recording the maximum and minimum values in the data is important because it is a quick way to see whether all of the data lie within the +/- 3 S.D. range. If the maximum and minimum are within the +/- 3 S.D. range, then there is an additional level of confidence, since all of the data would be within the range. Lastly, the data may be determined to be non-normally distributed; in that case, the +/- 3 S.D. range may predict too high a probability of failure, so in the interim the maximum and minimum values can be used as the range until further data can be collected to refine it (this refers back to increasing the sample size in order to approach a more normal distribution).
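The quick check described above can be sketched in a few lines; the data set here is hypothetical, standing in for the article's measurements.

```python
from statistics import mean, stdev

# Hypothetical measurements for illustration only
data = [6.1, 6.3, 6.2, 6.4, 6.25, 6.15, 6.35, 6.2, 6.3, 6.1]

m, s = mean(data), stdev(data)
lo, hi = m - 3 * s, m + 3 * s

# Do the observed max and min fall inside the +/- 3 S.D. range?
within = lo <= min(data) and max(data) <= hi
print(f"min={min(data)} max={max(data)} "
      f"+/- 3 S.D. range=({lo:.2f}, {hi:.2f}) all within: {within}")
```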
Assign Acceptance Criteria Range and Justify
Using all of the above analysis techniques, knowledge of the process, and agreement by a cross-discipline committee, acceptance criteria ranges can be assigned for the critical parameters and attributes. A general course of action is to record all the data for a given point in a spreadsheet; calculate the mean, S.D., population size, +/- 3 S.D. range, and 95% and 99% confidence intervals; plot the trend charts with the appropriate ranges; and then decide which range makes the most sense. When selecting the acceptance criteria, a cross-functional committee should be used, with backgrounds in QA, Manufacturing, Validation, R&D, and Engineering represented. The ranges should be selected and justified by scientifically sound data and conclusions. The ranges should be within the PAR for the product, which means that if +/- 3 S.D. is selected, the range should be checked at the upper and lower limits to verify that acceptable product is produced. This should be done prior to final agreement on the range and its incorporation into the validation protocol. A report should be written to document the ranges, the rationale for selecting them, the justification for the limits, and any determination that the ranges are within the PAR. Additionally, the ranges that are not to be included should be discussed in the report to justify why they are not recorded. A process validation protocol should then be prepared with these ranges as acceptance criteria, and the process should be run at a target within the acceptance criteria ranges at least three consecutive times using identical procedures to verify that the process is valid.
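The spreadsheet workflow described above can be sketched as a single summary routine that gathers every statistic the committee would review; the function name and the sample data are illustrative assumptions, not part of the article.

```python
import math
from statistics import NormalDist, mean, stdev

def range_summary(data):
    """Summarize a data set as the text describes: mean, S.D., n,
    +/- 3 S.D. range, 95% and 99% confidence intervals, and min/max."""
    m, s, n = mean(data), stdev(data), len(data)

    def ci(level):
        # Two-sided z-based confidence interval for the mean
        z = NormalDist().inv_cdf(0.5 + level / 2)
        h = z * s / math.sqrt(n)
        return (m - h, m + h)

    return {
        "mean": m, "sd": s, "n": n,
        "3sd_range": (m - 3 * s, m + 3 * s),
        "ci_95": ci(0.95), "ci_99": ci(0.99),
        "min": min(data), "max": max(data),
    }

# Hypothetical measurements for illustration only
summary = range_summary([6.1, 6.3, 6.2, 6.4, 6.25, 6.15, 6.35, 6.2, 6.3, 6.1])
print(summary["3sd_range"], summary["ci_99"])
```

A cross-functional committee could review such a summary for each critical parameter before agreeing on the range to carry into the validation protocol.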
Since the ideal case of validating a process during its implementation does not always exist in the pharmaceutical, biopharmaceutical, biotechnology, or medical device industries, it may be necessary to validate these processes using historical data. Historical data can be found in a variety of places, as long as the source is approved (e.g., approved and completed BPRs, quality control release documents, etc.). A cross-functional team should perform a risk assessment on the parameters and attributes to determine which ones should be included in the process validation. A range-establishing study for the attributes and parameters should then be performed to evaluate the historical data and analyze the data set for normality, variation (standard deviation), and confidence. With a high degree of confidence, acceptance criteria ranges should be set for each parameter and attribute, and a process validation protocol should be written with the appropriate ranges. This protocol should be approved and executed at target settings within the acceptance criteria ranges, from the start of the manufacturing process to the finish, using qualified equipment, approved SOPs, and trained operators. In the final report for the process validation, the degree to which the process is valid is determined by whether the approved acceptance criteria are satisfied.
1. Box, G.E.P., and Cox, D.R. (1964), “An Analysis of Transformations,” J. Roy. Stat. Soc., Ser. B., 26, 211.
2. Kieffer, Robert and Torbeck, Lynn, (1998), Pharmaceutical Technology, (June), 66.
3. Lane, David M., Rice Virtual Lab in Statistics (1993-2000), HyperStat Online, Houston TX, WEB: http://www.ruf.rice.edu/~lane/rvls.html.
4. National Center for Drugs and Biologics and National Center for Devices and Radiological Health, (1987), "Guidelines on General Principles of Process Validation," Rockville, MD, 15 May.
5. Ontario Ministry of Agriculture, Food and Rural Affairs (2000), Queen’s Printer for Ontario, Last Updated March 22, 2000; Web: http://www.gov.on.ca/omafra/english/research/risk/assum1b.html.
6. StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB: http://www.statsoft.com/textbook/stathome.html.
7. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Veterinary Medicine, "Guidance for Industry: Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients," Rockville, MD, March 1998, 36.