Month: June 2015

Critical Assessment of Pharmaceutical Processes: A Rationale for Changing the Synthetic Route


New Drug Approvals


Critical Assessment of Pharmaceutical Processes: A Rationale for Changing the Synthetic Route

AstraZeneca, Process R&D, Avlon/Charnwood, Avlon Works, Severn Road, Hallen, Bristol BS10 7ZE, U.K., GlaxoSmithKline, Synthetic Chemistry, Old Powder Mills, Tonbridge, Kent TN11 9AN, U.K., and Pfizer, Chemical R&D, PGR&D, Ramsgate Road, Sandwich, Kent CT13 9NJ, U.K.
Chem. Rev., 2006, 106 (7), pp 3002–3027
DOI: 10.1021/cr050982w
Publication Date (Web): March 8, 2006

Table of Contents

  • 1. Introduction
  • 2. Criteria for Process Assessment
    • 2.1. Safety Issues
  • 2.1.1. Potential Safety Issues and Their Significance
  • 2.1.2. Prediction and Assessment of Safety Issues
  • 2.1.3. Options To Manage Safety Issues
  • 2.1.4. Designing a Safer New Route
    • 2.2. Environmental…


Controlling Residual Arylboronic Acids as Potential Genotoxic Impurities in APIs


Arylboronic acids, but not the corresponding deboronated arenes, have recently been found to be weakly mutagenic in microbial assays [1]. Hence arylboronic acids may be considered potentially genotoxic impurities, and controlling the levels of residual arylboronic acids in APIs could become a regulatory requirement. The issue should be decided by toxicology studies of the specific arylboronic acids in question.

Several approaches have been successful in removing boronic acids.  Diethanolaminomethyl polystyrene (DEAM-PS) [2],[3] and immobilized catechol [4] have been used to scavenge boronic acids.  Complex formation with diethanolamine may solubilize residual boronic acids in mother liquors.  Since arylboronic acids ionize similarly to phenols, basic washes of an API solution may remove arylboronic acids.  A selective crystallization can purge an arylboronic acid from the API.
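Since arylboronic acids ionize roughly like phenols, the effectiveness of a basic wash can be estimated from the Henderson-Hasselbalch relation. A minimal sketch, assuming a pKa of about 8.8 (approximately that of phenylboronic acid) and ignoring partitioning and solubility effects:

```python
def fraction_ionized(pH: float, pKa: float) -> float:
    """Fraction of a weak acid present as its water-soluble anion at a
    given pH (Henderson-Hasselbalch relation)."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Assumed pKa ~8.8 (phenylboronic acid); other arylboronic acids will differ
for pH in (7.0, 9.0, 11.0):
    print(f"pH {pH}: {fraction_ionized(pH, 8.8):.1%} ionized")
```

At pH 11 essentially all of the boronic acid is present as the anion, which is why a strongly basic wash can pull it into the aqueous phase, provided the API itself is not acidic enough to follow it.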

The best means to control residual arylboronic acids in APIs at the ppm level may be to decompose them through deboronation. Sterically hindered, electron-rich arylboronates and heteroaromatic boronates are especially prone to deboronation [5],[6],[7]. Deboronation of a hindered difluorobenzeneboronic acid was accelerated in the presence of Pd, and increased further in the presence of H2O [8]. Kuivila and co-workers found that protodeboronation of 2,6-dimethoxybenzeneboronic acid was slowest at about pH 5, and rapid under more acidic or basic conditions [9]. Butters and co-workers found that protodeboronation occurred for an arylboronate anion, but not for the corresponding arylboronic acid [10]. Amgen researchers found that deboronation of an ortho-substituted arylboronic acid was fastest in the presence of K2CO3 [11]. The Snieckus group found that deboronation occurred readily when pinacol was added to 4-pyridylboronic acid [12]. Percec and co-workers found that deboronation of neopentylglycol boronates, especially an ortho-substituted arylboronic ester, was catalyzed by nickel species [13]. Kuivila and co-workers also found that CuCl2 catalyzed the deboronation of 2,6-dimethoxybenzeneboronic acid and other arylboronic acids, with formation of the corresponding aryl chlorides [14].

Unfortunately, adding reagents to a reaction mixture increases the burdens of analysis and impurity removal, although additives such as these may accelerate deboronation in difficult cases. Simply extending the reaction conditions, which are generally basic for efficient Suzuki coupling, or heating with some amount of aqueous hydroxide, are probably the preferred treatments to decompose an arylboronic acid. If the kinetics of decomposition of the arylboronic acid are known, it may be possible to show by QbD that analyses for the residual arylboronic acid in an API are not necessary.
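The closing QbD point can be made concrete with first-order kinetics: if deboronation is pseudo-first-order under the hold conditions, the hold time needed to reach a ppm target follows directly from the measured half-life. A sketch with entirely hypothetical numbers (the half-life and concentrations below are placeholders, not data):

```python
import math

def time_to_purge(c0_ppm: float, target_ppm: float, half_life_h: float) -> float:
    """Hold time (h) for a first-order decay from c0_ppm down to target_ppm."""
    k = math.log(2) / half_life_h          # first-order rate constant, 1/h
    return math.log(c0_ppm / target_ppm) / k

# Hypothetical: 5000 ppm residual arylboronic acid, 2 h half-life under
# the basic hold conditions, 10 ppm target.
print(f"required hold: {time_to_purge(5000, 10, 2.0):.1f} h")  # ~17.9 h
```

If such a calculation, backed by real kinetic data, shows the impurity is reliably purged well within the normal processing time, routine testing for it may be argued to be unnecessary.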

[1] O’Donovan, M. R.; Mee, C. D.; Fenner, S.; Teasdale, A.; Phillips, D. H. Mutat. Res.: Genet. Toxicol. Environ. Mutagen. 2011, 724(1-2), 1.

[2] Antonow, D.; Cooper, N.; Howard, P. W.; Thurston, D. E. J. Comb. Chem. 2007, 9, 437.

[3] Hall, D. G.; Tailor, J.; Gravel, M. Angew. Chem. Int. Ed. 1999, 38, 3064.

[4] Yang, W.; Gao, X.; Springsteen, G.; Wang, B. Tetrahedron Lett. 2002, 43, 6339.

[5] Goodson, F. E.; Wallow, T. I.; Novak, B. M. Organic Syntheses; Vol. 74; Smith, A. B., III, Ed.; Organic Syntheses, Inc.; 1997; p. 64.

[6] Baudoin, O.; Cesario, M.; Guénard, D.; Guéritte, F. J. Org. Chem. 2002, 67, 1199.

[7] Dai, Q.; Xu, D.; Lim, K.; Harvey, R. G. J. Org. Chem. 2007, 72, 4856.

[8] Cammidge, A. N.; Crépy, K. V. L. J. Org. Chem. 2003, 68, 6832.

[9] Kuivila, H. G.; Reuwer, J. F., Jr.; Mangravite, J. A. Can. J. Chem. 1963, 41, 3081.

[10] Butters, M.; Harvey, J. N.; Jover, J.; Lennox, A. J. J.; Lloyd-Jones, G. C.; Murray, P. M. Angew. Chem. Int. Ed. 2010, 49, 5156.

[11] Achmatowicz, M.; Thiel, O. R.; Wheeler, P.; Bernard, C.; Huang, J.; Larsen, R. D.; Faul, M. M. J. Org. Chem. 2009, 74, 795.

[12] Alessi, M.; Larkin, A. L.; Ogilvie, K. A.; Green, L. A.; Lai, S.; Lopez, S.; Snieckus, V. J. Org. Chem. 2007, 72, 1588.

[13] Moldoveanu, C.; Wilson, D. A.; Wilson, C. J.; Leowanawat, P.; Resmerita, A.-M.; Liu, C.; Rosen, B. M.; Percec, V. J. Org. Chem. 2010, 75, 5438.

[14] Kuivila, H. G.; Reuwer, J. F., Jr.; Mangravite, J. A. J. Am. Chem. Soc. 1964, 86, 2666.


The usefulness of boronic acids and their esters is well documented.

Over the last decade much interest has been shown in the field of boron chemistry, particularly in its inorganic aspects. Boronic acids, RB(OH)2, are one class of compound that has attracted widespread interest over this period. They have found use in various fields, such as sugar chemistry, thermoplastics and enzyme chemistry, as well as other areas of chemistry and biology.

Until now, the synthesis of these compounds has usually involved the reaction of Grignard or organolithium reagents with trialkyl borates. Although widely applicable, this method normally works well only at low temperatures and is limited to substrates lacking functional groups that could react with the organometallic intermediates (synthesis will be discussed later).

Figure 1-1: Some common boronic acids.

Boronic esters, RB(OR′)2, have also seen great research strides over the last decade. They are being used in macrocyclic chemistry, carbohydrate chemistry, and as molecular recognition compounds.

Figure 1-2: Some common boronic esters.






Before I begin to talk about the reactions of boronic acids, I feel it is fair to first discuss the properties of these compounds. Doing so makes it easier to understand why they react in the manner they do.


  • Acid Strength


Phenylboronic acid is about three times as strong an acid as boric acid, which is itself a weak acid. This parallels the substituent effects seen in acetic acid: any effect that reduces electron density at the acidic O-H proton strengthens the acid, while an effect that increases the electron density there weakens it. Substitution of an ortho hydrogen by an alkyl or phenyl group decreases the strength of phenylboronic acid, whereas a meta or para substitution results in only a slight decrease. Electron-withdrawing ring substituents increase the strength of the acid considerably, in the following order:

NO2 > F > Cl > Br > COOH
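The "about three times as strong" comparison corresponds to a pKa gap of log10 3 ≈ 0.5. A quick check using approximate literature pKa values (boric acid ~9.2, phenylboronic acid ~8.8; both are assumptions here, rounded to one decimal place):

```python
pKa_boric = 9.2            # approximate literature value (assumption)
pKa_phenylboronic = 8.8    # approximate literature value (assumption)

# Ka = 10**(-pKa), so the ratio of acid strengths is 10**(delta pKa)
ratio = 10 ** (pKa_boric - pKa_phenylboronic)
print(f"phenylboronic acid is ~{ratio:.1f}x stronger than boric acid")
```

With these rounded values the ratio comes out near 2.5, consistent with the "about three times" statement in the text.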


  • Substitution in the Aromatic Nucleus


When treated with a concentrated solution of NaOH, n-butylboronic acid gives a hydrated sodium salt; a calcium salt is also known. Substitution in the aromatic nucleus gives analogous salts, such as the barium salt of p-carboxyphenylboronic acid shown below.

Figure 1-2: Structure of the barium salt of p-carboxyphenylboronic acid


  • Anhydride formation


Another property of boronic acids is that they form anhydrides easily. Anhydride formation strongly affects the measured melting point of a given boronic acid, and is the main reason why the melting points of boronic acids are frequently misquoted. Phenylboronic acid, first prepared in 1882, has a melting point of ~218 °C. Boronic anhydrides are very easily hydrolysed, and readily form esters of the parent acid on heating with an alcohol or phenol.



Figure 1-3: Formation of esters via boronic acids

History of Boronic Acids:

The chemistry of boronic acids is a relatively young area. Early accounts of boronic acid synthesis involved the production of diboronic acids. This synthesis used tetrahydrofuran as the solvent, to reach a temperature high enough for complete metallation, and a Grignard reagent was formed in the process.

Figure 1-4: Formation of diboronic acid.

The Importance of the Grignard reagent:

Formation of the Grignard reagent in the above step is essential to the formation of the diboronic acid. Insertion of magnesium polarizes the carbon-metal bond, placing a slight negative charge on the carbon of the phenyl group and a slight positive charge on the Mg. The phenyl group therefore acts as a nucleophile and reacts readily with (MeO)3B.

Another technique used in the early years of boronic acid synthesis also involved the formation of a Grignard reagent. A general procedure was to attach an alkyl or aryl group to boron using a boron trihalide, with a Grignard reagent supplying the hydrocarbon group. Another early method was Khotinsky's, which formed the carbon-boron bond by reaction with an alkyl borate at 0 °C.


B(OR)3 (in ether) added to ArMgBr → ArB(OR)2 → ArB(OH)2

Boron trifluoride and boron trichloride have also been used to form boronic acids on reaction with Grignard reagents.

BF3(Cl,Br) + RMgBr → RBF2(Cl,Br) → RB(OH)2 + 2HF(Cl,Br)

Modern day Synthesis and important reactions of Boronic Acids:


Today there are numerous ways to synthesize boronic acids, and modifications continue to appear. One such technique prepares an arylboronic acid by heating a mixture of a trialkyl borate with magnesium and the aryl bromide. The reaction starts at ~150 °C, and by ~230 °C the magnesium disappears after 2-3 hours of heating under reflux. After the usual hydrolysis procedure the phenylboronic acid was produced and isolated in 33-44% yield. The advantage of using bromobenzene over chlorobenzene is that the mixture need not be heated as long; otherwise, the same results are obtained.

Figure 1-5: The Cowie reaction used to synthesize aryl boronic acids.

Tri-n-butyl borate even reacts with dibromobenzene to produce a diboronic acid. However, this method is not preferred because the yield is substantially lower, at 16%. p-Dichlorobenzene, on the other hand, responded poorly to the same reaction.

Figure 1-6: Synthesis of diboronic acid via dibromobenzene.


As mentioned earlier, anhydrides form easily when attempting to synthesize boronic acids. One example occurs in the attempted preparation of o-phenyldiboronic acid via the above method: a phenylboronic anhydride is produced together with a mixture of olefins. This result is unexpected, since hydrolysis of the trimer formed later would produce a phenylboronic acid. The exact mechanism of this reaction is unknown, but the most probable one is given below.





Figure 1-7: Probable mechanism of phenylboronic anhydride synthesis.


There are numerous other ways to synthesize boronic acids, and as a result there are numerous classes of boronic acids. The first class to be discussed is the α-aminoboronic acids. Although there are several examples of aminomethylboronate syntheses, the most important is probably the boronate amino acid isostere. This compound is a useful inhibitor of the serine protease chymotrypsin, and has therefore triggered great interest for both synthetic and medicinal use.




Figure 1-8: Formation of an α-aminoboronic acid

Another class of boronic acids that has been studied extensively is the α-haloboronic acids. These are produced by hydrolysis of halomethylboranes.


Figure 1-9: Formation of α-haloboronic acids

Arylboronic acids undergo protodeboronation on treatment with carboxylic acids such as HCO2H. The reaction proceeds via a six-membered cyclic transition state.



Figure 1-10: The protodeboronation of arylboronic acids.


Synthetic routes to boronic acid synthesis:


There are six general synthetic routes to boronic acids. The first used, and in many ways the least convenient, was the slow partial oxidation of the spontaneously inflammable trialkylboranes, followed by hydrolysis of the resulting ester.


Et3B + O2 → EtB(OEt)2 → EtB(OH)2


Ph3B + O2 → PhB(OPh)2 → PhB(OH)2

The ready availability of many trialkylboranes makes this an attractive route, but it is still used relatively little. Dialkylborinic acids can also be oxidized to produce boronic acids.


R2B(OH) + 1/2 O2 → RB(OH)2 + ROH

A second route to boronic acids involves the acid hydrolysis of trialkyl- or triarylboranes. The substituted borane is converted stepwise to the borinic acid, the boronic acid, and finally boric acid.



R3B → R2B(OH) → RB(OH)2 → B(OH)3

The third synthetic route is via halogenation and subsequent hydrolysis. The required alkyl(dihalogeno)borane intermediates can also be prepared via the fourth route: the reaction of organometallics with boron trihalides.

Bu3B →(Br2) BuBBr2 + Bu2BBr →(H2O) BuB(OH)2 + Bu2B(OH)

R3B →(I2) R2BI →(H2O) R2B(OH) →(air) RB(OH)2


BCl3 + Ar2Hg → ArHgCl + ArBCl2 → ArB(OH)2


R3B + BX3 → RBX2 + R2BX → RB(OH)2 + R2B(OH)

By far the most widely used method for preparing boronic acids is the reaction of aryl or alkyl Grignard reagents with organo-orthoborates or boron halides. The preferred starting materials include B(OMe)3, B(OBu)3 and BF3.

B(OR)3 + ArMgX →(50 °C) [ArB(OR)3MgX] →(H3O+) ArB(OH)2

Organolithium compounds have also been used to synthesize boronic acids.


B(OR)3 + R′Li → [R′B(OR)3]Li → R′B(OH)2

The final route to boronic acids involves aromatic substitution of an existing boronic acid by conventional organic techniques. This is particularly important since the Grignard or organolithium syntheses just mentioned cannot be carried out in the presence of other reactive functional groups such as nitro, amino, carboxy, sulphono, etc. A typical sequence of reactions is as follows:


Figure 1-11: Formation of boronic acids via aromatic substitution.




Handling, Toxicity, and Analysis of Boronic Acids:

Since there are numerous boronic acids, I will analyse each of the above headings in terms of just one, namely benzeneboronic acid. Benzeneboronic acid, like most boronic acids, is sensitive to oxygen and susceptible to hydrolysis. When handling benzeneboronic acid one must take account of the chemical and physical characteristics of the particular compound.

The toxicity of benzeneboronic acid is fairly high. Its median lethal dose (LD50) in rats is 740 mg/kg: that is, a dose of 740 mg/kg killed 50% of the rats to which it was administered.
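Because an LD50 is a dose per unit body mass, the absolute quantity corresponding to 50% mortality scales with the animal's weight. A trivial illustration (the 250 g body mass is an arbitrary example, not from the text):

```python
LD50_MG_PER_KG = 740.0  # oral LD50 of benzeneboronic acid in rats (from the text)

def median_lethal_quantity_mg(body_mass_kg: float) -> float:
    """Absolute dose (mg) at which 50% mortality would be expected."""
    return LD50_MG_PER_KG * body_mass_kg

print(median_lethal_quantity_mg(0.25))  # a 250 g rat -> 185.0 mg
```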

The analysis of volatile boronic acids can be achieved by vapor-phase chromatography, provided that inert liquid and stationary phases are used, protic sites are converted to trimethylsilyl groups, and injection-port temperatures are regulated so that thermal degradation of the molecules does not occur. As a general rule, the retention time of a boronic acid on a nonpolar column is similar to that of the corresponding hydrocarbon. Analytical determinations of boronic acids depend on complete cleavage of the boron-carbon bond (cleavage of both bonds for a diboronic acid). Cleavage can be effected by oxidation or by a variety of reagents, including NaOH, alkaline hydrogen peroxide, sulphuric acid, etc. Some boronic acids can also be titrated with NaOH in the presence of mannitol, using phenolphthalein as an indicator.
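The mannitol-assisted titration reduces to simple stoichiometry if one assumes a 1:1 NaOH:boronic acid equivalence (mannitol complexation makes the boronic acid behave as a stronger monoprotic acid). A sketch for phenylboronic acid, C6H5B(OH)2, molar mass ~121.93 g/mol; the sample size and titrant strength below are arbitrary examples:

```python
M_PHENYLBORONIC = 121.93  # g/mol for C6H5B(OH)2

def naoh_volume_mL(sample_mg: float, naoh_molarity: float) -> float:
    """Titrant volume (mL) at the equivalence point, assuming 1:1
    acid:base stoichiometry in the mannitol-assisted titration."""
    moles_acid = (sample_mg / 1000.0) / M_PHENYLBORONIC
    return moles_acid / naoh_molarity * 1000.0

# e.g. a 121.93 mg sample titrated with 0.100 M NaOH
print(f"{naoh_volume_mL(121.93, 0.100):.2f} mL")  # 10.00 mL
```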


Applications of Boronic Acids and recent research:

Boronic acids have been used for various purposes. Alkylboronic acids, including the nonyl and dodecyl compounds, have been used for research and development purposes. These boronic acids are described as 'bacteriostatic', 'fungistatic' and 'surface active'. They are used as gasoline additives, polymer components and stabilizers for aromatic amines. These properties are of importance for controlling slime organisms in paper manufacturing, and for minimizing bacterial damage in paints, leathers and polymers. They are also used in the dehydrated form, (RBO)3, as potential oil additives for stabilization against oxidation and polymerization. Alkylboronic acids are also used in motor fuels, particularly in race cars.

Recently, boronic acids have been used as reagents in Suzuki coupling reactions to produce aryl-substituted pyrrolobenzodiazepines (PBDs). These compounds have been used as gene-targeting vectors and as antitumour and diagnostic agents.


Figure 1-12: Use of boronic acids to produce PBD via Suzuki Coupling.

A tandem Suzuki coupling-cyclisation with 2-formylthiophene-3-boronic acid was used to synthesise the thieno[2,3-c]-1,7-naphthyridine ring system.

A palladium-catalysed Suzuki-type cross-coupling of an iodocyclopropane with thiophene-3-boronic acid was promoted by cesium fluoride.

An asymmetric boronic acid Mannich reaction of furan-2-boronic acid with an aldehyde was carried out in the presence of a chiral amine template.

Figure 1-13: New Boronic acids reactions.




There are numerous ways to synthesize boronic esters. Among these are β-eliminations: oxidative processes in which the boron becomes bonded to oxygen or another electronegative element, carbon loses an electronegative ligand at the β-position, and a carbon-carbon π-bond is formed. It is not remarkable that β-eliminations of boron and an electronegative atom occur, but boron is unusual for a relatively metallic element in that its β-eliminations tend to be very slow. There are various types of β-elimination, including β-elimination of halides, vinylic chlorides, alkoxides, oxide anions and azides.

Boronic esters can be made easily by the reaction of boronic acids with a hydroxy compound. In this reaction water is removed as an azeotrope.

RB(OH)2 + 2R′OH → RB(OR′)2 + 2H2O

Esters of alkylboronic acids are in most cases hydrolysed more slowly than esters of arylboronic acids or of boric acid (such as di-n-butyl borate), and this relative stability allows them to be isolated from the Grignard reaction mixture. As a rule of thumb, however, boronic esters are far from hydrolytically stable, although they have good thermal stability.

When the aryl and alkyl groups are of ordinary reactivity, both esterification and hydrolysis involve B-O fission, not C-O fission. This is because (+)-2-methylheptanol is isolated with unchanged rotatory power after being converted into, and recovered from, the ester of phenylboronic acid. This optical result is confirmed when a (+)-alcohol is reacted with phenylboron dichloride.

Figure 1-12: Ester preparation via a (+)-alcohol and a phenylboron dichloride.

Cyclic esters are an important part of boronic ester chemistry. One method of preparing them involves mixing dihydric alcohols or phenols with the boronic acid. Halogeno-ester preparation involves mutual replacement reactions, which produce both alkyl- and arylboronic halogeno-esters and occur quickly.

n-BuB(OBun)2 + BCl3 → n-BuBCl(OBun) + BunOBCl2


Figure 1-13: Synthesis of halogeno-esters.


α-Amino boronic esters:

Secondary amines react readily with iodomethylboronic esters to produce stable (dialkylamino)methylboronic esters. These reactions are illustrated by the conversion of pinacol iodomethylboronate to its piperidine and triethylamine derivatives.


Figure 1-14: Synthesis of piperidine and triethylamine derivatives


Air Oxidation:

Cyclic boronic esters are generally stable in air, and can be stored for days or weeks without protection from atmospheric oxygen. This is misleading, however, because autoxidation can consume an entire sample within a relatively short time. Many cases have been recorded where boronic esters were used for a number of weeks, but after a few months of storage the entire compound had decomposed through exposure to air via a hole in the bottle. A rubber septum cap is thus not sufficient for long-term storage. Analytical samples of boronic esters should be stored in screw-cap vials, and the vials should be replaced every year if long-term storage is intended.

Reactions of boronic esters with lithium compounds:

The reaction of boronic esters with lithium compounds is a complex process with several factors to consider, among them the choice of chiral diol, protection of the α-halo boronic ester, and catalysis of the reaction. Depending on the chiral diol used, two different stereochemical outcomes may result, as illustrated by two different borate complexes: the borate complex derived from a pinanediol alkylboronate rearranges stereoselectively, but the borate complex from pinanediol (dichloromethyl)borate yields a gross mixture of diastereomeric α-chloro boronic esters.


Figure 1-15: Reactions of esters with Lithium compounds.

Figure 1-16: New Boronic ester reactions.








Aseptic Manufacturing Operation: Chinese Company Zhuhai United Laboratories does not comply with EU GMP




While the focus of attention has been on Indian manufacturers during the last two years, Chinese manufacturers are now also in the spotlight. Just recently the EU found serious GMP deviations at an API manufacturer (Huzhou Sunflower Pharmaceuticals), and on 15 June 2015 the National Agency for Medicines and Medical Devices of Romania entered a GMP non-compliance report for Zhuhai United Laboratories Co., Ltd, located at Sanzao Science & Technology Park, National Hi-Tech Zone, Zhuhai, Guangdong, 519040, China, into EudraGMDP.

According to the report issued by the EU GMP inspectors, Zhuhai United was not "operating its aseptic manufacture operations in compliance with EU GMP Annex 1. This was evident from the high number of observations regarding the aseptic manufacturing facilities design, equipment, operations, environment monitoring and media fill validation". The company's internal quality system did not detect the problems. As a consequence the GMP non-compliance report was entered in the EU database. Companies purchasing from Zhuhai United must change their API supplier; the CEP has been suspended.

However, the case also reveals a serious problem for patients in Europe. Because manufacturing sites have moved mainly to Asia, alternative sources may not always exist. In this specific case the EU GMP certificate was withdrawn by the GMP inspectors but will be reissued to cover only non-sterile products; a restricted GMP certificate will be issued for critical sterile medicinal products for Romania, France, and the UK.

Stakeholders in industry, regulatory authorities and governments in the EU need to discuss the problems caused by outsourcing and by the fact that some products are manufactured in the necessary quantity by only a single source. Once such a supplier fails to comply with GMP, products are either no longer available, or, by exception, the non-GMP-compliant products may still be used so that patients receive the medicines they urgently need. Neither option is acceptable from a patient's point of view.


Sun Kim: Quality-by-Design Evangelist



Sun Kim – Quality by Design for Pharma, Biotech, Medical Devices

Dr. Sun K. Kim is a Quality-by-Design Evangelist, transforming how Product Development is executed in the Biologics, Pharmaceutical and Medical Devices industries. In addition, he teaches at Keio University and Stanford University. His current research focus is Quality-by-Design and the Agile Development of Drugs and Therapeutics.

He received his MS and PhD in Mechanical Engineering from Stanford University. Sun was until recently a Professor at Keio University in Japan. Prior to his Silicon Valley days, he served in the Korean Army and worked at BMW in Munich, Germany.

Sun Kim


Quality by Design – Founder


January 2013 – Present (2 years 6 months)

Founder of

Quality by Design for Biotech, Pharmaceutical and Medical Devices – Quality by Design Tools and Case Studies


Stanford University

2005 – Present (10 years), Stanford, CA

Teach Design for Manufacturing, Robust Design, Design of Experiments

Master Black Belt in Quality-by-Design, Lean Six Sigma, Sr. Manager

Bayer HealthCare

November 2012 – May 2015 (2 years 7 months), Berkeley, CA

Sr. Manager, Master Black Belt in Quality-by-Design, Design for Lean Six Sigma,
Leading Business Process Management

Design for Excellence Evangelist


March 2011 – November 2012 (1 year 9 months)

Master Black Belt (Lean Six Sigma), Project Management Professional, Scrum Master

Assistant Professor of Graduate School of Systems Design and Management Assistant

Keio University

March 2008 – March 2011 (3 years 1 month)

Lectured and advised graduate-level professional students on system and product design, design thinking, creative brainstorming methodologies, prototyping, project management and business development. Solicited 15 industry project partners. Generated $35,000/year after developing a non-degree curriculum for professionals. Co-investigator on $50,000 of research projects: Indoor Location-based Services Technology for Mobile Devices. Consulted for manufacturing companies (Hitachi, Toshiba) on growth strategies for Service Business Innovation. Other projects included developing a cost-simulation tool for product design based on injection molding, design for manufacturing, and healthcare delivery systems. Began as a lecturer in Feb. 2008 to co-develop a project-based design curriculum, the Active Learning Program Sequence (ALPS), educating over 100 graduate students every year.

Invited as an Assistant Professor in June, 2009.

Research, Teaching Assistant

Stanford University

September 2005 – June 2009 (3 years 10 months)

Lectured, coached and managed over 40 multi-disciplinary teams on Design for Manufacturing projects from Biomedical device (Medtronic, Maquet, St. Jude Medical, etc.) and automotive companies (Toyota, Nissan, GM, etc.). Served as the main research associate of Toshiba Corporation Six-Sigma Consulting Inc., developing systems design and manufacturing programs for Toshiba employees. Innovative projects were mobile personal-assistant IT system and agile transportation infrastructure.

Industry-sponsored Projects

Stanford University

September 2004 – June 2009 (4 years 10 months)

Maquet Cardiovascular: Coached and led a 3-member team in redesigning the crimping process of Hemashield Grafts, resulting in a cost reduction of $75,000 and in reduced operators' medical costs from injuries.

Satiety (Bariatric Surgery Device for Obesity Treatment) Design for Manufacturing Project: Coached a 3-member team in redesigning the packaging and supply chain for the Toga System, resulting in a 50% improvement in supply-chain efficiency by applying Lean and error-proofing (poka-yoke) techniques.

Medtronic Vascular: Coached a 4-member team in redesigning the manufacturing line of a stent-graft, resulting in a 73% reduction of lead time and increased reliability and performance. Observed over 5 vascular and general surgery cases.

St. Jude Medical: Led a 4 member team in developing a 7-year supply chain strategy for new service centers of ICD/pacemaker programmers in Europe, Asia, N. and S. America and Oceania. The recommendation consists of an optimized cost model from net present value analysis and AHP location decision modeling that will save $461 million over 7 years and increase customer service rate compared to the existing service centers.

Nissan Motors: Led a 4-member team to construct a 20-year technology/business roadmap for Nissan fuel cell powertrains and vehicles, projecting $4.8 billion in revenue. Created a fuel cell vehicle concept design by applying Design for Manufacturing tools including market research, manufacturability, and profitability analysis.

Zimmer Orthopedics: Implemented, with five team members, a FEA (finite element analysis) simulation tool with ABAQUS which assists in the development of treating knee osteoarthritis, based on MRI and gait data.

General Motors: Achieved potential cost reduction of $450 per vehicle and reduced 50kg of car weight by replacing wires with conductive coatings and RFID applications with a team of four members.

Design for Six Sigma Research Fellow


September 2005 – March 2009 (3 years 7 months)

Developed Design for Six Sigma Curriculum for Toshiba Corp.
Trained engineers, managers in systems design methodologies.
Coached Six Sigma projects.

Medical Device Design Innovation Program Developer

Johnson & Johnson

July 2007 – September 2007 (3 months)

Developed an innovation-incubation program to design/develop next generation product/technology with physicians and multidisciplinary design teams.

Design, Manufacturing Consultant

NeoGuide Systems

April 2006 – September 2006 (6 months)

Performed Robust Design, Design of Experiment, Developed manufacturing tooling, testing protocols and automated stations.

Reliability Research Assistant

Stanford Linear Accelerator Center

June 2005 – December 2005 (7 months)

Developed a reliability decision-analysis tool for the LINAC system, projected to save $83 million per year on the $8 billion International Linear Collider Project. Enhanced reliability of the accelerator system by up to 20% and increased throughput by linking a Failure Mode and Effects Analysis of the tuner to the evaluation tool.

Optimization/ Structural Analysis Intern

Samsung Electronics

February 2004 – February 2004 (1 month)

Improved impact-worthiness (20%) by optimizing design parameters of cell phone cases after performing structural analysis.

Design, Manufacturing Engineer


June 2003 – January 2004 (8 months)

Applied for 1 patent individually and created 3 design proposals as a team in 3 months. Saved $2,760 per month by implementing a knowledge database management system. Resolved 3 process problems during a 2-week manufacturing rotation program at the Munich assembly plant with the manufacturing engineers.

US – ROK Army Radiology Tech, Company Leader, Manager

Republic of Korea Army

February 2000 – April 2002 (2 years 3 months)

Served 20,000 US Army patients as a radiology technician; tasks included diagnostic x-ray imaging, upper GI studies, etc. Managed 12 multinational soldiers and a radiology department that was recognized as "the accident-free company." Increased the availability of the radiology department, which serves 20,000 patients, by up to 130% by building a forecast-based schedule planning system.



Stanford University


Ph.D., Mechanical Engineering

2006 – 2009

Focus in Systems (Product, Service, Business) Design, Design Thinking, Design for Manufacturing and Six Sigma

Activities and Societies: Design Society, ASME, IEEE, INCOSE, ACM

Stanford University


MS, Mechanical Engineering

2004 – 2006

Focus in Biomedical Device Design, Reliability Engineering, Operations Research

Activities and Societies: KOSEF Academic Fellow


A New Project-Based Curriculum of Design Thinking with Systems Engineering Techniques

International Journal of System of Systems Engineering


Agile Project Management for Root Cause Analysis Projects

International Conference on Engineering Design


A New Project-Based Curriculum of Design Thinking with Systems Engineering Techniques

Council of Engineering Systems Universities


Evaluation of Design for Service Innovation Curriculum: Validation Framework and Preliminary Results

International Journal of Services Technology and Management


A Validation Regarding Effectiveness of Scenario Graph

ASME International Design Engineering Technical Conferences


Wants Chain Analysis: Human-centered Method for Analyzing and Designing Social Systems

International Conference on Engineering Design


Scenario-based Amorphous Design (SAD) Framework for a Location-based Services Technology

Mobile Human Computer Interaction


Transforming Seamless Positioning Technology into a Business using a Systems Design Approach—Scenario-based Amorphous Design

IEEE- International Systems Conference


Design for Service Innovation: A Methodology for Designing Service as a Business for Manufacturing Companies

International Journal of Services Technology and Management


Preliminary Validation of Scenario-based Design for Amorphous Systems

International Conference on Systems Engineering


Tools for Project-based Active Learning of Amorphous Systems Design: Scenario Prototyping and Cross Team Peer Evaluation

ASME International Design Engineering Technical Conferences


Active Learning Project Sequence: Capstone Experience for Multi-disciplinary System Design and Management Education

International Conference on Engineering Design


Demystifying Ambiguity in The Design of Amorphous Systems

International Conference on Systems Engineering


Scenario-based Design for Amorphous Systems

ASME International Mechanical Engineering Congress and Exposition


Analysis and Design Methodology for Recognizing Opportunities and Difficulties for Product-based Services

Information Processing Society of Japan (IPSJ) Journal


Scenario Graph: Discovering new business opportunities and Failure Modes

ASME International Design Engineering Technical Conferences


Analysis and Design Methodology for Product-based Services

Annual Conference of the Japanese Society for Artificial Intelligence






About QbDWorks…

Are you a Scientist in the Pharmaceutical, Biopharmaceutical or Medical Devices industries?

Then you are probably asking:

  • Does Quality-by-Design actually work?
  • Or is it just another program like Lean or Six Sigma?
  • How do I implement QbD successfully?
  • How do I persuade my management?
  • What is the first step?

As a QbD practitioner, I had the same questions and am trying to answer them as I test different elements of QbD.

Through our members’ successes and failures in QbD, you can save time by not repeating them yourself. There is a wealth of lessons learned and knowledge to share with your QbD team.

Who are You?

Sun Kim

My name is Sun Kim. I currently practice Quality-by-Design, transforming how Product Development is executed in Biopharmaceutical, Pharmaceutical, Biologics, and Medical Device industries.

In addition, I teach at Stanford University. My research focus is Lean Quality by Design.

I received my MS and PhD in Mechanical Engineering at Stanford University and was recently an Assistant Professor at Keio University in Japan. Prior to my Silicon Valley days, I served in the Korean Army and worked at BMW AG in Munich, Germany, where my lifelong pursuit of “Product Development Methodology” began.

Why is an Engineer working in the Bio/pharmaceutical Industry?

Thanks for asking! While working at BMW as a member of the elite “KREATIV” (creative) team, my goal was to develop the best technologies and products for the automotive industry. However, our approach was somewhat ad hoc and heuristics-based. I knew there was a better way.

So I set my heart on learning how the best products are developed across all industries. This led me to my PhD research at Stanford University.

Little did I know this would turn out to be more than just a graduate program. On top of the typical coursework, research, and publishing, my schedule was packed with hands-on consulting/research projects with GE Healthcare, GE Aviation, Toyota, Nissan, Toshiba, Medtronic, Johnson & Johnson, startups, etc.

After contributing to the product development approaches of academia, Fortune 500 companies, and startups, I knew my heart was always with health care. So I returned to the benches and trenches.

For the last 10 years, I have been working in the biopharmaceutical, pharmaceutical, and medical device industries to make Quality by Design a reality.

So please join me in this Quality by Design journey.

Sign up to connect to the community and receive updates.

Together, we can change how drugs and therapies are developed for our patients and families.

Let’s Connect!

If you’d like to connect, please invite me:

Previously I worked (full time or consulting) with:


The views expressed on this website are personal opinions and in no way reflect the position of any organization.


Inna Ben-Anat, Global QbD Director of Teva Pharmaceuticals

Posted on Updated on

Meet Inna Ben-Anat, Global QbD Director of Teva Pharmaceuticals. Inna is a key thought leader in Quality by Design for generics.

Inna Ben-Anat is a Quality by Design (QbD) Strategy Leader at Teva Pharmaceuticals USA. In this role, she has implemented a global QbD training program and supports R&D teams in developing Quality by Design strategies, optimizing formulations and processes, and developing product specifications. Additionally, she supports the Process Engineering group with process optimization during scale-up and supports Operations in identifying and resolving technical issues. Inna has extensive expertise in process development, design of experiments for process and product optimization, and statistical data analysis, and has more than 12 years of pharmaceutical development experience. She received her BSc in Chemical Engineering and ME in Quality Assurance and Reliability in Engineering from the Technion-Israel Institute of Technology. Teva Pharmaceutical Industries Ltd., 5 Basel Street, PO Box 3190, Petah Tiqwa 4951033, Israel.



Assoc. Director, Global QbD Strategy

Teva Pharmaceuticals

May 2013 – Present (2 years 2 months)

QbD Strategy Leader

Teva Pharmaceuticals

March 2011 – Present (4 years 4 months)

Analytical R&D

TransPharma Medical

2001 – 2005 (4 years)

Employment History

  • Director, Global QbD Strategy and Product Robustness
    Teva Pharmaceutical Industries Ltd.
  • QbD Strategy Leader
    Teva Pharmaceutical Industries Ltd.
  • Analytical Research and Development Manager
    TransPharma Medical Ltd


Petah Tikva, ISRAEL


Understanding Pharmaceutical Quality by Design 2014 Lawrence X. Yu, Gregory Amidon, Mansoor A. Khan, Stephen W. Hoag, James Polli, G. K. Raju, Janet Woodcock
A quality by design study applied to an industrial pharmaceutical fluid bed granulation 2012 Vera Lourenço, Dirk Lochmann, Gabriele Reich, José C. Menezes, Thorsten Herdling, Jens Schewitz
Application of the quality by design approach to the drug substance manufacturing process of an Fc fusion protein: towards a global multi-step design space. 2012 Eon-duval A, Valax P, Solacroup T, Broly H, Gleixner R, Strat CL, Sutter J.
Instrumented roll technology for the design space development of roller compaction process 2012 Vishwas V. Nesarikar, Nipa Vatsaraj, Chandrakant Patel, William Early, Preetanshu Pandey, Omar Sprockel, Zhihui Gao, Robert Jerzewski, Ronald Miller, Michael Levin
Rapid exploration of curing process design space for production of controlled-release pellets 2012 Katja Kristan and Matej Horvat
Automating push-through force testing of blister packs 2011 Jha, Salil
Quality by Design for ANDAs: An Example for Modified Release Dosage Forms 2011
QbD Status Update Generic Drugs 2011 Susan Rosencrance
Design-for-Six-Sigma for Development of a Bioprocess Quality-by-Design Framework 2011 Junker B, Kosinski M, Geer D, Mahajan R, Chartrain M, Meyer B, Dephillips P, Wang Y, Henrickson R, Ezis K, Waskiewicz M.
Quality by design and process analytical technology for sterile products–where are we now? 2010 Riley BS, Li X.
Development of quality-by-design analytical methods 2010 Frederick G. Vogt and Alireza S. Kord
Roadmap for implementation of quality by design (QbD) for biotechnology products. 2009 Rathore AS
Quality by Design for Biopharmaceuticals: Principles and Case Studies 2009 Anurag S. Rathore (Editor), Rohin Mhatre (Editor)
Quality by design for biopharmaceuticals 2009 Anurag S Rathore & Helen Winkle
Formulation optimization of long-acting depot injection of aripiprazole by using D-optimal mixture design 2009 Nahata T, Saini TR.
Quality by Design: Get to the production scale quickly and successfully with Design of Experiments 2009 Norbert Pöllinger, Stefanie Feiler, Philippe Solot,
Pharmaceutical quality by design: product and process development, understanding, and control. 2008 Yu LX
Why quality-by-design should be on the executive team’s agenda 2009 Ted Fuhr, Michele Holcomb, Paul Rutten
BioProcess Control: What the Next 15 Years Will Bring 2008 Michael Boudreau, Emerson Process Management, Inc. and Trish Benton, Broadley-James Corp.
QbD for Better Method Validation and Transfer 2010 Phil Nethercote, Phil Borman, Tony Bennett, GSK; Gregory Martin, Complectors Consulting LLC, and Pauline McGregor, PMcG Consulting
A Quality-by-Design Methodology for Rapid LC Method Development (Parts I, II and III) 2008 Ira Krull, Michael Swartz, Joseph Turpin, Patrick H. Lukulay, Richard Verseput
What Your ICH Q8 Design Space Needs: A Multivariate Predictive Distribution 2010 John J. Peterson, GlaxoSmithKline Pharmaceuticals
Quality-by-Design Using a Gaussian Mixture Density Approximation of Biological Uncertainties 2010 N. Rossner, Th. Heine, R. King
An Example of Utilizing Mechanistic and Empirical Modeling in Quality by Design 2010 Daniel M. Hallow, Boguslaw M. Mudryk, Alan D. Braem, Jose E. Tabora, Olav K. Lyngberg, James S. Bergum, Lucius T. Rossano, Srinivas Tummala
Process modeling and control in drug development and manufacturing 2010 Edited by Cynthia Oksanen and Salvador García-Muñoz
Bioprocesses: Modeling needs for process evaluation and sustainability assessment 2010 Concepción Jiménez-González, John M. Woodley
Computer-aided molecular design using the Signature molecular descriptor: Application to solvent selection 2009 Derick C. Weis , Donald P. Visco
Process modeling and optimization of batch fractional distillation to increase throughput and yield in manufacture of active pharmaceutical ingredient (API) 2010 Yubo Yang, Rosalin Tjia
The use of modeling in spray drying of emulsions and suspensions accelerates formulation and process development 2009 James W. Ivey, Reinhard Vehring
Pharmaceutical process/equipment design methodology case study: Cyclone design to optimize spray-dried-particle collection efficiency 2010 Lisa J. Graham, Rebecca Taillon, Jim Mullina, Trevor Wigle
Practical application of roller compaction process modeling 2010 Gavin Reynolds, Rohit Ingale, Ron Roberts, Sanjeev Kothari, Bindhu Gururajan
Understanding variation in roller compaction through finite element-based process modeling 2010 John C. Cunningham, Denita Winstead, Antonios Zavaliangos
Optimizing the design of eccentric feed hoppers for tablet presses using DEM 2010 William R. Ketterhagen, Bruno C. Hancock
Temperature and density evolution during compaction of a capsule shaped tablet 2010 Gerard R. Klinzing, Antonios Zavaliangos, John Cunningham, Tracey Mascaro, Denita Winstead
Drug product modeling predictions for scale-up of tablet film coating—A quality by design approach 2010 Andrew Prpich, Mary T. am Ende, Thomas Katzschner, Veronika Lubczyk, Holger Weyhers, Georg Bernhard
Handling uncertainty in the establishment of a design space for the manufacture of a pharmaceutical product 2010 Salvador García-Muñoz, , Stephanie Dolph, Howard W. Ward II
ICAS-PAT: A software for design, analysis and validation of PAT systems 2009 Ravendra Singh, Krist V. Gernaey, Rafiqul Gani
An ontological knowledge-based system for the selection of process monitoring and analysis tools 2010 Ravendra Singh, Krist V. Gernaey, Rafiqul Gani
An ontological framework for automated regulatory compliance in pharmaceutical manufacturing 2009 M. Berkan Sesen, Pradeep Suresh, René Banares-Alcantara, Venkat Venkatasubramanian

Determining Criticality-Process Parameters and Quality Attributes

Posted on Updated on

Determining Criticality-Process Parameters and Quality Attributes Part I: Criticality as a Continuum

A practical roadmap in three parts that applies scientific knowledge, risk analysis, experimental data, and process monitoring throughout the three phases of the process validation lifecycle.

As the pharmaceutical industry tries to embrace the methodologies of quality by design (QbD) provided by the FDA’s process validation (PV) guidance (1) and International Conference on Harmonization (ICH) Q8/Q9/Q10 (2-4), many companies are challenged by the evolving concept of criticality as applied to quality attributes and process parameters. Historically, in biopharmaceutical development, criticality has been a frequently arbitrary categorization between important high-risk attributes or parameters and those that carry little or no risk. This binary designation was usually determined during early development for the purposes of regulatory filings, relying heavily on scientific judgment and limited laboratory studies.

Figure 1: Process validation lifecycle.

With the most recent ICH and FDA guidances endorsing a new paradigm of process validation based more on process understanding and control of parameters and less on product testing, the means of determining criticality has come under greater scrutiny. The FDA guidance points to a lifecycle approach to process validation (see Figure 1). “With a lifecycle approach to process validation that employs risk-based decision making throughout that lifecycle, the perception of criticality as a continuum rather than a binary state is more useful.” The problem is that a practical approach of determining this criticality “continuum” using risk analysis has been left to each company to develop.

This article presents the first part of a practical roadmap that applies scientific knowledge, risk analysis, experimental data, and process monitoring throughout the three phases of the process validation lifecycle to first determine and then refine criticality. In this approach, criticality is used as a risk-based tool to drive control strategies (Stage 1), qualification protocols (Stage 2), and continued process verification (Stage 3). Overall, a clear roadmap for defining, supporting and evolving the criticality of parameters and attributes throughout the process-validation lifecycle will allow pharmaceutical companies to easily embrace the new process-validation paradigm. Furthermore, processes will be more robust and continuous improvement opportunities more easily identified.

In Part I of this series, the author used risk analysis and applied the continuum of criticality to quality attributes during the process-design stage of process validation. After using process knowledge to relate the attributes to each process unit operation, the inputs and outputs of each unit operation were defined to determine process parameters and in-process controls. An initial risk assessment was then completed to determine a preliminary continuum of criticality for process parameters.


In Part II, the preliminary risk levels of process parameters provided the basis of characterization studies based on design of experiments (DOE). Data from these studies were used to confirm the continuum of criticality for process parameters.

In Part III, the control strategy for the process was developed from a design space established from characterization studies. As the process-qualification stage proceeds, the continuum of criticality was used to develop equipment qualification criteria and strategies for process performance qualification. Finally, in the continued-process-verification stage of process validation, criticality was used to determine the frequency of monitoring and analysis.

From binary to continuum
On the surface, deciding whether an attribute or parameter is critical or not may seem clear and simple. After all, data are compared to acceptance criteria in countless decisions regarding clinical trials, experimental studies, qualifications, and product release. Either the acceptance criteria are met, or they are not. Companies that take this familiar path have tried to draw a definitive line between the “critical” and “not critical” sides. Once a decision has been made about criticality, there is no need to look again. It doesn’t help that the guidance documents for industry have been vague on where this criticality threshold lies. The FDA’s PV guidance avoids the issue: “attribute(s) … and parameter(s) … are not categorized with respect to criticality in this guidance” (1).

ICH Q8(R2) provides the following definitions using the term critical:

Critical process parameter (CPP). A process parameter whose variability has an impact on a critical quality attribute and, therefore, should be monitored or controlled to ensure the process produces the desired quality.
Critical quality attribute (CQA). A physical, chemical, biological, or microbiological property or characteristic that should be within an appropriate limit, range, or distribution to ensure the desired product quality.

This interpretation of CQA is most applicable to in-process and finished-product specification limits, which suggests that these limits must be critical given that they were designed to ensure product quality. During the early stages of process development and design, other quality attributes may be measured that, over the course of development, do not end up as either in-process or finished-product tests in the commercial process. These test results may show little variation and present little to no risk to product quality. In other cases, attributes such as process duration or yield are measured but are not related to product quality and are, therefore, not CQAs. However, even when defined as critical, not all CQAs have equal impact on safety and effectiveness (3, 5).

The definition for CPP states that a parameter is considered critical when its variability has an impact on a CQA. The amount of impact is not defined, which leads to the question, does even a small impact to a CQA mean that the parameter is critical? It is not difficult to imagine the example of an extreme shift of a process parameter having a minor impact on a CQA, whether measurable or not. Extreme temperatures can destroy many pharmaceutical products; however, if a process inherently cannot produce such temperatures, is temperature still considered to be critical and, therefore, required to be monitored and controlled?

When these definitions are strictly interpreted, some companies find themselves in one of two extremes:

• Every quality attribute is critical (they all ensure product quality); every parameter is critical (product cannot be made without controlling them)
• No parameter is critical because if they are controlled, all quality attributes will pass specifications.

Reality lies somewhere between these extremes. Logic and common sense dictate that additional criteria are necessary to aid in determining criticality. There is great value in understanding not only whether a parameter/attribute is critical (i.e., has an impact), but also how much impact it has. All companies have limited time and resources; therefore, the focus must be on that which provides the greatest benefit for the effort.

By using risk analysis as a means to determine criticality, an opportunity arises to help resolve these potential conflicts. CQAs should be classified based on the potential risks to the patient. CPPs should be separated into those that have substantial impact on the CQAs and those with minor or no impact. The binary yes/no decision transforms into a continuum of criticality ranging from high-impact critical to low-impact critical to not critical. As knowledge increases or as improvements are made to a process throughout the lifecycle, risks may be reduced, and the level of impact for a CPP can be modified and control strategies adjusted accordingly.

The number of levels in the continuum is a matter of choice and the risk analysis method used. Each company must procedurally define what risk tools and risk levels it will use and consistently apply them across similar products. In the following examples, three levels of impact are used for simplicity’s sake. The different levels drive decision-making and action plans throughout the lifecycle.

For CQAs, a continuum of criticality provides a tool to designate particular attributes as the most important to the protection of the patient. For CPPs, a continuum of criticality allows for process control and monitoring strategies to focus where the greatest impact on product quality is achieved.


Quality risk management
Risk is the combination of the probability of occurrence of harm and the severity of that harm (5, 6). The value of risk assessment models lies in the formalized evaluation criteria that come from agreed-upon ranking tables. Even though some may argue that the assessment is not quantitative, framing the evaluation against agreed-upon risk criteria dramatically improves the ability to evaluate the process risk profile objectively. Per ICH Q9, there are two primary principles of quality risk management (3):

• The evaluation of risk to quality should be based on scientific knowledge and ultimately link to the protection of the patient
• The level of effort, formality, and documentation of the quality risk management should be commensurate with the level of the risk.

Formal risk management tools such as failure mode effects analysis (FMEA) or failure mode effects and criticality analysis (FMECA) (7) can be used to provide a structured semi-quantitative summary of risk. For Stage 1, however, often a qualitative risk assessment evaluating low, medium, and high risk is sufficient to distinguish relative differences in risk.
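As one illustration of how an FMECA-style tool yields a semi-quantitative risk ranking, the sketch below computes a risk priority number (RPN) and maps it onto a three-level continuum. The 1-5 ranking scales and the band thresholds are assumptions chosen for illustration, not values prescribed by the guidance documents.

```python
# Hypothetical FMECA-style scoring sketch. The 1-5 scales and the
# band thresholds below are illustrative assumptions, not values
# from ICH Q9 or the FDA PV guidance.

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number: each factor ranked 1 (best) to 5 (worst)."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 5:
            raise ValueError("rankings must be between 1 and 5")
    return severity * occurrence * detectability

def risk_level(score: int) -> str:
    """Map an RPN (1-125) onto an assumed three-level continuum."""
    if score >= 45:
        return "high"
    if score >= 15:
        return "medium"
    return "low"

# Example: a parameter with high severity but low occurrence and
# good detectability still lands in the medium band.
print(risk_level(rpn(5, 2, 2)))  # medium
```

A company adopting such a scheme would define the scales and thresholds procedurally, as the article notes, and apply them consistently across similar products.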

Continuum of CQAs
Prior to the development of a new drug, companies frequently decide and document a therapeutic need in the marketplace for a new pharmaceutical. It is through this effort that the quality and regulatory aspects of the new drug are defined, such as the type of dosage form, the target dose, the in-vivo drug availability, and limits on impurities. Current guidance identifies this documentation as the quality target product profile (QTPP). The QTPP provides the basis of the desired quality characteristics of the drug product, taking into account safety and efficacy (i.e., purity, identity, strength, and quality). The QTPP should not be confused with the drug product specification, which is created later and is generally a list of specific test methods and their acceptance criteria designed to ensure drug efficacy and safety. The QTPP is an input to these activities, whereas the quality attributes and specifications are outputs.

The initial list of quality attributes from the QTPP should be created as early as possible in the development process so that data can be collected from experimental runs. To assign the continuum of criticality to that initial list of quality attributes, knowledge of the severity of the risk of harm to the patient is paramount. This comes from prior knowledge such as early safety trials and scientific principles.

Quality attributes with a high severity of risk of harm are rated at the highest criticality level. Severity is the primary criterion for assessing quality-attribute criticality because it is unlikely to change as understanding increases over the lifecycle. For example, an impurity may be determined to severely harm the patient (high severity score) if beyond its limit. If its level does not increase in the process or on stability testing, the occurrence score is low and its overall risk to the patient may be low. However, it is still rated as high risk due to its high severity. That severity will not change, and as a high-risk CQA, it must be tested and monitored.
Examples of risk levels for CQAs:

• High: assay, immunoreactivity, sterility, impurities, closure integrity
• Medium: appearance, friability, particulates
• Low: container scratches, non-functional visual defects.

For a quality attribute to be designated as “not critical,” it has to have no risk to the patient (e.g., yield, process duration). Attributes that are not critical to quality are sometimes named process performance attributes to distinguish from quality attributes.

Not all CQAs are tested as part of finished-product testing. Some are tested in-process to define limits such as pH and conductivity. Although frequently designated as “in-process controls,” they are still quality attributes that should be assessed for their criticality. Consideration should be given to the relationship between in-process controls and finished-product CQAs when making this decision. While this is one example of how to assign a continuum of criticality to quality attributes, other examples are also available (8-10).

Dec 01, 2013
Volume 26, Issue 12

Cause-and-effect matrix
Once risk levels have been assigned to the CQAs, the next activity is to begin to relate which parts of the process have impact on these attributes. This cause-and-effect analysis breaks the process into its unit operations and conveys its impact on the CQAs. An example of the cause-and-effect matrix for a biologic is given in Table I.

Table I: Example of a cause-and-effect matrix for a biologic (H = high, M = medium, L = low). Unit operations (columns): preculture & expansion; fermentation and harvest; centrifugation; cation exchange chromatography; anion exchange chromatography; viral filtration; concentration & diafiltration; vial filling. Critical quality attributes (rows, with risk level): appearance (M); impurities (H); protein content (H); immunoreactivity (H); purity (H); pH and ionic strength (M); amino acid content/ratio (H); bioburden (H). In-process controls: fill weight check (M); visual inspection (M, L).
In addition to the matrix, it is important to document the justification for these decisions as part of this analysis. For example, the parameters of the cation- and anion-exchange chromatography processes are expected to have a high impact on impurities because they are designed to remove impurities of different ionic charge than the desired product. With the knowledge of which unit operations have impact on particular CQAs, it is now possible to analyze each process’ inputs and outputs to determine how process parameters affect the CQAs.
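A cause-and-effect matrix like Table I can also be captured as a simple data structure so that the unit operations with a given impact on a CQA can be pulled out programmatically. This is a minimal sketch; the impact ratings below are invented placeholders, not the article's actual ratings.

```python
# Minimal cause-and-effect matrix as a nested dict:
# unit operation -> {CQA: impact rating}. Ratings are placeholders.

matrix = {
    "Cation exchange chromatography": {"Impurities": "H", "Purity": "H"},
    "Viral filtration": {"Impurities": "M", "Bioburden": "H"},
    "Vial filling": {"Appearance": "M", "Fill weight check": "H"},
}

def operations_impacting(cqa: str, level: str = "H") -> list[str]:
    """Unit operations rated at the given impact level for a CQA."""
    return [op for op, impacts in matrix.items()
            if impacts.get(cqa) == level]

print(operations_impacting("Impurities"))
# ['Cation exchange chromatography']
```

Keeping the justification text alongside each rating (for example, as a second dict value) would satisfy the documentation point made above.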

Input and output process variables
Each unit operation has both input and output process variables. Process parameters are process inputs that are directly controllable and can theoretically impact CQAs. Process outputs that are not directly controllable are attributes. When an attribute ensures product quality, it is a CQA. The output of one unit operation can also be the input of the next. These parameter and attribute designations and their justification should be documented in either a formal process description document or a process flow diagram/drawing. This documentation should also include the scale of each unit operation, equipment and materials required, sampling/monitoring points, test methods, and relevant processing and storage times/conditions.

The intent of assessing process parameters is to determine how they affect the process variation of CQAs along with their control strategy. Each company should clearly document their methodology for defining and assessing parameters. The following may be considered in making that assessment:

• Raw material attributes are outputs of the release of materials. Critical material attributes (CMA) should be considered along with CPPs as impacting process variability.
• Fixed parameters such as equipment scale, equipment setup, pre-programmed recipes should be documented but are assessed as either low or non-critical.
• Parameters for sterilization processes and cleaning process and the preparation of process intermediates can be included in the primary process assessment. Alternatively, they can be treated as independent processes with their own process parameters, quality attributes, criticality assessments, and process validation.
• Calibration and standardization setting for equipment and instruments are usually not included as process parameters.
• Formulation recipes can be considered fixed parameters (low or not critical); these parameters generally have relatively tight limits, which are justified during formulation development. Such a parameter, which does not vary, cannot impact process variability. An exception to this rule is the case where operators must calculate a quantity based on a variable input such as biological activity; this variable process parameter may lead to process variation.
• Holding/storage times and conditions where no processing occurs should be qualified to show little to no impact on the product. These should be documented and, if these factors are included as process parameters, they are considered low or non-critical.
• Environmental conditions during processing (room temperature, humidity) should, like holding times, have set limits so that they have little to no impact on the process. Process-specific environmental conditions such as cleanrooms, cold rooms, and dry rooms are included as process parameters because they are monitored to ensure product quality.

When a process parameter is determined to be non-critical either by process knowledge or by process study, companies may choose to further designate the parameter as a key performance parameter if that parameter impacts a process performance attribute.

From knowledge to risks
Once each unit operation is related to CQAs through a cause-and-effect matrix and the process parameters and attributes are documented, an initial risk assessment is performed to determine the potential impact of each process parameter. Prior to process characterization experiments, this risk assessment may be relatively high level, relying primarily on prior knowledge and scientific principles; however, a more formal FMECA may also be considered.
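Where a formal FMECA is used, the scoring step can be sketched as follows. This is a minimal illustration assuming a 1-5 scale for severity, occurrence, and detection and illustrative risk-band cutoffs; none of these values come from the guidance documents cited here, and each organization would define its own scales.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: each factor scored 1 (best) to 5 (worst)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    return severity * occurrence * detection

def risk_level(score: int) -> str:
    # Illustrative banding only; cutoffs would follow the site's procedure.
    if score >= 45:
        return "high"
    if score >= 15:
        return "medium"
    return "low"

# Hypothetical (severity, occurrence, detection) scores per parameter
parameters = {
    "temperature":    (5, 3, 4),   # impacts cell growth and viability
    "osmolality":     (3, 3, 3),   # can affect impurities and viability
    "agitation rate": (2, 2, 2),   # set by previous process experience
}

for name, scores in parameters.items():
    score = rpn(*scores)
    print(f"{name}: RPN={score} -> {risk_level(score)}")
```

Sorting parameters by RPN gives the same kind of preliminary ranking shown in Table II.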

Table II is an example of an initial risk assessment for a single unit operation. Included in the justification is the expected relationship with CQAs and how the parameter may be influenced during scale-up. Fixed parameters are set to non-critical as they do not impact process variability. For the initial process characterization experiments, process parameters with medium to high impact will be included.

Table II: Example of initial risk assessment of process parameters

Process parameter Initial risk assessment
Inoculum in-vitro cell age Low
  • Separate end-of-production studies have justified the limit on cell age.
Osmolality Medium
  • Can affect impurities and cell viability.
  • Keep constant in scale-up.
Antifoam concentration No
  • Knowledge from previous studies has defined an acceptable range with no impact on quality.
  • Keep constant in scale-up.
Nutrient concentration Medium
  • Must be sufficient to maintain cell viability.
  • Keep constant in scale-up.
Medium storage time and temperature No
  • Knowledge of medium storage from previous studies.
  • No effect when kept within pre-established limits.
Medium expiration (age) No
  • No effect when kept within pre-established limits.
Volume of feed addition Medium
  • Related to component concentration.
  • Scale by fermentor volume.
Component concentration in feed Medium
  • Yield impact and impacts cell viability.
  • Related to volume of feed addition.
  • Keep constant in scale-up.
Amount of glucose Low
  • Glucose is fed as needed to maintain cell viability, leading to different cell concentrations.
  • Scale by fermentor volume.
Dissolved oxygen High
  • Must be sufficient to maintain cell viability.
  • Impacts yield by low cell growth.
  • Controlled by rate of aeration.
  • Scale to large scale by pre-defined model.
Temperature High
  • Impacts cell growth and viability.
  • Keep constant on scale up.
pH High
  • Impacts cell growth and viability.
  • Keep constant on scale up.
Agitation rate Low
  • Speed set by previous process experience.
  • Scale to large scale by pre-defined models.
Culture duration (days) High
  • Related to nutrient concentration for cell viability.

In Part I of this series, the author looked at criticality as a continuum to apply risk analysis during process design, and to relate process unit operations to quality attributes using a cause-and-effect matrix.

In Part II, the continuum of criticality for parameter and attributes will be used to design process characterization studies using DOE. From the initial risk assessment of critical parameters, experimental data from formal studies will confirm the criticality assignment—critical or not—and help to assess the level of impact to CQAs.

1. FDA, Guidance for Industry, Process Validation: General Principles and Practices, Revision 1 (Rockville, MD, January 2011).
2. ICH, Q8(R2) Harmonized Tripartite Guideline, Pharmaceutical Development, Step 4 version (August 2009).
3. ICH, Q9 Harmonized Tripartite Guideline, Quality Risk Management (June 2006).
4. ICH, Q10 Harmonized Tripartite Guideline, Pharmaceutical Quality System (April 2009).
5. ICH, ICH Quality Implementation Working Group Points to Consider (R2), ICH-Endorsed Guide for ICH Q8/Q9/Q10 Implementation (6 December 2011).
6. ISO/IEC Guide 51: Safety Aspects-Guidelines for their inclusion in standards, 2nd ed. (1999).
7. IEC 60812, Analysis Techniques for System Reliability-Procedure for Failure Mode and Effects Analysis (FMEA), Edition 2.0 (January 2006).
8. ISPE, Product Quality Lifecycle Initiative (PQLI) Good Practice Guide, Overview of Product Design, Development, and Realization: A Science- and Risk-Based Approach to Implementation (October 2010).
9. ISPE, Product Quality Lifecycle Initiative (PQLI) Good Practice Guide, Part 1-Product Realization using QbD, Concepts and Principles (2011).
10. Parenteral Drug Association, Technical Report 60, Process Validation: A Lifecycle Approach (2013).

Part 2

The most recent FDA (1) and International Conference on Harmonization (ICH) (2-4) guidance documents advocate a new paradigm of process validation based on process understanding and control of parameters and less on product testing. Consequently, the means of determining criticality has come under greater scrutiny. The FDA guidance points to a lifecycle approach to process validation (see Figure 1).

Figure 1: Process validation lifecycle.

In Part I of this series, the author introduced the concept of continuum of criticality and applied it to the concepts of critical quality attributes (CQAs) and critical process parameters (CPPs). In the initial phase, the CQAs had their criticality risk level assigned according to the severity of risk to the patient. Applying a cause-and-effect matrix approach, the potential impact of each unit operation on the final product CQAs was assessed and each unit operation was thoroughly analyzed for its directly controllable inputs and outputs. Finally, a qualitative risk analysis or a formal failure mode effects and criticality analysis (FMECA) was conducted for each of the identified process parameters. The purpose of this assessment is to provide a focus for the downstream process characterization work required to complete process validation Stage 1 (process design).

This initial risk assessment is performed prior to the baseline characterization work and can be used as the primary means of determining the criticality of process parameters under the following conditions:

• When the process is a platform process, with properties and processing similar to those of another commercial product (e.g., a new strength or a new dosage form)
• When there is a significant body of published data on the process
• When experimental studies and commercial data are available, such as when the process validation lifecycle is applied to a legacy product to substantiate the initial assessment.

In these cases, this initial assessment can be further bolstered through the addition of an uncertainty component to the traditional risk score. For example, a high-risk critical parameter with low uncertainty (due to substantial supporting data) may not require further study, but a medium-risk parameter with high uncertainty may require further experimentation to quantify the risk to product performance.

The challenge facing most organizations is how to effectively evaluate the impact of potentially hundreds of process parameters on product performance to determine what is truly critical. Few companies have the time or resources to design experimental studies around all potentially critical process parameters. The initial risk assessment provides a screening tool to sort out the parameters that have low or no risk.

Design space and design of experiments
The goal is to increase process knowledge by providing a mechanistic understanding of the relationship between process parameters, raw material attributes, and CQAs. This means both demonstrating impact and quantifying the contribution of each parameter to the product's performance. Through this exercise, it is possible to identify the process design space. The ICH guidance defines three elements (knowledge space, design space, and control space) to establish process understanding (see Figure 2) (2).

Figure 2: Knowledge, design, and control space.

ICH Q8 defines design space as, “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality.”

The design space is part of an enhanced process development approach referred to as quality by design (QbD). Prior to QbD, pharmaceutical development did not require the establishment of functional relationships between CPPs and CQAs. Consequently, process characterization experiments were primarily univariate (one factor at a time [OFAT]), showing that, for a given range of a process parameter (referred to as proven acceptable range or PAR), the CQAs meet acceptance criteria. While univariate experiments can provide some limited knowledge, a compilation of OFAT studies cannot typically define a design space because it cannot substantiate the importance or contribution of the parameter to the product CQA being evaluated. To do this, multivariate studies must be performed to account for the complexities of interactions when several CPPs vary across their control ranges.

Design spaces can be developed for each unit operation or across several or all unit operations. Although it may be simpler to develop for each unit operation, downstream unit operations may need to be included to sample and test the appropriate CQAs. For example, to perform a multivariate study on a fermentation unit operation, additional processing through cell lysis and purification unit operations is needed so that CQAs may be sampled and tested. The challenge faced by most development programs is how to efficiently and cost-effectively derive maximum process understanding in the fewest number of studies. To do this, a staged approach using multiple studies is most efficient.

A staged design of experiment approach
The following is an example of a simple staged design of experiment (DOE) approach. More complex DOE designs and strategies may be required, but these designs are typical:

Screening (fractional factorial, Plackett-Burman). To identify or screen out process parameters that have no significant impact on a CQA. Screening designs can test main (individual impact and contribution) effects of each parameter being evaluated.
Refining (full factorial). Having dropped out parameters, which do not impact the product CQAs, the refining step tests both main effects and interactions between the remaining parameters and generates first-order (linear) relationships between process parameters and CQAs. The criticality level of a CPP is determined from the quantitative impact on the CQA shown in the modeled relationship.
Optimization (central composite, Box-Behnken). To generate response surfaces and illustrate second-order (quadratic) relationships between process parameters and CQA. This analysis allows optimal set points for the design space or control space to be identified to target desired CQA (or performance attributes) values.
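The refining step above can be sketched with a small design generator: a 2-level full factorial over every factor range, plus replicated center points. This is a hedged, standard-library-only sketch; the factor names and ranges are hypothetical, not taken from the article's case study.

```python
# Minimal sketch of a 2-level full-factorial design with center points.
from itertools import product

def full_factorial(factors: dict, n_center: int = 3) -> list:
    """factors maps name -> (low, high); returns a list of run dicts."""
    names = list(factors)
    runs = [dict(zip(names, levels))
            for levels in product(*[factors[n] for n in names])]
    # Center points: midpoint of every range, replicated to estimate
    # inherent variability and detect curvature.
    center = {n: (lo + hi) / 2 for n, (lo, hi) in factors.items()}
    runs.extend([dict(center) for _ in range(n_center)])
    return runs

# Hypothetical factors and ranges for a fermentation step
design = full_factorial({"pH": (6.8, 7.2), "temp_C": (35.0, 37.0),
                         "DO_pct": (30.0, 50.0)})
print(len(design))  # 2^3 factorial runs + 3 center points = 11
```

A screening design would use only a fraction of these factorial runs; the center points serve the replication purpose discussed later in this article.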

The DOE design assists in determining which parameters are studied and what set point value is used for each experimental run. The initial risk assessment, using prior knowledge and scientific principles, provides an expected relationship as to which CQAs and their related in-process controls will be affected by the given process parameters. Although the focus is on quality impact, process performance attributes (no quality impact) should also be sampled and measured as appropriate. This step is especially important during the optimization stage because a trade-off may be required in terms of optimizing quality and performance attributes.


For process validation Stage 1 process characterization studies, analytical methods for measuring CQAs may not yet be fully validated, but they must still be scientifically sound.
The level of accuracy and precision of the analytical method or measurement system must be well understood because it directly impacts the quantitative decisions made when interpreting study results early in the process-design stage. Techniques of measurement system analysis, such as Gage repeatability and reproducibility (Gage R&R) studies, are recommended because they quantify the measurement system's contribution to the variation of any measurement made. Typically, the percent contribution from R&R variability must be < 20%, and the method must demonstrate at least five distinct categories for its results to be meaningful. The distinct categories are the number of discernible groups of measurements that can reliably span the range of the CQA.
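The two acceptance figures mentioned above can be computed directly from the Gage study's variance components. This is a hedged sketch using the common AIAG-style formulas; the sigma values would normally come from an ANOVA of the Gage study and are illustrative here.

```python
# Sketch of two standard Gage R&R summary statistics from variance components.
import math

def grr_percent_contribution(sigma_gage: float, sigma_part: float) -> float:
    """% of total variance contributed by the measurement system."""
    total_var = sigma_gage**2 + sigma_part**2
    return 100.0 * sigma_gage**2 / total_var

def distinct_categories(sigma_gage: float, sigma_part: float) -> int:
    """Number of distinct categories (AIAG convention: 1.41 * PV/GRR)."""
    return math.floor(1.41 * sigma_part / sigma_gage)

sigma_gage, sigma_part = 0.4, 1.8   # illustrative standard deviations
print(round(grr_percent_contribution(sigma_gage, sigma_part), 1))
print(distinct_categories(sigma_gage, sigma_part))
```

With these illustrative numbers, the measurement system contributes well under 20% of the total variance and resolves more than five distinct categories, so the method would be considered adequate.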

Including replicate runs in addition to the study experimental design provides crucial data for estimating the underlying variability of the study. This is because, during each run, small unmeasured and uncontrolled variations always occur and may influence the result. Two otherwise identically configured runs may produce slightly different responses due to changes in environment, equipment, measurement, sampling, and operators, among others. Even deliberately fixed parameters (those not under study) may not be exactly identical from run to run. Together, these are called “noise” factors and are important in discerning true responses (i.e., “signal”) caused by the changing parameters from the inherent variability. Differences between sets of replicate runs allow for the quantification of this variability. Large changes in the responses between replicates may indicate either an unstable experimental platform (such as poor run-to-run control) or that a low-risk CPP or non-CPP, may have a higher impact on the CQAs than originally assessed.

Where a raw material has a critical material attribute (CMA) of medium-risk to high-risk impact to CQAs, it should be included as a parameter of the study where possible. Multiple lots or lots with extreme variation in CMAs may not always be available during early development or characterization studies. This limitation frequently is one of the primary drivers of establishing the continuous process verification (CPV) program in Stage 3 to monitor the future impact from this raw material variation. For large studies, multiple lots of raw materials may be required. Consideration should be given to either proportional mixing of the raw material lots for each run or use of a statistical technique called blocking, which incorporates change of material lots into the experimental design.

In each design, the choice must not only be made on the number of parameters to be studied, but at how many levels (i.e., set points within the range) and how many times a particular set of conditions is repeated (replication).

The number of levels is related to the mathematical relationship between the parameters and the CQA measured (e.g., two levels for linear or three for quadratic). For screening designs, it is typical to use only two levels (minimum and maximum of the range); for these designs, any known non-linear relationship may have to be mathematically transformed. For refining design, center points (midpoint of ranges for all parameters) are added to estimate variability and to detect potential curvature.

Because of cost or the limited availability of API, it is not possible to perform all experimental studies at commercial scale (such as with fermentors of 5000 to 25,000 L); hence, most biotech process development programs rely heavily on modeling the process at smaller or intermediate scale. Some process parameters may be independent of scale or may have simple models to account for scale changes. Scale itself may be considered a parameter. Establishing similar run conditions at multiple scales is an important consideration when trying to qualify the comparability between full-scale and small-scale experiments. Substantial prior experience with scaling particular unit operations may provide key information such as dimensionless parameter groups and scaling equations.

Areas considered for experimental scale include, but are not limited to:
• Aspect ratios of bioreactors and mixing tanks
• Impeller number, size, and location
• Aeration method and effectiveness of oxygen transfer
• Location of addition ports and effect on mass transport and uniformity
• Temperature control and heat-transfer surface area
• Location of instrument sensors and control-loop tuning parameters.
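As one concrete example of such a scaling equation, a common rule for geometrically similar stirred vessels keeps power per unit volume constant, which in the turbulent regime implies N^3 * D^2 = constant (N = impeller speed, D = impeller diameter). The sketch below uses hypothetical dimensions; real scale-up would weigh this rule against others (constant tip speed, constant kLa, etc.).

```python
def speed_constant_power_per_volume(n1_rpm, d1_m, d2_m):
    """Large-scale impeller speed that keeps P/V constant (turbulent regime):
    N2 = N1 * (D1/D2)^(2/3), from N^3 * D^2 = constant."""
    return n1_rpm * (d1_m / d2_m) ** (2.0 / 3.0)

# Hypothetical: 200 rpm with a 0.1 m impeller at bench scale, scaled to a
# geometrically similar vessel with a 0.5 m impeller
n_large = speed_constant_power_per_volume(200.0, 0.1, 0.5)
print(round(n_large, 1))  # large-scale speed in rpm
```

The large-scale agitator turns much more slowly; which scaling rule applies is exactly the kind of prior knowledge the text says should inform the DOE design.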

Screen, refine, and optimize
The advantage of a screening design is that it can handle a fairly large number of parameters in the fewest number of runs. The disadvantage is that the interaction effect of each CPP on a CQA cannot be directly determined because the experimental parameters are confounded. Confounding means the experiment lacks sufficient resolution to separate interaction effects from the main effects of each parameter studied. However, at the screening stage, the objective is to eliminate as many parameters from the potential list of CPPs as possible so that the true process design space can be determined in the refining studies.

At this stage, the criticality of parameters has not yet been verified and parameter control ranges (proven acceptable range) have not yet been determined. Although it is usually the goal to meet the CQAs’ acceptance criteria to ensure product quality, the purpose here is to show how the process responds to the parameters even if the CQAs may not meet their criteria.

Figure 3: Screening design of experiment (DOE) Pareto of parameters for critical quality attribute (CQA) (aggregates). Temp is temperature; Osm is osmolality; Med conc is medium concentration; Inoc conc is inoculum concentration; DO is dissolved oxygen.

Figure 3 is an example output chart from a screening study. This Pareto chart shows the standardized effect, or relative impact, of each of eight process parameters on a CQA. A reference line at 2.45 is the threshold below which a parameter's effect is not statistically significant for this study (p-value > alpha of 0.05). In this example, six of the parameters may be screened out of further studies, provided they do not produce significant effects for other CQAs. A similar approach can be used for process performance attributes (non-CQAs) to evaluate parameters that impact process performance but not quality; these non-critical parameters are frequently called key parameters. If these investigations had been conducted as OFAT studies, it would be impossible to quantitatively determine which parameter had an impact on the product's CQA and to what extent. Through the use of a DOE, it is possible to measure both and to define the level of variation explained by the parameters evaluated, based upon the data observed.
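The mechanics behind such a Pareto chart can be sketched with made-up data. In a coded two-level design, each main effect is the mean response at the high setting minus the mean at the low setting; the reference line is the two-sided t critical value for the study's error degrees of freedom (2.45 corresponds to roughly t(0.975, df = 6), an assumption about this particular study). The factors and responses below are entirely hypothetical.

```python
from itertools import product
from statistics import mean

# Coded 2^3 full factorial (-1/+1) for three hypothetical factors
runs = list(product([-1, 1], repeat=3))
# Made-up responses: factor 0 has a strong effect, the others little
y = [10 + 4 * a + 0.3 * b - 0.2 * c for a, b, c in runs]

def main_effect(j):
    """Mean response at the high setting minus mean at the low setting."""
    hi = mean(yi for r, yi in zip(runs, y) if r[j] == 1)
    lo = mean(yi for r, yi in zip(runs, y) if r[j] == -1)
    return hi - lo

effects = [main_effect(j) for j in range(3)]
print([round(e, 2) for e in effects])  # → [8.0, 0.6, -0.4]
```

Dividing each effect by its standard error (estimated from replicates) gives the standardized effects plotted in the Pareto chart.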

Once the screening DOE has been completed, parameters that have not shown strong responses to any of the CQAs are now kept constant or well controlled to reduce the number of parameters for refining studies. By employing a full factorial design, all main effects and interactions are separated with regard to the CQA responses; there is no confounding in a full-factorial design. Center-point conditions (runs at the midpoints of all parameter ranges) are recommended because they can be used to detect if significant curvature (non-linear relationship) exists in the response to the parameter and they provide replication to determine the inherent variability in the study.

Figure 4: Refining design of experiment (DOE) Pareto of parameters and interactions for critical quality attribute (CQA) (impurities).

Figure 4 is an example output chart from a four-parameter, full-factorial study. The Pareto chart shows a threshold line. Two parameters and one two-factor interaction are statistically significant for this CQA. All parameters and interactions below this threshold are not statistically significant and their effects have no more impact than the inherent run-to-run variation.

Figure 5: Refining design of experiment (DOE) Pareto of parameters, reduced model, for critical quality attribute (CQA) (impurities). DO is dissolved oxygen.

Because these parameters and interactions are not significant, they may be treated as random noise, and the model for this attribute is reduced as shown in Figure 5. A mathematical model was generated using the significant factors (pH and temperature) and the significant interaction (pH and dissolved oxygen [DO]) from Figure 4; the DO main effect is retained in the model because it appears in the significant interaction:

Impurities = Constant + α(pH) + β(temperature) + γ(DO) + δ(pH) (DO)
where: Constant is the intercept generated by the DOE analysis
α, β, γ, and δ are the coefficients generated by the DOE analysis for each parameter or interaction.

Positively signed coefficients indicate the CQA increases with an increase of the parameter; negatively signed coefficients indicate the CQA decreases with an increase of the parameter. The model equation is a regression, or best fit, from the data for the experiment, and therefore, is valid for the specific scale conditions of the experiment including the ranges of the parameters tested. Models are tested for their “goodness of fit” or how well the model represents the data. The simplest of these tests is the coefficient of determination, or R-squared. Low R-squared values (such as below 50%) indicate models with low predictive capability, that is, the parameters evaluated across the defined range do not explain the variation seen in the data.
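As a sketch of how such a model and its R-squared might be computed, the following fits the reduced impurity model (intercept, pH, temperature, DO, and the pH x DO interaction) by ordinary least squares. All coefficients, ranges, and data are synthetic; this is not the article's actual study data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
pH = rng.uniform(6.8, 7.2, n)
temp = rng.uniform(35.0, 37.0, n)
do = rng.uniform(30.0, 50.0, n)
# Synthetic "true" relationship plus noise (all coefficients made up)
y = 2.0 + 1.5 * pH - 0.1 * temp + 0.02 * do + 0.05 * pH * do \
    + rng.normal(0.0, 0.05, n)

# Design matrix for: Impurities = c + a*pH + b*Temp + g*DO + d*(pH*DO)
X = np.column_stack([np.ones(n), pH, temp, do, pH * do])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r_squared = 1.0 - resid.var() / y.var()
print(round(float(r_squared), 3))  # near 1 for this low-noise synthetic data
```

A real study's R-squared would be lower; the point is that the coefficients and fit quality both fall out of the same least-squares regression.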

This model only represents what would be expected on average for this CQA from the unit operation(s) tested in the study. Even so, the model is a fit to the most likely mean. Recognizing that any model has uncertainty, the model can also be represented with a confidence interval (e.g., 95%) around that mean. Individual runs will also show day-to-day variation around that mean. A single-run value for the attribute cannot be predicted, but a range in which that value will likely fall can be predicted. This range for the single-run value is called the prediction interval (e.g., 95%) for the model. Empirical models such as these are only as good as the data and conditions from which they are generated and are mere approximations of the real world.
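The difference between the two intervals can be made concrete with the standard OLS formulas: the confidence interval half-width scales with the leverage term h, while the prediction interval half-width scales with 1 + h, so the prediction interval is always wider. Simple linear regression on made-up data; the t critical value for 4 degrees of freedom is hardcoded from standard tables.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

X = np.column_stack([np.ones_like(x), x])          # intercept + slope
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = len(x) - 2
s2 = float(resid @ resid) / df                     # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 3.5])                          # predict at x = 3.5
h = float(x0 @ XtX_inv @ x0)                       # leverage term
t_crit = 2.776                                     # t(0.975, df=4), from tables
y_hat = float(x0 @ beta)
ci_half = t_crit * (s2 * h) ** 0.5                 # half-width for the mean
pi_half = t_crit * (s2 * (1.0 + h)) ** 0.5         # half-width for one new run
print(round(y_hat, 2), ci_half < pi_half)          # prediction interval wider
```

The same formulas generalize to the multifactor models above, with x0 being the vector of parameter settings for which a prediction is wanted.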

Despite the limitation, these empirical models relate not only what parameters have a statistical impact on a CQA, but also the relative amount of that impact. The range through which the parameter is tested in the study has an important relationship to the model generated. For example, perhaps the parameter temperature was initially assigned as high risk. If temperature is only tested through a tight range, the parameter may have little to no impact to CQAs in the study; its effect may be no greater than the inherent variability. If temperature is not statistically significant for the range studied (i.e., its PAR), it is designated as a non-CPP, but only for that PAR. If the temperature should ever move outside the studied PAR, there is a potential risk that it could have a quality impact and become critical.

Some organizations' quality groups rely on the original risk assessment of the process parameter. If the parameter's severity was initially rated high, the parameter can remain designated as critical but should be treated as a low-risk CPP as long as it stays within its PAR. Parameters outside the PAR would be considered outside the allowable limits for that process step because the parameter has not been studied outside that range.

If curvature is detected during earlier DOE stages, or if the optimization of any CQA or process performance attribute is needed, then response-surface experimental designs are used. These designs allow for more complex model equations for a CQA (or performance attributes). Two of the simpler response-surface designs are the central composite and Box-Behnken. Both designs can supplement existing full-factorial data. The central-composite design also extends the range of parameter beyond the original limits of the factorial design. The Box-Behnken design is used when extending the limits is not feasible. The empirical models are refined from these studies by adding higher-order terms (e.g., quadratic, polynomial). Even if these higher-order terms are not significant, adding more levels within the parameter ranges will improve the linear model.
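The geometry of a central-composite design can be sketched simply: it extends an existing 2-level factorial "cube" with axial ("star") points at distance alpha beyond the faces, plus center points, which together support quadratic terms. Standard library only; two coded factors for brevity, and the default alpha of 1.414 (rotatable for two factors) is an illustrative convention.

```python
from itertools import product

def central_composite(k: int, alpha: float = 1.414, n_center: int = 3):
    """Coded central-composite design for k factors."""
    cube = list(product([-1.0, 1.0], repeat=k))     # existing factorial part
    axial = []
    for j in range(k):                              # star points on each axis
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = sign
            axial.append(tuple(pt))
    center = [tuple([0.0] * k)] * n_center          # replicated center points
    return cube + axial + center

design = central_composite(2)
print(len(design))  # 4 cube + 4 axial + 3 center = 11
```

A Box-Behnken design would instead place points at edge midpoints, keeping every run within the original factor limits.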

Because most empirical models are developed with small-scale experiments, the models must be verified on larger scale and potentially adjusted. Applying the knowledge of scale-dependent and scale-independent parameters while developing earlier DOE designs reduces risk when scaling-up to larger pilot-scale and finally full-scale processes. The models from small-scale studies predict which parameters present the highest impact (risk) to CQAs. Priority should be given in the study design to those high-risk parameters, especially if they are scale-dependent. Since the empirical models only predict the most likely average response for a CQA, several runs at different parameter settings (e.g., minimum, maximum, center point) are required to see if the small-scale model can still apply to the large-scale process.

Significance and criticality
Statistical significance is an important designation in assessing the impact of changes in parameters on CQAs. It provides a mathematical threshold below which effects vanish into the noise of process variability. Parameters that are not significant are screened out from further study and excluded from empirical models.

A CQA may be affected by critical parameters in several different unit operations (see the cause-and-effect matrix in Part I of this article [5]). Characterization study plans may not be able to integrate different unit operations into the same DOE study. Consequently, several model equations may exist for a single CQA, each composed of parameters from a different unit operation. The relative effect of each parameter on the CQA can be calculated from these models using the span of the PAR for each parameter, and the relative impact of each parameter on the CQA is judged against the range of its acceptance criteria. Sorting the parameters from highest to lowest impact, the criticality of each parameter can be assigned from high to low. Table I is an example of one method for assigning the continuum of criticality.
The steps in determining the continuum of criticality for process parameters are summarized as follows:
• Show statistically significance by DOE
• Relate significant parameters to CQAs with empirical model
• Calculate the impact of all parameters from the model(s) for each CQA
• Compare the parameter’s impact on the CQA to the CQA’s measurement capability
• Assign parameter risk level based on impact to CQA
• Update initial risk assessment for parameters.

Table I: Example of criticality risk assignment for process parameters. CPP is critical process parameter; CQA is critical quality attribute; PAR is proven acceptable range; DOE is design of experiment.

CPP risk level (% change in CQA as CPP spans PAR)
High risk
  • > 25%
Medium risk
  • 10% to 25%
Low risk
  • < 5%
  • Below measurement capability
  • Low risk in risk assessment (not in DOE)
Non-CPP
  • Not significant in DOE
  • No risk in risk assessment (not in DOE)
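The assignment step can be automated by converting a model coefficient into the percent change in the CQA as the CPP spans its PAR, relative to the width of the CQA's acceptance range, and then applying bands like those in Table I. The numbers below are hypothetical, and the gap between 5% and 10% in Table I is treated as low risk in this sketch.

```python
def pct_change_in_cqa(coefficient, par_low, par_high, cqa_range):
    """Change in the CQA across the PAR, as % of the CQA acceptance range."""
    return abs(coefficient * (par_high - par_low)) / cqa_range * 100.0

def cpp_risk_level(pct):
    # Bands adapted from Table I; below 10% is treated as low risk here.
    if pct > 25.0:
        return "high"
    if pct >= 10.0:
        return "medium"
    return "low"

# e.g., a temperature coefficient of 0.6 (% impurity per degC) from the
# model, a PAR of 35-37 degC, and an impurity acceptance range 4% wide
pct = pct_change_in_cqa(0.6, 35.0, 37.0, 4.0)
print(round(pct, 1), cpp_risk_level(pct))
```

Running the same calculation for every parameter in every model yields the sorted high-to-low criticality ranking described above.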

As process validation Stage 2 (process qualification) begins, criticality is applied to develop acceptance criteria for equipment qualification and process performance qualification. Finally, in process validation Stage 3 (continued process verification), criticality determines what parameters and attributes are monitored and trended.

In the third and final part of this article, the author applies the continuum of criticality for parameters and attributes to develop the process control strategy and study its influence on the process qualification and continued process verification stages of process validation.

1. FDA, Guidance for Industry, Process Validation: General Principles and Practices, Revision 1 (Rockville, MD, January 2011).
2. ICH, Q8(R2) Harmonized Tripartite Guideline, Pharmaceutical Development, Step 4 version (August 2009).
3. ICH, Q9 Harmonized Tripartite Guideline, Quality Risk Management (June 2006).
4. ICH, Q10 Harmonized Tripartite Guideline, Pharmaceutical Quality System (April 2009).
5. M. Mitchell, BioPharm International 26 (12) 38-47 (2013).

Mark Mitchell is principal engineer at Pharmatech Associates.


With the most recent FDA (1) and International Conference on Harmonization (ICH) guidances (2-4) advocating a new paradigm of process validation based on process understanding and control of parameters and less on product testing, the means of determining criticality has come under greater scrutiny. The FDA guidance points to a lifecycle approach to process validation (see Figure 1).

Figure 1: Process validation lifecycle.

In Part I, the author used risk analysis and applied the continuum of criticality to quality attributes during the process design stage of process validation. After using process knowledge to relate the attributes to each process unit operation, the inputs and outputs of each unit operation were defined to determine process parameters and in-process controls. An initial risk assessment was then completed to determine a preliminary continuum of criticality for process parameters.

In Part II, the preliminary risk levels of process parameters provided the basis of characterization studies based on design of experiments. Data from these studies were used to confirm the continuum of criticality for process parameters.

At this point in the process development stage, the design space has been determined. It may not be rectangular (if there are higher-order terms in the models) and may not include the entire proven acceptable range (PAR) for each critical process parameter (CPP). In fact, the design space is not defined by the combination of the PARs for each CPP, given that the full PAR for one CPP ensures the quality of the critical quality attribute (CQA) only when all other CPPs do not vary. The design space represents all combinations of CPP set points for which the CQA meets acceptance criteria.

Overall, the design space developed from process characterization study models represents a level of process understanding. Like all models, however, the design space is only as good as the data that drives the analysis. The CQAs, on average, may meet acceptance criteria, but individual lots–and samples within lots–are at risk of failure when operating at the limits of the design space. For this reason, the operational limits for the CPPs are frequently tighter than the design space. This tighter space is the last part of the ICH Q8 paradigm (2) (see Figure 2) and is called the control space, which equates to normal operating range (NOR) limits for each CPP.

Figure 2: Knowledge, design, and control space.

Stage 1: From models to design space to control space
At the conclusion of the process characterization studies, the design space describes each CQA as a function of process parameters at various levels of risk, or continuum of criticality. Additionally, these models have been confirmed, by experiment or prior knowledge, to adequately represent the full-scale manufacturing process. This classical multivariate approach combines the impact of each CPP to predict the response of the CQA as each CPP moves through its PAR. These mathematical expressions can be represented graphically as either contour or 3-D response surface plots.

Even this view of the design space is too simplistic. Ensuring that a process has a statistically high probability (e.g., >95%) of a CQA reliably meeting its acceptance criteria for a given combination of CPP settings requires a more involved computational analysis. This analysis may lead to revising CPP set points and ranges.

Several computational statistical methods are available for analysis of process reliability. Each of these requires specialized statistical software.
These methods include:
• Monte Carlo simulation inputs the CPPs as probability distributions to the design-space models and iterates to produce the CQA as a probability distribution. Capability analysis can then be applied against the CQA’s acceptance criteria. This method is limited by the estimates of the CPP distributions from process characterization studies, which will not necessarily represent the same level of inherent variation as the commercial process. Using sensitivity analysis on these estimated distributions may enhance this approach.
• Predictive Bayesian reliability (5) incorporates the CPPs, uncontrolled variables such as raw material and environmental conditions, inherent common-cause variability, and variation due to unknown model parameters to determine a design space with a high reliability of meeting the CQAs.
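
The Monte Carlo approach can be sketched in a few lines of Python. Note that the design-space model, its coefficients, the CPP set points and standard deviations, and the acceptance criterion below are all hypothetical stand-ins chosen for illustration, not values from any real characterization study:

```python
import random
import statistics

def cqa_model(temp_c, ph):
    # Hypothetical design-space model: predicted dissolution (%) as a
    # function of temperature and pH, including an interaction term.
    # The coefficients are illustrative, not from any real study.
    return (75.0
            + 1.2 * (temp_c - 30.0)
            - 8.0 * abs(ph - 7.0)
            + 0.5 * (temp_c - 30.0) * (7.0 - ph))

def monte_carlo_reliability(n=100_000, seed=42, acceptance=73.0):
    # Sample each CPP from an assumed distribution around its set point,
    # propagate it through the model, and estimate the probability that
    # the CQA meets its (hypothetical) acceptance criterion.
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        temp = rng.gauss(30.0, 0.5)  # assumed: set point 30.0 degC, SD 0.5
        ph = rng.gauss(7.0, 0.1)     # assumed: set point 7.0, SD 0.1
        results.append(cqa_model(temp, ph))
    reliability = sum(r >= acceptance for r in results) / n
    return reliability, statistics.mean(results), statistics.stdev(results)

if __name__ == "__main__":
    rel, mean_cqa, sd_cqa = monte_carlo_reliability()
    print(f"estimated reliability: {rel:.3f}")
    print(f"mean CQA: {mean_cqa:.2f}%  SD: {sd_cqa:.2f}%")
```

The estimated reliability is simply the fraction of simulated lots whose predicted CQA meets the criterion; in practice the CPP distributions would be fitted from characterization data, and a sensitivity analysis would probe how the result changes as those assumed distributions vary.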

Design space models often become a series of complex, multifactor equations, which are not suitable for describing the required ranges for each CPP in a production batch record. Contained within the design space, the control space consists of a suitable NOR for each parameter.

Table I lists example methods for developing the NORs and the control space, together with their advantages and disadvantages. Option 1 is included only because it represents a historical approach broadly employed to establish NORs. This approach is not consistent with the current quality-by-design (QbD) approach to process validation and will not be sufficient to defend a final NOR. Issues with Option 2 have been discussed previously. Of the first three options, the reliability approach (Option 3) is the most robust but requires sophisticated statistical skills; it may be reserved for only very high-risk CPPs.

Table I: Example methods for determining a normal operating range (NOR) for a critical process parameter (CPP). CQA is critical quality attribute, PAR is proven acceptable range, QbD is quality by design.

| Option | Advantages | Disadvantages | Span of range |
| --- | --- | --- | --- |
| 1. NOR set equal to PAR for single CPP | Simple | No other CPP effects or interactions considered. Not consistent with QbD methodology (i.e., poor assurance of quality). | Widest |
| 2. NOR same as design space (optional: rectilinear space) | Accounts for other CPP effects on CQA | Based on process characterization models. Does not ensure lot-to-lot performance (model is “on average”). | Narrower than #1 |
| 3. NOR based on reliability methods | Accounts for other CPP effects on CQA. Ensures high reliability of meeting CQAs. | Based on process characterization models. Requires sophisticated analysis. May be tighter than available control capability. | Generally narrower than #1 and #2; possibly narrower than all options |
| 4. NOR based on design space with “safety margin” | Accounts for other CPP effects. Partial allowance for lot-to-lot variation. | Based on process characterization models. Only partial allowance for lot-to-lot performance. | Narrower than #2 |
| 5. NOR set by control capability | Good for non-CPPs. Keeps CPP in a tight range of control (lowers risk by lowering occurrence). | Range is narrow and may not allow for future unknown variability. Exceeding the range does not necessarily lead to CQA failure. | May be narrowest of all options, depending on level of control |

Option 4 is based on a “safety margin” that may be determined in a variety of ways. One choice is to measure how much the actual parameter varies around its set point. For example, if a temperature set point is 30.0 °C, it may be observed to vary from 29.5 °C to 30.5 °C (± 0.5 °C). The safety margin of 0.5 °C is applied to narrow the CPP limits from the design space: if the design space is 25.0 °C to 35.0 °C, the NOR becomes 25.5 °C to 34.5 °C. Additional factors, such as calibration error, can be added to provide a wider safety margin.

Option 5 is the narrowest method for determining the NOR. Here, the ability to control the parameter determines its range. For example, a pH set point of 7.0 may have a design space of 6.5 to 7.5; if control of the pH is shown to be ± 0.2, then the NOR is 6.8 to 7.2. The primary disadvantage of such a narrow range is that exceeding the CPP’s NOR triggers a deviation even though the CQA may still be within its acceptance range. Option 5 is suitable for setting the NOR of non-CPPs, since the CQAs are not affected. For example, a mixing set point of 200 rpm is a non-CPP; if the mixer’s control is qualified for ± 20 rpm, then the NOR is 180-220 rpm.
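
The arithmetic behind Options 4 and 5 can be captured in two small helper functions. The function names are illustrative, and the values reproduce the worked examples above:

```python
def nor_safety_margin(design_low, design_high, margin):
    # Option 4 (illustrative helper): narrow the design-space limits by a
    # safety margin reflecting observed parameter variation around set point.
    return (design_low + margin, design_high - margin)

def nor_control_capability(set_point, capability):
    # Option 5 (illustrative helper): NOR centered on the set point, spanning
    # only the demonstrated control capability of the equipment.
    return (set_point - capability, set_point + capability)

# Temperature CPP: design space 25.0-35.0 degC, observed variation +/- 0.5 degC
print(nor_safety_margin(25.0, 35.0, 0.5))   # (25.5, 34.5)
# pH CPP: set point 7.0 with demonstrated control of +/- 0.2
print(nor_control_capability(7.0, 0.2))     # roughly 6.8 to 7.2
# Non-CPP mixing speed: set point 200 rpm, qualified control +/- 20 rpm
print(nor_control_capability(200, 20))      # (180, 220)
```

Keeping these two calculations separate mirrors the logic of Table I: Option 4 starts from the design space and subtracts a margin, while Option 5 ignores the design space entirely and builds outward from what the equipment can actually hold.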

The conclusion to process validation Stage 1 (process design) is documented by summarizing the control strategy per ICH Q10:
Control strategy: A planned set of controls, derived from current product and process understanding, that assures process performance and product quality. The controls can include parameters and attributes related to drug substance and drug product materials and components, facility and equipment operating conditions, in-process controls, finished product specifications, and the associated methods and frequency of monitoring and control (4).

The control strategy may be a single document or a package of several documents as described in the company’s process validation plan. This documentation includes or references the following:
• Continuum of criticality for process parameters
• Continuum of criticality for quality attributes
• The mechanistic or empirical relationships of CPPs to CQAs (design space)
• The set points and NOR for CPPs (control space)
• Acceptance criteria and sampling requirements for CQAs, in-process controls, and raw materials testing
• In-process hold (non-processing) time limits and storage conditions
• Key operating parameters (KOPs) and performance attributes (all non-critical), together with their set points and ranges, which are used to monitor process performance.

WHO publishes final Guideline for Hold-Time Studies

Posted on

After the World Health Organisation (WHO) released the second draft of the guideline on the design of hold-time studies in March, it has now published the final version as part of the Technical Report Series 992 (TRS 992, Annex 4).

The GMP regulations require that raw materials, packaging materials, and intermediate, bulk, and finished products be stored under suitable conditions. This includes defining maximum hold times for intermediate and bulk products prior to their further processing, and these times should be justified on the basis of scientific data. The guideline aims to address aspects that may be important in the design of hold-time studies; active substances and biological products are explicitly excluded. According to the WHO, hold-time studies can be part of development or can be carried out during later scale-up; in any case, they should be confirmed during process validation.

With coated tablets as an example the guideline demonstrates at what points in the manufacturing process samples can be taken and examined for durability. The number of batches to be examined is supposed to be risk-based. The times, according to which the listed intermediates could be sampled and examined are also mentioned exemplary. Whereas the original draft comprised propositions with regard to combined hold times – meaning the impact of hold times of various intermediates among one another – they are not part of the final guideline any more.