Saturday, February 8, 2014

Quality by Design: Concepts for ANDAs

Quality by design is an essential part of the modern approach to pharmaceutical quality. There is much confusion among pharmaceutical scientists in the generic drug industry about the appropriate elements and terminology of quality by design. This paper discusses quality by design for generic drugs and presents a summary of the key terminology. The elements of quality by design are examined, and a consistent nomenclature for quality by design, critical quality attribute, critical process parameter, critical material attribute, and control strategy is proposed. Agreement on these key concepts will allow discussion of the application of these concepts to abbreviated new drug applications to progress.
Key words: control strategy, critical material attributes, critical process parameters, design space, quality by design


The Food and Drug Administration (FDA) (1–3) and the pharmaceutical industry (4–6) are talking about quality by design, and there are many important terms that are used as part of this discussion. However, industry comments indicate that there is still much confusion in the generic industry as to the meaning of quality by design and its associated nomenclature. In this paper, we provide a consistent set of definitions to provide a clearer understanding of quality by design for abbreviated new drug applications (ANDAs).
In order to describe quality by design, we must first define what we mean by quality. In a 2004 paper, Janet Woodcock (Director of the Center for Drug Evaluation and Research) defined pharmaceutical quality as a product that is free of contamination and reproducibly delivers the therapeutic benefit promised in the label to the consumer (1). Traditionally, the relationship of product attributes to product quality has not been well understood, and thus FDA has ensured quality via tight specifications based on observed properties of exhibit or clinical trial batches and by constraining sponsors to use a fixed manufacturing process. In this approach, specifications are valued not because they are related to product quality, but because they are able to detect batch-to-batch differences that may have therapeutic consequences.
FDA’s emphasis on quality by design began with the recognition that increased testing does not improve product quality (this has long been recognized in other industries). The following equation indicates where quality comes from:
Pharmaceutical Quality = f (drug substance, excipients, manufacturing, packaging).
In order for quality to increase, it must be built into the product. To do this requires understanding how formulation and manufacturing process variables influence product quality; this is the function f in the equation above.


We start with the assertion that Quality by Design (QbD) is a systematic approach to pharmaceutical development that begins with predefined objectives and emphasizes product and process understanding and process control, based on sound science and quality risk management (7). It means designing and developing formulations and manufacturing processes to ensure a predefined quality. Thus, QbD requires an understanding of how formulation and process variables influence product quality. Relevant documents from the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH), ICH Q8 (8), Pharmaceutical Development, along with ICH Q9 (9), Quality Risk Management, and ICH Q10 (10), Pharmaceutical Quality Systems, indicate on an abstract level how quality by design acts to ensure drug product quality. Especially for ANDA sponsors, who were not actively involved in the ICH process, there is a need for more concrete descriptions of quality by design.
Over the past several years, pharmaceutical scientists have provided several more specific definitions of the elements of quality by design (2,4), and a draft of an annex to ICH Q8 has been released (7). These discussions have generally focused on the development of new drugs. Drawing on these discussions and some specific aspects of the development of generic products, a QbD development process may include (Fig. 1):
  • Begin with a target product profile that describes the use, safety and efficacy of the product
  • Define a target product quality profile that will be used by formulators and process engineers as a quantitative surrogate for aspects of clinical safety and efficacy during product development
  • Gather relevant prior knowledge about the drug substance, potential excipients and process operations into a knowledge space. Use risk assessment to prioritize knowledge gaps for further investigation
  • Design a formulation and identify the critical material (quality) attributes of the final product that must be controlled to meet the target product quality profile
  • Design a manufacturing process to produce a final product having these critical material attributes.
  • Identify the critical process parameters and input (raw) material attributes that must be controlled to achieve these critical material attributes of the final product. Use risk assessment to prioritize process parameters and material attributes for experimental verification. Combine prior knowledge with experiments to establish a design space or other representation of process understanding.
  • Establish a control strategy for the entire process that may include input material controls, process controls and monitors, design spaces around individual or multiple unit operations, and/or final product tests. The control strategy should encompass expected changes in scale and can be guided by a risk assessment.
  • Continually monitor and update the process to assure consistent quality
Fig. 1
Overview of QbD
Design of experiments (DOE), risk assessment, and process analytical technology (PAT) are tools that may be used in the QbD process when appropriate. They are not check-box requirements.
The difference between QbD for NDA and ANDA products is most apparent at the first step of the process. For an NDA, the target product profile is under development while for the ANDA product the target product profile is well established by the labeling and clinical studies conducted to support the approval of the reference product.


The first aspects of QbD are an articulation of the design goals for the product.

Definition of TPP

FDA published a recent guidance defining a Target Product Profile (TPP) (11): “The TPP provides a statement of the overall intent of the drug development program, and gives information about the drug at a particular time in development. Usually, the TPP is organized according to the key sections in the drug labeling and links drug development activities to specific concepts intended for inclusion in the drug labeling.” When ICH Q8 (8) says that pharmaceutical development should include “...identification of those attributes that are critical to the quality of the drug product, taking into consideration intended usage and route of administration”, the consideration of the intended usage and route of administration would be through the TPP.
The TPP is a patient- and labeling-centered concept; it can be thought of as the “user interface” of the drug product. Thus a generic version and its reference product would be expected to have the same TPP. A generic product may use a different formulation or design to implement the TPP. The characteristics and performance tests of a drug product would depend on the particular implementation and may differ between a generic and reference product. For a new drug, changes to the TPP may require new safety or efficacy data, but changes to product characteristics or performance that result from a reformulation may not.
Many aspects of the TPP constrain or determine the actions of formulation and process development scientists. These can include the route of administration, dosage form and size, maximum and minimum doses, pharmaceutical elegance (appearance), and target patient population (pediatric formulations may require chewable tablets or a suspension). Common aspects of drug product quality are implicitly in the TPP. If the label states a tablet contains 100 mg of active ingredient, this is a claim relating to the assay and content uniformity. It is the role of a pharmaceutical scientist to translate the qualitative TPP into what we define as the target product quality profile (TPQP) for further use in a quality by design process.

Definition of TPQP

The target product quality profile (TPQP) (12) is a quantitative surrogate for aspects of clinical safety and efficacy that can be used to design and optimize a formulation and manufacturing process. The International Society for Pharmaceutical Engineering (ISPE) Product Quality Lifecycle Implementation (PQLI) initiative calls this the Pharmaceutical Target Product Profile (4). It should include quantitative targets for impurities and stability, release profiles (dissolution), and other product-specific performance requirements. Product-specific examples include resuspendability for an oral suspension, adhesion for a transdermal system, and viscosity for a topical cream. Generic products would include bioequivalence to the RLD as part of the TPQP.
The TPQP is not a specification because it includes tests such as bioequivalence or stability that are not carried out in batch to batch release. The TPQP should only include patient relevant product performance. For example, if particle size is critical to the dissolution of a solid oral product, then the TPQP should include dissolution but not particle size. Particle size would be a critical material attribute and thus included in the process description and control strategy. The TPQP should be performance based and not mechanism based.
Examples of a TPQP can be found in the mock quality overall summary (QOS) presented on the Office of Generic Drugs website (13,14), although the term TPQP was not explicitly used in the mock QOS. Another example of a TPQP is presented in the European Mock P2 (15), which was developed to facilitate a scientific and regulatory dialogue between the European Federation of Pharmaceutical Industries and Associations (EFPIA) and regulatory authorities on the presentation of enhanced product and process understanding in regulatory dossiers. The European Mock P2 uses the nomenclature Target Product Profile, but its Table I fits our definition of a TPQP. It states that the TPQP is a definition of product intended use and a pre-definition of quality targets (with respect to clinical relevance, efficacy, and safety) and thus summarizes the quality attributes of the product required to provide safety and efficacy to the patient.

Definition of CQA

The ISPE PQLI (4) defines critical quality attributes (CQAs) as physical, chemical, biological or microbiological properties or characteristics that need to be controlled (directly or indirectly) to ensure product quality. ICH Q8 (R1) defines CQAs as physical, chemical, biological or microbiological properties or characteristics that should be within an appropriate limit, range, or distribution to ensure the desired product quality (7). CQA has been used by some (16) to describe elements of the TPQP (such as dissolution) while others (17) have used CQA to describe mechanistic factors (such as particle size and hardness) that determine product performance. Thus CQA is used to describe both aspects of product performance and determinants of product performance.
It was stated that the ICH working definition of CQA was: “A CQA is a quality attribute (a physical, chemical, biological or microbiological property or characteristic) that must be controlled (directly or indirectly) to ensure the product meets its intended safety, efficacy, stability and performance” (18). This CQA definition implies that the intended safety, efficacy, stability and performance are not CQAs. Safety and efficacy clearly fall under the domain of the TPP. But if stability and performance are not CQA and not part of the TPP, then what are they? We are thus compelled to acknowledge that there is an intermediate category of product performance (or surrogates for quality) that we have defined as the TPQP.
As shown in Fig. 2, it seems more precise to consider the TPP, TPQP, and material attributes as separate categories. The use of CQA can be reserved for cases where there is a need to refer collectively to the targets of a QbD approach. CQA is generally assumed to be an attribute of the final product, but it is also possible to indicate a CQA of an intermediate or a raw material.
Fig. 2
An Illustration of How Under QbD the Identification of Critical Process Parameters and Critical Material Attributes is Linked to the TPQP and Finally to TPP that Represents the Clinical Safety And Efficacy
Although many people have identified dissolution as a critical quality attribute, we consider that a set of critical material attributes (CMAs) that are independent of each other provide specific goals with which to evaluate a manufacturing process. For example, a dissolution test may depend on particle size and hardness. Particle size and hardness are CMAs which can be directly linked to raw materials and manufacturing process parameters. Independent CMAs are the best way to provide a mechanistic link of the product quality to the critical process parameters in the manufacturing process. At the 2005 Drug Information Association meeting, Reed discussed dissolution in detail and indicated the greater value of having very specific CQAs (19). Others (20) have commented negatively that the processing behavior of materials is usually evaluated via performance tests (e.g., flowability) rather than by focusing on fundamental material properties. Differentiating between CMAs (properties) and multi-faceted performance tests is part of the movement away from quality by testing to quality by design.
The evolution of ICH Q8 is also consistent with making a distinction between CMAs and performance tests. The 2004 Q8 draft (21) put CQAs and performance tests into the same pile of physicochemical and biological properties:
The physicochemical and biological properties relevant to the performance or manufacturability of the drug product should be identified and discussed. These could include formulation attributes such as pH, osmolarity, ionic strength, lipophilicity, dissolution, redispersion, reconstitution, particle size distribution, particle shape, aggregation, polymorphism, rheological properties, globule size of emulsions, biological activity or potency, and/or immunological activity. Physiological implications of formulation attributes such as pH should also be addressed.
However, the final version of Q8 (8) made clear that this section was focused on product performance:
The physicochemical and biological properties relevant to the safety, performance, or manufacturability of the drug product should be identified and discussed. This includes the physiological implications of drug substance and formulation attributes. Studies could include, for example, the development of a test for respirable fraction of an inhaled product. Similarly, information supporting the selection of dissolution vs. disintegration testing (or other means to ensure drug release) and the development and suitability of the chosen test could be provided in this section.
Other examples may help to show the benefit of this analysis. Consider alcohol induced dose dumping. The TPP would be the labeling statement (supported by clinical data) that the product does not dose-dump when taken with alcohol. A performance test in the TPQP would be an in vitro dissolution test in alcohol. The CMA would be the thickness of a tablet coat. Defining the CMAs on this mechanistic physical property level makes it the best link to the manufacturing process variables.


What is a Process Parameter?

There is confusion about what is a process parameter. Previously, some have defined a critical process parameter (CPP) as any measurable input (input material attribute or operating parameter) or output (process state variable or output material attribute) of a process step that must be controlled to achieve the desired product quality and process consistency. In this view, every item in Fig. 3 would be a process parameter.
Fig. 3
An Example of Identification of Process Parameters and Material Attributes Prior to Pharmaceutical Development
We propose that process parameter be understood as referring to the input operating parameters (e.g., mixing speed, flow rate) and process state variables (e.g., temperature, pressure) of a process or unit operation. Under this definition, the state of a process depends on its CPPs and the CMAs of the input materials. Monitoring and controlling output material attributes can be a better control strategy than monitoring operating parameters, especially for scale-up. For example, a material attribute, such as moisture content, should have the same target value in the pilot and commercial processes. An operating parameter, such as air flow rate, would be expected to change as the process scale changes.
For a given unit operation, there are four categories of parameters and attributes:
  • input material attributes
  • output material attributes
  • input operating parameters
  • output process state conditions
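The four categories above can be modeled as a simple record for each unit operation. The sketch below is purely illustrative; the class, attribute names, and values are hypothetical, not part of any regulatory framework:

```python
from dataclasses import dataclass, field

@dataclass
class UnitOperation:
    """Parameters and attributes for one unit operation (hypothetical model)."""
    name: str
    input_material_attributes: dict = field(default_factory=dict)   # e.g., moisture of incoming granulate
    output_material_attributes: dict = field(default_factory=dict)  # e.g., moisture of dried granulate
    input_operating_parameters: dict = field(default_factory=dict)  # e.g., air flow rate, inlet temperature
    output_process_state: dict = field(default_factory=dict)        # e.g., product bed temperature

# Example: a fluid bed drying step (all numbers are made up for illustration)
drying = UnitOperation(
    name="fluid bed drying",
    input_material_attributes={"moisture_pct": 12.0},
    input_operating_parameters={"air_flow_m3_per_h": 500, "inlet_temp_C": 60},
    output_process_state={"product_temp_C": 45},
    output_material_attributes={"moisture_pct": 2.0},
)
```

Separating the four categories this way makes explicit that the output material attributes (here, final moisture) are the scale-independent targets, while operating parameters such as air flow rate are expected to change with equipment and scale.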

What is an Unclassified Process Parameter?

We recognize that there are many material attributes and process parameters that are important and even essential to product quality, but it is of little value to define all parameters as critical. Thus we propose three categories for attributes or parameters: unclassified, critical, or non-critical. The criticality of an unclassified parameter is undetermined or unknown. Sponsors’ pharmaceutical development studies can provide the additional data needed to classify an unclassified parameter as critical or non-critical. For a process or dosage form we expect wide agreement on the set of attributes or parameters that need classification. Prior experience and standard texts will guide this process. Figure 3 provides an example identification of unclassified process parameters (UPP) at the beginning of a development process.
These UPP may later be classified as critical or non-critical. For example, in the granulation process, the impeller speed should clearly be identified as an unclassified process parameter because if impeller speed were zero the process step would not be successful. However, this does not mean that impeller speed is always a critical parameter. If development studies demonstrated the granulation was not affected by realistic changes in impeller speed, it would not be identified as critical. An application that did not include the results of pharmaceutical development studies investigating the criticality of the UPP would have a large number of UPP remaining in the final submission.

What is a Critical Process Parameter?

A parameter is critical when a realistic change in that parameter can cause the product to fail to meet the TPQP. Thus, whether a parameter is critical or not depends on how large a change one is willing to consider. A simple example is that an impeller speed of zero will always fail. Thus the first step in classifying parameters is to define the range of interest, which we call the potential operating space (POS). The POS is the region between the maximum and minimum value of interest to the sponsor for each process parameter. The POS can also be considered the extent of the sponsor’s quality system with respect to these parameters. This definition is at the discretion of the applicant; the sponsor must balance the trade-offs in its definition. The POS defines the scope of the application and the sponsor’s quality system, so that going outside of the POS would require an amendment or supplement to the application. Thus sponsors benefit from defining a large feasible POS. The cost of a large POS is the need for pharmaceutical development (in the form of prior knowledge, process models, or experimental data) to cover the POS, along with the increased chance that a parameter will be found critical in the larger POS. The only constraint on the narrowness of the POS is that the POS must encompass the variability of the process parameters around their target values.
Our criteria for identifying critical and non-critical parameters are that a parameter is non-critical when there is no trend to failure within the POS and there is no evidence of interactions within the proven acceptable range (PAR; see explanatory footnote on first page of article), which is the range of experimental observations that led to acceptable quality. A sponsor has the option of conducting experimental observations over the entire POS; in this case the POS could be equivalent to the PAR. Alternatively, a sponsor may use prior knowledge, mechanistic models, and trends from the PAR to draw conclusions about sensitivity over a POS that is larger than the PAR. If the lack-of-interaction part of the test cannot be met, then the parameter remains a UPP. A parameter is critical when there is an observation of failure or a trend to failure predicted within the POS. If the interaction between two parameters is significant enough to predict a potential failure in the POS, then both parameters should be considered critical.
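The classification rule described above can be sketched as a decision function. This is an illustration of the logic only, not a validated procedure; the boolean predicates stand in for the experimental and prior-knowledge evidence a sponsor would actually assemble:

```python
def classify_parameter(failure_in_pos: bool,
                       trend_to_failure_in_pos: bool,
                       interactions_excluded: bool) -> str:
    """Classify a process parameter per the criteria above (illustrative only).

    failure_in_pos: an observed or predicted failure within the potential
        operating space (POS)
    trend_to_failure_in_pos: a trend within the PAR that extrapolates to
        failure inside the POS
    interactions_excluded: no significant interactions demonstrated within
        the proven acceptable range (PAR)
    """
    if failure_in_pos or trend_to_failure_in_pos:
        return "critical"
    if interactions_excluded:
        return "non-critical"
    # Interactions cannot be ruled out: criticality remains undetermined.
    return "unclassified"
```

For example, a parameter with no trend to failure in the POS and no evidence of interactions in the PAR is classified as non-critical; a parameter with no failure evidence but unexcluded interactions stays unclassified, which mirrors why such parameters keep a control strategy conservative.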
The most definitive way to identify critical and non-critical parameters is by scientific investigations involving controlled variations of the parameters. The focus in the process development report is on the additional studies that build this knowledge. These studies can be conducted on pilot or lab scale and do not need to be conducted under current Good Manufacturing Practice. When the sensitivity of process parameters is established, this can be used to design appropriate control strategies.
However, it may not be possible (due to economic and time constraints) to conduct scientific investigations on all UPPs. We believe that prior knowledge and experience with the unit operations can be used to classify some UPPs. The prior knowledge can be used in a formal risk assessment process to prioritize unclassified parameters for further experimental study. This is a potentially challenging issue for FDA review: if the reviewer does not agree with the risk assessment used to classify parameters as non-critical, then all further conclusions may be in doubt, because a potentially critical variable was left out of the experimentation used to develop a design space.
Our criteria for identifying critical and non-critical process parameters are based on the sensitivity of product characteristics to changes in the process parameters. Other approaches presented in the literature link the classification as critical to the variability in a process parameter (22,23). The variability of a process parameter impacts the control strategy that will be used, but we concur with ISPE PQLI that control of a variable does not render it non-critical (4). Table I summarizes the proposed classification of process parameters.
Table I
Classification of Process Parameters

Uniqueness of Critical Process Parameters

Because of the broadness of the CPP definition it is possible for two investigators to examine the same process and come to a different set of CPP. The set of CPP is not unique, but the chosen set must be sufficient to ensure product quality.
Different sets of CPP can have several origins. One is that the definition of operating parameters depends on the engineering systems installed on a piece of process equipment. For example, one fluid bed dryer may define the product temperature as an operating parameter and have an internal control system (a thermostat) that maintains that temperature, while another fluid bed dryer may have inlet air flow rate and inlet air temperature indicated as operating parameters. The batch record for the first unit might indicate a fixed temperature, while the second unit would have a design space that indicated the combination of inlet air flow rate and inlet air temperature that would ensure the appropriate product temperature.
Another source of differences in the set of CPP comes from the balance between control of operating parameters and material attributes. Morris (24) indicates that the set of CPP and CMA (which he refers to as process critical control points (PCCP)) can affect the scale-up process:
  • PCCPs are preserved throughout scale-up; the magnitude of the responses may not scale directly, but the variables being monitored reflect the “state” of the process
  • Monitoring material properties makes scaling less equipment dependent (as opposed to only monitoring equipment properties); equipment differences (scale and type) may have an effect, but differences in the material should reflect significant changes in the PCCPs


A control strategy may include input material controls, process controls and monitoring, design spaces around individual or multiple unit operations, and/or final product specifications used to ensure consistent quality. A control strategy is what a generic sponsor uses to ensure consistent quality as they scale up their process from the exhibit batch presented in the ANDA to commercial production.
Every process has a control strategy right now. Figure 4 shows a simplified quality assurance diagram under the current regulatory evaluation system. In this system, product quality is ensured by fixing the process to produce the active ingredient, raw material testing, performing the drug product manufacturing process as described in a fixed batch record, in-process material testing, and end product testing.
Fig. 4
An Example of Control Strategy for Pre-QbD Process
The quality of raw materials including drug substance and excipients is monitored by testing. If they meet specifications or other standards such as USP for drug substance or excipients, they can be used for manufacturing of the products. As the drug substance specification alone may not be sufficient to ensure quality, the drug substance manufacturing process is also tightly controlled. Potentially significant changes to the drug substance manufacturing process will require the drug product manufacturer to file supplements with the FDA.
The finished drug products are tested for quality by assessing whether they meet specifications. In addition, manufacturers are usually expected to conduct extensive in-process tests, such as blend uniformity or tablet hardness. Manufacturers are also not permitted to make changes to the operating parameters (a large number of UPPs) specified in the batch record, or other process changes, without filing supplements with the FDA.
This combination of fixed (and thus inflexible) manufacturing steps and extensive testing is what ensures quality under the current system. A combination of limited characterization of variability (only three pilot lots for innovator products and one pilot lot for generic products), a failure of manufacturers to classify process parameters as critical or non-critical, and cautiousness on the part of regulators leads to conservative specifications. Significant industry and FDA resources are being spent debating issues related to acceptable variability, the need for additional testing controls, and the establishment of specification acceptance criteria. The rigidity of the current system is required because manufacturers may not understand how drug substance, excipients, and manufacturing process parameters affect the quality of their product, or they do not share this information with FDA chemistry, manufacturing, and controls (CMC) reviewers. Thus the FDA CMC reviewers must act conservatively.
A QbD-based control strategy is shown in Fig. 5. Pharmaceutical quality is assured by understanding and controlling formulation and manufacturing variables to assure the quality of the finished product. The end product testing only confirms the quality of the product. In this example, PAT provides tools for realizing real-time release of the finished product, although its use is not required under the Quality by Design paradigm.
Fig. 5
An Example of Control Strategy for QbD Process

Implications of Process Parameter Classification

The classification of process parameters as critical or non-critical is essential to evolve the control strategy toward the QbD based goal. Full classification of all parameters as either non-critical or critical can lead to reduced end-product testing. It is the uncertainty about the UPP that leads to extensive testing.
Without development studies, UPPs may need to be constrained to fixed values or narrow ranges (those used to produce acceptable exhibit batches) because they might be critical. The presence of UPPs also leads to the inclusion of extensive release and in-process tests in the control strategy. The goal of development studies is to move parameters from unclassified (criticality unknown) to either non-critical or critical. This classification is an important step toward a flexible manufacturing process because parameters classified as non-critical may be monitored and controlled via univariate ranges or as part of a sponsor’s quality system (see Table II). For non-critical parameters it may be possible to designate a normal operating range (NOR) up to (or beyond) the proven acceptable range (PAR), depending on trends and prior knowledge. The superposition of NORs for non-critical parameters would be considered part of the design space.
Table II
Impact of Classification of Process Parameters on Control Strategy
The ranges of critical parameters must be constrained to a multidimensional design space or fixed at a set of values known to be acceptable. Univariate PARs can be used for critical parameters only when there is evidence that there are no significant interactions between the CPPs. However, the establishment of this knowledge about CPPs may render them lower risk than UPPs. A control strategy appropriate to the known CPPs may also have less need for release testing than one for a process with many UPPs.

Design Space

In the presence of interacting critical process parameters a design space is one approach to ensure product quality although it is not a check-box requirement. The current definition of design space is “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality.” (8) This definition evolved from early ICH Q8 drafts where design space was defined as “the established range of process parameters that has been demonstrated to provide assurance of quality” (21). The change emphasizes the multidimensional interaction of input variables and closely binds the establishment of a design space to a conduct of a DOE that includes interactions among the input variables. A design space may be constructed for a single unit operation, multiple unit operations, or for the entire process.
Submission of a design space to FDA is a pathway to obtaining the ability to operate within that design space without further regulatory approval. A design space is a way to represent the process understanding that has been established. The benefits of having a design space are clear; one challenge to the effective use of a design space is the cost of establishing it.
In a typical design space approach, a sponsor identifies the unclassified parameters and then conducts a DOE on some of them, with the other unclassified parameters held fixed. The end result is a regulatory situation where there is some space for the selected parameters but no flexibility for the others. This operating-parameter-based design space is limited to the equipment used to develop it, and might change on scale-up or equipment changes.
In the development of a design space, the key to efficiency is demonstrating that the unclassified parameters left out of the DOE are truly non-critical process parameters and thus, by our definition, non-interacting. Before attempting to establish a design space, effort should be invested in reducing the number of unclassified process parameters. This may involve a screening DOE to rule out significant interactions between process parameters. When parameters are non-interacting, univariate ranges for the non-critical parameters are appropriate and can be added to the design space presentation without additional studies.
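The screening step can be sketched numerically. In the simplest case, a 2² full factorial gives the interaction estimate directly; the factors and response values below are hypothetical, chosen only to illustrate the arithmetic, not drawn from any cited study:

```python
# Illustrative sketch of a 2^2 full factorial screening DOE used to check
# for an interaction between two process parameters.

def factorial_effects(y_mm, y_pm, y_mp, y_pp):
    """Main effects and interaction for a 2^2 design.

    y_ab is the response with factor A at level a and factor B at level b,
    where m = low (-1) and p = high (+1).
    """
    effect_a = ((y_pm + y_pp) - (y_mm + y_mp)) / 2.0
    effect_b = ((y_mp + y_pp) - (y_mm + y_pm)) / 2.0
    interaction_ab = ((y_pp + y_mm) - (y_pm + y_mp)) / 2.0
    return effect_a, effect_b, interaction_ab

# Hypothetical responses (e.g., % dissolved) at the four corner runs.
a, b, ab = factorial_effects(y_mm=78.0, y_pm=85.0, y_mp=80.0, y_pp=99.0)
# An |interaction| that is small relative to the main effects (and to noise)
# would support univariate ranges; here ab = 6.0 is not negligible, so the
# two parameters would be carried into the design space DOE together.
```

An interaction estimate near zero (relative to run-to-run noise) is the evidence that would justify excluding a parameter pair from the formal design space DOE.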
It is best to exploit the non-uniqueness of CPPs to define the design space in terms of scale independent (dimensionless) parameters and material attributes. Understanding the design space in terms of material attributes allows scale up and equipment changes to be linked to previous experiments. The scalability of the design space can be evaluated in the transfer from lab to exhibit batch manufacturing.

Feedback Control and PAT

Application of PAT (25) may be part of a control strategy. ICH Q8(R) (7) identifies one use of PAT as ensuring that the process remains within an established design space. In a passive process, PAT tools provide continuous monitoring of CPPs to demonstrate that a process is maintained in the design space. In-process testing of CMAs can also be conducted online or inline with PAT tools. Both of these applications of PAT are more efficient ways to detect failures. In a more robust process, PAT can enable active control of CPPs: if there is variation in the environment or input materials, the operating parameters can be adjusted to keep the CMAs under control and thereby ensure quality.
A PAT system that continuously monitors CMAs (instead of CPPs) can potentially be combined with feedback control of process parameters to provide an alternative to design-space-based control strategies. A problem with a design space is that it can limit flexibility. A design space is usually a specified space of process parameters that has been demonstrated to provide acceptable quality. There may be sets of process parameters that lead to acceptable quality but were not explored in the establishment of the design space. Thus, pursuit of a design space can be movement in the opposite direction from a flexible and robust manufacturing process. Direct assessment of product quality via PAT may support more flexibility and robustness than is represented by the design space. When CMAs can be actively monitored and feedback control applied to the CPPs, then variation in the environment or input materials can be counteracted by new values of the CPPs (even values outside of a design space that represents prior experience) to keep the CMAs within desired limits. When direct assessment of product quality by PAT is established, it may be more valuable to invest pharmaceutical development resources in an active control system than in documentation of a design space.
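As a toy illustration of this feedback idea (not a method from any FDA or ICH guidance), the sketch below uses a simple proportional controller with invented names and numbers: the monitored CMA is held at target by adjusting an operating parameter, even after a disturbance shifts the process:

```python
# Hypothetical CMA-based feedback loop: a proportional controller adjusts
# an operating parameter (spray rate) to hold a monitored CMA (granule
# moisture, %) at target through an input-material disturbance.

def control_step(cma_measured, cma_target, parameter, gain):
    """Proportional update of the operating parameter from the CMA error."""
    return parameter + gain * (cma_target - cma_measured)

def process(spray_rate, disturbance):
    """Toy process model: moisture responds linearly to spray rate."""
    return 0.04 * spray_rate + disturbance

spray_rate, target = 50.0, 2.0
for step in range(30):
    disturbance = 0.3 if step >= 10 else 0.0  # input-material shift at step 10
    moisture = process(spray_rate, disturbance)
    spray_rate = control_step(moisture, target, spray_rate, gain=5.0)

# The loop settles at a new spray rate (~42.5, away from the initial 50)
# that keeps the CMA at target -- the "new values of the CPP" noted above.
```

The point of the sketch is that the controller, not a pre-established parameter range, is what keeps the CMA within limits after the disturbance.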


Quality by design is an essential part of the modern approach to pharmaceutical quality. This paper clarifies the use of QbD for ANDAs, including:
  • Emphasis on the importance of the Target Product Quality Profile in articulating a quantitative performance target for QbD.
  • Identification of critical material attributes that provide a mechanistic link of the product quality to the manufacturing process.
  • Clarification that critical process parameters are operating parameters and should be combined with critical material attributes to describe the relation between unit operation inputs and outputs.
  • A definition of non-critical, unclassified, and critical that provides a way to classify process parameters and in-process material attributes.
  • The role of the control strategy as the mechanism for incremental implementation of QbD elements into practice.
  • An efficient path to a design space through the identification of non-interacting process variables and their exclusion from formal experimental designs.


Opinions expressed in this manuscript are those of the authors and do not necessarily reflect the views or policies of the FDA.
The PAR is the range of experimental observations that lead to acceptable quality. A sponsor has the option of conducting experimental observations over the entire POS; in this case, the POS could be equivalent to the PAR. Alternatively, a sponsor may use prior knowledge, mechanistic models, and trends from the PAR to draw conclusions about sensitivity over a POS that is larger than the PAR.


1. J. Woodcock. The concept of pharmaceutical quality. Am. Pharm. Rev. Nov/Dec 2004: 1–3.
2. M. N. Nasr. Implementation of quality by design (QbD): status, challenges, and next steps. FDA Advisory Committee for Pharmaceutical Science (accessed 11/21/2007).
3. L. X. Yu. Implementation of quality-by-design: OGD initiatives. FDA Advisory Committee for Pharmaceutical Science (accessed 11/21/2007).
4. ISPE PQLI. Draft PQLI summary update report (accessed 11/21/2007).
5. W. P. Ganzer, J. A. Materna, M. B. Mitchell, and L. K. Wall. Current thoughts on critical process parameters and API synthesis. Pharm. Technology, July 2005: 46–66.
6. M. Glodek, S. Liebowitz, R. McCarthy, et al. Process robustness: a PQRI white paper. Pharm. Eng., Nov/Dec 2006: 1–11.
7. ICH. Draft consensus guideline: pharmaceutical development annex to Q8 (accessed 11/21/2007).
8. FDA CDER. Guidance for industry. Pharmaceutical development. May 2006.
9. FDA CDER. Guidance for industry. Quality risk management. June 2006.
10. FDA CDER. Draft guidance for industry. Pharmaceutical quality system. July 2007.
11. FDA CDER. Draft guidance for industry and review staff. Target product profile—a strategic development process tool. Mar., 2007.
12. Yu L. X., Raw A., Lionberger R., et al. U.S. FDA Question-based review for generic drugs: a new pharmaceutical quality assessment system. J. of Generic Med. 2007;4:239–248. doi: 10.1057/palgrave.jgm.4950073. [Cross Ref]
13. FDA Office of Generic Drugs. Model quality overall summary for an extended release capsule (accessed 11/21/2007).
14. FDA Office of Generic Drugs. Model quality overall summary for an immediate release tablet (accessed 11/21/2007).
15. Potter C., Beerbohm R., Coups A., et al. A guide to EFPIA’s Mock P.2 document. Pharm. Technology Europe. 2006;18:39–44.
16. R. Nosal. PQLI-criticality. ISPE PQLI Berlin Conference. Sept. 2007.
17. Tong C., D-Souza S. S., Parker J. E., Mirza T. Commentary on AAPS workshop dissolution testing for the twenty-first century: Linking critical quality attributes and critical process parameters to clinically relevant dissolution. Pharm. Res. 2007;24:1603–1607. doi: 10.1007/s11095-007-9280-x. [PubMed] [Cross Ref]
18. J. Berridge. ICH Q8 & Q9 (+Q10) defined and undefined: gaps and opportunities. ISPE PQLI Washington Conference. June 2007.
19. R. A. Reed. A quality by design approach to dissolution based on the biopharmaceutical classification system. DIA Annual Meeting, June 2005.
20. Hlinak A. J., Kuriyan K., Morris K. R., Reklaitis G. W., Basu P. K. Understanding critical material properties for solid dosage form design. J. Pharm. Innovation. 2006;1:12–17. doi: 10.1007/BF02784876. [Cross Ref]
21. FDA CDER. Draft guidance for industry. Q8 pharmaceutical development, version 4.3 (draft), Nov. 2004.
22. T. Parks. A science and risk-based approach for establishing critical process parameters and critical intermediate quality attributes. Available at: $File/12+-+CPP+Poster.ppt (accessed 11/21/2007).
23. R. Nosal. Industry perspective of risk-based CMC assessment under QbD. AAPS Annual Meeting, Oct. 2006.
24. K. R. Morris. Risk-based development and CMC question-based review: asking the right questions for process understanding, control and filing. FDA Advisory Committee for Pharmaceutical Science (accessed 11/21/2007).
25. FDA CDER. Guidance for industry: PAT-a framework for innovative pharmaceutical development, manufacturing, and quality assurance. Sept. 2004.

HVAC Strategies for the Cleanroom

Richard A. Bilodeau

Q: What are some key points in HVAC engineering for cleanrooms?
A: "Good things, when short, are twice as good." ~Baltasar Gracián, The Art of Worldly Wisdom, translated from Spanish

A dilemma

This month's topic poses a dilemma. The HVAC system in any controlled environment is the kingpin of clean. Given its criticality, and the myriad considerations in engineering an HVAC system for a cleanroom, this column could end up rivaling War and Peace. To save your sanity (and mine), I decided old Baltasar must be right, so following is the CliffsNotes version, touching on some of my favorite topics. Just remember to consult your favorite engineer when undertaking your next project for "the rest of the story."

Some grounding

When it comes to cleanroom HVAC, it's all about contaminant and environment control. The nasty "C-word" lurks everywhere. Contaminants—invisible to the eye—can pack a punch and wreak havoc on product yield and integrity if not controlled. This sets up a potential double whammy: hitting the company's yield and bottom line, or derailing a research team's multi-year effort.

The optimal HVAC design solution is determined by desired temperature and humidity control, airflow and pressure, filtration requirements, and air change rates, among other considerations. These design factors are dictated by your process, your facility, and applicable regulatory requirements.

Whether creating a controlled environment for an electronics manufacturer or a life sciences environment free of pathogens, the HVAC system controls your success and will significantly impact your operating costs. 

Contamination lurks everywhere

Whether your goal is to create an environment at ISO 9 classification—not tremendously different from outside air—or drive the cleanliness to ISO 1, the most stringent of controlled environments, a few fundamentals apply:

• Contaminants are not your friend—start by not allowing them in from the outside.
• Those that infiltrate your environment must be eliminated quickly—don't let them accumulate or hang out. Your cleanroom is not the neighborhood bar.
• Besides worrying about particulate interlopers from the outside environment, make sure you have your own house in order. This means minimizing contaminants that your manufacturing or research processes—including the equipment integral to your operations—throw off, whether through biological, chemical, or operating processes. And make sure your employees consistently follow protocols developed to minimize contamination.

A tool you can use: Computational Fluid Dynamics (CFD) analysis

CFD is a software modeling tool that can provide an accurate view of existing airflow conditions while also modeling the projected airflows of a variety of HVAC solutions or system adjustments. CFD is a precise and valuable tool both for designing new controlled environments and for diagnosing and solving problems in existing cleanrooms.

CFD can trace its roots to the 1930s, when two-dimensional models were developed to solve linearized potential equations. The development and continued advancement of computing not only enabled the analysis and modeling of any gas or liquid fluid flow in 3D, but also enabled analysis of complex equations. Los Alamos National Labs lays claim to being the first to use computers to model fluid flow using the Navier-Stokes equations.

A valuable tool for today's controlled environments engineer evolved from that rich history. Not only can CFD modeling assist in selecting the most efficient and effective air handling system when designing a new controlled environment, it is extremely useful in dealing with underperforming or problematic controlled environments. CFD can analyze airflow problems, humidity issues related to airflow, temperature gradient issues, the impact of tool sets and other equipment on airflow, and identify air pressure differentials throughout a space or track contaminant flow. This analysis is priceless when working to identify root causes of underperforming cleanrooms.

Figure 1: An example of CFD modeling of a Class 1,000 cleanroom that was experiencing inconsistent airflow and stagnation issues.

Figure 1 shows CFD modeling of a Class 1,000 cleanroom that was experiencing inconsistent airflow and stagnation issues, while the "after" shot in Figure 2 validates the effectiveness of the solution.

Figure 2: After the problem was identified, CFD modeling validated the effectiveness of the solution. 


Energy is king in the HVAC kingdom

Cleanrooms rank among the top energy consuming facilities in the world, driven in large part by their HVAC requirements to meet stringent airflow and pressurization requirements within strict temperature and humidity controls. And as existing technologies increase in sophistication and new technologies create additional demand, controlled environments have proliferated around the globe in more than 30 different industry sectors and are a mainstream feature of academic, medical, industrial, and defense research facilities. It’s been estimated that cleanrooms demand between 10 and 100 times more energy than standard office spaces—mainly driven by air cleanliness standards—and the HVAC system can account for more than half of the facility's energy costs. This impacts operating costs, on top of an already costly capital facility.

Following are a variety of strategies to help reduce energy costs related to your HVAC system:

1. To begin, minimize demand. Take a look at your building. Can you increase the efficiency of the shell? When building new, carefully orient and develop the building form. Is there an opportunity to reduce the volume of your cleanroom? Less volume equates to less air re-circulation with resulting HVAC savings. 
2. Make sure you accurately scope the level of cleanliness and the square footage required. Going overboard in either category will drive up your costs. Consider reducing positive pressurization where prudent.
3. Flexibility is key. Design your HVAC system with an eye towards flexibility, not only for sustainability, but for future product line and expansion capabilities as well. Don’t forget to plan your HVAC equipment to accommodate part load scenarios.
4. Subdivide your facility's space classifications. Carefully examine the proposed process and product requirements when determining your required cleanroom classification. Don’t shoot an ant with an Uzi. Do you really need the entire space to be stringently controlled?
5. Mini- and micro-environments are your friends; stick or prefab? Consider the use of micro- or mini-environments (see the May 2013 issue of Controlled Environments) and a mix of stick built and prefabricated areas—determined by process specifications and flexibility needs. Utilize these tools to meet your process requirements instead of upgrading your entire cleanroom.
6. Invest in high efficiency equipment. Your upfront costs are an investment with surprisingly short payback periods. And don't forget to use high efficiency filters.
7. Consider energy recovery and waste recovery strategies.
Energy recovery strategies such as an exhaust energy recovery system, co-generation, and equipment or other heat recovery systems can cut demand and costs.
8. Use alternate energy appropriately. You can reduce the load on your HVAC system by carefully analyzing and appropriately using alternative energy sources throughout your facility. Consider solar heating and power, daylight, wind energy, and thermal where technically sound and fiscally responsible. "Green for a reason" is the mandate at SMRT, ensuring that alternate energy sources are operationally sound, financially responsible, and appropriate to the application. Don’t let anyone sell you on being green for green's sake.
9. Analyze the viability of reducing air change rates (ACR). The sizes of your motors and fans are driven in large part by the air change rate in your cleanrooms. Larger motors and fans drive increased HVAC investment and operating costs. You can reduce power usage by approximately two-thirds if you reduce your ACR by approximately one-third.
10. Adjust your airflow to match your production load. Scheduling software and timers can be used to decrease air recirculation and the HVAC load during periods of reduced production. Ditto the wonders of occupancy sensors that can make automatic adjustments depending on the occupancy levels of your biggest contaminant source—people.
11. Locate equipment outside the cleanroom where appropriate. This is a triple bonus strategy. When you locate process tools in an adjacent chaseway and provide critical clean access on the cleanroom side, you will reduce heat gain as well as the square footage required in your cleanroom, resulting in less demand on the HVAC system. You will also make future equipment maintenance easier and less costly.
12. Use variable frequency drives (VFDs). Variable frequency drives, which adjust HVAC equipment speed to match conditions, can cut your energy use by up to a third compared with constant-speed drives.
13. Use particle counters to manage airflow in real time. Carefully located optical sensors provide 24/7 particle counts to the building management system, allowing the HVAC system to operate with efficiency matched to need.
14. Analyze your air distribution system to reduce pressure drop. Your HVAC fans have to work harder in a restrictive air distribution system, raising energy consumption. Keep the freeway open with straight ductwork where possible, eliminating obstructions and carefully sizing duct diameters. Consider the pressure drop properties of supporting equipment like coils, fans, and filters.
15. Don't be overly conservative or cautious. Don't overdesign your HVAC system, or build in too many safety nets. Those behaviors compromise operating efficiencies.
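Several of the numbers quoted above (items 9 and 12 in particular) follow from the fan affinity laws, under which airflow scales linearly with fan speed while fan power scales with the cube of speed. A back-of-the-envelope sketch, illustrative only since real savings depend on the system:

```python
# Fan affinity laws: airflow scales with fan speed; fan power scales with
# the cube of speed.

def fan_power_ratio(flow_ratio):
    """Fraction of baseline fan power when airflow is scaled by flow_ratio."""
    return flow_ratio ** 3

# Cutting the air change rate by one-third leaves (2/3)^3, about 30%, of
# baseline fan power -- roughly the two-thirds reduction cited in item 9.
saving = 1.0 - fan_power_ratio(2.0 / 3.0)
print(f"{saving:.0%} power reduction")  # prints: 70% power reduction
```

The same cube law is why VFDs pay off: running a fan at partial speed during low-production periods costs far less than proportional.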

A final word

The world of HVAC design for controlled environments is an always-evolving field with new equipment and constantly emerging operational innovations. While this article provides an overview of some key considerations, the unique properties of your process, product, or research requirements—coupled with those of your physical plant—will determine best practices.

Access Control in Cleanrooms

Bryan Sanderford

When cleanrooms are used in manufacturing or scientific research, the doors and framing materials must have proper gaskets and seals to keep contaminated air from passing in or out of the area, and the door hardware must close and reseal the door after each personnel passage. However, additional door devices—such as card access systems, electric locks, station controls, and traffic lights—may be necessary to limit access to authorized persons and to operate the airlock for environmental conditions.

Access conditions and options

In order to seal the cleanroom and also allow access to personnel, vestibules with two or more doors are constructed and only one door is allowed to be opened at a time. A simple two-door airlock will have electric locks on each door—opening either door will cause the other door to lock. The door hardware must cause the doors to close immediately after an entry or exit. If both doors are accessed simultaneously, it will be impossible to prevent possible contamination through the unsealed doorway. One solution is to have one or both doors normally locked. Access can be granted through a locked door by card access or push button controls located adjacent to the door and within arm's reach.

An emergency override switch must be located at each doorway and be tied into the fire alarm system.

Frequently, the construction and/or locations of interlocked doors do not allow visibility of the other controlled doors. Traffic lights can be used in these situations to allow smooth entry and exit by personnel. A normally unlocked door will have an indicator that turns red when the door is locked. If the door is normally locked, then the light should turn green to indicate that access is allowed. PLC-based controllers can easily provide this function, but most relay logic interlocks cannot. Large, high-intensity LEDs are best for these applications and operate for an extended life cycle.

If a door does not fully close, then the other related doors will be inhibited and not allow entry or exit. A door prop alarm function may be incorporated into the PLC programmed logic or can be an independent stand-alone alert device. The door prop function monitors how long the door remains opened—should it not be closed in a predetermined time, an alert is sounded and a supervisor may be summoned.
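The interlock and door-prop behavior described above reduces to simple Boolean logic. In practice this runs on a PLC or relay logic; the function names and the 30-second limit below are invented for illustration:

```python
# Sketch of two-door airlock interlock and door-prop monitoring.

PROP_ALARM_SECONDS = 30  # hypothetical "predetermined time"

def lock_commands(door_a_open, door_b_open):
    """Opening either door locks the other; a closed pair leaves both free."""
    return {"lock_a": door_b_open, "lock_b": door_a_open}

def prop_alarm(seconds_open):
    """Sound the alert when a door stays open past the predetermined time."""
    return seconds_open > PROP_ALARM_SECONDS

# Door A open: B locks, A stays free so it can close and reseal.
assert lock_commands(door_a_open=True, door_b_open=False) == {
    "lock_a": False, "lock_b": True}
# A door propped for 45 s triggers the alarm; a 10 s passage does not.
assert prop_alarm(45) and not prop_alarm(10)
```

A PLC program adds the extra conditions on top of this core logic: fire-alarm override forcing both locks open, and inhibiting entry while any related door is not fully closed.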

Overrides for safety

Special lock override controls may be included to allow escape in an emergency. The system must be tied into the fire alarm system to unlock the doors in case of fire. An emergency override switch should be located within the room and at each doorway. Some facilities use an emergency pull station or a latching push-pull switch with a key reset; these devices frequently incorporate a built-in sounder to alert area personnel that the doors are unlocked.

Traffic lights can give personnel visual signals for door access.

Custom cleanroom sequences

Some facilities require automatic door operators to allow rolling carts. Most swing door operators are powered open and spring closed. Sliding doors are nearly always powered open and power-closed. If locks are needed, special programming is required to have the door sequenced to unlock before the door operator attempts to open it. Traffic can be directed by specifying motion sensors and/or push button control switches on only one side of the doorway.

Complex traffic patterns can be accommodated using a PLC-based controller. These can involve dozens of doors, or doors used to pass between two sterile rooms. This situation requires that all doors in each of the two rooms be secured whenever the "shared" door between them is open. The shared door may be normally unlocked for faster traffic if the other doors in the two rooms are normally locked.

A PLC programmed interlock can operate devices and systems other than electric locks. A custom timing sequence may be required to allow exhaust fans to extract contaminated air from the airlock before allowing the door to be unlocked. Similarly, one system uses a special function to inflate and seal the door gaskets before allowing access through another door.

Performing a Clean Zone Smoke Test

Ruben Contreras

As a Registered Cleanroom Certification Professional, I have seen an increase in the use of "clean zones" inside of cleanrooms in pharmacies. There are numerous types of unidirectional-flow devices that have been used in the past; however, when using a unidirectional-flow device in the compounding processes, the suppliers, customers, and certifiers need to keep in mind the meaning of the direct compounding area, first air, and critical site.
• Direct compounding area (DCA): a critical area within the ISO Class 5 primary engineering control (PEC) where critical sites are exposed to unidirectional HEPA-filtered air, also known as first air.
• First air: the air exiting the HEPA filter in a unidirectional air stream that is essentially particle-free.
• Critical site: a location that includes any component or fluid pathway surfaces (e.g., vial septa, injection ports, beakers) or openings (e.g., opened ampoules, needle hubs) exposed and at risk of direct contact with air (e.g., ambient room or HEPA-filtered), moisture (e.g., oral and mucosal secretions), or touch contamination. Risk of microbial particulate contamination of the critical site increases with the size of the openings and exposure time.
To test a clean zone, there are several issues to look for, such as smoke reflux, dead spots, and other smoke deviations. It is important not to separate the filter media from the wall, since the farther it is from the wall, the more reflux can be found. The HEPA filter media should be flush with the walls of the clean zone and the front shield. The Lexan shield must be consistent in its area of opening from the HEPA filter through the entire travel of the ISO 5 zone. Providing an incline to expand the area of the opening will lower the supply velocity and its uniformity. The uniformity of airflow velocity can also be affected by the type of filter utilized, such as a room-side replaceable HEPA or ULPA filter. The typical loss of filter area also lowers the cfm (cubic feet per minute) of the filter, reducing the volume as well as the velocities.
To perform this test, I have personally taken 10 measurements (two columns, five rows) of velocity (testing a 2 by 4 ft. HEPA filter) using a thermo-anemometer held by a suitable stand to avoid manual fluctuations.
The test procedure to validate these clean zones requires a thermo-anemometer on a ring stand, with the probe of the anemometer placed 12 in. under the filter and beginning 6 in. away from the corner in each direction. A 12 in. grid is then set up, with readings taken every 12 in. on center.
Compute the arithmetic mean of the recorded velocities:
x = (y1 + y2 + … + yn) / n
where
y = average velocity at each test position
n = number of test positions
x = average filter face velocity
Compute the standard deviation:
S² = ((y1 − x)² + (y2 − x)² + … + (yn − x)²) / (n − 1)
Compute the relative standard deviation:
RSD = (S / x) × 100
Most compounding pharmacy clean zones have I.V. bars and bags which makes it important to get the relative standard deviation as low as possible. IEST-RP-CC-002.3 Section 6.1.1 states, "The maximum relative standard deviation is typically 15% when using an electronic micro-manometer with multipoint probe."
It's preferable to keep the RSD below 10% to create the effect necessary to pass the smoke challenge without high reflux, dead spots, or airflow that travels sideways as opposed to straight down.
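A minimal sketch of the calculation, using Python's standard library and a set of hypothetical velocity readings; note that `statistics.stdev` uses the same n − 1 denominator as the formula above:

```python
# RSD calculation with illustrative readings (ft/min) from a 2 x 5 grid
# under a 2 by 4 ft. HEPA filter.
import statistics

readings = [92, 95, 90, 97, 94, 96, 91, 93, 95, 92]  # hypothetical values

mean_velocity = statistics.mean(readings)   # x: average face velocity
s = statistics.stdev(readings)              # sample SD, (n - 1) denominator
rsd = s / mean_velocity * 100               # relative standard deviation, %

# Compare against the 15% limit from IEST-RP-CC-002.3 and the 10% target.
print(f"mean = {mean_velocity:.1f} ft/min, RSD = {rsd:.1f}%")
```

For this sample set the RSD comes out well under 10%, which is the kind of uniformity that makes the downstream smoke challenge pass cleanly.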
Selecting the filters for the clean zones
Careful consideration of filter manufacturers is recommended as we have seen large discrepancies in uniformity of face velocities among different manufacturers and even filter types, like HEPA to ULPA designs. Uniformity or unidirectional flow is the key to the functionality of these systems as well as a continuous low level return beneath the work zone.
Achieving a deviation of less than 14 degrees in an ISO-5 clean zone
There must be a continuous length of HEPA filter across a wall away from the door, with a 4 in. continuous wall return cavity along the back of the area. The clean zone must have 100% coverage of the area and use a polycarbonate directional flow shield from the ceiling grid to 5 ft. off the floor. The work table or zone should be moved nearer to or farther from the back wall as necessary to cause a waterfall effect on the front and back edges of the work table. A solid table produces an effect similar to a biological safety cabinet centerline smoke split. (Note: Perforated tables are difficult to clean and maintain.) Be sure the table is no more than 30 in. in height and 30 in. or less in width. Once the desired waterfall effect is noticed, secure the work table at that location and perform the smoke pattern test.
Performing the clean zone smoke pattern test
Purpose: This test determines that the airflow within the clean zone conforms to the manufacturer's design criteria. This shows the airflow within the DCA moving in a downward direction with no dead spots or refluxing in the critical site. Be sure that the ambient air does not enter the clean zone or other areas except through the supply HEPA filter. Once the air enters the DCA, it must move to the returns without reentry. (Note: This test should be performed following completion of the airflow velocity, volume, room air changes per hour, room pressures, and uniformity tests.)
Apparatus: A source of visible smoke that is generally neutrally buoyant. Chemical smoke tubes or glycol-base smoke generators are examples of acceptable smoke sources.
Good results can be achieved using a theatrical fog generator that has a mixture of glycol with a fan speed controller connected to a delivery tube, supported by a hands-free stand (1 in. diameter PVC with small holes to create a laminar-like curtain smoke pattern). It is recommended that the PVC tube be the same size as the distance between the front shield and the wall.
• Adjustable support stand
• Plumb bob
• Tape measure
• Video or digital camera
Procedure: • Place the delivery tube with the plumb bob.
• Turn on the smoke generator and adjust the fan speed control to get the desired laminar airflow.
• Introduce the aerosol stream isokinetically and, as nearly as practical, isothermally.
• Generate the smoke remotely from the vicinity of the source.
• Move the smoke tube through the entire area to be tested, sliding the hands-free stand slowly so that the whole clean zone area is observed and video recorded.
• With the pointer mounted in the support stand at the work table exit plane, measure the offset distance (Δs) between the theoretical straight-line flow point and the center of the source stream, and measure the distance between the delivery outlet tube and the work table exit plane (d).
• Calculate the angle of deflection, theta (θ), as the arctangent of the ratio Δs/d: θ = arctan(Δs/d).
• Example: if the distance between the delivery outlet tube and the work table (d) is 4 ft. and the offset distance (Δs) between the theoretical straight-line flow point and the center of the source stream is 1 ft., then θ = arctan(1 ft./4 ft.).
• θ = arctan 0.25.
• θ = 14.036°. That means that for every 4 ft. of travel (d), the smoke stream drifts no more than 1 ft. (Δs).
Acceptance: Readings in excess of a 14-degree offset should be discussed with the customer and either approved or corrections made until acceptable.
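The worked example reduces to a one-line calculation; the sketch below simply packages it as a helper and checks it against the 14-degree acceptance criterion:

```python
# Deflection angle between straight-down flow and the observed smoke
# stream. Use the same length unit for both measurements.
import math

def deflection_angle_deg(offset, distance):
    """Return theta in degrees from offset (delta-s) and distance (d)."""
    return math.degrees(math.atan(offset / distance))

# The worked example: d = 4 ft., delta-s = 1 ft.
theta = deflection_angle_deg(offset=1.0, distance=4.0)
print(round(theta, 3))  # prints: 14.036, matching the example above
```

A reading above 14 degrees would fall outside the acceptance criterion and be flagged for discussion with the customer.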
The clean zone smoke pattern test needs to be performed in the "as built," "at rest," and "operational" phases.
Other tests that need to be performed are: airflow volume, airflow velocity, and uniformity test; HEPA filter leak test; airborne particle count test; room pressurization test; light level and uniformity test; noise level test; temperature uniformity test; moisture uniformity test; vibration test; microbiological sampling test; and air changes per hour (ACPH) calculations. (See IEST-RP-CC-006.3 Section 5.1 and USP 34 (1116) for details.)
References
1. IEST-RP-CC-001.5, HEPA and ULPA filters.
2. IEST-RP-CC-002.3, Unidirectional flow, clean air devices.
3. IEST-RP-CC-006.3, Testing cleanrooms.
4. United States Pharmacopeia (USP) 797, Pharmaceutical compounding physical test.
5. United States Pharmacopeia (USP) 1116, Microbiological evaluation of clean rooms.
6. ISO 14644-1:1999, Cleanrooms and associated controlled environments.
7. Controlled Environment Testing Association (CETA) CAG-003-2006, revised Jan. 31, 2012, Sterile compounding facilities.

Key Considerations When Specifying Cleanroom Lighting

Jamie Pearson
Specifying lighting systems for cleanroom facilities requires considerations beyond energy and maintenance savings. While lighting for cleanrooms should be energy efficient and provide proper illumination for the task at hand, it is crucial that lighting coordinates with air-supply systems and minimizes any chances for contamination.

The first step to identifying what lighting solution can be utilized in the cleanroom facility is determining the ISO classification. The higher the ISO classification, the greater the number of particles allowed into the controlled environment. This means there will be fewer air filters taking up space in the ceiling. In the most stringent cases, the entire ceiling must be covered with filters to allow the absolute minimum amount of particles into the environment.
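The tradeoff described here is quantified in ISO 14644-1, which gives the maximum permitted particle concentration per class as a simple formula; a small sketch (values in particles per cubic meter):

```python
# ISO 14644-1 class limit: maximum permitted concentration of particles
# >= D micrometers for ISO class N is C = 10^N * (0.1 / D)^2.08.

def iso_class_limit(n, d_um):
    """Maximum particles/m^3 of size >= d_um for ISO class n."""
    return 10 ** n * (0.1 / d_um) ** 2.08

# ISO 5 works out to roughly 3,520 particles/m^3 at >= 0.5 um (the value
# tabulated in the standard); each class step allows 10x more particles.
iso5 = iso_class_limit(5, 0.5)
iso8 = iso_class_limit(8, 0.5)
```

So an ISO 8 room tolerates a thousand times the 0.5 µm particle concentration of an ISO 5 room, which is why its ceiling needs far less filter coverage and leaves more room for lighting.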

Airflow systems create lighting challenges
Everything in the cleanroom, including the lighting fixtures, is designed to ensure successful air filtration and maintain the laminar airflow in a contamination-free environment. Depending on the function of the controlled environment, a cleanroom will use either HEPA or ULPA filtration. These air-filtering systems are typically an expensive component and one of the first to be considered in construction. They take up a majority of the ceiling space, which leaves a real challenge for lighting the environment.

Hours of thorough research and design are required to create cleanroom lighting fixtures that maintain the integrity of the space and work effectively with the HVAC system. Regardless of the industry, every cleanroom lighting design should be virtually maintenance-free and provide the right amount of illumination.

Common fixture styles

The need for multiple air filters in cleanroom facilities leaves minimal space for light fixtures. Three common fixture styles maximize the use of that space: recessed troffers, surface-mount fixtures, and surface-mount teardrop fixtures. Which is best suited to a given facility depends on its ISO classification.

Recessed troffers: These fixtures are ideal for cleanrooms classified ISO 8 to ISO 5, because such rooms typically have enough space in the ceiling and plenum. While rooms classified ISO 8 and ISO 7 have more plenum space than rooms classified ISO 6 or ISO 5, both can use recessed fixtures. Environments with these classifications are typically electronic assembly, pharmaceutical processing, semiconductor manufacturing, and chemical lab facilities.

Surface mounts: For more demanding environments—those with ISO 3 and ISO 4 classifications—surface-mount fixtures can be installed below the ceiling plane. Because of the high air-filtration level required, there typically is not enough plenum space for recessed troffers so surface-mount fixtures are especially efficient.

Surface-mount teardrops: Surface-mount teardrop fixtures are suitable for controlled environments with an open-area design and moderately demanding classifications of ISO 5 and ISO 4. These fixtures are mounted to a 2-in.-wide ceiling grid and are suitable for individual or continuous row mounting. The aerodynamic shape of the teardrop fixture minimizes any disturbance of the laminar airflow.

Achieving effective light quality

In addition to becoming familiar with the fixture styles for each ISO classification, it is important to know what to consider when selecting the exact fixture. Understanding what lighting features are most important for cleanroom environments will help determine the most efficient solution that requires the least amount of maintenance possible.

Light levels

In most cases, cleanroom facilities produce and/or test small objects. Pharmaceutical and semiconductor industries, for example, typically use cleanrooms to manufacture small pills and computer chips, respectively.

Lighting fixtures must provide a high level of illumination so employees have enough light to carefully handle small components. Illumination levels may vary depending on the different process areas within the cleanroom, so it is important for the specifier to review each process area to determine the illumination level required for each task.

When specifying fixtures, the goal is to select the fewest fixtures that can provide the necessary illumination for the cleanroom. Limiting the number of fixtures helps reduce the amount of maintenance needed. Often, lighting manufacturers can help determine the number of fixtures needed for a given space by reviewing the layout of the cleanroom facility.
Furthermore, surfaces in most cleanrooms tend to be highly reflective. Fixtures need to produce a high intensity of light but should not produce any glare. There are high-efficiency fluorescent luminaires on the market today—ones that produce the amount of light intensity needed with minimal glare.


Fixture housings

Fixture housings must be constructed of materials such as aluminum, carbon steel, or stainless steel. These help eliminate the chance of contamination and stand up to rigorous cleaning procedures. The fixture should be welded or sealed shut to prevent the passage of particles into or out of the housing. This minimizes the chance for dust or particle collection on the fixture.

In some cases, fixtures should be wet-listed to accommodate cleanrooms that need to be periodically hosed down.


Lenses

Lenses should have a smooth outer surface, meaning those with prisms should be inverted so the prisms are on the inside of the fixture. This makes the fixture easy to clean and creates less opportunity for contaminants or dirt to collect. Many lenses and diffusers are available for each ISO classification. Acrylic lenses currently are the most common type because they are the most durable.

For cleanroom environments in the semiconductor industry, lenses can be constructed with radio filters so they do not compromise any electronics in the facility.

Advancements in cleanroom lighting

Continuous technological innovations have made the cleanroom industry dynamic and ever-changing. Manufacturing has become more demanding and regulations are becoming more stringent. Given those factors, plus the current focus on energy savings and the environment, a growing number of cleanroom lighting projects today are retrofits or renovations.

Currently, fluorescent lighting is the most common source for lighting cleanrooms. However, because of the need for more energy-efficient and reduced-maintenance options for cleanroom facilities, research and advancements in LED luminaires for cleanrooms are on the horizon.

Even with international standards for cleanroom facilities, it is important for the lighting specifier to consider the function of the specific controlled environment. Ultimately, the primary goal for all cleanroom facilities is to ensure there is no contamination and the space is properly illuminated for the task at hand. Selecting the most reliable and efficient lighting fixtures can help ensure both of these objectives are achieved.

Trimming Down Cleanroom Insulation

Ed Sullivan
In some respects, the insulation traditionally used in cleanroom manufacturing is like those 1980s-era cellular phones—much too clunky and somewhat prone to performance problems. But then, the conventional open-celled polyethylene foam insulation used in cleanrooms dates back to the 1980s or earlier.

The problems with cumbersome insulation designs become very pronounced in the manufacturing cleanroom environment, where thousands of feet of fairly narrow reactor piping form a congested maze of plumbing once the insulation has been installed.

A half-inch line with three-inch insulation becomes 6.5 inches in outer diameter. When you consider the multitude of lines in the typical manufacturing cleanroom, it’s no wonder the space gets crowded.
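The arithmetic behind that figure is simple: insulation adds its thickness on both sides of the pipe. A quick sketch, using the article's half-inch line as the example:

```python
def insulated_outer_diameter(pipe_od_in: float, insulation_thickness_in: float) -> float:
    """Outer diameter of an insulated pipe: the insulation wraps the pipe,
    so its thickness is added once on each side."""
    return pipe_od_in + 2 * insulation_thickness_in

print(insulated_outer_diameter(0.5, 3.0))   # -> 6.5 (three-inch conventional foam)
print(insulated_outer_diameter(0.5, 0.25))  # -> 1.0 (quarter-inch PVDF foam, discussed below)
```

Multiplied across thousands of feet of reactor piping, the difference between a 6.5-inch and a 1-inch envelope per line is where the space savings described later in the article come from.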

“One of the problems with the traditional melamine fiber or foam insulation design used in most cleanrooms is that it severely constricts the space needed by technicians to access the many points in the lines where instruments and controls are located,” explains Mark Ginchereau, vice president of Termar Inc., a Ventura, Calif.-based maintenance contractor and insulation installer.

Ginchereau adds that, until recently, the only alternative to having a cleanroom densely packed with open-celled or polyethylene insulated lines was to build a larger cleanroom and install longer lines so that there would be more elbow room for technicians—an expensive solution.
He mentions other drawbacks to conventional insulation as well. Standard open-celled insulation sheds particulate when cut. This makes it necessary to provide additional protection from cross contamination and exposure. 

Standard open-celled insulation can also shed particulate through everyday contact with workers maneuvering through tight spaces. Any uncontrolled particulate shedding can require extensive replacement and unscheduled cleanroom downtime.

Pipe lines can be a maze. Thinner pipe insulation offers more space and easier access for maintenance and repair. 

Ginchereau also mentions that cleanrooms with melamine fiber-filled insulation in some chilled-water applications may be susceptible to condensate forming in the fiber, due to chinks or even cracks resulting from impact damage or normal wear and tear from worker contact. Condensate formations can provide a breeding ground for biological growth, a highly undesirable intruder in any cleanroom environment. The condensate problem aside, cracks or other leaks in insulation are detrimental to maintaining exacting temperatures in cleanroom applications.

Another serious concern about the use of bulky traditional insulation is the possibility of serious worker injuries due to contact with superheated or frigid lines. Ginchereau explains that the greater the congestion of plumbing due to the use of insulation, the greater the potential for injury.

A new breed of insulation

One of the newer materials providing an option in cleanroom insulation is a PVDF-based, high-purity foam. This specialty plastic is a closed-cell foam that, at a thickness of only one-quarter inch, offers chemical and heat resistance along with insulating performance equivalent to roughly eight times its thickness of conventional foam. In other words, one-quarter inch of PVDF-based insulation is equivalent to two inches of open-celled insulation.

In recent years UFP Technologies, Georgetown, Mass., a producer of foam, plastic, and composite products, incorporated the new PVDF technology into an advanced tube and pipe insulation system specifically developed for process lines and equipment in cleanroom environments.

“When you consider that instead of six-inch-plus insulation on dozens of reactor lines, you are adding only one half inch in diameter to a one-inch or two-inch pipe, you can save a lot of real estate,” Ginchereau explains. “In the overall, the insulation is taking up only about one-tenth the space of traditional fiberglass.”

This savings of space translates to many benefits, including reduced cleanroom size requirements. In cleanrooms housing multiple reactors connected to thousands of feet of pipeline, switching from "fat" insulation to a "slimmer" product dramatically reduces the amount of space required.

Also, with thinner pipe insulation, more space is available for technicians to access reactors and plumbing, resulting in improved worker productivity as well as less exposure to contact with super-heated or super-cooled lines.

This PVDF-based product offers several other features that render important benefits to operators of manufacturing cleanrooms. The system includes custom-molded coverings for fittings, and an overlapping, self-adhering tape that provides a superior seal. This reduces the possibility of condensate, which can saturate ordinary foam insulation, creating leaks and enabling biological contamination.

Unlike traditional open-cell insulation, this technology does not shed when cut. This means fewer impurity problems while cleanrooms are live, and no need for protective bags and hoods or downtime during installation.

The PVDF-based product doesn’t burn or smoke, which is very important in pharmaceutical cleanrooms, where millions of dollars’ worth of drugs could be tainted and lost if exposed to smoke. It is compliant with the FM Approvals 4910 standard for cleanroom materials, has successfully completed FM’s 4924 Pipe Chase Flammability Test, and is rated for use by the semiconductor industry.