Wednesday, June 3, 2009


Is Your Process Scalable?

When it comes to process scale-up, the sooner the better

During the lengthy transition from discovery to commercialization, new chemical and biopharmaceutical processes must be scaled up to operate in production-scale equipment. Companies often perform initial scale-up efforts quickly to meet early-stage material needs, with little consideration of the implications for long-term commercialization. This can unknowingly entrench dead-end or unnecessary methodologies into the process and cause significant delays in process development. These unexpected delays in a new product launch are costly in both time and resources as teams scramble to understand what went wrong and how to rework the process at larger scale.

Conversely, planning ahead with upfront engineering of process scaling parameters will provide clear direction in carrying the process from the laboratory, through pilot and clinical scale, and ultimately to commercial scale. Having such a process scale-up "road map" very early in the process development cycle is critical to a speedy, problem-free product launch and can ultimately reduce process development costs.


There are a myriad of laboratory techniques and operations that cannot be easily scaled up in standard plant equipment. Two classic examples are evaporating to dryness in a rotovap and crash-cooling in an ice bath. Processing equipment for eventual scale-up is often selected with only limited data from development runs and frequently on the basis of simplicity, cost, and availability. All too often, equipment selection overlooks critical scale-up factors that are key to a particular process performing as expected.

Such choices may prove unwise or even unworkable in the long run. Mixers may not allow aseptic processing at larger scales, membranes may not be available in larger sizes, or a particular centrifuge may not meet the CIP requirements of cGMP. But other, less obvious, scale-up limitations can surprise development teams and delay launch by adding months or even years to the development cycle.

For example, operations generally take much longer in the plant than in the lab. Extended reagent addition times at the pilot or commercial scale can create transient changes in stoichiometry that can adversely affect reaction selectivity and may result in unexpectedly high impurities. Besides the obvious quality issues this creates, even small amounts of impurities can significantly affect crystallization or other downstream operations. Development is effectively put on hold until the impurities are identified and their sources eliminated, which sometimes requires major changes to the process chemistry at a relatively late stage in development.

Extended cycle times for operations at a larger scale mean that the various product streams must be stable at the given conditions, or yields will suffer and new degradation byproducts will appear. This can even lead to decomposition and thermal runaway. It's easy to overlook stream stability when a process can be completed in a matter of hours at the lab bench. Two other common scale-up pitfalls, process heating and mixing, are considered below in more detail.


Heat transfer is an area of critical importance in scale-up because it affects not only process performance, but can also have serious implications for process safety. The heat generated by exothermic reactions, for example, is often inconsequential at laboratory scale, but in large vessels heat removal becomes rate limiting. As vessel size increases, the available heat transfer area per unit volume decreases sharply: for geometrically similar vessels it falls in inverse proportion to vessel diameter, that is, with the inverse cube root of batch volume.
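The cube-root scaling of jacket area per unit volume is easy to see numerically; here is a minimal sketch assuming a hypothetical cylindrical vessel with height equal to diameter and a sidewall-only jacket:

```python
import math

def jacket_area_per_volume(volume_m3):
    """Sidewall jacket area per unit batch volume for a cylinder with H = D.
    V = pi/4 * D^3  ->  D = (4V/pi)^(1/3);  A_side = pi * D * H = pi * D^2."""
    d = (4.0 * volume_m3 / math.pi) ** (1.0 / 3.0)
    return (math.pi * d * d) / volume_m3  # m^2 of jacket per m^3 of batch

# 1 L lab flask, 100 L pilot vessel, 10 m^3 production reactor
for v in (0.001, 0.1, 10.0):
    print(f"{v * 1000:>8.0f} L : {jacket_area_per_volume(v):6.1f} m2/m3")
```

Running the sketch shows the available area per unit volume dropping by a factor of about 21.5 between the 1 L flask and the 10 m^3 reactor, which is exactly the cube root of the 10,000-fold volume ratio.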

Consider crystallization, where accurate temperature control can play an important role in producing the desired particle size distribution, crystal morphology and even the correct polymorph. The precise cooling profile for successful scale-up should be based on solubility studies, supersaturation modeling and other engineering studies performed at the bench. Even so, the impurity profile of the feed stream and the width of the metastable zone for nucleation can change in large scale equipment, resulting in unexpected behavior. Many a new polymorph has been discovered too late, for example at the end of the first registration batch!
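One classical engineering result from such bench studies is the cubed-time cooling profile, which cools slowly early in the batch (when little crystal surface exists to relieve supersaturation) and faster toward the end. A minimal sketch with hypothetical temperatures and batch time:

```python
def cubic_cooling_profile(t_h, t_total_h, temp_start_c, temp_end_c):
    """Programmed cooling: T(t) = T0 - (T0 - Tf) * (t / t_total)^3.
    Unlike linear ('natural') cooling, this keeps supersaturation roughly
    level by cooling gently while the crystal surface area is still small."""
    frac = (t_h / t_total_h) ** 3
    return temp_start_c - (temp_start_c - temp_end_c) * frac

# Hypothetical 8-hour batch cooled from 60 C to 10 C
for t in (0, 2, 4, 6, 8):
    print(f"t = {t} h: {cubic_cooling_profile(t, 8.0, 60.0, 10.0):5.1f} C")
```

Note how only about 6 degrees of the 50-degree span are removed in the first half of this hypothetical batch; the actual profile for a given product must come from the solubility and metastable-zone data described above.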

Many biomolecules, cell extracts and whole cell preparations are extremely sensitive to unexpected temperature excursions. It is therefore important to keep in mind that the thermal environment in large scale reactors can be quite different from that in laboratory glassware. One common factor often overlooked is the effect of vessel wall temperature. While the average temperature of the batch may be routinely recorded, scientists may be less likely to measure the actual surface temperature of their lab vessels. To reduce heat cycle time, operators of large scale equipment will try to maximize the temperature differential between the vessel jacket and the batch. This can mean that the batch is subjected to higher wall temperatures and for longer times than in the lab, again, opening the door for decomposition.


Vessel mixing is a particularly difficult operation to scale up. Most development scientists take for granted that they can turn up the mixing speed in their laboratory flasks if they encounter a problem such as unsuspended solids or poor dispersion of a heterogeneous reaction mixture. It is often not practical to simply "turn up the mixer" in larger industrial vessels. For one thing, mixer power increases with the fifth power of impeller diameter at constant speed, so the power required to strictly match laboratory mixing conditions can become prohibitive.

Fig. 1: Mixing Screening Tests Define Stable Operating Region

Consider two geometrically similar vessels with capacities of 1 liter and 100 liters. Both are stirred at a rotational speed of 300 rpm. Because impeller diameter grows with the cube root of volume, the 100 liter reactor's impeller is roughly 4.6 times larger, so it would experience an impeller circumferential tip speed nearly 5 times greater than the 1 liter vessel, a degree of fluid turbulence (and a mixing power input per unit volume) more than 20 times greater, and a total mixer power draw more than 2,000 times greater!

This illustrates the fact that one must carefully select the basis for mixing scale-up (writing a process recipe to stir at "300 rpm" is of little value without more information). Mixing engineers must apply a balanced approach that considers the relative importance of maintaining geometric similarity, dynamic similarity and kinetic similarity between the two processing scales.
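These similarity criteria follow directly from the classical power-law scaling relations; a minimal sketch for geometrically similar vessels, assuming constant fluid properties:

```python
def scaleup_ratios(volume_ratio, speed_ratio=1.0):
    """Classical mixing scale-up ratios for geometrically similar vessels.
    Impeller diameter D scales with the cube root of volume; then:
      tip speed        ~ N * D
      Reynolds number  ~ N * D^2
      total power      ~ N^3 * D^5   (from P = Np * rho * N^3 * D^5)
      power per volume ~ N^3 * D^2
    """
    d = volume_ratio ** (1.0 / 3.0)  # impeller diameter ratio
    n = speed_ratio
    return {
        "tip_speed": n * d,
        "reynolds": n * d ** 2,
        "power": n ** 3 * d ** 5,
        "power_per_volume": n ** 3 * d ** 2,
    }

r = scaleup_ratios(100.0)  # 1 L -> 100 L at the same rpm
print({k: round(v, 1) for k, v in r.items()})
# tip speed ~4.6x, Reynolds and power/volume ~21.5x, total power ~2154x
```

The sketch makes the central point concrete: the four criteria diverge wildly on scale-up, so a recipe can hold at most one of them constant, and the mixing engineer must choose which one matters for the process at hand.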

These classical engineering parameters are used for quantifying mixing conditions in a stirred reactor. For example, if attempting to compare a large and small vessel in terms of kinetic similarity, one could calculate impeller tip speed, or a dimensionless measure of fluid turbulence called the Reynolds number, Re. If concerned with maintaining dynamic similarity (energetics and force components), a useful quantitative parameter is the energy dissipation, often abbreviated Ei. It is a function of rotational speed, impeller geometry, batch size and batch properties. The expression for energy dissipation is given below:
Ei = NP ρ N³ D⁵ / V

Where N is the rotational speed of the mixer, D is the diameter of the impeller, V is the volume of the fluid being mixed, ρ is the fluid density and NP is an empirical power number, characteristic of the specific mixing system in use. Taking the time to measure these important variables can prove quite valuable in successfully predicting mixing performance during scale-up.
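In code form, with the fluid density ρ standing in for the "batch properties" just mentioned, the mean specific power input can be evaluated as follows (the impeller size, speed and power number below are hypothetical illustration values):

```python
def energy_dissipation(n_rps, d_imp_m, volume_m3, power_number, rho_kg_m3):
    """Mean specific power input Ei = Np * rho * N^3 * D^5 / V, in W/m^3.
    n_rps is rotational speed in revolutions per second."""
    return power_number * rho_kg_m3 * n_rps ** 3 * d_imp_m ** 5 / volume_m3

# Illustrative (hypothetical) case: a 0.10 m Rushton-style impeller
# (Np ~ 5) at 300 rpm in 10 L of a water-like fluid.
ei = energy_dissipation(n_rps=300 / 60, d_imp_m=0.10, volume_m3=0.010,
                        power_number=5.0, rho_kg_m3=1000.0)
print(f"Ei = {ei:.0f} W/m^3")
```

For these hypothetical inputs the result lands in the hundreds of W/m^3, a typical order of magnitude for stirred-tank reactions, which is a quick sanity check on the arithmetic.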

Even beyond power limitations, mixing comprises a number of physical and mechanical phenomena that do not scale up in a linear fashion. Further complicating the picture is the myriad of proprietary impeller designs and special-purpose mixers available for chemical reactors.


A pharmaceutical company was having difficulty scaling up a reactive crystallization process for a pharmaceutical product that involved mixing two disparate fluids in a semi-batch operation. Laboratory runs were reproducible, but the first pilot runs were fraught with difficulty. The company was unable to consistently produce on-spec product at the pilot scale and, as a result, was facing delays in the schedule to commercialize the product.

The chemistry and crystallization kinetics were well understood, but the high viscosity of one of the fluids involved and potential mass transfer effects were not considered in early bench-scale work. The bench runs were successful because it was possible to use very high mixing rates in laboratory equipment. In such small, well-mixed systems reaction kinetics are rate-limiting and mass transfer is usually not a factor.

A process that runs close to the edge of the acceptable operating region may produce consistent results at small scale, but as scale increases, mass transfer limitations can become controlling parameters. In the example under discussion, poor mixing dynamics tended to move the process into an unacceptable operating regime. Engineers determined that non-homogeneity and the resulting concentration gradients caused by poor bulk mixing were responsible for the irreproducibility observed at the pilot scale. Such poorly mixed systems are difficult or impossible to control. Furthermore, a direct linear scale-up of the bench process would have required mixing energy inputs that were prohibitive.

In this case, the reaction occurred on a much faster time-scale than the bulk mixing dynamics. This situation can be expressed in terms of the Damköhler number, Da, the ratio of a characteristic mixing time (tmix) to a characteristic reaction time, tRx. Ideally, to ensure that the process remains reaction-limited upon scale-up, mass transfer should be fast compared to the reaction rate; i.e., the Damköhler number should be very small:

Damköhler number: Da = tmix / tRx << 1
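With measured or estimated characteristic times, the regime check is a one-liner; a sketch using hypothetical blend and reaction times:

```python
def damkohler(t_mix_s, t_rx_s):
    """Da = t_mix / t_rx. Da << 1 means mixing is fast relative to
    reaction, so the process stays reaction-limited on scale-up."""
    return t_mix_s / t_rx_s

# Hypothetical values: a 2 s blend time at the bench against a 60 s
# characteristic reaction time gives Da ~ 0.03 (safely reaction-limited);
# a 45 s blend time at pilot scale pushes Da toward 1, where concentration
# gradients from bulk mixing start to control the outcome.
print(damkohler(2.0, 60.0), damkohler(45.0, 60.0))
```

The pilot-scale failure described above is exactly this transition: tRx stayed the same on scale-up while tmix grew, driving Da out of the << 1 regime.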

The goal then was to achieve complete mixing (which included good dispersion of the entering feed stream, good bulk mixing in the vessel, and sufficient micromixing to facilitate the reaction) at an energy input that was both technically feasible and economical for large-scale processing. The exercise was further complicated by the non-Newtonian (shear-thinning) rheology of the system, and the need to avoid transitional regimes (such as near the impeller periphery) while still providing a high shear environment for blending the two very different fluids.

The first task was to identify the boundaries of the acceptable operating region, that is, to determine where the process failed due to poor mixing. The chemical phase diagram for product crystallization had been identified earlier using high-throughput screening techniques. In similar fashion, screening tests were performed to define stable operating regions for dispersion and mixing.

These screening tests were performed via a PD-Scale unit (Process Development and Scale-up Platform, IMPACT Technology Consultants, Lincoln, Mass.). The PD-Scale unit is a proprietary equipment and test platform with two mixing vessels designed to be proportionally equivalent in mixing power (P/V), L/D ratio, etc., yet at two different scales (in this case 100 ml and 1 liter).

The PD-Scale unit provided a test platform to verify the mass transfer-limited nature of the process at two different scales and allowed engineers to model and identify the regions where mixing began to affect performance.

In Figure 1, the mixing studies are compared to the screening studies initially performed for crystallization optimization. Just as the initial chemistry studies defined the appropriate concentration and temperature conditions, the mixing studies identified the operating regime where good crystallization could be obtained reproducibly. This helped define the important parameters for mixing-vessel geometry, impeller type, agitation rate, etc.

Once these operating limits were identified, and the governing parameters for mixing were established, a novel scale-up design was proposed based on a modification of existing mixing equipment. The successful approach specified two mixing zones. Feed solutions were injected into the high-shear environment of a small-scale, high-speed dispersion mixer. A second, lower-speed mixer and a pump-around loop with an in-line homogenizer provided bulk mixing. This arrangement ensured intense mixing precisely where it was needed to improve performance and maximize yield, eliminating the need to apply such a high mixing rate to the entire batch mass, which would have been impractical. The design generated consistently high quality product at pilot scale and is ultimately scalable to commercial production.


Something often under-appreciated in process development is the importance of proper "scale-down." Scale-down is the exercise of selecting the appropriate bench-scale equipment, operating conditions and mathematical models to successfully simulate pilot or production-scale operations in the lab. It helps one collect meaningful data at the bench that is translatable to larger scale equipment.

Let's consider a mixing scale-up case where your goal is to maintain constant energy dissipation at both lab and plant scales (this is often the most important consideration in scaling up heterogeneous systems). The manufacturing site and the equipment train have already been earmarked for the campaign, and thus you know the geometry, dimensions, impeller characteristics, and operating speed of the manufacturing vessel. This information, together with the fluid properties, allows you to calculate the energy dissipation to be expected from the vessel. It then becomes your task to set up a laboratory-scale reactor and operate it in such a way as to achieve this same energy dissipation.

The laboratory vessel will most likely not have the same geometry or impeller type as the manufacturing vessel. Nonetheless, with a properly calibrated system and good controls, you can operate that laboratory vessel at a nearly identical level of energy dissipation. This provides the opportunity to test the reaction under mixing conditions that will quite accurately predict performance at the large scale. The scale-down system can then help one understand how to generate the same mixing energy input, the same degree of fluid turbulence, or the same bulk "blend time" at large scale as in the laboratory.
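Using the Ei = NP ρ N³ D⁵ / V form discussed earlier, the matching lab speed follows by simple algebraic inversion. A sketch with hypothetical vessel data (the impeller diameters, power numbers and batch sizes below are illustration values, not a real equipment train):

```python
def lab_speed_for_matching_ei(ei_target_w_m3, d_lab_m, v_lab_m3,
                              np_lab, rho_kg_m3):
    """Invert Ei = Np * rho * N^3 * D^5 / V for the lab rotational speed.
    Returns the required speed in rpm."""
    n_rps = (ei_target_w_m3 * v_lab_m3
             / (np_lab * rho_kg_m3 * d_lab_m ** 5)) ** (1.0 / 3.0)
    return n_rps * 60.0

# Hypothetical plant vessel: Np = 0.8 hydrofoil, D = 1.2 m, 8 m^3 batch,
# 90 rpm, water-like fluid.
plant_ei = 0.8 * 1000.0 * (90 / 60) ** 3 * 1.2 ** 5 / 8.0

# Hypothetical lab vessel: Np = 5 Rushton-style, D = 0.06 m, 1.5 L batch.
lab_rpm = lab_speed_for_matching_ei(plant_ei, 0.06, 0.0015, 5.0, 1000.0)
print(f"plant Ei = {plant_ei:.0f} W/m^3 -> lab speed = {lab_rpm:.0f} rpm")
```

Note that the matched lab speed (several hundred rpm here) differs substantially from the plant's 90 rpm, which is the whole point: equal energy dissipation, not equal rpm, is the scale-down criterion in this scenario.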


Often, when problems are discovered on scale-up, the solution requires modifying the process in one way or another. Any such change late in development, be it in equipment type, processing strategy, operating conditions, or process recipe, can significantly extend the development and approval timeline, and can even force a repeat of otherwise successful clinical trials. This vicious and often avoidable cycle has killed many promising drug candidates, often with the final word being that "the process does not scale up." In many cases, the process could have been scaled up if appropriate rules had been applied and research conducted early in development. To avoid or mitigate this scenario, think about scale-up early in development. In fact, the process should be designed from the beginning with scalability in mind. This enables the development team to perform studies to collect the necessary data, and/or develop the appropriate models to facilitate transfer to pilot and commercial equipment. Any tough problems are identified and solved early, at a significantly lower cost than if they were left to surface later in the program.

The work invested up front to understand the key process parameters pays big dividends by providing a clear road map for process development, whether it's done in-house or out-sourced. This effort can help ensure that the ultimate commercial process is robust, reproducible and economical, and will operate within the constraints of cGMP. It can significantly reduce time-to-market by shortening the development period, and in some cases it can yield significant cost savings by identifying problems before go/no-go decisions are made.

Also, scalability is a critical factor for large pharmaceutical companies looking to invest in promising new drug product companies. Having the process scale-up road map and supporting data from the very beginning provides potential partners with added confidence in the company's ability to quickly and safely complete clinical trials and commercialize the process.


We have seen that scale-up means much more than simply buying a bigger tank. Instead, a trouble-free scale-up requires a comprehensive understanding of the fundamental principles and variables that govern the process. The critical process parameters must be identified and their limits established early on. This will not only ease scale-up, but will lay the groundwork for the all-important process validation that will follow.

Good process scalability is also critical to meeting company milestones because it ensures that on-spec product is produced consistently at increasingly larger operating scales. When unexpected process or equipment changes are required late in the development cycle, they inevitably delay getting an otherwise efficacious and promising new drug candidate to the marketplace. Clinical trials may have to be halted or even repeated because of these unanticipated problems.

Considering process scalability at an early stage is a minimal investment that can pay off handsomely as a new drug process or formulation proceeds smoothly through clinical trials and into production and commercialization. -PFQ
