Saturday, October 24, 2009

Kinetics of Residual Hydrogen Peroxide in Presence of Excipients and Preservatives

Vaccine samples were spiked with 10 ppm hydrogen peroxide and stored at 4, 25, and 37°C for approximately 90 days with and without trace amounts of thimerosal present. Thimerosal dramatically reduced hydrogen peroxide levels in samples stored at 37°C.

Quantitation of residual hydrogen peroxide (H2O2) and evaluation of its impact on product stability are necessary, as unwanted H2O2 can be introduced during the manufacturing of pharmaceuticals, biologics, and vaccines. A sensitive and convenient microplate-based method with fluorescence detection for H2O2 quantitation was recently reported (Towne et al., 2004, Anal Biochem 334: 290-296).

This method was found to be highly robust and reproducible, with a limit of detection of 0.015 ppm and a limit of quantitation of 0.025 ppm (in water). The relatively small sample requirements and amenability to automation make this assay an attractive tool for detecting residual H2O2 levels. Without additional manipulation, the assay can be conducted on heterogeneous solutions with a significant degree of turbidity, such as suspensions or aluminum-containing adjuvants.

The quantitation of H2O2 and its decomposition kinetics were also studied in the presence of two common vaccine preservatives (thimerosal and phenol) and eight commonly used excipients (polyols). Over time, there is a distinct, temperature-dependent decrease in H2O2 recovered in thimerosal- and phenol-containing samples versus preservative-free controls. Based on the half-life of spiked H2O2, the decay rates in the eight polyols tested were found to be: ribose > sucrose > (glycerol, glucose, lactose, mannitol, sorbitol, and xylose).
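Half-life comparisons like the one above presume a decay model; residual H2O2 loss is commonly treated as (pseudo-)first-order, in which case the half-life falls out of a log-linear fit of concentration versus time. A minimal sketch under that assumption (the time points and concentrations below are hypothetical, not data from the study):

```python
import math

def half_life_first_order(times_h, concs_ppm):
    """Estimate a first-order decay half-life from a least-squares
    fit of ln(concentration) versus time. Assumes all concs > 0."""
    n = len(times_h)
    ln_c = [math.log(c) for c in concs_ppm]
    t_mean = sum(times_h) / n
    y_mean = sum(ln_c) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, ln_c))
    slope /= sum((t - t_mean) ** 2 for t in times_h)
    k = -slope                    # first-order rate constant, 1/h
    return math.log(2) / k        # t1/2 = ln 2 / k

# Hypothetical 10 ppm spike halving roughly every 24 h
print(round(half_life_first_order([0, 24, 48, 72], [10.0, 5.0, 2.5, 1.25]), 1))  # → 24.0
```

Faster decay (e.g., in a ribose-containing sample) would show up directly as a shorter fitted half-life.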

Towne V, Oswald CB, Mogg R, et al. Measurement and decomposition kinetics of residual hydrogen peroxide in the presence of commonly used excipients and preservatives. J Pharm Sci. 2009;98:3987-3996. Correspondence to Victoria Towne, Department of Bioprocess and Bioanalytical Research, Merck Research Laboratories, at (215) 652-5370.

Analysis of Heparins and Potential Contaminants Using 1H-NMR and PAGE

Chemical structures of major repeat units of the sodium salts of (A) heparin, (B) chondroitin sulfate (R1 = SO3-Na+, R2 = H for CSA; R1 = H, R2 = SO3-Na+ for CSC), (C) dermatan sulfate, and (D) oversulfated chondroitin sulfate.

In 2008, lots of heparin active pharmaceutical ingredient (API) were associated with anaphylactoid-type reactions. Oversulfated chondroitin sulfate (OSCS), a semi-synthetic glycosaminoglycan (GAG), was identified as a contaminant, and dermatan sulfate (DS) as an impurity.

While DS has no known toxicity, OSCS was toxic, leading to patient deaths. Heparins prepared before these adverse reactions needed to be screened for impurities and contaminants. Heparins were analyzed using high-field 1H-NMR spectroscopy. Heparinoids were mixed with a pure heparin and analyzed by 1H-NMR to assess the utility of 1H-NMR for screening heparin adulterants.

Sensitivity of heparinoids to deaminative cleavage, a method widely used to depolymerize heparin, was evaluated with polyacrylamide gel electrophoresis to detect impurities and contaminants, giving limits of detection (LOD) ranging from 0.1% to 5%. Most pharmaceutical heparins prepared between 1941 and 2008 showed no impurities or contaminants. Some contained DS, CS, and sodium acetate impurities.

Heparin prepared in 2008 contained the OSCS contaminant. Heparins adulterated with heparinoids showed additional peaks in their high-field 1H-NMR spectra, clearly supporting NMR for monitoring of heparin API with an LOD of 0.5-10%. Most of these heparinoids were stable to nitrous acid treatment, suggesting its utility for evaluating impurities and contaminants in heparin API.

Zhang Z, Li B, Suwan J, et al. Analysis of pharmaceutical heparins and potential contaminants using 1H-NMR and PAGE. J Pharm Sci. 2009;98:4017-4026. Correspondence to Robert J. Linhardt, Department of Chemistry and Chemical Biology, Rensselaer Polytechnic Institute, at (518) 276-3404.

SIMANIM Particles for Modified-Release Delivery of Antibodies

Scanning electron micrograph of the spray-dried IgG formulation (a), and transmission electron microscopy image of poly(lactide-co-glycolide) nanoparticles produced upon incubation of the spray-dried microparticulate formulation in aqueous media (b).

Simultaneously Manufactured Nano-In-Micro (SIMANIM) particles for the pulmonary delivery of antibodies have been prepared by spray-drying a double emulsion containing human IgG (as a model antibody), lactose, poly(lactide-co-glycolide) (PLGA), and dipalmitoylphosphatidylcholine (DPPC). The one-step drying process produced microparticles of a diameter suitable for inhalation that, upon contact with aqueous media, partially dissolve to form nanoparticles 10-fold smaller than the original particle diameter.

Continuous release of the model antibody was observed for 35 days in pH 2.5 release media, and the released antibody was shown to be stable and active by gel electrophoresis, field-flow fractionation, and enzyme-linked immunosorbent assay. Adding 1% L-leucine to the emulsion formulation and blending the SIMANIM particles with 1% magnesium stearate achieved a fine particle fraction of 60% when aerosolised from a simple, capsule-based, dry powder inhaler device. SIMANIM particles could be beneficial for the delivery of antibodies targeted against inhaled pathogens or other extracellular antigens, and have potential applications in the delivery of a wide range of other biopharmaceuticals and certain small-molecule drugs.

Kaye RS, Purewal TS, Alpar HO. Simultaneously manufactured nano-in-micro (SIMANIM) particles for dry-powder modified-release delivery of antibodies. J Pharm Sci. 2009;98:4055-4068. Correspondence to H. Oya Alpar, Centre for Drug Delivery Research, The School of Pharmacy, University of London, at +44-20-7753-5928.

Improved Permeation Enhancers for Transdermal Drug Delivery

Permeation profiles of melatonin in the presence of chemical penetration enhancers.

One promising way to breach the skin's natural barrier to drugs is by the application of chemicals called penetration enhancers. However, identifying potential enhancers is difficult and time consuming. We have developed a virtual screening algorithm for generating potential chemical penetration enhancers (CPEs) by integrating nonlinear, theory-based quantitative structure-property relationship models, genetic algorithms, and neural networks.

Our newly developed algorithm was used to identify seven potential CPE molecular structures. These chemical enhancers were tested for their toxicity on (a) mouse embryonic fibroblasts (MEFs) with MTT assay, and (b) porcine abdominal skin by histology using H/E staining at the end of a 48-h exposure period to the chemicals. Further, melatonin permeability in the presence of the enhancers was tested using porcine skin and Franz diffusion cells. Careful toxicity tests showed that four of the seven general CPEs were nontoxic candidate enhancers (menthone, 1-(1-adamantyl)-2-pyrrolidinone, R(+)-3-amino-1-hydroxy-2-pyrrolidinone, and 1-(4-nitro-phenyl)-pyrrolidine-2,5-dione). Further testing of these four molecules as potential melatonin-specific CPEs revealed that only menthone and 1-dodecyl-2-pyrrolidinone provided sufficient enhancement of the melatonin permeation.

The results from our permeability and toxicity measurements validate the ability of our virtual screening algorithm to generate potential chemical enhancer structures, in addition to contributing further experimental data to the body of knowledge.

Godavarthy SS, Yerramsetty KM, Rachakonda VK, et al. Design of improved permeation enhancers for transdermal drug delivery. J Pharm Sci. 2009;98:4085-4099. Correspondence to Khaled A.M. Gasem, School of Chemical Engineering, Oklahoma State University, at (405) 744-5280.

Dissolution Profiles From Enteric-Coated Dosage Forms

Dissolution profiles of theophylline, antipyrine and acetaminophen from enteric-coated granules in simulated intestinal fluid of pH 6.8 (paddle method, 900 mL, 50 rpm). The black circle represents AS-LG-coated granules; the black triangle represents AS-MG-coated granules; and the black square represents AS-HG-coated granules. Each value represents the mean ± s.d. of six experiments.

We examined the in vitro dissolution-in vivo absorption correlation (IVIVC) for enteric-coated granules containing theophylline, antipyrine, or acetaminophen as model drugs with high solubility and high permeability. More than 85% of each drug was released from granules coated with hypromellose acetate succinate (HPMCAS) (AS-LG grade, which dissolves at pH above 5.5) at a mean dissolution rate of more than 5%/min after a lag time of less than 4 min in simulated intestinal fluid of pH 6.8. The lag time was significantly extended, and the dissolution rate significantly reduced, when AS-LG was replaced with AS-HG (a grade of HPMCAS that dissolves at pH above 6.8). Enteric-coated granules were administered intraduodenally to anesthetized rats.

The statistical significance of differences in in vitro lag time between AS-LG- and AS-HG-coated granules was consistent with that observed in vivo for all drugs. Significant differences in dissolution rates between granules also corresponded to those in absorption rates calculated using a deconvolution method, and both parameters had comparable absolute values, except in the case of antipyrine-containing granules with relatively fast dissolution rates. Thus, a good IVIVC was generally obtained; however, the exception suggests the importance of developing a dissolution test that fully reflects the in vivo situation.
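As an illustration of the deconvolution step, the classic Wagner-Nelson method recovers the fraction of dose absorbed from a plasma concentration-time profile under a one-compartment assumption. This is a generic sketch, not necessarily the method the authors used; the elimination rate constant and the concentration data below are hypothetical:

```python
def wagner_nelson(times, concs, ke):
    """Fraction of dose absorbed over time (Wagner-Nelson method,
    one-compartment model with first-order elimination rate ke)."""
    # Cumulative AUC by the trapezoidal rule
    auc = [0.0]
    for i in range(1, len(times)):
        auc.append(auc[-1] + 0.5 * (concs[i] + concs[i - 1]) * (times[i] - times[i - 1]))
    # Extrapolate AUC to infinity from the last observed concentration
    auc_inf = auc[-1] + concs[-1] / ke
    # F(t) = (C(t) + ke * AUC(0-t)) / (ke * AUC(0-inf))
    return [(c + ke * a) / (ke * auc_inf) for c, a in zip(concs, auc)]

# Hypothetical plasma profile after intraduodenal dosing (h, ug/mL)
times = [0, 0.5, 1, 2, 4, 8]
concs = [0.0, 4.0, 6.0, 5.0, 3.0, 1.0]
frac_absorbed = wagner_nelson(times, concs, ke=0.3)
```

Note that, by construction, the fraction absorbed reaches exactly 1 at the last sampling point, because the terminal AUC is extrapolated as C_last/ke.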

Sakuma S, Ogura R, Masaoka Y, et al. Correlation between in vitro dissolution profiles from enteric-coated dosage forms and in vivo absorption in rats for high-solubility and high-permeability model drugs. J Pharm Sci. 2009;98:4141-4152. Correspondence to Shinji Sakuma, Faculty of Pharmaceutical Sciences, Setsunan University, at +81-72-866-3124.

Metronidazole Loaded Pectin Microspheres for Colon Targeting

The shape and surface morphology of the pectin microspheres were studied using scanning electron microscopy. Samples were prepared by lightly sprinkling the microsphere powder on double-sided adhesive tape stuck to an aluminum stub. The stubs were then coated with gold.

A multiparticulate system with pH-sensitive properties and specific enzymatic biodegradability was developed for colon-targeted delivery of metronidazole. Pectin microspheres were prepared using an emulsion-dehydration technique.

These microspheres were coated with Eudragit S-100 using an oil-in-oil solvent evaporation method. SEM was used to characterize the surface of the microspheres, and a distinct coating was visible. In vitro drug release studies showed no drug release at gastric pH; however, continuous release was observed from the formulation at colonic pH. Further, drug release was higher in the presence of rat caecal contents, indicating the effect of colonic enzymes on the pectin microspheres.

In vivo studies, performed by assessing drug concentrations in various parts of the GIT at different time intervals, demonstrated the formulation’s potential for colon targeting. Hence, it can be concluded that Eudragit-coated pectin microspheres can be used for colon-specific drug delivery.

Caution Urged on Dose-Dumping Drugs

Manufacturers should consider ethanol vulnerability at design stage

A Swedish researcher has concluded that controlled release drugs that are vulnerable to alcohol-induced “dose dumping”—releasing the drug faster and in higher concentrations than is safe—should be withheld from the market or reformulated.

Hans Lennernäs, PhD, professor of biopharmaceutics, department of pharmacy, Uppsala University, Uppsala, Sweden, recommends that pharmaceutical companies “avoid developing and marketing oral controlled release products whose in vivo dissolution and/or absorption is sensitive to intake of alcoholic beverages.”

Further, “before any work and investment of a new oral controlled release product is initiated, it is crucial [that manufacturers] consider the ethanol vulnerability of the pharmaceutical formulation that is on the design table,” Dr. Lennernäs told Pharmaceutical Formulation & Quality. The research was published recently in Molecular Pharmaceutics.

An Important Problem

Matthew Traynor, PhD, a senior lecturer in pharmaceutics in the school of pharmacy at the University of Hertfordshire in Hatfield, England, agrees this is an important problem. “I strongly believe that no more formulations of this type should be made or approved without full and rigorous checking for this problem,” Dr. Traynor told Pharmaceutical Formulation & Quality.

The problem does not appear to be widespread, however, he said. “The number of potentially problematic formulations reported, combined with the fact that alcohol is generally contraindicated with these formulations anyway, means that this is not a widespread problem that will have a significant impact on a large number of patients,” Dr. Traynor said.

Further, although several authors have reported in vitro data with isolated examples of alcohol-induced dose dumping at concentrations relevant to real-life alcohol consumption, the number of adverse events reported for products currently on the market is low, Dr. Traynor said. “A problem exists. No more formulations of this type with excipients susceptible to alcohol should be approved without more rigorous testing—perhaps at all—but a full recall of all products is an overreaction.

“However, in light of the numerous reports of this phenomenon that have been observed in vitro, a more rigorous examination of new products in development and seeking regulatory approval is a wise move,” he added. “The FDA is actively seeking a solution as to what is the best, most robust method for performing such studies.”

Keys to Dose-Dumping Risk

According to Dr. Lennernäs, several key factors determine a drug’s dose-dumping risk:

  • the solubility of the pharmaceutical excipients;
  • the solubility of the drug;
  • the formulation’s drug release mechanism;
  • the pharmacological effects of the drug; and
  • the gastrointestinal factors critical for dissolution, transit, and absorption.

“The risk for dose dumping is high when these factors are in interplay,” he said.

If a formulation proves susceptible to alcohol as described by the U.S. Food and Drug Administration’s established in vitro guideline, the next step is in vivo testing in human volunteers, Dr. Lennernäs said.

The manufacturer should reformulate the product if in vivo testing reveals an increased absorption rate in healthy people who have consumed alcohol, Dr. Lennernäs said. Reformulation would increase the safety of the drug supply in any country.

This testing is important because it is impossible to predict when and in which patients dose dumping will occur, Dr. Lennernäs said. The alcohol-drug interaction depends on drinking behavior and highly variable gastrointestinal factors critical for dissolution, transit, and absorption.

The patient who takes an ethanol-vulnerable controlled release drug just before bedtime with a large volume of strong alcohol—say, two double whiskeys—is at the greatest risk, he said. “In this case, the patient’s reclining posture may prolong the gastric residence of the drug, making a harmful interaction more likely. If the ethanol-vulnerable drug is an opioid, the effect of the opioid itself may also prolong gastric residence.” Elderly patients are at substantial risk for dose dumping because they have less stable gastrointestinal function that may be exacerbated by the effects of other drugs they are taking.

Cord Blood Cells Converted into Embryonic-Like Cells

Cells free of genetic mutations found in converted adult cells

Reprogramming cells is not new science. Researchers have been taking adult cells and converting them into embryonic-like cells for several years. In a new twist on a familiar theme, however, a research team has reprogrammed human cord blood cells into embryonic-like cells.

Cord blood induced pluripotent stem cells (iPSC) offer two advantages, said Juan Carlos Izpisúa Belmonte, PhD, a professor in the gene expression laboratory at the Salk Institute for Biological Studies.

First, reprogrammed cord blood cells are mutation free. “When we become adults, we develop mutations in our cells. If you reprogram those cells, the mutation will stay there,” Dr. Izpisúa Belmonte told Pharmaceutical Formulation & Quality. “[Cord blood cells] have not accumulated any genetic stress because of living.”

Second, cord blood cells require less immunological matching, said Dr. Izpisúa Belmonte, who led the study. When reprogrammed cells taken from the skin and hair are used, they must be immunologically matched to the receiving patient or they will be rejected. Cord blood cells do not stimulate rejection by the immune system. “You don’t need to have a full matching requirement between the patient and donor for the graft to work,” he said. The research was published recently in Cell Stem Cell.

Cord Blood Readily Available

These cells offer a practical advantage, too. Currently, there are more than 400,000 cord blood units available worldwide. By now, the cord blood banks around the globe cover most people with regard to immunological matching, Dr. Izpisúa Belmonte said. “If you were to need a cord blood transplant, the right one for you is probably already in a bank, whether it is in London or New York.”

This easy accessibility would allow researchers to reprogram the most common haplotypes, making them available for most of the world’s population, Dr. Izpisúa Belmonte said. This would significantly reduce the number of cell lines needed for human leukocyte antigen matching. “I’m not certain about the exact number, but I think to cover 60% to 70% of the world’s population, you would only have to reprogram 500 cord blood cell types,” he said. “That’s manageable.” These embryonic stem cells could be stored in a bank, much like the cord blood bank model.

Reprogrammed cord blood cells offer great promise, said another researcher. “Although all therapeutic options are highly speculative and premature at this time, because umbilical cord blood cells are so widely banked, generating iPSC from them might make them even more valuable as a source of pristine, versatile stem cells,” said George Q. Daley, MD, PhD, an associate professor in the department of biological chemistry and molecular pharmacology at Harvard Medical School.

Only Two Factors Needed

It takes about two weeks to reprogram cord blood cells, which is quicker than reprogramming adult cells. In addition, Dr. Izpisúa Belmonte and colleagues were able to reduce the number of factors needed from four to two. “This is a technical advancement. The four factors we used to reprogram cells, they are oncogenes—they can induce cancer. So if we eliminate some of them, well, we reduce the risk of cancer.”

It remains unclear exactly how these factors reprogram a cell. Until researchers solve the “black box” of reprogramming, the field cannot advance, Dr. Izpisúa Belmonte said. “Yes, you’ve reprogrammed cord blood cells, you generate a bank, but if these cells have an ability to induce cancer, you are not going to transplant that cell into a human being.” These cells demonstrate enormous potential, but they are still years from clinical use.

Cost-Effective Tools for Acetonitrile Shortage

Situation is an opportunity for optimization and innovation

With no end in sight to the worldwide shortage of acetonitrile, the popular high-performance liquid chromatography (HPLC) solvent, laboratories are in search of cost-effective solutions to manage the impact on their research and business timelines. The emerging innovations represent yet another example of how “greener” and more cost-effective laboratory practices are advancing the field of analytical chemistry, especially in HPLC analysis.

The pharmaceutical industry consumes approximately 70% of the world’s acetonitrile supply, using the solvent in a range of applications in both manufacturing and analytical settings. Acetonitrile is commonly used in gas chromatography (GC) analysis, ultraviolet (UV) analysis, thin-layer chromatography (TLC), and HPLC applications, as well as other wet chemistry test methods in the laboratory. Acetonitrile is the chosen solvent for today’s HPLC analyses, largely due to its miscibility with water and most organic solvents as well as its low toxicity, viscosity, and chemical reactivity. Acetonitrile is also used in the synthesis and manufacturing of drug substances and products.

The Great Acetonitrile Shortage, as it has come to be known by suppliers, arose due to a series of events that occurred in 2008. First, Chinese production of acetonitrile dropped significantly as the country prepared to host the 2008 Summer Olympics in Beijing. Chinese factories in the vicinity, including China’s largest acetonitrile producer, were shut down to minimize air pollution. After the Olympics, newly implemented trade restrictions significantly limited acetonitrile export from China. At the same time, active hurricanes in the Gulf of Mexico interrupted acetonitrile manufacturing in Texas. Possibly the most substantial and long-lasting impact on the acetonitrile supply was triggered by the worldwide economic slowdown that started in 2008.

Acetonitrile is a by-product of the synthesis of acrylonitrile, which manufacturers use to produce acrylic fibers and acrylonitrile-butadiene-styrene resins for plastics in automobiles, carpeting, luggage, telephones, computer housings, and other products. Due to the economic downturn, consumer purchasing and manufacturing production of these items have slowed. This shrinking demand prompted the world’s acrylonitrile producers to slow production; now, fewer resources are being invested in collecting and purifying acetonitrile to the high-purity grades the pharmaceutical industry requires.

Impact in the Lab

Consequently, the prices for high quality and HPLC-grade acetonitrile skyrocketed in 2009, with acetonitrile prices increasing from $30/liter to $100/liter between July and September. As the major acetonitrile producers ration their supplies, they have started advising customers to develop alternative methods in order to eliminate or reduce acetonitrile use. Although the long-term forecast of cost and availability is still uncertain, the general feeling is that acetonitrile prices will continue to rise. Many labs will find it difficult to acquire needed quantities in a cost-effective way.

Because the pharmaceutical industry relies on acetonitrile for a wide range of applications, including many that must be conducted under current good manufacturing practices (cGMP), the scarcity of this single industrial chemical has the potential to delay progress. From a drug development standpoint, the shortage can affect the timeline for application approvals and delay market launches. For agency-approved products, it will become the norm for companies to manufacture fewer batches to reduce the amount of testing.

The U.S. Food and Drug Administration (FDA) has received numerous inquiries related to the acetonitrile shortage, primarily with regard to the solutions that companies may apply to already validated methods requiring acetonitrile. FDA response has been cautious: “Regardless of the changes a firm makes to address the shortage, appropriate method validation and compliance with relevant current good manufacturing practices (CGMPs) are necessary.”1

Changes made to existing validated test methods within an approved application—New Drug Application (NDA) or Abbreviated New Drug Application (ANDA)—to accommodate the use of less acetonitrile or an alternative solvent may be as simple as a minor change in the annual report for the given drug application, as long as the change meets the criteria stated in the FDA Guidance for Industry: Changes to an Approved NDA or ANDA and in the Code of Federal Regulations at 21 CFR 314.70(d)(2)(vii). The guidance and CFR allow “A change in an analytical procedure used for testing … that provides the same or increased assurance of the identity, strength, quality, purity, or potency of the material being tested as the analytical procedure described in the approved application…” If the same or increased assurance is not achieved, a prior approval supplement will be required.1

As the acetonitrile shortage continues, the pharmaceutical industry is motivated to locate both short- and long-term solutions that will minimize reliance on acetonitrile.

Recommended Solutions


One possible solution is outsourcing. Companies using acetonitrile with cGMP-validated HPLC methods that have already been submitted in an application have two options. These companies can continue to pay the current high—and escalating—prices to secure a continued acetonitrile supply, or they can modify their methods to eliminate or reduce their acetonitrile use based on a risk-benefit analysis.

In the latter instance, companies without extensive understanding of the regulatory guidelines and HPLC technology may strategically opt to partner with an analytically focused contract laboratory facility that is versed in up-to-the-minute regulatory guidelines and HPLC method optimizations. By outsourcing to a contract laboratory, cGMP HPLC projects can minimize the delays that many small and large pharmaceutical companies are experiencing because of the shortage.

For companies committed to finding a long-term, cost-effective solution that minimizes their use of acetonitrile as an HPLC solvent, a contract laboratory can explore replacing acetonitrile with a more widely available solvent or identifying a method optimization to reduce overall solvent consumption significantly.

Another potential solution is solvent replacement, but three fundamental factors must be considered: the chemical properties of the solvent, the physical properties of the solvent, and the effects these properties have on the chromatographic process (e.g., separation, detection limits, and analytical reproducibility). Unfortunately, acetonitrile has no equivalent substitute in the reverse-phase (RP) HPLC ultraviolet (UV) application, where it is employed the most. The superior UV absorbance characteristics and solubilizing properties of acetonitrile are unmatched.

Depending upon the chromatography type and the detection wavelengths used, it may be possible to replace acetonitrile with methanol or with a longer-chain alcohol; however, because of methanol’s significant absorbance up to 215 nm, its substitution for acetonitrile is restricted either to working above 235 nm or to limiting the methanol level in the mobile phase to less than 15% at 215 nm. Tetrahydrofuran (THF) is also a viable substitute, although drawbacks associated with unpreserved or UV-grade THF make it significantly less suitable than methanol.

Solvent Reduction

Because solvent replacement can substantially affect method performance and specificity/robustness, it is not technically feasible in many situations. It is often less complicated to optimize a method that will lower solvent consumption. Reduced consumption patterns are further supported by many pharmaceutical companies’ commitments to greener strategies in an effort to minimize pollution and waste. There are three approaches to reducing acetonitrile use: simple changes to what occurs pre- and post-separation, method changes that reduce the overall amount of the mobile phase required, and a reduction of the percent of acetonitrile required for effective separation.

First, an analysis of what occurs pre- and post-separation can lead to significant reductions in acetonitrile use. For RP column equilibration, for example, most modern columns can be equilibrated using only 10 column volumes. Methods should also be evaluated to minimize run times after final peak elution, possibly using more needle wash capabilities prior to the next injection. Finally, for optimal solvent conservation, solvent recycling technologies that collect acetonitrile are an option as long as the components of the mobile phase remain separate.

Method changes that can reduce acetonitrile usage can be grouped into those that may and those that may not affect specificity/robustness. One of the most common changes that reduces the amount of mobile phase appreciably without significantly affecting specificity/robustness is to reduce the column’s internal diameter (ID). For example, a 2.1 mm ID column consumes nearly five-fold less mobile phase than the more commonly used 4.6 mm column; this represents an 80% reduction in acetonitrile usage.

This approach, however, requires instrument parameter adjustments in the method (i.e., flow rate) as well as to the analytical system (i.e., smaller diameter tubing, connectors, and microflow detector cells) to achieve the separation and pressure criteria required by the method. When gradient programs are required, the dwell volume should also be scaled down using smaller-volume mixing chambers.

Alternate HPLC Applications

Although lowering the column ID is the easiest approach, it does present limitations. An alternative is ultra-high-performance liquid chromatography (UHPLC), which uses smaller particle sizes and smaller columns. The UHPLC approach minimizes solvent usage while optimizing peak separation. Combined with new column technologies, UHPLC makes HPLC separations far more efficient; separations that were not possible in the past are now achievable—with less solvent use.

In addition to UHPLC, technological advances in HPLC packing materials, such as fused-core particle technologies, have been specially developed for very fast chromatographic separations and universal detection. While UV detection is the most widely used HPLC detection mode, it has significant limitations because molecular structure dictates the absorbance of UV light. By using universal detector technologies such as evaporative light scattering detectors and chemiluminescent nitrogen detectors, scientists can measure a response that is proportional to the amount of the analyte in the sample and remains relatively consistent regardless of the compound. The result is that universal detection can “see” any non-volatile analyte, including those without chromophores, thus reducing dependence on solvents like acetonitrile and offering a wider range of usable solvents.

The pharmaceutical industry is under pressure to find cost-effective solutions for the acetonitrile shortage. Aligning with a reputable and knowledgeable contract laboratory can enhance your ability to select the best strategy to meet your unique product goals and testing timeline. With appropriate guidance, a successful acetonitrile reduction or replacement program can become a competitive advantage. Moreover, reducing acetonitrile use is consistent with a broader move towards green industry practices by significantly reducing waste and inefficiency. In short, the global acetonitrile challenge is an opportunity for optimization and innovation.

FORMULATION - RTU Drug Products | Ensure Safety, Efficacy of Ready-to-Use IV Drug Products

Stability considerations are key

Editor’s Note: This is the second in a three-part series on ready-to-use parenteral products. The first part appeared in our September issue, and the third part will be posted on our Web site when our December/January issue goes online in late December.


Ready-to-use (RTU) intravenous drug products are pre-mixed solutions of drug and intravenous diluents that are typically packaged in 50 mL to 1,000 mL flexible plastic containers. Key considerations in the development of intravenous RTU drug products have been described previously.1 After the formulation and the container system have been selected and the analytical methods validated, the manufacturer must conduct registration stability studies to demonstrate the product’s acceptability over its intended shelf life. These study data are included in the regulatory filing.

In addition, stability study results help set or refine appropriate specifications and establish the shelf life applicable to all future commercial batches. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Q1A (R2) guidance document offers guidance on stability testing for new drug substances and products and provides directions on what should be included in stability submission packages.2 In this article, we discuss considerations in designing stability studies, data evaluation, and expiration dating for parenteral products—with an emphasis on intravenous RTU drug products in plastic containers.

Stability Study Design

For U.S. new drug applications, typically three batches per drug product configuration are required, with data through 12 months at the long-term (25°C/60% relative humidity [RH]), six months at the accelerated (40°C/75% RH), and 12 months at the intermediate (30°C/65% RH) storage conditions, if applicable. Two of the three batches placed on stability should be manufactured at no less than 10% of the intended commercial batch size (i.e., at least pilot scale); the third batch may be smaller.

Stability study results help set or refine appropriate specifications and establish the shelf life applicable to all future batches.

Using multiple active pharmaceutical ingredient lots and exposing some batches to worst-case processing conditions such as maximum hold times and sterilization temperature and time should be considered when manufacturing stability batches. The batches should be manufactured at the proposed site for commercial production, using equipment equivalent to commercial use.

For proposed products with multiple presentations, manufacturers may consider a matrixing or bracketing design approach described in ICH Q1D.3 Both designs offer potential cost savings, either by decreasing the need for testing or reducing the number of batches needed. These designs are amenable to products with the same constituents, the same container materials, and similar attributes.

A bracketing study design involves testing only samples from the extremes of the proposed product configurations, with the assumption that the extremes represent the stability of the intermediate configurations. Table 1 provides an example of a bracketing approach for 12 product presentations, reducing the number of registration batches from 36 to 12.
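The arithmetic behind such a reduction can be sketched in a few lines; the 3-strength by 4-fill-volume grid below is a hypothetical stand-in for Table 1, which is not reproduced here:

```python
from itertools import product

# Hypothetical product grid: 3 strengths x 4 container fill volumes
# = 12 presentations. Bracketing places only the factor extremes on
# stability (all values invented for illustration).
strengths_mg_ml = [1, 2, 5]
fill_volumes_ml = [50, 100, 250, 1000]

all_configs = list(product(strengths_mg_ml, fill_volumes_ml))

# Extremes: lowest/highest strength crossed with smallest/largest fill.
bracketed = [(s, v) for s, v in all_configs
             if s in (min(strengths_mg_ml), max(strengths_mg_ml))
             and v in (min(fill_volumes_ml), max(fill_volumes_ml))]

batches_per_config = 3   # registration batches per configuration
print(len(all_configs) * batches_per_config)   # 36 batches without bracketing
print(len(bracketed) * batches_per_config)     # 12 batches with bracketing
```

Only the four corner configurations go on stability, cutting the registration batches from 36 to 12 while the intermediates inherit the dating supported by the extremes.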

A matrixing study design includes all samples being tested at the initial and final time point, with only a subset of samples tested at any given time point in between; the assumption is that the stability of the tested samples represents the stability of all of the samples at a given time point. Table 2 provides an example of a matrixing test design. This design can be risky: If results indicate a difference among the configurations tested, then the untested configurations will be assigned the shortest dating determined until actual configuration testing confirms what is appropriate.


Manufacturers must evaluate aqueous-based drug products packaged in semi-permeable containers, such as flexible plastic containers, for potential water loss, in addition to the physical, chemical, biological, microbiological, and functional attributes of the container. These products must demonstrate the ability to withstand low relative humidity environments. We describe the storage conditions for room temperature products packaged in semi-permeable containers in Table 3, along with the typical test intervals for each condition and the minimum amount of data required for the submission.

The duration of the storage period at room temperature (25°C) and corresponding test schedule must cover the intended shelf life of the proposed product and demonstrate its stability profile. Testing intervals should be at a sufficient frequency to characterize the degradation profile adequately. Typically, samples stored at the intermediate condition are not tested unless a significant change is observed at accelerated conditions. (See reference 2 for the definition of significant change.)

If significant change is observed, the manufacturers should conduct an intermediate storage condition study. For frozen products, the long-term storage condition is –20°C. Short-term thawed testing, which generally consists of storage at 5°C for up to 30 days or for up to three days at 25°C after thawing, is also performed at various long-term frozen intervals. Due to the nature of frozen products, the length of the study at each storage condition will be specific for each drug product.

Tests Performed During Studies


Selection of specific tests/assays is based on the technical understanding of the solution product and the container system. The test schedule should focus on the parameters controlling shelf life or those parameters likely to change, in addition to meeting regulatory requirements regarding test type. The typical tests performed on parenteral drug products include appearance, color, potency, degradation products, pH, particulate matter, sterility, pyrogenicity, and container leachables. For products in semi-permeable flexible plastic containers, it is essential to monitor water loss as well.

The critical product attributes or those likely to change, such as potency, degradants, pH, and water loss should be monitored at each test interval. Attributes expected to remain stable, such as excipients or sterility, may be tested less frequently, perhaps every six or 12 months during the course of the study. To characterize the stability profile of a particular parameter, it may be necessary to schedule additional intervals (e.g., one, three, five, six, seven, or nine months) depending on its rate of change.

Additional studies may be needed once the product is removed from the overpouch, because the water loss rate may increase without the overpouch’s protection. Or, in the case of oxygen-sensitive products, rapid ingress of oxygen into the container may result. Manufacturers should conduct a photostability study per ICH Q1B to demonstrate the product’s stability when exposed to light and the effectiveness of the packaging system, as appropriate.4 Manufacturers should also conduct temperature cycling studies to demonstrate the effects of temperature variation that the product might undergo during shipping and distribution. The number of replicates per test depends on the variability of the method and the expected change over time of the attribute. Typically, three samples are scheduled per interval, one each for potency, degradants, and pH testing, to facilitate statistical analysis.

Data Evaluation


Once available, the stability data are evaluated to set the appropriate expiry date. The ICH Q1E guidance document provides direction on how to assess stability data, including the use of statistics to estimate product shelf life.5 To ensure that the drug product will remain within acceptance criteria through its shelf life, product expiration dating must consider the following: stability data from registration batches, formulation development data, manufacturing process data, analytical variability, release and stability specifications, and stability data supporting in-use conditions. Each attribute should be evaluated separately—and an overall assessment used—to propose a shelf life. The shelf life should not exceed that predicted for any single attribute.

As indicated in ICH Q1E, chemical attributes such as potency and degradants generally follow zero-order kinetics during long-term storage. Zero-order kinetics can also be used to estimate water loss for products packaged in semi-permeable containers. Using statistics to establish shelf life provides a higher degree of confidence that all future batches will meet the acceptance criteria. When statistics are applied, if the degradation rates from different batches meet the criteria for poolability, a mean rate is used to establish shelf life. If the rates cannot be pooled, that is, they are statistically different, the worst-case batch rate is used to predict the expiration date.
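A minimal sketch of the zero-order calculation is shown below; the data are invented for illustration, and a real ICH Q1E evaluation would use the 95% confidence bound on the regression line rather than the fitted line itself:

```python
# Zero-order fit of potency vs. time: C(t) = C0 + k*t, k < 0.
# Estimated shelf life = time at which the fitted line reaches the
# lower acceptance criterion (all data invented for illustration).

months  = [0, 3, 6, 9, 12]
potency = [100.1, 99.6, 99.0, 98.6, 98.1]   # % label claim (made-up data)
lower_limit = 95.0                           # acceptance criterion, % LC

n = len(months)
mx = sum(months) / n
my = sum(potency) / n
k = sum((x - mx) * (y - my) for x, y in zip(months, potency)) / \
    sum((x - mx) ** 2 for x in months)       # slope (%/month), negative
c0 = my - k * mx                             # intercept (% LC at t = 0)

shelf_life = (lower_limit - c0) / k          # months until limit is reached
print(f"degradation rate: {k:.3f} %/month")
print(f"estimated shelf life: {shelf_life:.1f} months")
```

With these numbers the fitted rate is about -0.17%/month, giving an estimated shelf life of roughly 30 months; applying the confidence bound would shorten it somewhat.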

Intravenous RTU products that undergo terminal sterilization and exhibit minimal change in potency and degradation products over time often have their shelf life based on water loss. The water loss rate through the semi-permeable container system is linear over time. In some cases, dating may be determined by pH, which can drift due to the lack of a formulation buffer or due to low levels of container-related leachables.

Using statistics to establish shelf life provides a higher degree of confidence that all future batches will meet the acceptance criteria.

For aseptically filled room temperature products, the level of degradation products is often the shelf life-limiting parameter. For aseptically filled frozen products, manufacturers must evaluate the change in potency, pH, and degradants on frozen storage stability as well as on thawed stability at 5°C and room temperature to determine the appropriate expiration dating for the product.

Extrapolation to extend shelf life beyond the period covered by the available long-term data can be proposed if no significant change is observed at the accelerated storage condition. A proposed shelf life based on extrapolation should always be confirmed by additional real-time long-term stability data as soon as the data become available. The post-approval commitment batches should be tested at a point in time that corresponds to the extrapolated shelf life.

In the end, the key to a successful RTU pre-mix drug stability program is developing and implementing study designs based on scientific understanding of the formulation stability and container properties—specifically flexible plastic containers—along with applicable ICH guidelines. Stability study data are evaluated to establish appropriate expiration dating periods for the products. Well-designed studies fully characterize the stability profile of the RTU pre-mixed product and ensure that it is safe and efficacious and will meet its requirements through expiry while in the market.

Youngberg Webb is senior director, stability operations, and Dr. Chilamkurti is senior director, pharmaceutical technology, at Baxter Pharmaceuticals & Technology. Reach Dr. Chilamkurti at


1. Chilamkurti R, Youngberg Webb P. The keys to RTU parenterals. Pharm Formulation Quality. 2009; 11(5):40-42.

2. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Guideline Q1A(R2): Stability testing of new drug substances and products. Geneva, Switzerland; 2003. Available at: Accessed September 29, 2009.

3. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Guideline Q1D: Bracketing and matrixing designs for stability testing of drug substances and drug products. Geneva, Switzerland; 2002. Available at: Accessed September 29, 2009.

4. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Guideline Q1B: Stability testing: photostability testing of new drug substances and products. Geneva, Switzerland; 1996. Available at: Accessed September 29, 2009.

5. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Guideline Q1E: Evaluation of stability data. Geneva, Switzerland; 2004. Available at: Accessed September 29, 2009.

CONTAMINATION CONTROL - HVAC SYSTEMS | Let Clean Manufacturing Manage Your Risk

HVAC equipment selection is a critical decision in cleanroom design

Breakthroughs in science and technology have increased demand in the pharmaceutical and electronics markets. With the need for clean production, the design, construction, assembly, and packaging processes are under intense pressure to create clean environments. Schedules are shorter, design criteria are often uncertain, and the useful life of facilities can be unknown. All of these trends involve increased risk for design professionals, equipment manufacturers, and contractors.

With a focus on the heating, ventilation, and air conditioning (HVAC) system, this article addresses those issues and offers recommendations that can reduce risk. Because standards and design criteria differ significantly from one application to the next, this overview cannot include specific standards or design guidelines. Instead, the broad-scope ideas discussed below can be used as a guide when planning a cleanroom.

Most importantly, clarify project requirements before building a cleanroom. Whether planning on the bid-spec route or design-build, take time to establish and document the requirements; they will become the roadmap for all concerned. Changing requirements midstream can result in mistakes, delays, and cost overruns.

Figure 1. A drawing of the linear airflow valve. The valve linkage is locked in place. The cone moves toward the venturi as pressure increases to maintain a constant airflow.

Project requirements should include the capabilities of the cleanroom (present and future), a schedule for completion, and a budget. Energy efficiency or operating cost may also be goals, but production reliability is often more important.

With careful consideration given to cleanroom requirements and HVAC equipment selection, risk can be reduced significantly while costs are controlled. Risk-mitigating factors include the following strategies:

  • provide redundant components;
  • slightly oversize the air handler casings and coils;
  • configure equipment so it can be serviced without the need for a shut-down;
  • design the system for ease of operation;
  • design flexibility into the system to anticipate changes; and
  • incorporate packaged chiller systems to shorten installation and start-up time.

When project requirements are not clear or are likely to change in the future, potential for risk increases. Three key design aspects can help minimize risk: simplicity, reliability, and flexibility. These are used in the selection of key components of the HVAC system. Common characteristics of cleanrooms are contaminant control, and temperature and humidity control. Downtime in most cleanrooms is costly, so system reliability is crucial.

Control of Contaminants

Contaminant control is typically much more stringent in cleanrooms than in comfort cooling HVAC applications. In addition to the 30% efficient filters used to protect the air handler coils, HEPA filters are used to remove at least 99.97% of particles 0.3 micron and larger.

The best location for the HEPA filters is at the very end of the air delivery system in the ceiling grid. This location ensures that the cleanest possible air is entering the room. The quantity and location of HEPA filters depend on the airflow rate required to achieve the desired air change rate in the room and on the equipment layout.

The goal is to provide a certain direction and airflow velocity at the point of production or packaging. The air coming into the room is clean, but the system also needs to control any particles that are generated inside the room by quickly capturing them and steering them away from the product.
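The relationship between air change rate, supply airflow, and HEPA filter count can be sketched as follows; every number here is an illustrative assumption, and real designs follow the applicable cleanliness standard:

```python
import math

# Estimate HEPA filter count from room volume, target air change rate,
# and filter face velocity (all values are illustrative assumptions).

room_area_ft2   = 1000.0   # cleanroom floor area
ceiling_ft      = 10.0     # ceiling height
ach_target      = 60       # air changes per hour (depends on class)
filter_area_ft2 = 2 * 4.0  # one 2 ft x 4 ft HEPA module
face_vel_fpm    = 90.0     # assumed HEPA face velocity, ft/min

room_volume = room_area_ft2 * ceiling_ft         # ft^3
supply_cfm  = room_volume * ach_target / 60.0    # required airflow, ft^3/min
cfm_per_filter = filter_area_ft2 * face_vel_fpm  # airflow per module

n_filters = math.ceil(supply_cfm / cfm_per_filter)
print(f"required supply air: {supply_cfm:.0f} cfm")
print(f"HEPA modules needed: {n_filters}")
```

The filter count then gets distributed across the ceiling grid around the equipment layout, as described above.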

Additional methods of reducing contaminants include:

  • providing a tight room envelope to eliminate infiltration;
  • providing a vapor barrier to eliminate vapor transmission;
  • providing an air lock for passage of people and materials;
  • maintaining a positive pressure in the room with respect to the surrounding spaces; and
  • adhering to owner operational practices.
Figure 2. A drawing of the air conditioning chiller. The evaporative condenser is on the left and the mechanical room with pumps, controls, refrigerant compressors, water mass storage tanks, and hydronic accessories is on the right.

If the system uses a dedicated outside air handler, use HEPA filters in that unit as well. Once the room is in operation, the dirtiest source of air entering the system will be from the outside air intake. The room air handlers will be moving primarily recirculated air from the cleanroom, so introduced outside air should be brought to the same level of cleanliness before mixing it with the recirculated air.

Maintaining tight temperature and humidity conditions in the cleanroom through different seasons requires several environmental processes: air cooling, dehumidifying, heating, and humidifying. Sometimes there is also a low-temperature process cooling requirement.

The room itself is usually relatively stable. In this case, the outside air becomes the most variable part of the process because of temperature and humidity fluctuations. It is for this reason that a dedicated outside air handler is recommended. Pretreating the outside air ensures that the room air handlers see little variability. The air conditions entering the room air handlers are fairly constant, and temperature and humidity from these air handlers can be held to extremely tight ranges.

While it is important to specify room conditions tight enough to achieve the desired result, be careful not to specify tighter conditions than needed, because doing so drastically increases equipment cost.

Room Supply Air Handlers

Figure 3. A drawing of the low temperature process chiller. The ice storage bank is on the right; the mechanical room with pumps, controls, refrigerant compressors, and hydronic accessories is on the left.

When outside air is preconditioned and filtered, the demands on room air handlers are reduced because the large, variable outside air load has already been taken care of. In this example, the room air handlers may have a 30% efficient prefilter, a four-row sensible cooling coil, and redundant fans.

The room air handlers are designed with four fans that are isolated from each other with back draft dampers. Each fan is equipped with a variable frequency drive. Each fan compartment is accessible from outside the air handler so that it does not require shutdown if a fan, motor, variable frequency drive, or belt fails. This design improves flexibility and makes the system more reliable. Operating sequences are simple, and maintenance is easier. Furthermore, if three fans are sized to handle the full airflow requirement, there is room for additional airflow if it is ever needed.

Another way to improve flexibility and hedge against future requirement changes is to upsize air handlers to the next cabinet size. With a little extra cost, this change provides numerous benefits. The fan variable frequency drive allows the fan to run at current airflow needs, with extra capacity available if needed. In the meantime, operating costs are lower due to lower pressure drop through the air handlers.

Because of a cleanroom’s tight temperature and humidity requirements, outside air is not used for cooling; instead, mechanical cooling is needed year round. In this case, an atomizing-type humidifier can be used. In addition to humidifying, this type of humidifier provides cooling as water evaporates into the airstream.

This “free” cooling brings significant energy savings and reduces the required mechanical cooling capacity. Even after considering the cost of providing compressed air and the additional preheat required when the outside air is cool, savings are substantial. As an added measure of safety, do not take this cooling credit when sizing the cooling coil. This type of humidifier is very reliable, requiring almost no maintenance.
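The magnitude of this “free” cooling follows from the latent heat of vaporization of water; the humidification rate below is an assumed example:

```python
# Evaporative ("free") cooling delivered by an atomizing humidifier:
# each pound of water evaporated absorbs roughly 1,060 Btu of latent
# heat (the humidification rate is an assumed example).

water_rate_lb_h    = 100.0    # lb/h of water evaporated into the airstream
latent_heat_btu_lb = 1060.0   # approximate latent heat of vaporization

cooling_btu_h = water_rate_lb_h * latent_heat_btu_lb
cooling_tons  = cooling_btu_h / 12000.0   # 1 ton of cooling = 12,000 Btu/h

print(f"evaporative cooling: {cooling_tons:.1f} tons")  # ~8.8 tons
```

Even a modest humidification load thus offsets several tons of mechanical cooling, which is why the credit is deliberately left out of the coil sizing as a safety margin.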

Conditioning the Outside Air

The dedicated outside air handler improves reliability and flexibility. Several components that accomplish this are included in the make-up air handler:

  • 30% efficient prefilters;
  • preheat coil;
  • humidifier;
  • deep, 12-row cooling coil;
  • fan(s);
  • high efficiency particulate air final filters; and
  • reheat coil.

As with the room air handlers, the outside air handler can usually be oversized at minimal extra cost while greatly improving flexibility. If future program changes are anticipated and the outside air handler is found to be too small, it is much easier to add a second air handler or replace the outside air handler than to replace all of the room air handlers. Assuming the room exchange rate remains the same, the room air handlers simply take a larger percentage of preconditioned outside air and a smaller percentage of recirculated air. The load on the room air handlers does not change.

The conditioned outside air is delivered to numerous room air handlers. The pressure loss in each duct run is different, and the pressures within the ducts may vary. The use of a linear air flow valve in each duct run ensures constant outside air flow even when pressures vary in the ducts. As long as the outside air flow to each room air handler is constant, the air valves can be set and locked in place. They do not require an actuator (see Figure 1).

Chiller Efficiency

Figure 4. In this example, the minimum size of the low temperature process chiller is 60 tons. Because there would be no room for error or mechanical failure, it is sized at 87 tons. The chart indicates tons stored (background) versus tons used (spiked line).

A novel design matches the outside air handler water flow to the room air handler water flow. This results in an outside air coil with a greater water flow rate and a low water temperature rise. The lower leaving water temperature provides greater dehumidification, and the water can then be used in the room air handler cooling coils to provide the additional sensible cooling required by space and fan motor heat gains.

If the coils are equipped with their own pumps, there is a slight pumping energy penalty at the outside air handler coil pump. The total water temperature increase is greater than industry norms, and energy savings are realized at the chiller (see Table 1).

The high return water temperature is acceptable in this application because the high airflows passing through the cleanrooms allow a 65°F air temperature supply. Water temperature returning to the chiller is 4°F warmer than a conventional design, which improves chiller efficiency.

Reheat coils provide any heat required to neutralize the ventilation air temperature after subcooling the air for dehumidification purposes.

Process Cooling


Sometimes there is a low temperature process-cooling requirement. All cooling loads could be handled by a single chiller plant supplying 30°F glycol, but because only a portion of the total cooling load needs to be at 30°F, this would be terribly inefficient.

Two chiller plants should be designed, one for the 42°F air conditioning system and one for the 30°F process system. This will save energy and also limit the amount of glycol to the 30°F process system. Separating the process load may introduce a problem if the load is very intermittent. Large surges in load make chiller control difficult and quick response impossible. If this is the case, couple the process chiller with an ice storage bank. The ice storage bank will smooth out the load to the chiller. Because the chiller is decoupled from the process load, it can run longer and the chiller capacity can be reduced. (See a graph of load versus cooling capacity in Figure 4.)
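The sizing benefit of decoupling can be seen with a toy load profile; the hourly tonnages below are invented, while the 60- and 87-ton figures cited in Figure 4 come from the actual project:

```python
# With ice storage, the chiller only needs to meet the *average* load
# over the charge/discharge cycle; without it, the chiller must meet
# the *peak* (hourly tonnage values are invented for illustration).

hourly_load_tons = [5, 5, 10, 80, 120, 15, 5, 90, 130, 20, 5, 5]

peak_load = max(hourly_load_tons)
avg_load  = sum(hourly_load_tons) / len(hourly_load_tons)

print(f"chiller without storage: >= {peak_load} tons")
print(f"chiller with ice storage: >= {avg_load:.0f} tons (plus a margin)")
```

The intermittent surges are absorbed by the ice bank, so the chiller runs longer at a steadier load and its capacity can be reduced accordingly.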

A water chiller package is shown in Figure 2, and the low temperature glycol chiller package is shown in Figure 3. The packages are useful in saving construction costs and in shortening the project schedule. Schedule reduction is achieved by reducing field labor and by testing the package at the factory. Redundant compressors and pumps provide an extra measure of safety.

In conclusion, when project conditions introduce uncertainty, take steps to lessen the risk. Providing a dedicated outdoor air handler offers many benefits, including tighter control over temperature and humidity, simplicity of operation, and easier and less costly changes if additional outside air is needed.

Slightly oversizing air handlers and chillers offers a hedge against future changes. Designing equipment with multiple fans, pumps, and compressors provides system reliability. Using atomizing humidifiers and pressure independent airflow valves reduces maintenance and minimizes down time. Designing packaged chiller systems saves money and shortens the schedule.

Lab Microscopy | Solve Vial Delamination Before it Starts

Manufacturers becoming more proactive in testing for product-packaging compatibility

After years of research and millions of dollars invested in shepherding a drug to market, a manufacturer has rigorously tested every aspect of a new drug’s safety and efficacy. But what about the packaging?

Packaging is a vital and critical step in the drug development process and should be tested just as rigorously as the drug in order to establish product compatibility. Otherwise, there could be perilous consequences, both for the consumers’ well being and for the manufacturer’s good name.

With this risk in mind, progressive manufacturers are moving away from reactive quality testing and toward proactive pre-testing for product-packaging compatibility—testing how drug solutions will react with packaging before the product is brought to market.

These quality control issues are particularly important for injectables and liquid solutions supplied in glass vials, products that are often overlooked for packaging testing. Made of common borosilicate and soda lime glasses, the vials are chosen for their strong barrier properties, cost effectiveness, and perceived stability. Glass vials are presumed to be safe because of their longevity in the industry, but long history does not equal safety. Because glass vials can pose contamination risks that are often invisible to the naked eye, they should be pre-tested to ensure product compatibility.

Glass Delamination a Risk

Packaging is a vital and critical step in the drug development process and should be tested just as rigorously as the drug in order to establish product compatibility.

Glass vials used for pharmaceutical products must withstand attack from many chemicals and processes during the manufacturing process, including heat treatment, sterilization, product filling, and storage of potent drug mixtures. In rare instances, chemicals from the product solution or manufacturing process can cause a vial to delaminate and discolor, posing serious contamination risk to the drug product and potential liability for the drug manufacturer.

When glass delaminates, the top layers of a glass surface separate and flake off at a scale that is typically invisible to the naked eye. Delamination is often correlated with pitting, extremely localized corrosion that creates small holes in the glass surface. Delamination can occur at any point during the vial’s life, whether it is during manufacturing (e.g., fusion at the neck or base where delamination and/or pitting are frequently found), during heat treatment or sterilization, or after the manufacturing process due to the vial’s adverse reactions to its contents (e.g., etching due to an acidic or basic solution, or one with a high sodium or salt content).

There are many ways to investigate glass delamination in pharmaceutical vials, ranging from examining the chemical nature of the vial’s contents to evaluating manufacturing conditions. Because most pharma quality control departments are not equipped for extensive microanalytical testing, they must turn to independent laboratories to discover whether glass delamination is occurring.

Recently, a leading manufacturer of branded injectable pharmaceutical products decided to pre-test product-package compatibility because it determined that delamination could occur in one of its drug candidates. The company predicted that the drug, which had a high pH, would react unfavorably with its traditional glass vial packaging, but it did not have the equipment or expertise to examine the issue further. The company consulted with McCrone Associates, the microanalysis division of The McCrone Group, to compare vials from three different manufacturers for glass delamination and corrosion potential. For confidentiality purposes, we will refer to them as companies A, B, and C.

Prior to sending the vials to McCrone Associates, the drug company stored them at varying temperatures for different time frames to determine the conditions under which the vials were least likely to corrode. Vials from all three companies were stored at 55°C for two weeks, while additional sets of vials from the three companies were stored at 40°C, 30°C, 25°C, and 4°C for four weeks. A third set of vials, from company B, was studied at these temperatures for eight weeks. These vials warranted additional study because they exhibited numerous striations along the inner vial wall, a characteristic that, while likely a byproduct of their manufacture, was a potentially harmful variable (see Table 1).

At First Sight

Four Ways to Avoid Delamination Disasters

When pre-testing your glass vials for product compatibility, do not limit your examinations to the vials themselves. Remember that if the stopper and seal materials will touch the solution, they also require testing.

Whether pre-testing or doing a regular quality control examination, make sure to look for pitting. Although pitting is usually invisible to the naked eye, a simple stereomicroscope can often detect it. Pitting is often a precursor to or an indicator of delamination, so examine the inside of your empty vials under the scope early and often.

Most pharmaceutical manufacturers are not equipped with the specialized skills and expertise available at independent analytical laboratories. Consider outsourcing your vial testing and contamination analyses to a trusted laboratory, especially if you discover pitting in your routine checks.

Think about introducing barrier coatings to the inside of your vials. Typical coatings, which protect the glass from the drug solution, include ceramics, polymers, and silicones.

Researchers first examined the empty vials using a stereomicroscope. The scientists noticed some pitting and striations on a few of the vials across each set but could make no definitive assessments of delamination. The vials were then submitted to McCrone’s ISO Class 5 cleanroom for further sample preparation and more rigorous examination.

Although the manufacturer sent empty vials, McCrone recommends that pharmaceutical manufacturers submit their vials filled with the original drug solution for drug compatibility testing and vial evaluation. With proper illumination and microscope set-up, an examination of the vial can reveal any particulate matter floating in the solution. Often particulates are not noticeable with visual inspection alone, and only a stereomicroscope exam will reveal them suspended in solution.

Using a combination of transmitted light, coaxial illumination, and fiber optic oblique illumination, the nature of the particulate can be characterized. Microscopists look for a “twinkling effect” that results when light reflects off particulate contaminants in solution. Thus, an initial examination with a stereomicroscope can lead to early identification of delamination problems, saving both time and money on additional testing.

Next, whether the vials are filled or empty, they must be prepared for microanalysis. To prevent cross-contamination, sample preparation must be performed in a cleanroom environment. When performing microanalysis, even tiny amounts of introduced contaminants can lead to confusion and potentially incorrect conclusions.

In this case, the scientists carefully broke the empty vials into smaller pieces to examine the interior surfaces, then mounted the individual fragments on different substrates for various analyses.

For drug-filled vials, scientists opened the vials in an ISO Class 5 cleanroom hood and filtered the liquid onto a polycarbonate membrane filter to aid with characterization and isolation of the particulates. Microscopists then used finely pointed tungsten needles, only a few micrometers thick at the tip, to isolate the suspected contaminant(s) and prepare samples for analysis. These needles, far smaller than commercial dissecting needles, are essential for small particle isolation. McCrone typically manufactures its needles in house to maximize quality and utility.

Once isolated, the filtered particulate and vial pieces were mounted on various substrates for further analysis. The substrates varied depending on the analytical technique and instrumentation available. For instance, for elemental analysis in an electron microprobe, a polished carbon substrate was typically used; for infrared analysis, a polished salt crystal was used. Subsequent analysis of the small particles using additional analytical techniques often requires carefully moving the particles to different substrates. Therefore, each particle must be treated uniquely, and proper control in a cleanroom environment is the only way to ensure its integrity.

Vial Microanalysis


For this case study, after further examination in the cleanroom, scientists mounted particulate from the filtered solutions and residues from the inner vial walls on potassium bromide crystals for micro-Fourier transform infrared (FTIR) spectrometry analysis. They used infrared spectroscopy to identify organic—and some inorganic—materials present, including glass. The micro-FTIR instrument shines a beam of infrared radiation through the sample and records the different frequencies at which the sample absorbs the light.

Company A’s vials showed an unusual pink and brown discoloration in all temperature ranges, so scientists isolated a portion of the discoloration from the vial wall of the 55°C set for FTIR analysis, along with delamination flakes, to search for a similarity between them. Both IR spectra showed a broad band at ~1100 cm-1, characteristic of silica. This suggested that the discoloration was a property of the glass and not a foreign contaminant.

The scientists then mounted particulate filtered from the glass vials on a carbon substrate for scanning electron microscopy/energy dispersive spectroscopy (SEM/EDS) analysis. These analyses can confirm whether the contaminant is glass and can provide clues to the conditions under which delamination is most likely to occur. The SEM/EDS method yields two types of information: high-resolution electron images showing the features of the contaminant and X-ray spectra of the elemental constituents present in the sample. SEM/EDS is commonly used to analyze inorganic materials to identify particulate matter such as metals and glass fragments. SEM/EDS analysis can also be performed directly on large fragments of glass from vials.

Figure 1: Glass pieces flaking off the interior walls of company C vials, captured by scanning electron microscopy/energy dispersive spectroscopy analysis, confirmed glass delamination was occurring.

For some of the company C vials, SEM/EDS captured images of glass pieces flaking off the interior walls; this was confirmed to be glass delamination (see Figure 1). Imaging of the glass fragments in the SEM/EDS also revealed pitting on the glass surface, and further examination at higher magnification demonstrated tapering and enlargement of the pits (see Figures 2 and 3), indicating that as pitting increased, the pits’ borders overlapped and caused pieces of glass to flake off or delaminate.

Finally, the scientists examined the vial fragments using X-ray photoelectron spectroscopy (XPS, also known as electron spectroscopy for chemical analysis, or ESCA). XPS is the method of choice for analyzing very thin surface films. In the XPS instrument, an X-ray beam generates photoelectrons in the sample; these, in turn, provide analytical information from only the outermost (~5 nm) surface of the sample. This technique is well suited for analyzing thin surface layers and residues on solid samples, and it is often essential for determining the chemical nature of glass delamination.

Figure 2 and Figure 3: Fine pitting on the glass surface of vials demonstrates tapering and enlargement of pits.

XPS analysis of vials from all three companies revealed the chemical composition of the glass surfaces and showed changes in the relative amounts of the glass component elements sodium (Na), calcium (Ca), magnesium (Mg), and silicon (Si) in the fragments. These changes are indicators of chemical attack on the glass by the drug product solution, which produced preferential leaching of reactive components. XPS analysis therefore revealed that this chemical attack was the cause of the pitting and, eventually, the delamination of the vials.
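One common way to quantify such changes is to normalize each element to silicon and compare the surface ratio against the bulk composition; a ratio well below 1 suggests that element has been preferentially leached from the surface. The sketch below illustrates the calculation only; the atomic percentages are hypothetical, not values from this study.

```python
def leach_factor(surface, bulk, element, ref="Si"):
    """Surface-to-bulk ratio of element/ref atomic-percent ratios.
    Values well below 1.0 suggest the element has been leached out."""
    return (surface[element] / surface[ref]) / (bulk[element] / bulk[ref])

# Hypothetical XPS atomic percentages for a borosilicate-like glass
bulk = {"Na": 8.0, "Ca": 2.0, "Si": 30.0}
surface = {"Na": 2.0, "Ca": 1.8, "Si": 32.0}

print(round(leach_factor(surface, bulk, "Na"), 2))  # -> 0.23
```

A sodium leach factor of 0.23 in this illustrative data would point to strong depletion of Na at the surface relative to the bulk glass, consistent with chemical attack by a high-pH solution.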

Drug Company Case Study Findings

Using a combination of light microscopy, micro-FTIR, SEM/EDS, and XPS analyses, we found that the highly basic drug in this compatibility study produced glass delamination in vials from all three manufacturers and that the delamination often occurred near the fill line. McCrone also found that these vials became more vulnerable to delamination as storage temperature and time increased.

Pitting is often indicative of and a precursor to delamination. Our analytical results in this study showed that as pits grow larger and/or appear in greater number, they may begin to consolidate to produce the flat flakes typical of glass delamination. (For complete study results, see Table 1.)

The drug company discontinued the development of the highly basic drug after reviewing the analytical results. “Patient safety comes first—and this particular drug reacted with packaging in volatile ways,” said the company’s lead product developer. “Even if we could solve the delamination problem with new kinds of vials or different packaging, there might still be a risk for extractables or leachables due to the drug’s high pH level and the way the drug reacts with its packaging. We saved money on additional testing and decided the drug was not worth the potential risk of letting it go to market.”


When bringing a product to market, packaging is a critical step. Unfortunately, it is often the last step, and companies can easily overlook potential packaging problems until very late in product development. As this case study illustrated, the wrong packaging can delay or even shut down a developing project. Therefore, the package’s stability, functional performance, and integrity must be tested early in the development process.

Manufacturers should select packaging components for their physicochemical compatibility with the product, for proper fit and package functionality, and for their ability to withstand processes such as sterilization and the anticipated product distribution cycle. Whenever possible, manufacturers should challenge the integrity and functional performance of the package at the limits of component specifications and the operational extremes of filling, assembly, shelf life, and distribution. Laboratories specializing in microanalysis can provide invaluable information in the assessment of packaging integrity, drug compatibility, and possible failure mechanisms.

Pharmaceutical industry quality control personnel face considerable challenges in keeping abreast of not only evolving regulatory requirements but also of the various microanalytical capabilities available to resolve their problems. Many educational institutions cater to this growing need.

New Microbiological Tests Provide Earlier Results

New techniques can reduce 14-day safety and quality checks to four hours

Faster and earlier results are the mantra of biopharmaceutical companies when it comes to the tests required before they release a batch of products. Rapid microbiology systems, which can eliminate time-consuming culture procedures and improve the breadth of testing, have become the leading technology in the race to get products out the door, cutting the time to test results from 14 days to four hours.

DuPont Qualicon’s automated RiboPrinter rapid microbiological test system, which provides strain-level characterization and identification information on microorganisms.

“It takes too long for the pharmaceutical industry to get results from microbiology to release a batch of product,” said Philippe Gadal, PhD, CEO of AES Chemunex Inc., a rapid microbiology system maker based in Cranbury, N.J. “They have to keep a ton of product before they get results and can release it.”

Rapid microbiology tests aim to relieve that bottleneck. Some tell simply whether or not a microorganism is present; others, like the DNA-based test from DuPont Qualicon in Wilmington, Del., go to the next level by characterizing the organism. Even with those advances, the new rapid systems will require validation at customer sites and broader use before customers become completely comfortable with them.

“With rapid screening microbiology, you get results in a matter of hours. The question is if it will withstand the test of time,” said Fran McAteer, vice president of quality at Microbiology Research Associates Inc., a lab in Acton, Mass., that specializes in United States Pharmacopeia (USP) testing. He said he considers rapid microbiology technology to be in the beta-test stage.


Users of the new systems may find themselves in a catch-22 situation: Although they are attracted by the rapid test response, they must get U.S. Food and Drug Administration (FDA) approval for each use of the product, a process that can take from two months to two years.

“The FDA validates each specific application, by product and on site,” said Dr. Gadal. “And you need to submit plenty of data. So if you have 10 products, you need to make 10 applications [to the FDA].”

That makes the technology easier to deploy for new products, because drugs already on the market would require revalidation with the new systems. “This isn’t an easy task for customers,” said Karen Mullen, senior market manager for biopharma at bioMerieux Inc., a rapid microbiology system company in Durham, N.C. “Biotech companies like those using stem cells and cell transplants are probably close to putting this technology into routine use. It’s easier to do this with new products.”

Mullen said customers get a return-on-investment by being able to release products earlier. There are a number of rapid microbiology systems on the market, she said, but “there is no one size fits all. What you use depends on the expertise of the person running the system and what you are trying to find.”

Tweaking Existing Tests

The newest version of BioMerieux’s Media Fill Test changes color to indicate a contaminated solution.

BioMerieux, which already has several testing products on the market, launched a new version of its BacT/Alert sterility test system at the Parenteral Drug Association (PDA) conference in early October. Ray Turnley, senior product manager for BacT/Alert, described the new BacT/Alert 3D Dual-T dual-temperature product as an innovation in the process of sterility testing.

“USP<71> customers have to test at two temperatures and with two media,” he said. The new product tests at 32°C and at 20-25°C. The prior version of the product tested at 32°C with two different media, one aerobic and the other anaerobic. The two cultures and temperatures more closely mimic USP<71> and European Pharmacopoeia’s EP 2.6.1 testing, he said. The dual product is being validated at two sites.

The BacT/Alert 3D Dual-T automates the compendial sterility test method. It can save three to four days over traditional manual methods, said Turnley. The manual methods use a raw material production sample and incubate it at two temperatures in two different media; it is then checked from day seven to day 14. BioMerieux’s automated system checks the samples every 10 minutes, performing impedance detection to see if there is any microorganism growth. Turnley added that the system can detect more organisms at a lower temperature, and it can test for yeast and mold.

The company also recently announced a new version of its Media Fill Test for validation and control of aseptic processes. The current practice requires technicians to manually check from 5,000 to 50,000 vials filled with culture media to look for turbidity. “Turbidity reading can be subjective,” said Chad Wilkey, product manager for culture media at bioMerieux. Human reading of so many vials is another bottleneck, often requiring the hiring of temporary help.

The new test, used in the filling process, changes the color of a contaminated solution, making it easier to spot. Wilkey said a great deal of research and development went into identifying a color that would work with the large potential number of microorganisms. “We test 150 different commonly isolated strains,” he said. “A color change happens 98% of the time. In the other 2%, the affected products turn turbid.” The new test is being evaluated.

Improved Sterility Test

AES Chemunex’s new Stereal-T sterility test has improved discrimination, and the company hopes it will cut false positives by half.

AES Chemunex launched its Stereal-T sterility test at the October PDA conference. The test, done in real time, is specified by USP <71> and harmonized with EP 2.6.1 and the Japanese Pharmacopoeia. Dr. Gadal said the new test has improved discrimination, and he hopes it will cut false positives by half. The test is run on the ScanRDI rapid analyzer for microbial control of filterable products, which has a sensitivity down to one cell. It is based on flow cytometry and uses fluorescent labeling of single cells with laser excitation. It distinguishes microorganisms based on their electronic and optical properties and keeps cells viable even after labeling.

“The critical limitation of the traditional sterility test is it takes 14 days for cell growth,” said Dr. Gadal. The improved test does not require cell growth, and it uses a different indicator targeting microbial viability: enzyme activity. The results are available in less than four hours. He added that the test has enhanced sensitivity compared to growth technology, along with improved recovery of injured, stressed organisms.

To combat the electronic noise issues inherent in flow cytometry, the company uses more than 20 discriminants to tell what is noise, mold, or yeast, for example. Up to eight tests per day can be performed, with results typically available in less than four hours from filtration.

Going forward, Dr. Gadal said the next step is to increase the level of automation on the ScanRDI and then look at adding the capability to identify the microorganism. Current requirements call only for a “yes” or “no” on the presence of organisms to release a batch of pharmaceuticals.

DNA-Based Tests ID Pathogens

Another approach can both confirm the presence of and characterize an organism. DuPont Qualicon makes a rapid microbiological test that is DNA-based. The RiboPrinter system, an automated version of a Southern blot, provides strain-level characterization and identification information within eight hours by comparing an organism to 7,000 fragment patterns in the system’s database.

If there is contamination in a plant, areas as well as workers can be sampled to find a match, locate its source, and target remediation, said George Tice, global director of research and development for DuPont Qualicon. The system can be used to develop a microbial map of the facility to locate problem areas and spot trends, such as a pathogen that keeps reappearing.

Manual Southern blotting requires a lot of lab equipment, but automated Southern blotting systems like the RiboPrinter are simpler to use and can identify organisms within eight hours.

The test results give a percentage of similarity of the pathogen to other fragment patterns in the database. To get a match, the similarity has to be at least 85-90%. Even when there is no match, users can generate their own libraries by storing samples for possible future use.
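DuPont Qualicon’s actual scoring algorithm is proprietary, but the idea of thresholded pattern matching can be sketched as follows: compare the observed fragment-band positions against each library entry and accept the best match only if its score clears the cutoff. The 0.85 threshold mirrors the 85% figure above; the band-overlap metric, fragment sizes, and strain names are hypothetical, for illustration only.

```python
def band_similarity(pattern, reference, tol=0.05):
    """Fraction of reference bands matched (within a relative tolerance)
    by at least one band in the observed pattern."""
    hits = sum(1 for r in reference
               if any(abs(p - r) / r <= tol for p in pattern))
    return hits / len(reference)

def identify(pattern, library, threshold=0.85):
    """Return (best_match, score), or (None, score) if no entry clears the cutoff."""
    scores = {name: band_similarity(pattern, bands)
              for name, bands in library.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores[best]

# Hypothetical fragment-size patterns (kb)
library = {
    "Strain A": [1.2, 2.4, 3.1, 5.0],
    "Strain B": [1.2, 2.9, 4.4, 6.5],
}
print(identify([1.21, 2.41, 3.12, 5.02], library))  # matches Strain A
```

Storing unmatched patterns as new library entries, as the article describes, would simply mean adding the observed band list to the dictionary under a user-assigned name.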

Testing begins with isolating a colony and preparing samples. The automated process starts when the cells are lysed to release the DNA. A restriction enzyme cuts the DNA into sequence-specific fragments, which are then separated by size using gel electrophoresis. The sample is transferred to a membrane, and fragments are hybridized with a DNA probe and mixed with a chemiluminescent agent. Light emissions are captured as image data using a digitizing camera. The system then extracts the RiboPrint pattern from the data, and the pattern is compared to the database for characterization and identification. Eight samples can be handled at a time.

The system can discriminate at the strain level. Alcon, the contact lens solution products company, used the system when a contaminant was discovered during water sterility testing. The contaminant turned out to be Staphylococcus. One sample was isolated from water and another from the garment of a line worker. Tice said it would have been logical to conclude that the line worker was the contamination source, but the system showed two different Staphylococcus strains. According to Tice, information below the species level cannot be attained using conventional testing methods. He said the company plans to add capabilities to the system, including testing for yeast and mold.

Mass Spec Evolution Drives Field’s Growth

Advanced MS techniques and tools have revolutionized the pharma lab

Editor’s Note: This article on the history and impact of mass spectrometry in the pharmaceutical industry is the second in a new series for Pharmaceutical Formulation & Quality. In “PharmaTools: Technologies That Changed Pharma and Biotech,” we look at various technologies such as mass spectrometry that have played a key role and had an indelible impact on the pharma and biotech industries. In our next two issues, we will examine the evolution of liquid chromatography and gas chromatography.

Even in the midst of a world economic downturn, some products continue to attract buyers. Within the pharmaceutical industry, one standout is mass spectrometry (MS) systems. According to Strategic Directions International, a market research firm that tracks instrument business trends, the mass spectrometry market, already a $2 billion annual concern, is expected to grow at a 9% annual rate through 2012.
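As a quick sanity check on that projection, compounding the $2 billion base at 9% per year can be sketched as below. The base and rate are from the report; the three-year horizon (2009 to 2012) is our assumption.

```python
def project_market(base_billions, annual_rate, years):
    """Compound a market size forward at a fixed annual growth rate."""
    return base_billions * (1 + annual_rate) ** years

# $2B growing at 9% per year over three compounding periods
print(round(project_market(2.0, 0.09, 3), 2))  # -> 2.59
```

That works out to roughly $2.6 billion by 2012 under these assumptions.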

“The market will be led for the foreseeable future by the more advanced methods, including Fourier transform (FT)-MS, tandem LC-MS, and quadrupole time-of-flight (QTOF) LC-MS,” the report states.

In the last issue of Pharmaceutical Formulation & Quality, we looked at the overall history of mass spectrometry in the pharmaceutical industry and how its evolution from the “expert-from-Switzerland” mode to easy-to-operate, benchtop tandem LC-MS systems has made it a ubiquitous tool in virtually every pharma lab (“Bringing Mass Spec to the Masses,” September 2009, pgs. 16-21). In this issue, we’ll explore the history of some of the more advanced methods and adaptations of mass spectrometry that are now helping to drive the field’s growth.

One technology that waited decades for its time to come is time-of-flight (TOF). W.E. Stephens, at the University of Pennsylvania, developed the concept in 1946. In 1948, Cameron and Eggers, at Oak Ridge Laboratories, built the first TOF instrument with very low mass resolution. But it took more than five decades for improvements in electronics, software, and engineering design to make TOF mass spectrometry the indispensable industry tool it is today.
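The principle Stephens proposed is simple: ions accelerated through the same potential separate by mass-to-charge ratio, with flight time t = L * sqrt(m / (2zeV)), so heavier ions arrive later. A minimal sketch of the relationship, assuming a linear flight tube (the 20 kV potential and 1 m tube length are illustrative values, not figures from this article):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def flight_time(mz_daltons, charge=1, voltage=20_000.0, tube_length=1.0):
    """Flight time (s) of an ion through a linear TOF tube:
    t = L * sqrt(m / (2 * z * e * V))."""
    mass = mz_daltons * DALTON
    return tube_length * math.sqrt(mass / (2 * charge * E_CHARGE * voltage))

# Heavier ions fly more slowly -- the basis of the mass separation
print(f"m/z 500:  {flight_time(500) * 1e6:.1f} microseconds")
print(f"m/z 1000: {flight_time(1000) * 1e6:.1f} microseconds")
```

Because flight times for typical ions are on the order of microseconds, TOF resolution depends directly on fast, stable timing electronics, which is why the technique had to wait for the supporting technology to mature.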

“The issue was fundamentally low performance in terms of analytical service, resolution, sensitivity, and mass accuracy,” says Iain Mylchreest, PhD, vice president and general manager of life sciences mass spectrometry for Thermo Fisher Scientific. “Quadrupoles were much easier to interface with, so TOF took a back burner until the supporting technology and the means to interface with that technology caught up.”

Quad TOF Vital for Proteomics

This image shows a 1990s-vintage mass spectrometer made by a company that is now part of Waters Corp. At the time, it was the largest, and arguably the most complex, spectrometer the company had ever built. Only three of the systems were shipped, and each took an engineer months to install. The advanced mass spec technologies available today are smaller and much easier to install.

TOF made a key leap forward in 1984 when Gary Glish, PhD, now the president of the American Society for Mass Spectrometry, published the first paper (in Analytical Chemistry) on quadrupole TOF mass spectrometry. “The evolution of the quad TOF over the past two decades has been very important, particularly for proteomics and metabolomics,” said Gary Siuzdak, PhD, senior director of the Scripps Center for Mass Spectrometry in La Jolla, Calif. “Having quad in the front and TOF in the back gives you accurate mass measurements.”

Previous quad-TOF instruments notoriously lacked robustness and accuracy, but that has changed in recent years, according to Dr. Siuzdak. “Previously, they tried hard to achieve five PPM [parts per million] accuracy, but more often than not the range was more like 20 or 30. Now, with new techniques and improved detectors, they’re routinely getting sub-five PPM accuracy, and in some reports I’ve even heard of sub-part-per-million accuracy.”
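Mass accuracy in ppm is just the relative error between measured and theoretical m/z, scaled by a factor of one million. A quick illustration (the peptide masses below are made-up values, not from the article):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy expressed in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# A 0.0015 Da error on an m/z ~500 ion is about 3 ppm
print(round(ppm_error(500.2690, 500.2675), 1))  # -> 3.0
```

At sub-five-ppm accuracy, the absolute error on an m/z 500 ion must stay below about 0.0025 Da, which shows why even small instrumental drifts matter at this level.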

Engineering improvements, such as changing the composition of the flight tubes, have made this possible. “They’re now making them out of a ceramic-like material that has a very low coefficient of expansion, so even if the temperature changes in the room, the flight tube won’t change,” Dr. Siuzdak said. “When you’re talking about PPM accuracy, having a flight tube change in size can make a pretty big difference.”

The evolution of the quad TOF over the past two decades has been very important, particularly for proteomics and metabolomics.
—Gary Siuzdak, PhD, Scripps Center for Mass Spectrometry

Quad-TOF instruments are particularly useful for proteomic and metabolomic accuracy. “Now, people can create a profile of the sample using quad TOF without generating MS-MS data and then do comparative analysis between the profiles and look at which peaks are changing significantly between different samples, such as with a benign versus malignant cancer sample,” Dr. Siuzdak explains. “Instead of getting fragmentation data on everything, you can go after what’s significantly different.

“It gives you a more manageable quantity of information, and the fragmentation you get is of higher quality, only focusing on the molecules that matter. This all means that we can do much more direct lead analysis.”

The coupling of TOF with MALDI (matrix-assisted laser desorption/ionization), a soft ionization technique that allows the analysis of biomolecules and large organic molecules that are vulnerable to fragmentation with conventional ionization, has advanced both tools, Dr. Mylchreest said. “It’s a natural marriage, since MALDI is a pulse technology and TOF deals with pulses of ions.”

Introduced before electrospray was in wide use, MALDI allowed the direct analysis of big proteins and peptides, something very difficult to do with the technology of the time. “It was attractive to the biologists, because it had gel spots you could directly analyze in the mass spectrometer and get some idea of molecular weights, which you could never do before. You had to use chromatographic methodologies, which were very inaccurate,” Dr. Mylchreest said.

Pioneered in the early 1980s, the first MALDI instruments were linear, single-stage instruments. “With those, the primary application within proteomics was measuring the mass of intact proteins,” says Ronan O’Malley, PhD, MALDI product manager for the Waters Corporation. “The next development in MALDI, in the late 1980s and early 1990s, was the introduction of a reflectron into the instrument, an ion mirror that has the advantage of lengthening the flight tube, thereby increasing TOF and improving the mass accuracy that can be achieved.”

But the earliest MALDI machines had significant disadvantages. “The M in MALDI is for matrix. In the early days, you never knew why you’d get a good signal with a matrix in one case and not in another,” said Richard Caprioli, PhD, director of the Mass Spectrometry Research Center at Vanderbilt University in Tennessee and a developer of MALDI MS imaging.

“We might play all day with sample preparation to get our signal and try to understand these things, which is fine in academia, but not what they’re getting paid for in pharma. Today, there is a much better functional and chemical understanding of how to get a better signal,” Dr. Caprioli said.

Advantages of MALDI MS

The Synapt G2 System is a quadrupole time-of-flight mass spectrometer that recently came on the market.

Dual-stage MALDI, which came on the scene around the turn of the 21st century, allowed people to select ions in the first stage of mass spectrometry and fragment in the second. “This gave much more specificity, allowing you to add fragmentation to the molecular mass experiments,” said Dr. O’Malley. “Proteomics was the mainstream market for this capability, but there were also applications in quality control for formulated compounds and for analyzing oligonucleotides.”

Not long after, MALDI sources were combined with orthogonal instrumentation. “With an orthogonal system measuring TOF from the pusher to detector, you don’t have to take into account the TOF from source to detector,” said Dr. Mylchreest. “That’s important in MALDI, because there can be variations in the uniformity of the analyte across the target plate. With an orthogonal system, that doesn’t matter anymore. This brought the advantages of high resolution and exact mass to MALDI.”

Another advantage that MALDI MS brought to the table is speed. “The ability to do 2-5K laser shots per second—and you only need 10 or even less to give you a good analysis—enables a really rapid screening process,” said Dr. Caprioli. “Other MS techniques, although very valuable, are much slower. LC-MS might take one to three hours, whereas MALDI would take you one second to acquire the same data.”

Like TOF, Fourier transform mass spectrometry (FTMS) also took decades to reach its full potential. First developed in the mid-1970s, Fourier transform ion cyclotron resonance (FTICR) mass analysis made FTMS applicable to the study of biomolecules. But FTMS and FTICR have only taken off within the last decade.

“You’re dealing with big magnet technology, which wasn’t that advanced back then,” said Dr. Mylchreest. “The magnets were huge, expensive, and weren’t shielded, and it took a long time to adapt that more academic technology for commercial usage.”

Today, FTICR offers ultra-high resolution along with impressive stability and accuracy. “It’s possible, with a skilled user, to get on the order of 500 parts per billion accuracy,” said Dr. Siuzdak. “This can really allow you to nail down elemental composition with relatively low ambiguity.”

FT has one significant downside: It is much more complex than TOF and quadrupole instrumentation. “FT instruments require a more advanced user,” said Dr. Siuzdak. “Within a day or so you can get reasonably familiar with [a] TOF or quadrupole instrument. With an FT system, especially the ICR instruments, it can take longer to learn all their aspects.” He added that some instruments on the market have made this easier, but they don’t offer resolution and accuracy as high as that attained from FTICR.

Ion Mobility Plays Important Role

Kevin Shanks, forensics manager at AIT Labs, prepares a time-of-flight mass spectrometer for use.

Driving the utility of many of these advanced instruments is ion mobility. As a technique, it’s been around for decades. Some of the first measurements were reported by researchers as early as the 1930s. Researchers at Bell Labs developed an instrument in 1967 that was “essentially an ion mobility drift tube combined with an orthogonal time-of-flight type analyzer,” said Alistair Wallace, PhD, Synapt product manager for Waters. But, just as with TOF, MALDI, and Fourier transform, ion mobility’s time had not yet arrived. “Electronics then were far less evolved, and the analyzers in use at that time were only capable of analyzing a single ion arrival event.”

It took another 25 years for pioneers like Michael Bowers’ group at the University of California, Santa Barbara, and David Clemmer’s at Indiana University to move ion mobility mass spectrometry into the modern age. “Today, ion mobility is consistently increasing and driving the performance one can get from things like a TOF analyzer,” said Dr. Wallace.

Today, a technician or a grad student can come along, put in samples, and get accurate measurements from one to five PPM. That’s been a massive change for industry.
—Iain Mylchreest, PhD, Thermo Fisher Scientific

He compares the TOF analyzer to a big molecular dustbin. “You throw thousands of ions in there, and at the end of the day the limiting factor is the speed with which you can acquire the data. The faster and more powerful the electronics are, the more you can get out,” Dr. Wallace said.

Introduced in 2006, Waters’ Synapt instrument takes advantage of tri-wave technology to perform ion mobility at the limits of MS detection as it is currently known. “It can trap and accumulate ions prior to ion mobility separation, and the tight radial confinement of the T-wave enables you to get very high transmission of ions—nearly 100%—through the entire device,” said Dr. Wallace.

The impact of all of these advanced MS technologies on the pharmaceutical industry has been nothing short of revolutionary. “They’ve opened up new areas in terms of very high resolution mass analysis and accurate mass, something that has always been a big challenge in pharma. A lot of us have big fish stories on accurate mass,” said Dr. Mylchreest.

“They also made these experiments available to every lab. Today, a technician or a grad student can come along, put in samples, and get accurate measurements from one to five PPM. That’s been a massive change for industry. Although they all have different characteristics as to what they can do in terms of resolution, capabilities, and performance, they all address the same market space: opening up drug metabolism and structural analysis. They’ve opened up areas in proteomics and peptide sequencing and characterization that simply couldn’t be done before,” Dr. Mylchreest said.