Sunday, August 19, 2012

Mobile, Roofless Clean Room Proposed by Orion Engineers

The first space-worthy Orion Multi-Purpose Crew Vehicle (MPCV) will soon be assembled at NASA's Kennedy Space Center (KSC) in Florida, the former home of the space shuttle fleet. Engineers say a mobile, roofless clean room could be built around the spacecraft.

For many years, scientists believed that a clean room-like environment could be achieved only by rigorously controlling the conditions inside a specially designed, well-isolated chamber. Now they are beginning to realize that this is not always the case.

Currently, NASA is testing a design in which the clean room is installed around the spacecraft being built. This is a significant departure from standard practice, in which vehicles are assembled under aseptic conditions, Science Daily reports.

The concept was developed by Austin, Texas-based aerospace company Astrotech Corporation. Engineers here came up with the idea to build an installation that would ensure no dust settles on the spacecraft's surface.

They say that installing two, 10-foot (3-meter) high walls of filter-equipped fans on either side of the capsule could create an even airflow, and enough air motion to prevent any particles from settling on the MPCV's components.

The other two walls of the clean room would be made of clear materials, and there would be no ceiling. Astrotech experts explain that the roof is omitted so that the large cranes already installed inside the KSC Vehicle Assembly Building (VAB) can reach the spacecraft.

These cranes will be used to hoist the completed Orion atop the Space Launch System (SLS) rocket NASA is developing for its new spacecraft. The delivery system's maiden flight is scheduled for no earlier than 2017.

“The guys over in the Operations and Checkout Building are using this and that's what gave us the idea. It's pretty innovative, so we'll see if it works,” NASA Ground Systems Development and Operations Program engineer, Doug Lenhardt, says.

“The results have been encouraging. We don't have the full story yet, but it's been encouraging,” he concludes.

Evolution of the Clean Room


By: John Buie

GenTech Scientific
Although the principles of clean room design go back more than 150 years to the beginning of bacterial control in hospitals, the clean room itself is a relatively modern development. It was the need for a clean environment for industrial manufacturing during the 1950s that led to the modern clean room as we know it.
A clean room is a rigorously controlled environment that has a low level of environmental pollutants such as dust, airborne microbes, aerosol particles and chemical vapors. The air entering a clean room is filtered and then continuously circulated through high efficiency particulate air (HEPA) and/or ultra-low particulate air (ULPA) filters to remove internally generated contaminants. Staff wearing protective clothing must enter and exit through airlocks, while equipment and furniture inside the clean room is specially designed to produce minimal particles.
While more than 30 different industry segments utilize clean rooms, 70 percent of U.S. clean room floor space is in the semiconductor and other electronic components, pharmaceutical, and biotechnology industries.
1939 – 1945
Development of the modern clean room began during the Second World War to improve the quality and reliability of instrumentation used in manufacturing guns, tanks and aircraft. During this time, HEPA filters were also developed to contain the dangerous radioactive, microbial or chemical contaminants that resulted from experiments into nuclear fission, as well as research into chemical and biological warfare.
While clean rooms for manufacturing and military purposes were being developed, the importance of ventilation for contamination control in hospitals was being realized. The use of ventilation in a medical setting gradually became standard practice during this time.
1950s – 1960s
The evolution of clean rooms gained momentum as a result of NASA’s space travel program in the 1950s and 1960s. It was during this time that the concept of ‘laminar flow’ was introduced, which marked a turning point in clean room technology.
In the late 1950s, Sandia Corporation (which later became Sandia National Laboratories) began investigating the excessive contamination levels found in clean rooms. Researchers found that clean rooms were being operated at the upper practical limits of cleanliness levels and identified a need to develop alternative clean room designs.
In 1960, Blowers and Crew in Middlesbrough, UK was the first to improve contamination control by creating a unidirectional airflow from an air diffuser fitted over the entire ceiling in an operating room. In practice, the airflow was disturbed by air currents and the movement of people, but the idea of unidirectional flow was born.
Also in 1960, McCrone Associates began developing advanced particle handling techniques using tungsten needles and collodion. These techniques, which later became industry standards, were incorporated into the McCrone Associates Class 100 clean room.
In 1961, Professor Sir John Charnley and Hugh Howorth, working in a hospital in Manchester, UK, managed to significantly improve unidirectional airflow by creating a downward flow of air from a much smaller area of the ceiling, directly over the operating table.
Also in 1961, the first standard written for clean rooms, known as Technical Manual TO 00-25-203, was published by the United States Air Force. This standard considered clean room design and airborne particle standards, as well as procedures for entry, clothing and cleaning.
In 1962, Sandia Corp. launched the Whitfield Ultra-clean room, which was publicized in Time Magazine, creating a great deal of interest. Instead of simply using filters to clean incoming air, Whitfield used filtered air to keep the room clean by introducing a change of ultra-clean air every six seconds.
In 1962, Patent No. 3158457 for the laminar flow room was issued. It was known as an “ultra clean room.”
By 1965, several vertical down-flow rooms were in operation in which the air flow ranged between 15 m/min (50 ft/min) and 30 m/min (100 ft/min). It was during this time that the specification of 0.46 m/s air velocity and the requirement for 20 air changes an hour became the accepted standard.
In 1966, Patent No. 3273323 was submitted and issued for the “laminar flow airhood apparatus.”
1970s
By the early 1970s the principle of “laminar flow” had been translated from the laboratory to wide application in production and manufacturing processes.
1980s – 1990s
The 1980s saw continued interest in the development of the clean room. By this stage, clean room technology had also become of particular interest to food manufacturers.
During the late 1980s, STERIS (formerly known as Amsco) developed the use of hydrogen peroxide gas for the decontamination of clean rooms, and marketed the idea under the trademark VHP (vaporized hydrogen peroxide). Hydrogen peroxide gas rapidly became the most widely used method of sterilization, due to its unique combination of rapid antimicrobial efficacy, material compatibility and safety.
In 1980, Daldrop + Dr.Ing.Huber developed an innovative clean room ceiling, known as ‘Euro Clean’, to meet the rising challenges from industry at the beginning of the 80s.
In 1987, a patent was filed for a system of partitioning the clean room to allow zones of particularly high-level cleanliness. This improved the efficiency of individual clean rooms by allowing areas to adopt different degrees of cleanliness according to the location and need.
In 1991, a patent was filed for a helmet system that can be used in a medical clean room in which the user is protected from contaminated air in the environment, while the patient is protected from contaminated air being exhausted from the user’s helmet. Such a device decreases the possibility of operating room personnel being contaminated with viruses carried by the patients being operated upon.
In 1998/1999, CRC Clean Room Consulting GmbH introduced the clean room filter fan. This involved the integration of a filter fan unit, with filter, ventilator, and motor directly into the clean room ceiling.
2000s
The pace of clean room technology transformation has accelerated over recent years. Since the year 2000, there have been significant advances in new clean room technology, which have helped to streamline manufacturing and research processes, while also reducing the risk of contamination. Most of the technological developments of the past decade have been directed towards the manufacture of sterile products, particularly aseptically filled products.
In 2003, Eli Lilly pioneered the development of a new system for the prevention and containment of cross-contamination during the manufacture of pharmaceutical powders using a specially designed "fog cart". This allows the operator to be covered by an exceptionally fine fog of water on exit from a critical area, virtually eliminating the risk of transferring dust traces beyond their proper confines.
In 2009, The University of Southampton, UK opened a Nanofabrication Centre containing a clean room with nanofabrication facilities, making it possible to manufacture high-speed and non-volatile "universal memory" devices for industry that could process information faster than anything achieved with conventional technologies.
The Future of Clean Rooms
Clean room floor space in the United States has been predicted to grow roughly fourfold between a 1998 baseline and 2015, to an estimated 180 million square feet.
The most common applications of clean rooms currently are in the manufacture of semiconductor and other electronic components, as well as in the pharmaceutical and biotechnology industries. In addition to these traditional applications, clean room technology has more recently been applied to micro- and nano-system processes, and this looks certain to be an area of growth in coming years. The development of clean room technology is likely to continue to be driven by certain key factors including the increasingly technical use of exotic physical and biological phenomena, the central role of increasingly fine structures, the creation and use of materials of the highest purity, and the increasingly broad-based utilization of biotechnology. Given the scale of these challenges, clean room technology looks set to remain indispensable to production in coming years.

Anthrax



Anthrax is an acute infectious disease caused by the spore-forming bacterium Bacillus anthracis. Anthrax most commonly occurs in wild and domestic animals including cattle, sheep, goats, camels, antelopes, and other herbivores, but it can also occur in humans when they are exposed to infected animals or tissue from infected animals.
Anthrax is most common in agricultural regions. These include South and Central America, Southern and Eastern Europe, Asia, Africa, the Caribbean, and the Middle East. When anthrax affects humans, it is usually due to an occupational exposure to infected animals or their products. It is rare to find infected animals in the United States.
As a biological weapon, anthrax may be inhaled by humans. It is also very cheap to produce, costing about $50 per kilogram. One test tube of feed stock (samples of anthrax) can produce a kilogram of anthrax in about 96 hours in a fermenter. Samples are relatively easy to find because it is a naturally occurring disease throughout the world. Anthrax is easy to deliver in the form of a weapon (crop dusting, air distribution systems, and enclosed spaces). Since HVAC air distribution systems may be a path of contamination, access to these areas should be secure.
Symptoms of disease vary depending on how the disease was contracted, but symptoms usually occur within 7 days. Once exposure to anthrax has been identified, timely medical treatment of the exposed individuals is recommended as the bacteria can be fatal.
There are three ways in which humans can be infected by Anthrax spores:
  • Cutaneous (or through the skin): Most (about 95%) anthrax infections occur when the bacterium enters a cut or abrasion on the skin, such as when handling contaminated wool, hides, leather or hair products (especially goat hair) of infected animals. Skin infection begins as a raised itchy bump that resembles an insect bite but within 1-2 days develops into a vesicle and then a painless ulcer, usually 1-3 cm in diameter, with a characteristic black necrotic (dying) area in the center. Lymph glands in the adjacent area may swell. About 20% of untreated cases of cutaneous anthrax will result in death. Deaths are rare with appropriate antimicrobial therapy.
  • Inhalation (or through the respiratory system): Initial symptoms may resemble the common cold. After several days, the symptoms may progress to severe breathing problems and shock. Inhalation anthrax can be fatal if treatment is delayed.
  • Intestinal (or through eating contaminated products): The intestinal disease form of anthrax may follow the consumption of contaminated meat and is characterized by an acute inflammation of the intestinal tract. Initial signs of nausea, loss of appetite, vomiting, and fever are followed by abdominal pain, vomiting of blood, and severe diarrhea. Intestinal anthrax results in death in 25% to 60% of cases.
Direct person-to-person spread of anthrax is extremely unlikely, so communicability is not a concern in managing or visiting patients with anthrax, and quarantine of affected individuals is not recommended. Anthrax spores may survive in soil, water and on surfaces for many years. Spores can only be destroyed by steam sterilization or by burning. Disinfection of contaminated articles may be accomplished using a 0.05% hypochlorite solution (1 tablespoon of bleach per gallon of water). It has also been reported that boiling (100 degrees C) for 30 minutes kills the offending spores. Given the current state of world affairs, anthrax bio-terrorism may involve spores disseminated by the aerosol route, causing inhalation anthrax. Because atmospheric stability is important to efficient spread, and because sunlight is highly toxic to biological agents, the most likely time of release is at night. Particles from 1 to 5 microns in size (respirable-size particles) are most efficient in causing infection. The infectious dose for humans is considered to be upwards of 3,000 particles; some authorities quote the infectious range as 6,000 to 8,000 spores.
When questioned about anthrax the following points apply:
  • There is extremely low risk of biological attack if the facility is outside of a major urban area. Hence, if you are tens of miles outside of a major city, you probably do not need to do much to be prepared other than have food, water, power, supplies, etc. stored up in case of long infrastructure outages due to biological attacks.
  • The only true air filtration protection for anthrax is a HEPA filter. To be effective, this must be an air distribution system that takes outside air in through the filter and pushes this filtered air back out through the leak paths in the building - thus the HEPA system creates a slight positive overpressure in the facility. Such building positive pressure creates a balloon effect that helps keep contaminated air from entering the building. HEPA units that merely cleanse the recirculation air would provide little protection against anthrax laden air that is sucked into your facility, unless an area is provided with complete positive overpressure protection.
  • A document already exists that may be used to combat airborne anthrax spores in a facility. The CDC Guidelines for the Control of Mycobacterium Tuberculosis has some excellent recommendations for control of this type of contaminant. Airborne tuberculosis in its respirable form is also 1-5 microns in size. When considering controls, these guidelines provide sound mechanical system advice that may be applied to reduce the airborne level of infectious anthrax.
  • One additional point on filters bears mentioning. Although a HEPA filter is the only sure way of eliminating airborne spores, ASHRAE-grade filters also remove them to a lesser degree. MERV 14 and higher ASHRAE-grade filters can remove more than 97% of particles in the 1 to 5 micron range, although they should not be considered a fail-safe alternative to HEPA filtration. A MERV 16 filter (commonly referred to as 95% DOP) is at least 99% efficient in this range.
  • Additionally, any filter in an HVAC system only performs when air is moved through it. To decrease the number of particles per cubic foot, increase the number of air changes to the space (thus moving more air through the filter and lowering the particle count with each pass). Given a particular particle size and a filter efficiency approaching 100%, the following chart demonstrates the importance of increased air changes.
    Air Change Chart
  • The typical commercial building with a constant-volume system offers 6-8 air changes per hour; VAV systems sometimes operate as low as 2-3 air changes per hour. During periods of infection control (flu season, etc.), increasing air changes can reduce the number of airborne infectious contaminants. If the system has a fan 'on' switch for constant flow volume, you can increase the number of air changes per hour by moving this switch to 'on'. As an example, in the aforementioned chart, increasing air changes from 6 to 15 'cleans' the air to a 99% removal rate in 18 minutes as opposed to 46 minutes; the dilution calculation sketched after this list shows where these figures come from.
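As a back-of-the-envelope check of that example (a simple illustration, not an engineering calculation), a well-mixed space served by a near-100%-efficient filter dilutes its particle load exponentially, and the 46-minute and 18-minute figures fall straight out of that model:

import math

def minutes_to_removal(air_changes_per_hour, removal_fraction=0.99):
    # Ideal dilution in a well-mixed room with a ~100%-efficient filter:
    # C(t) = C0 * exp(-ACH * t), with t in hours.
    hours = -math.log(1.0 - removal_fraction) / air_changes_per_hour
    return hours * 60.0

for ach in (6, 15):
    print(f"{ach} air changes/hour -> 99% removal in {minutes_to_removal(ach):.0f} minutes")
# 6 ACH gives ~46 minutes; 15 ACH gives ~18 minutes, matching the chart.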
If anthrax exposure is suspected:
  • Isolate any item suspected including mail or other items.
  • Isolate individuals that may have been exposed.
  • Call 911, or contact authorities, and state what has occurred.
  • Notify building maintenance to turn off any ventilation systems.
  • Ensure that all persons who may have been exposed wash their hands with soap and water.
  • List all persons who may have been exposed and provide the list to public health authorities.
  • Wait for the police and HAZMAT team to arrive. They will provide further directions.

Air Quality



In a normal day of going about our business at home and work, we are exposed to environmental pollutants that pose varying degrees of risk. Some risks are simply unavoidable, such as automobile pollution, environmental dusts, and normal environmental levels of bioaerosols (molds and bacteria). Some we choose to accept because to do otherwise would restrict our ability to lead our lives the way we want, such as pollutants from photocopiers, computers, or cleaning materials.
In the last several years, a growing body of scientific evidence has indicated that the air within homes and other buildings can be more polluted than the outdoor air in even the largest and most industrialized cities. Other research indicates that people spend approximately 90 percent of their time indoors. Thus, for many people, the risks to health may be greater due to exposure to air pollution indoors than outdoors.
In addition, people who may be exposed to indoor air pollutants for the longest periods of time are often those most susceptible to the effects of indoor air pollution. Such groups include the young, the elderly, and the chronically ill, especially those suffering from respiratory or cardiovascular disease.
Concern over the quality of air inside buildings is now receiving the attention that the quality of outside air received in the 1970's. Acceptable indoor air quality (IAQ) is defined in the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 62 Ventilation for Acceptable Indoor Air Quality as, "Air in which there are no known contaminants at harmful concentrations and with which a substantial majority of the people exposed do not express dissatisfaction."
Poor IAQ is caused by airborne particulates and gases or vapors that adversely affect occupant health and comfort, the building structure or its furnishings. Some IAQ problems are so adverse that the entire structure may be defined as a ‘sick building’.
Many of the environmental pollutants are new and are the result of changes in HVAC system design and operating practices; others have existed for years but health conscious building occupants are now complaining about them.
Results of Poor IAQ:
Poor IAQ has been shown to result in increased incidence of illness and absenteeism, reduced productivity, irritability, complaints about building odor and stuffiness and health problems. These may include allergies, coughing, diarrhea, eye, nose and throat irritation, fatigue, general respiratory problems, headaches, nausea, dizziness, runny nose and skin irritation.
Some specific contaminants and concerns are addressed in the accompanying documents.
Additionally, the United States Environmental Protection Agency has published multiple documents pertaining to IAQ in commercial buildings.

Protocols Utilized in Camfil Farr Facilities


Filter Classifications

Quite a few inaccuracies and erroneous "jargon" are commonplace in the high efficiency filtration industry. One of the key issues pertains to nomenclature (i.e., HEPA, ULPA, VLSI, SULPA, etc.), involving misconceptions about a filter's efficiency and its relationship to particle size.
CEN HEPA/ULPA Filter Classes: CEN, the Comité Européen de Normalisation, has developed a standard, EN 1822-1:1998, based on particle counting at the Most Penetrating Particle Size (MPPS). This European Standard applies to High Efficiency Particulate Air (HEPA) and Ultra Low Penetration Air (ULPA) filters used in the field of ventilation and for technical processes (e.g., for clean room technology or applications in the nuclear and pharmaceutical industries).
Key definitions from this Standard include:
  • Penetration — The ratio of the particle count downstream of the filter to the particle count upstream.
  • Efficiency —  The ratio of the number of particles captured by the filter to the number of the particles challenging the filter.
  • Overall Efficiency/Penetration —  The efficiency/penetration averaged over the "superficial/useable" face area of a filter element under given operating conditions of the filter.
  • Superficial/Useable Face Area —  The cross-sectional area of the filter element, through which the air passes.
  • Local Efficiency/Penetration —  The efficiency/penetration at a specific point on the superficial/useable face area of the filter element under given operating conditions of the filter.
  • Leak Threshold —  Local penetration greater than or equal to five (5) times the filter's overall penetration.
This Standard allows a classification of filters in terms of efficiency and is, therefore, useful for both buyer and seller.
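As a quick numerical illustration of the definitions above (the particle counts below are invented for the example, not taken from the Standard), penetration is the downstream/upstream count ratio, efficiency is its complement, and a point on the filter face counts as a leak when its local penetration reaches five times the overall penetration:

def penetration(downstream_count, upstream_count):
    # Penetration = downstream particle count / upstream particle count
    return downstream_count / upstream_count

def efficiency(downstream_count, upstream_count):
    # Efficiency = fraction of challenge particles captured by the filter
    return 1.0 - penetration(downstream_count, upstream_count)

upstream = 1_000_000                      # hypothetical upstream challenge count
overall_pen = penetration(30, upstream)   # overall penetration = 3e-5 (99.997% efficient)
leak_limit = 5 * overall_pen              # leak threshold per the definition above

local_pen = penetration(200, upstream)    # counts seen at a single scan position
print(f"overall efficiency = {efficiency(30, upstream):.4%}")
print(f"leak at this point: {local_pen >= leak_limit}")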

Basic Test Protocols

Leak Scanning
Camfil Farr leak tests each Megalam Panel and Ducted Ceiling Module HEPA/ULPA filter. Testing is performed in Class 100 (M3.5) clean zones within a Class 10,000 (M5.5) cleanroom. All testing is conducted per the controlled and documented procedures of Camfil Farr's ISO 9001 certified quality system.
To enhance upstream sampling capability, leak-scanning systems are equipped with dilution equipment for measuring high particle concentrations. Probe geometry has been optimized to maximize traverse rate and eliminate undetected leaks while maintaining isokinetic sampling. The entire face of the filter is scanned with overlapping strokes, including the media-to-frame interface. Polystyrene Latex Spheres (PSL) are Camfil Farr's standard challenge aerosol, with alternatives available per customer requirements.
Any leak with a penetration exceeding five (5) times the filter's average rated penetration is repaired with an alcohol-based silicone sealant per industry standards or customer specifications. Polyurethane and other repair materials are available upon request.
Menu-driven, computer controlled auto-scanning is utilized for standard filter configurations. Manual scanning is performed for small quantity, custom filter designs/sizes and leak repair.
Filter Media Efficiency Testing
Per Camfil Farr raw goods supplier specifications, suppliers are required to test each master roll of Camfil Farr filtration media for efficiency utilizing Condensation Nuclei Counters (CNC) & Q127 Penetrometers. Test results are submitted to Camfil Farr for review & material acceptance prior to release authorization.
Filter Efficiency Testing
Manual Scan: Camfil Farr's computer integrated system gathers efficiency information from a fully encapsulated filter. The system features simultaneous upstream and downstream data collection. If the efficiency is lower than specified, the filter is rejected.
Auto-Scan: The discrete data points generated during the scan test are integrated to calculate the test filter's global efficiency. If the efficiency is lower than specified, the filter is rejected.
Filter Media Pressure Drop Testing
Per Camfil Farr specifications, approved suppliers test each lot of media for pressure drop. Test results are submitted to Camfil Farr for review & material acceptance prior to release authorization.
Filter Pressure Drop Testing
Manual Scan: During the test, the system continuously monitors and collects filter pressure drop data. If the pressure drop is higher than specified, the filter is rejected.
Auto-Scan: During the scan test, the system continuously measures the filter's pressure drop. If the pressure drop is higher than specified, the filter is rejected.
Manual Scanning Protocol
Depending on customer requirements, either Photometer or Particle Counter manual scanning techniques are utilized. Typically, depending upon the detection equipment selected, a solid aerosol (i.e., PSL - Polystyrene Latex spheres) is used. Probe geometry has been optimized to maximize traverse rate and eliminate undetected leaks while maintaining isokinetic sampling.
A summary of Camfil Farr's manual scanning protocol follows:
1) Typical test aerosol concentration: PSL (Polystyrene Latex) > 5 x 10^7 N/ft³
2) Typical scan speed is 1.5 – 2.0 inches/second.
3) Testing: The entire face of the filter is scanned with overlapping strokes with particular attention given to the media pack to frame seal.
A. Particle Counter Scanning: If a particle count is detected, the operator checks the area for continuous counts. If continuous counts in excess of the specified leakage threshold are detected, the leak is repaired.
B. Photometer Scanning: If a discernable displacement of the % Penetration indicator occurs, or the alarm sounds, the operator re-checks the area of concern. If the % Penetration indicator displacement exceeds the specified leakage threshold, the leak is repaired.
4) Leak Repairs: If a leak exceeds the specification, it is repaired with a silicone sealant. Alcohol-based silicones and polyurethane are also available for use as leak repair materials. After a repair has been made, the entire filter face is re-scanned.
Note: Photometer Scanning is generally reserved for HEPA filters, while Particle Counter Scanning is used for ULPA filters and/or for customers with stringent outgassing requirements.
Auto-Scanning Protocol
Auto Scanner Probe
Camfil Farr Auto-Scanners have been designed to detect pinhole leaks in HEPA/ULPA filters. The test apparatus is an automated, computer-controlled system, utilizing multiple particle counters for accuracy. Polystyrene Latex (PSL) is the standard challenge aerosol. To further enhance system sensitivity, Camfil Farr uses advanced dilution equipment for measuring high upstream particle concentrations. The automated system eliminates the possibility of incorrect test results that can result from human error. The computer interface controls filter airflow rate, test aerosol injection, particle counting upstream and downstream of the test filter, probe traverse rate, data reduction and data storage.
A description of system parameters follows:
1) System protocol includes:
a) Aerosol Concentration: PSL concentration = 3 x 10^8 N/ft³ (typical)
b) Particle Counter Flow = 1 CFM (cubic foot per minute)
c) Sampling = Isokinetic
d) Sample Time = Continuous
e) Size Range = 0.1 – 0.5 µm (0.1 µm band widths)
2) Required operator input:
a) Min./Max. and Rated Efficiency
b) Leakage factor (per customer specification)
c) Dilution ratio
d) Min./Max. and Rated Pressure Drop
e) Continuous upstream sampling during the scan process
f) Programmed to automatically traverse the filter with overlapping strokes. Proximity sensors (mounted in the probe) monitor the probe's location with respect to the clamping frame, ensuring that the probe overlaps the media-to-frame interface along the filter's perimeter.
g) The system utilizes the Rated Efficiency, Leakage Factor and Dilution Ratio inputs comparing downstream samples, from the entire scan, with the average upstream sample to determine if a leak exists.
h) If a leak is detected, a reject report is generated that indicates the magnitude and location of the leak.
i) Measuring pressure drop continuously across the filter. If the pressure drop is higher than specified, the filter is rejected.
j) Calculating global efficiency by integrating the discrete data points collected during the scan test. If the efficiency is less than specified, the filter is rejected.
The scan rate is calculated per IEST-RP-00001.3 Section 9.2.2:
Sr = (Cc × Ls × Fs × Dp) / (60 × NI)
Where:
Cc is the challenge concentration in particles/ft3
Ls is a significant leak in terms of standard penetration
Fs is the sample flow rate in CFM
Dp is the probe dimension expressed in inches parallel to the scan direction
NI is the number of particle counts that define the maximum leak
60 is the conversion factor from seconds to minutes.
Camfil Farr specifies that the variable NI is to be set to twice the particle counter background level or a minimum of 25.
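As a hedged worked example (the probe width and significant-leak penetration below are assumed values chosen for illustration, not Camfil Farr specifications), the scan-rate formula can be evaluated directly:

def max_scan_rate(cc, ls, fs, dp, ni):
    # Sr = Cc * Ls * Fs * Dp / (60 * NI), result in inches per second.
    # cc: challenge concentration (particles/ft^3)
    # ls: significant leak penetration (dimensionless)
    # fs: sample flow rate (CFM)
    # dp: probe dimension parallel to the scan direction (inches)
    # ni: particle counts defining the maximum leak (minimum of 25)
    return cc * ls * fs * dp / (60.0 * ni)

# Assumed values: 5e7 PSL/ft^3 challenge, 1 CFM counter, a 1-inch probe,
# a significant leak penetration of 1.5e-4, and NI = 25.
print(f"maximum scan rate = {max_scan_rate(5e7, 1.5e-4, 1.0, 1.0, 25):.1f} in/s")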
Camfil Farr’s Cam Count Efficiency Testing Protocol
Camfil Farr’s Cam Count efficiency test system is designed to test HEPA/ULPA filters per IEST-RP-CC007.1 and EN1822. All testing is performed per the controlled & documented procedures of Camfil Farr’s ISO 9001 certified quality system.
Camfil Farr’s Cam Count efficiency test system has been designed to measure the overall efficiency and pressure drop of HEPA/ULPA filters. The test apparatus is an automated, computer controlled system, utilizing a single laser particle counter for accuracy. Poly Alpha Olefin (PAO) is the standard challenge aerosol.
A Poly Styrene Latex Sphere (PSL) test aerosol is also available upon request and is utilized on all high temperature filters. To further enhance system sensitivity, Camfil Farr uses advanced dilution equipment for measuring high upstream particle concentrations. The automated system eliminates the possibility of incorrect data that can result from human error. The computer interface controls the flow rate, the test aerosol injection, particle counting upstream and downstream, and data collection, reduction and storage.
A description of system parameters follows:
1) System protocol includes:
a) Aerosol Concentration: PAO concentration = 3 x 10^8 N/ft³ (typical); PSL concentration = 1-3 x 10^8 N/ft³
b) Particle Counter Flow = 1 CFM (cubic foot per minute)
c) Sample Time = 20 second upstream & downstream sequentially (typical)
d) Size Range = 0.1 – 0.5 µm (0.1 – 0.2 µm, 0.2 – 0.3 µm, 0.3 – 0.5 µm, and > 0.5 µm)
2) Required operator input:
a) Minimum, maximum, & target efficiency
b) Minimum, maximum, & target pressure drop
c) Test flow rate
3) System Operation:
The system sequentially measures the upstream & downstream particle concentration. After applying the dilution ratio to the upstream concentration, it calculates the filter efficiency, while simultaneously measuring the filter pressure drop using a calibrated pressure transducer. These values are automatically compared to the input minimum & maximum values. A filter with values outside the specified range is rejected. The system automatically generates a test label that includes the test results for each passing filter.
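A minimal sketch of that pass/fail logic, with invented sample counts, dilution ratio, and limits (illustrative only, not Camfil Farr's actual test software):

def cam_count_accepts(upstream_counts, downstream_counts, dilution_ratio,
                      min_efficiency, measured_dp, max_dp):
    # Apply the dilution ratio to the upstream sample, compute efficiency,
    # then compare efficiency and pressure drop against the specified limits.
    true_upstream = upstream_counts * dilution_ratio
    eff = 1.0 - downstream_counts / true_upstream
    return eff >= min_efficiency and measured_dp <= max_dp

# Hypothetical 20-second samples: 40,000 counts upstream at 100:1 dilution,
# 110 counts downstream, 0.38 in. w.g. measured pressure drop, 0.45 maximum.
ok = cam_count_accepts(40_000, 110, 100, min_efficiency=0.99995,
                       measured_dp=0.38, max_dp=0.45)
print("filter accepted" if ok else "filter rejected")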
UL 900
Camfil Farr Megalam Panel and Ducted Ceiling Module type HEPA/ULPA filters are listed with Underwriters Laboratories per UL 900, "Standard for Test Performance of Air Filter Units" as either of the following:
Class 1: "those that, when clean, do not contribute fuel when attacked by flame and emit only negligible amounts of smoke".
Class 2: "those that, when clean, burn moderately when attacked by flame or emit moderate amounts of smoke, or both".
Please call the factory for the specific rating of your product(s).
Factory Mutual
Camfil Farr's Megalam Panel and Ducted Ceiling Module type HEPA/ULPA filters meet the approval requirements of Factory Mutual Research Corporation (FM) for product construction of limited combustibility, when installed in an approved ceiling grid. For this approval, FM tests the filter as a component in a complete ceiling grid system.
During the ten (10) minute fire exposure test for Factory Mutual Standard FM-4920 ceiling system approval, there was no visible ignition of the Camfil Farr filter and no flame spread. For this test, the ceiling system tested was composed of a third party ceiling grid, third party gel sealant, and Camfil Farr filter. The complete system passed all technical requirements of the standard.

Classifications Of Clean Rooms



Clean rooms are classified by the cleanliness of their air. The method most easily understood and most universally applied is the one suggested in the earlier versions (A to D) of Federal Standard 209 of the USA. In this old standard the number of particles equal to and greater than 0.5 µm is measured in one cubic foot of air, and this count is used to classify the room. The most recent 209E version has also accepted a metric nomenclature. In the UK the British Standard 5295, published in 1989, is also used to classify clean rooms. This standard is about to be superseded by BS EN ISO 14644-1.

Federal Standard 209

This standard was first published in 1963 in the US and titled "Clean Room and Work Station Requirements, Controlled Environments". It was revised in 1966 (209A), 1973 (209B), 1987 (209C), 1988 (209D) and 1992 (209E). It is available from:
Institute of Environmental Sciences and Technology
940 East Northwest Highway
Mount Prospect, Illinois, 60056 USA
Tel: 0101 708 255 1561
Fax: 0101 708 255 1699
e-mail: iest@iest.org
The clean room classifications given in the earlier A to D versions are shown in Table 1.

Table 1: Federal Standard 209D Class Limits

FS 209D Class Limits
In the new 209E, published in 1992, the airborne concentrations in the room are given in metric units (i.e. per m³), and the classification of the room is defined as the logarithm of the airborne concentration of particles ≥ 0.5 µm; e.g. a Class M3 room has a limit of 1000 particles ≥ 0.5 µm per m³. This is shown in Table 2.

Table 2: Federal Standard 209E Airborne Particulate Cleanliness Classes

FS 209E Cleanliness Classes

British Standard 5295:1989

This standard is available from:
B S I Standards
389 Chiswick High Road
London W4 4AL
Tel 0181 996 9000
Fax 0181 996 7400
e-mail: info@bsi.org.uk
Because of the imminent publication of EN ISO 14644-1, parts of this British Standard have a limited life and will be superseded by the ISO standards as they appear as EN standards.
The British Standard is in five parts. These are:
Part 0 - General introduction and terms and definitions for clean rooms and clean air devices. (4 pages)
Part 1 - Specification for clean rooms and clean air devices. (14 pages)
Part 2 - Method for specifying the design, construction and commissioning of clean room and clean air devices. (14 pages)
Part 3 - Guide to operational procedures and disciplines applicable to clean rooms and clean air devices. (6 pages)
Part 4 - Specification for monitoring clean rooms and clean air devices to prove continued compliance with BS 5295. (10 pages)
Part 1 of the standard contains ten classes of environmental cleanliness. Shown in Table 3 are the classes given in the standard. All classes have particle counts specified for at least two particle size ranges to provide adequate confidence over the range of particle size relevant to each class.

Table 3: BS 5295 Environmental Cleanliness Classes

BS 5295 Environmental Clean...

BS EN ISO Standard

Because of the large number of clean room standards produced by individual countries, it is very desirable that one worldwide standard of clean room classification be produced. The first ISO standard on clean rooms was published in June 1999 as ISO 14644-1, 'Classification of Air Cleanliness'. It is about to be adopted as a European standard and hence a standard for all countries in the EU. This standard is available from standards organizations throughout the world and in the UK is available from the BSI. Shown in Table 4 is the classification that has been adopted. Table 4. Selected ISO 14644-1 airborne particulate cleanliness classes for clean rooms and clean zones.
ISO 209 Cleanroom Classes
The table is derived from the following formula:
Cn = 10^N × (0.1 / D)^2.08
where:
Cn represents the maximum permitted concentration (in particles/m³ of air) of airborne particles that are equal to or larger than the considered particle size; Cn is rounded to the nearest whole number. N is the ISO classification number, which shall not exceed the value of 9; intermediate ISO classification numbers may be specified, with 0.1 the smallest permitted increment of N. D is the considered particle size in µm, and 0.1 is a constant with a dimension of µm. Table 4 shows a crossover to the old FS 209 classes, e.g. ISO 5 is equivalent to the old FS 209 Class 100.
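As a rough illustration (not part of the standard's text), the formula can be evaluated directly; the two calls below reproduce the familiar crossovers between ISO classes and the old FS 209 classes at the 0.5 µm size.

def iso_class_limit(n, d_um):
    # ISO 14644-1 class limit: Cn = 10**N * (0.1 / D)**2.08,
    # with D in micrometres; returns maximum particles per m^3.
    return round(10 ** n * (0.1 / d_um) ** 2.08)

print(iso_class_limit(5, 0.5))   # ~3.5e3 particles/m^3 >= 0.5 um, i.e. the old Class 100
print(iso_class_limit(7, 0.5))   # ~3.5e5 particles/m^3 >= 0.5 um, i.e. the old Class 10,000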
The occupancy state is defined in this standard as follows:
As built: the condition where the installation is complete with all services connected and functioning but with no production equipment, materials, or personnel present.
At-rest: The condition where the installation is complete with equipment installed and operating in a manner agreed between the customer and supplier, but with no personnel present.
Operational: The condition where the installation is functioning in the specified manner, with the specified number of personnel present and working in the manner agreed upon. The standard also gives a method by which the performance of a clean room may be verified, i.e. sampling locations, sample volume etc.; these are similar to FS 209. It also includes a method for specifying a room using particles outside the size range given in Table 4. Smaller (ultrafine) particles will be of particular use to the semiconductor industry, and the larger (≥ 5 µm) macroparticles will be of use in industries such as parts of the medical device industry, where small particles are of no practical importance. Fibers can also be used.
The method employed with macro particles is to use the format:
‘M(a; b);c’
where
a is the maximum permitted concentration/m3
b is the equivalent diameter.
c is the specified measurement method.
An example would be:
‘M(1 000; 10 µm to 20 µm); cascade impactor followed by microscopic sizing and counting’.

Pharmaceutical Clean Room Classification

EU GGMP

The most recent set of standards for use in Europe came into operation on the 1st of January 1997. This is contained in a ‘Revision of the Annex to the EU Guide to Good Manufacturing Practice-Manufacture of Sterile Medicinal Products’. The following is an extract of the information in the standard that is relevant to the design of clean rooms:
For the manufacture of sterile medicinal products four grades are given. The airborne particulate classification for these grades is given in the following table.
Medicinal Cleanroom Classific...
Notes:
(a) In order to reach the B, C and D air grades, the number of air changes should be related to the size of the room and the equipment and personnel present in the room. The air system should be provided with appropriate filters such as HEPA for grades A, B and C.
(b) The guidance given for the maximum permitted number of particles in the "at rest" condition corresponds approximately to the US Federal Standard 209E and the ISO classifications as follows: grades A and B correspond with class 100, M 3.5, ISO 5; grade C with class 10 000, M 5.5, ISO 7 and grade D with class 100 000, M 6.5, ISO 8.
(c) The requirement and limit for this area will depend on the nature of the operations carried out. The particulate conditions given in the table for the "at rest" state should be achieved in the unmanned state after a short "clean up" period of 15-20 minutes (guidance value), after completion of operations. The particulate conditions for grade A in operation given in the table should be maintained in the zone immediately surrounding the product whenever the product or open container is exposed to the environment. It is accepted that it may not always be possible to demonstrate conformity with particulate standards at the point of fill when filling is in progress, due to the generation of particles or droplets from the product itself. Examples of operations to be carried out in the various grades are given in the table below. (see also par. 11 and 12).
Cleanroom Uses by Class
Additional microbiological monitoring is also required outside production operations, e.g. after validation of systems, cleaning and sanitization.
Microbial Contamination Limits
Notes:
(a) These are average values.
(b) Individual settle plates may be exposed for less than 4 hours.
(c) Appropriate alert and action limits should be set for the results of particulate and microbiological monitoring. If these limits are exceeded operating procedures should prescribe corrective action.

Isolator and Blow Fill Technology (extract only)

The air classification required for the background environment depends on the design of the isolator and its application. It should be controlled and for aseptic processing be at least
grade D.
Blow/fill/seal equipment used for aseptic production which is fitted with an effective grade A air shower may be installed in at least a grade C environment, provided that grade A/B clothing is used. The environment should comply with the viable and non-viable limits at rest and the viable limit only when in operation. Blow/fill/seal equipment used for the production of products for terminal sterilization should be installed in at least a grade D environment.

Guideline on Sterile Drug Products Produced by Aseptic Processing

This document is produced by the FDA in the USA and published in 1987. Two areas are defined. The ‘critical area’ is where the sterilized dosage form, containers, and closures are exposed to the environment. The ‘controlled area’ is where unsterilized product, in-process materials, and container/closures are prepared.

The environmental requirements for these two areas given in the Guide are as follows:

Critical areas ‘Air in the immediate proximity of exposed sterilized containers/closures and filling/closing operations is of acceptable particulate quality when it has a per-cubic foot particle count of no more than 100 in a size range of 0.5 micron and larger (Class 100) when measured not more than one foot away from the work site, and upstream of the air flow, during filling/closing operations. The agency recognizes that some powder filling operations may generate high levels of powder particulates, which by their nature do not pose a risk of product contamination. It may not, in these cases, be feasible to measure air quality within the one-foot distance and still differentiate "background noise" levels of powder particles from air contaminants, which can impeach product quality. In these instances, it is nonetheless important to sample the air in a manner, which to the extent possible characterizes the true level of extrinsic particulate contamination to which the product is exposed.
Air in critical areas should be supplied at the point of use as HEPA filtered laminar flow air, having a velocity sufficient to sweep particulate matter away from the filling/closing area. Normally, a velocity of 90 feet per minute, plus or minus 20%, is adequate, although higher velocities may be needed where the operations generate high levels of particulates or where equipment configuration disrupts laminar flow.
Air should also be of a high microbial quality. An incidence of no more than one colony-forming unit per 10 cubic feet is considered attainable and desirable.
Critical areas should have a positive pressure differential relative to adjacent less clean areas; a pressure differential of 0.05 inch of water is acceptable’.
Controlled areas ‘Air in controlled areas is generally of acceptable particulate quality if it has a per-cubic-foot particle count of not more than 100,000 in a size range of 0.5 micron and larger (Class 100,000) when measured in the vicinity of the exposed articles during periods of activity. With regard to microbial quality, an incidence of no more than 25 colony forming units per 10 cubic feet is acceptable.
In order to maintain air quality in controlled areas, it is important to achieve a sufficient airflow and a positive pressure differential relative to adjacent uncontrolled areas. In this regard, airflow sufficient to achieve at least 20 air changes per hour and, in general, a pressure differential of at least 0.05 inch of water (with all doors closed), are acceptable. When doors are open, outward airflow should be sufficient to minimize ingress of contamination’.
Comparison of CR Standards
This information was compiled from various sources including the listed agencies and the handbook ‘Clean Room Technology’ as written by Bill Whyte.

COLD CLEAN ROOM DESIGN IN A COST CONSCIOUS WORLD


Written by Rob Smith -
Executive Summary:
The need for biotech companies to design purification cold rooms and tissue culture warm/clean rooms to classified conditions that meet cGMP standards and allow proper validation is critical to future growth and product development. However, the excessive cost of the design standards often practiced by the larger pharmaceutical companies necessitates a change in design philosophy without an appreciable change in performance. This paper provides alternatives, time-tested in the field and completely validated, that maintain Class 10,000 and Class 1,000 conditions with Class 100 zones for applications at 4 °C and +37 °C.
Prior to the early 1990s, institutional and pharmaceutical cold rooms were typically held at 4 °C with no attempt at clean room classification. Although uniformities at the working plane were often held to ±0.5 °C to prevent protein denaturing, process contamination, clean room classification and ultimate validation were not variables often considered. Although the majority of vendors in the environmental room industry have been providing strip chart test recording for many years, complete IQ and OQ validation was strictly in the hands of the client, with little pressure from external sources.
With the advent and growth of biotechnology products in the 80s and 90s, the need for purification cold/clean rooms and tissue culture warm/clean rooms, typically at Class 1,000 or Class 10,000 conditions, intensified. Recognizing that biotech companies operate in an extremely cost-conscious environment to reduce the burn rate on the company's cash flow, alternative strategies had to be developed to achieve these conditions at reduced cost. The traditional clean room approach, with remote air handlers and recirculating fan units, was excessive in cost because of the amount of insulation required for 4 °C applications and because the ductwork for these cold rooms had to be welded to prevent condensation at critical joints. Although skidded air handlers, often with dehumidification systems, had been provided to pharmaceutical companies, the excessive cost of these cold/clean rooms was beyond the means of many biotech companies. Working with companies such as Genetics Institute, Genzyme, and BASF Bio Research, Harris Environmental Systems embarked on a path to create cost-efficient cold/clean rooms for the biotech industry.

Fundamental Design Criteria

In general terms, the design criteria for cold/clean rooms for purification applications in a pilot plant or production suite are as follows:
Temperature: 4 °C to ambient
Temperature Uniformity: ±0.5 °C
Temperature Gradient (point-to-point control): ±1 °C
Classification: Class 1,000 or Class 10,000 with Class 100 zones
Humidity Control: Not always applicable, but limited defrost cycles, if any at all, were requested
Refrigeration Redundancy: Options usually required
Validation: All systems must meet validation, IQ and OQ protocols

Solutions To Meet Design Criteria:

Figure 1 shows the typical pharmaceutical cold/clean room utilizing a remote double-wall conditioning system, complete with a desiccant dehumidifier ducted to HEPA filters within the cold/clean room. This system has many advantages, including complete service access remote from the space, uniform architectural finishes with HEPA filters cut into the roof panels (which eliminates any potential ceiling leakage), and extremely quiet operation due to the ability to install large sound attenuators. Additionally, air balancing is provided by using flow hoods directly in the room, either during the certification process or at a separate time. By installing a desiccant dryer integral to the air handling system, the dewpoint can be maintained below the coldest surfaces, which are the evaporator temperatures, and thus no defrost cycles are required. This allows for uniform temperature at all times, with the additional benefit of preventing any potential mold or mildew growth due to higher relative humidities. Although these systems are by far the most preferred, and are typically used for large pharmaceutical production operations, the expense involved is quite high. Double-wall air handlers with completely clean-in-place finishes are extremely expensive. The welded stainless steel ducting with 2" isocyanurate duct insulation required by the low temperatures is another added cost, and the additional power and control runs from the remote air handler to the room add still more. All in all, these systems are wonderfully designed and perform extremely well, easily meeting all IQ/OQ requirements, but at an extremely high cost, beyond the means of most new and even many established biotech companies.
Typical pharmaceutical cold/clean room
Figure 1. Typical pharmaceutical cold/clean room

What to Do:

In looking at the solutions to this problem, there are certain aspects of the design criteria that cannot be changed or sacrificed. Room temperature control, temperature uniformity, room gradients and air changes per hour based on the classification must all be maintained. Additionally, any pressurization, if applicable, must be maintained. Furthermore, the possibility of utilizing desiccant dryers may, in fact, still be required if the client does not allow any defrost cycles. The solution proposed by Harris Environmental Systems and validated at many facilities, as shown on the attached Fig. 2, utilizes fan/coil plenum air handlers with fan/filter modules. With this design, first we calculate the required air volume based on the required gradient and the expected internal heat load. This total air volume is then matched with the air volume required to maintain the classification. Whichever air volume is greater is utilized and all air is then passed up a return wall with pre-filters, through fan/coil units, and into fan/filter modules, down through plenum ceilings. The result is a truly modular room with everything completely enclosed within the room with the exception of the condensing unit, which would be remote in all cases, and a small desiccant dryer that can be installed directly above the roof panels of the insulated environmental room.
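The airflow sizing step described above can be sketched as a simple calculation; the heat load, allowable temperature rise, room size, and air-change target below are assumptions chosen for illustration, not Harris design values:

def required_supply_cfm(heat_load_btuh, delta_t_f, room_volume_ft3, target_ach):
    # Airflow needed to carry the internal heat load at the allowed
    # supply-to-room temperature difference (Q = 1.08 * CFM * dT) ...
    cfm_for_heat = heat_load_btuh / (1.08 * delta_t_f)
    # ... and airflow needed to meet the air-change rate for the cleanliness class.
    cfm_for_class = target_ach * room_volume_ft3 / 60.0
    # The larger of the two governs the fan/coil and fan/filter selection.
    return max(cfm_for_heat, cfm_for_class)

# Assumed: 10,000 Btu/h internal load, 4 F allowable rise,
# a 12 ft x 16 ft x 9 ft room, 60 air changes per hour.
print(f"design airflow = {required_supply_cfm(10_000, 4, 12 * 16 * 9, 60):.0f} CFM")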
Advantages that the biotech company realizes are as follows:
  • FDA validation for IQ and OQ is obtainable, as proven by installations Harris has completed at BASF Bio Research, Genetics Institute, Genzyme and others.
  • This is a cost-effective design since no external ducting outside of that to the dehumidifier is required.
  • The plenum after the fan/coil units is at a negative pressure due to the installation of fan/filter modules, which eliminates any potential ceiling grid leakage.
  • The installation time and labor requirement is simplified greatly due to everything being close-coupled to the plenum ceiling.
Years ago, the disadvantage of this type of system was that servicing had to be performed from within the cold/clean room and that sound levels remained quite high. With the newer, highly energy-efficient and extremely quiet fan/filter modules, sound levels of 60 to 65 dB can easily be attained, even with over 50% filter ceiling coverage. Although coils, valves, and other components must still be serviced in the room, the use of room-side-replaceable HEPA filters makes this a fairly simple matter on the filtration side. If redundant air handlers are utilized, we have found maintenance to be extremely low for refrigeration and air-moving devices. Finally, complete certification and validation is attainable as long as PID or microprocessor-based discrete controls are utilized. The end result for the biotech company is that rooms of any size, completely modular and relocatable in the future, can be installed with this design.
Typical biotech purification cold/clean room
Figure 2. Typical biotech purification cold/clean room

To Dehumidify or not to Dehumidify:

With ventilation requirements for laboratory spaces at a minimum of 20 CFM per person, most larger purification cold/clean rooms have in the vicinity of 50 to 100 CFM of makeup air. Harris Environmental Systems' experience, as well as basic physical laws, indicates that excessive defrost cycles will occur with cold, wet makeup air unless desiccant dehumidification is applied to this makeup air to maintain the dewpoint below the coldest surface temperatures. Other means, such as refrigeration stripper coils operating continuously at a low suction temperature from the refrigeration system, have been employed in an attempt to dehumidify makeup air volumes. At best these methods can be described as secondary, because makeup air coils at low temperatures can easily freeze up, causing additional defrost problems. Therefore, if the client requires no defrost cycles at any time, it is highly recommended that a small desiccant dryer be installed remote from the room and ducted to the return side of the air handling system within the plenum.
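A back-of-the-envelope check of the "dewpoint below the coldest surface" rule can be made with the Magnus dew-point approximation; the makeup-air condition and evaporator surface temperature below are assumed values for illustration:

import math

def dew_point_c(temp_c, rh_percent):
    # Magnus approximation for dew point in degrees C.
    gamma = math.log(rh_percent / 100.0) + 17.62 * temp_c / (243.12 + temp_c)
    return 243.12 * gamma / (17.62 - gamma)

# Assumed: makeup air desiccant-dried to 10% RH at 20 C; evaporator surface at -5 C.
makeup_dew_point = dew_point_c(20.0, 10.0)   # about -13 C
evaporator_surface_c = -5.0
print(f"makeup-air dew point = {makeup_dew_point:.1f} C")
print("no frost expected" if makeup_dew_point < evaporator_surface_c
      else "dry the makeup air further")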

SUMMARY

Purification cold/clean rooms and tissue culture warm/clean rooms have been installed for almost 20 years by Harris Environmental Systems throughout North America. The proposed design utilizing fan/filter modules, small desiccant dehumidifiers, and internal fan/coil units is predominantly what has been chosen by the biotech industry, and in some cases by larger pharmaceutical clients. Pilot plant and production facilities have met IQ and OQ validation procedures with this design approach. Redundant refrigeration systems can easily be employed when the room is large enough to fit a secondary internal air handler within the plenum. Cost savings on the order of 40 to 50% over remote skidded dehumidification/air handling systems can be expected with the proposed fan/filter, fan/coil design.

Saturday, August 18, 2012

Focus on Nanopharmaceuticals



Everyone is talking about nanotechnology these days. Nanotechnology refers to the science of discovering what happens when you make things that are very, very small. Typically the new nanomaterials and nanodevices being developed are measured in nanometers. One nanometer is one-billionth of a meter. To put that scale in context, a nanometer is to a meter as a marble is to the size of the Earth.1
One area that shows great potential is in medicine with the development of new nanopharmaceuticals. According to a report published by BCC Research, the market value of the worldwide nanomedicine industry was $72.8 billion in 2011. The market is estimated to grow at a CAGR of 12.5% to reach $130.9 billion by the fiscal year 2016. The market for anti-cancer products was valued at $28 billion in the fiscal year 2011 and is anticipated to reach $46.7 billion by the fiscal year 2016.
The pharmaceutical industry is undergoing a revolution, and it is clear that many in business and science are predicting that nanopharmaceuticals will play a major role in the future of medicine. The driving force for this revolution is the need to develop new drugs to meet market demand. However, it is becoming extremely difficult to develop new drugs the traditional way, and an alternative R&D approach has to be found. Nanopharmaceuticals may be the answer. To understand why, we must first examine the current state of the pharmaceutical industry and how it will respond to the socio-economic changes occurring in the world.
Taxol Is an Example to Follow
Pharmaceuticals can be broadly classified into two categories: patent-protected high-priced branded drugs and low-cost generic drugs. However, the pipeline for developing new drugs is rapidly declining because of the extremely high costs of research and development. It can cost up to $1 billion and take 10 years to develop a new drug, and companies are reluctant to make this investment because there is uncertainty as to whether they can recoup their investment and make a profit in today’s business climate of reducing healthcare expenses.
So if it’s too expensive to make a new drug what can a company do to survive? One way is to simply engage in price-cutting and make up in volume what was lost in pricing. This is the approach taken by most generic manufacturers. The consequences of this approach will be the survival of the fittest, and fewer manufacturers of generic drugs in the future.
Another way is to take an existing drug and to make it better in some way (e.g. safer or more effective so that it is not subject to price competition from generics). So how do you make an existing drug better? You can’t easily change the chemistry of the drug itself, that’s more-or-less fixed. You can, however, change the way the drug is delivered and utilized in the body. And this is where nanopharmaceuticals will play a major role in the future.
Consider the case of Abraxane®, which was the first novel drug nanoparticle formulation to be successfully commercialized. The way this drug was developed, registered with FDA, and marketed is a case study in how all future nanopharmaceuticals will be developed.
Taxol® (paclitaxel) is a drug used to treat breast cancer. The drug is insoluble and has to be dissolved in castor oil before it is infused intravenously into the patient. When the patent for Taxol expired, it became open to generic competition. One manufacturer of generic paclitaxel developed a novel method of drug delivery using nanosized particles of paclitaxel coated with human albumin, called Abraxane®. Patients tolerated the albumin-paclitaxel nanoparticles better than paclitaxel in castor oil, and the time for drug infusion was shortened significantly. The general conclusion was that this could be the best generic version of paclitaxel available. But here is where it gets really interesting. Abraxis petitioned the FDA to consider Abraxane a reformulated Taxol, not a generic Taxol, under the 505(b)(2) pathway, in order to obtain marketing approval without all the requirements of a full-fledged NDA and yet retain the limited market exclusivity of a new drug. Abraxane's clinical trial showed that it has a different safety and efficacy profile compared to Taxol, so it is a de facto "new drug" for marketing purposes.
This special 505(b)(2) regulatory approval route by the FDA is extremely important because it opened up a new approach for developing new drugs at a fraction of the cost of more traditional development. Now it is possible to take any generic drug and change the delivery system to create a new drug with all the advantages that this brings. It also means that small companies can compete on more equal footing with large pharmaceutical companies because R&D costs can be much less of a barrier to innovation. Innovator companies developing novel nanopharmaceuticals will become prime targets for acquisition or partnerships with large, established companies.
Nanosized Drug Delivery Systems
The term “nanopharmaceuticals” covers a diverse collection of different nanosized drug delivery systems that can be used to create a new combination drug. These include nanoparticles, nanoemulsions, liposomes, dendrimers, nanocapsules, and lipid nanospheres to name a few. This article will focus on nanoparticles and liposomes to illustrate the significant advantages that these new combination drugs have over current drugs.
If you take an insoluble drug such as paclitaxel and make it into nanosized particles, then the particles behave just as though they were in solution. They flow freely inside the blood vessels and do not clog up the circulation. And, because they are nanosized, the surface area of each particle is very large compared to the volume and therefore the drug can dissolve more readily and become bioavailable. This could prove significant in the area of cancer drugs where 40% of all cancer drugs are insoluble. Nanosizing them would make them more “soluble” and therefore more bioavailable in the body.
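The surface-area argument is easy to quantify: for a sphere, the surface-to-volume ratio scales as 3/r, so shrinking a particle from micrometers to nanometers multiplies the exposed surface per unit of drug. The short sketch below uses hypothetical particle diameters purely to illustrate the scaling.

```python
import math

def surface_to_volume_ratio(diameter_nm: float) -> float:
    """Surface-area-to-volume ratio (1/nm) of a sphere of the given diameter."""
    r = diameter_nm / 2.0
    area = 4.0 * math.pi * r ** 2
    volume = (4.0 / 3.0) * math.pi * r ** 3
    return area / volume          # algebraically equal to 3 / r

# Compare a hypothetical 10 micrometre drug particle with a 100 nm nanoparticle.
for d in (10_000.0, 100.0):       # diameters in nanometres
    print(f"{d:>8.0f} nm particle: S/V = {surface_to_volume_ratio(d):.4f} per nm")
# The 100 nm particle exposes about 100x more surface per unit volume,
# which is why the nanosized drug dissolves and becomes bioavailable faster.
```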
Another way to treat insoluble drugs is to dissolve them in a lipid solution (e.g. oil) and then emulsify the drug-in-oil solution to prepare a nanoemulsion composed of nanosized droplets of oil containing the drug. Depending on the phase-transition temperature of the lipid, the droplets will remain an oil above that temperature or harden into solid lipid nanospheres below it. As discussed earlier for nanoparticles, many insoluble drugs can also be formulated as nanoemulsions or lipid nanospheres to improve them.
Soluble cancer drugs present a different kind of problem. When a soluble cancer drug is injected into a patient, it quickly distributes into all the body tissues, so only a small fraction of the drug actually reaches the tumor. Most of the drug enters normal tissues, where it kills normal dividing cells and causes the serious side effects associated with chemotherapy. Another disadvantage of soluble drugs is their rapid elimination from the body through the kidneys. Here again, nanopharmaceuticals come to the rescue. Soluble drugs can be encapsulated within nanosized lipid vesicles called "liposomes" that change their behavior for the better within the body.
Liposomal drugs provide a good example of the many advantages that nanopharmaceuticals offer. First, although liposomes are small enough to circulate freely within the bloodstream, they are too large to pass through the endothelial pores of normal blood capillaries, so they cannot penetrate normal tissues and cause harm. Second, when a drug is enclosed within a liposome it is protected from being detoxified by the liver, so more drug is bioavailable to act upon the tumor. Third, because the drug is released gradually over time as the liposomes begin to leak, there is less risk of an acute systemic toxic reaction. Finally, more drug remains bioavailable for a longer period of time to act upon the tumor and inhibit its growth.
Liposomes are composed of a lipid membrane made up of phospholipids (e.g. egg lecithin) surrounding an aqueous interior. The simplest way of encapsulating a soluble drug is to hydrate the dried phospholipids with a drug solution whereupon a portion of the drug solution becomes entrapped within the interior of the liposome when it forms. Liposomes can also be used to carry insoluble drugs inside the lipid membrane layer by dissolving the drug in the lipid, drying the lipids and then hydrating the dried lipids with an aqueous solution.
One important feature of liposomes is that they can be made to be a certain size (e.g. 100 nm) so that they are small enough to pass through the enlarged endothelial pores of “leaky” blood vessels supplying the tumor and penetrate into the tumor tissue where the drug is released for maximum effect. This can result in ten times the amount of the cancer drug reaching the tumor as compared to the free drug.
Finally, there is ongoing research into specialized liposomal drugs that can target the tumor by attaching a tumor-targeting antibody to the surface of the liposomes. These are called immunoliposomes. A number of anti-tumor antibodies have been developed that target specific growth factor receptors on the cancer cell. For example, the antibody Herceptin® targets Human Epidermal growth factor Receptor 2 (HER-2), which is over-expressed in certain breast cancers, and the antibody Erbitux® targets the Epidermal Growth Factor Receptor (EGFR) present in certain cancers of the head and neck. Animal studies showed that when the HER-2 antibody was attached to the surface of liposomes incorporating a cancer drug, the immunoliposomes bound to the breast cancer cells and killed them.

The Next-Generation of Novel Drugs
As pointed out earlier, there is a variety of different nanosized drug delivery systems such as liposomes, micelles, dendrimers, nanocapsules, and nanoparticles under development. Within each delivery system there is also a tremendous amount of variety in the physical and chemical components that can be used to make up each individual combination drug. Each new combination drug will have its own unique safety and efficacy profile. In the final analysis, the success or failure of any new drug will depend on its performance in clinical studies in comparison to other competing pharmaceuticals.
Nanopharmaceuticals represent the newest generation of novel drugs being developed. There are extensive ongoing research studies underway, and substantial investments being made in this area of medicine. It will be interesting to see which particular novel drug/delivery combinations will be the ones to succeed.

REFERENCES:
1. BCC Research Report, http://www.azonano.com/news.aspx?newsID=24136
2. Kahn, Jennifer (2006). "Nanotechnology". National Geographic 2006 (June): 98–119.
3. Cancer Nanotechnology: "Going Small for Big Advances". U.S. Department of Health and Human Services, National Cancer Institute, January 2004.
4. Torchilin, V. P. Targeted Pharmaceutical Nanocarriers for Cancer Therapy and Imaging. AAPS Journal 2007; 9 (2), Article 15.
5. Drummond et al. Optimizing Liposomes for Delivery of Chemotherapeutic Agents to Solid Tumors. Pharmacological Reviews, Vol. 51, No. 4, pp. 691-742, 1999.

Process Validation

Very simply, the aim of pharmaceutical development is to design a quality product and manufacturing process to consistently deliver the intended performance of a final therapeutic product.
To support a quality-assurance approach to manufacturing, it is the information and knowledge gained from pharmaceutical development studies and process characterization studies that lead to an effective, science-based quality control strategy.
To that point, in January 2011 the FDA issued its new guidance on Process Validation (PV). The new PV guidance uses the basic principles of scientific understanding put forth in ICHQ8, the foundation of Quality by Design (QbD), to establish process understanding and link it to product reproducibility. It effectively abandons the old concept of demonstrating process validation and replaces it with a new, structured approach. It formalizes these principles by describing the level of product and process understanding needed to satisfy Stage 1 of the new PV guidance. Achieving this level of process understanding requires a framework that integrates product performance into process characterization. Simply put, the new PV guidance will make it much easier to justify moving toward QbD.
The challenge that many industry personnel face is bridging the gap between the former validation approach of “three batches and we’re done” to understanding how the new PV stages work together to establish process predictability.
In practical terms, the new PV guidance describes the three stages as follows:
Stage 1 – Process Design: The commercial manufacturing process is defined during this stage based on knowledge gained through development and scale-up activities.
Stage 2 – Process Qualification: During this stage, the process design is evaluated to determine if the process is capable of reproducible commercial manufacturing.
Stage 3 – Continued Process Verification: Ongoing assurance is gained during routine production that the process remains in a state of control.1
Predictability is the guiding principle. Pharmatech has established a framework designed to generate the information needed to satisfy the new PV structure and provide a manageable plan. This framework has been discussed in detail in previous publications and is shown in Figure 1.
Figure 1: Process Validation Framework
Let us focus on Stage 1 - Process Design, because it constitutes the largest change from the classic PV roadmap. This stage is meant to identify and establish a control strategy for the parameters and attributes that affect process predictability and product performance.

Quality by Design and Process Validation
The model most commonly cited as embodying the principles of QbD is shown in Figure 2.
Figure 2: Quality by Design Model
The key concept in Stage 1 of the new PV guidance and in the QbD model is the link between Product Design and Product Performance, as depicted in the inner circle of Figure 2. We will discuss each of these elements within the context of the PV framework and the QbD model.
Product Design
All projects begin with a summary of the target product performance attributes that form the foundation of the formulation activity. Understanding the role of each component in the function of the dosage form should include characterizing material attributes that could influence downstream processing, such as particle size, solubility, melting point, bulk density, presence of polymorphs in the drug substance, loaded dose, etc. Ideally, a review of product design should also extend to understanding the basis for the in-vitro product release specifications and any in-process control measurements.
Product design review can be summarized in a risk table capturing the potential impact of each component on the process and product performance downstream. An example of a product performance risk summary is shown in Table 1.
 Table 1: Product Design Risk Summary
Process Design-Process Risk Assessment
A process risk assessment is a very effective way to connect the product design, process unit operation and final product performance critical quality attributes (CQAs). In this step a tiered risk assessment approach can be used to identify potential sources of variability before beginning process characterization studies. The risk assessment can be divided into two parts. The first evaluation compares each process step against the defined CQAs to identify which process steps would require close characterization. The second tier of the risk assessment can focus upon the potential impact of the process parameters. Any parameters identified as having a high potential impact on CQAs can be targeted for further study.
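As a rough illustration of the second tier of such an assessment, the sketch below scores a few hypothetical process parameters against hypothetical CQAs and flags any parameter with a high-impact score for further characterization. The parameter names, attributes, scores, and threshold are all invented for illustration; a real assessment would use the product's own CQAs and a documented scoring scale.

```python
# Illustrative tier-2 risk screen: score each process parameter against each CQA
# (1 = low, 3 = medium, 9 = high impact) and flag parameters above a threshold.
# All parameter names, CQAs, and scores here are hypothetical.
risk_scores = {
    "blend time":         {"content uniformity": 9, "dissolution": 3, "assay": 1},
    "compression force":  {"content uniformity": 1, "dissolution": 9, "assay": 1},
    "drying temperature": {"content uniformity": 1, "dissolution": 3, "assay": 3},
}

THRESHOLD = 9  # any single high-impact score triggers further characterization

for parameter, scores in risk_scores.items():
    flagged = max(scores.values()) >= THRESHOLD
    status = "characterize further" if flagged else "monitor only"
    print(f"{parameter:<20} total={sum(scores.values()):>2}  -> {status}")
```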
Process Characterization: Knowledge Space, Design Space, Control Space
The use of equipment that has been verified to operate in a consistent manner is essential to establishing process predictability. While small-scale equipment used in development does not need to be formally qualified, if the equipment is not predictable, the characterization studies will not be reproducible. The same consideration applies to any analytical or in-process measurements used during the development work.
Per the ICHQ8 guidance, the first step in establishing process understanding is to define the knowledge space. Defining the knowledge space means describing, for each unit operation, how the full set of variables contributes to process stability across a practical range of variability. Applying a statistically unbiased approach to process characterization allows rapid identification of the parameters that affect both process stability and product performance. Commercial process challenges can also be introduced at this stage to limit variability during scale-up.
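As one hedged example of a statistically unbiased characterization plan, the sketch below enumerates a two-level full-factorial design for a single, hypothetical unit operation. The factor names and ranges are placeholders, not values taken from this article; the point is simply that every combination of the practical ranges is visited so parameter effects can be separated without bias.

```python
from itertools import product

# Sketch of a knowledge-space screening design: a two-level full factorial
# across practical ranges for one unit operation. Factor names and ranges
# are hypothetical placeholders.
factors = {
    "inlet temperature (C)": (50, 70),
    "spray rate (g/min)":    (10, 30),
    "atomization (psi)":     (15, 25),
}

runs = list(product(*factors.values()))   # 2^3 = 8 runs
for i, levels in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={value}" for name, value in zip(factors, levels))
    print(f"run {i}: {settings}")
# Responses (process stability and product performance measurements) would be
# recorded for each run and analysed to identify the influential parameters.
```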
At the end of the knowledge space exercise, the parameters found to affect process stability and product performance should be evaluated using the same experimental design approach to narrow the process space. ICH refers to this as the Design Space. The ideal Design Space is one in which the process parameters have no impact on process stability or product performance. This activity is critical to establishing a Proven Acceptable Range (PAR) for the process.
One challenge with the new guidance is determining a defensible sampling and testing plan for the characterization activity. Sampling and testing plans can be prototyped, evaluated, and qualified at small scale in anticipation of commercial scale-up. Industry standards such as the ANSI Z1.4-2008 tables can be a good resource for establishing and justifying sample sizes. In some cases these sample sizes may be too large, and other approaches, such as performing a power calculation, generating Operating Characteristic (OC) curves, and calculating the Lot Tolerance Percent Defective (LTPD) to define the consumer risk, are effective ways of making sure the sampling plan has sufficient resolution to address the desired producer and consumer risk.
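For instance, the probability of accepting a lot under a simple single-sampling plan can be computed directly from the binomial distribution, which is essentially what an OC curve plots. The sketch below assumes a hypothetical plan of 50 samples with acceptance on zero defectives; the defect rates are illustrative.

```python
from math import comb

def prob_accept(n: int, c: int, p_defective: float) -> float:
    """Probability a lot is accepted under a single-sampling plan (n samples,
    accept if at most c defectives), assuming a binomial model."""
    return sum(comb(n, d) * p_defective ** d * (1 - p_defective) ** (n - d)
               for d in range(c + 1))

# Hypothetical plan: n = 50 units inspected, accept on c = 0 defectives.
n, c = 50, 0
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"true defect rate {p:.0%}: P(accept) = {prob_accept(n, c, p):.2f}")
# Reading off the curve: the defect rate at which P(accept) falls to ~10% is the
# LTPD, i.e. the consumer-risk point for this particular sampling plan.
```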
The last step in the characterization program is defining the final Normal Operating Range (NOR), which ICHQ8 calls the Control Space. The Control Space is typically a narrower portion of the Design Space and represents the recommended limits that will be allowed in the master batch record.
Process Design-Validation Master Plan
With the characterization activities complete, the Validation Master Plan (VMP) can be developed to set out the rationale, justification, and final commitments for moving into Stage 2 of the new PV guidance structure.
Conclusion
Applying the basic principles of QbD as defined in ICHQ8 will establish the necessary foundation to satisfy the requirements for process understanding required to meet Stage 1 of the new PV guidance. The framework developed by Pharmatech is a practical roadmap for navigating the application of the principles of ICHQ8 and will identify those parameters which are critical to process stability and product reproducibility. With this foundation of understanding, the process is well positioned to move into Stage 2 of the new process validation guidance that will require final demonstration of process reproducibility at the commercial scale. For organizations that have struggled to get a foothold with QbD, the new guidance will provide the impetus for change if they are to successfully meet the new requirements for process and product reproducibility.
References
1. Guidance for Industry, Process Validation: General Principles and Practices, U.S. Department of Health and Human Services, Food and Drug Administration, CDER, CBER, CVM, Current Good Manufacturing Practices (CGMP), Revision 1, January 2011
2. Chatterjee, Wong and Rafa, Using Operational Excellence to Meet the New Process Validation Guidance, Pharmaceutical Engineering, Sept 2011

Thermal Validation in the Pharmaceutical Industry




Considerations in Selecting a Temperature Sensor


The pharmaceutical industry is a highly regulated environment based on research, evidence, record-keeping, and validation. The term "thermal validation" refers to the process of validating/qualifying equipment and storage facilities to prove that they create and maintain the temperatures they are designed for.
For those responsible, choosing the right temperature validation tool is the first decision, and making that choice requires a thorough understanding of the different sensor types. This article focuses on two common sensors: thermocouples and thermistors (see the table below).
It discusses the advantages and disadvantages of each sensor, especially as they are used in the pharmaceutical industry. But first, brief definitions of thermocouples and thermistors:
  • A thermocouple is made of two dissimilar metals in contact with each other. It works by generating a small voltage proportional to the temperature difference between its measuring junction and its reference junction.

  • A thermistor is a resistive device made up of metal oxides formed into a bead and encapsulated in epoxy or glass. As temperature changes, so does the resistance, producing a large, easily measured change in voltage (a minimal conversion sketch follows below).
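As a concrete illustration of how simple it is to work with a thermistor signal, the sketch below converts an NTC thermistor's resistance to temperature using the common beta-parameter model. The R0, T0, and beta values are typical catalogue numbers chosen for illustration, not the parameters of any particular Veriteq sensor.

```python
import math

def thermistor_temp_c(resistance_ohm: float,
                      r0_ohm: float = 10_000.0,
                      t0_c: float = 25.0,
                      beta: float = 3950.0) -> float:
    """Convert NTC thermistor resistance to temperature using the simple
    beta-parameter model. R0/T0/beta are typical catalogue values used here
    only as an illustration."""
    t0_k = t0_c + 273.15
    temp_k = 1.0 / (1.0 / t0_k + math.log(resistance_ohm / r0_ohm) / beta)
    return temp_k - 273.15

print(f"{thermistor_temp_c(10_000.0):.2f} C")  # ~25.00 C at the nominal resistance
print(f"{thermistor_temp_c(5_000.0):.2f} C")   # resistance halves for roughly +16 C
```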
The following table summarizes the stability and temperature ranges of the two sensor types and briefly describes the differences between thermocouples and thermistors.

Thermocouple vs. Thermistor

  • Temperature range: thermocouple, -270 to 1800°C (-454 to 3272°F); thermistor, -86 to 150°C (-123 to 302°F)
  • Sensitivity: thermocouple, low; thermistor, high
  • Stability: thermocouple, low; thermistor, high
  • Time savings*: thermocouple, lengthy set-up; thermistor, minimal set-up
  • Sources of error*: thermocouple, many; thermistor, few
  • Accuracy*: thermocouple, low; thermistor, high
  • Ideal applications: thermocouple, high-temperature oven profiling and cryogenic freezing; thermistor, warehouse monitoring, stability testing, chamber qualification, cooler and freezer monitoring, lab monitoring, and cold chain monitoring

* This comparison looks at a total data logging system, and not just the sensor.

Temperature Range: The Key to Selecting Sensors

Thermocouples offer the widest range of measuring capabilities, which admittedly makes them a suitable choice for extreme temperature applications such as oven profiling and cryogenic freezing.
However, in the range of -86 to 150°C (-123°F to 302°F), thermistors become an option, and for most applications they are the better choice. Thermistors are primary sensors, meaning that they operate independently, without the need for a second reference sensor.
It is important to note that other systems, including thermocouple systems, often use thermistors as the reference sensor.

Data Loggers & Temperature Sensors

The stated temperature range of -86 to 150°C (-123°F to 302°F) applies to the thermistor itself, not to an enclosed Veriteq data logger. Veriteq data loggers are designed to withstand the range of -86 to 85°C (-123°F to 185°F), meaning the loggers themselves can be placed in the temperature environment and left there. This makes them an ideal solution for chamber qualifications, stability testing, and warehouse, cooler, and freezer monitoring.
For use in the higher range of 85°C to 150°C (185°F to 302°F), Veriteq data loggers require an external thermistor probe, which allows the connected logger to remain outside the high-temperature environment.

Sensitivity: Of Voltage & Signal Size

The term sensitivity refers to the size of signal received in response to a temperature change, and is an important component of sensor accuracy. Thermistors are highly sensitive; in fact the name thermistor evolved from the phrase "thermally sensitive resistor."
Stuart Ball, an electrical engineer and author for www.embedded.com writes that "of all passive temperature measurement sensors, thermistors have the highest sensitivity."
In comparing thermistors with thermocouples, Ball goes on to say: "The voltage produced by a thermocouple is very small, typically only a few millivolts. A type K thermocouple changes only about 40 microvolts per 1°C (1.8°F) change in temperature."
With such a minute voltage to measure, it becomes difficult to distinguish an actual temperature change from noise. Enercorp Instruments Ltd., a provider of thermocouples and thermistors, speaks directly to this issue:
"The voltage produced is very small and amounts to only a few microvolts per degree Celsius. Thermocouples are therefore not generally used within the range of -30 to 50°C (-22 to 122°F)."
The graphs below show the increased sensitivity that a thermistor-based system detects as compared to a thermocouple system.
Low thermocouple sensitivity makes it hard to distinguish real changes from noise

High sensitivity of Veriteq system makes distinguishing real changes easy

Stability: How Accurate for How Long?

Thermistors are very stable, which makes them ideal for portable applications such as warehouse and chamber qualifications. For example, Veriteq data loggers can be moved frequently without calibration, and still maintain an accuracy of +/- 0.15°C (+/-0.27°F).
To prove the point, Veriteq recently checked the calibration of 106 data loggers after a year of use in the field. Each logger was checked at three calibration points: -20°C, 25°C, and 70°C. The results were impressive: fewer than 1% of the points showed any excess drift. Still, Veriteq recommends that data loggers be re-calibrated yearly.
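The bookkeeping behind such an as-found check is simple; the sketch below compares hypothetical readings at the three calibration points against a ±0.15°C specification. The readings are invented purely to illustrate the calculation and are not Veriteq's actual field data.

```python
# Sketch of an as-found calibration check: compare a logger's readings at three
# calibration points against the reference values and flag any point drifting
# beyond the accuracy specification. The readings below are hypothetical.
SPEC_C = 0.15
reference_points_c = (-20.0, 25.0, 70.0)
as_found_readings_c = (-20.08, 25.05, 70.12)   # invented as-found values

for ref, found in zip(reference_points_c, as_found_readings_c):
    drift = found - ref
    verdict = "within spec" if abs(drift) <= SPEC_C else "excess drift"
    print(f"{ref:>6.1f} C point: drift {drift:+.2f} C -> {verdict}")
```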
Thermocouples, on the other hand, are known for low stability, which is why a pre-calibration and post-calibration are required with every use.

When a Sensor Saves Time: Set Up & Stability

A Veriteq data logger is a system in itself: easy to use, quick to set up, and self-powered. With on-board memory, the data is not vulnerable to loss through power or network interruptions.
Each data logger, containing a thermistor, is simply configured to the desired sampling frequency and then placed in the monitoring location. Following the test period, the data is downloaded. The system is very straightforward and requires no stringing of wires. Validation without stringing thermocouple wires reduces set-up time and downtime in a high-traffic environment. The result is a significant time saving.
By contrast, a thermocouple based set-up is often time consuming, especially for high-accuracy applications requiring a pre- and post-calibration. For example, qualifying a chamber with a thermocouple system involves first putting all sensor ends (i.e. the hot junctions) inside a calibration unit and going through the pre-calibration process.
Following a successful calibration, the thermocouples are strung from the central data logging unit, to the chamber, through a door seal, and then taped into various positions. Care must be taken to keep a good seal on the door while minimizing damage to the thermocouple wire.
Once the data collection begins, all thermocouple sensors must still be moved to the calibration unit for post-calibration. Finally, it is not uncommon for thermocouples to fail the post-calibration, meaning that the whole process may need to be repeated.

Sources of Error: Cold Working, Cold Junctions, Calibrations

Being a self-contained unit means that Veriteq data loggers have fewer error sources to deal with: there are no wiring errors, no cold junction errors, and no errors associated with in-field calibration (see the table below).

Thermocouple system vs. Veriteq thermistor system

  • Physical damage to sensor. Thermocouple system: "cold working" degrades thermocouple wires as they are repeatedly bent, stepped on, or shut in chamber doors. Thermistor system: minimal risk, because the thermistor sensor is protected inside the data logger.
  • Non-homogeneity (consistency of the thermocouple wire and of the environment it runs through). Thermocouple system: always present to some extent. Thermistor system: not applicable.
  • Cold junction reference error (temperature deviation between the cold junction reference point and the actual cold junction, including the accuracy of the cold junction sensor). Thermocouple system: the single largest source of error. Thermistor system: not applicable.
  • Pre- and post-calibration errors (reference transfer calibration error; traceable temperature standard; environmental stability; movement of sensors). Thermocouple system: in-field calibration introduces many sources of error. Thermistor system: pre- and post-calibration is not required.
  • Operator error. Thermocouple system: a high level of knowledge is required to minimize errors. Thermistor system: less risk, as the system is relatively simple.
  • Analog-to-digital conversion. Minor for both systems.

Thermocouple systems have numerous sources of error, the most significant being the cold junction reference error. Goran Bringert of Kaye Instruments states:
"A change in ambient temperature is the most significant source of error in thermocouple measuring systems, particularly multi-channel systems with internal cold junction references."

Accuracy: Give or Take a Margin of Error

High accuracy is critical for temperature validations because of the 4:1 rule, which recommends that instruments be at least four times as accurate as the parameter being measured or validated. Therefore Veriteq data loggers, with their accuracy of ±0.15°C (±0.27°F), can be used to monitor or validate parameters as tight as ±0.60°C (±1.1°F).
As for thermocouple-based systems, a leading provider claims a total system accuracy of ±0.28°C (±0.5°F). While this may be true in theory, it would require optimal conditions.
Other industry experts believe that ±1 to 2°C (±1.8 to 3.6°F) is a more realistic accuracy for such a system, meaning it could be used to validate parameter specifications of ±4 to 8°C (±7.2 to 14.4°F) under the 4:1 rule. In any event, very few people dispute that thermistors are more accurate than thermocouples.
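The 4:1 rule itself reduces to a one-line check, sketched below using the accuracy and tolerance figures quoted in this article.

```python
def meets_4_to_1(instrument_accuracy_c: float, process_tolerance_c: float) -> bool:
    """Return True if the instrument is at least four times as accurate as the
    tolerance it is being used to validate (the 4:1 rule described above)."""
    return process_tolerance_c >= 4.0 * instrument_accuracy_c

# Examples based on the figures quoted in the text.
print(meets_4_to_1(0.15, 0.60))  # True:  a +/-0.15 C logger can validate a +/-0.60 C spec
print(meets_4_to_1(1.5, 2.0))    # False: a +/-1.5 C thermocouple system is too coarse for +/-2 C
```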

Conclusion

When choosing a system for performing thermal validations, the first question asked should be "what kind of sensor is being used?"
Thermocouple sensors are best avoided because they involve lengthy set-up, numerous error sources, and marginal accuracy. Thermocouple systems are best restricted to applications involving very high or very low temperatures, simply because no other choice is available at those extremes.
In contrast, thermistor sensors are ideally suited to high accuracy monitoring in the range of -86° to 150°C (-123°F to 302°F). The Veriteq thermistor based system is highly sensitive, stable, accurate and easy to use.
Using a thermistor-based device for validation eliminates the many error sources associated with thermocouple systems and allows for a much quicker set-up. In short, you save time, experience less downtime, and obtain high-accuracy results.