Page 1

Volume 4 Issue 4

International Pharmaceutical Industry

Supporting the industry through communication

Peer reviewed

Homocysteine and Cognitive Impairment
Things Pharmaceutical Supply Chain and Operations Directors Should Know But Do Not
Automating and Accelerating the Environmental Monitoring Process in Pharmaceutical Manufacturing
Syringe Siliconisation: Trends, Methods, Analysis Procedures


PUBLISHER: Mark A. Barker
EDITORIAL MANAGER: Jaypreet Dhillon
BOOK MANAGER: Anthony Stewart
BUSINESS DEVELOPMENT: Madalina Slupic
DESIGN DIRECTOR: Ricky Elizabeth
CIRCULATION MANAGER: Dorothy Brooks
FINANCE DEPARTMENT: Martin Wright
RESEARCH & CIRCULATION: Maria Bolona
COVER IMAGE: iStockphoto ©
PRINTED BY: SW TWO UK
PUBLISHED BY: Pharma Publications, Unit J413, The Biscuit Factory, Tower Bridge Business Complex, 100 Clements Road, London SE16 4DG
Tel: +44 (0)20 7237 2036
Fax: +44 (0)01 480 247 5316
Email:

All rights reserved. No part of this publication may be reproduced, duplicated, stored in any retrieval system or transmitted in any form by any means without prior written permission of the Publishers. The next issue of IPI will be published in February 2013. International Pharmaceutical Industry ISSN No. 1755-4578. The opinions and views expressed by the authors in this magazine are not necessarily those of the Editor or the Publisher. Please note that although care is taken in the preparation of this publication, the Editor and the Publisher are not responsible for opinions, views and inaccuracies in the articles. Great care is taken with regard to artwork supplied; however, the Publisher cannot be held responsible for any loss or damage incurred. This publication is protected by copyright.


DIRECTORS: Martin Wright Mark A. Barker

Contents

06 Publisher’s Letter

Watch Pages



08 Rising Supply Chain Exposures
Pharmaceutical, biotech and medical equipment companies increasingly source raw materials, components, manufacturing processes and services externally. By using suppliers in markets such as Eastern Europe, India, the Far East and, increasingly, China, they are able to take advantage of lower costs and greater efficiencies. Andrew Catton at Miller Insurance explains why, with supply chains becoming more complex and a growing reliance on “lean” manufacturing processes, there are also increasing risks associated with globalisation.

Regulatory & Marketplace

10 Protecting Biological Inventions – A Well-defined Issue?
Of the three major sciences – physics, chemistry and biology – the biological sciences are the baby of the family. As such, our understanding of biological systems is less well developed than our understanding of the physical and chemical world around us. Jon Gowshall at Forresters describes how the relationship between science and the law is always a little fraught.

14 IP Audits: A Health Check for your Intellectual Property
For most biotechnology and pharmaceutical companies, Intellectual Property (IP) is the solid foundation on which the business is built. But when was the last time that you reviewed your IP assets? Elaine Eggington at IP Pragmatics discusses how an IP audit is a valuable tool to look at each potential area of IP, including patents, trade marks, copyright, branding and confidential know-how (trade secrets), to ensure that they are being identified and managed correctly.

Drug Discovery, Development & Delivery

18 Homocysteine and Cognitive Impairment
One consequence of vitamin B12 deficiency is a raised blood level of homocysteine. This non-essential amino acid is derived from dietary protein. Its conversion to useful metabolites requires methyl-folate and vitamins B12 and B6 as cofactors. Blood levels of homocysteine therefore rise with deficiency of these important vitamins. Andrew McCaddon, Honorary Senior Research Fellow in the School of Medicine at Cardiff University, provides a case study of his finding of low B12 levels in families with early onset dementia and his search for evidence of deficiency in patients with late onset dementia.

22 Stem Cells and Drug Discovery
Stem cells are extraordinary cells, having many features and advantages that could revolutionise drug development and healthcare applications. Lilian Hook at Plasticell Limited discusses how they are capable of both self-renewal and differentiation to mature somatic cells in vivo and in vitro, and as such offer a limitless, consistent supply of physiologically relevant cells for applications such as cell replacement therapies, drug development and disease modelling.

2012 PHARMA PUBLICATIONS Volume 4 Issue 4 Autumn 2012


Clinical Research

32 A Winner Emerges in the War Against Microbes
Man has benefited from copper’s inherent antimicrobial properties since the dawn of civilisation, yet it is only in the last 10-20 years that scientific studies have been conducted to properly evaluate the metal’s potential in reducing contamination in critical environments such as hospitals and food processing facilities. Angela Vessey at Copper Development Association looks into why, in the healthcare sector, the level of laboratory and clinical evidence has stimulated demand for incorporating copper into touch-surface hot-spots in the fight against healthcare-associated infections (HCAIs).

40 Conducting Non-clinical Studies with Protein Biologics: Considerations in Test Article Characterisation and Method Development for Dose Formulation Analysis
With the increase in the number of protein biologics in development, including monoclonal antibodies, the proportion of GLP studies conducted for protein test articles has correspondingly increased. Therefore, current GLP practices may need to be adjusted to accommodate the physicochemical characteristics of protein test articles. In terms of method development for non-clinical dose formulation analysis, analytical techniques specific to protein biologics will need to be used. Karina Kwok at MPI Research describes the current best practices applied in her organisation for test article characterisation and analytical method development of protein biologics for non-clinical dose formulations.

46 Cytomics: Managing Biocomplexity in Drug Development, Clinical Diagnostics, and Clinical Medicine
Cytomics, the science of analysis at the cellular level, objectively accounts for functional phenotypes in the context of the entire organism. Furthermore, the cytomics top-down approach of data analysis does not depend on prior knowledge of disease mechanisms, thus significantly simplifying the exploration of organismal biocomplexity and shortening the path for applications in drug development, clinical diagnostics, and clinical medicine. Dr Yvon at BioSciences Expansion and Dr Turner at Quintiles provide an introduction to cytomics and its applications through a review of the literature, focusing on genetics, genomics, and other ‘omics’.

54 One Year After the United Nations Summit on Noncommunicable Diseases (NCDs): New Opportunities and Challenges in the Fight Against NCDs
This past year has seen tremendous progress on NCDs at the global level, most notably with the adoption of a global target to reduce premature deaths from NCDs, including cancer, diabetes, cardiovascular and respiratory disease, by 25% by 2025. Julie Torode and Rebecca Morton Doherty at Union for International Cancer Control provide an overview of this progress.

Labs & Logistics

58 Things Pharmaceutical Supply Chain and Operations Directors Should Know, But Do Not
In considering supply chain operations in pharmaceuticals businesses, the naïve amongst us might assume that the answers to the basic questions are clear. How much inventory is enough? What does it cost to make a tablet? Which plants are the most efficient producers? How much capacity do we have? How should we plan and organise how demand hits operations? What does it cost to serve the customer? Dr John Harhen at Orbsen Consulting discusses the fundamental lack of clarity around these most basic of questions.

66 Applications of LIMS to Stability Testing
Managing pharmaceutical stability testing can be very demanding, especially on small to medium-sized companies developing and producing OTC, generic and new Rx products. Some companies outsource the actual inventory management and testing requirements, but they are still required to track progress and report results as part of their QA or development process, and will need to meet guidelines set by regulatory bodies such as the FDA and the International Conference on Harmonization. John Boother at Autoscribe Ltd explains how Laboratory Information Management Systems provide a powerful way of managing and reporting the outcome of these studies.

70 RFID and Cold Chain Management
We might expect new technologies in cold chain management to have made great progress in automation in recent years, particularly with the evolution of temperature sensors using RFID communication, but somehow this has not been the case, at least in two of the industries that should benefit the most from it: biotech and pharma. Alex Guillen at Escort Cold Chain Solutions SA looks into what happened, why it happened, and why we should believe that things will change.


Autumn 2012 Volume 4 Issue 4



74 Automating and Accelerating the Environmental Monitoring Process in Pharmaceutical Manufacturing
As part of a highly regulated industry, pharmaceutical companies must perform various levels of product monitoring in the manufacturing process. In addition to product testing, the manufacturing environment must also be tested. This includes testing of the room, surfaces, air, and personnel throughout the manufacturing cycle. In large environments, this can involve a large number of samples that must be captured, tracked and reviewed after incubation. Julie Sperry at Rapid Micro Biosystems explains why automating even a portion of this process can provide tangible benefits, and accelerating the process as part of a rapid testing programme can bring product to market faster.

76 Creating Component Quality: Understanding the Holistic Quality by Design Process
Pharmaceutical manufacturers have challenged packaging manufacturers to increase the quality of components used in parenteral packaging. As new, sensitive pharmaceuticals and biopharmaceuticals are prepared for market, regulatory agencies have also asked manufacturers to build quality into products from the start. Sascha Karhoefer at West Pharmaceutical Services provides an overview of improving the quality of the drug product’s container closure system, and how pharmaceutical packaging manufacturers can help to ensure consistent reliability throughout a drug product’s lifecycle.

82 Something in the Air — Ionisation as a Solution to Static
In 600 BC the philosopher and mathematician Thales of Miletus reported that after rubbing a piece of amber on the fur of a cat, the amber attracted and held feathers — the first account of static electricity. David Rogers at Meech discusses how generating a controlled static charge has positive applications in some manufacturing scenarios; however, in many operations across a multitude of industries, uncontrolled static electricity causes serious production problems.

88 Uses of Sieves in the Pharmaceutical Industry and the Increased Demand for Containment
A sieve or screener is an essential part of every pharmaceutical production process, particularly as product quality and integrity are so important. The use of a sieve safeguards against customer compensation or litigation, as it eliminates all oversized contamination. It therefore ensures that ingredients and finished products are quality assured during production and before use or dispatch. Rob O’Connell at Russell Finex explains that the design of sieving equipment has had to undergo radical changes in recent years to overcome the new demands of companies manufacturing pharmaceuticals.

92 Syringe Siliconisation: Trends, Methods, Analysis Procedures
Ready-to-fill, i.e. sterile, prefillable glass syringes are washed, siliconised, sterilised and packaged by the primary packaging manufacturer. They can then be filled by pharmaceutical companies without any further processing. These days the majority of prefillable syringes are made of glass, and the trend looks set to continue. The siliconisation of the syringe barrel is an extremely important aspect of the production of sterile, prefillable glass syringes, because the functional interaction of the glass barrel siliconisation and the plunger stopper siliconisation is crucial to the efficiency of the entire system. Both inadequate and excessive siliconisation can cause problems in this connection. Bruno Reuter & Claudia Petersen at Gerresheimer Bünde explain how, with the use of modern technology, an extremely uniform distribution of silicone oil can be achieved in glass syringes with reduced quantities of silicone oil.

100 With Intelligent Packaging You’re Always One Move Ahead
IPI speaks with Christoph Hammer of Dividella about its history and the key innovations of the future.

102 Reviews & Previews



Publisher’s Letter

By 2020 the pharmaceutical market is anticipated to more than double to US$1.3 trillion, with the E7 countries — Brazil, China, India, Indonesia, Mexico, Russia and Turkey — accounting for around one-fifth of global pharmaceutical sales. Further, the incidence of chronic conditions in the developing world will increasingly resemble that of the developed world. The current pharmaceutical industry business model is both economically unsustainable and operationally incapable of acting quickly enough to produce the types of innovative treatments demanded by global markets. In order to make the most of these future growth opportunities, the industry must fundamentally change the way it operates. Some of the major changes anticipated for the industry are:
• Healthcare will shift in focus from treatment to prevention.
• Pharmaceutical companies will provide total healthcare packages.
• The current linear phase research & development process will give way to in-life testing and live licensing, in collaboration with regulators and healthcare providers.
• The traditional blockbuster sales model will disappear.
• The supply chain function will become revenue-generating as it becomes integral to the healthcare package and enables access to new channels.
• More sophisticated direct-to-consumer distribution channels will diminish the role of wholesalers.

The current role of the pharmaceutical industry’s sales and marketing workforce will be replaced by a new model as the industry shifts from a mass-market to a target-market approach to increase revenue.

Big Pharma, according to reports, is doing more for access to medicine in developing countries. The latest Access to Medicine Index, which ranks the top 20 pharmaceutical companies on their efforts to improve access to medicine in developing countries, finds that the industry is doing more than it was two years ago, with GlaxoSmithKline still outperforming its peers, but an expanding group of leaders closing the gap. The Index, published Wednesday, found that Johnson & Johnson was one of the most dramatic risers, climbing from the middle of the field in 9th position in the 2010 Index to 2nd this year, closely behind GlaxoSmithKline. It is one of two newcomers to the top three. Its rise is due largely to consolidation of its access activities under one business unit, which has resulted in a more strategic and integrated approach, and to its acquisition of vaccine-maker Crucell, which has increased the relevance of its research and development investments. It has also disclosed more overall about its access activities. This year’s Index shows that companies are becoming more organised internally in their approach to access to medicine, and that those who do this best tend to perform well across the other aspects measured.

This is the last issue of 2012, and our editorial team brings you a wide array of articles. We start the issue with Andrew Catton at Miller Insurance explaining why, with supply chains becoming more complex and a growing reliance on “lean” manufacturing processes, there are also increasing risks associated with globalisation. In the Regulatory segment, Elaine Eggington at IP Pragmatics discusses how an IP audit is a valuable tool to look at each potential area of IP, including patents, trademarks, copyright, branding and confidential know-how (trade secrets), to ensure that they are being identified and managed correctly. In Drug Discovery, Development & Delivery we have Andrew McCaddon, Honorary Senior Research Fellow in the School of Medicine at Cardiff University, providing a case study of his finding of low B12 levels in families with early onset dementia and his search for evidence of deficiency in patients with late onset dementia.

I hope you all enjoy this issue, and we wish you all a very Merry Christmas and a Happy New Year. See you all in 2013.

Mark A. Barker
Publisher

Editorial Advisory Board Bakhyt Sarymsakova, Head of Department of International Cooperation, National Research Center of MCH, Astana, Kazakhstan

Jeffrey Litwin, M.D., F.A.C.C. Executive Vice President and Chief Medical Officer of ERT

Rick Turner, Senior Scientific Director, Quintiles Cardiac Safety Services & Affiliate Clinical Associate Professor, University of Florida College of Pharmacy

Catherine Lund, Vice Chairman, OnQ Consulting

Jeffrey W. Sherman, Chief Medical Officer and Senior Vice President, IDM Pharma

Deborah A. Komlos, Senior Medical & Regulatory Writer, Thomson Reuters

Jim James DeSantihas, Chief Executive Officer, PharmaVigilant

Robert Reekie, Snr. Executive Vice President Operations, Europe, Asia-Pacific at PharmaNet Development Group

Diana L. Anderson, Ph.D., President and CEO of D. Anderson & Company

Mark Goldberg, Chief Operating Officer, PAREXEL International Corporation

Sanjiv Kanwar, Managing Director, Polaris BioPharma Consulting

Franz Buchholzer, Director Regulatory Operations Worldwide, PharmaNet Development Group

Maha Al-Farhan, Vice President, ClinArt International, Chair of the GCC Chapter of the ACRP

Stanley Tam, General Manager, Eurofins MEDINET (Singapore, Shanghai)

Francis Crawley, Executive Director of the Good Clinical Practice Alliance – Europe (GCPA) and a World Health Organization (WHO) Expert in ethics

Georg Mathis, Founder and Managing Director, Appletree AG

Heinrich Klech, Professor of Medicine, CEO and Executive Vice President, Vienna School of Clinical Research

Nermeen Varawalla, President & CEO, ECCRO – The Pan Emerging Country Contract Research Organisation

Patrice Hugo, Chief Scientific Officer, Clearstone Central Laboratories

Stefan Astrom, Founder and CEO of Astrom Research International HB

Steve Heath, Head of EMEA, Medidata Solutions, Inc

T S Jaishankar, Managing Director, QUEST Life Sciences



Rising Supply Chain Exposures

With globalisation and outsourcing remaining ongoing trends in the life sciences sector, there are ever-increasing exposures to contend with in the supply chain, explains Miller’s life sciences expert, Andrew Catton.

Pharmaceutical, biotech and medical equipment companies increasingly source raw materials, components, manufacturing processes and services externally. By using suppliers in markets such as Eastern Europe, India, the Far East and, increasingly, China, they are able to take advantage of lower costs and greater efficiencies. But with supply chains becoming more complex and a growing reliance on “lean” manufacturing processes, there are also increasing risks associated with globalisation. Last year saw numerous examples of supply chain disruption caused by natural catastrophes and political unrest, to name a few.

Falling Foul of the Regulators
For life sciences firms, regulatory closures are proving a large source of business interruption and product recall, which potentially result in significant losses of profit and reputation. It is unsurprising that this is a major cause of disruption, given the increasingly strict regulatory environment life sciences firms are operating in. One of the factors driving the increase in factory closures is outsourcing to developing countries. Within these areas it is more difficult to ensure adherence to the stringent standards set by the US Food and Drug Administration (FDA), the UK Medicines and Healthcare products Regulatory Agency (MHRA) and other regulatory bodies. Even where suppliers are regularly audited and maintain high levels of compliance, it is tougher to keep up with regulation in this sector. For example, for a large organisation

like Johnson & Johnson, with 250 companies in over 50 countries and approximately 100 manufacturing plants, managing that risk plays a vital role in the organisation’s success. It has established its own organisation – JJSC – dedicated to supply chain issues, and has a new approach to quality and compliance following the McNeil Consumer Healthcare children’s medicines recalls in 2010. The recalls came after a routine inspection at a manufacturing facility in Pennsylvania; the FDA concluded that the manufacturing process was using flawed procedures which could lead to errors. “The McNeil situation has all of us rethinking business continuity planning and how we utilize our plants and partner suppliers,” said Robert Salerno, Vice President, Supply Chain Strategy and Project Management, JJSC.

Often, while there may be no problem with the product itself, simple packaging errors or non-adherence to prescribed manufacturing procedures can be enough to instigate a recall or shut down a plant. Even the suspicion of a violation can cause disruption. Of course, there are also plenty of examples of faulty products and components that are withdrawn because they are not up to standard. The US recall of artificial hip implants and the European recall of the controversial PIP breast implants are both recent examples in the medical devices sector.

A factory closure may cause inconvenience to large organisations, but with multiple contractors these life sciences giants are more able to weather the storm. It is the middle-tier organisations that are arguably at greater risk of disruptions to their supply chain, which can lead to a substantial loss of profit. In some cases, insurable options are available for regulatory closure, supply chain risk and intangible assets.

The Way Forward
More often than not, life sciences companies look to take out product recall insurance to indemnify them when things go wrong. Clearly, strong risk management procedures play an important role, including effective supply chain management. Regular visits to suppliers’ sites, dual sourcing, geographic diversification and business continuity management all help life sciences companies better manage their supply chain risk. Insureds are also encouraged to maintain an open and transparent dialogue with their insurers and brokers when it comes to supply chain exposures, explaining the measures they have taken to ensure that compliance is viewed as a priority. Even sensitive information should be shared if it helps insurance partners gain a clearer picture of the insured’s risk profile, as this makes it easier to access broad and cost-effective cover from the insurance market.

Andrew Catton has worked in the insurance industry since 1971, joining Miller, a leading independent insurance broker, in 1996. Specialising in pharmaceutical, medical and life science product liability, clinical trials medical malpractice, professional indemnity, biomedical errors and omissions, product recall and intellectual property insurance, Andrew has spoken at numerous biotechnology and clinical trials conferences and contributed to multiple publications. Email: andrew.catton@




Regulatory & Marketplace

Protecting Biological Inventions – A Well-defined Issue?

Of the three major sciences, physics, chemistry and biology, the biological sciences are the baby of the family. As such, our understanding of biological systems is less well-developed than our understanding of the physical and chemical world around us. The basic laws of physics, at least insofar as they relate to our world, were established hundreds of years ago and underpin our application of physics to our environment, particularly in engineering. Equally, our understanding of chemical principles allows us to have a clear understanding of the roots of advances in chemistry and chemical engineering. Much of the technology in these fields can be described in a sort of technical shorthand, because the principles and laws on which such developments rest are implicit, not just to those working in the relevant fields but to anyone with a basic interest in science. Because of this, authors writing about advances in these sciences can refer to effects, confident that the reader will immediately grasp the cause of those effects.

This is much less true of technological advances involving biological systems, wherein the underlying principles are often still to be figured out. Of course, this makes biotechnology an exciting field in which to work, but it also raises a unique set of problems when trying to use patent law to commercially protect those developments. The relationship between science and the law is always a little fraught. Science is based on the understanding that nothing can be proved beyond doubt, only disproved. The law, on the other hand, requires certainty, no more so than in the use of language to define concepts and boundaries.

Our understanding of how living

organisms work on a molecular level remains hazy. As the biosciences progress, the networks of complex interactions and feedback loops in organisms continue to reveal themselves as increasingly intricate on many levels. Our understanding of molecular interactions in biological systems continues to increase, but is not at the stage where we properly understand cause and effect.

The tension between the hazy understanding of the complexity of biochemical pathways and relationships, and the insistence of the law on clarity and certainty, leads to considerable difficulties when patenting biological inventions, because patent law requires clear definition of the invention being protected. In many cases, this difficulty in defining a biological invention arises because the molecular interactions underlying the invention are not well understood. For example, the invention can reside in the discovery of a function of a compound, without a clear understanding or elucidation of the structural motifs of the molecule which give rise to those functions.

European patent law further heightens those tensions because not only does it require that the applicant for a patent clearly defines its invention in the original patent application, but it also does not allow the applicant to add anything to that application after filing. This rule is applied rigidly, even if the European Patent Office (EPO) discovers new evidence which may cause the applicant to want to amend the invention definition, because the evidence alters the perception of what part of the subject matter of the invention might be patentable.

On a practical level, the examiners in the EPO are very uncomfortable with such a disconnect between function and structure. They do not like to grant patents for biotechnological

inventions which define the invention solely by its function. This reluctance of the law and some examiners to engage with the realities of the science makes it difficult to get biotechnological patents of an appropriate breadth. For the applicant, it is especially important to ensure that the application contains as much subject matter as possible, to forestall potential objections to the way that the applicant has defined the invention. Because the EPO will not allow the applicant to add anything to the application after filing, the original application must contain information to enact not only plan B, but plans C, D and E as well.

The first issue facing applicants for biotechnological patents is that the EPO often confuses function with the result to be achieved. An applicant cannot define an invention by the result to be achieved – in the profession this is known as a “free beer” claim. Take a scenario where an applicant discovers that compound W cures the common cold. A “free beer” claim defining the invention in their patent application would read “A compound which cures the common cold”. Unsurprisingly, the EPO will not allow this, because it covers all compounds which might work, most of which the applicant will have taken no part in discovering. This much is logical.

The problems really come where, as often happens, the applicant not only discovers the compound but also discovers the manner in which it functions to treat a condition. For example, the applicant may discover that compound X can cure osteoporosis, but also discovers that it does so by activating a receptor, Y. This discovery takes the scope of the applicant’s technical advance beyond a single compound, to encompass a whole group of compounds that can treat osteoporosis.

In such a case it seems logical that the claim defining the invention should read “A compound which activates receptor Y for treating osteoporosis”, and, indeed, many applicants define their invention this way in their patent application. (In case you are wondering why a treatment should be defined this way, the EPO will not allow a patent where the invention is defined as a method of treatment, but will allow a patent to a compound “for use” in that method.) Unfortunately, many examiners at the EPO are uneasy with such claims, couched in such purely functional terms. In particular, the examiners worry that the invention, defined this way, is not new (inventions must be new and inventive), is not clear (the definition must be clear so that third parties can easily decide if they might infringe the patent) and is too broad (the definition of the invention should reflect only its contribution to the field, and no more).

The examiners’ novelty issue is driven by practical concerns. If the applicant defines the group of compounds solely by its function, EPO examiners worry that known compounds, already known for the same purpose, may inherently have that function but not be documented as doing so. In the example I have chosen, an old drug, Z, may already exist for treating osteoporosis. The literature may not say that it activates receptor Y but, if it does, then it is a compound that treats osteoporosis and activates receptor Y. If that were so, the defined invention would not be new, because it includes a known compound (Z) for a known use, and so would not be patentable. Although the EPO should give the applicant the benefit of the doubt, examiners often do not, at least not without some evidence to satisfy them that none of the compounds previously known for treating that disease inherently have the claimed function.

The second issue is the clarity of the definition.
This is a very difficult one to avoid, unless you remain objective when initially deciding how to define the invention. The EPO likes an applicant to use structural or numerical definitions if possible. For example, examiners like compounds defined by a formula, or an amino acid sequence, and like values defined numerically rather than by effect (“…in an amount sufficient to treat osteoporosis”, for example). Any subjective definition will cause them concern. Returning to our example of a group of compounds which activate receptor Y, the receptor will (or should!) be defined structurally, or by a name that is generally understood in the field. However, many EPO examiners will object to the term “activates”. How do you know when the receptor is activated? What effect must you be able to see? What test do you use to determine if the receptor is activated? What results are you looking for to demonstrate “activation”? The examiner will often argue that, in the absence of that information, the definition is unclear. Their position will be that a third party developing a drug may not be able to tell if it “activates receptor Y”, because they do not have the answers to these questions, and so cannot tell whether their compound falls within the definition, i.e. infringes the claim.

Finally, there is the issue of the breadth of the definition. Many examiners may have concerns that, of the wide number of compounds that might activate receptor Y, a large number cannot also treat osteoporosis. Sometimes they even have good reasons to support those concerns. In those circumstances, they will argue that the applicant is claiming more than they are entitled to, and will require that the claim is narrowed.

So how do you avoid these problems? The answer is to prepare, and that involves shifting your focus away from the function of the invention (although that is likely to be the scientifically exciting part of the development) to the practical definition of how it achieves that function. You should also be prepared to accept that, if you cannot find underlying structural motifs aligned with that function, you may not be allowed to protect everything that works in that manner. You should always look to plan B – structurally defined groups which have the function.

To prepare for potential objections that previously known compounds had the relevant function, you should include in the application technical reasons why they did not. The one advantage that applicants have at the EPO is that the balance of probability is with them – if they can provide evidence that known compounds did not have the relevant function, the examiner is likely to accept their position. If you can include appropriate technical reasoning to explain why compounds having the relevant function are new, that may forestall such an argument. That does not mean that you should not have a fall-back position. If you rely solely on the function of the compound and a third party challenges your patent, and shows that just one known compound activates receptor Y (taking our example), this would invalidate the whole patent. However, if you can narrow the definition to exclude that known compound, you may be able to overcome the challenge.

Therefore, you should try to delineate groups of compounds which have the relevant function and, if possible, try to find structural motifs that the groups have in common. If our example related to proteins, and you found that a specific nine amino-acid sequence is required to activate the receptor, that sequence could be used as an additional, structural definition of the invention’s compounds. The EPO is much more willing to accept a definition involving a function if it also includes a structure.

You also need to prepare for clarity objections. To do this, when you come to define the function on which the invention relies, you must consider what every word in that definition means. You should then devise further definitions of those terms, structurally and/or numerically, if possible. These may have to be a little narrower than your preferred wording – but the EPO is more likely to allow patents using those definitions. If the function can only be defined by simple experiment, then you must decide upon (and include in the application) a clear and detailed assay or test for assessing whether or not any given example has that function. The application must also set out those values from that assay or test which indicate that the tested example has the function. In our example, you will need to explain what test is to be used to determine if receptor Y is “activated”, and what value in the test is the threshold above which the receptor is deemed “activated”.

Finally, to avoid arguments that the claim is over-broad, you should provide technical reasons (or, better still, wide-ranging data) to explain why everything having the defined function will work. It is important to remember that those reasons must be in context. It is all very well arguing that phenol compounds can be used to activate receptor Y in vitro.
If the claim is to the use of those compounds for treatment of osteoporosis, the EPO will object that such toxic compounds should not be given to a patient, irrespective of whether or not they might activate the receptor, and so you still have not shown the therapeutic effect across the whole definition. Once again, you should also have a plan B, in case you cannot convince the EPO that all compounds will work. You should include in the patent application teachings of smaller groups of compounds, defined either by a function more closely related to the end result, or by structure.

Finally, it is absolutely vital, for every aspect of the definition of an invention, to have multiple sub-definitions to which you can retreat if necessary. To convince the examiner that your invention is truly effective, it is also important to have as much experimental evidence as possible. It is true of any aspect of science, even where it collides with the law, that data remains king. Of course, your patent attorney will ask you for all this information when drafting your application. But if you start thinking about these aspects of the invention at that late stage, you run the risk of being underprepared. It is much better if you are aware of these issues throughout the entire development project, and devote at least some of the development work laterally to their consideration.

Jon Gowshall

Jon Gowshall’s background is biochemistry, and so his primary technical fields are biotechnology, pharmaceuticals and medical devices. Jon has wide experience of patent law and practice, including UK litigation and freedom-to-operate opinions. Jon’s core area of expertise is law and practice at the European Patent Office (EPO), where he has considerable experience, including in opposition and appeal procedures. He is a tutor of UK trainee attorneys for the European law examination. Jon is primarily based in our London office, but spends several weeks each year in Munich for dealings with the EPO. Jon has been a UK and European patent attorney since 1989 and a partner since 1993. He is a member of the council of epi (Institute of Professional Representatives before the European Patent Office) and the council of CIPA (the UK Chartered Institute of Patent Attorneys). Email:

Autumn 2012 Volume 4 Issue 4



Regulatory & Marketplace

IP Audits: A Health Check for your Intellectual Property

For most biotechnology and pharmaceutical companies, intellectual property (IP) is the solid foundation on which the business is built. But when was the last time that you reviewed your IP assets? An IP audit is a valuable tool to look at each potential area of IP, including patents, trade marks, copyright, branding and confidential know-how (trade secrets), to ensure that they are being identified and managed correctly. IP audits can address a range of different IP management issues, and offer practical advice on the most effective route forward. The most important aspect of a good audit is the interpretation of the findings in terms of the potential impact on a company’s commercial prospects.

Aims of the Audit
An IP audit can achieve many things, and should be repeated at different stages of company development to address different issues. Some of the main areas that can be included are:
• Patents – identifying pending applications, granted patents, potentially patentable technology, potential patent infringements;
• Trade marks – identifying registered and unregistered trade marks, use of searching procedures prior to introduction of a trade mark, possible infringement of third-party rights;
• Designs – identifying registered and unregistered design rights, possible protection through Design Right and Community Design Right, possible infringement of third-party rights;
• Copyright – identifying copyright (databases, websites, marketing/promotional material, photography, film), ownership/assignment of copyright from creators, procedures for establishing date of creation, copyright indicators on protectable works, database rights;
• IP management – including confidentiality (or non-disclosure) agreements, trade secrets, technical know-how, employee agreements and dissemination of IP policy throughout the company, licensing, evaluating existing IP, and IP policy covering registration, renewal systems, monitoring/watching services, enforcement and international filing strategies.

When commissioning an IP audit, it is vital that the findings are placed in the commercial context of the company, so that the recommendations and advice are relevant to the products being developed and the company’s commercial strategy, rather than being a simple inventory. This interpretation in terms of the potential impact on the company’s commercial prospects is where the true value of an audit lies. When a company has identified its intellectual property assets, and ensured that they are appropriately managed, it will be in a good position to develop a suitable IP strategy to take its products to market. An IP audit also puts a company in a strong position when subsequently entering due diligence with a potential partner, funder or acquirer.

Stages of the Audit
The first stage of any audit is information-gathering, which should cover not just the existing and potential IP assets of the company, but also the policies and procedures that are in place for managing IP, and the overall company business strategy. For different types of company, different IP rights will be more important. In the healthcare arena, patents are the cornerstone of technology protection, and are usually carefully managed. Other types of intellectual property may not be given such importance, but can still represent very valuable assets. Trade secrets are likely to be important to protect the tips and tricks that make a technology work in practice. Copyright will protect websites, photographs, marketing material and product leaflets. Design rights can protect the look and functionality of a medical device or instrument. Lastly, trade marks and other branding issues affect both the company name and the names of its products or services.

Once the information has been gathered and reviewed, the next stage is to identify gaps in the protection held by the company, and to develop practical advice to fill these gaps. This is where it becomes important to consider not just what could be done, but what should be done to allow the product to be successfully commercialised. There is little point in spending considerable amounts of money strengthening the patent protection around a technology which no longer forms part of a company’s product development plans. Whilst every audit will include a broad consideration of all the aspects of IP protection, more emphasis will be placed on different aspects, depending on the needs of a company at that time, on its history, and on the stage of product development. Research may be needed into the overall patent landscape, or existing trade marks, to put the findings into the broader context.

The final stage of the audit is the preparation of a report which summarises the situation, presents the results of the investigations, and gives recommendations on actions that are needed. This should



be discussed with the company to ensure that the actions are practical and within available budgets, and to clarify how they fit into the overall company strategy.

Functions of an Audit
To illustrate some of the functions of an IP audit, and the many different ways in which these can be used to support the growth strategy of a business, we will look at some case studies. These look at how to protect research ideas in an early-stage collaboration, the best strategies for a platform technology, and how to identify whether a service has freedom to operate. Other potential areas for audit include the trade mark protection and branding strategies which should be adopted, or best-practice policies for IP identification, review and protection.

Case Study – Early-stage Collaboration
When research is at an early stage, an audit can provide a roadmap for intellectual property protection. This includes advice on when and how to include external collaborators, partners and potential licensees in the development pathway without compromising the IP position. A research institution was commencing a programme of screening its collection of novel organisms to identify natural compounds with a particular type of activity of value to the food industry. The screen was set up and running, but no lead candidates had yet been identified. The institution wanted an IP audit which would allow it to map out the steps needed to identify and protect its IP, whilst engaging with potential partners. In this case, the screen itself did not contain any novel or inventive steps, and the tricks to get the best out of the screen were best protected as a trade secret. Depending on the outcomes of the screening, there could be potential to protect the novel organisms, the novel compounds with activity, associated production methods, and claims of methods of use.
For each of these types of patent claim, a different type of evidence is needed, and advice was given on how to structure the research plan to gain the broadest patent protection. As no candidates had yet been identified, the audit also examined the broad patent landscape in this technology area to identify companies which are particularly active in the field, areas of high and low patenting activity, and the types of organism which have been investigated. Recommendations included how and when to approach potential partner organisations and the level of information that could be safely shared at different stages, as well as the appropriate use of confidentiality and material transfer agreements.

Case Study – Protecting a Platform Technology
At a later stage of technology development, an effective national filing strategy will be required, particularly where a patent covers the fundamental technology that underpins an entire product and application range. A small company had developed a new platform technology, which allowed the use of microwave energy in a continuous process. The technology has wide applications in a range of industries, and the company has focused on exemplifying the use of its machines in one or two key application areas. It has filed a broad patent application on the technology, which was due to enter the 30-month national phase shortly. This is a very important commercial decision point. It is also typically a very expensive stage of the patenting process, as individual fees are required for each national office selected, together with translation charges in many cases. The selection of suitable territories at this stage therefore needs to balance the desire to protect the technology as widely as possible with the usual requirement in an SME to keep costs low. The patent application covers both composition-of-matter claims and a method claim. If granted, it can therefore offer protection both in territories where the machine is manufactured and in those where the machine is sold and used. Ideally, the company should file national phase applications in as wide a selection of important territories

as possible where either use or production of its machine could be expected. As funding constraints were also important, a prioritised list of countries was recommended, supported by a forecast of the future patent costs which would arise in each territory. The audit also considered the potential ways in which the company could strengthen its brand by use of trade marks. The company name is descriptive of its technology, and so is not eligible for trade mark protection. There was, however, the potential to develop a non-descriptive word or phrase to use as a name for the technology process, and this could help to distinguish the technology from similar techniques which lack its specific advantages. By using a suitable trade mark and raising its brand profile, it should be possible to build up a reputation for the company as the only supplier of this specific technology.

Case Study – Investigating Freedom to Operate
Once a new product or service is identified, an IP audit can indicate whether there are any existing patents which might affect the freedom to operate the new service or sell the product. A consortium of academic and commercial partners had combined forces to develop and introduce a new service to the veterinary industry. The service was a new method to support cattle breeding, and involved bringing together, adapting and refining a number of different techniques which were already used elsewhere. The audit focused on whether the consortium was free to operate these methods, or whether there might be specific areas where a licence to use a specific technique may be required. One of the partners in the consortium already had access to third-party patented technology surrounding one of the techniques to be used, and had a licence from the patent owners to use the technology to produce sperm cells for commercial use, with royalty payments to be made on the semen dose or the resulting embryos produced from semen.
Recommendations were made to ensure that the way in which the consortium provided its service would comply with the terms of this licence and allow it to continue to use this technique. The patent landscape surrounding advanced animal breeding techniques was examined to identify the major players in this field and the subject matter of the patents that they hold. As the research was still at an early stage, the precise techniques that would be used were not yet known, and so the scope of the audit did not permit a thorough freedom-to-operate search. The group aims to introduce the service into the UK in the first instance, so particular attention was paid to GB patents which are currently in force, as these will be the ones relevant to whether the consortium will be able to operate its service in the UK.

The searches that were carried out showed that there are a large number of patents surrounding the advanced breeding techniques to be used in the service. The fundamental techniques used are well over 20 years old, and so any associated broad patent protection will have now expired. There are many patents on specific variations and refinements of the techniques, on apparatus and instrumentation used, and on media and reagents for the processes. Where the consortium is buying such material commercially, freedom to operate should not be an issue. Where it develops its own proprietary variations of these materials, however, it would be sensible to carry out follow-up searches on the specifics of these variations as they are developed. Some particular patents were identified during the review as being of potential concern; the full patent specifications were provided for further study, and their current examination status and prosecution position were investigated. It is important to remember that a general patent landscape and search of this kind will be able to identify areas of concern or specific patents which should be investigated.
This is different from (but can often be a precursor to) seeking a legally qualified professional opinion on freedom to operate once a company has a definitive product or service that it will be selling.

Sources of Support
An IP audit can usefully be completed internally by a company which has suitable in-house expertise, and this process may identify areas where more in-depth help and external advice are needed. Alternatively, the audit may use external professionals with a good understanding of both IP and the commercial space in which a company is operating. The UK Intellectual Property Office (IPO) has run a pilot scheme to provide funding to support IP audits for selected SME businesses with high growth potential. In its discussion paper, “From ideas to growth: Helping SMEs get value from their IP”1, the IPO proposed to extend this excellent initiative to 200 SMEs in 2012/13. This has now been confirmed in the recently published conclusions to the discussion paper2, which will help to ensure that innovative technology in SMEs is built on a foundation of sound IP management. This audit funding can be accessed through the IPO’s partners, including GrowthAccelerator, the Welsh Government and Scottish Enterprise.

References
1. w, visited on 8 October 2012.
2. www.ipo.gov.uk/business-sme-conclusions.pdf, visited on 12 November 2012.

Elaine Eggington is a Principal Consultant at IP Pragmatics. Following a career in industry, she has spent the last 12 years helping companies and universities to commercialise early-stage life science technologies through venture capital investment and consultancy. Elaine regularly conducts IP audits to help companies of all sizes to make the most of their IP portfolio. Email: elaine.eggington@


Drug Discovery, Development and Delivery

Homocysteine and Cognitive Impairment

I have had a research interest in B vitamins and Alzheimer’s Disease since 1990. During that year, as a GP Trainee, I met a fifty-three-year-old patient with memory problems and a strong family history of dementia. The family were later found to be one of the first kindreds with a mutation in the amyloid precursor gene1. I was particularly intrigued by the finding of very low vitamin B12 levels in my patient and his family2, and set about researching the vitamin’s relationship to dementia.

One consequence of vitamin B12 deficiency is a raised blood level of homocysteine3. This non-essential amino acid is derived from dietary protein. Its conversion to useful metabolites (S-adenosylmethionine and the antioxidant glutathione) requires methyl-folate and vitamins B12 and B6 as cofactors. Blood levels of homocysteine therefore rise with deficiency of these important vitamins.

Prompted by my finding of low B12 levels in the family with early-onset dementia, I decided to look for evidence of deficiency in patients with late-onset dementia. In 1998, together with colleagues working at my local hospital, I discovered elevated homocysteine levels in patients with clinically diagnosed Alzheimer’s disease (AD)4. We later showed that homocysteine is an independent predictor of cognitive decline in healthy elderly individuals over a five-year period5. Others have since confirmed this observation6. For example, a high homocysteine level in midlife is an independent risk factor for the development of late-life AD in women up to thirty-five years later7. Patients with vascular dementia and mild cognitive impairment (MCI) frequently also have ‘hyperhomocysteinemia’, and it is now a well-recognised risk factor for cognitive decline and incident dementia4,8,6.

Although cognitive disorders have been the focus of such research in recent decades, there are also reports of hyperhomocysteinemia in other chronic neurodegenerative disorders such as Multiple Sclerosis and Parkinson’s Disease9. This has led to speculation as to whether hyperhomocysteinemia is an independent causal risk factor for AD or a secondary epiphenomenon related to neurodegeneration itself10. For example, one suggestion is that it likely reflects B vitamin depletion due to neuro-inflammatory oxidative stress11. In any event, hyperhomocysteinemia appears to be an important component of the dementia process and is related to neurotransmitter deficits, tau hyperphosphorylation and amyloid deposition12,13.

Homocysteine can be lowered by B vitamin supplementation. Case studies in cognitively impaired hyperhomocysteinaemic patients in my own general practice population showed improved cognition following B vitamin and antioxidant supplementation14. Clinical trial evidence has been equivocal for cognitively intact individuals and negative for patients with established AD15,16,17. However, a Cochrane review summarised intervention studies by suggesting that long-term supplementation of folic acid, with or without vitamin B12, might benefit healthy older people with high homocysteine levels18.

VITACOG is a recently completed randomised placebo-controlled two-year trial of high-dose B vitamin supplementation in elderly individuals (>70 years) with MCI19. Treatment comprised 0.8mg folic acid, 0.5mg vitamin B12 and 20mg of vitamin B6. The primary outcome was brain atrophy rate measured by serial volumetric MRI scans, but the trial also evaluated clinical and cognitive function. Complete scans were available for 83 placebo and 85 treated subjects. In these subjects, homocysteine levels fell by 22.5% in the treated group but increased by 7.7% in the placebo group. Consistent with earlier reports, higher homocysteine levels were associated with an increased rate of brain atrophy. Atrophy was significantly slowed, by 30%, in treated individuals with a baseline homocysteine level >9.5 μmol/L. Those with the highest baseline level (>13 μmol/L) had a rate of atrophy 53% lower than the placebo group.

In the larger cohort of subjects completing the cognitive component of the trial (113 received placebo and 110 B vitamins), there was a significant benefit of B vitamins amongst those with homocysteine >11 μmol/L in scores of global cognition, episodic memory and semantic memory20. There was also an improvement in clinical rating scales (the Informant Questionnaire on Cognitive Decline in the Elderly and the Clinical Dementia Rating) in the B vitamin group, but only in subjects with a homocysteine >13 μmol/L. Remarkably, treatment more than doubled the number of subjects with a Clinical Dementia Rating of zero (equating to no dementia), compared with no effect in the placebo group.

In summary, the VITACOG trial shows that the combination of oral high-dose B12, folic acid and B6 significantly slows brain atrophy and cognitive decline in patients with MCI and raised homocysteine. Importantly, these two effects are highly dependent on baseline levels of homocysteine, perhaps partly explaining discrepant results in earlier trials15,16.

Although very welcome news, these results raise several practical implications and difficulties. Physicians and public alike are generally unaware of the association between elevated homocysteine and the risk of developing dementia, and even less aware of the latest evidence showing beneficial effects of homocysteine reduction. Currently, homocysteine assays are neither routinely nor widely available, although they can be a useful screening test for B12 and folate deficiency21. Testing for high homocysteine in elderly subjects with early MCI may also have a place in determining the appropriate treatment of such patients.
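The baseline-dependent VITACOG subgroup results described above amount to a simple threshold rule. The sketch below summarises them as a lookup; it is illustrative only, not clinical guidance, and the function name is hypothetical:

```python
# Illustrative summary of the VITACOG subgroup thresholds reported in the
# text: benefit on brain atrophy was seen above 9.5 μmol/L, on cognitive
# scores above 11 μmol/L, and on clinical rating scales above 13 μmol/L.
# Not clinical guidance; `vitacog_benefit_tiers` is a hypothetical name.

def vitacog_benefit_tiers(baseline_hcy_umol_per_l: float) -> list[str]:
    """Return the reported benefit tiers for a baseline homocysteine (μmol/L)."""
    tiers = []
    if baseline_hcy_umol_per_l > 9.5:
        tiers.append("slowed brain atrophy (>9.5 μmol/L subgroup)")
    if baseline_hcy_umol_per_l > 11:
        tiers.append("cognitive benefit (>11 μmol/L subgroup)")
    if baseline_hcy_umol_per_l > 13:
        tiers.append("clinical rating benefit (>13 μmol/L subgroup)")
    return tiers

print(len(vitacog_benefit_tiers(12.0)))  # prints 2: atrophy and cognitive tiers
```

A value of 12 μmol/L, for example, falls in the atrophy and cognitive subgroups but below the clinical-rating threshold, which mirrors how the trial's effects strengthened with higher baseline homocysteine.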
The VITACOG trial suggests that patients with MCI and elevated homocysteine will benefit from a combination of oral folic acid (0.8mg), vitamin B12 (0.5mg) and vitamin B6 (20mg). However, no such single high-dose licensed prescription formulation exists, at least in the United Kingdom. Treatment therefore requires a multiple-item prescription, with its ensuing cost to the patient and the associated issue of poor compliance. In the light of the very positive results seen in the VITACOG trial, I therefore set about full-scale development of such a combination product.

My own practice with patients presenting with MCI is to measure their homocysteine level and, if it is >10 μmol/L, I had seen successful results from treating them with 0.8mg of folic acid, 20mg of vitamin B6 and 1mg of oral vitamin B12 daily; the latter was not available on NHS prescription but was available over the counter at many health stores. I had also found considerable additional cognitive benefit from including the antioxidant N-acetylcysteine (600mg) together with the B vitamins14. This addresses oxidative stress, further lowers homocysteine by increasing its urinary excretion, and can benefit cognitive scores in patients with established AD22. I therefore formulated Betrinac® based on these components. The vitamins used in Betrinac® are within the allowable dose range for EU food supplements, hence it is available as an “over-the-counter” rather than a prescription product.

It is of course not yet confirmed whether such intervention on a larger scale will ultimately reduce the incidence and burden of dementia. However, a recent estimate is that homocysteine lowering has the potential to lead to a 20% reduction in the risk of dementia, the financial and social implications of which are of course considerable23. Another issue is how frequently patients with MCI should have their homocysteine levels tested.
Current guidelines suggest an assessment of B vitamin status every 3–5 years, because dietary or drug changes may lead to deficiency, the symptoms of which may be mistakenly attributed to insidious dementia and hence go undetected21. Importantly, VITACOG now sets the stage for larger and longer trials to study the effects of B vitamins on conversion rates from MCI to dementia. The results also offer a glimmer of hope in treating a condition that has otherwise become associated with a degree of therapeutic nihilism. Physicians can now also avoid the twin trap of overtreatment by clearly defining those individuals who are most likely to benefit.

References

1. A.M. Kennedy, S. Newman, A. McCaddon, J. Ball, P. Roques, M. Mullan, J. Hardy, M.C. Chartier-Harlin, R.S. Frackowiak, E.K. Warrington, Familial Alzheimer’s disease. A pedigree with a mis-sense mutation in the amyloid precursor protein gene (amyloid precursor protein 717 valine-->glycine), Brain 116 (Pt 2) (1993) 309-324.
2. A. McCaddon, C.L. Kelly, Familial Alzheimer’s disease and vitamin B12 deficiency, Age Ageing 23 (1994) 334-337.
3. A. McCaddon, C.L. Kelly, Alzheimer’s disease: a ‘cobalaminergic’ hypothesis, Med. Hypotheses 37 (1992) 161-165.
4. A. McCaddon, G. Davies, P. Hudson, S. Tandy, H. Cattell, Total serum homocysteine in senile dementia of Alzheimer type, Int. J. Geriatr. Psychiatry 13 (1998) 235-239.
5. A. McCaddon, P. Hudson, G. Davies, A. Hughes, J.H. Williams, C. Wilkinson, Homocysteine and cognitive decline in healthy elderly, Dement. Geriatr. Cogn. Disord. 12 (2001) 309-313.
6. S. Seshadri, A. Beiser, J. Selhub, P.F. Jacques, I.H. Rosenberg, R.B. D’Agostino, P.W. Wilson, P.A. Wolf, Plasma homocysteine as a risk factor for dementia and Alzheimer’s disease, N. Engl. J. Med. 346 (2002) 476-483.
7. D.E. Zylberstein, L. Lissner, C. Bjorkelund, K. Mehlig, D.S. Thelle, D. Gustafson, S. Ostling, M. Waern, X. Guo, I. Skoog, Midlife homocysteine and late-life dementia in women. A prospective population study, Neurobiol. Aging 32 (2011) 380-386.
8. R. Clarke, A.D. Smith, K.A. Jobst, H. Refsum, L. Sutton, P.M. Ueland, Folate, vitamin B12, and serum total homocysteine levels in confirmed Alzheimer disease, Arch. Neurol. 55 (1998) 1449-1455.
9. R. Obeid, A. McCaddon, W. Herrmann, The role of hyperhomocysteinemia and B-vitamin deficiency in neurological and psychiatric diseases, Clinical Chemistry and Laboratory Medicine: CCLM/FESCC 45 (2007) 1590-1606.
10. M. Farkas, S. Keskitalo, D.E. Smith, N. Bain, A. Semmler, B. Ineichen, Y. Smulders, H. Blom, L. Kulic, M. Linnebank, Hyperhomocysteinemia in Alzheimer’s Disease: The Hen and the Egg?, J. Alzheimers Dis. (2012).
11. A. McCaddon, B. Regland, P. Hudson, G. Davies, Functional vitamin B(12) deficiency and Alzheimer disease, Neurology 58 (2002) 1395-1399.
12. A. McCaddon, P. Hudson, Alzheimer’s disease, oxidative stress and B-vitamin depletion, Future Neurology 2 (2007) 537-547.
13. A. Fuso, S. Scarpa, One-carbon metabolism and Alzheimer’s disease: is it all a methylation matter?, Neurobiol. Aging (2011).
14. A. McCaddon, Homocysteine and cognitive impairment; a case series in a General Practice setting, Nutr. J. 5 (2006) 6.
15. J. Durga, M.P. Van Boxtel, E.G. Schouten, J. Jolles, F.J. Kok, P. Verhoef, The effect of 3-year folic acid supplementation on cognitive function. A randomized controlled trial, Haematologica Reports 1 (2005) 1-1.
16. J.A. McMahon, T.J. Green, C.M. Skeaff, R.G. Knight, J.I. Mann, S.M. Williams, A controlled trial of homocysteine lowering and cognitive performance, N. Engl. J. Med. 354 (2006) 2764-2772.
17. P.S. Aisen, L.S. Schneider, M. Sano, R. Diaz-Arrastia, C.H. van Dyck, M.F. Weiner, T. Bottiglieri, S. Jin, K.T. Stokes, R.G. Thomas, L.J. Thal, High-dose B vitamin supplementation and cognitive decline in Alzheimer disease: a randomized controlled trial, JAMA 300 (2008) 1774-1783.
18. M. Malouf, J. Grimley Evans, A. Areosa Sartre, Folic acid with or without vitamin B12 for cognition and dementia (Cochrane Methodology Review), The Cochrane Library (2003).
19. A.D. Smith, S.M. Smith, C. de Jager, P. Whitbread, C. Johnston, G. Agacinski, A. Oulhaj, K.M. Bradley, R. Jacoby, H. Refsum, Homocysteine-lowering by B-vitamins slows the rate of accelerated brain atrophy in mild cognitive impairment: a randomized controlled trial, PLoS One 5 (2010) e12244.
20. C. de Jager, A. Oulhaj, R. Jacoby, H. Refsum, A.D. Smith, Cognitive and clinical outcomes of homocysteine lowering B vitamin treatment in mild cognitive impairment: a randomized controlled trial, Int. J. Geriatr. Psychiatry (2011).
21. H. Refsum, A.D. Smith, P.M. Ueland, E. Nexo, R. Clarke, J. McPartlin, C. Johnston, F. Engbaek, J. Schneede, C. McPartlin, J.M. Scott, Facts and Recommendations about Total Homocysteine Determinations: An Expert Opinion, Clin. Chem. 50 (2004) 3-32.
22. J.C. Adair, J.E. Knoefel, N. Morgan, Controlled trial of N-acetylcysteine for patients with probable Alzheimer’s disease, Neurology 57 (2001) 1515-1517.
23. D.S. Wald, A. Kasturiratne, M.
Simmonds, Serum homocysteine and dementia: Metaanalysis of eight cohort studies including 8669 participants, Alzheimers.Dement. 7 (2011) 412-417.

Andrew McCaddon MD is a Principal GP in Wrexham and an Honorary Senior Research Fellow in the School of Medicine, Cardiff University. His research interest in B vitamin deficiency and Alzheimer’s disease led to the formulation of Betrinac® – a high-dose B vitamin and antioxidant supplement.

Autumn 2012 Volume 4 Issue 4



Drug Discovery, Development and Delivery

Stem Cells and Drug Discovery

Introduction: Stem cells are extraordinary cells, with many features and advantages that could revolutionise drug development and healthcare. They are capable of both self-renewal and differentiation to mature somatic cells in vivo and in vitro1,2, and as such offer a limitless, consistent supply of physiologically relevant cells for applications such as cell replacement therapies, drug development and disease modelling. The ground-breaking emerging field of induced pluripotent stem (iPS) cells, in which somatic cells can be reprogrammed to a pluripotent state1, has further increased interest in stem cell technology, as iPS cells present the opportunity to generate patient- and disease-specific cells for personalised medicine and disease modelling. Many different types of stem cell exist, of diverse origin and with differing potential for self-renewal and lineage differentiation. Pluripotent stem cells (embryonic stem (ES) and iPS cells) are the most potent, able to self-renew indefinitely and differentiate into all somatic cell types in vivo and many in vitro1. Of particular interest to the pharmaceutical industry, pluripotent stem cells have been used to generate human cardiac, hepatic and multiple neuronal (e.g. dopaminergic, GABAergic, motor neuron) cell types in vitro. Multipotent, or adult, stem cells can be isolated from many foetal and adult tissues, e.g. haemopoietic, neural, mesenchymal and muscle2. They have more restricted self-renewal and differentiation potential than pluripotent stem cells, typically limited to generating cells of the tissue from which they were isolated – e.g. neural stem cells under normal circumstances are only capable of differentiating into the three neural lineages of neurons, astrocytes and oligodendrocytes3 (Figure 1).

Figure 1. Stem cell sources and their differentiation potential. Different types of stem cells exist which differ in their longevity in culture and in the variety of mature cell types they can generate. Pluripotent stem cells – either embryonic or induced – are the most potent and are capable of indefinite self-renewal in vitro and of generating all somatic cell types. Embryonic stem cells are isolated from the inner cell mass of blastocysts, whereas induced pluripotent stem cells are generated by reprogramming somatic cells. Adult, or tissue-specific, stem cells are more restricted in their differentiation potential, typically only able to generate cells of the tissue from which they were isolated.

Stem cells have been utilised in cell replacement therapies for over 40 years in the form of bone marrow transplantation4. Haemopoietic stem cells (HSCs), although present in bone marrow at a very low frequency, are capable of reconstituting the entire blood system of recipient patients5. More recently, other stem cell treatments have progressed to the clinic, for example Mesoblast’s adult stem cell Revascor™ therapy for congestive heart failure6 and Advanced Cell Technology’s human ES cell-derived retinal pigmented epithelial cells for Stargardt’s disease7. However, the high cost of manufacturing these treatments, along with a complicated and poorly understood regulatory pathway – particularly for pluripotent stem cell-derived therapies – is impeding their widespread development. An alternative application of stem cells is their use in the discovery of conventional small-molecule drugs, for which the regulatory and manufacturing pathways are well established. Stem cells have application in all stages of the drug discovery pathway, from target identification to high-throughput screening to toxicology studies. Here we will highlight examples of how stem cells are already being utilised in this process and describe innovative techniques which are helping improve performance and functionality in

vitro, in order to bring the application of stem cells to the forefront of the pharmaceutical industry.

Stem Cells and Drug Discovery

High-Throughput Screening: Current methods of drug screening rely largely on recombinant transformed cell lines that express a target of interest, e.g. a GPCR, but are otherwise not directly relevant to the disease being studied. More physiologically relevant primary cells are in short supply, and batch variability limits their application. Stem cells offer an attractive alternative to primary cells and recombinant cell lines as they can be propagated for prolonged periods, can be cryopreserved, and can differentiate to physiologically relevant cell types. Furthermore, iPS cells now offer the opportunity to generate disease-specific somatic cells and to rapidly generate panels of stem cells with a range of genetic phenotypes. While stem cell-derived somatic cells have been used for several proof-of-concept studies with small numbers of compounds8,9, there are few reports of true high-throughput screening (HTS) campaigns. Pfizer, however, have carried out one such screen, in which mES cells were



differentiated into pharmacologically responsive glutamatergic neurons and used to screen a library of 2.4 × 10⁶ compounds10. Novel chemical hits for AMPA potentiation were identified and validated in secondary assays using hES cell-derived neurons. There is increasing evidence that the large pharmaceutical companies are seriously contemplating the use of stem cells for drug discovery. For example, Roche invested $20 million in a deal with Harvard University to use cell lines and protocols to screen for drugs to treat cardiovascular and other diseases, and GSK have signed a similar deal worth $25 million. Adult stem cells, or progenitors derived from pluripotent stem cells, also have application in the discovery of regenerative drugs that would prompt their in vivo counterparts to repopulate lost or diseased cells in conditions such as stroke or heart failure. Regenerative drugs are already available – for example Eltrombopag11 (Promacta/Revolade), a TPO receptor agonist, which stimulates the production of platelets from haemopoietic progenitor cells. However, Eltrombopag was discovered in a traditional drug screen using a recombinant cell line expressing the TPO receptor12, an approach which relies on knowing which receptors and cytokines to target for regeneration of a particular tissue. For most tissues this information is not known, and in these cases the in vitro use of stem cells and their progeny would be very advantageous.

Toxicology: Approximately 30% of drugs that fail in early-stage clinical trials do so because of toxicity issues, primarily hepatic and cardiac toxicity. This costs drug developers billions of dollars a year and demonstrates that current preclinical toxicology models are ineffective.
Primary hepatocytes and cardiomyocytes are expensive to obtain, are in short supply and vary significantly from donor to donor, while transformed cell lines and animal models are less physiologically relevant to human organ function. Pluripotent stem cells could provide a limitless, consistent alternative source of human hepatocytes and cardiomyocytes for toxicity studies and greatly reduce the need for animal testing. iPS cells hold

particular value for this application since they can readily be derived from many different individuals, providing an efficient system for generating cell panels to test the effects of drugs on different genetic populations13,14. Several proof-of-concept studies have been carried out to evaluate the use of pluripotent stem cell-derived hepatocytes and cardiomyocytes to predict drug effects in humans, and Roche is already using iPS-derived cardiomyocytes15 (supplied by Cellular Dynamics International) in its drug discovery and toxicity studies.

Disease Modelling: The ability to genetically manipulate mouse and human ES cells has been used for many years in the generation of somatic cell and mouse models of human disease in which genes are ‘knocked out’ or point mutations are engineered16,17. The advent of reprogramming technology now brings the ability to generate iPS cells from patients with a variety of diseases, which can then be differentiated to specific lineages, producing disease- and patient-specific somatic cells. For example, iPS cells have been generated from patients with a K+ channel mutation associated with cardiac arrhythmias18. Cardiomyocytes differentiated from these iPS cells were found to recapitulate the longer action potentials observed in the patients, and were used to discover small molecules that could correct the underlying electrophysiological defect. iPS cells have been generated from patients with a wide variety of diseases such as Huntington’s, ALS, SCID, juvenile diabetes and spinal muscular atrophy (SMA)19. Although such studies on diseased cells are informative, in many cases it has been shown that cellular responses to drug candidates observed in 2D cultures do not translate to the in vivo response. Much effort is therefore being applied to generating more functional, physiologically relevant in vitro 3D disease models which contain multiple cell types in a relevant tissue architecture – i.e. tissue engineering.
One approach is to seed cells on a biomimetic scaffold that guides them to differentiate and form a 3D cell construct. By modifying

the scaffold material, strength and structure, different outcomes can be achieved; and by seeding different cell types or spatially organising developmental cues, a functional 3D structure can be generated which mimics organ architecture and cell-cell and cell-ECM interactions20. For example, in a very elegant study, 3D hydrogel scaffolds were generated in which Sonic Hedgehog and Ciliary Neurotrophic Factor were simultaneously immobilised in distinct patterns. These factors differentially affected the differentiation of neural progenitor cells21, opening the possibility of generating 3D organ mimetics by spatially controlling the differentiation of stem and progenitor cells. Technologies such as bioprinting have also recently come to the fore. Bioprinting is a computer-controlled cell deposition technique that allows precise spatial resolution and control of 3D cell constructs22. For example, blood vessel substitutes of different diameters have been generated by printing mixtures of endothelial and smooth muscle cells in defined geometries and subsequently applying physiological signals such as shear flow20 (Figure 2).

Figure 2. Bioprinted blood vessels. The top image shows a template to build a construct with spheroids composed of smooth muscle cells (red) and endothelial cells (green). A transverse section after fusion (bottom image) shows the lumen is predominantly composed of endothelial cells. From: Jakab, K., et al., Tissue engineering by self-assembly and bio-printing of living cells. Biofabrication, 2010. 2(2): p. 022001.
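Bioprinters of this kind are driven over computed coordinates. To give a flavour of the geometry involved, the sketch below places spheroid centres evenly around one ring of a tubular template. It is a toy calculation with assumed dimensions; it says nothing about any real printer’s file format or the cited study’s parameters.

```python
import math

def ring_template(vessel_diameter_um, spheroid_diameter_um, z_um=0.0):
    """Place spheroid centres evenly around a circle, forming one layer of a
    printed tube. All dimensions are illustrative assumptions."""
    circumference = math.pi * vessel_diameter_um
    n = max(3, round(circumference / spheroid_diameter_um))
    return [(0.5 * vessel_diameter_um * math.cos(2 * math.pi * k / n),
             0.5 * vessel_diameter_um * math.sin(2 * math.pi * k / n),
             z_um)
            for k in range(n)]

# A 900 um vessel built from 300 um spheroids needs about pi*900/300 ~ 9
# spheroids per ring; stacking rings at increasing z builds the tube.
layer = ring_template(900, 300)
print(len(layer))
```

Varying the vessel diameter simply changes the number of spheroids per ring, which is how substitutes of different diameters can be templated from one deposition routine.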




Controlling Stem Cell Differentiation
A fundamental requirement for all the above applications of stem cells is the ability to reliably and robustly direct their differentiation to specific functional cell types in high yield. This is technically extremely challenging, and generating cell batches at scale in a cost-effective manner, as required for cell therapy and drug discovery applications, is even more demanding. Many factors have to be considered when developing methods to differentiate stem cells. Typically the sequential addition of particular combinations of growth and patterning factors is required, essentially mimicking processes that occur in vivo during development23. The microenvironment in which cells are cultured also needs to be optimised, as the extracellular matrix (ECM) substrate and the spatial configuration of stem cells can have an enormous effect on their fate24. Testing a significant number of such variables is very labour-intensive and time-consuming, limiting the development of optimal protocols. Here we describe some high-throughput techniques that are being applied to expedite the discovery of methods to control stem cell self-renewal and differentiation.

Discovering optimal cell culture media
The addition of growth factors or small molecules that target particular signalling pathways is one of the principal methods researchers use to direct the differentiation of stem cells to a particular cell type. Selection of these factors is typically based on what is known of lineage development during embryogenesis, or in the adult during tissue repair.
For example, the differentiation of hES cells to pancreatic cells requires a series of four different culture media, each containing a combination of growth factors and/or small molecules which first induce stem cells to commit to definitive endoderm, then to pancreatic endoderm, then to pancreatic endocrine/exocrine cells and finally to more mature islet cells25. To date, the

Figure 3. Combinatorial cell culture. Combicult® is a high-throughput platform for the rapid identification of stem cell differentiation protocols. Stem cells on beads are exposed to multiple combinations of media, containing active agents such as growth factors or small molecules, using a split-pool technique. The optimal combinations for effective differentiation can be deduced rapidly and cost-effectively.

development of such complicated protocols has been carried out empirically, involving much effort and resource. The temporal, sequential nature of stem cell differentiation lends itself to a combinatorial approach to protocol discovery. Plasticell has developed a high-throughput platform that uses combinatorial cell culture (Combicult®) technology

to screen tens of thousands of protocols in one experiment26. Combicult® combines miniaturisation of cell culture on microcarriers, a pooling/splitting protocol and a unique tagging system to allow multiplexing of experiments. Stem cells grown on microcarrier beads are shuffled randomly, stepwise, through multiple differentiation media using a split-pool method,



systematically sampling all possible combinations of media in a predetermined matrix (Figure 3). The tagging system allows the cell culture history (i.e. the differentiation protocol) of beads bearing cells of the desired lineage to be deduced. The system has been successfully used to discover novel differentiation protocols for many different starting stem cell types and differentiated progeny, e.g. hepatocytes, neurons and osteoblasts from hES cells, mES cells and hMSCs. Since large numbers of conditions can be tested in each screen, it is possible to efficiently discover optimised protocols that have advantages over more traditional cell culture methods – e.g. protocols that are serum-free, use only small molecules, or exclude other variable and expensive products. For example, a screen of 10,000 protocols identified serum-free, feeder cell-free protocols for the generation of megakaryocytes (platelet precursor cells) from hES cells. In several of these protocols growth factors were replaced with small bioactive molecules.

Several groups have taken the approach of using automated robotic cell culture systems to screen multiple growth and differentiation conditions in multiwell format. These are typically coupled with an automated screening readout, such as high-content analysis platforms, that enables simultaneous assessment of multiple cellular features in an automated, quantitative way. In particular, focus has been on screening small molecules for their effects on self-renewal and stem cell differentiation, as small molecules have advantages in terms of reproducibility and cost-effectiveness. In one example, over 5000 compounds were screened for their effect on pancreatic differentiation of hES cells using high-content analysis of pdx1 expression as a readout. One compound in particular was found to promote efficient generation and expansion of pancreatic progenitor cells27.
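The split-pool sampling at the heart of such combinatorial screens can be simulated in a few lines. This is a toy sketch: the stage count, media count, bead numbers and random tags below are illustrative stand-ins, not details of the actual Combicult® platform.

```python
import random

# Toy simulation of split-pool protocol sampling; all numbers are assumed.
STAGES = 4            # sequential differentiation steps
MEDIA_PER_STAGE = 10  # candidate media per step -> 10**4 = 10,000 protocols

def split_pool(n_beads, seed=0):
    """Each bead is pooled, then randomly split into one medium per stage;
    its accumulating 'tag' records the exact sequence of media it saw."""
    rng = random.Random(seed)
    return [tuple(rng.randrange(MEDIA_PER_STAGE) for _ in range(STAGES))
            for _ in range(n_beads)]

beads = split_pool(n_beads=50_000)
# In a real screen, a lineage-marker readout flags 'hit' beads and their
# tags are decoded to recover the winning protocols.
sampled = set(beads)
print(f"{len(sampled)} of {MEDIA_PER_STAGE**STAGES} possible protocols sampled")
```

With 50,000 beads shuffled through a 10 × 10 × 10 × 10 matrix, nearly all of the 10,000 possible protocols are sampled, which is the point of the design: combinatorial coverage without running each protocol as a separate culture.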
Recreating the stem cell niche
Understanding the

microenvironments in which stem cells reside and differentiate in vivo, and trying to recapitulate these in vitro to further control stem cell differentiation, has become an increasingly important area of stem cell research28. In particular, focus has been on the biochemical and mechanical influence of different ECM components, and on how these and the 3D configuration of cells affect their fate. Innovative microfabrication techniques have been used to investigate these influences, providing a high-throughput and cost-effective way of discovering how different materials affect stem cell fate29. For example, different ECM and cell adhesion factors can be robotically spotted onto microarrays in various combinations, allowing screens of hundreds of putative microenvironments. Flaim et al. used this technique to probe interactions of ECM components in combination with soluble growth factors30. A multiwell microarray platform was developed that allows 1200 simultaneous experiments on 240 unique signalling environments. A reporter ES cell line (GFP under the control of the MHC promoter) was used to monitor cardiac differentiation using a confocal microarray scanner.

The effect of mechanical forces on stem cell differentiation has also become a major topic of investigation. It is clear that applied mechanical forces can affect the activity and expression of transcription factors and chromatin-remodelling enzymes, in turn affecting stem cell fate. A study investigating different polyacrylamide gels showed that gel stiffness had a dramatic effect on the differentiation fate of MSCs, with culture on soft, intermediate or stiff gels resulting in differentiation to neurons, muscle and bone respectively31. High-throughput methods have also been developed to assess the effect of substrate stiffness on cell function. For example, libraries of micropost arrays of different heights, resulting in different stiffnesses, have been generated.
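The height dependence behind such libraries follows from elementary beam theory: a cylindrical post loaded at its tip behaves as a cantilever with spring constant k = 3EI/L³, where I = πd⁴/64, so stiffness falls with the cube of post height. Below is a sketch with assumed, PDMS-like values; the modulus and dimensions are illustrative, not those of the cited library.

```python
import math

def post_stiffness(E_pa, d_m, L_m):
    """Cantilever bending stiffness of a cylindrical micropost:
    k = 3*E*I / L**3, with second moment of area I = pi*d**4/64."""
    I = math.pi * d_m**4 / 64
    return 3 * E_pa * I / L_m**3

E = 2.5e6               # assumed Young's modulus, Pa (PDMS-like)
d = 2e-6                # assumed post diameter, 2 um
for L_um in (3, 6, 9):  # taller posts are dramatically softer
    k = post_stiffness(E, d, L_um * 1e-6)          # N/m
    print(f"L = {L_um} um -> k = {k * 1000:.1f} nN/um")
```

Tripling the post height softens the substrate 27-fold, which is how a single fabrication process can span the wide rigidity range needed for such screens.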
These micropost arrays can also be microprinted with ECM components on their surface

to investigate ECM binding and substrate rigidity together32.

Conclusions
The unique properties of stem cells offer enormous potential across biopharmaceutical applications. In the area of drug development discussed here, they are already being used to some degree, particularly in disease modelling and toxicity studies. However, widespread adoption of stem cell technology in all aspects of the drug discovery process will depend on the development of robust, reproducible methods to culture stem cells and, in particular, to direct their differentiation to specific lineages. Discovery and optimisation of stem cell differentiation protocols is technically challenging due to the large number of variables to consider, and the adoption of higher-throughput techniques for protocol discovery would be advantageous. Focus needs to be on integrating all the signals that affect stem cell differentiation – i.e. soluble factors, cell-cell interactions, 3D configurations and the chemical and mechanical properties of cell substrates – to generate stem cell-derived ‘mini-tissues’ that are more physiologically relevant than current systems. As the techniques described above, and others – in particular new imaging technologies for tracking cells in complex 3D microstructures – are further developed, the use of stem cells will advance to the forefront of the pharmaceutical industry, where their potential for transforming cell therapy and drug development can be realised.

References
1. Hanna, J.H., K. Saha, and R. Jaenisch, Pluripotency and cellular reprogramming: facts, hypotheses, unresolved issues. Cell, 2010. 143(4): p. 508-25.
2. Alison, M.R. and S. Islam, Attributes of adult stem cells. J Pathol, 2009. 217(2): p. 144-60.
3. Merkle, F.T. and A. Alvarez-Buylla, Neural stem cells in mammalian development. Curr Opin Cell Biol, 2006. 18(6): p. 704-9.
4. Copelan, E.A., Hematopoietic stem-cell transplantation. N Engl J Med, 2006. 354(17): p. 1813-26.
5. Czechowicz, A. and I.L. Weissman, Purified hematopoietic stem cell transplantation: the next generation of blood and immune replacement. Immunol Allergy Clin North Am, 2010. 30(2): p. 159-71.
6. See, F., et al., Therapeutic effects of human STRO-3-selected mesenchymal precursor cells and their soluble factors in experimental myocardial ischemia. J Cell Mol Med, 2011. 15(10): p. 2117-29.
7. Schwartz, S.D., et al., Embryonic stem cell trials for macular degeneration: a preliminary report. Lancet, 2012. 379(9817): p. 713-20.
8. Kiris, E., et al., Embryonic stem cell-derived motoneurons provide a highly sensitive cell culture model for botulinum neurotoxin studies, with implications for high-throughput drug discovery. Stem Cell Res, 2011. 6(3): p. 195-205.
9. Makhortova, N.R., et al., A screen for regulators of survival of motor neuron protein levels. Nat Chem Biol, 2011. 7(8): p. 544-52.
10. McNeish, J., et al., High throughput screening in embryonic stem cell-derived neurons identifies potentiators of alpha-amino-3-hydroxyl-5-methyl-4-isoxazolepropionate-type glutamate receptors. J Biol Chem, 2010. 285(22): p. 17209-17.
11. Erickson-Miller, C.L., et al., Preclinical activity of eltrombopag (SB-497115), an oral, nonpeptide thrombopoietin receptor agonist. Stem Cells, 2009. 27(2): p. 424-30.
12. Erickson-Miller, C.L., et al., Discovery and characterization of a selective, nonpeptidyl thrombopoietin receptor agonist. Exp Hematol, 2005. 33(1): p. 85-93.
13. Sartipy, P. and P. Bjorquist, Concise review: Human pluripotent stem cell-based models for cardiac and hepatic toxicity assessment. Stem Cells, 2011. 29(5): p. 744-8.
14. Laustriat, D., J. Gide, and M. Peschanski, Human pluripotent stem cells in drug discovery and predictive toxicology. Biochem Soc Trans, 2010. 38(4): p. 1051-7.
15. Zhang, J., et al., Functional cardiomyocytes derived from human induced pluripotent stem cells. Circ Res, 2009. 104(4): p. e30-41.
16. Hook, L., C. O’Brien, and T. Allsopp, ES cell technology: an introduction to genetic manipulation, differentiation and therapeutic cloning. Adv Drug Deliv Rev, 2005. 57(13): p. 1904-17.
17. Zwaka, T.P. and J.A. Thomson, Homologous recombination in human embryonic stem cells. Nat Biotechnol, 2003. 21(3): p. 319-21.
18. Itzhaki, I., et al., Modelling the long QT syndrome with induced pluripotent stem cells. Nature, 2011. 471(7337): p. 225-9.
19. Rubin, L.L. and K.M. Haston, Stem cell biology and drug discovery. BMC Biol, 2011. 9: p. 42.
20. Jakab, K., et al., Tissue engineering by self-assembly and bio-printing of living cells. Biofabrication, 2010. 2(2): p. 022001.
21. Wylie, R.G., et al., Spatially controlled simultaneous patterning of multiple growth factors in three-dimensional hydrogels. Nat Mater, 2011. 10(10): p. 799-806.
22. Guillotin, B. and F. Guillemot, Cell patterning technologies for organotypic tissue fabrication. Trends Biotechnol, 2011. 29(4): p. 183-90.
23. D’Amour, K.A., et al., Production of pancreatic hormone-expressing endocrine cells from human embryonic stem cells. Nat Biotechnol, 2006. 24(11): p. 1392-401.
24. Daley, W.P., S.B. Peters, and M. Larsen, Extracellular matrix dynamics in development and regenerative medicine. J Cell Sci, 2008. 121(Pt 3): p. 255-64.
25. Jiang, J., et al., Generation of insulin-producing islet-like clusters from human embryonic stem cells. Stem Cells, 2007. 25(8): p. 1940-53.
26. Choo, Y., Use of combinatorial screening to discover protocols that effectively direct the differentiation of stem cells, in Stem Cell Research and Therapeutics, Y. Shi and D.O. Clegg, Editors. 2008, Springer Science + Business Media. p. 227-250.
27. Chen, S., et al., A small molecule that directs differentiation of human ESCs into the pancreatic lineage. Nat Chem Biol, 2009. 5(4): p. 258-65.
28. Choi, C.K., M.T. Breckenridge, and C.S. Chen, Engineered materials and the cellular microenvironment: a strengthening interface between cell biology and bioengineering. Trends Cell Biol, 2010. 20(12): p. 705-14.
29. Kobel, S. and M. Lutolf, High throughput methods to define complex stem cell niches. Biotechniques, 2010. 48(4): p. ix-xxii.
30. Flaim, C.J., et al., Combinatorial signaling microenvironments for studying stem cell fate. Stem Cells Dev, 2008. 17(1): p. 29-39.
31. Engler, A.J., et al., Matrix elasticity directs stem cell lineage specification. Cell, 2006. 126(4): p. 677-89.
32. Yang, M.T., et al., Assaying stem cell mechanobiology on microfabricated elastomeric substrates with geometrically modulated rigidity. Nat Protoc, 2011. 6(2): p. 187-213.

Dr Lilian Hook is Plasticell’s Research Director. She has over 15 years’ experience in the stem-cell field, gained both in academia and industry. Her work has focused on the biology and biopharmaceutical applications of stem cells, particularly in the haemopoietic and neural fields.




Clinical Research

A Winner Emerges in the War Against Microbes

Man has benefited from copper’s inherent antimicrobial properties since the dawn of civilisation, yet it is only in the last 10-20 years that scientific studies have been conducted to properly evaluate the metal’s potential for reducing contamination in critical environments such as hospitals and food processing facilities. In the healthcare sector, the weight of laboratory and clinical evidence has stimulated demand for incorporating copper into touch-surface hot-spots in the fight against healthcare-associated infections (HCAIs).

Hand hygiene is a pillar of infection control, but in recent years the less-than-adequate compliance displayed by healthcare workers – before patients and visitors are even factored into the equation – has led many hospitals to conclude that more needs to be done in the fight against HCAIs. Significant reductions in certain HCAIs – such as MRSA and C. difficile – are encouraging, but current figures still show that, within the European Union, over 4 million patients contract an HCAI each year. Given that these infections lead to upwards of 16 million extra days in hospital and account for an estimated 37,000 deaths, while costing the NHS alone over £1 billion annually, it is clear a new approach is needed.

Copper – an essential element required by both plants and animals – is a familiar metal thanks to its superior electrical and thermal conductivity and its ability to combine with other metals to produce important alloys such as brass and bronze. This same metal that has been part of daily human life for thousands of years could also be part of the solution to HCAIs. Professor Bill Keevil – now Chair in Environmental Healthcare and Principal Investigator (Microbiology & Environmental Health) at the University of Southampton – was the first researcher to demonstrate copper’s

ability to rapidly kill the bacteria that cause HCAIs. As he explains, even this application of copper is far from new. “Since ancient times, mankind has been aware of the beneficial properties of copper in reducing microbial infections. Even though people did not understand germ theory back then, they recognised the correlation between copper and protection from disease. 5000 years ago, for example, the Egyptians used copper to transport water and to heal wounds. Later on, in the 1850s, it was noticed that during Parisian cholera outbreaks the copper workers were not affected.”

Professor Keevil’s work suggested a role for copper in the healthcare environment: if it was effective at killing bacteria, viruses and fungi in the laboratory, could it be used on frequently-touched surfaces in hospitals to continuously reduce contamination and help break the chain of infection? This question was addressed by a clinical trial at Selly Oak Hospital in Birmingham, led by Professor Tom Elliott, Consultant Microbiologist for University Hospitals Birmingham NHS Foundation Trust. He and his team investigated whether copper – and specifically alloys that benefit from its antimicrobial efficacy,



including brasses, bronzes and copper-nickels – deployed as touch surfaces such as grab-rails, door furniture, light switches, taps, over-bed tables, sink-traps and toilet seats would result in significantly lower levels of contamination on these surfaces. The results appeared in national and international media and caught the attention of the international infection prevention community: with normal cleaning, the copper surfaces achieved a greater than 90% reduction in bioburden compared to standard, non-copper surfaces. This remarkable reduction translates to a reduced risk of bacteria and viruses being passed between people via these surfaces, and consequently less chance of vulnerable patients acquiring life-threatening infections. Professor Elliott recently observed: “Self-disinfecting surfaces such as copper are a significant step forward in reducing infection-causing microbial bioloads on clinical surfaces. We should now ask the question: why select a non-antimicrobial surface when we know that some naturally-occurring metals, such as copper, have this intrinsic antimicrobial activity?”

Indeed, copper and the alloys that share its antimicrobial activity – collectively termed ‘antimicrobial copper’ – are of great interest to companies looking to ‘design out infection’ and meet the growing market demand for effective antimicrobial surfaces, and a number of clinical trials around the world have supported the Selly Oak findings. Trials in Japan, Chile and the US have confirmed the significant reduction in contamination, and further trials are underway in France and Greece. A recently completed US trial has gone beyond demonstrating bioburden reduction to look at the impact on patient outcomes.
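The bioburden comparisons quoted here, and the risk reductions reported below, are simple ratios of counts; a minimal illustration (the cfu figures are invented for the example, not trial data):

```python
def percent_reduction(control_cfu, copper_cfu):
    """Percentage reduction in bioburden on a test surface vs. its control."""
    return 100.0 * (control_cfu - copper_cfu) / control_cfu

# Invented counts per 100 cm^2: a control rail vs. a copper rail.
print(percent_reduction(13000, 900))  # a reduction of over 90%
```

The same arithmetic applies whether the inputs are colony counts on a surface or infection rates in copper versus control rooms.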
Initial results, presented by trial leader Dr Mike Schmidt at the 2011 WHO International Conference on Prevention and Infection Control (ICPIC) [1], indicated a greater than 40% reduction in the risk of patients acquiring a hospital infection when in single ICU rooms where key touch surfaces had been replaced with antimicrobial copper equivalents. The trial was conducted in three world-class facilities and funded by the

US Department of Defense, and had three distinct stages. In the first, the baseline microbial burden on frequently-touched objects in ICU rooms was established, prior to the installation of any antimicrobial copper items. The goal of this phase was to ensure the most effective deployment by identifying the most heavily contaminated surfaces. The greatest bioburden was found on bed rails (with an average 13,028 cfu per 100 cm²) [2]. Also highly contaminated were over-bed tables, visitor chair arms, nurse call buttons, data input devices and IV poles. The second stage was to replace these surfaces – which equalled around 10% of the room’s total touch surface area – with antimicrobial copper items, and compare the microbial burden on these and non-copper equivalents over the course of 135 weeks. Weekly sampling was undertaken in the copper and control rooms, with colony-forming units and indicator organisms counted. The median bioburden found on copper surfaces was 97% less than that on the control surfaces [3]. The third and, perhaps, most exciting stage – reported at ICPIC – assessed incidences of HCAIs in the copper and control ICU rooms. This data was reviewed by hospital statisticians to ensure it was robust and the results were significant. Preliminary findings show a significant reduction in the risk of acquiring an infection in rooms where antimicrobial copper touch

surfaces are present. The percentage reduction in risk is between 40 and 70%, and was described by the study team as a significant and consistent reduction in infection rates. The reason for the variation was that certain items (such as chairs) travelled between rooms, and bariatric patients were not able to use the standard-sized antimicrobial copper-railed beds. The number of antimicrobial copper components in all the rooms was monitored throughout each patient’s stay, and the preliminary results show that patients who were in a room with 75% of the antimicrobial copper components present (by surface area) had a 40.4% reduced risk of acquiring an infection. This risk reduction increased to 61% if the patient was in an antimicrobial copper-railed bed in a copper room, and for patients in rooms with all antimicrobial copper components present for the full duration of their stay, the risk reduction was 69.1%. Trial leader Dr Mike Schmidt, Professor and Vice Chairman of Microbiology and Immunology at the Medical University of South Carolina, says of the results: “Bacteria present on ICU room surfaces are probably responsible for 35 to 80% of patient infections, demonstrating how critical it is to keep hospitals clean. “The copper objects used in the clinical trial supplemented cleaning protocols, lowered microbial levels, and resulted in a statistically significant reduction in the number of infections



contracted by patients treated in those rooms.” Alongside this research into the impact of antimicrobial copper in the clinical environment, scientists are seeking to further our understanding of how copper exerts its antimicrobial effect. The exact sequence is still under investigation; however, several mechanisms appear to work in concert, and these are being studied by research groups around the world. The currently-known mechanisms are:
• Causing leakage of potassium or glutamate through the outer membrane of bacteria
• Disturbing osmotic balance
• Binding to proteins that do not require copper
• Causing oxidative stress by generating hydrogen peroxide
• Causing degradation of bacterial DNA
The multiple mechanisms and, in particular, the degradation of bacterial DNA are highly significant when considering the long-term deployment of antimicrobial copper touch surfaces, as UK researcher Professor Keevil explains: “We know that copper kills viruses and destroys DNA, including plasmids, so this should stop the transfer of DNA which would include those toxic genes and also the transfer of antibiotic resistance from one species to another.” Convinced by the science, infection control professionals next question the price of installing antimicrobial copper touch surfaces. To address this concern, the International Copper Association (ICA) commissioned York Health Economics Consortium (YHEC) to develop a cost-benefit model to illustrate the economic rationale of an antimicrobial copper intervention. Using figures for a UK ICU, the model shows that the antimicrobial copper surfaces pay for themselves in less than one year. York Health Economics Consortium – a company wholly owned by the University of York – was established in 1986 to extend the University’s services into the healthcare sector.
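The payback logic behind such a model – a one-off capital premium recouped through avoided infection costs – can be illustrated with a toy calculation. Everything below is a hypothetical sketch: the function name and all figures are placeholders, not values from the YHEC model.

```python
# Toy payback calculation in the spirit of the cost-benefit model
# described in the text. All figures are hypothetical placeholders,
# not values from the YHEC model.

def payback_years(copper_premium, infections_per_year,
                  risk_reduction, cost_per_infection):
    """Years until avoided-infection savings cover the extra capital cost."""
    annual_saving = infections_per_year * risk_reduction * cost_per_infection
    return copper_premium / annual_saving

# Hypothetical 20-bed ward: a 50,000 premium for copper components,
# 15 HCAIs per year, a 40% risk reduction, 10,000 per infection treated.
print(round(payback_years(50_000, 15, 0.40, 10_000), 2))  # 0.83
```

With these placeholder inputs the premium is recovered in roughly ten months, in line with the under-one-year payback reported for the model.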
It was selected for this project by ICA as it provides consultancy and research in health economics to the NHS and the pharmaceutical and healthcare industries, and was well-placed to

develop a comprehensive and robust business model. YHEC used the results from the clinical trials previously described as a basis for reductions in HCAIs achievable following a copper installation. The model is populated with referenced datasets for rates and costs of HCAIs, cost of antimicrobial copper components and similar non-copper components without antimicrobial efficacy. It also offers users the opportunity to enter their own local data to produce customised calculations. Presenting a Master Class at the London Reducing HCAIs Conference in October, Mark Tur, Antimicrobial Copper Technical Consultant for Copper Development Association, explained the need for the business model: “The copper intervention is an engineering one: it’s different to other measures being deployed to tackle HCAIs, like new procedures or consumables. It requires capital spend, but then delivers savings to care budgets. We’re often asked about the cost of installing Antimicrobial Copper. The real question is about the value of copper, not the cost. This model will help infection control staff, who accept the science, convince their CEOs to look at implementing Antimicrobial Copper for any planned

extensions or refurbishments. Payback in less than one year makes this an intervention that warrants their attention.” The payback times demonstrated by the model back the findings of Professor Tom Elliott, leader of the Selly Oak clinical trial. At an event earlier this year, he noted: “For the one-off cost of installing Antimicrobial Copper surfaces, you get continuous microbial contamination reduction throughout the products’ life, and these materials are durable and long-lasting. The cost for a 20-bed medical ward was equivalent to the cost of just 1.5 infections.” The final report and model are due for completion later in the year, but an advance document detailing a worked example using actual screenshots from the software is already available on the antimicrobial copper website. There is a stewardship scheme for antimicrobial copper, which is administered by the International Copper Association and a global network of centres that together form the Copper Alliance. This offers reassurance to those wishing to specify antimicrobial copper products that they are buying an efficacious product from a company that is aware of the requirements for supplying them, for example ensuring



the product is uncoated, since any permanent or temporary coating would come between the active surface and pathogens, rendering it ineffectual. A Products and Services Directory on the antimicrobial copper website contains a list of approved companies. The website also contains case studies of installations using antimicrobial copper touch surfaces as part of their infection prevention approach, which are taking place around the world. Some are outside the healthcare environment, in other facilities and high-traffic areas where the spread of infection is a concern. In Europe, the latest are Hagen General Hospital in Germany (installing antimicrobial copper door furniture throughout a children’s ICU), Craigavon Area Hospital in Northern Ireland (which has copper surfaces in its trauma and orthopaedic facility, theatres and maternity and, most recently, in a new operating theatre suite) and NHS facility Homerton University Hospital (which installed antimicrobial copper during the renovation of a specialist Adult Rehabilitation Unit). In Asia, at Hua Dong Hospital in China, the Respiratory Intensive Care Floor has been extensively fitted with a range of antimicrobial copper surfaces. Ochiai Clinic in Japan also has a range of antimicrobial copper surfaces (in a striking brass that appealed to the architect), and elsewhere in Japan – beyond healthcare – three kindergartens (including two rebuilding in the Fukushima area) also have installations including antimicrobial copper taps, serving trolleys, work surfaces and stair rails. South America has installations including Chile’s oldest paediatric hospital, where the ICU is equipped with numerous antimicrobial copper surfaces including bed rails, taps, IV poles and medical clipboards.
Congonhas Airport – one of Brazil’s busiest transport hubs – has antimicrobial copper handrails and counter tops, and sections of the Chilean Metro also have antimicrobial copper handrails, with the plan being to gradually extend them around the network. For hospitals, healthcare facilities


Table 1. Touch surfaces identified in clinical trials as carrying the greatest bioburden:
• Bed rails
• Door knobs
• Sinks
• Dispensers
• Over-bed tables
• Door push plates
• IV poles
• Visitor chairs
• Work surfaces
• Grab bars
• Patient chairs
• Computer input devices
• Linen hampers
• Light switches & sockets
• Bedside tables
• Call buttons & pull cords

and other areas in which infection prevention is a concern, installing antimicrobial copper touch surfaces is a straightforward process. The highest-risk surfaces have been identified by clinical trial teams around the world based on their experience, and confirmed by sampling and subsequent testing. Table 1 lists the items identified in clinical trials as carrying the greatest bioburden, and thus the focus for those looking to implement antimicrobial copper. The rapid antimicrobial efficacy of copper, demonstrated under typical indoor conditions of humidity and temperature, has highlighted the inadequacy of current test standards for antimicrobial materials, which are conducted at greater than 90% relative humidity and 35°C. Standards bodies are now working towards more appropriate test methods which will support manufacturers’ claims for their hard-surface products. Copper’s demonstrated efficacy has been a major factor in component manufacturers switching to copper to differentiate their products, and a growing number are offering antimicrobial copper touch surfaces in their healthcare ranges. As awareness of the fundamental research into the efficacy of copper and its alloys rises, other sectors are also looking at how to harness this inherent property to control problematic microorganisms with a whole range of durable, cost-effective and versatile copper alloys. In the war against microbes, copper is the clear winner.
References
1. Schmidt, M. G. BMC Proceedings 2011, 5 (Suppl 6):053 (Oral presentation delivered at the 1st International Conference on Prevention and Infection Control, June 29 – July 2, 2011, Geneva,

Switzerland). Further information: media/149124/pub-208-reducing-the-risk-of-hcais-aug-2012-web.pdf
2. www.antimicrobialcopper.com/media/149621/aha-health-forum-copper-reduces-infection-risk-2011.pdf – Risk Mitigation of Hospital Acquired Infections Through the Use of Antimicrobial Copper Surfaces, Moran, W. R., Attaway, H. H., Schmidt, M. G., John, J. F., Salgado, C. D., Sepkowitz, K. A., Cantey, R. J., Steed, L. L., Michels, H. T. Poster presented at the American Hospital Association and Health Forum Leadership Summit 2011, July 17-19, 2011, San Diego, CA.
3. www.antimicrobialcopper.com/media/69841/shea-poster-us-results.pdf – A Pilot Study to Determine the Effectiveness of Copper in Reducing the Microbial Burden (MB) of Objects in Rooms of Intensive Care Unit (ICU) Patients, Salgado, C. D., Morgan, A., Sepkowitz, K. A., John, J. F., Cantey, J. R., Attaway, H. H., Plaskett, T., Steed, L. L., Michels, H. T., Schmidt, M. G. Poster 183, 5th Decennial International Conference on Healthcare-Associated Infections, Atlanta, March 29, 2010.

Angela Vessey, Director of the Copper Development Association in the UK, studied Physiology (BSc) at Bedford College, University of London, and Applied Immunology (MSc) at Brunel University. She initiated the Antimicrobial Copper programme in the UK in 2005 to exploit the benefits of copper for preventing the spread of infection. Email: angela.vessey@

Autumn 2012 Volume 4 Issue 4



Clinical Research

Conducting Non-clinical Studies with Protein Biologics: Considerations in Test Article Characterisation and Method Development for Dose Formulation Analysis

Introduction
Historically, the majority of non-clinical studies conducted under Good Laboratory Practices (GLP) regulations [i] have involved synthetic small-molecule chemical entities. As a result, general GLP practices at testing facilities, including those at contract research organisations (CROs), have been developed from experience working with this type of molecule. With the increase in the number of protein biologics in development, including monoclonal antibodies, the proportion of GLP studies conducted for protein test articles has correspondingly increased. Current GLP practices may therefore need to be adjusted to accommodate the physicochemical characteristics of protein test articles, such as a different interpretation of test article characterisation requirements. In terms of method development for non-clinical dose formulation analysis, analytical techniques specific to protein biologics will need to be used [ii]. This article describes the current best practices applied in the author’s organisation for test article characterisation and analytical method development of protein biologics for non-clinical dose formulations. For the purposes of this article, a protein biologics test article refers to a protein formulated in a buffer to be used for preparing dosing formulations for non-clinical studies.

Test Article Characterisation
According to Section 58.105(a) of the GLP regulations, “The identity, strength, purity, and composition or other characteristics which will appropriately define the test or control article, shall be determined for each batch and shall be documented.” [i] For non-clinical testing at a CRO, the sponsor must provide this information in the form of a certificate of analysis, or a statement of testing for the particular lot of test article to be used in non-clinical testing. The certificate of analysis must be maintained as part of the GLP study records. A sample certificate of analysis for a monoclonal antibody protein test article is shown in Figure 1. This example certificate of analysis illustrates common characterisation information generally supplied by the sponsor for a test article. Depending on the extent of characterisation work that has already been conducted, there may be additional tests and results listed beyond the examples provided. In Figure 1, the test article is uniquely identified as MAB-4321. The method used for its identification is a non-reduced SDS-PAGE method. The methods of non-reduced and reduced SDS-PAGE, size-exclusion chromatography (SEC-HPLC), and capillary isoelectric focusing (cIEF) are used to determine the purity of the bulk test article. A UV (A280) method is used for the strength, or concentration, measurement of the protein in the bulk material [ii]. As a protein test article is generally supplied formulated, the formulation buffer, listing the concentrations of the various components in the buffer, is also given. For this example, therefore, the certificate of analysis provided demonstrates how the GLP requirements of identity, strength, purity, and composition can be met for the test article.

Section 58.105(b) of the GLP regulations also states, “The stability of each test or control article shall be determined by the testing facility or by the Sponsor either (1) before study initiation, or (2) concomitantly according to written standard operating procedures, which provide for periodic analysis of each batch.” [i] In the example shown in Figure 1, the bulk test article is to be stored at -50°C to -90°C, with a re-test date of one year from manufacture, indicating the stability of the material at this storage condition for a period of one year. Additionally, a statement from the sponsor providing information such as refrigerated stability and freeze/thaw stability can constitute part of the stability requirement, in addition to assisting in the planning and conduct of the non-clinical study.
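The certificate-of-analysis content discussed above lends itself to a simple structured record. The sketch below is illustrative only: the field names are not a regulatory or vendor schema, and apart from the MAB-4321 identifiers and method names, the values (buffer composition in particular) are hypothetical.

```python
# Minimal structured record of certificate-of-analysis fields
# (identity, strength, purity, composition, storage/stability).
# Field names are illustrative, not any regulatory schema.
from dataclasses import dataclass

@dataclass
class CertificateOfAnalysis:
    test_article_id: str       # unique identity, e.g. "MAB-4321"
    identity_method: str       # e.g. non-reduced SDS-PAGE
    purity_methods: tuple      # methods used to assess purity
    strength_method: str       # concentration method, e.g. UV (A280)
    formulation_buffer: dict   # composition: component -> concentration
    storage_condition: str     # supports the stability requirement
    retest_date: str

coa = CertificateOfAnalysis(
    test_article_id="MAB-4321",
    identity_method="non-reduced SDS-PAGE",
    purity_methods=("SDS-PAGE", "SEC-HPLC", "cIEF"),
    strength_method="UV (A280)",
    formulation_buffer={"histidine (mM)": 20, "sucrose (%)": 8},  # hypothetical
    storage_condition="-50 C to -90 C",
    retest_date="one year from manufacture",
)
print(coa.test_article_id)  # MAB-4321
```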
For all the information provided to the CRO regarding test article characterisation and stability, it is the sponsor’s responsibility to maintain relevant documents and results to support the information provided.

Analytical Method Development for Dose Formulation Analysis of Proteins
The test article will be formulated in an appropriate vehicle to be used for dosing in non-clinical studies. According to Section 58.113(a) of the GLP regulations [i], for each test or control article that is mixed with a carrier, tests by appropriate analytical methods shall be conducted:
• To determine the uniformity of the mixture, and to determine, periodically, the concentration of the test or control article in the mixture
• To determine the stability of the test and control articles in the mixture as required by the conditions of the study either (i) before study initiation, or (ii) concomitantly according to written standard operating procedures, which provide for periodic analysis of the test and control articles in the mixture
To meet these GLP requirements, suitable analytical methods must be developed and validated to determine the concentration, homogeneity, and stability of the test article in the vehicle. As for any analytical method development, the intended use of the method must be considered. For non-clinical dose formulation analysis, this includes the following considerations:
• The protein analyte and its physicochemical characteristics
• The vehicle for the dose formulation
• The protein dose concentration range to be measured
• The appropriate analytical methods that can address all of the above
Additionally, the definition of “stability” for non-clinical dose formulations differs from that usually adopted for pharmaceutical sciences-type applications. For a protein biologics dose formulation, “stability” refers to the preservation of total protein content after storage at a specified stability condition. Generally it does not refer to the measurement of chemical stability [iii] or biological activity. Therefore, the

analytical method to be developed must be suitable and adequate for total protein concentration measurement, but it does not have to address the measurement of chemical stability or biological activity. If the chemical stability and biological activity of the protein in the dose formulation need to be examined, it is suggested that these be evaluated separately from the method development related to total protein concentration determination in dose formulations. The following case studies illustrate the approaches to selecting and developing analytical methods for total protein concentration determination in non-clinical dose formulations.

Case Study 1: A Monoclonal Antibody
The protein is a monoclonal antibody. The vehicle is a phosphate buffer containing sodium chloride, sucrose, and Tween 80. The dose concentrations for the non-clinical study range from 1–10 mg/mL. A UV method is available for concentration measurement, using an extinction coefficient of 1.41 mL/(mg·cm) at 280 nm. For an absorbance range of 0.141 to 1.000, a linearity range from 0.100 to 0.709 mg/mL can be calculated from the extinction coefficient. For dose concentrations of 1–10 mg/mL, analysis can therefore be performed by diluting the dose formulations to within the linearity range for measurement. For this particular example, a simple UV method is suitable for measuring concentrations from 1–10 mg/mL of the protein analyte in the vehicle.

Case Study 2: A Monoclonal Antibody
The protein is a monoclonal antibody. The vehicle is a histidine buffer containing NaCl. The dose concentration to be used in the non-clinical study is 0.01 mg/mL. As in Case Study 1, a UV method is available and the extinction coefficient is 1.41 mL/(mg·cm) at 280 nm. However, the dose concentration of 0.01 mg/mL is outside the calculated linearity range of 0.100 to 0.709 mg/mL for the UV method.
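The linearity ranges quoted in these case studies follow directly from the Beer-Lambert law, A = ε·c·l. A minimal sketch, assuming a 1 cm path length (the helper name and default absorbance window are illustrative, taken from the figures in the case studies):

```python
# Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).
# epsilon in mL/(mg*cm), path length in cm, concentration in mg/mL.
# Values are those given in the case studies; helper names are illustrative.

def linearity_range(epsilon, a_min=0.141, a_max=1.000, path_cm=1.0):
    """Concentration window (mg/mL) for a usable absorbance window."""
    return a_min / (epsilon * path_cm), a_max / (epsilon * path_cm)

lo, hi = linearity_range(1.41)
print(round(lo, 3), round(hi, 3))  # 0.1 0.709

# A 10 mg/mL dose formulation is diluted into this window before
# measurement, e.g. a 20-fold dilution gives 0.5 mg/mL:
print(lo <= 10.0 / 20 <= hi)       # True

# The 0.01 mg/mL dose in Case Study 2 sits below the window, which is
# why the UV method is unsuitable there:
print(lo <= 0.01 <= hi)            # False
```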
Therefore, a UV method would not be a suitable method for total protein concentration



measurement. Instead, a size-exclusion HPLC (SEC-HPLC) method would be more appropriate. Detection can be conducted for the protein at a wavelength of 215 nm, due primarily to the contribution from the peptide bonds. Size-exclusion separation resolves the protein analyte from components of the buffer that could potentially interfere at 215 nm. As a protein’s absorption at 215 nm is greater than at 280 nm, a lower quantitation limit can be achieved. Based on method development experience in the author’s laboratory, an SEC-HPLC method with detection at 215 nm for a monoclonal antibody in a similar vehicle can have a linearity range of 0.0025 to 0.02 mg/mL. For this case study, an SEC-HPLC method would be a suitable method for measuring the dose concentration of 0.01 mg/mL.

Case Study 3: A Recombinant Protein
The test article is a non-glycosylated recombinant protein. The vehicle is saline, and the lowest dose concentration for the non-clinical study is 0.02 mg/mL. A UV method is available and the extinction coefficient is 1.4 mL/(mg·cm) at 280 nm. For an

absorbance range of 0.140 to 1.000, a linearity range from 0.100 to 0.714 mg/mL can be calculated from the extinction coefficient. As the intended dose concentration of 0.02 mg/mL is outside the calculated linearity range of the UV method, it would not be a suitable method for total protein concentration measurement. Instead, a reversed-phase HPLC (RP-HPLC) method is more suitable. The absence of glycosylation in the protein allows the main peak to be used for quantification, provided

good resolution can be achieved. As described in Case Study 2, detection can be performed at 215 nm and a lower quantitation limit can be obtained. Based on method development experience in the author’s laboratory, an RP-HPLC method with detection at 215 nm for a recombinant protein of similar size in the same vehicle can have a linearity range of 0.005 to 0.015 mg/mL. For this case study, an RP-HPLC method would be a suitable method for measuring the dose concentration of 0.02 mg/mL. A sample chromatogram illustrating the measurement of a similar recombinant protein diluted to a concentration of 0.01 mg/mL in a similar vehicle is shown in Figure 2.

Case Study 4: A Mixture of Proteins
The test article is a mixture of proteins derived from a cell extract. The vehicle is saline. The dose concentrations for the non-clinical study range from 0.2 to 1 mg/mL, based on the total protein content in the cell extract. According to the certificate of analysis, the total protein content of the test article is determined by a Kjeldahl method [iv], which can measure the total protein content in a sample without the need for individual protein standards. Using the Kjeldahl concentration, calibration standards can be prepared and methods such as the Bradford, bicinchoninic acid (BCA), and Lowry assays [ii] can be developed for total protein content measurement. Dose formulation analysis would be conducted by dilution of the sample



to within the linearity range of the method. In this case, an HPLC method would not be the most suitable approach, as dosing is based on the total protein content of the mixture, and it is unnecessary to determine the concentration of each individual protein.

Conclusion
When conducting non-clinical studies using a protein biologics test article, a good understanding of the protein’s unique physicochemical characteristics, its characterisation as a test article, and the use of appropriate methodologies for dose formulation analysis all contribute to ensuring the study meets GLP regulatory requirements, so that a well-executed non-clinical study can be included as part of a regulatory submission.

References
i. 21 CFR Part 58, Good Laboratory Practices for Nonclinical Laboratory Studies.
ii. United States Pharmacopeia 35 <1045> Biotechnology-Derived Articles.
iii. Manning, M.C., Patel, K., Borchardt, R.T., Pharm. Res., 1989, 6(11), 903-918.
iv. Miller, L., Houghton, J.A., J. Biol. Chem., 1945, 159, 373-383.

Karina Kwok, PhD, joined MPI Research in July 2010 and now serves as Associate Principal Scientist in the analytical group, where she is responsible for the technical development of analytical methods to quantify chemical entities, including proteins, peptides, and small molecules, in non-clinical dose formulations. She also serves as a Study Director/Principal Investigator, managing the overall plan and conduct of non-clinical dose formulation analysis for GLP studies. Before joining MPI Research, Dr Kwok was a research scientist at Pfizer in bioprocess development, where she was responsible for the method development associated with the analysis and characterisation of protein and peptide drug substances and impurities. Before Pfizer, Dr Kwok was a research scientist at the Procter & Gamble Company, where she was responsible for both analytical and bioanalytical method development in the OTC Health Care area. Dr Kwok received her doctorate in chemistry from the University of Kansas and is an active member of professional organisations, including the American Association of Pharmaceutical Scientists (AAPS). Email:


Clinical Research

Cytomics: Managing Biocomplexity in Drug Development, Clinical Diagnostics, and Clinical Medicine

Abstract
Dealing with the overwhelming volume and complexity of data is one of the major challenges in translating our increasingly sophisticated knowledge of biology into useful information for clinical medicine. Biocomplexity in organisms arises from a combination of the diversity of genotypes among individuals and the variable exposure histories to environmental influences throughout life. Conventional reductionistic science cannot explain the behaviour of complex biological systems made up of networks that have scale-free, or clustered, architecture, and that manifest emergent properties. Yet diseases are a consequence of aberrant activity of a subcomponent, or module, of a biological network. Thus, a thorough understanding of most diseases requires a more integrated, holistic approach to capture the complex networks established by the interacting components, reflecting genetic and exposure influences. Cytomics, the science of analysis at the cellular level, objectively accounts for functional phenotypes in the context of the entire organism. Furthermore, the cytomics top-down approach to data analysis does not depend on prior knowledge of disease mechanisms, thus significantly simplifying the exploration of organismal biocomplexity and shortening the path to applications in drug development, clinical diagnostics, and clinical medicine.

Introduction
This paper provides an introduction to cytomics and its applications through a review of the literature, focusing on genetics, genomics, and other ‘omics’. It also explains certain terminology at the beginning of the paper so that readers who are new to this area can benefit from the paper without having to consult other resources. A glossary is provided as Table 1.

A Primer of the ‘Omics’
It is now 60 years since Watson and Crick proposed the structure of deoxyribonucleic acid (DNA), and others from the Cavendish laboratories at the University of Cambridge published complementary papers on this topic [1-3]. The last sentence of Watson and Crick’s paper (composed by Crick) is one of the most beautifully understated scientific comments of all time: “It has not escaped our notice that the specific [base] pairing we have postulated immediately suggests a possible copying mechanism for the genetic

material.” [1] (See the entry “Bases” in Table 1.) There is nothing understated, however, about the veracity of their postulate and the current explosion of biological information that stands on its shoulders, particularly information made publicly available from the Human Genome Project [4,5]. Genetics is the science that examines how traits are passed from one generation to the next. Various subfields can be identified [6]: transmission, or Mendelian, genetics, a term that has come to embody the definition in the previous sentence; molecular genetics, which focuses on

the physicochemical structure of DNA, ribonucleic acid (RNA), and proteins; population genetics, which examines the genetic composition of large groups of individuals; and quantitative genetics, which employs sophisticated mathematical and statistical models to examine statistical relationships between genes and the traits they encode. The mathematics of transmission genetics was first described by Mendel in 1866, and the field of genetics therefore has a long history. In contrast, while the word genome first appeared relatively early in the twentieth century, the emergence of genomics as a new form of experimental biology is a relatively recent phenomenon. While genomics is defined slightly differently by various authorities, a useful definition was provided by Brown [7], who defined it as the use of high-throughput molecular biology techniques to study large numbers of genes and gene products all at once in whole cells, whole tissues, or whole organisms. Basic biological information, the sequence of approximately 3 billion base pairs in a DNA molecule, is itself complex, and bioinformatics is a useful discipline in this context. Going one step further, integration of all of this basic information addresses questions about what is happening in extremely complex systems where tens of thousands of different genes are interacting simultaneously. An understanding of the genome, the entirety of an organism’s genetic information, and genomic technologies builds upon knowledge of transmission genetics and molecular biology, the study of how genes function to control biochemical processes within the cell [8]. Genomics is thus the first ‘omics’ discussed in this paper. Proteomics and transcriptomics are also mentioned in due course.

The field of proteomics involves the systematic analysis of proteins to determine their identity, quantity, and function [9]. Until relatively recently, the study of proteins focused on individual proteins using various established techniques such as gel electrophoresis and chromatography. The advent of high-throughput automated technologies
The field of proteomics involves the systematic analysis of proteins to determine their identity, quantity, and function.9 Until relatively recently, the study of proteins focused on individual proteins using various established techniques such as gel electrophoresis and chromatography. The advent of high-throughput automated technologies is now facilitating a move toward simultaneous analysis of all the proteins in a defined protein population.10

The human genome comprises approximately 25,000 genes, a number markedly lower than the estimate of 100,000 that was felt to be 'authoritative' less than 20 years ago. While genes control the production of proteins, there is not a strict one-to-one relationship, and the number of human proteins is considerably larger than the number of genes. This phenomenon is the result of "the simple although not widely appreciated fact that multiple, distinct proteins can result from one gene."11 Each gene codes for an average of three proteins. The proteome comprises the totality of proteins. The journey from genome to proteome is not a straightforward one. Holmes et al.11 represented the journey as a multistep process, starting with a gene of interest:

• DNA replication results in many gene forms;
• RNA transcription leads to pre-messenger RNA;
• RNA maturation results in mature messenger RNA;
• Protein translation results in an immature protein;
• Protein maturation results in a mature protein in the proteome.

The terms transcription and translation need to be defined here. Transcription refers to the process by which messenger RNA is synthesised from a DNA template, which results in the transfer of genetic information from the DNA molecule to messenger RNA. Bryson12 observed that "It is a notable oddity of biology that DNA and proteins don't speak the same language," even though DNA codes for proteins. For years after Watson and Crick's postulation of the structure of DNA, this apparent contradiction led to puzzlement. Watson13 later addressed the quandary as follows:

The prevailing assumption that the original life-form consisted of a DNA molecule posed an inescapable contradiction: DNA cannot assemble itself; it requires proteins to do so. Which came first? Proteins, which have no known means of duplicating information, or DNA, which can duplicate information but only in the presence of proteins? The problem was insoluble: you cannot, we thought, have DNA without proteins, and you cannot have proteins without DNA.

It turns out that the 'missing link' and answer to this riddle is provided by RNA. RNA is a DNA equivalent: it can store and replicate genetic information. Moreover, RNA is also a protein equivalent: it can catalyse critical chemical reactions. Thus, RNA is able to translate the genetic information encoded in human DNA into information that proteins can understand. Given RNA's central role in this process, the set of all RNA molecules produced in a cell is called the transcriptome.

Translation refers to the creation of proteins from individual building blocks called amino acids. Very long strings of amino acids are assembled using messenger RNA as the construction template. It should be noted, however, that a protein is much more than the linear chain of amino acids that comprises it. Wishart14 observed that proteins are perhaps the most complex chemical entities on the planet, noting that "No other class of molecule exhibits the variety and irregularity in shape, size, texture, and mobility that can be found in proteins." The linear chain of amino acids goes through a complex process to fold into a three-dimensional entity, at which time it becomes biologically active.

Proteins fulfil various functions, including two of immediate relevance here. First, they are often drug receptors, the biological structure with which a drug interacts in order to achieve its therapeutic goal (if the drug reacts with an off-target receptor, adverse effects can potentially occur). Secondly, enzymes are proteins. Enzymes catalyse many biological functions, including the degradation of drug molecules, which to the body are foreign substances to be eliminated in various metabolic processes.
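The transcription and translation steps just described can be sketched in code. The following toy example is illustrative only: it uses a four-entry excerpt of the 64-codon standard genetic code, and the DNA sequence is hypothetical. It shows the two-step flow of information from a DNA template strand to messenger RNA and then, codon by codon, to a chain of amino acids:

```python
# Toy sketch of transcription and translation (illustrative only).
# A tiny excerpt of the standard codon table; the real table has 64 entries.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def transcribe(dna_template):
    """Transcription: synthesise mRNA complementary to the DNA template strand."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in dna_template)

def translate(mrna):
    """Translation: read the mRNA three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

template = "TACAAACCGATT"    # hypothetical DNA template strand
mrna = transcribe(template)  # -> "AUGUUUGGCUAA"
print(mrna, translate(mrna)) # -> AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```

Real sequence analysis would use a full codon table and a dedicated library such as Biopython; the point here is only the direction of information flow from DNA to mRNA to protein.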
The totality of the small-molecule metabolites processed by an organism's metabolic enzymes is referred to as the metabolome, a term that in turn gives rise to metabolomics, the study of the metabolome.


Information Integration is Key
The creation of individual databases is a useful starting point, but it is now well recognised that the complexity of modelling biological pathways requires that individual databases be integrated. In 2005, Bader and Enright15 asked an important question: "What would we want to know from an ideal cell biological experiment?" They then provided their answer:

The answer is no less than everything: what molecules are in the cell at what time and at what place, how many molecules are there, what molecules they interact with, and the specifics of their interaction dynamics. Ideally, one would want this information not only over the course of the cell cycle, but also in all important environmental conditions and under all known disease states.

Much scientific progress has been made since Bader and Enright's question was posed, and the rest of this paper discusses one field of investigation, cytomics, which can be described as the study of single-cell phenotypes resulting from genotypes and exposures, in combination with exhaustive bioinformatic knowledge extraction from the analysis results.

Setting the Scene for Discussions of Cytomics
Dealing with the overwhelming volume and complexity of data is one of the major challenges in translating our increasingly sophisticated knowledge of biology into useful information for clinical medicine.16 Biocomplexity in organisms arises from a combination of the diversity of genotypes among individuals and the variable exposure histories to environmental influences throughout life, as well as from significant cellular heterogeneity according to cell cycle, functional status, size, and molecular content. Conventional reductionist science cannot explain the behaviour of complex biological systems made up of networks that have scale-free, or clustered, architecture and that manifest emergent properties.
It is more than challenging to predict the association and functionality of biomolecules in viable cells from the vast numbers of coding gene sequences, along with the increasing number of sense-antisense transcription units and non-coding RNAs. Yet diseases are a consequence of aberrant activity (over- or under-activity) of a subcomponent, or module, of a biological network, and therefore in most cases one has to look at them through a complex biological network lens to fully understand disease processes. Unlike a handful of single-mutation genetic disorders, most common diseases, heart disease and stroke included, do not result from a single point mutation, or even a combination of them; such diseases also involve one's own lifestyle and environmental exposures.17 Thus, a thorough understanding of such diseases requires a more integrated, holistic approach to capture the complex networks established by the interacting components, reflecting genetic and exposure (internal and external) influences.

Cytomics
Cytomics is the study of single-cell phenotypes resulting from genotypes and exposures, in combination with exhaustive bioinformatic knowledge extraction from analysis results. Over the past several decades, a number of techniques, such as genomic approaches and proteomics, have attempted to capture biocomplexity by collecting large data sets enumerating, for example, the entire genome of an organism. Unfortunately, the analysis of such data is limited in its utility because it fails to account for the molecular integration of genes or proteins into functional units at the whole-organism level. It seems that few diseases have strong enough genetic components to make genome sequencing a solid way to assess individual risk.18

Biologists agree that the smallest physiologically functional unit in the body is the cell. Cells constitute the elementary building units of cell systems, organs, and living organisms. The organisational hierarchy in cell biology begins with the properties of individual molecules (e.g., amino acids, nucleotides) and proceeds via large molecules (e.g., DNA, RNA) to the whole cell. The cell then interacts with other cells and the environment (via juxtacrine, paracrine, or endocrine signalling) and is thus exposed to factors that may modulate its behaviour or makeup in a unique way, leading to the observed functional and structural heterogeneity in cell systems, or cytomes.16 The functional heterogeneity of cell systems results from both the genome and environmental influences.

Current approaches to understanding the functional diversity of an organism preferentially strive for a systems approach whereby the phenotypic classification of a specific cellular system is achieved first, prior to an attempt to perform genomic or proteomic analysis.19 Instead of concentrating on molecular targets within the effectively infinite network of molecular pathways of cells, one can focus on the end result: the molecular phenotype of cells as a consequence of both genotype and environmental influences. It is at this point that cytomics becomes relevant and important. Cytomics aims to determine the molecular phenotype of single cells and allows the investigation of multiple biochemical features of heterogeneous cytomes. It links the dynamics of cell and tissue phenotype and function, as modulated by external influences, with genomics and proteomics. Ultimately, cytomics interrogates biocomplexity on a much higher level than any other high-throughput technology (e.g., genomics, proteomics, transcriptomics) and is thus much better suited to providing useful clinical data reflecting the physiological state of an organism as a whole.

The Advantages of Cytomics
Diseases are caused by modifications of molecular processes in cells or cytomes as consequences of genotype and exposure to internal and external influences.
Given the high biocomplexity of mammals, it remains doubtful whether typical human disease processes, such as infections, cardiovascular disease, diabetes, and malignancies, can be efficiently explored in a classical way (examining molecular pathways) within reasonable timeframes to provide practical benefits for individual patients.20 Disease is enormously complex, and considerable conceptual difficulty currently exists in understanding the corroborative action of the thousands of genes of the genome by bottom-up analysis from the genome level via the proteome and the metabolome up to the level of cells, cytomes, organs, and organisms. Cell phenotype changes may, in a substantial number of instances, be more closely linked to the actual disease process in individual patients, and to its future development, than either genomic status or environmental influence alone.21

Most current high-throughput, high-content, data-generating approaches in biology involve a bottom-up approach to analysis. One may postulate that an individual genetic aberration results in a certain disease condition, but in complex diseases there is no way to make such a prediction with any reasonable certainty. One can also argue that patterns of gene expression reflect a disease state, but that mode of reasoning does not take into account the complex relationships among proteins, organelles, and cells, which ultimately result in the cellular phenotypes that drive the behaviour of tissues and organs. In contrast, the top-down approach to data analysis offered by cytomics represents an efficient and simplifying alternative for systematically exploring the biocomplexity of human organisms (see Table 2).22 Cytomics collects data at the cellular/tissue level, which accounts for the functional and physiological characteristics of the organism and is therefore more closely related to explaining a disease state than any collection of data at the molecular level.
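The contrast drawn here between molecular-level averages and cell-level phenotypes can be made concrete with a small numerical sketch (the marker values below are synthetic and purely illustrative): a sample containing two distinct cell subpopulations has a bulk average that describes neither of them.

```python
# Synthetic single-cell measurements of one marker (arbitrary units).
# Two distinct subpopulations: a "low" state and a "high" state.
low_cells = [1.0, 1.2, 0.9, 1.1, 1.0]
high_cells = [9.0, 9.3, 8.8, 9.1, 8.9]
sample = low_cells + high_cells

# A bulk (averaged) assay reports one number for the whole sample...
bulk_mean = sum(sample) / len(sample)
print(f"bulk mean = {bulk_mean:.2f}")  # ~5.0: matches no actual cell

# ...whereas a single-cell view recovers the two phenotypes.
threshold = 5.0
subpop_low = [x for x in sample if x < threshold]
subpop_high = [x for x in sample if x >= threshold]
print(len(subpop_low), len(subpop_high))  # 5 cells in each subpopulation
```

The bulk mean of about 5.0 corresponds to no cell actually present in the sample, which is the averaging effect the text describes for population-level genomic or proteomic methods.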

The essential characteristic of the top-down approach is the use of molecular data patterns instead of molecular pathways. Cytomics approaches, with the evaluation of all collected cell data in a data-driven fashion, enable the hypothesis-free exploration of novel data and knowledge spaces for discriminatory parameters.23 The genomic information serves as an inventory of the biomolecular capacity of organisms. More importantly, cytomics offers the unique ability to look at single cells and report a true sum of cellular phenotypes, and a distinction of the various phenotypes present in the sample. Cytomics approaches are based on evaluating all cellular units individually, as compared with the “population”, or averaged, approach of most genomic or proteomic methods. The ability of cytomics to interrogate single cells should not be confused with single-cell analysis methods such as single-cell genomics, single-unit recordings, microfluidic methods for analysis of single-cell contents, or single-cell oxygen consumption monitoring, nor with the averaging effect of many data-intensive methods, e.g., genomics, which looks at average data based on a mixture of individual molecules. In this respect, the sensitivity of cytomic data becomes much higher than that of other data-intensive approaches. The main advantages of the cytomics approach are listed in Table 3.

Cytomics Technologies
Many years of technology development have produced the critical elements needed to optimally analyse single cells:
1. Techniques and instruments are very sensitive and can reliably detect very small targets;
2. A wide range of labelling reagents, biochemical probes, and antibodies has been developed that allows very specific identification and analysis of functional and molecular components in single cells;
3. Sophisticated software tools have been developed to analyse the large amounts of generated data.

Cytomic analysis comprises four main steps: sample collection, sample analysis, data analysis, and interpretation of results. Among cytomic analytical technologies, flow cytometry is of great relevance since it is well accepted in the clinical environment.

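As a sketch of how multiparametric single-cell measurements might be grouped into populations during the data-analysis step, the following example clusters synthetic two-channel readings with a minimal hand-rolled k-means. The data, channel values, and cluster count are all assumptions for illustration; real cytometry workflows use dedicated gating and analysis software.

```python
import random

random.seed(0)

# Synthetic two-channel fluorescence readings: two cell populations,
# one centred near (2, 2) and one near (8, 8) (arbitrary units).
cells = [(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(50)] \
      + [(random.gauss(8, 0.3), random.gauss(8, 0.3)) for _ in range(50)]

def kmeans(points, centroids, iterations=10):
    """Minimal k-means: assign each cell to its nearest centroid,
    then recompute each centroid as the mean of its assigned cells."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in c) / len(c),
                      sum(p[1] for p in c) / len(c)) for c in clusters]
    return centroids, clusters

# Seed with one cell from each end of the sample as starting centroids.
centroids, clusters = kmeans(cells, [cells[0], cells[-1]])
print([len(c) for c in clusters])  # [50, 50]: the two populations are recovered
```

The well-separated populations here make the problem easy by design; in practice, six- or eight-colour data demand far more sophisticated knowledge-extraction methods, as the text goes on to note.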

Sample Analysis by Flow Cytometry
Flow cytometry is a technology that has made a significant impact in cell biology and clinical medicine, and significant advancements have been made in the last few decades.25 It is a powerful and versatile tool that allows quantitative analysis of single cells. Cells suspended in a fluid flow one at a time through a focus of exciting light, which is scattered in patterns characteristic of the cells and their components. Cells are often labelled with fluorescent markers so that light is first absorbed and then emitted at altered frequencies. A sensor detecting the scattered or emitted light measures the size and molecular characteristics of individual cells. Tens of thousands of cells can be examined per minute, and the data gathered are processed by computer. Because flow cytometers analyse single cells/particles, it is possible to separate cells/particles into clusters based upon any of the variables that can be measured on the flow cytometer for each cell/particle. Flow cytometry is an ideal platform for cytomics analysis, and its key advantages are presented in Table 4. A critical advantage of flow cytometry is its capability for high-dimensional, high-content, high-throughput analyses. It enables the separation of very complex mixtures of cells/particles. Multiparametric data from flow cytometry serve as the input for data analysis procedures.

Data Analysis (Data Mining, Sieving)
The dimensionality of measured molecular cell data can be substantial, especially when staining protocols of as many as six or eight colours are performed on many different cell populations. Knowledge extraction after information collection represents a critical step.26

Cytomics Applications
The cytomics approach is beneficial in two main fields: clinical medicine, and new drug development for many therapeutic areas, including cardiovascular diseases, metabolic diseases, and cancer.

Clinical Medicine
There are two main applications in clinical medicine. The first is personalised medicine and companion diagnostics. In a non-personalised approach, drugs can be effective in fewer than 60% of treated patients.27 In a personalised approach, therapies can be chosen, adapted, and modified by the clinician on objective grounds according to the specific response of the patient, thus lowering the risk of therapeutic failure and side-effects while reducing therapy costs. The second is diagnostics and preventive medicine. Cytomics is used for clinical risk assessment in asymptomatic individuals, risk stratification in symptomatic patients, and to provide early therapeutic alternatives based on objective criteria.

Consider the following examples of companion diagnostics. The antiretroviral drug abacavir is used against infection with the human immunodeficiency virus (HIV). Approximately 6% of individuals carry the HLA-B*5701 allele, a genetic variant that is strongly associated with hypersensitivity to abacavir. This hypersensitivity is a multi-organ systemic illness that can have life-threatening complications if the drug is continued while symptoms progress, or if it is given again following termination of treatment once the symptoms have dissipated (re-challenge). Screening potential recipients of the drug for the presence of the HLA-B*5701 allele has proved to be a successful strategy in reducing hypersensitivity reactions, while also allowing the large majority of patients to take the drug without fear of a serious adverse drug reaction.28,29

Crizotinib was approved by the FDA in 2011 to treat certain patients with late-stage (locally advanced or metastatic) non-small cell lung cancers that express the abnormal variant anaplastic lymphoma kinase (ALK) gene. Of interest is that it was approved with a companion diagnostic test that helps determine whether a patient for whom the drug is being considered has the abnormal variant ALK gene. Vemurafenib, also approved by the FDA in 2011 and indicated for melanoma, can only be prescribed for patients with a certain abnormal variant of the BRAF gene, BRAF V600E, as identified by an FDA-approved test.

These cases provide examples of the power of companion diagnostics from both a safety and an efficacy standpoint. Testing patients for the presence of the HLA-B*5701 allele before prescribing abacavir means that those who are at risk of a serious side-effect will not be prescribed the drug, whereas the majority of patients not at risk can be prescribed the drug and experience its therapeutic benefit. With regard to efficacy, since oncologic drugs often carry the risk of cytotoxicity as well as providing therapeutic benefit, prescribing such drugs for patients for whom it can be demonstrated that therapeutic benefit is not possible presents an unacceptable benefit-risk balance. In contrast, prescribing a drug to patients who will receive powerful therapeutic benefit likely presents a positive benefit-risk balance, with the benefits outweighing the risks.
Drug Development
Pharmaceutical companies and their research and development (R&D) process face a very challenging situation. In the past decade, there has been a strong decline in pharmaceutical industry productivity, with the rate of new molecular entity (NME) approvals falling while the cost of developing a drug has kept rising.30,31 It takes 10 to 15 years to bring a drug to market, at a cost of over $1 billion.32,33 This has created an unsustainable situation for the research-based industry. Traditional target-based drug development oversimplifies both the complex mechanisms of chronic illnesses and the complex perturbations in these disease mechanisms brought about by pharmacological agents.34

Cytomics can be beneficial in several ways. First, it facilitates research into new drug targets. Molecular reverse engineering of data patterns by biomedical cell systems biology can provide information on disease-inducing molecular pathways, thus favouring the detection of new target molecules for drug discovery. Second, it allows evaluation of new drug candidates' efficacy or liability effects.35,36 Drugs, in general, do not act on single targets operating in a vacuum: rather, they perturb a complex network of interacting proteins or metabolites to modify the dynamic output of a system that can extend well beyond the pathway in which the original target is operative. Third, it allows clinical trials to be designed to include subjects who are most likely to experience benefit and to exclude those who are most likely to experience adverse effects.

Concluding Remarks
There needs to be a shift in thinking in the biopharmaceutical industry and in areas of clinical medicine towards the acceptance of cytomics as the science of analysis at the cellular level, which objectively accounts for functional phenotypes in the context of the entire organism. The cytomics top-down approach to data analysis does not depend on prior knowledge of disease mechanisms, thus significantly simplifying the exploration of organismal biocomplexity and shortening the path to applications in clinical diagnostics and medicine. The acceptance that one needs to understand functional phenotypes for successful translation of science into the clinic will usher in a new medical

era based on the science of cytomics. Personalised cytomics approaches have the potential for greater, faster, and more cost-effective results for both discovery and clinical applications.

References
1. Watson JD, Crick FHC. A structure for deoxyribose nucleic acid. Nature. 1953;171:737-738.
2. Wilkins MHF, Stokes AR, Wilson HR. Molecular structure of deoxypentose nucleic acids. Nature. 1953;171:738-740.
3. Franklin R, Gosling RG. Molecular configurations in sodium thymonucleate. Nature. 1953;171:740-741.
4. Venter JC, Adams MD, Myers EW, et al. The sequence of the human genome. Science. 2001;291(5507):1304-1351.
5. Lander ES, Linton LM, Birren B, et al., International Human Genome Sequencing Consortium. Initial sequencing and analysis of the human genome. Nature. 2001;409(6822):860-921.
6. Robinson R. Genetics for dummies. Wiley, 2010.
7. Brown S. Essentials of medical genomics, 2nd Edition. Wiley-Blackwell, 2009.
8. McCarthy J, Turner JR. Genomics. In Turner JR, Gellman MD (Eds), Encyclopedia of behavioral medicine. Springer, 2013, 854-855.
9. Soloviev BR, Terrett J. Chip-based proteomics technology. In Rapley R, Harbron S (Eds), Molecular analysis and genome discovery. John Wiley & Sons, 2004.
10. Jones SD, Warren PG. Proteomics and drug discovery. In Chorghade MS (Ed), Drug discovery and development: Volume 1, Drug discovery. Wiley-Interscience, 2006, 233-271.
11. Holmes MR, Ramkissoon KR, Giddings MC. Proteomics and protein identification. In Baxevanis AD, Ouellette BFF (Eds), Bioinformatics: A practical guide to the analysis of genes and proteins, 3rd Edition. Wiley-Interscience, 2005, 445-472.
12. Bryson B. A short history of nearly everything. Black Swan, 2003.
13. Watson JD. DNA: The secret of life. Alfred A Knopf, 2004.
14. Wishart D. Protein structure and analysis. In Baxevanis AD, Ouellette BFF (Eds), Bioinformatics: A practical guide to the analysis of genes and proteins, 3rd Edition. Wiley-Interscience, 2005, 223-251.
15. Bader GD, Enright AJ. Intermolecular interactions and biological pathways. In Baxevanis AD, Ouellette BFF (Eds), Bioinformatics: A practical guide to the analysis of genes and proteins, 3rd Edition. Wiley-Interscience, 2005, 244-255.
16. Bruggeman FJ, Westerhoff HV, Boogerd FC. Biocomplexity: a pluralist research strategy is necessary for a mechanistic explanation of the “live” state. Philosophical Psychology. 2002;15:411-436.
17. Roberts NJ, Vogelstein JT, Parmigiani G, et al. The predictive capacity of personal genome sequencing. Science Translational Medicine. 2012, online publication April 2.
18. Harmon K. How useful is whole genome sequencing to predict disease? Scientific American. 2012, online publication April 2.
19. Bernast T, Gregoris G, Asem EK, Robinson JP. Integrating cytomics and proteomics. Molecular and Cellular Proteomics. 2006;5(1):2-13.
20. Valet G. Cytomics, the human cytome project and systems biology: top-down resolution of the molecular biocomplexity of organisms by single cell analysis. Cell Prolif. 2005;38:171-174.
21. Valet G. Predictive medicine by cytomics. 2012. http://www. (Accessed 10th November 2012)
22. Valet G. Predictive medicine by cytomics and the challenges of a human cytome project. Business Briefing: Future Drug Discovery. 2004, 46-51.
23. Tarnok A, Pierzchalski A, Valet G. Potential of a cytomics top-down strategy for drug discovery. Current Medicinal Chemistry. 2010;17:1719-1729.
24. Valet G. Cytomics: An entry to biomedical cell systems biology. Cytometry. 2005;63A:67-68.
25. Moore J, Yvon P. High dimensional flow cytometry comes of age. European Pharmaceutical Review. 2012;17(4):20-24.
26. Valet G, Leary JF, Tarnok A. Cytomics - New technologies: Towards a human cytome project. Cytometry. 2004;59A:167-171.
27. Aspinall MG, Hamermesh RG. Realizing the promise of personalized medicine. Harvard Business Review. October 2007.
28. Ingelman-Sundberg M. Editorial: Pharmacogenomic biomarkers for prediction of severe adverse drug reactions. New England Journal of Medicine. 2008;358:637-639.
29. Mallal S, Phillips E, Carosi G, et al., for the PREDICT-1 Study Team. HLA-B*5701 screening for hypersensitivity to abacavir. New England Journal of Medicine. 2008;568-579.
30. Kaitin K. Deconstructing the drug development process: the new face of innovation. Clin. Pharmacol. Ther. 2010;87(3):356-361.
31. Allison M. Reinventing clinical trials. Nature Biotechnology. 2012;30:41-49.
32. DiMasi JA, Grabowski HG. The cost of biopharmaceutical R&D: is biotech different? Manag. Decis. Econ. 2007;28:469-479.
33. Herper M. The truly staggering cost of inventing new drugs. Forbes. 2012, online publication February 10.
34. Loscalzo J. Personalized cardiovascular medicine and drug development. Circulation. 2012;125:638-645.
35. Crivellente F. The sooner the better. Utilising biomarkers to eliminate drug candidates with cardiotoxicity in preclinical development. Drug Discovery World. Summer 2011, 31-36.
36. Smith B, Stocum M, Verst C, Cohen O. Building value through biomarkers: the “smarter development” imperative. Drug Information Journal. 2012;46(4):397-403.

Funding
The authors report no specific funding in relation to this research. No editorial assistance was used.

Disclosure
Dr Yvon has disclosed that he is the Founder and President of BioSciences Expansion, LLC. Dr Turner has disclosed that he is a full-time employee of Quintiles Transnational.

Dr. Pascal Yvon has 25 years of global experience in the Diagnostics and Life Sciences industries, working with companies ranging from startups to large corporations. Most recently, Dr. Yvon has been providing life sciences companies with comprehensive services to introduce and develop their business in the US. He holds a Doctorate in Pharmacy from Paris University and an MBA from Rutgers University, NJ. He is a member of BioNJ, where he co-chairs the Diagnostics and Personalized Medicine Committee.

J. Rick Turner, PhD, is Senior Scientific Director, Clinical Communications. He is an author/co-author of 130 papers, an editor/co-editor of 14 books, and Editor-in-Chief of the Drug Information Journal. Email:


One Year After the United Nations Summit on Non-communicable Diseases (NCDs): New Opportunities and Challenges in the Fight Against NCDs In September of this year, the global health community proudly celebrated the one-year anniversary of the United Nations (UN) Summit on Non-communicable Diseases (NCDs), held 19-20 September 2011, and the adoption of the UN Political Declaration on NCDs. For the first time in history, NCDs were formally recognised as a global health and development imperative that requires a multisectoral response involving all of government, as well as civil society, academia and parts of the private sector. This past year has seen tremendous progress on NCDs at the global level, most notably with the adoption of a global target to reduce premature deaths from NCDs including cancer, diabetes, cardiovascular and respiratory disease by 25% by 2025 1. The World Health Organization (WHO) is also leading a series of consultations that are vitally important to global action on NCDs. These consultations will result in: 1) A Global Monitoring Framework, with indicators, and a set of voluntary global targets for the prevention and control of NCDs; 2) A Global Action Plan for the Prevention and Control of NCDs 2013–2020; 3) Options for strengthening and facilitating multisectoral action for the prevention and control of NCDs through partnerships. Measuring Progress in the Fight Against NCDs Of the 57 million global deaths in 2008, 36 million, or 63%, were due to NCDs, mainly cardiovascular diseases, diabetes, cancers, and 54 INTERNATIONAL PHARMACEUTICAL INDUSTRY

chronic respiratory diseases. This proportion is projected to rise by 15% globally between 2010 and 2020, with low- and middle-income countries set to suffer a disproportionate burden2. In 2011, addressing health ministers from around the world, Dr Margaret Chan, Director General of the WHO, famously stated, “what gets measured gets done”, reminding the international community of the critical need to develop a solid monitoring framework for NCDs against which countries could measure and report progress on efforts to reduce the growing epidemic of NCDs. On 7th November 2012, following a year-long consultation process led by the WHO, UN member states agreed the first ever comprehensive Global Monitoring Framework for the Prevention and Control of NCDs, including a set of voluntary global targets and indicators. The approved set of nine global targets and 25 indicators is a milestone achievement and sends the strong message that all countries are committed to achieving their ambition to reduce premature deaths from NCDs by 25% by 2025. Assuring Balance for NCDs The Union for International Cancer Control (UICC) and its partners in the NCD Alliance have been pushing for a comprehensive approach to the Global Monitoring Framework since discussions began immediately after the UN Summit on NCDs in September 2011. In recent months civil society groups scaled up their advocacy efforts and kept pressure on the WHO and governments to ensure a comprehensive set of targets were

agreed that balanced prevention, treatment, and care, and that matched the scale and complexity of the global epidemic.

As part of this advocacy push, UICC organised a roundtable discussion in Geneva, Switzerland, co-hosted by the United States and Panama Missions to the United Nations. This was a uniquely global and multisectoral event, with representatives from over 20 UN Missions, as well as representatives from the WHO, other UN agencies, non-governmental organisations, and the private sector. During this 90-minute discussion, participants had the opportunity to share views on the emerging Global Monitoring Framework. There was strong consensus around the table that the Global Monitoring Framework should embrace NCD targets and indicators beyond prevention, and recognise that we have a responsibility to the millions of individuals worldwide already living with NCDs.

The Global Monitoring Framework for NCDs: Advocacy “Wins” and Shortcomings
The global targets adopted cover prevention (tobacco, physical inactivity, alcohol, salt, raised blood pressure, and diabetes/obesity) and the health system response (improving the availability of essential medicines and technologies, and counselling and drug therapy for the prevention of heart attack and stroke). Mr Cary Adams, Chair of the NCD Alliance and CEO of UICC, said, “We are proud to witness the first set of global targets and indicators that



signal a new era of accountability for the millions of people with NCDs worldwide. We commend WHO and Member States for agreeing a comprehensive set of targets that balance both prevention and treatment.”

In addition to the targets addressing risk factors for NCDs, UICC welcomes the adoption of indicators on the availability of services important to cancer control, such as cervical cancer screening, and the indicator on morphine consumption as a measure of the availability of palliative care.

Key Advocacy Messages
Achievement of the goal to reduce premature NCD deaths by 25% by 2025 requires:
• Recognition that NCDs require a multisectoral response involving whole-of-government, UN agencies, civil society and the private sector;
• Recognition that NCDs are more than just a health issue; the priorities and targets of the NCD framework must be fully integrated into the next generation of development goals – the Post-2015 Development Framework;
• Full integration of the Global Monitoring Framework with other complementary components of the global NCD framework – the Global Action Plan for the Prevention and Control of NCDs 2013-2020, and the Global Coordinating Mechanism for NCDs;
• A Global Monitoring Framework for NCDs that:
  - Delivers a comprehensive set of bold targets and indicators to drive progress towards “25 by 25”
  - Strikes a balance between targets on prevention and those on detection, diagnosis, treatment and care for NCDs
  - Supports the application of global targets to regional and national levels
  - Includes a rigorous reporting system
  - Recognises the need to build the technical capacity of member states to measure indicators.

UICC was particularly pleased to see that member states


recognised the importance of cancer-related infections by including targets for hepatitis B (HBV) and human papillomavirus (HPV) vaccines. There was some concern among member states about the current cost of the HPV vaccine, which led to agreement of an indicator at policy level, but as many member states from the African region said, what is important at this stage is that it is being monitored.

Julie Torode, Deputy CEO and Director of Advocacy and Programmes at UICC, said, “The price of the HPV vaccine has come down tremendously in the few years it has been available. The GAVI announcement to include the HPV vaccine in its portfolio of vaccines allows access to those countries with the highest burden to really address this issue. We do believe that this will bring the price of this vaccine down even further in the coming years, making it a cost-effective solution for other countries wishing to save women from cervical cancer.”

One disappointment was that no breast cancer screening indicator was signed off by member states, despite passionate support from some regions. However, we are pleased that this has led to a commitment to address breast cancer screening in the Global Action Plan on NCDs. Over the coming months, UICC and NCD Alliance partners will continue advocacy efforts to ensure that key issues like this, which are not addressed in the Global Monitoring Framework, will not be lost, but rather integrated into the Global Action Plan for NCDs 2013-2020.

The components fit together neatly: the Global NCD Plan will define the priorities over the next seven years and recommend clear actions for all sectors; the Global Monitoring Framework, with targets, will be fully integrated into the Global Plan to monitor progress towards these priorities; and a Global Coordinating Mechanism (or partnership) will mobilise multisectoral action and resources to see the Plan fully implemented. UICC and its NCD partners have offered their expertise and knowledge to take the Global Monitoring Framework forward into an equally bold and comprehensive Global Action Plan for NCDs for 2013-2020.

From Consultations, Declarations and Resolutions… To Action
With these targets and indicators finalised, the focus will now turn to the 2013-2020 Global Action Plan on NCDs, currently being drafted by the WHO and member states, and its implementation. The NCD Alliance has urged member states to view the three aforementioned WHO consultation processes as complementary components of a comprehensive Global NCD Framework. By explicitly integrating each with the others, the international community can end the piecemeal approach to NCDs.

The Power of Multisectoral Partnerships
The UN Political Declaration on NCDs clearly articulated the need for multisectoral partnerships, engaging both health and non-health actors, including civil society and the private sector, to promote and support the provision of services for NCD prevention and control. In addition to bolstering global and national advocacy efforts, such partnerships are essential for the implementation of cancer and other NCD interventions at country level, and, given today’s financial climate, engagement of parts of the private

Union for International Cancer Control
The Union for International Cancer Control (UICC) is a membership organisation that exists to help the global health community accelerate the fight against cancer. Founded in 1933 and based in Geneva, UICC’s growing membership of over 760 organisations across 155 countries features the world’s major cancer societies, ministries of health, research institutes and patient groups. Together with its members, key partners, the World Health Organization, World Economic Forum and others, UICC is tackling the growing cancer crisis on a global scale.

Autumn 2012 Volume 4 Issue 4

3. (continued) comprehensive global monitoring framework, including indicators and a set of voluntary global targets for the prevention and control of noncommunicable diseases (A/NCD/2). Available at: http://apps.
4. Little, M. and Schappert, J. (2012) Working Toward Transformational Health Partnerships in Low- and Middle-Income Countries. BSR

sector, with appropriate safeguards to manage potential conflicts of interest, is more critical than ever. There is a clear willingness in the private sector to engage at this level; according to a recent survey conducted by BSR, 40% of companies expect to increase their commitment to Global Health Partnerships focused on NCDs in the next five years. Global Health Partnerships could play an important role in improving the primary healthcare systems that are the front lines – particularly in low- and middle-income countries – for engaging communities with prevention, diagnosis and treatment across a range of diseases, including cancer. UICC and NCD Alliance partners will continue to closely follow discussions surrounding the development of options for multisectoral NCD partnerships, which are tabled for discussion at the UN General Assembly on 28 November 2012.

Looking Ahead: Priorities for 2013
The adoption of the UN Political Declaration on NCDs in September 2011 and the three aforementioned consultation processes have started, and will continue, to carve out a new global advocacy space for the cancer and broader NCD community, and an opportunity to ensure that NCDs, including cancer, continue to occupy a place on the global health and development agenda. Ensuring that NCDs are part of

NCD Alliance The NCD Alliance was founded by the International Diabetes Federation, the Union for International Cancer Control, the World Heart Federation and the International Union Against Tuberculosis and Lung Disease (The Union). The NCD Alliance is a network of more than 2000 organisations leading the global civil society movement against premature death and preventable illness and disability from noncommunicable diseases (NCDs), including cancer, cardiovascular diseases, chronic respiratory disease, and diabetes.

the 2013 Millennium Development Goal review and the emerging debate on universal health coverage, the sustainable development goals and other development issues will also be vital to ensuring that cancer remains central to future thinking. As cancer advocates, we face both new opportunities and challenges as ‘late-comers’ to the development discourse. Now more than ever, innovative partnerships that go beyond traditional health groups, and embrace partners in the development sphere, including reproductive, maternal and child health organisations as well as the AIDS community, are critical if we are to succeed in reducing the cancer burden for future generations.

References
1. World Health Organization (2012) World Health Assembly Resolution A65/54. Available at: http://apps.
2. World Health Organization (2011) Global Status Report on Noncommunicable Diseases 2010. Geneva: WHO
3. World Health Organization (2012) Report of the Formal Meeting of Member States to conclude the work on the

Dr Julie Torode is Deputy CEO and Advocacy & Programmes Director of the Union for International Cancer Control (UICC). In addition to managing UICC’s flagship publications, such as the TNM classification series, Dr Torode has also been involved in the development of UICC’s global programmes in the areas of prevention awareness, paediatric oncology, and cervical cancer. Prior to joining UICC, she spent ten years in Germany working in the pharmaceutical industry, including phase I-IV clinical research. Dr Torode holds a PhD in organic chemistry from the University of Liverpool. Email:

Rebecca Morton Doherty is Advocacy and Programmes Coordination Manager for the Union for International Cancer Control (UICC). Since March 2011, Ms Morton Doherty has been supporting UICC’s communications and advocacy efforts in the lead-up to and follow-up of the UN Summit on Non-communicable Diseases. Ms Morton Doherty holds an MSc from the London School of Economics and, prior to joining UICC, spent six years working in London and Geneva in international non-governmental organisations. Email:


Labs & Logistics

Things Pharmaceutical Supply Chain and Operations Directors Should Know, But Do Not

In considering supply chain operations in pharmaceutical businesses, the naïve amongst us might assume that the answers to the basic questions are clear. How much inventory is enough? What does it cost to make a tablet? Which plants are the most efficient producers? How much capacity do we have? How should we plan and organise how demand hits operations? What does it cost to serve the customer? Amazingly, there is a fundamental lack of clarity on these most basic of questions. There are many things that supply chain directors in pharmaceuticals should know but don’t.

Let us begin with conversion costs. How much does it cost to make a tablet, or indeed a litre of medicine in non-sterile and sterile formats? Conversion costs are measured in pharmaceutical companies, but the style of measurement is flawed. Most companies will say that the conversion cost in plant X is Y$/thousand, or some such equivalent measure. The problem with such a measure is that it is over-summarised. It is best to review conversion costs on a product family basis and to present them in a volume context. In some cases, it can be useful to include the cumulative cost of waste in the conversion cost analysis. Influencing circumstances should be noted: blister-pack versus tub, small tablet versus large, very low dose, partial processes on site, etc. The benefit of informed conversion cost analysis is that it can clearly show how volumes influence cost, and so separate economies of scale from fundamental operating efficiencies.
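As a minimal sketch of the family-level, volume-context view suggested above (every family name, volume and cost figure here is hypothetical, purely to show the shape of the analysis):

```python
# Illustrative sketch: conversion cost reviewed per product family, in a volume
# context, rather than as a single plant-wide $/thousand figure.
# All names and figures are hypothetical.

families = [
    # (product family, annual volume in thousands of units, annual conversion cost in $)
    ("solid oral, blister", 120_000, 960_000),
    ("solid oral, tub",      40_000, 480_000),
    ("liquid, non-sterile",  15_000, 300_000),
]

for name, volume_k, cost in families:
    unit_cost = cost / volume_k  # $ per thousand units
    print(f"{name:22s} {volume_k:>9,} k units  {unit_cost:5.2f} $/thousand")
```

Laying the families side by side like this is what makes the volume effect visible: the highest-volume family converts cheapest per thousand, so scale effects can be separated from genuine operating efficiency before plants are compared.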

Informed conversion cost analysis is fair to plants:
• In a scorecard ranking of plants.
• In a decision process about which plants should stay in the network.
• In determining which portfolios are most appropriate to particular sites.
• In determining a road map for COGS improvement.
• In identifying cost reduction opportunities in a due diligence or post-acquisition transformation.

Benchmarking conversion costs is a source of some anxiety to pharmaceutical producers, originator and generic alike. Many of the benchmarks that are out there are flawed in that they underestimate true potential. Moreover, because they are only averages, they do not provide a roadmap to optimise conversion costs. The conversion cost of the average generic producer is one-third that of the average originator producer

(in spite of the generic producer having more fragmented portfolios). The more cost-efficient generic producers have one-third the conversion costs of the higher-cost quartile generic producers. There is a similar ratio between cost-efficient originator producers and the higher-cost quartile originator producers. The best of the originators is still three times higher in conversion cost than the best of the generics. Surprisingly for some, the most cost-efficient plants are not in Asia, but in Eastern Europe and certain regions of the US. When one takes account of end-to-end costs, including logistics, retesting and the costs of longer supply chains, the strategic network design activities and strategic sourcing initiatives of some companies could be seriously challenged.

Some originator companies, in an effort to be efficient converters, are pursuing “Operational Excellence” programmes. The acid test for being lean in originator is simple. If you can



defend your products when they go off patent, then you have some credibility in the lean conversation. If you cannot, you are only playing with lean.

To understand conversion costs, we have to understand cost drivers. Cost drivers are those levers that can be pulled to achieve a movement in conversion cost. Each lever should be mapped to the conversion cost so that the impact of a policy change is made transparent. There is no rocket science in this; there is just properly structured activity-based costing. There are many cost drivers. Examples include OEE (setup and run-time efficiencies), crewing, batch size and scale-up, standardisation and rationalisation, batches released per total QC heads per month, BMRs per total QA heads per month, technicians per machine shift, SKUs per planner, and material moves per warehouse operator, as well as all indirect cost structures including depreciation.

The most important cost driver is capacity. Almost universally, supply chain and operations directors are not, and cannot be, precise about how much capacity they have in their operations. The reason is clear. Capacity is elastic. Effective capacity is heavily influenced by how demand hits that capacity. The degree of elasticity is very significant. The difference in effective capacity between operations with poor portfolios and poor demand behaviours, and those with good portfolios and well-managed demand, can easily be 300%. The source of the elasticity is the impact of setup and cleaning regimes. The dilution, or lack thereof, of major cleans, minor cleans, sub-minor cleans, form changes, etc. needs to be fully characterised in order to analyse capacity. We are also imprecise about the basic parameters that drive stage capacity. This imprecision derives from a lack of clarity about what 100% means, and the usual defensive positions in the mapping from equipment manufacturers’ speed, to validated speed, to effective speed during run-time.
Moreover, capacity is a multi-stage structure. To truly understand capacity, one must model and determine rate-limiting capacity across each of the stages from weigh-dispense, through wet granulation,

dry blending, compression/encapsulation, coating, branding, filling and packaging, and take into account visit frequency in the case of wet granulation and coating. The principles involved are similar for solid, liquid and other forms. Emphasis may differ in bio-pharmaceuticals and medical technology, as setup and cleaning regimes play out differently. Regardless, there is no excuse for imprecision. Capacity is Cost. Imprecision about capacity is equivalent to being imprecise about costs.

Of course, to optimise pharmaceutical capacity, the key to efficient operations has traditionally been viewed as campaigning and long production runs. This is a fallacy. The weakness in naïve campaigning is that there is a point beyond which campaigning achieves nothing incrementally of significance. Economic order quantity theory is formulated as a trade-off between the dilution of set-up costs and inventory holding costs. This formulation is useless, because it ignores many other cost drivers, and in particular the costs of idle capacity associated with instability of the production plan. The better formulation is to seek a tipping point up to which campaigning makes sense, go that far and no further, and to pursue flow in supply beyond that point, subject of course to the constraints of demand and shelf-life.

Flow is wonderful, and is the tenet of lean thinking that has most merit. The breakthrough is to move away from thinking about replenishment as a process of order transaction management and instead view it as a process of acceleration and deceleration around a target average speed. When we drive down the highway, we don’t appreciate a journey which encompasses rapid acceleration from zero to 200 km/hour and then back to zero. Driving at 100 km/hour and gently adjusting the speed down to 80 or up to 120 is much more rational and effective.
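The rate-limiting, multi-stage view of capacity described above can be sketched in a few lines. Stage names, run rates and changeover hours below are illustrative assumptions, not data from any plant; the point is that effective capacity is the nominal rate diluted by setups and cleans, taken at the worst stage.

```python
# Sketch: effective capacity of a multi-stage line is set by the rate-limiting
# stage after diluting run time for setups and cleaning regimes.
# All rates and hours are hypothetical.

def effective_capacity(nominal_rate, period_hours, setup_hours, cleaning_hours):
    """Units per planning period after setup/cleaning dilution."""
    productive_hours = max(period_hours - setup_hours - cleaning_hours, 0)
    return nominal_rate * productive_hours

stages = {
    "weigh-dispense": effective_capacity(500, 160, 10, 6),
    "wet granulation": effective_capacity(300, 160, 25, 20),  # visited per batch
    "compression":    effective_capacity(400, 160, 30, 15),
    "packaging":      effective_capacity(450, 160, 40, 10),
}
bottleneck = min(stages, key=stages.get)
print(f"rate-limiting stage: {bottleneck}, {stages[bottleneck]} units/period")
```

Note how elasticity shows up: changing only the cleaning regime at one stage moves the whole line’s effective capacity, without any new equipment.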
Yet the observed behaviour in our factories is boom-and-bust patterns, as we respond to various inventory corrections further down the demand chain, in a lumpy manner, based on poor planning architectures. It is not that the lean zealots

amongst us know all the answers. Inventory is not evil. Inventory in the right places is an enabling asset. Level push is perfectly fine, provided it is properly engineered. We don’t all need to pursue level pull to a point where it damages the business, especially in a context with fixed and immutable set-up costs. There are, of course, many barriers to flow. These include bids and tenders, non-RFT, lack of end-to-end visibility, APIs on allocation, as well as artwork, pricing and reimbursement changes.

Architecting the planning process involves creating a plan that is appropriate to every part. There is a key tier of products which should be produced to a drum beat and to a variated level flow. The second and third tiers of products should be produced to an optimal quantity or an optimal cycle schedule. The fourth tier is erratic, and these products should only be produced to order. A naïve implementation of “runner, repeater, stranger” theories doesn’t cut it. Fundamentally, it is about characterising demand behaviours and engineering appropriate supply responses. Doing the engineering involves understanding the demand variability, the multi-stage capacity structure and the optimisation of capacity elasticity. It also involves taking account of the postponement opportunity in product variety build-up. In some contexts, there are distribution optimisation overlays. These can involve postponed finishing and cross-dock structures, as well as direct shipment. Campaigning in the laboratory without reference to the overall planning architecture is another example of the well-intentioned but perverse behaviours that confound pharmaceutical supply chains.

In understanding the roadmap for the optimisation of conversion costs, the capacity and planning architecture has three important cost optimisation platforms. These are:
• Optimisation of the basic capacity parameters.
• Aligning resource as closely as possible with the average demand.
• Creating sustainable, clear available capacity that can be intensely utilised.
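The tiering of products described above ultimately rests on characterising demand behaviour. A minimal first-pass sketch might classify each product by the coefficient of variation of its period demand; the thresholds below are purely illustrative assumptions, not industry standards.

```python
# Sketch: crude demand characterisation to assign products to planning tiers.
# CoV thresholds (0.25, 0.75) are illustrative assumptions only.
from statistics import mean, stdev

def tier(demand):
    """Map a demand history to a planning tier via coefficient of variation."""
    cov = stdev(demand) / mean(demand)
    if cov < 0.25:
        return "tier 1: level flow to a drum beat"
    if cov < 0.75:
        return "tier 2/3: optimal quantity or optimal cycle schedule"
    return "tier 4: erratic, produce to order"

print(tier([100, 105, 98, 102]))  # stable runner
print(tier([100, 10, 0, 250]))    # erratic stranger
```

A real engineering exercise would go further, as the text argues, folding in the multi-stage capacity structure and capacity elasticity rather than demand variability alone.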



How well these are managed is illustrated every time you walk through a packaging hall or along the corridors of the bulk production suite. How many machines are running?

There are also significant interactions in the end-to-end optimisation of the pharmaceutical supply chain. For instance, theory tells us that distribution inventory positioning (i.e. safety stock in the distribution centre) should be proportional to the square root of the cumulative lead-time. Consider two plants with equal capacity assets, one operating to a responsive weekly planning process and one operating to an inflexible monthly planning process. The inflexible plant will require over 70% more inventory in the DC compared with the flexible plant, to achieve the same customer service level. This is merely based on the impact of demand variability with unbiased forecasting. However, we live in a world of bias and optimism. For products with positive forecast bias and a long frozen period in front of the plants, the inventory increase as a result of the forecast bias is amplified by the ratio of the cumulative lead-time to the inventory days in the inventory structure. Indicatively, 10% positive bias will cause 15% further excess inventory in the value chain in the case of the flexible plant, and near 30% excess inventory in the case of the inflexible plant.

In some cases, analysing demand behaviours below the SKU level and at the individual customer level can help identify erratic demand behaviours such as bad selling or buying behaviours, parallel imports, and wholesaler or customer buy-ahead.

It is strange how few supply chain directors know how much inventory is enough. Table 1 shows benchmark data from major pharmaceutical companies in terms of the number of inventory days in the supply chain. As can be seen, inventory holdings range from 70 days (less than three months) to over 370 days (greater than one year).
The median inventory in pharmaceutical companies is 165 days (say 5.5 months). Generic producers hold indicatively the same amount of inventory, around six

months, which compares favourably given the portfolio complexity that they have to deal with compared with originators. For sectoral comparison, medical technology operates with five months, and the consumer packaged goods sector, including the likes of Unilever, P&G, Danone and Nestlé, operates with between 45 and 70 days. The range of variation within the pharmaceutical sector may suggest that significant opportunity in inventory reduction exists for many companies.

There are, of course, limitations to benchmarking. Consider, for example, the process of producing premium wines. To bring a good red Rioja to market may take four years, whereas to bring a good young crisp Sauvignon Blanc to market may take only four months. Benchmarking wine producers without taking into account the grape varietals and maturation cycles within their portfolios will be ill-informed. Similarly, benchmarking inventory positions in pharmaceutical companies against peer pharmaceutical companies may merely identify that some corporations have more inventory than others. It does not indicate whether this is a good thing or a bad thing given the business imperative. Nor does it tell one how to move the needle on inventory.

Right-sizing inventory holdings requires the development of a multi-stage inventory model, and identifying the right-sized target holdings for each and every family across each stage of the conversion process. Right-sized inventory is attuned to the business imperatives, the business structure, and the business dynamics. Right-sizing the inventory also involves contrasting actual inventory holdings with the theoretical holdings suggested by the model. This is enabled by inventory analytics. The diagnostic comparison of actual inventory holdings with theoretical inventory holdings is a much more valuable exercise than having theory debates about what the theoretical model should be. Good analytics is better than good theory. General theories don’t cut it.
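A small worked example of the theory in play: as noted earlier, safety stock scales with the square root of cumulative lead-time, so a plant on a monthly planning cycle needs far more DC inventory than one on a weekly cycle. The z-value and demand figures below are illustrative assumptions.

```python
# Sketch: square-root-of-lead-time safety stock, weekly vs monthly planning.
# z (service factor) and daily demand sigma are hypothetical.
from math import sqrt

def safety_stock(z, sigma_daily, lead_time_days):
    """Classical safety stock: z * sigma * sqrt(cumulative lead-time)."""
    return z * sigma_daily * sqrt(lead_time_days)

weekly  = safety_stock(z=1.65, sigma_daily=40, lead_time_days=7)
monthly = safety_stock(z=1.65, sigma_daily=40, lead_time_days=21)
print(f"monthly plan needs {monthly / weekly - 1:.0%} more safety stock")
```

With the monthly cumulative lead-time taken at three times the weekly one, the ratio is √3 ≈ 1.73, i.e. the “over 70%” penalty for the inflexible plant quoted above.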
If you cannot see your supply chains, you cannot even begin to have a theory of how to manage them. It is not just about data access. It is about using that data to understand and diagnose

behaviours. Rapid analytics enables one to do discovery and go deep in the characterisation of value platforms. These behaviours relate to:
• How customers buy,
• How companies sell,
• How demand patterns are distorted by behaviours and deal-making,
• How demand behaviours influence order fulfilment effectiveness and consequent cost to serve,
• How disconnected supply planning is from demand, and what this means in terms of operational ineffectiveness.

The comparison of actual versus theoretical holdings makes clear the legacy of bad behaviours and poor execution that exists in the supply chain. This is the real value platform. It is probably true that half of pharmaceutical companies could take three months out of their inventory holdings without impacting customer service, and the other half could similarly take over one month out. Moreover, in doing so, they would drive better fundamental behaviours in their operations beyond the cash value of inventory saved.

Table 1. Inventory Holdings (Days)

Corporation                | Days
Abbott (F11)               | 77
BMS (F11)                  | 90
Astra Zeneca (F11)         | —
Johnson and Johnson (F11)  | —
Novartis (F11)             | 114
Bayer (F11)                | 124
Astellas (F11)             | 128
MSD (F11)                  | 135
Roche (F11)                | 139
Takeda (F12)               | 164
Eli Lilly (F11)            | —
Pfizer (F11)               | 188
GSK (F11)                  | 195
Sanofi Aventis (F11)       | —
Teva (F11)                 | 208
Boehringer Ingelheim (F11) | —
Daiichi Sankyo (F11)       | —
Gilead (F11)               | 239
NovoNordisk (F11)          | 273
Amgen (F11)                | 374

Value stream formation has some currency in pharmaceutical companies. A value stream is an organising construct, i.e. an end-to-end decomposition of the total supply network, based on selecting product families with similar product and process characteristics. Use of value stream structures offers some advantages, such as dedication of equipment, reduction of setup, and creation of focus within the business at an overall and at an operator level. Velocity may be improved. On the other hand, capacity assets may be underutilised, capacity contingency may be lost, and capacity imbalance may be created. Making the decision to go for value stream structures requires very careful analysis. Shigeo Shingo was an industrial engineer. If you don’t do the engineering, you are destroying value.

S&OP is about having informed and structured dialogues between supply chain, commercial and associated supporting functions, at a number of levels in the company, focussed on producing a shared business plan addressing:
• Demand assessment
• Capacity strategy
• Supply strategy
• Demand/capacity balancing
• Inventory strategy and implications
• Fiscal commitments
• Headcount implications
• Business change

The design issue is how we choreograph such S&OP conversations. If we don’t talk together about what we need to talk about, or if key conversations are happening in other rooms without the appropriate people at the table, then we don’t have an S&OP process. Too many companies simply go through the motions.

Distribution and cost-to-serve optimisation are more potently managed in the FMCG sector than in the pharmaceutical sector. This is in part because of the division of the pharmaceutical path to the customer into wholesaler and possibly pre-wholesaler structures, as well as the general challenge of twice-daily delivery to pharmacy down to the single unit level. Concepts such as cost per pick and cost per move in primary and secondary distribution are not part of the common lexicon of pharmaceutical companies in the

way that they are in other sectors. Practically, the wholesaler structure is what it is, and immutable. However, demand feeds from wholesaler to pre-wholesaler, and the associated costs to serve, are poorly understood by pharmaceutical companies. Moreover, the choice of whether to in-source or outsource pre-wholesaler activity is poorly understood. Thirdly, the move towards centralised distribution centres, in an environment wherein SKU inventory is already divided by labelling, smacks of the ill-informed pursuit of cost advantage in a situation where none exists.

There is a portfolio governance overlay. Proper portfolio management involves the construction of landscapes of current and potential value for the business, which show the business value at a gross and net margin level, along with representation of the costs of complexity. The application of portfolio analysis has to accommodate both the legacy and prospective portfolios, and has very different imperatives in OTC, generics and originator pharmaceuticals. When the analysis is taken down to an SKU level, additional considerations such as standardisation and harmonisation of form are in play. Additionally, should SKU offerings be variated by market size for the same molecule, i.e. should Moldova have the same SKU offering as Germany? There are other complications, such as fellow travellers, i.e. products which go together, one of which is attractive and one of which is unattractive.

Finally, when one adds it all up, there is the possibility of creating a roadmap that pulls all these strands together. Roadmaps are not theoretically difficult structures. They talk to what is to be done, by whom and when. The challenge is what should be on that roadmap and where it points to in terms of measurable business outcomes.

The Takeaways
• Capacity is elastic, with a sector-specific resonance. Capacity is Cost.
• Know the tipping point up to which to pursue campaigning for efficient operations, and beyond which to go for level flow.

• Flow isn’t one number; flow can be variated.
• Move the average resource deployed as close as possible to the average demand.
• Create clear capacity space which can be further loaded in order to dilute your fixed costs.
• Review your conversion costs on a family basis.
• Examine demand patterns at the customer level and below the SKU level.
• Know how much inventory you should have globally, by inventory node and category, and compare this with how much you actually have. Use this gap to diagnose behavioural weaknesses in your supply chain.
• Maintain an opportunity map for inventory right-sizing.
• Beware of leanspeak disconnected from the business imperative.
• Choreograph a sales and operations planning process that adds value to the business.
• Apply good governance to your portfolios.
• Have a strategic roadmap for the journey you are going to take.
• See your supply chain behaviours through use of rapid analytics.

Dr John Harhen is an independent consultant in the areas of supply chain and operations strategy. His clients include the largest pharmaceutical company in the world, the leading premium drinks company in the world, and the only life sciences company in Europe accredited with the Shingo bronze medallion. John also works with private equity and healthcare distribution. John can be contacted at johnharhen@ A profile of Orbsen Consulting appears elsewhere in the journal.




Labs & Logistics

Applications of LIMS to Stability Testing

Managing pharmaceutical stability testing can be very demanding, especially on small- to medium-sized companies developing and producing OTC, generic and new Rx products. Some companies outsource the actual inventory management and testing requirements, but they are still required to track progress and report results as part of their QA or development process, and will need to meet guidelines set by regulatory bodies such as the FDA and the International Conference on Harmonization (ICH). Laboratory Information Management Systems (LIMS) provide a powerful way of managing and reporting the outcome of these studies.

LIMS can be implemented in all types of pharmaceutical laboratories, from QA/QC and R&D laboratories to laboratories analysing clinical trials of novel pharmaceuticals. Stability testing, however, is a particularly interesting application of LIMS. Ensuring that approved protocols are followed precisely, with “pulls” made on schedule and the appropriate tests completed, can be a time-consuming and tedious task. Some stability managers and supervisors use Excel™ spreadsheets to store and track this work, but this approach lacks the necessary security and audit trail to comply with FDA regulations, including 21 CFR Part 11. In addition, as the information is contained in individual spreadsheets, reporting on complete studies and batches is difficult and often must be done manually. LIMS can overcome these problems.

21 CFR Part 11 requires change control, validation and audit trailing of changes to the system and the data that it holds. Yet today’s laboratories must be able to offer the flexibility to adapt and change their processes and workflows when necessary, and their LIMS must stay

in step with these changes. Larger organisations may opt for a fully configured LIMS for stability studies, but some companies may prefer to implement a “stand-alone” stability system rather than a full-scale LIMS. The Stability Testing Process A stability study measures the shelflife of a given product by testing a series of samples stored in environmental chambers to simulate accelerated testing. Tests are conducted for samples stored under varying conditions and for varying lengths of time, since product stored in a warm, bright room might expire sooner than the same product stored in a cool, dark environment. The requirements for stability testing are described in 21CFR211.166 (Revised as of April 1, 2012): (a)  There shall be a written testing programme designed to assess the stability characteristics of drug products. The results of such stability testing shall be used in determining appropriate storage conditions and expiration dates. The written programme shall be followed and shall include: (1) Sample size and test intervals based on statistical criteria for each attribute examined to assure valid estimates of stability; (2) Storage conditions for samples retained for testing; (3)  Reliable, meaningful, and specific test methods; (4) Testing of the drug product in the same container-closure system as that in which the drug product is marketed; (5)  Testing of drug products for reconstitution at the time of dispensing (as directed in the labelling) as well as after they are reconstituted. (b) An adequate number of batches of each drug product shall be tested

to determine an appropriate expiration date and a record of such data shall be maintained. Accelerated studies, combined with basic stability information on the components, drug products, and container-closure system, may be used to support tentative expiration dates provided full shelf-life studies are not available and are being conducted. Where data from accelerated studies are used to project a tentative expiration date that is beyond a date supported by actual shelf-life studies, there must be stability studies conducted, including drug product testing at appropriate intervals, until the tentative expiration date is verified or the appropriate expiration date determined. (c)  For homeopathic drug products, the requirements of this section are as follows: (1)  There shall be a written assessment of stability based at least on testing or examination of the drug product for compatibility of the ingredients, and based on marketing experience with the drug product to indicate that there is no degradation of the product for the normal or expected period of use. (2)  Evaluation of stability shall be based on the same containerclosure system in which the drug product is being marketed. (d) Allergenic extracts that are labelled “No U.S. Standard of Potency” are exempt from the requirements of this section. Stability studies are required at different stages of the product lifecycle. Initial stability studies are required for product registration purposes and set shelf-life, storage conditions and specifications. Autumn 2012 Volume 4 Issue 4


Follow-up (commitment) studies occur after registration to verify the registration data, and then there are ongoing studies after registration and marketing to prove that conditions are still valid. The latter are generally required for all licensed medicinal products on the market, and will cover each product, dosage and primary package type.

Use of LIMS in Stability Measurements

Clearly, stability testing is a major undertaking, requiring the management of significant amounts of data from a variety of sources. In addition, clear, cohesive reporting of stability testing results is required, both for dossier submission and for ongoing studies. The use of an appropriately configured LIMS, or a stability module within a LIMS, can automate and control the entire operation of the stability study, including:

• Protocol creation
• Study initiation and management
• Inventory management
• Sample login scheduling
• Future workload reporting
• Stability study reporting

This approach simplifies the whole study management process. Critical components in the study include sample actions, time points and storage locations. The three key types of action that a sample will experience in a laboratory are testing, moving and non-moving. Testing requires withdrawing the required quantity of sample to perform whatever analytical tests are required. Moving actions cover the transfer of samples from one condition to another, whilst non-moving actions cover in-situ activities such as freeze/thaw or shaking. Use of a stability LIMS can optimise the number of samples to be stored for a study, thus avoiding shortages and waste, as well as saving staff time by automatically registering pulled samples with the appropriate tests and limits. In addition, full management of storage room locations increases efficiency.
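As an illustration of the scheduling such a system automates, the sketch below expands a protocol's storage conditions and time points into a sorted pull schedule. All class and field names are hypothetical, not those of any particular LIMS, and the ICH-style intervals are only an example.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative protocol model: a batch is stored under one or more
# conditions, each with its own pull time points (in months).
@dataclass
class Condition:
    label: str                      # e.g. "25C/60%RH" (long-term)
    pull_months: list[int]          # e.g. [0, 3, 6, 9, 12, 18, 24]

@dataclass
class Batch:
    batch_id: str
    start_date: date
    conditions: list[Condition] = field(default_factory=list)

def pull_schedule(batch: Batch) -> list[tuple[str, str, date]]:
    """Expand a batch's conditions into (batch, condition, due-date) pulls.
    Months are approximated as 30-day periods for this sketch."""
    pulls = []
    for cond in batch.conditions:
        for m in cond.pull_months:
            pulls.append((batch.batch_id, cond.label,
                          batch.start_date + timedelta(days=30 * m)))
    return sorted(pulls, key=lambda p: p[2])

batch = Batch("B-001", date(2012, 10, 1), [
    Condition("25C/60%RH", [0, 3, 6, 9, 12, 18, 24]),   # long-term
    Condition("40C/75%RH", [0, 3, 6]),                  # accelerated
])
for b, cond, due in pull_schedule(batch):
    print(b, cond, due)
```

A real stability module would of course anchor pulls to calendar months and tolerate pull windows; the point is that the schedule is derivable from the protocol, so the LIMS can generate work lists rather than relying on spreadsheets.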

Typical Stability LIMS Components

A well-configured stability LIMS will address all the key functionality of stability testing:

a) Protocol Design
Protocol design allows information concerning the study, such as its purpose, the identity of the study director, location and product, to be stored. Typical requirements may include each study having multiple batches, with multiple conditions for each batch. Batches may need to be initiated on a different date, from a different production batch or with different raw material suppliers, or may be required because of packaging or dose changes. Batches in development may be driven by formulation or packaging changes. Good protocol design accommodates FDA and ICH guidelines, including matrixed protocols and standard room temperature, accelerated and intermediate conditions.

b) Storage Room Operations
Stability storage room operations should be appropriately managed to include planned start dates for each batch as well as the placement date and any cycles or moves required. A typical "cycle" might be to invert the container every three months, or to turn the light in the chamber on and off at preset intervals. A move occurs when containers are transferred from one condition to another; unplanned or emergency moves required by chamber failures should also be recorded. The exact location of each batch (room, chamber, rack, shelf, box) should be stored for every condition. All actions such as placements, pulls, moves, cycles, relocations and scrapping should be recorded with the actual time and date, and

the ID of the person completing the action. These values may be compared to the planned dates to identify any problems or changes. Work lists may be generated for storage room staff, showing what tasks are required during the period.

c) Inventory Management
Good inventory management will ensure that there is enough material in storage to complete the study, without generating large amounts of waste. This can be done by calculating the number of sample containers of each batch that must be placed at each condition in order to execute the protocol, taking into account factors such as the number of pulls, the amount of sample in each container, the amount required for each test, and whether replicates can be performed on the same sample.

d) Recording Test Results
By storing test results for each batch, condition and time point in a results database, limits can be set up for each test at each condition/time point if required. The results database can provide a full audit trail, so that if any value is changed once it is stored, the original value is not lost but is retained before the new value is accepted. The identity of the person making the change, the time/date and the reason for the change should be recorded along with the new value. Historical versions of all results can be readily made available for investigation.

e) Reporting
Standard reports include protocol and batch status, placement and pull lists, tests required, summary reports that include test results to date, and an OOS report. Typically, reports may be exported in many different formats, including Excel™, Word™ and HTML, for inclusion in other documents. Statistical analysis is also extremely useful, and may include shelf-life projections or accelerated shelf-life calculations using the Arrhenius equation. The results to be analysed can be selected by study, batch, condition, date range, test and component, with the resulting plots and graphs stored and integrated into final reports.

Figure: LIMS workflow for stability testing

f) Regulatory Compliance
The system should assist in compliance with 21 CFR Part 11, for example by accommodating the implementation of an electronic signature whenever it is deemed necessary (such as when editing a specification, amending a test or changing a test result). User authorities should be individually defined, controlled by a unique user ID and password combination. Password rules should include, for example, the minimum number of characters in a password and an agreed expiry date to ensure regular re-allocation of passwords to all users.

Configurability

It is clear that LIMS can make a considerable contribution to managing stability studies, but given the sheer range of studies that may be required, the system needs to be configured for the particular application. Most LIMS are configurable to some degree; however, depending on the particular system, a programmer or other IT person may need to write scripts or new custom programs to design new screens or link menus in different ways to support different workflows. Some LIMS, however, use exactly the same core program suite but feature a configurable 'layer', or set of configuration tools, that allows
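The Arrhenius-based shelf-life projection mentioned above can be sketched as follows. This is only an illustration: the activation energy used here is a placeholder default, and in practice it must be fitted from the product's own degradation data before any projection is meaningful.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def projected_shelf_life(t_accel_months: float, temp_accel_c: float,
                         temp_storage_c: float, ea_j_mol: float = 83_000) -> float:
    """Project shelf-life at the storage temperature from an accelerated
    result, assuming Arrhenius behaviour: degradation rate k = A*exp(-Ea/RT),
    with shelf-life inversely proportional to k. The default Ea (~83 kJ/mol)
    is a placeholder, not a property of any real product."""
    t1 = temp_storage_c + 273.15   # storage temperature, K
    t2 = temp_accel_c + 273.15     # accelerated temperature, K
    ratio = math.exp(ea_j_mol / R * (1.0 / t1 - 1.0 / t2))  # k(T2)/k(T1)
    return t_accel_months * ratio

# e.g. a 6-month result at 40C projected to 25C storage
print(round(projected_shelf_life(6, 40, 25), 1))
```

A stability LIMS performing this calculation would additionally report confidence limits from the regression of the degradation data, which this sketch omits.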

each system to be set up to exactly match user requirements. This makes the process of configuration much simpler for the supplier, and less time-consuming for the user. In fact, this approach even allows the user to be involved in the configuration process.

Validation

Computer systems, software and instruments regulated by authorities such as the FDA require "validation". Each company must decide how to carry out the validation of its LIMS. One approach is to carry out the validation internally, provided there is a qualified person available. An alternative is to use the services of a qualified validation specialist (often organised by the LIMS supplier), who will make on-site visits both to gather the necessary information and to carry out the validation before producing a report.

John Boother, Managing Director at Autoscribe Ltd, has had involvement in around 5000 LIMS projects. Autoscribe is a UK-based global supplier of LIMS to both the laboratory and the wider business markets, with distributors in every continent offering localised technical support. Visit for more information. Email: john.boother@





RFID and Cold Chain Management

We might think that new technologies in cold chain management would have made great progress in automation in recent years, particularly with the evolution of temperature sensors using RFID communication, but somehow this has not been the case, at least in two of the industries that should benefit the most: biotech and pharma. Let us analyse what happened, why it happened, and why we should believe that things will, fortunately, change.

In the last three years we have heard a lot of buzz in the market that technology would allow temperature-sensitive product manufacturers to finally have seamless monitoring from production to patient - to be able to monitor all the way to what we call in the industry "the last mile". Biotech and pharma companies with temperature-sensitive products all seemed to agree that technology was going to take cold chain management beyond the enormous manual work to program, re-program, download, ship, change, reset, restart, re-download, save and retrieve data loggers - and still not have fully retrievable data all the way to the patient. The interest of biotech and pharma companies in evaluating new technologies was obvious, and many pharma companies became active in investigating new ways to improve their handling of the cold chain. But despite the great efforts there was one problem: the manufacturer, in most cases, did not have control of the chain all the way to the patient. Control escaped with each change of ownership, and due to the number of players in the supply chain - distributors, cargo companies, freight forwarders, pharmacies and physicians - full control was difficult. The manufacturer could make recommendations, audit and qualify partners, issue severe warnings, and so on, to make sure the cold chain was strictly controlled, but full control was, and still is, hard to reach.
Throughout the supply chain, all the players certainly did and do their best to reinforce "their" chain, and never compromise the efficacy or quality of the product. But given the number of players, a solution that makes one controlled chain instead of several independent ones always seemed the best way forward. After all, several players could hold to one chain, and with one chain the manufacturer could configure and analyse it while other parties - even the pharmacist or the physician - could control it and react to it, all with one goal: if the cold chain was broken, the product would not reach the patient. Nevertheless, despite all the good intentions and strong efforts, cold chain management is still where it was 10 years ago, with few, very few, exceptions.

Figure: RFID logger from CAEN RFID and Psion PDA

Tradition and More Tradition

We see that today, even in the USA or Europe, and particularly in the UK, strip-chart recorders are still widely used to control temperature during the transportation of pharmaceuticals. Strip-chart recorders are mechanical loggers that print the temperature history directly onto a strip of paper. They are cheap, and they do not require additional equipment or software. Most users know their limitations but also their practicability: there is no need for software to get an idea of the

temperature conditions, and the driver can simply sign a piece of paper upon reception, confirming that he has seen that the temperature was maintained as it was supposed to be. Technology has had a hard time replacing paper over recent decades, and cold chain control - not only for pharma, but also for the food industry - was no exception. In fact, sales volumes for this type of equipment number in the millions, despite the bulky plastic case and high cost of transport. Nevertheless, alternatives to paper-like ways of reading data are coming, and the answer is of course smartphones and tablets, for which temperature monitor manufacturers have already started to put new products on the market. The Swiss seem to be pretty innovative at this. Elpro was perhaps a pioneer: a year ago it launched the LIBERO, a quick-view solution for rapidly obtaining information on paper. Berlinger is continually developing new electronic indicators to improve monitoring, particularly in the last mile, and ECCS (Escort Cold Chain Solutions SA) has developed a USB logger that can be read by an Android tablet, with the data sent to the cloud. All this is happening quite fast now, but these developments still require

some form of manual intervention, and seamless control without manual or human intervention is what RFID can provide. This is the technology that can not only deliver seamless monitoring all the way to the last mile, but also reduce operational cost.

RFID for Cold Chain Management

Electronic data loggers are similar to strip-chart recorders, with the difference that they store the information in a memory that can be downloaded later by connecting the device to a PC. They have to be extracted from the insulated box, breaking the cold chain and consequently not allowing intermediate checkpoints along the supply chain. RFID data loggers, on the other hand, are essentially electronic data loggers whose data can be downloaded over a wireless link. The main advantages of this kind of device are the following:

• Data download does not need manual intervention.
• Reading through the box is possible, so intermediate checkpoints can be implemented without the risk of breaking the cold chain.
• By combining the identification capability of standard RFID with temperature data logging, it is possible to implement an integrated track-and-trace and cold chain management solution.

RFID data loggers are becoming cheaper and cheaper with new developments and the growth of the market, so that in a few months they should be well suited for box-level monitoring. Cost still limits usage at item level for most products but, in these cases, a complete solution for the cold chain that also covers the last mile can be implemented with a mix of technologies (e.g. an RFID logger at box level plus an RFID passive label with an integrated TTI at item level). Today there are very few players offering RFID solutions for the cold chain, but surprisingly the industry that most rapidly adopted RFID technology for cold chain monitoring was fresh produce: growers and exporters of fruits and vegetables.
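As a sketch of what wireless download enables at an intermediate checkpoint, the following assumes readings arrive as simple (timestamp, temperature) pairs - a hypothetical format, not that of any particular logger - and groups consecutive out-of-range readings into excursions for review.

```python
from datetime import datetime, timedelta

def find_excursions(readings, low=2.0, high=8.0):
    """Group consecutive out-of-range readings into excursions and report
    each one's start time, length (in readings) and worst temperature.
    The 2-8C default mirrors the classic biotech control range."""
    excursions, current = [], None
    mid = (low + high) / 2
    for ts, temp in readings:
        if temp < low or temp > high:
            if current is None:
                current = {"start": ts, "count": 0, "worst": temp}
            current["count"] += 1
            if abs(temp - mid) > abs(current["worst"] - mid):
                current["worst"] = temp
        elif current is not None:
            excursions.append(current)
            current = None
    if current is not None:
        excursions.append(current)
    return excursions

# Hypothetical 15-minute readings downloaded from a tagged box
t0 = datetime(2012, 10, 1, 8, 0)
readings = [(t0 + timedelta(minutes=15 * i), temp)
            for i, temp in enumerate([5.0, 5.5, 9.2, 10.1, 6.0, 4.8, 1.5, 5.2])]
for exc in find_excursions(readings):
    print(exc["start"], exc["count"], exc["worst"])
```

The same check run at each hand-over point is what turns several independent chains into one chain that every party can react to.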
Fruit and vegetable exporters realised that RFID would give them actual data, whereas previously they only used temperature monitors in case there was a dispute over responsibility for breaking the cold chain. Because of this, they - particularly in South America and South Africa, major exporting regions - just went for it. Exporters could finally achieve their wish of "monitoring food from farm to fork", not only in terms of temperature, but also the much-needed traceability to assure consumers that what they had on the plate really came from where they thought it had come. The most recent case of chaos, with terrible health and economic consequences, came when Spain was forced to stop its exports of vegetables to the rest of Europe because E. coli was suspected to come from its cucumbers. It was later found that the contamination did not come from Spanish cucumbers, but the economic damage to Spanish growers and exporters was done. A system manufactured by the company Stepac in Israel was the first to enter this market; their RFID system works in such a way that as soon as pallets reach their destination, the shipper can see online the condition of the shipment: temperature, humidity and CO2. The switch from electronic data loggers to RFID was followed by a large number of exporters and growers who were happy to pay a bit more for a technology that gave them so much information. Stepac has focused its development on the food industry with success, but has also opened a path to change, with no return to more traditional systems. As a reader from the pharma industry, you may think that this is new. It is not. The switch started happening over two years ago and today is almost complete in some countries, while most pharma companies are still in the RFI (request for information) phase. The "D" is still missing. The next question is: are these systems good enough for pharma? Just consider that while the classic range of control for biotech products like vaccines is 2-8°C, asparagus and cherries must be kept between 1.1°C and 2.2°C.
Therefore excellent accuracy and resolution are as fundamental in the conservation of food as they are in biotech and pharma. And what about the difference in value, and the "it is not the same thing" argument? Find out the cost of a shipment of cherries, and the health consequences of a broken cold chain in the food industry, and you will start to realise that the food supply chain - particularly for perishables - also has very tough regulations, just like pharma and biotech. A few years ago at a cold chain conference I heard a representative of a large biotech company say that perhaps they should also look at what the food industry is doing in cold chain management and learn from it. She was right, but it is doubtful many companies took this initiative, and there are reasons.

RFID Technology for Pharma

If the technology is there, and most pharma and biotech companies know it, what has blocked them from taking the step to adopt RFID technology for their cold chain management? Probably a combination of three facts. First, the qualification of new products and validation of new processes take a long time and are highly costly in the pharma industry, certainly more so than in the food industry. The traditional pharma company therefore changes something in its operations or processes only if it is no longer working, or if there is a specific project with a clear reason to change (e.g. to reduce cost or improve the frequency of monitoring). This is fine until companies realise that the cost of waiting is sometimes higher than the cost of innovating. Secondly, this same resistance to change did not encourage temperature monitor manufacturers to invest in solutions for the pharma or biotech industries. In other words: why invest in innovation for an industry that will hardly change its current solutions anyway? This can be seen in the fact that the number one supplier in temperature monitoring today, Sensitech, is still the major supplier for the industry, despite the industry seeing that some of the newer players were being more innovative.
Third, in the pharma industry, a process with no data to prove it is safe will not be adopted. Since there was no data on whether biotech products could be affected by radio frequencies, there was no way to prove that they could not be, and biotech companies did not wish to take the chance of being the first to find out.

Some biotech companies have been trying to find such data, with positive results. Abbott Laboratories gave a presentation at a Cold Chain Logistics Congress in Rotterdam in 2010 arguing that biopharma was ready for RFID. This was based on the results of a publication produced together with the University of Florida, and it triggered enthusiasm that something might be about to change. Other biotech companies have been searching for information, with frequent RFIs on RFID temperature monitoring. And this is good news for the industry, as the technology itself has never been more ready. Two companies that have pushed the technology to be suitable for pharma are Intelleflex and CAEN RFID. Both systems have their advantages and disadvantages, and I would just like to set out the major difference so that the reader can understand what to look for. Intelleflex uses a proprietary frequency, while CAEN RFID developed a technology using the standard UHF frequency. To simplify the matter: the major advantage of Intelleflex is its capacity to read tags from up to 100 metres, since its readers will only read Intelleflex tags. The CAEN RFID reader has a shorter range because it uses the standard frequency and could read anything in its environment, but has the advantage that any UHF reader (including standard PDAs) can read its tags. Basically, the major advantage of Intelleflex is its long range, which facilitates installation, while the major advantage of CAEN RFID is that it does not require proprietary readers. Both systems can suit specific needs, and both deserve credit for innovating to be suitable for the pharma industry. The question is: now that these and other valuable manufacturers have committed to innovating and adapting RFID technology for pharma, will pharma and biotech companies also take the step to appreciate and adopt this innovation? Time will answer this, but time is not what innovative companies have in excess.
It will be up to the pharma company to reach out and benefit from innovation, or to remain on more traditional systems despite their limitations and higher operational costs. RFID may also not be the best solution for all applications; perhaps in the end it will be a combination of RFID and more traditional temperature monitors. The key will be the ability to integrate data from conventional data loggers, RFID tags and all the other information that is crucial to the management of the cold chain and the shipping of products, so that the huge amount of data collected starts to make sense for the users. For this, pharma companies must also be willing to come to manufacturers and simply ask for what they want: monitoring, data management, shipment tracking, inspection, tracing, proactive action or cost reduction. I am sure that, whether from today's players or from new ones to come, the solution is there - as long as manufacturers are told what is needed. Needs will evolve, but so will the solutions out there integrating RFID with the latest communication technologies.

Figure: ECCS I-PLUG logger

The Future of RFID

RFID is old technology, but innovation in the types and performance of sensors beyond temperature - humidity, shock, light, CO2 and others - using RFID as a form of communication has numerous applications beyond pharma and beyond the cold chain. Authentication is the most important. There has been standardisation in protocols such as EPCglobal Class 1 Gen 2, as well as the use of UHF as the standard frequency in most countries. This has been the first step in allowing RFID technology to help instruments be compatible with each other, and therefore to reduce capital investment when sensors are integrated into company process

systems. Developments on different frequencies for more proprietary systems, facilitating installation and maintenance, will also continue, making it easy for companies to try what RFID has to offer. But the most exciting thing of all will be to see, for the first time, technology making a difference to the one who will really benefit from it: the patient. Technology today can monitor seamlessly from production to the patient, and the patient should sometimes have the right to see the cold chain graph, or the green LED, that assures him that what he is ingesting has the same efficacy it had when it left production. Are pharma companies willing to evaluate this scenario? Utopia? I am sure we will be there soon.

Alex Guillen is currently CEO of Escort Cold Chain Solutions SA (ECCS), headquartered in Switzerland, and formerly Director of Commercial Operations - Public Markets for Novartis Vaccines. In recent years ECCS has evolved from the exclusive distributor of Escort data loggers for the cold chain market into an independent provider of RFID and real-time monitoring solutions. Today ECCS has its own range of temperature monitors and time temperature indicators (TTIs), making particular use of the latest communication technologies such as tablets and smartphones, and transferring and managing data in the cloud. Email: aguillen@




Automating and Accelerating the Environmental Monitoring Process in Pharmaceutical Manufacturing

As part of a highly regulated industry, pharmaceutical companies must perform various levels of product monitoring in the manufacturing process. In addition to product testing, the manufacturing environment must also be tested. This includes testing of the room, surfaces, air and personnel throughout the manufacturing cycle. In large environments, this can involve a significant number of samples that must be captured, tracked and reviewed after incubation. Automating even a portion of this process can provide tangible benefits, and accelerating the process as part of a rapid testing programme can bring product to market faster.

The current test used in microbial quality control, the culture-based test, has been the staple for over 100 years. It has many advantages that have made it the gold standard for quality control testing in pharmaceutical manufacturing. Unfortunately, the test has drawbacks, specifically in the area of time: the micro-colonies need to grow to a size the human eye can see, which for the traditional environmental test can take 3-5 days.

The traditional environmental monitoring test is manual, and leverages a contact plate and growth over time to get results. What makes environmental monitoring uniquely challenging is the large number of samples that need to be managed, handled and tracked. In a manual process, the larger the number of samples to be managed, the greater the chance for error, leading to time-consuming and costly investigations.

Figure 1 illustrates the various steps associated with an environmental monitoring process. It represents one example of the activities that have to be performed when executing environmental monitoring. The analyst starts his day by gathering test materials, then moves through the process collecting samples in an area, gowning in and out, collecting samples in the next area, and finally returning to the incubator, filling out forms and monitoring cassettes. This sample process includes up to 23 unique steps. Because environmental monitoring involves a large number of tests, certain steps in this process, like recording counts or entering data into a LIMS, become more time-consuming. It is in these areas where automation can have an impact.
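Recording counts and entering data into a LIMS are exactly the steps that lend themselves to scripting. The sketch below stages sample counts as a CSV import file and flags out-of-limit results before review; the record fields, action limit and file layout are all hypothetical, since real LIMS import formats are system-specific.

```python
import csv
import io
from datetime import datetime

# Hypothetical record for one environmental monitoring sample: where it was
# taken, when, and the colony count read after incubation.
samples = [
    {"sample_id": "EM-0001", "location": "Room 12 / Filling line", "type": "surface",
     "collected": datetime(2012, 10, 1, 9, 5), "cfu_count": 0},
    {"sample_id": "EM-0002", "location": "Room 12 / Operator glove", "type": "personnel",
     "collected": datetime(2012, 10, 1, 9, 20), "cfu_count": 3},
]

ACTION_LIMIT = 1  # illustrative only; real alert/action limits are site-specific

def to_lims_csv(samples) -> str:
    """Render samples as a CSV import file, flagging counts over the limit
    so out-of-limit results are obvious before review."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["sample_id", "location", "type", "collected", "cfu", "flag"])
    for s in samples:
        flag = "OOL" if s["cfu_count"] > ACTION_LIMIT else ""
        writer.writerow([s["sample_id"], s["location"], s["type"],
                         s["collected"].isoformat(), s["cfu_count"], flag])
    return buf.getvalue()

print(to_lims_csv(samples))
```

Even this much automation removes a transcription step, which is where manual processes tend to generate the errors that trigger investigations.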

Automation could eliminate steps following sample capture, such as retrieving or moving plates, and counting plates. Technologies coming in 2013 can eliminate nearly all the steps after the sample is captured, potentially including a few of the administrative tasks like organising paperwork and sample labelling. Automation could take the 23 steps down to seven, or perhaps fewer. In addition, technologies coming in 2013 will also provide the results in about half the time. Microbial quality control professionals

Figure 1: Sample steps in the environmental monitoring process
Figure 2: Steps to be eliminated by automation


are overburdened with other responsibilities; this approach saves time and eliminates steps. When technology is discussed, it can mean many things to many different businesses. Every company is at a different stage in its use of technology in the labs. Some are highly automated, with a laboratory information management system (LIMS), and others are still using paper forms and spreadsheets. Automated rapid detection technology needs to fit into either environment. Even if the business does not have a LIMS, automation can still provide the value of detection, enumeration and reporting in half the time. In a LIMS environment, the process is simplified to include more automated steps in sample tracking and routine updates to the LIMS, allowing all stakeholders to operate from the same information. While the process may be automated, there are several other criteria that must be met to facilitate the shift to automation. These include sample preparation, breadth of application and availability of the sample. The sample preparation must mimic the traditional method as closely as possible. The application must handle air, surface and personnel testing. The technology should function in a way that provides rapid results, and it should be non-destructive. Benefits beyond automation and the use of standard growth media include reduced time to results, comparable sample preparation, and support for high volumes. As an example, feasibility studies have shown that a typical three-day test can provide results in around 1.5 days, and a typical five-day test at one or two temperatures provides results in about 2.5 days. An area where automated detection can also add value is interim results. Because the automation regularly analyses the sample for changes at pre-set intervals, positive results can be attained within hours, and final results in the timeframes previously discussed.
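The interim-result logic described above might be sketched like this; the read interval, final read time and alert threshold are all assumptions for illustration, not parameters of any particular instrument.

```python
# The instrument images each cassette at set intervals; a colony count at or
# above the alert threshold before the final read already triggers an alert.
# Interim reads are assumed to arrive as (hours_elapsed, colony_count) pairs.
def interim_status(reads, final_hours=36, alert_at=1):
    """Return ("positive", h) at the first read meeting the alert threshold,
    ("negative", final_hours) once the final read is clean, else ("pending", h)."""
    last = 0
    for hours, count in sorted(reads):
        last = hours
        if count >= alert_at:
            return ("positive", hours)
    if last >= final_hours:
        return ("negative", final_hours)
    return ("pending", last)

print(interim_status([(4, 0), (8, 0), (12, 2)]))  # flagged long before the final read
```

The point of the sketch is the asymmetry: a positive can be reported as soon as it appears, while a negative still requires the full incubation window.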
This allows the QC team to be pro-active in response to positive events. The use of automation simplifies a

user's workflow. The user gathers her cassettes and takes her work order out to perform sampling. She captures her samples as she moves through the manufacturing area, perhaps labelling as she goes. When she returns, she loads the cassettes into the automated technology and the instrument takes over. From here the instrument images the samples every few hours and, based on defined parameters, will alert users to any variations or positives; in the event of a sample with no issues, it simply disposes of that sample and provides reporting, either through an internal report engine or through integration with a LIMS. This frees the analyst to work on more pressing projects, and to react only if she receives an alert.

One area of concern for businesses is the validation of these types of technologies. It is important that the right performance verification tests are done and that the information is available to any company interested in evaluating the technology. Using the Growth Direct™ as an example, the organisms being tested to ensure the technology performs as expected include the typical USP organisms, as well as difficult, finicky organisms. We are testing disinfectants, surfaces and various air monitors to simplify the validation process for businesses. Continuing with the Growth Direct™ example, the sample capture is exactly the same as the traditional method using contact plates. The validation entails a two-pronged approach: the first part is validation of the automated incubation and counting, and the second part is validation of the growth-based media.

For environmental monitoring, this means proof of equivalence of microorganism capture through microbial spike and recovery experiments, with enough replicates to allow estimates of accuracy with suitable precision for statistical analysis.

In conclusion, opportunities exist to improve the efficiency and reduce the time to results of environmental monitoring through the use of automated detection and enumeration. Reduction of time and effort allows analysts to focus on higher-value activities. Automated rapid detection addresses these needs by not only providing results in half the time, but also significantly reducing the steps associated with the traditional method. Validation of this type of technology can be straightforward.
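To make the spike-and-recovery idea concrete, here is a minimal sketch of how percent recovery and its precision might be summarised across replicates. The counts, replicate number and normal-approximation interval are illustrative assumptions, not a compendial procedure:

```python
import statistics as stats

# Sketch of summarising a spike-and-recovery experiment: each replicate's
# recovered count is expressed as a percentage of the spiked inoculum,
# then a rough 95% confidence interval is computed around the mean.
def recovery_summary(recovered_cfu, spiked_cfu):
    """Return mean percent recovery and a rough 95% confidence interval."""
    recoveries = [100.0 * r / spiked_cfu for r in recovered_cfu]
    mean = stats.mean(recoveries)
    sd = stats.stdev(recoveries)
    half_width = 1.96 * sd / len(recoveries) ** 0.5
    return mean, (mean - half_width, mean + half_width)

# Five hypothetical replicates, each spiked at 50 CFU
mean, ci = recovery_summary([46, 52, 49, 55, 48], spiked_cfu=50)
```

In practice the replicate number would be chosen to give the interval width the statistical analysis requires, which is exactly the "suitable precision" point made above.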

Julie Sperry is Chief Commercial Officer at Rapid Micro Biosystems. Julie brings more than 30 years' experience in healthcare, pharmaceutical and manufacturing operations. Ms. Sperry previously served in various general management, sales and marketing leadership roles for a number of companies in the pharmaceutical research, development and production markets. Ms. Sperry graduated from Bowling Green State University in Bowling Green, OH, USA with a degree in Chemistry and Microbiology, and from Rockhurst College in Kansas City, MO with an MBA. Email:


Creating Component Quality: Understanding the Holistic Quality by Design Process

Pharmaceutical manufacturers have challenged packaging manufacturers to increase the quality of components used in parenteral packaging. As new, sensitive pharmaceuticals and biopharmaceuticals are prepared for market, regulatory agencies have also asked manufacturers to build quality into products from the start. By improving the quality of the drug product's container closure system, pharmaceutical packaging manufacturers can help to ensure consistent reliability throughout a drug product's lifecycle.

The importance of risk assessment and mitigation has spawned the development of next-generation components. These components must be created with enhanced quality and with an increased process understanding in order to meet or exceed the expectations of pharmaceutical manufacturers and, by extension, patients. High on the list of desired attributes are components with chemical cleanliness for sensitive drug products and improved material consistency. Such consistent, reliable, high-quality components must also be free from foreign contamination (particulates, fibres) and surface inhomogeneities (e.g. black spots). In order to provide pharmaceutical companies with components that also aid flawless machinability and stoppering, most manufacturers are significantly increasing requirements for component production. Top concerns include: transparency of quality systems and products; fast response on quality issues; and

continuous process and product improvement. Pharmaceutical manufacturers are also seeking partners who have built quality

directly into the manufacturing process, and can provide a certificate of analysis for those products that meet the compendial requirements of



global regulatory agencies. New elastomer formulations made using cleaner and fewer ingredients have improved component quality. However, a deeper process understanding must be developed and implemented to mitigate risk during the manufacturing process. Forward-thinking manufacturers have incorporated holistic Quality by Design (QbD) processes into manufacturing sites, and by doing so are now achieving the highest possible process understanding. Such knowledge lends itself to increased process stability, which corresponds with an increase in quality for packaging components. Clean elastomeric material and QbD processes are needed to provide packaging options that can increase the quality, functionality and compatibility of the component with the drug product.

Packaging Selection Can Affect Quality
Pharmaceutical manufacturers must ensure that the packaging components selected for their drug product containment do not interact with the drug product itself and are applicable for the intended use. Today's formulations for pharmaceutical use should meet the following properties:
• Lowest possible extractables and leachables profile
• High moisture and gas protection/barrier properties
• Low fragmentation tendency
• Excellent physical properties
• Excellent self-sealing properties after needle removal
• Meet Ph.Eur., USP and JP requirements

The formulation provides drug product manufacturers with pharmaceutical-grade components that withstand industry-standard sterilisation processes using steam, ETO or gamma irradiation (to a certain intensity). Modern halobutyl formulations are not cytotoxic and are produced according to cGMP industry standards. Early partnerships with packaging manufacturers can help to determine proper selection based on the drug

and component formulations. When considering drug lifecycle planning, modern formulations can help to improve transition plans for pharmaceutical manufacturers. Drug products often require a variety of container closure systems during the course of development and commercialisation. For example, a drug manufacturer may initially present the product in a stopper-vial-based system. As the drug moves toward commercialisation, a syringe-plunger-based application may offer differentiation and ease of use for patients. Selecting an elastomeric formulation that can be used for the production of various designs, including stoppers, plungers and other primary packaging components, will aid in this transition. From a chemical analysis standpoint, this eases testing requirements and mitigates risk as manufacturers move from one container to another, because the components in contact with the drug product remain the same.

Developing a Deeper Process Understanding
It's no secret that higher quality often means higher costs for manufacturers. While pharmaceutical companies work to meet new quality and compliance paradigms, a balance must be achieved between the realities of managing costs to provide a product meeting payers' requirements and facilitating profitability to continue adequate business reinvestment. To balance these priorities effectively, the adoption of Quality by Design (QbD) concepts is gathering momentum.

QbD, which focuses on patient needs, is a departure from traditional manufacturing processes that are based primarily on experience and practicality. Although time-tested and useful, traditional manufacturing ensures quality through inspection. The focus is on equipment, capabilities and process reproducibility. Improvements are made on a one-off basis, and tend to be reactive instead of proactive. In a QbD model, a systematic, science-based approach is used to view the process methodically. This approach ensures quality through

understanding the product and process parameters, and by moving controls upstream. Unlike traditional manufacturing methods, the QbD focus is on process robustness, and on understanding and controlling variability. Improvements are made proactively and on a continual basis, instead of waiting for an issue to provoke change. Risk assessments of all process steps involved, including incoming inspection, compounding, moulding, trimming, washing, sterilisation, packing and transportation, are done continuously to assure an in-depth understanding of the process. Such an understanding can help minimise product variability.

The QbD framework can be adopted by pharmaceutical industry suppliers in order to provide benefits to the pharmaceutical drug manufacturer. High-quality components and sterile packaging must be well understood to provide significant benefit to the overall container closure or delivery system. This need is a key driver for the adoption of QbD for primary packaging materials such as elastomeric stoppers used for vial closures and plungers used for prefillable syringe systems. Using the current pharmaceutical framework for QbD as described in ICH Q8(R2), Pharmaceutical Development, industry-leading suppliers build QbD-based components with the following elements:
• Quality Target Product Profile (QTPP) – a prospective target summary of the quality characteristics required for the product; it is the basis of design for product development
• Critical Quality Attributes (CQAs) – the specific quality attributes needed to achieve the QTPP
• Risk assessments – to link material attributes and process parameters to CQAs
• Design space – a multidimensional combination of the interactions of input variables (e.g. material attributes) and process parameters that has been demonstrated to provide assurance of quality
• Control strategy – a planned set of controls, derived from current product and process understanding, that ensure process performance



and product quality
• Product lifecycle management and continual improvement – includes monitoring and management of all phases of the product lifecycle, from initial development through marketing and finally discontinuation

These elements build the basis for an ongoing, living process that maximises product and process understanding while providing improved product consistency. Pharmaceutical customers will save money from a total cost of ownership standpoint when they use components based on this process. A Quality by Design process also offers increased transparency, because the supplier provides a managed knowledge base of technical information and product and process documentation.

Creating the QbD Line
A process designed with QbD principles requires a significant upfront investment. However, it delivers an improved, data-driven output, providing manufacturers with superior product and process understanding that minimises process risk, emphasises patient-critical quality requirements and enhances drug product effectiveness. Manufacturing with QbD principles provides a more efficient and consistent process, resulting in a higher-quality final deliverable with well understood and controlled sources of variation.

In a holistic QbD process, an understanding of raw materials and their impact on the final product is necessary to create the QTPP. The QTPP forms the basis for drug product formulation and process development. A series of considerations should be made for the QTPP of a sterile product, including the desired product performance based on the intended clinical setting, dosage strength and delivery mode, pharmacokinetic characteristics, drug product quality criteria, sterility and the container closure system itself, to mention just a few. Scientific rationale and quality risk management are used to define

Critical Quality Attributes (CQAs) and Critical Process Parameters (CPPs) for a given product and process in support of achieving the QTPP. Building quality into a component's design will help to:
• Develop a control strategy
• Ensure product quality throughout the product lifecycle
• Increase product and process knowledge
• Increase transparency and understanding for regulators and industry
• Enhance evaluation of changes

A strong QTPP will help create a high-quality component that can be manufactured consistently. That said, manufacturing is not an exact science; variations can and do occur. However, with a production process designed with QbD principles in mind, manufacturers will not only minimise variations but also achieve a greater understanding of how and why variation occurs.

To create a QbD manufacturing line, environmental upgrades, including the implementation of cleanroom best practices, are needed. Proper gowning is essential and should begin in the personnel locker rooms, where street clothes and cleanroom gowning must be separated. Deionisation at critical manufacturing steps also helps to avoid contamination caused by static electricity. Milling and extrusion should be completed in an ISO 8 environment with state-of-the-art equipment and HEPA filtration in all process rooms. Moulding should be completed on high-tonnage presses for improved dimensional control. Automated spraying for consistency of release agent and air-assisted mould unloading can reduce sheet distortions. O-type trim presses and enhanced trim dies can help to lower particle contamination. Precision trim control and automated control of web positioning and spraying for lubrication minimise variability.

As the market moves toward cleaner products and maximum product and process understanding, ultra-clean elastomeric formulations and products manufactured with QbD principles will meet these stringent quality needs. By partnering with a packaging manufacturer early in the development process, pharmaceutical manufacturers can meet and exceed quality expectations for drug product packaging, and deliver a safe, effective drug product to their customers.

NovaPure® components from West were developed following QbD principles to ensure quality, safety and efficacy throughout a drug product's lifecycle. NovaPure® is a registered trademark of West Pharmaceutical Services, Inc., in the United States and other jurisdictions.
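As a concrete illustration of the element hierarchy discussed above (QTPP, CQAs, and risk links to process parameters), here is a minimal sketch of how a supplier might record those links. All class names, attributes and example values are hypothetical, not taken from ICH Q8(R2) or any supplier's system:

```python
from dataclasses import dataclass, field

# Toy data model of the QbD elements: a QTPP owns a set of CQAs, and each
# CQA is risk-linked to the process parameters that can affect it.
@dataclass
class CQA:
    name: str
    target: str
    linked_process_parameters: list = field(default_factory=list)

@dataclass
class QTPP:
    product: str
    cqas: list = field(default_factory=list)

qtpp = QTPP("elastomeric plunger")
qtpp.cqas.append(
    CQA("particulate level", "within specification per unit",
        ["washing cycle", "trim die condition"]))
```

The point of such a structure is traceability: when a process parameter drifts, every quality attribute it is linked to can be identified immediately, which is the upstream-control idea the QbD model emphasises.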

Sascha Karhoefer holds an engineering degree in Biotechnology from the University of Applied Sciences, Aachen, Germany. He joined West in 2005, and spent more than five years with the Technical Customer Support Team, where he was responsible for West's biopharmaceutical customers and served as the key European liaison, representing the elastomers of West's Japanese partner as product and technical support manager. He assumed the position of Manager, Injectable Container Solutions Platform Europe in 2011 and was promoted to Director, Global ICS Platform in 2012. Email: sascha.karhoefer@





Something in the Air – Ionisation as a Solution to Static

Around 600 BC the philosopher and mathematician Thales of Miletus reported that after rubbing a piece of amber on the fur of a cat, the amber attracted and held feathers – the first account of static electricity (literally, electricity at rest). What Thales observed is what we now know as triboelectric charging, where certain materials become electrically charged through frictional contact with a different material.

While generating a controlled static charge has positive applications in some manufacturing scenarios – by, for example, allowing temporary adhesion between two or more surfaces of opposite polarity – in many operations across a multitude of industries uncontrolled static electricity causes serious production problems. These range from downtime due to machinery jams, through product contamination, to product loss in industries such as electronics, where even a low static voltage can totally destroy sensitive components. People can be hurt, too, with employees suffering electric shocks. And in the most extreme cases, where flammable materials are used, there is also the real possibility of fires and explosions – as the tragic death of a Pennsylvania man in 2010 while filling his car with gas graphically demonstrated.

Uncontrolled static attraction is a particular problem for the plastics industries, and consequently for medical device manufacturers. Among the processes where it can be an issue are injection-moulding, blow-moulding, thermoforming, parts conveying and collection, and assembly. Even in the most stringent cleanrooms, static charge attracts particulates from people, processes and equipment, so it is important that appropriate measures are taken to ensure that static is kept to a minimum, if not completely

eliminated. This article looks at the common production issues arising from static, explains exactly what static is, and describes the principal techniques for neutralising it. In particular, it explains why active air ionisation is an especially effective and practical solution.

The Damage Static Can Do
The primary problems resulting from electrostatic charges are:

Electrostatic Attraction (ESA): Not only are airborne contaminating particles attracted to charged surfaces, but charged airborne particles can be attracted to surfaces that are totally free of any charge. This problem affects most plastic-based industries in one form or another, and static in medical device manufacture is the biggest single cause of rejected products, affecting a diverse range of devices including catheters, syringes, replacement joints, pacemakers and stents.

Material Misbehaviour: Uncontrolled ESA gives rise to other problems besides product contamination. It can disrupt automated processes by causing parts to stick to each other or to equipment, or to be misrouted or repelled. This imposes significant cost penalties because it forces manufacturers to run their machines at much slower speeds than might otherwise be the case.

Operator Shocks: Operator shocks are typically the result of an accumulated charge or ‘battery effect’ occurring during the collection of parts in a bin or assembly area, and while they can be painful, in most cases the effects are non-life-threatening and short-lived. However, there are also cost implications in the ‘recoil’ reaction associated with the initial shock, after which there

can be a moment of disorientation, bringing with it subsequent hazards such as collision with other operators and/or machinery. More stringent health and safety standards place an increasing burden of responsibility on manufacturers to protect staff from static discharge.

Understanding Static
When a material or object holds a net electrical charge – positive or negative – it is said to have a static charge. The term ‘static’ is relative, as in many cases static charges will slowly decrease over time. How long depends on the resistance of the material, and for practical purposes the two extremes can be taken as plastics and metals. Plastics generally have very high resistances, so can maintain static charges for long periods; metals have very low resistances, and an earthed metal object will hold its charge for an imperceptibly short time.

The voltage present on a material depends on the amount of charge on the material and the capacitance of the material. The simple relationship is Q = CV, where Q is the charge, V the voltage and C the capacitance. It can be seen that for a given charge on a material, the lower the capacitance the higher the voltage, and vice versa. Plastics generally have very low capacitance values, and hence a small charge can produce very high voltages. This is why problems with static are most noticeable when working with plastics, because it is the voltage level which causes the attraction of dust, operator shock and misbehaviour of materials.

Static electricity results from an imbalance in the molecular construction of the material. In a balanced atom the positive charges in the nucleus are equal to the negative charges of the electrons orbiting the nucleus, so the overall charge is zero.
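The Q = CV relationship explains why plastics are the troublemakers: rearranged as V = Q/C, a fixed charge on a low-capacitance part produces a very high voltage. A quick numerical sketch (the charge and capacitance values are hypothetical, chosen only to illustrate the scale difference):

```python
# Same small static charge, very different voltages (V = Q / C).
def voltage(charge_coulombs, capacitance_farads):
    return charge_coulombs / capacitance_farads

q = 1e-7                         # 0.1 microcoulomb of static charge
v_plastic = voltage(q, 10e-12)   # ~10 pF, typical of a small plastic part
v_conductor = voltage(q, 10e-9)  # ~10 nF, a much larger capacitance
# v_plastic is 10,000 V; v_conductor is only 10 V
```

A thousandfold difference in capacitance turns a harmless 10 V into a 10 kV surface, which is exactly the regime where dust attraction and operator shocks appear.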



This balance can change, however. If electrons are removed, the result is a greater positive charge in the nucleus; if extra electrons are added, the overall charge becomes negative. In both cases, static electricity is the result. There are three main causes of such imbalances: friction, separation and induction.

Friction: As two materials are rubbed together, the electrons associated with the surface atoms on each material come into very close proximity with each other, and can move from one material to another. The direction in which the electrons travel – from Material A to Material B or vice versa – depends on the Triboelectric Series, which ranks materials by the polarity of charge separation when one material is touched by another. A material towards the bottom of the series, when touched to a material near the top of the series, will attain a more negative charge, and vice versa. In addition, the harder the materials are pressed together, the greater the exchange of electrons and the higher the charge generated. As a practical example, if a piece of polythene is rubbed on a nylon carpet with gentle force, a moderate negative charge will be generated on the polythene, whereas if the force is increased a larger negative charge will be achieved. The speed of the rubbing action also affects the level of charge: the faster the rubbing, the higher the charge. This is because the surface electrons gain heat energy generated by the friction, and this extra energy allows them to break their atomic bonds and transfer to other atoms.

Separation: When materials are in contact, surface electrons are in close proximity, and when the materials are separated the electrons tend to adhere to one material or the other, depending – again – on their positions in the Triboelectric Series. The faster the separation, the higher the charge generated; the slower the separation, the lower the charge. A common example is a PVC web moving over a Teflon-coated roller.
As the two separate, the electrons tend to adhere to the Teflon, generating a

net negative charge on the Teflon and a net positive charge on the PVC.

Induction: The surface of a material in close proximity to a high positive voltage will tend to become positively charged. This is caused by ionisation of the air between the surface of the material and the voltage source, which carries surface electrons away from the material to the source. This may occur when an operator is working

near charged materials and becomes charged himself. On touching an earthed object he will discharge to it and get an electric shock.

Neutralising Static through Active Air Ionisation
The same fundamental principle governs every technique for neutralising static: where a material has a positive surface charge, electrons must be added to the



surface to re-balance the charge; where the surface charge is negative, the excess electrons must be removed. The two basic techniques for doing this are conductivity and replacement. The former involves making an insulator conductive and then grounding it. Ways of achieving this include humidification and applying anti-static chemicals (either as coatings or as additives to plastics during manufacture). Carbon can similarly be added during manufacture to make plastic conductive.

When it comes to tackling static during the production process, the replacement technique using active air ionisation is more practical. Active air ionisation employs high-voltage AC or ‘pulsed’ DC to produce ionised air to neutralise surface charges. The voltage is fed to an array of titanium emitter pins mounted on an ionising bar. This creates a high-energy “ion cloud” made up of a very large number of positive and negative ions, which are attracted to particles or surfaces carrying an opposite charge, thus rapidly neutralising the surface.

The choice of AC or DC is determined by the application. An AC system can only generate ions in accordance with the AC frequency. Pulsed DC ionisation allows control of both the frequency and the relative balance between positive and negative ions, offering optimum solutions for specific materials and more demanding applications. For example, lower frequencies allow ionisation over longer distances, and the balance control allows output to be adjusted to suit the charge polarity on the target. Compared to AC units, the 977CM pulsed DC controller operates at lower frequencies, between 1 and 20 Hz, and features a variable output voltage up to 30 kV peak-to-peak. The ionising bar (see diagram) consists of a series of emitters connected alternately to the negative and positive outputs of the 977CM. The casing of the bar is made of plastic, and hence there is no proximity earth.
The output from the power supply is effectively a square wave, switching from negative to positive at the chosen frequency. Looking at the

positive half of the waveform: the controller switches on the high output voltage connected to the positive emitters, which establishes an electric field between the emitter and the surrounding earthed objects. At the sharp point of the emitter this field is extremely strong, and positive ions are produced. The like charge of the ion and the emitter drives the ions away from the bar. On the negative half of the cycle, the power supply delivers a high negative voltage to the alternate set of emitters and, in similar fashion to the AC eliminators, negative ions are produced at the emitter point. A statically-charged object in the vicinity of the ionising bar will attract or repel the ions, depending on their relative polarities. When the ions reach the charged surface, electrons are exchanged and the surface is neutralised.

Low-frequency operation makes pulsed DC eliminators well suited to long-range neutralisation. The relatively long duration of each half of the cycle causes large “clouds” of ions of alternating polarity to be emitted from the bar. The separation between the positive and negative ions close to the bar greatly reduces the rate of recombination (positive and negative ions coming together and cancelling each other out). Note that at long distances from the bar fewer ions are deliverable to a statically-charged surface, so the speed of neutralisation is reduced. Therefore, when utilising pulsed DC equipment, thought must be given to the distance at which the bar will be mounted from the target surface.

An additional feature of the pulsed DC system is that the output waveform can be altered: the durations of the negative and positive sections can be increased or decreased. For instance, if the charge to be neutralised is known to be positive, the duration of the negative section can be increased and the positive part of the waveform reduced.
This will increase the production of negative ions and decrease the production of positive ions, making the system more efficient at neutralising the positive charge. Similarly, for a known negative charge the output can be biased towards

positive ion production.

As awareness grows of the problems uncontrolled static can cause, more and more medical device manufacturers are installing static neutralisation solutions. A typical example is a customer specialising in the development and manufacture of mould tools, plastic injection-moulded components and the assembly of complex devices for the pharmaceutical, drug delivery, medical and healthcare industries. This company was supplied with static solutions for the assembly of injection-moulded drug delivery devices, along with static control equipment throughout the injection-moulding process, right up to hand assembly. During assembly of the plastic components, ionising blowers and nozzles neutralise the parts and remove excess plastic flash and statically-attracted airborne contaminants. This is carried out within a Class 7 cleanroom. During operation of the bench-mounted ionising nozzle, the removed particulate is directed towards a tack-mat area, where it is captured to avoid future recontamination of the product. Once clean, the drug delivery device is manually inspected for cleanliness under an illuminated magnifying glass.
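The duty-cycle biasing described above can be sketched numerically. The article states the 977CM runs at 1–20 Hz with up to 30 kV peak-to-peak; the amplitude, sample count and bias fraction below are illustrative assumptions only, not controller specifications:

```python
# Sketch of a biased pulsed-DC output: one cycle of a square wave in which
# the negative portion is stretched to favour negative ion production.
def biased_square_wave(neg_fraction, v_peak=15_000, samples=100):
    """Sample one cycle at `samples` points. The first neg_fraction of the
    period drives the negative emitters, the remainder the positive ones."""
    return [-v_peak if i / samples < neg_fraction else v_peak
            for i in range(samples)]

# Spend 70% of the cycle negative, e.g. to neutralise a known positive charge
wave = biased_square_wave(neg_fraction=0.7)
```

Assuming ion output is roughly proportional to the time spent at each polarity, a 70/30 split biases the ion cloud toward negative ions, which is the behaviour the text describes for neutralising a known positive charge.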

David Rogers is Business Unit Director for Static Control. He has responsibility for the Meech range of static eliminators and static generators, including product developments and technical support for Meech distributors and customers worldwide. With a background in science and engineering, he joined Meech in 1995 to provide additional sales support - a role he still carries out on a daily basis. Many years of fielding questions from different industries has given David an in-depth knowledge of static electricity related problems and their solutions in a range of diverse markets. Email:





Uses of Sieves in the Pharmaceutical Industry and the Increased Demand for Containment

A sieve or screener is an essential part of every pharmaceutical production process, particularly as product quality and integrity are so important. The use of a sieve safeguards against customer compensation or litigation, as it eliminates all oversized contamination. It therefore ensures that ingredients and finished products are quality assured during production and before use or dispatch. However, the design of sieving equipment has had to undergo radical changes in recent years to meet the new demands of companies manufacturing pharmaceuticals. These demands include improved productivity and product quality and, most importantly, improved health and safety for the operators of sieves and screeners. The latest generation of sieves has made large improvements to safety by containing the powders being processed, thus adhering to occupational exposure limits.

In basic terms, a sieve consists of a housing containing a removable wire mesh of a defined aperture size. This assembly is vibrated by an electric motor so that particles which are small enough pass through the mesh apertures, and any particles or contamination that are too big remain on top. Most units used in the pharmaceutical industry tend to be circular in shape and of a high-quality good manufacturing practice (GMP) design (see Figure 1). Stainless steel mesh with a high tolerance on the apertures is also specified to give excellent product quality.

Types of Sieving
There are two main types of sieving – safety screening and grading. This article will concentrate on sieves used for safety screening, but a quick explanation of grading will also be given. Safety screening of powders, sometimes known as control sieving or security/check screening, is carried out to ensure the correct product quality. The sieve removes any oversized contamination from the powder. This could be something which has accidentally found its way into the process line, such as packaging or a piece of PPE, or extraneous particles which may be inherent in the material. The removal of this contamination improves the quality of the powder and final product, and therefore protects the reputation of the pharmaceutical company. Grading or sizing of powders or granules is carried out to separate different ranges of particle sizes. For example, primary ingredients and intermediates need to be sieved to remove oversized and undersized particles, in order to ensure a correct particle size distribution ready for granulation and subsequent tablet pressing.
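The safety-screening principle is simple enough to express as a toy filter: particles at or below the mesh aperture pass through, and anything larger is retained as oversized contamination. The sizes below are in micrometres and purely illustrative:

```python
# Toy illustration of safety (check) screening on a sieve mesh.
def screen(particle_sizes_um, aperture_um):
    """Split particles into those passing the mesh and those retained."""
    passed = [p for p in particle_sizes_um if p <= aperture_um]
    retained = [p for p in particle_sizes_um if p > aperture_um]
    return passed, retained

# A powder batch with one oversized contaminant, screened on a 500 um mesh
passed, retained = screen([120, 250, 80, 1500, 300], aperture_um=500)
```

Grading works the same way, but with two or more apertures in sequence so that both oversized and undersized particles can be separated from the target size range.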

Where are Sieves Used? Most pharmaceutical processes are hazard analysis and critical control point (HACCP) controlled. This means that an analysis of the process is carried out in terms of where hazards can occur. Critical control points are identified and some form of prevention is put in place. Sieving equipment will help considerably at any point at which there is a risk of contamination entering the process. These critical control points are found in many different areas of the production process. On the primary side, a good example is where raw ingredients are de-bagged because of the potential for parts of the bag to be accidentally introduced into the process. Another example on the primary side is where mixing or blending takes place, as this is another area for potential contamination. On the secondary side, many pharmaceutical companies consider the finished powder packaging area critical and place a sieve here to prevent contamination and therefore customer complaints. Autumn 2012 Volume 4 Issue 4

Manufacturing

Features and Benefits of Check Screening Sieves
Sieves used for check screening are designed to be extremely simple to operate and maintain, with the emphasis on making them easy to strip down and clean effectively. Their compact design means that they can be placed in small or restricted-height areas of the production process – possibly where a sieve was not originally deemed necessary but is now essential. The sieve mesh itself is a removable item, so the aperture size of the mesh can be changed according to the powder being processed. Modern units use mesh that is securely bonded with adhesive to a frame, which gives a much higher tension in the mesh than older styles that secured the mesh with a clip or screws. A consistent, high tension level gives better throughputs and reduces blinding or blocking of the sieve apertures. Another recent development is the use of an FDA-approved adhesive to bond the sieve mesh to the frame. All other contact parts of the sieve are manufactured from stainless steel and can be polished to very low surface roughness (Ra) values in order to ensure good flow properties and easy cleaning. These components are simple to remove and wash in an autoclave or other cleaning vessel, thus removing any chance of cross-contamination between different batches of material.

Pneumatic Conveying and the Sieving Process
One of the most popular ways of transporting solids at manufacturing sites is through pneumatic conveying systems. Pneumatic conveying is often selected because it is totally enclosed and dust-tight, making it ideal for dusty or dirty materials. These systems can also be installed anywhere a pipeline can be fitted within a site. Sieve manufacturers have had to develop innovative machines to allow producers to sieve ingredients before, during and after pneumatic conveying.
Sieving ingredients before they enter the system ensures a product free of contaminants, which could otherwise lead to rejected products or potentially damage other equipment. Screening

material within or after the systems allows for a high level of quality control. There are two main types of pneumatic conveying – positive pressure and vacuum. Simply explained, positive pressure systems can achieve longer distances at higher rates, as jet blowers push air through the pipes. Vacuum systems, on the other hand, use suction to draw air and displace materials, and cannot generally achieve the distances and high rates of pressure systems. Different sieves must be used depending on the type of conveying system employed. For example, in most cases a certified pressure vessel is required for positive pressure pneumatic conveying. A manufacturer using a vacuum system, on the other hand, is not so restricted and can select from a wider range of sieve designs tailored for vacuum pneumatic conveying systems.

High-quality Mesh – What to Look For
Any company using a sieve needs to carefully consider the quality of the mesh being used, regardless of the exact sieve model. A poor-quality mesh can easily break, resulting not only in unnecessary downtime but also in compromised product quality. Mesh screen material, mesh size and mesh tension must all be considered. The most common screen material is woven stainless steel wire mesh, although more exotic metals, such as bronze, can be used if required for a specific application. Stainless steel is a reliable, durable material and is suited to most applications. Synthetic woven meshes, generally made of polyester or nylon, are also available where chemical compatibility is a concern. Historically, and particularly in the USA, mesh count – the number of apertures per linear inch – was used to specify the screen opening size of a mesh. However, this method often led to a false measurement, because the same mesh count can correspond to different opening sizes depending on the wire diameter. Nowadays it is more common to define the opening size directly in microns, i.e. by measuring the width of each aperture, which provides a much more precise and accurate measurement.
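The ambiguity of a bare mesh count can be made concrete: in a woven mesh, one repeat unit along a linear inch is one aperture plus one wire diameter, so the same count yields different openings for different wire gauges. A minimal sketch of the geometry (the wire diameters below are illustrative, not values from the article):

```python
def aperture_microns(mesh_count: int, wire_diameter_um: float) -> float:
    """Nominal aperture opening of a woven wire mesh.

    One linear inch (25,400 microns) contains `mesh_count` repeat units,
    each consisting of one aperture plus one wire diameter.
    """
    pitch_um = 25_400 / mesh_count
    return pitch_um - wire_diameter_um

# The same 100-mesh count with two plausible wire gauges gives
# noticeably different openings:
print(aperture_microns(100, 100.0))  # 154.0 microns
print(aperture_microns(100, 140.0))  # 114.0 microns
```

This is why a specification in microns is unambiguous while a mesh count alone is not.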
To ensure the optimal operating efficiency of the screen, it is crucial that the mesh is properly tensioned; otherwise the screen will not give its best performance.

Ultrasonic Deblinding System
Most powders can be screened quickly and accurately by a standard sieve; however, some pharmaceutical powders may be sticky or have irregularly shaped particles, which can cause mesh-blinding problems (see Figure 2). The method of ultrasonically

exciting the stainless steel mesh wires of a powder-screening machine with high-frequency, low-amplitude vibration to prevent the apertures from blocking has been used for over 25 years. The ultrasonic frequency is applied to the sieve mesh via an acoustically developed transducer (see Figure 3).

This breaks down the surface tension, effectively making the stainless steel wires friction-free and preventing particles both slightly larger and slightly smaller than the mesh apertures from blinding or blocking it. Screen blinding or blocking is a common problem when sieving difficult powders on screens of 500μm and below. It occurs when either one or a combination of particles sit on or in an aperture of the mesh and

stay there, or when particles adhere to the mesh wires, preventing other particles from passing through these openings. It is particularly common with sticky powders or materials which contain a large number of particles of a size similar to that of the apertures of the mesh. When blocking occurs, the useful screening area is reduced and capacity will therefore drop. The system works on the power by demand (PBD) principle, which solves the problem of uneven loading. Constant feedback from the separator screen to the PBD controls monitors the throughput of material in the system. When there is a heavy loading on the sieve mesh, PBD increases power, maintaining the amplitude of the ultrasonics to pass materials through quickly and efficiently without blinding. There are several knock-on benefits to eliminating blinding via an ultrasonic deblinding device. The first is that sieving capacities improve, increasing productivity. The second is that because the mesh is kept free from blockage for longer, manual cleaning is needed less frequently, and the chance of damaging the mesh is therefore reduced. Finally, ultrasonic deblinding systems enable powders to be sieved using meshes with smaller apertures. This enables even finer-quality products to be produced than was previously possible, or even allows powders to be screened that could not be sieved before.

The Effect of the ATEX Directive
Recent legislation has had a significant effect on the design of sieving equipment. On March 1st, 1996, the European Community adopted a Directive on equipment and protective systems intended for use in potentially explosive atmospheres (94/9/EC). 'Atmosphères Explosibles' is more commonly referred to as the ATEX Directive, whose primary function is to eliminate the possibility of explosions. It applies to electrical and mechanical equipment intended for use in potentially explosive atmospheres.
The Directive affects all industries involving powders, dusts and vapours, including food, metal powders, powder paint, pharmaceutical powders and chemicals. From July 2003, all new equipment purchased for installation

and use in a potentially explosive atmosphere has had to comply with the requirements of the ATEX Directive. Design changes to sieving units are mainly focused on making sure that the unit is free of any potential sources of ignition. Therefore, it is essential to properly earth all components and remove all other possibilities of a spark or excessive heat generation. However, when an electrical component is continuously in contact with powder and dust during sieving there is a further hazard of an explosion. The ultrasonic probe of the deblinding system described earlier needs to be made safe as it is placed inside the sieve – an area often categorised as Zone 20. Some manufacturers have addressed this by enclosing the transducer and cable to eliminate the possibility of any explosion. The equipment has to go through rigorous testing procedures and be approved by certified bodies. Only then can it be deemed to meet essential health and safety requirements. This, in turn, allows difficult-to-sieve powders to be screened effectively and safely, and gives the user complete peace of mind.

Improvements in Containment
Employers have been using occupational exposure limits (OELs) for many years to safeguard their employees' health. They are used to assess the adequacy of control measures and to indicate if a problem occurs. This has forced manufacturers of process equipment to design machines which contain dust and fumes much more effectively, so that these OELs can be met. In the case of sieving equipment this is especially important, as the very action of a vibrating sieve causes dust to be generated. Traditionally, sieves have used either over-centre toggle clamps or circular band clamps to secure the component parts together. These are not ideal mechanisms for ensuring dust-tight operation, as they rely on operators to tighten them correctly to ensure an adequate seal. The latest generation of sieve addresses this clamping issue by utilising a validated pneumatic clamping system, giving large improvements in product containment and operator health and safety. The GMP design of the sieve is based on clean lines, which makes sanitation easier and performance greater. Clean-down times are reduced, as the sieve is simple to disassemble in seconds without the need for tools. Crevice-free, smooth surfaces make the product contact parts easy to clean and fully washable. The unit is clamped together with an airlock system. This pneumatic lock gives an even and high clamping force across all sealing faces, and therefore guards against powder leakage more effectively than traditional band clamps or over-centre toggle clamps (see Figure 4). To assist with FDA process approval, this pneumatic clamping system can be validated, as it provides a repeatable and measurable seal.
Conclusion It is obvious that sieves or screeners continue to have a large part to play in the safe production of pharmaceutical products. However, it is important that companies using this equipment choose carefully, making sure that they comply with the new ATEX legislation and safeguard the health and safety of their operators.

Rob O'Connell graduated from the University of Nottingham in 1993 with a BEng (Hons) in Mechanical Engineering. He joined Russell Finex in 1995 as Technology Centre Manager and has held several technical and commercial positions within the company. He is currently President of US Operations at Russell Finex Inc. Email: rob_oconnell@





Syringe Siliconisation Trends, Methods, Analysis Procedures

Summary
Ready-to-fill, i.e. sterile, prefillable glass syringes are washed, siliconised, sterilised and packaged by the primary packaging manufacturer. They can then be filled by pharmaceutical companies without any further processing. These days the majority of prefillable syringes are made of glass, and the trend looks set to continue. The siliconisation of the syringe barrel is an extremely important aspect of the production of sterile, prefillable glass syringes, because the functional interaction of the glass barrel siliconisation and the plunger stopper siliconisation is crucial to the efficiency of the entire system. Both inadequate and excessive siliconisation can cause problems in this connection. The use of modern technology can achieve an extremely uniform distribution of silicone oil in glass syringes with reduced quantities of silicone oil. Another option for minimising the amount of free silicone oil in a syringe is the thermal fixation of the silicone oil on the glass surface, in a process called baked-on siliconisation. Plastic-based silicone oil-free or low-silicone oil prefillable syringe systems are a relatively new development. Silicone oil-free lubricant coatings for syringes are also currently in the development phase.

Introduction
Primary packaging for injectables almost exclusively comprises a glass container (cartridge, syringe, vial) and an elastomer closure. Ampoules are an exception. Elastomers are by nature slightly sticky, so all elastomer closures (plunger stoppers for syringes and cartridges, serum or lyophilisation stoppers) are siliconised. Siliconisation prevents the rubber closures from sticking together and simplifies processing of the articles on the filling lines. For example, it minimises

mechanical forces when the stoppers are inserted. Siliconisation is therefore essential to process capability. Although syringes and cartridges are always siliconised, this applies to a lesser extent to vials and ampoules. On the container, the siliconisation provides a barrier coating between the glass and the drug formulation. It also prevents the adsorption of formulation components on the glass surface. The hydrophobic deactivation of the surface also improves the containers' drainability. In prefillable syringes and cartridges, siliconisation also performs another function. It lubricates the syringe barrel or cartridge body, enabling the plunger to glide through it. Siliconisation of the plunger stopper alone would not provide adequate lubrication. Silicone oils are ideal as a lubricant because they are largely inert, hydrophobic and viscoelastic. Chemical and physical requirements for lubricants are set out in the relevant monographs of the American (United States Pharmacopoeia, USP) and European (Pharmacopoeia Europaea, Ph. Eur.) pharmacopoeias1,2. Section 3.1.8 of the Ph. Eur. also defines a kinematic viscosity of between 1000 and 30,000 mm²/s for silicone oils used as lubricants3. By contrast, the monograph for polydimethylsiloxane (PDMS) in the USP permits the use of silicone oils with a viscosity of 20 to 30,000 centistokes. However, increasingly stringent quality requirements and new bioengineered drugs are now taking siliconisation technology to its limits. Non-homogenous siliconisation, which can occur when simple coating techniques are used on longer syringe barrels, can in some cases lead to mechanical problems. These include incomplete drainage of the syringe in an auto-injector or high gliding forces. Silicone oil droplets are always observed in filled syringes. The number

of silicone oil droplets increases in line with the quantity of silicone oil used. Droplets which are visible to the naked eye could be viewed as a cosmetic defect. At the sub-visual level, the issue of whether silicone oil particles could induce protein aggregation is currently under discussion4. In light of this development, there is an obvious trend towards optimised or alternative coating techniques. Attempts are being made to achieve the most uniform possible coating with a reduced quantity of silicone oil, and to minimise the amount of free silicone oil by way of baked-on siliconisation. In this context, reliable analysis technologies that can be used to make qualitative and quantitative checks on the coating are absolutely essential. Alternative coating techniques are also being developed.

Silicone Oils and their Properties
Silicone oils have been used for half a century in numerous pharmaceutical applications. For example, they are used as lubricants in pharmaceutical production and as inert pharmaceutical base materials (e.g. soft capsule walls)5. Trimethylsiloxy end-blocked polydimethylsiloxane (PDMS, dimethicone) in various viscosities is generally used for siliconisation (Fig. 1).

Figure 1. Polydimethylsiloxane
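The two compendial viscosity windows quoted above differ only in their lower bound, which is easy to overlook. The following sketch hard-codes the ranges as cited in the text (always verify against the current pharmacopoeias); the function name is our own illustration:

```python
# Viscosity limits for silicone oil lubricants as cited in the text:
# Ph. Eur. section 3.1.8 and the USP polydimethylsiloxane monograph.
PH_EUR_318_RANGE = (1_000, 30_000)   # kinematic viscosity, mm^2/s
USP_PDMS_RANGE = (20, 30_000)        # centistokes; 1 cSt == 1 mm^2/s

def lubricant_compliance(viscosity_cst: float) -> dict:
    """Check a silicone oil viscosity against both compendial windows."""
    return {
        "Ph.Eur. 3.1.8": PH_EUR_318_RANGE[0] <= viscosity_cst <= PH_EUR_318_RANGE[1],
        "USP PDMS": USP_PDMS_RANGE[0] <= viscosity_cst <= USP_PDMS_RANGE[1],
    }

# A 1000 cSt oil (e.g. the grade named in the text) sits inside both windows:
print(lubricant_compliance(1000))  # {'Ph.Eur. 3.1.8': True, 'USP PDMS': True}
# A 350 cSt oil would satisfy the USP monograph but not Ph. Eur. 3.1.8:
print(lubricant_compliance(350))   # {'Ph.Eur. 3.1.8': False, 'USP PDMS': True}
```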

The most frequently used silicone oil for the siliconisation of primary packaging components is DOW CORNING® 360 Medical Fluid, which has a viscosity of 1000 cSt. PDMS is produced by reducing quartz sand to silicon metal. In the next step, the silicon reacts directly with methyl chloride in a process called Müller-Rochow synthesis to create methyl chlorosilanes. In this process,



Packaging

a mixture of different silanes is produced, the majority of which (75%-90%) is dimethyldichlorosilane, (CH3)2SiCl2. After distillative separation, the dimethyldichlorosilane is converted by hydrolysis or methanolysis into silanols, which condense into low-molecular-weight chains and cycles. In an acidic (cationic) or alkaline (anionic) catalysed polymerisation, polydimethylsiloxanes with hydroxy functions are generated. After the addition of trimethylchlorosilane they are furnished with trimethylsiloxy end groups. The short-chain molecules are removed from the resulting polydisperse polymers by way of vaporisation, leaving deployable PDMS. The characteristic aspect of the PDMS molecule is the Si-O bond. With a bond energy of 108 kcal/mol, it is considerably more stable than the C-O bond (83 kcal/mol) or the C-C bond (85 kcal/mol). PDMS is accordingly less sensitive to thermal loads, UV radiation or oxidation agents. Reactions such as oxidation, polymerisation or depolymerisation do not occur until temperatures exceed 130°C. The molecule also typically has a flat bond angle (Si-O-Si, approx. 130°) which has low rotation energy and is especially flexible (Fig. 2).

Figure 2. Visual image of polydimethylsiloxane

A high bond

length (1.63 Å for Si-O, as compared to 1.43 Å for C-O) makes the molecule comparatively gas-permeable6. The spiral-shaped (and therefore easily compressible) molecule is surrounded by CH3 groups, which are responsible for the chemical and mechanical properties of PDMS. The molecule's methyl groups only interact to a very limited extent. This ensures low viscosity, even at high molecular weights, which simplifies the distribution of PDMS on surfaces and makes it a very effective lubricant. PDMS is also largely inert, and

reactions with glass, metals, plastics or human tissues are minimal. The CH3 groups make PDMS extremely hydrophobic. It is insoluble in water, but soluble in non-polar solvents6.

Siliconised Syringes
As already explained, the syringe system only works if the glass barrel and plunger stopper siliconisation are homogenous and optimally harmonised. For needle syringes, siliconisation of the needle is also essential to prevent it sticking to the skin, thereby minimising injection pain. For the so-called oily siliconisation of the syringe glass barrel, DOW CORNING® 360 with a viscosity of 1000 cSt is used. The DOW CORNING® 365 siliconisation emulsion is often used in the baked-on siliconisation process. The needle is siliconised using a wipe technique during ready-to-fill processing; here, DOW CORNING® 360 with a viscosity of 12,500 cSt is used. Another option is the thermal fixation of silicone oil on the needle during the needle mounting process. The goal of syringe barrel siliconisation is to obtain the most even anti-friction coating possible along the entire length of the syringe, in order to minimise break loose and gliding forces when the plunger stopper is deployed (Fig. 3).

Figure 3. Sample force profile of a prefillable syringe.
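A force profile like the one in Fig. 3 is typically reduced to two numbers: the break loose force (the initial peak needed to start the stopper moving) and the gliding force (the steady level during travel). A minimal sketch of that reduction follows; the readings and the 2 mm break loose window are invented for illustration:

```python
# Hypothetical (displacement mm, force N) pairs from an extrusion tester.
profile = [(0.5, 3.8), (1.0, 2.1), (2.0, 1.9),
           (10.0, 2.0), (20.0, 2.1), (30.0, 2.0)]

def break_loose_force(profile, initial_travel_mm=2.0):
    """Peak force while the stopper first breaks away (start of travel)."""
    return max(force for dist, force in profile if dist <= initial_travel_mm)

def mean_glide_force(profile, initial_travel_mm=2.0):
    """Average force over the steady gliding phase."""
    glide = [force for dist, force in profile if dist > initial_travel_mm]
    return sum(glide) / len(glide)

print(break_loose_force(profile))            # 3.8
print(round(mean_glide_force(profile), 2))   # 2.03
```

Tightly bundled glide values across the stroke (a low spread around the mean) are exactly what an even siliconisation is meant to produce.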

Inadequate siliconisation of the syringe barrel, particularly the existence of unsiliconised areas, can cause slip-stick effects that impair the syringe's function. The forces in the injection process can then be too high, or the entire system can fail. Since inadequate siliconisation and gaps in the coating are often found at the lower end of the syringe (luer tip/needle end), it is possible that the syringe will not be completely emptied. Such defects can remain undiscovered, particularly in auto-injectors, since these are closed systems. The result could be that an inadequate dosage of the medication is administered.

The obvious solution is to increase the amount of silicone oil used to achieve a homogenous coating. However, as already mentioned, increasing the amount of silicone oil used is also associated with higher quantities of silicone particles in the solution. With protein-based drugs, in particular, undesirable interactions with silicone oil particles cannot be ruled out. Sub-visual silicone oil particles are thought to promote protein aggregation, which can increase the severity of immune responses and reduce the drug's tolerability. However, the underlying mechanism is not yet fully understood. There is a discussion as to whether protein aggregation is influenced by additional motion, e.g. shaking the syringe7. Experiments have also shown that when silicone oil in excess of 1 mg/syringe is used, the additional silicone oil does not further reduce gliding forces. The interior siliconisation of glass syringe barrels has another advantage. It prevents the drug solution from interacting with the glass surface and rules out related problems, such as the loss of active ingredients through adsorption or pH value changes due to alkali leaching. Prefillable glass syringes are only manufactured from high-quality type 1 borosilicate glass. However, sodium ions can still leach out of the glass surface if the syringe contains an aqueous solution and is stored for a long period of time. This leads to higher pH values, which could be problematic in unbuffered systems. Acidic environments foster this process.

Si-O-Na + H2O → Si-OH + NaOH

In alkaline environments, on the other hand, an etching process is observed.

2NaOH + (SiO2)x → Na2SiO3 + H2O

Aqueous solutions with a high pH value cannot therefore be stored for long periods of time in borosilicate glass containers. They have to be lyophilised and reconstituted before use. In extreme cases, the etching of the glass surface can cause delamination.
Hydrophobic deactivation of the container by siliconisation effectively protects the glass surface.

Optimised Siliconisation
For the above-mentioned reasons, the main objective in siliconisation is to achieve the most homogenous possible coating with the minimum possible quantity of silicone oil. Initially it is necessary to establish the minimum quantity of silicone oil which will reliably satisfy the quality requirements of the application. In the production of ready-to-fill syringes, siliconisation generally takes place after washing and drying. Fixed nozzles positioned at finger flange level under the syringe barrel spray the silicone oil onto the inside surface. In long syringes, the silicone oil is sometimes unevenly distributed, and the concentration of the silicone oil is lower at one end of the syringe (luer tip/needle end). The use of diving nozzles can considerably improve the evenness of the coating across the entire length of the syringe body. In this process, the nozzles are inserted into the syringe and apply the finely atomised silicone oil while in motion. The result is practically linear, as is shown by the

closely bundled gliding forces in the force path diagram (Fig. 4). Studies on 1 ml long syringes have revealed considerable potential for reducing the amount of silicone oil required. In the experiment, the quantity of silicone oil per syringe could be reduced by 40% without any impairment of the system's functional properties (Fig. 5). In practice, the calculation of the optimum quantity of silicone oil has to take syringe volume, plunger stopper type (coated/uncoated), plunger stopper placement method (seating tube/vacuum) and application requirements (injection systems) into account. Plunger stoppers from different suppliers not only differ in terms of the type of rubber used and their design, they are also coated with silicone oils of different viscosities. The siliconisation methods also differ considerably. These variables can have a bigger impact on the syringe system's functional properties than the syringe siliconisation of different suppliers, as shown by Eu et al.8.

Figure 4. Comparison of force profiles: diving nozzle vs. fixed nozzle

Figure 5. Force profile after optimised siliconisation

Baked-on Siliconisation
Another key advancement in siliconisation technology is baked-on siliconisation. It involves the application of silicone oil as an emulsion, which is then baked onto the glass surface in a special kiln at a specific temperature and for a specific length of time. In the baked-on process, both hydrogen and covalent bonds form between the glass surface and the polydimethylsiloxane chains. The bonds are so strong that part of the silicone oil cannot be removed with solvent, and a permanent hydrophobic layer is created (Fig. 6). In addition, the average molecular weight increases as a result of polymerisation and the vaporisation of short-chain polymers. The resulting, extremely thin layer of silicone, in conjunction with the low quantity of silicone oil used in the emulsion, minimises free silicone in the syringe and ensures that the required quality of finish is achieved. The layer thickness measures 15-50 nm. By comparison, the average layer thickness with oily siliconisation is 500-1000 nm. Baked-on siliconisation reduces the measurable quantity of free silicone oil to approx. 10% of the normal value. As a result, there are fewer sub-visual and visual silicone oil particles in the solution. This siliconisation process is therefore recommended for use with sensitive protein formulations. It is also advantageous for ophthalmological preparations, which are associated with very stringent requirements as regards particle contamination. Another benefit is the stability of the mechanical properties of the filled syringe throughout its shelf-life. The ribs of a plunger stopper press into the silicone layer when a syringe with oily siliconisation is stored for long periods of time, and the glass comes into direct contact with the rubber. Since elastomers are always slightly sticky, the break loose forces increase over the storage period. With baked-on siliconisation, however, this phenomenon is not observed to the same extent (Fig. 7).
The break loose force remains practically constant over the entire storage period.
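The quoted layer thicknesses can be translated into an approximate mass of silicone per syringe, since the coating mass scales linearly with thickness. The sketch below assumes illustrative barrel dimensions for a 1 ml long syringe and a typical PDMS density of about 0.97 g/cm³; none of these numbers are from the article:

```python
import math

INNER_DIAMETER_CM = 0.64        # assumed ~6.4 mm bore of a 1 ml long syringe
BARREL_LENGTH_CM = 5.4          # assumed ~54 mm siliconised length
PDMS_DENSITY_G_PER_CM3 = 0.97   # typical density of PDMS

def silicone_mass_mg(layer_thickness_nm: float) -> float:
    """Approximate silicone mass for a uniform layer on the barrel wall."""
    wall_area_cm2 = math.pi * INNER_DIAMETER_CM * BARREL_LENGTH_CM
    thickness_cm = layer_thickness_nm * 1e-7  # 1 nm = 1e-7 cm
    return wall_area_cm2 * thickness_cm * PDMS_DENSITY_G_PER_CM3 * 1000.0

print(f"oily (750 nm):    {silicone_mass_mg(750):.2f} mg")
print(f"baked-on (30 nm): {silicone_mass_mg(30):.3f} mg")
```

With these assumptions, an oily layer in the 500-1000 nm range lands near the ~1 mg/syringe figure mentioned earlier, while a baked-on layer of a few tens of nanometres carries well under a tenth of that.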


Figure 6. Baked-on siliconisation

Figure 7. Comparison of syringes with oily and baked-on siliconisation

Analysis Methods
The optimisation of the siliconisation process necessitates reliable qualitative and quantitative analysis methods. Online methods for the one hundred per cent control of siliconisation during production are not currently available. In process control, random samples are taken and several destructive and non-destructive methods are used. In the glass dust test, the siliconisation is made visible by dusting it with the finest glass particles (Fig. 8). This destructive method is simple but time-consuming. It is also associated with the problems that the quality of the siliconisation is subjectively evaluated and that the results are affected by temperature and air humidity.

Figure 8. Glass dust test: left – syringe siliconised with a diving nozzle; right – syringe siliconised with a fixed nozzle

Measuring the gliding force is an indirect method of determining the evenness of the siliconisation (Fig. 9). This process is also destructive and associated with problems. For example, the results are influenced by the positioning of the plunger stopper, and there is no standard for extrusion speed. A value of 100 mm/min is often taken for empty syringe systems, and up to 380 mm/min for filled systems.

Figure 9. Gliding force measurement

Relatively fast quantitative and non-destructive results can be obtained with reflectometry. For example, the Layer Explorer UT (Fig. 10), which is manufactured by rapID, scans the syringe body line-by-line. It can measure layer thicknesses of 15 nm to several thousand nm with a precision of 5 nm (Fig. 10.1). Scanning a 40 mm syringe with the Layer Explorer takes



Figure 10. Silicone layer thickness measurement with the Layer Explorer RapID (own data)

Figure 10.1. Silicone layer thickness measurement with the Layer Explorer

Figure 11. ZebraScience visualisation of siliconisation (own data)


approximately one minute. Another non-destructive technique, such as the one developed by Zebra Science (Fig. 11), is based on digital image processing. The entire inside surface of the syringe barrel is imaged to visualise typical siliconisation surface structures. The technology captures these visual cues as a direct indication of silicone oil presence and poorly siliconised areas (Fig. 11.1). It delivers fast qualitative results and is suitable for empty and filled syringes. However, empty syringes should be measured immediately after siliconisation, because even just half an hour after siliconisation the distribution of the silicone presents a completely different picture, and it takes a very experienced person to interpret the results properly. Unfortunately, this method is also not fast enough to facilitate 100% online control during the washing and siliconisation process.

Outlook
There is a trend towards reduced-silicone systems or baked-on siliconisation in glass syringe finishing. Improved analysis techniques and a better understanding of the phenomena involved support the optimised use of silicone oil. New issues are arising as a result of the use of innovative materials and coatings. In light of the increasing complexity of devices and the more widespread incidence of biopharmaceuticals with specific requirements, new alternative materials for primary packaging products are becoming increasingly interesting. For example, the inside surfaces of vials and syringes can be coated with pure SiO2 in a plasma process to minimise their interaction with drugs. Plastic



systems based on cyclic olefins (COP/COC) are also gaining in significance for prefilled syringes and vials. COP syringes such as the ClearJect TasPack™ by Taisei Kako Co. Ltd have glass-like transparency. Additionally, they have a higher break resistance, their pH stability range is larger, and there is no metal ion leaching. Excellent dosage precision is also very important in packaging for biopharmaceuticals. In most cases siliconisation is also essential in COP syringes. Silicone oil-free systems are a brand new approach. The gliding properties of the fluoropolymer coating on specially developed plunger stoppers eliminate the need to siliconise plastic syringes. There are as many innovative ideas for the development of primary packaging products as there are innovative drugs and syringe systems.

Reprinted with the permission of ECV · Editio Cantor Verlag für Medizin und Naturwissenschaften GmbH. Originally published as: Petersen, C. Containers Made of Cyclic Olefins as a New Option for the Primary Packaging of Parenterals. Pharm. Ind. 2012;74(1):156-162.

References
1. United States Pharmacopoeia 35 NF 30. Dimethicone. The United States

Pharmacopeial Convention Inc, Rockville, USA, 2011 2. Pharmacopoea Europaea. 7th edition. Dimeticon, Deutscher Apotheker Verlag, Stuttgart, Germany, 2011, p.2788 3. Pharmacopoea Europaea. 7th edition. 3.1.8 Silicon oil for use as a lubricant, Deutscher Apotheker Verlag, Stuttgart, Germany, 2011, p.486 4. Jones LS, Kaufmann A, Middaugh CR. Silicone oil induced aggregation of proteins. J Pharm Sci 2005; 94(4):918927 5. Colas A, Siang J, Ulman K. Silicone in Pharmaceutical Applications Part 2: Silicone Excipients. Dow Corning Corporation, Midland, USA, 2001 6.  Colas A. Silicone in Pharmaceutical Applications. Dow Corning Corporation, Midland, USA, 2001 7. Thirumangalathu R, Krishnan S, Speed Ricci M, Brems DN, Randolph TW, Carpenter JF. Silicone Oil and agitationinduced aggregation of a monoclonal antibody in aqueous suspension. J Pharm Sci 2009; 98(9):3167-3181 8. Rathore N, Pranay P, Eu B. Variability in syringe components and its impact on functionality of delivery systems. PDA J Pharm Sci and Tech 2011; 65:468480

Ms. Petersen studied bioprocess engineering at the Technical University of Berlin from 1990 to 1996. After two years of postgraduate work in the field of oncology research, she joined the company Life Sciences Meissner & Wurst in 1998, ultimately working as a lead validation engineer, mainly on projects for biopharma customers. From 2000 to 2007 she held various positions in the European Technical Support and Marketing department of West Pharmaceutical Services (a leading supplier of elastomer components for the pharmaceutical industry), finally as Senior Manager Biotechnology. Since December 2007 she has been Director of Business Development for the Tubular Glass Division at Gerresheimer Bünde. Ms. Petersen is a member of pharmaceutical organisations such as the PDA and APV, and she is a frequent speaker at international congresses and seminars on primary packaging and drug delivery devices for injectable drug products. Email: c.petersen@



With Intelligent Packaging You’re Always One Move Ahead

1. Can we start with a history of Dividella? What are your key innovations for the pharmaceutical industry?
Rondo, a sister company of Dividella, invented the famous Rondo flutes in 1935. Rondo flutes were used worldwide to package a wide range of products before the plastic age. In 1980 the carton-converting and machine-building activities were separated, and Dividella was founded and began to develop new machine concepts for mono-material (carton) packaging solutions. In the following years Dividella developed the NeoTOP (toploading) and NeoWallet (blister wallet) packaging and machine families.

2. We see you do some patient-friendly packaging solutions. What is your take on patient adherence packaging? Can innovative packaging make any significant improvement in patient compliance and patient engagement?
Dividella has a team of experienced packaging engineers who design packaging solutions for the pharmaceutical industry. A packaging solution is always a balance of many factors, such as ease of use, support for patient compliance, product protection, small volume and low packaging material cost. In close collaboration with the customer, these factors are identified and various designs are compared against them. Patient compliance and engagement play a key role and can definitely be influenced by the packaging. Human factors studies have become an important tool for testing the effectiveness of a packaging solution.

By: Ray Bullman, NCPIE
3. You manufacture packaging solutions for vials, syringes, injector pens, needles and other sensitive and valuable products which require a patient-friendly and safe packaging solution. Can you explain what pharma companies are looking for nowadays, and how you are keeping costs down?
Besides the must-haves, such as product protection and patient compliance, pharma companies would like to see ecological packaging solutions, using less material and, if possible, no plastic. A typical Dividella NeoTOP carton, compared to a blister in a sideload carton, is 30% smaller in volume, and the material cost is significantly lower. For a sterile product that is shipped by air freight and needs to stay in the cold chain, 30% less volume means a lot of money.

By: Michaël Nieuwesteeg – Netherlands Packaging Centre
4. Efficiency has never been more paramount, and the supply chain is always a key target when it comes to cost-cutting. Packaging specifically is one of the most costly areas, and one of the areas where costs can most easily be controlled and reduced. As a renowned packaging manufacturer, can you give your perspective on how pharma can best streamline their packaging costs?
We typically start by analysing the customer’s packaging portfolio and the average packaging lot sizes. The goal is to identify products that can use a harmonised carton in order to reduce tooling costs and changeover times. For smaller-lot products, harmonisation usually has a cost advantage. For higher-lot products it is typically better to optimise the carton for the smallest packaging volume and material cost. Very important in this exercise is trying to anticipate future products and their packaging needs, so that they can be covered by the same packaging concept.

By: Francesco Laterza – RA/QA Manager
5. In your opinion, how can innovative packaging and pack design aid market entry?
If an innovative packaging solution can address new needs of a market, it will definitely aid market entry. For example, Nordic countries are starting to push harder for plastic-free packaging solutions. With our NeoTOP mono-material carton solution we can offer an answer to that problem, so market entry into the Nordic countries will be easier. Another example: if you already have a flexible packaging solution and a flexible, modular packaging line, a new product may simply be added to an existing packaging line. The investment cost is much lower and the time to market is much faster.

By: Camilla Kent Hansen – Manager Market Access, EFPIA
6.
In your opinion, what are the most pressing packaging and labelling challenges faced by pharma for medical devices, and what are the ultimate solutions in response to the recast of the Medical Device Directive?
One of the challenges Dividella faces is that the design of medical devices is a time-consuming process. Vials, syringes etc. are fairly standardised, so their packaging design can start early. With medical devices, there are often no samples or even drawings available until a much later project phase. We do not think that the Medical Device Directive will change this. A flexible packaging concept also helps here to expedite the design phase.

By: David Dickinson – Principal Consultant
7. What are your views on the global regulatory developments and their implications, in particular their impact on packaging and labelling?
We believe that there will be an even harder push than today for anti-counterfeiting and tamper-evidence solutions. The basic approach is to serialise the products, but clever packaging designs in combination with safety labels will be another important factor. There probably will not be a single global regulatory direction, and therefore individual solutions for specific regions will be the answer.

By: Matthias Buerger – VP, Quality Assurance & Regulatory Affairs EMEA
8. What is the future for Dividella? How do you see the company’s position in the next five years?
Dividella is part of the Körber Medipak group, which has a clear growth strategy. Dividella therefore has to keep its leading position in the pharma toploader business, which is still a rather small niche, and continue to develop innovative packaging and machine solutions. We see ourselves as a reliable and experienced partner for pharmaceutical companies that are looking not just for a packaging machine but for a packaging solution. Within the Körber structure, Dividella can also handle projects as a systems integrator.

Christoph Hammer trained as an electrical engineer and holds additional degrees in business and production technology. He brings long-term experience in the food and pharmaceutical packaging industries in the fields of engineering and consulting. Email:



Reviews & Previews

CPhI Pharma Awards Winners
The winners of this year’s CPhI Pharma Awards were unveiled during the opening-night ceremony for CPhI Worldwide, held at the Feria de Madrid from October 9-12, 2012. Formerly known as the CPhI Innovation Awards, the awards were rebranded and expanded this year to highlight the pivotal role that innovation plays in the pharma industry, and to recognise its ability to drive growth and change for a better future.

The CPhI Pharma Awards celebrated three categories of winners. The flagship award for Best Innovation was given to pioneers in new, innovative and commercially scalable technology; several past winners of this award have gone on to great commercial success with their winning products. The Sustainable Stand Design award celebrated an exhibitor that had embraced sustainability in its stand design and presence onsite in Madrid. Finally, a new Best Sustainable Packaging award category was introduced to recognise a packaging system that improves the sustainability of pharma product packs.

A judging panel of industry experts was charged with the difficult task of selecting the Best Innovation award winners from a high-standard group of entries. The Gold award went to Haemopharm Healthcare for its NIV needle-free vial closure. This novel closure can be used for both plastic and glass vials containing pharmaceutical products in liquid, gel or powder form. No needle is required, which improves handling safety, and the process and the product remain uncontaminated because the seal is hermetically reclosable, which removes the possibility of leakage.

The Silver award for Best Innovation was presented to Merck Millipore for its silica drug delivery project. Increasing numbers of pharma products reaching the market have very low solubility and permeability, making formulation into products very challenging. This new technique uses the two-fold pore structure of bimodal silica to deliver a large surface area and increase transport, improving delivery properties. Unlike existing solubility-enhancing techniques, there are no concerns about nanoparticles or toxic chemicals: silica is inert and has been used in pharma products for many years.

Bioclin’s Multi-Oral Remin remineralisation gel won the Bronze award. This gel is designed for application in the oral cavity, where it both treats and prevents bleeding gums and tooth erosion. Based on an acetylated polymannose complex, it blocks the adhesion of harmful bacteria and neutralises them, but has no effect on human tissue. Its anti-plaque properties also give a remineralisation effect.

The Best Sustainable Stand Design award category returned to recognise the continued importance of sustainability in business practice. The award encourages exhibitors to incorporate sustainability into the design of their stands from conception, because of its impact on the sustainability of the events industry, something that UBM is committed to improving. The winner of this award, Solvias, was deemed to have fully taken on board UBM’s philosophy of minimising impact on the environment through its presence at the event.

The winner of the new Best Sustainable Packaging award was MWV Healthcare for its Shellpak Renew package. The package consists of an outer carton made from recyclable, tear-resistant paperboard, combined with an easy-slide blister with an integrated calendar that helps patients track their medication. It is easy to open and senior-friendly, while being child-resistant. It also has a small footprint, reducing the space needed on pharmacy shelves.

The CPhI Pharma Awards will return next year during the 2013 event, scheduled for October 22-24 at the Messe Frankfurt in Germany. To learn more about the awards and how your company may participate, please visit:

Andrew Pert, Brand Director, CPhI Worldwide, ICSE, P-MEC Europe, InnoPack, UBM Live. Email:

Autumn 2012 Volume 4 Issue 4

The Future of Tissue Processing: Safer, Cleaner and Greener (Oct 31 - Nov 1, 2012)
The Breakthrough of Supercritical CO2
Nowadays, given the option to use material for bone tissue transplantation that is sterile and virus-inactivated while still maintaining its mechanical properties, you shouldn’t have to think twice. EMCM, European Medical Contract Manufacturing, the centre of excellence in developing and manufacturing sterile medicinal products, elected to hold a conference on these issues. Henriette Valster, Managing Director, presided over an international group of orthopaedic surgeons and professionals from the tissue banking and MedTech industries, convened both to discuss and to agree on a way of developing further initiatives in tissue processing.

Supercritical CO2 (scCO2) is an innovative processing method which seems to be the answer. It acts as both a cleaning and a sterilisation process, unlike previous methods, which usually relied on chemicals or radiation. ScCO2 guarantees high-quality tissue, free of harmful immunogenic agents, whilst still preserving the biological and osteogenic characteristics that benefit patients. The use of scCO2 for cleaning and sterilisation is a well-known technique in the food, cosmetic, pharmaceutical and oil industries, and it turns out to be very suitable for treating musculoskeletal tissue as well. The nature of bone tissue makes it difficult to penetrate and render clean and sterile. Over the years, the world of tissue transplantation has been confronted with difficult choices in providing safe and effective products to meet healthcare requirements and patient needs. Sourcing, cleaning and sterilisation techniques are of prime importance for optimal in-growth of the transplant, which should be free of harmful immunogenic agents and suffer minimal damage to its structural and functional integrity. Interest in the scCO2 technique is worldwide!
On October 31 and November 1, 2012, the latest innovations in tissue processing, their clinical implications and the regulatory requirements were presented at the EYE in Amsterdam, The Netherlands. World-renowned speakers such as Prof. John N. Kearney, Prof. Pieter Buma and Dr. Heinz Winkler, prominent personalities from the tissue banking industry in the USA and Europe, as well as specialist surgeons in the field of orthopaedics and European regulatory authorities, joined the event, keen to explain that the scCO2 approach to tissue processing is indeed safer, cleaner and greener. During the two-day conference in Amsterdam all the ins and outs of this innovation were discussed: from the background of the scCO2 technique, its features and application options, to the orthopaedic results with scCO2-processed tissue, the European regulatory frameworks on human tissue transplantation, and the future perspective for the application of human tissue in orthopaedic practice. A more detailed report will be published in Volume 5 Issue 1 (February 2013).


Subscription Offer
Guarantee you receive your copy of IPI: International Pharmaceutical Industry, four issues per year. Pharma Publications is delighted to be able to offer its readers a great one-year subscription offer. Subscribe now to receive your 20% discount.
Post: Unit J413, The Biscuit Factory, Tower Bridge Business Complex, 100 Clements Road, London SE16 4DG
Tel: +44 (0)20 7237 2036
Complete and fax this form to +1 480 247 5316
Email your details to:
Please tick the relevant box below:
UK & N. Ireland £120 inc p&p
Europe €132 inc p&p
USA $200 inc p&p
I enclose a cheque made payable to Pharmapubs
I wish to pay by credit card (Mastercard or Visa)
Card number / Valid from / Expiry date / Security code
Name: / Job title: / Email: / Signature

For advertising opportunities, contact us on +44 (0)20 7237 2036 or Email

Advertisers Index
Page 33 – Almac Group
Page 3 – Analytical Biochemical Laboratory BV
Page 43 – BARC Global Central Laboratory
Page 59 – Berlinger & Co. AG
Page 55 – BioCon Valley
Page 53 – Bioneer A/S
Page 97 – Biotech Services International Ltd
Page 31
Page 91 – Bobst Group SA
Catenion GmbH
Page 23 – Cellgenix GmbH
Page 49 – Centrical Global Ltd.
Page 85 – Colder Products Company
Page 35 – Copper Development Association
Page 73 – Envirotainer AB
Page 69
Page 45 – ExCard Research GmbH
Page 9 – ExpreS2ion Biotechnologies
Page 79 – FeF Chemicals A/S
Page 87
Page 15
Page 29 – Glycotope Biotechnology GmbH
Page 25 – Health Protection Agency
Page 77 – InGell Labs BV
Page 37 – LC Patents
Page 81
Page 13 – Miller Insurance Services
Page 5 – MPI Research
Page 39 – Müller GmbH
Page 21 – Novozymes A/S
One2One – A service of Hospira
Page 65 – Orbsen Consulting
Page 83 – Patheon Inc.
Phage Consultants
Page 103 – Pharmintech 2013 - Ipack-Ima Spa
Page 93 – Pöppelmann GmbH & Co. KG
Page 27 – PreSens - Precision Sensing GmbH
Page 19 – Scottish Development International
Page 63 – Therapure Biopharma Inc.
Page 61 – United Parcel Service of America, Inc (UPS)
Page 41 – Woodley Equipment Company Ltd.
Page 101 – 10th Annual Conference on Controlled Release – SMi Group

